How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other Key AI Risks in SaaS


Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception.
Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they're introducing into the enterprise. A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost savings, and 25% attested to reducing expenses by $75,000 or more. Because the researchers conducted this survey a mere three months after ChatGPT's general availability, today's ChatGPT and AI tool usage is undoubtedly higher.

Security and risk teams are already overwhelmed protecting their SaaS estate (which has now become the operating system of business) from common vulnerabilities such as misconfigurations and over-permissioned users. This leaves little bandwidth to assess the AI tool threat landscape, the unsanctioned AI tools currently in use, and the implications for SaaS security.

With threats emerging both outside and inside organizations, CISOs and their teams must understand the most relevant AI tool risks to SaaS systems — and how to mitigate them.
1 — Threat Actors Can Exploit Generative AI to Dupe SaaS Authentication Protocols

As ambitious employees devise ways for AI tools to help them accomplish more with less, so, too, do cybercriminals. Using generative AI with malicious intent is simply inevitable, and it's already possible.
AI's ability to impersonate humans exceedingly well renders weak SaaS authentication protocols especially vulnerable to hacking. According to Techopedia, threat actors can misuse generative AI for password guessing, CAPTCHA cracking, and building more potent malware. While these methods may sound limited in their attack range, a single foothold can be enough: the January 2023 CircleCI security breach was attributed to one engineer's laptop becoming infected with malware.

Likewise, three noted technology academics recently posed a plausible hypothetical for generative AI running a phishing attack: "A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company's marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn't look like the messages they've been trained to detect."

Malicious actors will avoid the most fortified entry point — typically the SaaS platform itself — and instead target more vulnerable side doors. They won't bother with the deadbolt and guard dog by the front door when they can sneak around back to the unlocked patio doors.
Relying on authentication alone to keep SaaS data secure is not a viable option. Beyond implementing multi-factor authentication (MFA) and physical security keys, security and risk teams need visibility into and continuous monitoring of the entire SaaS perimeter, along with automated alerts for suspicious login activity. That visibility is needed not only to catch cybercriminals' generative AI-driven activity but also to track employees' AI tool connections to SaaS platforms.
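To illustrate what such automated alerting can look like in practice, here is a minimal sketch, assuming a generic login-event feed with user, timestamp, and geolocation fields. The field names, the 900 km/h speed threshold, and the alerting step are illustrative assumptions, not any particular vendor's API. It flags consecutive logins that imply impossible travel, a common signal of stolen credentials or session tokens:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Hypothetical login-event shape; real SaaS audit logs differ per vendor.
@dataclass
class LoginEvent:
    user: str
    timestamp: datetime
    lat: float
    lon: float
    ip: str

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def impossible_travel_alerts(events, max_speed_kmh=900):
    """Yield pairs of consecutive logins per user whose implied speed exceeds max_speed_kmh."""
    last_seen = {}
    for event in sorted(events, key=lambda e: (e.user, e.timestamp)):
        prev = last_seen.get(event.user)
        if prev is not None:
            hours = (event.timestamp - prev.timestamp).total_seconds() / 3600
            distance = haversine_km(prev.lat, prev.lon, event.lat, event.lon)
            if hours > 0 and distance / hours > max_speed_kmh:
                yield prev, event  # candidate for an automated alert or MFA step-up
        last_seen[event.user] = event

if __name__ == "__main__":
    events = [
        LoginEvent("alice@example.com", datetime(2023, 6, 1, 9, 0), 40.71, -74.01, "203.0.113.5"),
        LoginEvent("alice@example.com", datetime(2023, 6, 1, 9, 45), 51.51, -0.13, "198.51.100.7"),
    ]
    for prev, cur in impossible_travel_alerts(events):
        minutes = (cur.timestamp - prev.timestamp).total_seconds() / 60
        print(f"ALERT: {cur.user} logged in from {prev.ip} then {cur.ip} within {minutes:.0f} minutes")
```

In a real deployment, the events would come from each SaaS vendor's audit log, and findings would feed a SIEM or trigger an MFA challenge rather than a print statement.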
2 — Employees Connect Unsanctioned AI Tools to SaaS Platforms Without Considering the Risks

Employees are now relying on unsanctioned AI tools to make their jobs easier. After all, who wants to work harder when AI tools increase effectiveness and efficiency? Like any form of shadow IT, employee adoption of AI tools is driven by the best intentions.

For example, an employee is convinced they could manage their time and to-dos better, but monitoring and analyzing their task management and meeting involvement feels like a large undertaking. AI can perform that monitoring and analysis with ease and provide recommendations almost instantly, giving the employee the productivity boost they crave in a fraction of the time. Signing up for an AI scheduling assistant, from the end-user's perspective, is as simple and (seemingly) innocuous as:

- Registering for a free trial or enrolling with a credit card
- Agreeing to the AI tool's Read/Write permission requests
- Connect...
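As a companion sketch, here is one way a security team might triage the third-party grants that result from those seemingly innocuous sign-ups. The grant structure, the sanctioned-app allow-list, and the scope keywords treated as risky are all assumptions for illustration; each SaaS platform exposes connected-app data in its own format:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a third-party OAuth grant exported from a SaaS admin console.
@dataclass
class AppGrant:
    app_name: str
    user: str
    scopes: list = field(default_factory=list)

# Illustrative policy: apps vetted by security, and scope substrings treated as high risk.
SANCTIONED_APPS = {"Corporate Calendar", "Approved CRM Connector"}
RISKY_SCOPE_HINTS = ("write", "admin", "full_access", "mail.send")

def triage_grants(grants):
    """Return unsanctioned grants, flagging those that request write-level access."""
    findings = []
    for grant in grants:
        if grant.app_name in SANCTIONED_APPS:
            continue
        risky = [s for s in grant.scopes if any(hint in s.lower() for hint in RISKY_SCOPE_HINTS)]
        findings.append({
            "user": grant.user,
            "app": grant.app_name,
            "risky_scopes": risky,
            "action": "revoke and review" if risky else "review",
        })
    return findings

if __name__ == "__main__":
    grants = [
        AppGrant("AI Scheduling Assistant", "bob@example.com",
                 ["calendar.read", "calendar.write", "mail.send"]),
        AppGrant("Approved CRM Connector", "bob@example.com", ["contacts.read"]),
    ]
    for finding in triage_grants(grants):
        scopes = ", ".join(finding["risky_scopes"]) or "read-only"
        print(f"{finding['user']}: {finding['app']} -> {finding['action']} ({scopes})")
```

The point is less the code than the policy it encodes: unsanctioned apps holding write-level scopes deserve immediate review, ideally surfaced automatically rather than discovered after an incident.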
