How generative AI is creating new classes of security threats - VentureBeat

1 year ago
67

🥇 Bonuses, Promotions, and the Best Online Casino Reviews you can trust: https://bit.ly/BigFunCasinoGame

How generative AI is creating new classes of security threats - VentureBeat

June 18, 2023 11:10 AM Image Credit: Created with Midjourney Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More The promised AI revolution has arrived. OpenAI’s ChatGPT set a new record for the fastest-growing user base and the wave of generative AI has extended to other platforms, creating a massive shift in the technology world. It’s also dramatically changing the threat landscape — and we’re starting to see some of these risks come to fruition. Attackers are using AI to improve phishing and fraud. Meta’s 65-billion parameter language model got leaked, which will undoubtedly lead to new and improved phishing attacks. We see new prompt injection attacks on a daily basis. Users are often putting business-sensitive data into AI/ML-based services, leaving security teams scrambling to support and control the use of these services. For example, Samsung engineers put proprietary code into ChatGPT to get help debugging it, leaking sensitive data. A survey by Fishbowl showed that 68% of people who are using ChatGPT for work aren’t telling their bosses about it.  Event Transform 2023 Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls. Register Now Misuse of AI is increasingly on the minds of consumers, businesses, and even the government. The White House announced new investments in AI research and forthcoming public assessments and policies. The AI revolution is moving fast and has created four major classes of issues. Asymmetry in the attacker-defender dynamic Attackers will likely adopt and engineer AI faster than defenders, giving them a clear advantage.  They will be able to launch sophisticated attacks powered by AI/ML at an incredible scale at low cost. Social engineering attacks will be first to benefit from synthetic text, voice and images. Many of these attacks that require some manual effort — like phishing attempts that impersonate IRS or real estate agents prompting victims to wire money — will become automated.  Attackers will be able to use these technologies to create better malicious code and launch new, more effective attacks at scale. For example, they will be able to rapidly generate polymorphic code for malware that evades detection from signature-based systems. One of AI’s pioneers, Geoffrey Hinton, made the news recently as he told the New York Times he regrets what he helped build because “It is hard to see how you can prevent the bad actors from using it for bad things.” Security and AI: Further erosion of social trust We’ve seen how quickly misinformation can spread thanks to social media. A University of Chicago Pearson Institute/AP-NORC Poll shows 91% of adults across the political spectrum believe misinformation is a problem and nearly half are worried they’ve spread it. Put a machine behind it, and social trust can erode cheaper and faster. The current AI/ML systems based on large language models (LLMs) are inherently limited in their knowledge, and when they don’t know how to answer, they make things up. This is often referred to as “hallucinating,” an unintended consequence of this emerging technology. When we search for legitimate answers, a lack of accuracy is a huge problem.  This will betray human trust and create dramatic mistakes that have dramatic consequences. A mayor in Australia, for instance, says he may sue OpenAI for defamation after ChatGPT wrongly identified him as being jailed for bribery when he was actually the whistleblower in a case. New attacks Over the next decade, we will see a new generation of attacks on AI/ML systems.  Attackers will influence the classifiers that systems use to bias models and control outputs. They’ll create malicious models that will be indistinguishable from the real models, which could cause real harm depending on how they’re used. Prompt injection attacks will become more common, too. Just a day after Microsoft introduced Bing Chat, a Stanford University student convinced the model to reveal its internal directives.   Attackers will kick off an arms race with adversarial ML tools that trick AI systems in various ways, poison the data they use or extract sensiti...

Loading comments...