
Adversarial AI is on the rise: What you need to know

Researchers explain that as threat actors move to AI-enabled malware in active operations, existing defenses will fail.


To date, threat actors have used artificial intelligence (AI) mainly to enhance their productivity, but that’s changing, a report released on November 5 by the Google Threat Intelligence Group (GTIG) has found.

Adversaries are now deploying novel AI-enabled malware in active operations, the researchers said: “This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution."

For the first time, malware families such as PromptFlux and PromptSteal are using large language models (LLMs) during execution to dynamically generate malicious scripts, obfuscate their own code to evade detection, and create malicious functions on demand rather than having those functions hardcoded into the malware.

While still nascent, this represents a significant step toward more autonomous and adaptive malware.

GTIG researchers

Here's what your team needs to know about the rise of adversarial AI.

See webinar: Develop Your New Playbook for AI-Driven Software Risk

The evolution of adversarial AI

Up to now, malware has not been customized to each environment it infects but rather has worked in pretty much the same way across all environments, said Sumedh Barde, head of product at Simbian.

That has allowed anti-malware and endpoint detection and response (EDR) tools to work by observing behavior on an infected host and then looking out for the same patterns across the millions of endpoints they protect. 

AI makes this challenging. It empowers adversaries to craft malware that adapts its behavior to each endpoint, camouflaging itself as expected activity on that endpoint and thus evading existing defense techniques. So the adversary doesn’t just gain productivity; they gain new ways to evade defenses.

Sumedh Barde
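To make that concrete, here is a minimal sketch (in Python, with illustrative payload strings and a made-up blocklist) of why a purely static fingerprint misses a rewritten variant: two functionally equivalent scripts produce different hashes, so a blocklist built from the first never matches the second.

```python
import hashlib

# Illustrative only: two functionally equivalent script snippets, regenerated
# with different names and structure, as an LLM-backed loader might do.
# These strings are never executed; they stand in for observed artifacts.
variant_a = "out = []\nfor h in hosts:\n    out.append(probe(h))\n"
variant_b = "results = [probe(target) for target in hosts]  # same behavior\n"

# A hypothetical signature blocklist built from previously observed samples.
known_bad_hashes = {hashlib.sha256(variant_a.encode()).hexdigest()}

def hash_match(sample: str) -> bool:
    """Static check: does this exact artifact appear in the blocklist?"""
    return hashlib.sha256(sample.encode()).hexdigest() in known_bad_hashes

print(hash_match(variant_a))  # True  -- previously seen sample is caught
print(hash_match(variant_b))  # False -- rewritten variant slips past the signature
```

A behavioral approach instead keys on what both variants do when they run, rather than on the bytes they happen to contain.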

That greatly weakens signature-based cybersecurity protections, said Adam Arellano, CTO and field CISO at Traceable by Harness. 

There will still be a market for and widespread use of signature-based tools, but the more adversaries adopt self-changing attacks, the less helpful those tools will be.

Adam Arellano

This is what we’ve been warning about with the OWASP Top 10 for LLMs framework, said Michael Bell, founder and CEO of Suzu Labs. “PromptFlux represents a shift from static malware signatures to adversarial AI that actively evades detection by rewriting itself in real time.”

The good news is that Google caught this while it’s still experimental, but the bad news is that once this capability matures, traditional security tools that rely solely on pattern matching will be almost useless except to defend against basic script kiddies.

Michael Bell

This evolution in the use of AI by threat actors is a game-changer, said Ensar Seker, CISO of SOCRadar.

We’re no longer just talking about cybercriminals using AI to write phishing emails or improve efficiency. We’re now entering a stage where AI is baked directly into the malware itself, malware that can analyze its environment, make autonomous decisions, and adjust its behavior midflight. That kind of dynamic threat elevates the risk profile significantly because traditional static detection techniques struggle against code that’s constantly reinventing itself.

Ensar Seker

Troy Leach, chief strategy officer at the Cloud Security Alliance, said the CSA has been theorizing about such advanced threats for years, expecting AI to make possible sophisticated attacks that go unnoticed. “These findings also align with recent CSA studies we’ve conducted on the state of AI as well, anticipating that the visibility will become much more difficult with legacy defenses.”

Adversaries are like other developers using AI to increase productivity by accelerating research, automating reconnaissance, and drafting phishing lures. But the productivity advantage is being compounded by AI, as it now writes most of the scripts, debugs exploits, reverse engineers to discover new vulnerabilities, and translates code across languages instantly. This reduces the attacker’s time to impact from weeks to hours and lowers the skill barrier for global participation in cybercrime.

Troy Leach

Vibe hackers get the memo

The findings in the GTIG report came as no surprise to Cory Michal, CSO of AppOmni. “It confirms what we’re already seeing in SaaS attack campaigns,” he said. “Threat actors are leveraging AI to make their operations more efficient and sophisticated, just as legitimate teams use AI to improve productivity.”

We’ve observed attackers using AI to automatically generate data-extraction code, reconnaissance scripts, and even adversary-in-the-middle toolkits that adapt to defenses. They’re essentially vibe-hacking, using generative AI to better mimic authentic behavior, refine social engineering lures, and accelerate the technical aspects of intrusion and exploitation.

Cory Michal

He said AI-enabled malware mutates its code, making traditional signature-based detection ineffective. “Defenders need behavioral EDR that focuses on what malware does, not what it looks like,” he said.

Michal recommended that detection tools focus on unusual process creation, scripting activity, or unexpected outbound traffic, especially to AI APIs such as Gemini, Hugging Face, and OpenAI. By correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can spot when attackers are abusing AI and stop them before data is exfiltrated, he said.
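As a rough illustration of that correlation idea, the sketch below flags processes that contact well-known AI API hosts without being on an approved list. The telemetry format, process names, and allowlist are hypothetical; the domains shown are the public API endpoints for Gemini, OpenAI, and Hugging Face.

```python
# Hypothetical network-telemetry events: (process_name, destination_host).
# The schema, process names, and allowlist are illustrative; real EDR and
# SaaS telemetry pipelines will look different.
AI_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",                     # OpenAI API
    "huggingface.co",                     # Hugging Face
}

# Processes expected to talk to AI APIs in this hypothetical environment.
APPROVED_PROCESSES = {"chrome.exe", "internal-ai-gateway"}

def flag_unexpected_ai_traffic(events):
    """Return events where an unapproved process contacts an AI API host."""
    return [
        (proc, host)
        for proc, host in events
        if host in AI_API_HOSTS and proc not in APPROVED_PROCESSES
    ]

events = [
    ("chrome.exe", "api.openai.com"),                       # expected browser use
    ("svchost.exe", "generativelanguage.googleapis.com"),   # suspicious
    ("update.ps1", "huggingface.co"),                       # suspicious scripting activity
]

for proc, host in flag_unexpected_ai_traffic(events):
    print(f"ALERT: {proc} contacted {host}")
```

In practice a check like this would be one behavioral signal among several, correlated with endpoint, SaaS, and identity telemetry rather than treated as a standalone alert.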

This evolution underscores how AI makes modern malware more effective, he said. “Attackers are now using AI to generate smarter code for data extraction, session hijacking, and credential theft, giving them faster access to identity providers and SaaS platforms where critical data and workflows live. As enterprises have moved their business processes, intellectual property, and customer data into SaaS, that ecosystem has become the most valuable and exposed attack surface.”

AI doesn’t just make phishing emails more convincing; it makes intrusion, privilege abuse, and session theft more adaptive and scalable. The result is a new generation of AI-augmented attacks that directly threaten the core of enterprise SaaS operations, data integrity, and extortion resilience.

Cory Michal

The adversarial AI marketplace matures

The GTIG report also said that the underground marketplace for illicit AI tools matured in 2025. “We have identified multiple offerings of multifunctional tools designed to support phishing, malware development, and vulnerability research, lowering the barrier to entry for less sophisticated actors,” the researchers wrote.

Andre Piazza, a security strategist at BforeAI, said SpamGPT, WormGPT, and FraudGPT are tools available on the dark web that lower the barrier to entry for creating phishing campaigns, malware, and deepfakes.

They package the technical expertise required to deploy those threats into features accessible in a ready-made toolkit, with the added bonus of a friendly user interface.

Andre Piazza

Tim Erlin, a security strategist at Wallarm, said that as long as attackers are calling commercial LLMs for these use cases, Google, OpenAI, Meta, and others can work to prevent misuse of their models. But as the major LLMs become harder to abuse, Erlin expects adversaries to evolve their strategies. 

Attackers will likely shift in two directions. First, they will move to less protected and less popular models for their needs. Second, we’ll likely see the emergence of malicious LLM services designed specifically for these use cases.

Tim Erlin

Erlin said Google is on the right track with its work to strengthen its own models against attack, but it can’t do it alone. “An industry standard for protecting AI and for enabling AI to protect itself needs to emerge. Research like the A2AS framework, to which Google has contributed, will be instrumental in shifting the AI threat landscape.”

Traceable by Harness’ Arellano said history has shown that the most inventive ways to use a technology are usually developed by people incentivized to misuse it.

It is going to be difficult to combat these new techniques, but there is a lot to be learned from the techniques themselves. Reverse engineering the attacks using the same AI is one way to better understand them.

Adam Arellano