Security Operations | January 13, 2026

Adversarial AI is on the rise: What you need to know

Researchers explain that as threat actors move to AI-enabled malware in active operations, existing defenses will fail.

John P. Mello Jr., Freelance technology writer

To date, threat actors have used artificial intelligence (AI) mainly to enhance their productivity, but that’s changing, a report released on November 5 by the Google Threat Intelligence Group (GTIG) has found.

Adversaries are now deploying novel AI-enabled malware in active operations, the researchers said: “This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution."

For the first time, malware families such as PromptFlux and PromptSteal are using large language models (LLMs) during execution to dynamically generate malicious scripts, obfuscate their own code to evade detection, and create malicious functions on demand — rather than having such functions hardcoded into the malware.

While still nascent, this represents a significant step toward more autonomous and adaptive malware.

GTIG researchers

Here's what your team needs to know about the rise of adversarial AI.

See webinar: Develop Your New Playbook for AI-Driven Software Risk

The evolution of adversarial AI

Up to now, malware has not been customized to each environment it infects but rather has worked in pretty much the same way across all environments, said Sumedh Barde, head of product at Simbian.

That has allowed anti-malware and endpoint detection and response (EDR) tools to work by observing behavior on an infected host and then looking out for the same patterns across the millions of endpoints they protect. 

AI makes this challenging. It empowers adversaries to craft malware that adapts its behaviors to each endpoint, to camouflage what would be expected behaviors on each endpoint, and thus evade existing defense techniques. So the adversary doesn’t just gain productivity; they gain new ways to evade defenses.

Sumedh Barde

That greatly weakens signature-based cybersecurity protections, said Adam Arellano, CTO and field CISO at Traceable by Harness. 

There will still be a market and widespread use for signature-based tools, but the more adversaries start to use self-changing attacks, the less helpful those tools will be.

Adam Arellano

This is what we’ve been warning about with the OWASP Top 10 for LLMs framework, said Michael Bell, founder and CEO of Suzu Labs. “PromptFlux represents a shift from static malware signatures to adversarial AI that actively evades detection by rewriting itself in real time.”

The good news is that Google caught this while it’s still experimental, but the bad news is that once this capability matures, traditional security tools that rely solely on pattern matching will be almost useless except to defend against basic script kiddies.

Michael Bell

This evolution in the use of AI by threat actors is a game-changer, said Ensar Seker, CISO of SOCRadar.

We’re no longer just talking about cybercriminals using AI to write phishing emails or improve efficiency. We’re now entering a stage where AI is baked directly into the malware itself, malware that can analyze its environment, make autonomous decisions, and adjust its behavior midflight. That kind of dynamic threat elevates the risk profile significantly because traditional static detection techniques struggle against code that’s constantly reinventing itself.

Ensar Seker

Troy Leach, chief strategy officer at the Cloud Security Alliance, said the CSA has been theorizing about such advanced threats for years, expecting AI to make possible sophisticated attacks that will go unnoticed. “These findings also align with recent CSA studies we’ve conducted on the state of AI as well, anticipating that the visibility will become much more difficult with legacy defenses.”

Adversaries are like other developers using AI to increase productivity by accelerating research, automating reconnaissance, and drafting phishing lures. But the productivity advantage is being compounded by AI, as it now writes most of the scripts, debugs exploits, reverse engineers to discover new vulnerabilities, and translates code across languages instantly. This reduces the attacker’s time to impact from weeks to hours and lowers the skill barrier for global participation in cybercrime.

Troy Leach

Vibe hackers get the memo

The findings in the GTIG report came as no surprise to Cory Michal, CSO of AppOmni. “It confirms what we’re already seeing in SaaS attack campaigns,” he said. “Threat actors are leveraging AI to make their operations more efficient and sophisticated, just as legitimate teams use AI to improve productivity.”

We’ve observed attackers using AI to automatically generate data-extraction code, reconnaissance scripts, and even adversary-in-the-middle toolkits that adapt to defense. They’re essentially vibe-hacking, using generative AI to better mimic authentic behavior, refine social engineering lures, and accelerate the technical aspects of intrusion and exploitation.

Cory Michal

He said AI-enabled malware mutates its code, making traditional signature-based detection ineffective. “Defenders need behavioral EDR that focuses on what malware does, not what it looks like,” he said.

Michal recommended that detection tools focus on unusual process creation, scripting activity, or unexpected outbound traffic, especially to AI APIs such as Gemini, Hugging Face, and OpenAI. By correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can spot when attackers are abusing AI and stop them before data is exfiltrated, he said.
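The correlation Michal describes can be sketched in a few lines. The following is a minimal illustration, not a production detector: it flags network events where a process that has no business calling a generative-AI API reaches a known AI endpoint. The host watchlist, the process allowlist, and the event format are all assumptions made for the example.

```python
# Illustrative sketch of the behavioral signal described above:
# flag outbound connections to AI API endpoints from unexpected processes.
# Host list, allowlist, and event shape are hypothetical.

# Hypothetical watchlist of generative-AI API hosts a detection rule might track.
AI_API_HOSTS = {
    "generativelanguage.googleapis.com",   # Gemini
    "api.openai.com",                      # OpenAI
    "api-inference.huggingface.co",        # Hugging Face
}

# Processes for which AI API traffic would be expected (browsers, IDE assistants).
EXPECTED_PROCESSES = {"chrome.exe", "code.exe"}

def flag_suspicious(events):
    """Each event is a (process_name, destination_host) pair.

    Return the events where a non-allowlisted process — e.g. a scripting
    host or system binary — contacts a known AI API endpoint.
    """
    return [
        (proc, host)
        for proc, host in events
        if host in AI_API_HOSTS and proc.lower() not in EXPECTED_PROCESSES
    ]

if __name__ == "__main__":
    events = [
        ("chrome.exe", "api.openai.com"),                       # expected: browser
        ("powershell.exe", "api.openai.com"),                   # scripting host -> flag
        ("svchost.exe", "generativelanguage.googleapis.com"),   # system binary -> flag
        ("outlook.exe", "mail.example.com"),                    # unrelated traffic
    ]
    for proc, host in flag_suspicious(events):
        print(f"ALERT: {proc} -> {host}")
```

A real deployment would join this signal with process-creation and identity telemetry rather than treating any single hit as malicious, since legitimate AI tooling is increasingly common on endpoints.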

This evolution underscores how AI makes modern malware more effective, he said. “Attackers are now using AI to generate smarter code for data extraction, session hijacking, and credentials theft, giving them faster access to identity providers and SaaS platforms where critical data and workflows live. As enterprises have moved their business processes, intellectual property, and customer data into SaaS, that ecosystem has become the most valuable and exposed attack surface.”

AI doesn’t just make phishing emails more convincing; it makes intrusion, privilege abuse, and session theft more adaptive and scalable. The result is a new generation of AI-augmented attacks that directly threaten the core of enterprise SaaS operations, data integrity, and extortion resilience.

Cory Michal

The adversarial AI marketplace matures

The GTIG report also said that the underground marketplace for illicit AI tools matured in 2025. “We have identified multiple offerings of multifunctional tools designed to support phishing, malware development, and vulnerability research, lowering the barrier to entry for less sophisticated actors,” the researchers wrote.

Andre Piazza, a security strategist at BforeAI, said SpamGPT, WormGPT, and FraudGPT are tools available on the dark web that lower the entry barrier for the creation of phishing campaigns, malware, or deepfakes.

They package the technical expertise required to deploy those threats into features accessible in a ready-made toolkit, with the added bonus of a friendly user interface.

Andre Piazza

Tim Erlin, a security strategist at Wallarm, said that as long as attackers are calling commercial LLMs for these use cases, Google, OpenAI, Meta, and others can work to prevent misuse of their models. But as the major LLMs become harder to abuse, Erlin expects adversaries to evolve their strategies. 

Attackers will likely shift in two directions. First, they will move to less protected and less popular models for their needs. Second, we’ll likely see the emergence of malicious LLM services designed specifically for these use cases.

Tim Erlin

Erlin said Google is on the right track with its work to strengthen its own models against attack, but it can’t do it alone. “An industry standard for protecting AI and for enabling AI to protect itself needs to emerge. Research like the A2AS framework, to which Google has contributed, will be instrumental in shifting the AI threat landscape.”

Traceable by Harness’ Arellano said history has shown that the most inventive ways to use a technology are usually developed by people incentivized to misuse it.

It is going to be difficult to combat these new techniques, but there is a lot to be learned in the techniques themselves. Reverse engineering the attacks using the same AI is one way to better understand the attacks.

Adam Arellano

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:Security Operations
