We are entering a new era of cybercrime that leverages the growing availability of deepfake video and voice-cloning tools, AI-powered phishing kits, and synthetic identity packages. A massive wave of sophisticated attacks is coming that could soon force a rethink of enterprise security strategies.
An analysis by Group-IB of dark-web markets found that AI-powered crime tools are now so cheap and easy to obtain that even low-skill attackers with little more than a credit card can launch fast, large-scale attacks once reserved for well-resourced, highly sophisticated groups.
AI, as Group-IB noted, has industrialized cybercrime, allowing attackers to automate and scale faster than most defenders can detect and respond. “Unlike earlier waves of cybercrime, AI adoption by threat actors has been strikingly fast,” the company wrote in its report.
According to Group-IB, AI is now firmly embedded as core infrastructure throughout the criminal ecosystem rather than an occasional exploit.
Here’s what you need to know about AI-enabled cybercrime services — and why you need to rethink your security strategy.
See webinar: Threat Intelligence 101: Why Context and Behavior Matter
Group-IB’s infiltration of underground markets uncovered a fast-growing ecosystem where AI-enabled attack tools are available to criminals, sometimes as subscription software. The security vendor found AI-driven phishing kits renting for between $30 and $200 a month and fake identity bundles, complete with AI-fabricated faces, cloned voices, and video, available for as little as $5. Also prolific in underground markets were “dark” large language models (LLMs) specifically fine-tuned for tasks such as phishing-content generation and malware development, services for jailbreaking LLMs, AI-powered information stealers, and remote-access Trojans.
For enterprise organizations, the rapid weaponization of AI is introducing brand-new security challenges. AI-generated threats leave few forensic traces, and adversaries can easily tailor variants for each target, which makes signature-based detection far less effective. AI is also undermining long-standing approaches to identity authentication and verification, as attackers can clone voices using as little as 10 seconds of audio scraped from social media or fabricate synthetic identities that exist in no authoritative database.
Group-IB identified several well-known threat groups that have already honed the use of AI crimeware to attack targets, including North Korea’s Lazarus Group, the Chinese threat actor GoldFactory, Iran-backed cyberespionage group APT35, and the cybercrime collective Scattered Spider.
Another analysis of AI-enabled crimeware, conducted by Sumsub last year, came to a similar conclusion: AI has accelerated both the quantity and quality of identity fraud. Of the fraud attempts that the company analyzed, about 28% involved the use of high-quality deepfakes, fake identities, multistage social engineering, or tampering with telemetry. “The U.S., in particular, has become a testing ground for international fraudsters, who roll out and refine new schemes before exporting them worldwide,” Sumsub’s “Identity Fraud Report 2025-26” said. “Emerging trends, like ‘employee fraud’ — where fake workers or payroll identities are generated to exploit benefits, internal systems, or compliance blind spots — often debut here, targeting the region’s mature digital and employment infrastructure.”
The consequences of these trends are becoming painfully obvious. In Q2 2025 alone, deepfake fraud cost organizations worldwide nearly $350 million in verifiable losses, Group-IB said. Its report included hair-raising stories, like the CFO who transferred $25 million to an attacker-controlled account after a Zoom call with what appeared to be the company’s CEO and the financial company that, in just eight months last year, had to defend against more than 8,000 deepfake fraud attempts where threat actors using AI-generated photos tried to bypass know-your-customer controls for loan applications.
AI-powered cybercrime tools and services don’t just increase attack scale and speed; they also tear down the skills barrier, enabling nearly anyone to carry out highly sophisticated attacks, said Jeremiah Clark, chief technology officer at Fenix24. “What’s different now is that attack techniques that used to require real expertise are being packaged as turnkey services,” he said.
“When someone with zero technical background can spin up a convincing deepfake or deploy a phishing kit for the cost of a streaming subscription, it fundamentally changes the math on who your adversaries are and how many of them exist,” he added.
Organizations may be leaving themselves vulnerable to attack if they are stuck on the notion that sophisticated attacks require sophisticated attackers. The defender’s playbook needs to account for a much larger population of attackers operating at a much higher baseline of capability, Clark said. “That means rethinking detection strategies that rely on spotting known-bad patterns from a relatively small set of threat groups.”
The rapidly shifting attacker capabilities could force security leaders to rethink some long-standing defensive assumptions on multiple fronts. The rise of dark LLMs and AI tools tuned by adversaries could soon erode the effectiveness of detection mechanisms built around known threats. As AI crimeware becomes more available, defenders must prepare for a world in which malicious activity is more adaptive, more automated, and far harder to recognize using traditional indicators alone.
When attackers have access to LLMs fine-tuned specifically for malware development, the volume and variety of malicious code increase significantly, outpacing signature-based mechanisms and leaving defenders unable to rely on recognizing known malware, Clark said.
Defenders will need to shift toward behavioral detection that gives them a deeper understanding of what’s happening in the environment. “Instead of asking, ‘Have we seen this binary before?’ the question becomes, ‘Is this behavior normal for this system, this user, this application?’ And that will require a solid baseline of what normal looks like so deviations are easier to spot,” Clark said.
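To make Clark’s point concrete, here is a minimal sketch of baseline-driven behavioral detection, assuming per-host process telemetry is already being collected. The hosts, users, process names, and the simple set-membership check are illustrative stand-ins, not a real analytics pipeline.

```python
from collections import defaultdict

# Hypothetical telemetry: (host, user, process) launches observed during a
# training window, used to establish what "normal" looks like per host.
BASELINE_EVENTS = [
    ("web-01", "svc-app", "nginx"),
    ("web-01", "svc-app", "python3"),
    ("hr-laptop-7", "alice", "outlook.exe"),
    ("hr-laptop-7", "alice", "chrome.exe"),
]

def build_baseline(events):
    """Record which (user, process) pairs each host normally runs."""
    baseline = defaultdict(set)
    for host, user, process in events:
        baseline[host].add((user, process))
    return baseline

def is_anomalous(baseline, host, user, process):
    """Flag activity never observed for this host during the baseline window."""
    return (user, process) not in baseline.get(host, set())

baseline = build_baseline(BASELINE_EVENTS)

# A PowerShell launch by a service account on a web server deviates from the
# baseline, so it gets surfaced for review even if the binary itself is novel.
print(is_anomalous(baseline, "web-01", "svc-app", "powershell.exe"))  # True
```

The point of the sketch is the question it asks: not whether the binary is known-bad, but whether the behavior fits the host’s history.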
But dark LLMs won’t be the only new threat. Ram Varadarajan, CEO at Acalvio, pointed to one emerging example: the potential for attackers to poison the Model Context Protocol (MCP) servers that organizations are deploying to connect enterprise LLMs to external data sources.
“The more AI technology develops and the more broadly it’s deployed, the broader the attack surface becomes,” he said.
Security experts have long advocated that defenders augment signature-based detection with comprehensive behavioral analytics, but the AI-powered threats make doing so imperative. “For example, since dark LLMs produce polymorphic code that mutates in real time with each generation, the focus shifts to detecting anomalous system behaviors,” Varadarajan said. That means looking for “unusual data access patterns and command-and-control communication signatures rather than attempting to fingerprint the ever-changing malware payloads themselves.”
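One way to illustrate the point about command-and-control communication patterns: rather than matching payload signatures, examine the timing of outbound connections. The sketch below flags traffic whose inter-connection intervals are suspiciously regular, a common trait of automated beaconing; the timestamps, jitter threshold, and minimum event count are assumed values, not tuned detection logic.

```python
import statistics

def looks_like_beaconing(timestamps, max_jitter_seconds=2.0, min_events=5):
    """Flag connections to one destination whose inter-arrival times are nearly
    uniform -- a timing pattern typical of automated C2 beacons, regardless of
    what the payload looks like."""
    if len(timestamps) < min_events:
        return False
    intervals = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(intervals) <= max_jitter_seconds

# Illustrative connection times (seconds) from one host to one destination.
beacon_times = [0, 60.2, 119.8, 180.1, 240.0, 299.9]   # ~every 60s, tiny jitter
browsing_times = [0, 12.0, 95.0, 400.0, 410.0, 900.0]  # irregular, human-driven

print(looks_like_beaconing(beacon_times))    # True
print(looks_like_beaconing(browsing_times))  # False
```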
Fortunately, he said, even AI models follow a known set of behaviors, so organizations should be able to engineer tripwires in the environment that can detect malicious behavior, whether it’s a malicious external AI agent or a misaligned internal AI system.
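A minimal sketch of one such tripwire, assuming a honeytoken credential is planted somewhere no legitimate user, application, or AI agent should ever read it; the token format, the check function, and the alerting are hypothetical placeholders for whatever a real environment would use.

```python
import secrets

# A decoy credential planted in the environment (for example, in a config file
# or a data source exposed to internal agents). No legitimate workflow ever
# uses it, so any appearance of this value is a high-confidence signal.
HONEYTOKEN = "demo-key-" + secrets.token_hex(16)

def check_credential(presented_value, source):
    """Return an alert string if the decoy credential is ever presented."""
    if presented_value == HONEYTOKEN:
        return f"ALERT: honeytoken used by {source}; treat as active intrusion"
    return None

# Example: an external attacker or a misaligned internal agent that scraped
# the decoy value and tried to use it trips the wire immediately.
print(check_credential(HONEYTOKEN, source="agent-42"))
```

Because the token has no legitimate use, the alert carries almost no false-positive risk, which is what makes deception-style tripwires attractive against adaptive, AI-driven activity.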
Defending against these and other AI-enabled threat actors will require organizations to embed AI into their security postures as well, Group-IB said. That means using AI-powered tools themselves for threat detection, response, and fraud protection; automating where possible; and expanding dark-web monitoring.
A bit of paranoia might even be helpful, Varadarajan said.
“We’ve now passed the point where we might be able to differentiate true from false in any online interaction,” he said.
“If the stakes are high on any particular transaction, then you need to raise the barrier of trust,” he added, whether that means mitigating deepfake-related risks or countering threats from adversarial AI tools.
Organizations should assume that novel AI-generated malware will get through their defenses, said Fenix24’s Clark, and they must understand their own application dependencies and the potential “blast radius” from a breach. “If something detonates and you don’t know what systems are connected to each other, you can’t contain it, and you can’t recover in the right order,” he said.
“The organizations that recover fastest from these incidents are the ones that already have their dependency maps and recovery sequences documented before anything happens,” he added.
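To illustrate the dependency mapping Clark describes, here is a minimal sketch that walks a directed graph of which systems depend on which to compute the blast radius of a breach. The system names and edges are invented; in practice the graph would come from a CMDB, infrastructure-as-code definitions, or network inventory rather than being hard-coded.

```python
from collections import deque

# Hypothetical dependency map: each system -> the systems that depend on it.
DEPENDENTS = {
    "identity-provider": ["vpn", "payroll-app", "email"],
    "vpn": ["remote-admin"],
    "payroll-app": ["hr-portal"],
    "email": [],
    "remote-admin": [],
    "hr-portal": [],
}

def blast_radius(compromised):
    """Breadth-first walk of the dependency graph to list everything downstream
    of a breached system."""
    seen, queue = set(), deque([compromised])
    while queue:
        node = queue.popleft()
        for dependent in DEPENDENTS.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

# If the identity provider is compromised, these systems are in the blast
# radius and dictate both containment scope and recovery order.
print(blast_radius("identity-provider"))
# ['email', 'hr-portal', 'payroll-app', 'remote-admin', 'vpn']
```

The same graph also suggests a recovery sequence: restore upstream dependencies before the systems that rely on them.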
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.