RL Blog

Security Operations | February 19, 2026

Cybercrime-as-a-service forces a security rethink

With AI-powered tools readily available, sophisticated attacks no longer require sophisticated attackers.

Jaikumar Vijayan, Freelance technology journalist

We are entering a new era of cybercrime that leverages the growing availability of deepfake video and voice-cloning tools, AI-powered phishing kits, and synthetic identity packages. A massive wave of sophisticated attacks is coming that could soon force a rethink of enterprise security strategies.

An analysis by Group-IB of dark-web markets found that AI-powered crime tools are now so cheap and easy to obtain that even low-skill attackers with little more than a credit card can launch fast, large-scale attacks once reserved for well-resourced, highly sophisticated groups.

AI, as Group-IB noted, has industrialized cybercrime, allowing attackers to automate and scale faster than most defenders can detect and respond. “Unlike earlier waves of cybercrime, AI adoption by threat actors has been strikingly fast,” the company wrote in its report. 

AI is now firmly embedded as core infrastructure throughout the criminal ecosystem rather than an occasional exploit.

Group-IB

Here’s what you need to know about AI-enabled cybercrime services — and why you need to rethink your security strategy.

See webinar: Threat Intelligence 101: Why Context and Behavior Matter

Why cybercrime-as-a-service matters

Group-IB’s infiltration of underground markets uncovered a fast-growing ecosystem where AI-enabled attack tools are available to criminals, sometimes as subscription software. The security vendor found AI-driven phishing kits renting for between $30 and $200 a month and fake identity bundles, complete with AI-fabricated faces, cloned voices, and video, available for as little as $5. Also prolific in underground markets were “dark” large language models (LLMs) specifically fine-tuned for tasks such as phishing-content generation and malware development, services for jailbreaking LLMs, AI-powered information stealers, and remote-access Trojans.

For enterprise organizations, the rapid weaponization of AI is introducing brand-new security challenges. AI-generated threats leave few forensic traces, and adversaries can easily tailor variants for each target, which makes signature-based detection far less effective. AI is also undermining long-standing approaches to identity authentication and verification, as attackers can clone voices using as little as 10 seconds of audio scraped from social media or fabricate synthetic identities that exist in no authoritative database.

Group-IB identified several well-known threat groups that have already honed the use of AI crimeware to attack targets, including North Korea’s Lazarus Group, the Chinese threat actor GoldFactory, Iran-backed cyberespionage group APT35, and the cybercrime collective Scattered Spider.

Ominous consequences

Another analysis of AI-enabled crimeware, conducted by Sumsub last year, came to a similar conclusion: AI has accelerated both the quantity and quality of identity fraud. Of the fraud attempts that the company analyzed, about 28% involved the use of high-quality deepfakes, fake identities, multistage social engineering, or tampering with telemetry. “The U.S., in particular, has become a testing ground for international fraudsters, who roll out and refine new schemes before exporting them worldwide,” Sumsub’s “Identity Fraud Report 2025-26” said. “Emerging trends, like ‘employee fraud’ — where fake workers or payroll identities are generated to exploit benefits, internal systems, or compliance blind spots — often debut here, targeting the region’s mature digital and employment infrastructure.”

The consequences of these trends are becoming painfully obvious. In Q2 2025 alone, deepfake fraud cost organizations worldwide nearly $350 million in verifiable losses, Group-IB said. Its report included hair-raising stories, like the CFO who transferred $25 million to an attacker-controlled account after a Zoom call with what appeared to be the company’s CEO. Another: the financial company that, in just eight months last year, had to fend off more than 8,000 deepfake fraud attempts in which threat actors used AI-generated photos to try to bypass know-your-customer controls for loan applications.

Collapsing the attacker skills barrier

AI-powered cybercrime tools and services don’t just increase attack scale and speed; they also tear down the skills barrier, enabling nearly anyone to carry out highly sophisticated attacks, said Jeremiah Clark, chief technology officer at Fenix24. “What’s different now is that attack techniques that used to require real expertise are being packaged as turnkey services,” he said. 

When someone with zero technical background can spin up a convincing deepfake or deploy a phishing kit for the cost of a streaming subscription, it fundamentally changes the math on who your adversaries are and how many of them exist.

Jeremiah Clark

Organizations may be leaving themselves vulnerable to attack if they are stuck on the notion that sophisticated attacks require sophisticated attackers. The defender’s playbook needs to account for a much larger population of attackers operating at a much higher baseline of capability, Clark said. “That means rethinking detection strategies that rely on spotting known-bad patterns from a relatively small set of threat groups.”

The rapidly shifting attacker capabilities could force security leaders to rethink long-standing defensive assumptions on multiple fronts. The rise of dark LLMs and AI tools tuned by adversaries could soon erode the effectiveness of detection mechanisms built around known threats. As AI crimeware becomes more available, defenders must prepare for a world in which malicious activity is more adaptive, more automated, and far harder to recognize using traditional indicators alone.

The rising threat of AI

When attackers have access to LLMs fine-tuned specifically for malware development, the volume and variety of malicious code rise significantly, outpacing signature-based mechanisms and leaving defenders unable to rely on recognizing known malware, Clark said.

Defenders will need to shift toward behavioral detection that lets them understand what’s happening in the environment at a deeper level. “Instead of asking, ‘Have we seen this binary before?’ the question becomes, ‘Is this behavior normal for this system, this user, this application?’ And that will require a solid baseline of what normal looks like so deviations are easier to spot,” Clark said.
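The baseline-then-deviation approach Clark describes can be sketched as a minimal anomaly check over per-user activity counts. This is an illustrative toy, not any vendor's implementation: the event schema, the z-score threshold, and all names here are assumptions for the example.

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(history):
    """Build per-(user, action) mean/stdev baselines from historical daily counts.

    `history` is a list of dicts mapping (user, action) -> daily event count.
    """
    series = defaultdict(list)
    for day in history:
        for key, count in day.items():
            series[key].append(count)
    return {k: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for k, v in series.items()}

def flag_anomalies(baseline, today, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above baseline.

    Never-before-seen behavior (no baseline) is flagged outright.
    """
    alerts = []
    for key, count in today.items():
        mu, sigma = baseline.get(key, (0.0, 0.0))
        if sigma == 0.0:
            if count > mu:  # new or previously constant behavior that changed
                alerts.append((key, count))
        elif (count - mu) / sigma > threshold:
            alerts.append((key, count))
    return alerts
```

A real deployment would baseline far richer features (process lineage, data-access patterns, network destinations), but the core question is the same one Clark poses: is this normal for this user and system?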

But dark LLMs aren’t going to be the only new threat. Ram Varadarajan, CEO at Acalvio, pointed to another emerging threat as just one example: the potential for attackers to poison the Model Context Protocol (MCP) servers that organizations are deploying to connect enterprise LLMs to external data sources. 

The more AI technology develops and the more broadly it’s deployed, the broader becomes the attack surface.

Ram Varadarajan

Security experts have long advocated that defenders augment signature-based detection with comprehensive behavioral analytics, but AI-powered threats make doing so imperative. “For example, since dark LLMs produce polymorphic code that mutates in real time with each generation, the focus shifts to detecting anomalous system behaviors,” Varadarajan said. That means looking for “unusual data access patterns and command-and-control communication signatures rather than attempting to fingerprint the ever-changing malware payloads themselves.”

Fortunately, he said, even AI models follow a known set of behaviors, so organizations should be able to engineer tripwires in the environment that can detect malicious behavior, whether it’s a malicious external AI agent or a misaligned internal AI system.
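One common way to build the kind of tripwire Varadarajan describes is a honeytoken: plant decoy identifiers that no legitimate process should ever touch, and treat any reference to them as a high-fidelity alert. The sketch below assumes a simple auth-event dict; the token values, field names, and function names are all hypothetical.

```python
# Decoy identifiers seeded into the environment. Legitimate workloads never
# use them, so any hit is almost certainly attacker reconnaissance or misuse.
CANARY_TOKENS = {
    "AKIAFAKEKEY123456789",   # decoy cloud API key (made up for illustration)
    "svc-backup-legacy",      # decoy service-account name (made up)
}

def check_event(event):
    """Return an alert dict if an auth event references a canary token, else None."""
    fields = (event.get("user", ""), event.get("api_key", ""))
    hits = [f for f in fields if f in CANARY_TOKENS]
    if hits:
        return {"severity": "critical",
                "reason": "canary token used",
                "hits": hits}
    return None
```

Because the decoys have no legitimate use, this check is largely immune to the polymorphism problem: it fires on attacker behavior, not on any payload signature.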

Defending the new frontier requires defensive AI

Defending against these and other AI-enabled threat actors will require organizations to embed AI into their security postures as well, Group-IB said. That means using AI-powered tools themselves for threat detection, response, and fraud protection; automating where possible; and expanding dark-web monitoring.

A bit of paranoia might even be helpful, Varadarajan said. 

We’ve now passed the point where we might be able to differentiate true from false in any online interaction.

Ram Varadarajan

“If the stakes are high on any particular transaction, then you need to raise the barrier of trust,” he said, whether it is to mitigate deepfake-related risks or threats from adversarial AI tools.

Organizations should assume that novel AI-generated malware will get through their defenses, said Fenix24’s Clark, and they must understand their own application dependencies and the potential “blast radius” from a breach. “If something detonates and you don’t know what systems are connected to each other, you can’t contain it, and you can’t recover in the right order,” he said. 

The organizations that recover fastest from these incidents are the ones that already have their dependency maps and recovery sequences documented before anything happens.

Jeremiah Clark

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: Security Operations
