AppSec & Supply Chain Security | September 2, 2025

The future is here: AI assists new ransomware

ESET researchers have discovered malware that taps into OpenAI’s large language model to assist in ransomware attacks.

John P. Mello Jr., freelance technology writer
AI-borne malware has arrived

Threat actors have taken a step closer to creating an AI nightmare for security teams, say researchers who have discovered malware that can compromise a large language model (LLM) to assist in launching a ransomware attack.

Dubbed PromptLock by ESET and discovered by ESET researcher Anton Cherepanov, the ransomware is believed to be the first AI-borne malware. The malicious program contains embedded prompts that it sends to an open-weight LLM, gpt-oss:20b, to generate Lua scripts.

Although the prompts are static, the generated scripts can vary with each execution. The scripts can be used to exfiltrate files and subsequently encrypt them using the SPECK 128-bit encryption algorithm.
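For context on the cipher mentioned: SPECK is a lightweight, NSA-designed block cipher family, attractive in this setting because it is tiny and fast in pure software. Below is a minimal educational sketch of the Speck128/128 variant (128-bit blocks as two 64-bit words, 128-bit key, 32 rounds). This is an assumption-laden illustration of the published algorithm, not PromptLock's actual code.

```python
# Minimal Speck128/128 sketch: 128-bit block as two 64-bit words (x, y),
# 128-bit key as two 64-bit words, 32 rounds. Educational only.

MASK = (1 << 64) - 1  # work in 64-bit words
ROUNDS = 32           # Speck128/128 uses 32 rounds

def _ror(v, r): return ((v >> r) | (v << (64 - r))) & MASK
def _rol(v, r): return ((v << r) | (v >> (64 - r))) & MASK

def _round(x, y, k):
    # Speck round: x = (ROR(x,8) + y) ^ k; y = ROL(y,3) ^ x
    x = ((_ror(x, 8) + y) & MASK) ^ k
    y = _rol(y, 3) ^ x
    return x, y

def _round_inv(x, y, k):
    # Exact inverse of _round, applied in reverse order for decryption
    y = _ror(x ^ y, 3)
    x = _rol(((x ^ k) - y) & MASK, 8)
    return x, y

def expand_key(k1, k0):
    """Derive the 32 round keys; the round function doubles as the schedule,
    with the round counter standing in for the round key."""
    keys, l = [k0], k1
    for i in range(ROUNDS - 1):
        l, k = _round(l, keys[i], i)
        keys.append(k)
    return keys

def encrypt(x, y, keys):
    for k in keys:
        x, y = _round(x, y, k)
    return x, y

def decrypt(x, y, keys):
    for k in reversed(keys):
        x, y = _round_inv(x, y, k)
    return x, y
```

The appeal for script-level malware is clear from the sketch: the whole cipher is rotations, XOR, and modular addition, so it ports trivially to a language like Lua with no crypto library required.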

However, ESET researchers noted that the malware appears to be a work in progress, not an active threat. “The emergence of tools like PromptLock highlights a significant shift in the cyberthreat landscape,” Cherepanov said. “With the help of AI, launching sophisticated attacks has become dramatically easier, eliminating the need for teams of skilled developers.”

A well-configured AI model is now sufficient to create complex, self-adapting malware. If properly implemented, such threats could severely complicate detection and make the work of cybersecurity defenders considerably more challenging.

Anton Cherepanov

Here’s what you need to know about PromptLock — and what it portends.


Self-driving malware takes to the road 

Lawrence Pingree, a technical evangelist at Dispersive Holdings who has published research on self-driving malware, said that PromptLock gives malware the potential for self-creation.

Ultimately, this is the very beginning of where malware becomes adaptive and self-driving in nature. It means that rather than malware having to be pre-programmed with many variations to fit an environment, the malware can modify itself during its execution phases, morphing to attack environments according to their runtime context.

Lawrence Pingree

It is a very serious development, Pingree said, “because it means that eventually we will have malware that is given a goal, and can independently, with its own agentic agency, choose how to do it.”

Elad Luz, head of research at Oasis Security, said PromptLock marks a pivot from static, pre-compiled ransomware toward runtime-generated attack logic. “Instead of shipping hard-coded behaviors, the implant calls a locally hosted large language model to synthesize Lua scripts on demand for discovery, exfiltration, and encryption,” he said. 

That moves ransomware closer to an adaptive system that can change its TTPs [tactics, techniques, and procedures] per host, eroding the value of traditional, signature-centric defenses.

Elad Luz

Just as important is where the AI runs, Luz said. PromptLock uses a local model via the Ollama API; reporting points to OpenAI’s gpt-oss:20b, which is available for offline use, “so there’s no telltale traffic to cloud AI services and less for defenders to block with API controls.”

That is an architectural choice that matters, Luz said. “It keeps prompts and generated code on-host, blending into normal developer and ops tooling.”
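Luz's on-host point suggests one small, practical defender check: inventorying machines for an unexpected local LLM runtime. The sketch below probes Ollama's documented default port (11434 on localhost); the host, port, and timeout values are assumptions based on Ollama's defaults, and a real deployment would pair this with process and asset inventory rather than a bare port probe.

```python
# Hedged defender-side sketch: flag an unexpected local Ollama-style endpoint.
# Ollama's API listens on TCP 11434 by default; an LLM runtime on a host with
# no business need for one is worth a closer look.
import socket

def ollama_listening(host="127.0.0.1", port=11434, timeout=0.5):
    """Return True if something accepts TCP connections on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if ollama_listening():
        print("Service on 11434 -- verify an LLM runtime is expected on this host")
```

A port probe is only a weak heuristic (the port is configurable, and a listener is not proof of Ollama), but it illustrates that "no cloud traffic" does not mean "no observable footprint."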

Proof of concept: Check

PromptLock is early but important proof that AI can be weaponized directly in the attack chain, said Satyam Sinha, CEO and co-founder of the security firm Acuvity. “Until now, AI has mostly been used around the edges, including phishing emails, deepfakes, or automating reconnaissance,” he said. 

PromptLock shows how models can sit inside the malware itself, generating code and logic on demand. That’s a turning point for security teams, because we’re facing adversaries that can potentially outsource creativity to machines.

Satyam Sinha

Toby Lewis, head of threat analysis at Darktrace, said that PromptLock looks more like a proof of concept than an in-the-wild ransomware campaign. “The capabilities of the malware itself and the fact that multiple samples were uploaded from the same source within a short window suggest an academic or independent researcher was testing whether the approach would be detected by security tools,” Lewis said.

Lewis said that what ESET found is ransomware that uses AI to generate some of its code. “Prewritten prompts are being used to get an AI system to generate scripts on demand, which are then executed like traditional ransomware,” he said. 

It’s not much different from modular malware we’ve seen before — the only twist is outsourcing some of the scripting to an AI model. That makes AI more of an assistance tool in the process, rather than a sign we’re facing truly AI-powered ransomware.

Toby Lewis

Lewis added that the only real edge this technique gives attackers is that AI-generated scripts might look slightly different each time, which could trip up legacy detection tools. “But any behavioral security approach would still catch the malicious activity,” he said. “For organizations, defensive strategies don’t need to change. At the end of the day, this is just generic ransomware.”
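Lewis's contrast between legacy and behavioral detection can be made concrete with a toy example. Two scripts that do the same thing but differ in surface form (as an LLM might emit on successive runs) produce different hashes, defeating exact signature matching, while a crude behavioral fingerprint (here, simply which sensitive API calls appear) stays constant. The Lua snippets and the profiling heuristic are invented for illustration.

```python
# Toy demo: hash-based signatures break on cosmetic variation;
# a behavior-based profile does not. Script contents are hypothetical.
import hashlib
import re

run1 = 'local f = io.open(path, "rb"); local data = f:read("*a")'
run2 = 'local handle = io.open(path, "rb"); local blob = handle:read("*a")'

def sha256(script):
    return hashlib.sha256(script.encode()).hexdigest()

def behavior_profile(script):
    """Extract which file-access calls the script uses, ignoring names."""
    return sorted(set(re.findall(r"\b(?:io\.open|read)\b", script)))

assert sha256(run1) != sha256(run2)                      # signature mismatch
assert behavior_profile(run1) == behavior_profile(run2)  # same behavior
```

Real behavioral detection operates on runtime activity (file enumeration, mass encryption, network calls) rather than source-text patterns, but the asymmetry is the same: varying the text is cheap, varying the behavior is not.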

The real significance here isn’t the malware itself, but what it illustrates about AI’s role in the threat landscape, Lewis said. “AI can lower the barrier to entry while also both upskilling and deskilling attackers. Some generated scripts may not work at all if the model hallucinates, while in other cases, blindly running AI-generated code could spiral out of control and cause unintended damage. It’s a reminder of how AI, when misused, can introduce both new risks and unpredictable consequences.”

AI threat is a work in progress

Cherepanov acknowledged that PromptLock appears to be a work in progress and does not pose a serious threat to organizations. “That said,” he added, “we believe it is crucial to raise awareness within the cybersecurity community about such emerging risks to spark discussion, preparedness, and further research across the industry.”

Oasis Security’s Luz agrees that AI-borne threats are in the very early stages.

Near term, the risk is emerging but credible. PromptLock has not yet been seen in attacks, and running sizable local models carries resource and packaging hurdles. Those realities buy defenders time.

Elad Luz

Medium term, however, the technique of AI-generated, prompt-driven code execution poses a high risk of copycat adoption because it scales cheaply, adapts easily, and degrades static controls. “Combined with a ransomware ecosystem that continues to grow and evolve, organizations should plan for AI-assisted intrusions to become more common, not less,” Luz said.

Acuvity’s Sinha said that while PromptLock is not yet operational, the trajectory is serious. “Once code-generating implants circulate, we’ll see ransomware that mutates too quickly for traditional signature-based defenses to catch,” he said. “For enterprises, that means you can’t just wait for ‘known bad’ indicators. You need runtime visibility into how AI is behaving in your environment.”

Bob Erdman, associate vice president of development for the security firm Fortra, said that, by itself, the initial version of PromptLock is not overly concerning beyond the current problems we all face from ransomware actors today. 

This new trend will most likely result in more sophisticated ransomware and a leveling of the field among malicious actor capabilities, potentially fragmenting the ecosystem even more as other new actors acquire the more advanced capabilities that they currently are paying for from others.

Bob Erdman

Advantage, good guys?

Roger Grimes, a defense evangelist at KnowBe4, said that from now on, every hacker and malware attack will become AI-enabled. “That's because AI-enabled hacking will be more pervasive and more successful,” he explained. “The hackers and their malware creations will need to convert to AI-enabled methods to remain competitive in the hacking world. What hacker wants to use a non-AI, traditional tool and be less successful?”

In a world of AI-driven hacking and malware bots, the best defense is AI-driven cyber-defense with strong human-in-the-loop oversight, Grimes said. “And for once, I have some moderate hope that the good guys will finally win.”

In the past, when the bad guys did something, it took the cyber-defenders a while to catch up and respond. But in this case, the good guys invented AI, and they have been improving and using it more than the bad guys, he said.

Now, everything the bad guys are doing was invented by the good guys first. For once, the bad guys are the followers. And that gives me a little hope for the future. I think there is a great chance that our algorithms will be better than theirs.

Roger Grimes

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
