The future is here: AI-borne ransomware has arrived

ESET researchers have discovered malware that taps into an open-weight OpenAI large language model to assist in ransomware attacks.


Threat actors have taken a step closer to creating an AI nightmare for security teams, say researchers who have discovered malware that harnesses a large language model (LLM) to assist in launching a ransomware attack.

Dubbed PromptLock by ESET and discovered by ESET researcher Anton Cherepanov, the ransomware is believed to be the first AI-borne malware. The malicious program contains embedded prompts that it sends to an open-weight LLM, gpt-oss:20b, to generate Lua scripts.

Although the prompts are static, the generated scripts can vary with each execution. The scripts can be used to exfiltrate files and subsequently encrypt them using the SPECK 128-bit encryption algorithm.
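ESET's report names the SPECK cipher in its 128-bit form as the encryption step. For readers unfamiliar with it, SPECK is a publicly specified lightweight block cipher built from add-rotate-xor operations, which makes it easy to express in a generated script. The following is a minimal illustrative sketch of Speck128/128 (128-bit block, 128-bit key) based on the published specification; it is not the malware's actual code, and the sample key and plaintext words below are arbitrary values chosen for the demo.

```python
# Minimal Speck128/128 sketch: 128-bit block as two 64-bit words,
# 128-bit key as two 64-bit words, 32 rounds. Illustrative only.
MASK = (1 << 64) - 1  # keep arithmetic in 64-bit words

def ror(x, r):  # rotate a 64-bit word right by r bits
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x, r):  # rotate a 64-bit word left by r bits
    return ((x << r) | (x >> (64 - r))) & MASK

def expand_key(k0, l0, rounds=32):
    # The key schedule reuses the round function on the two key words,
    # mixing in the round index as a constant.
    keys = [k0]
    k, l = k0, l0
    for i in range(rounds - 1):
        l = ((k + ror(l, 8)) & MASK) ^ i
        k = rol(k, 3) ^ l
        keys.append(k)
    return keys

def encrypt(x, y, keys):
    # One round: x = (x >>> 8) + y, xor round key; y = (y <<< 3) xor x
    for k in keys:
        x = ((ror(x, 8) + y) & MASK) ^ k
        y = rol(y, 3) ^ x
    return x, y

def decrypt(x, y, keys):
    # Exact inverse of encrypt: undo the rounds in reverse order
    for k in reversed(keys):
        y = ror(x ^ y, 3)
        x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

keys = expand_key(0x0706050403020100, 0x0F0E0D0C0B0A0908)  # demo key
ct = encrypt(0x6C61766975716520, 0x7469206564616D20, keys)  # demo plaintext
print(f"{ct[0]:016x} {ct[1]:016x}")
```

The point of the sketch is how small the cipher is: a script generated on demand needs only a few dozen lines to encrypt files, which is part of why a prompt-driven approach is workable.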

However, ESET researchers noted that the malware appears to be a work in progress, not an active threat. “The emergence of tools like PromptLock highlights a significant shift in the cyberthreat landscape,” Cherepanov said. “With the help of AI, launching sophisticated attacks has become dramatically easier, eliminating the need for teams of skilled developers.”

A well-configured AI model is now sufficient to create complex, self-adapting malware. If properly implemented, such threats could severely complicate detection and make the work of cybersecurity defenders considerably more challenging.

Anton Cherepanov

Here’s what you need to know about PromptLock — and what it portends.


Self-driving malware takes to the road 

Lawrence Pingree, a technical evangelist at Dispersive Holdings who has published research on self-driving malware, said that PromptLock gives malware the potential for self-creation.

Ultimately, this is the very beginning of where malware becomes adaptive and self-driving in nature. It means that rather than malware having to be pre-programmed with many variations to fit an environment, the malware can modify itself during its execution phases, morphing to attack environments according to their runtime context.

Lawrence Pingree

It is a very serious development, Pingree said, “because it means that eventually we will have malware that is given a goal, and can independently, with its own agentic agency, choose how to do it.”

Elad Luz, head of research at Oasis Security, said PromptLock marks a pivot from static, pre-compiled ransomware toward runtime-generated attack logic. “Instead of shipping hard-coded behaviors, the implant calls a locally hosted large language model to synthesize Lua scripts on demand for discovery, exfiltration, and encryption,” he said. 

That moves ransomware closer to an adaptive system that can change its TTPs [tactics, techniques, and procedures] per host, eroding the value of traditional, signature-centric defenses.

Elad Luz

Just as important is where the AI runs, Luz said. PromptLock uses a local model via the Ollama API; reporting points to OpenAI’s gpt-oss:20b, which can run offline, “so there’s no telltale traffic to cloud AI services and less for defenders to block with API controls.”

That is an architectural choice that matters, Luz said. “It keeps prompts and generated code on-host, blending into normal developer and ops tooling.”
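Luz’s point about on-host generation is easy to picture: Ollama exposes a simple local HTTP API, and any program can POST a prompt to it without touching a cloud service. The sketch below shows how little is involved, assuming a default Ollama install listening on localhost port 11434; the endpoint and JSON fields come from Ollama’s public API documentation, and the prompt here is a harmless placeholder, not one of PromptLock’s.

```python
import json
from urllib import request

# Default endpoint of a locally running Ollama instance (assumption:
# standard install, no custom OLLAMA_HOST configuration).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # Ollama's /api/generate takes a model name and a prompt;
    # stream=False asks for one JSON object instead of chunked output.
    return json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    # Requires a running Ollama instance with the model pulled locally;
    # the generated text comes back in the "response" field.
    req = request.Request(OLLAMA_URL, data=build_payload(model, prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Build (but do not send) a request, to show what crosses the loopback
# interface: plain local HTTP, invisible to cloud-API egress controls.
payload = json.loads(build_payload(
    "gpt-oss:20b", "Write a Lua script that lists files in a directory"))
print(payload["model"])
```

Because the entire exchange happens over loopback, network-level controls on cloud AI endpoints never see it, which is exactly the blending-into-local-tooling problem Luz describes.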

Proof of concept: Check

PromptLock is early but important proof that AI can be weaponized directly in the attack chain, said Satyam Sinha, CEO and co-founder of the security firm Acuvity. “Until now, AI has mostly been used around the edges, including phishing emails, deepfakes, or automating reconnaissance,” he said. 

PromptLock shows how models can sit inside the malware itself, generating code and logic on demand. That’s a turning point for security teams, because we’re facing adversaries that can potentially outsource creativity to machines.

Satyam Sinha

Toby Lewis, head of threat analysis at Darktrace, said that PromptLock looks more like a proof of concept than an in-the-wild ransomware campaign. “The capabilities of the malware itself and the fact that multiple samples were uploaded from the same source within a short window suggest an academic or independent researcher was testing whether the approach would be detected by security tools,” Lewis said.

Lewis said that what ESET found is ransomware that uses AI to generate some of its code. “Prewritten prompts are being used to get an AI system to generate scripts on demand, which are then executed like traditional ransomware,” he said. 

It’s not much different from modular malware we’ve seen before — the only twist is outsourcing some of the scripting to an AI model. That makes AI more of an assistance tool in the process, rather than a sign we’re facing truly AI-powered ransomware.

Toby Lewis

Lewis added that the only real edge this technique gives attackers is that AI-generated scripts might look slightly different each time, which could trip up legacy detection tools. “But any behavioral security approach would still catch the malicious activity,” he said. “For organizations, defensive strategies don’t need to change. At the end of the day, this is just generic ransomware.”

The real significance here isn’t the malware itself, but what it illustrates about AI’s role in the threat landscape, Lewis said. “AI can lower the barrier to entry while also both upskilling and deskilling attackers. Some generated scripts may not work at all if the model hallucinates, while in other cases, blindly running AI-generated code could spiral out of control and cause unintended damage. It’s a reminder of how AI, when misused, can introduce both new risks and unpredictable consequences.”

AI threat is a work in progress

Cherepanov acknowledged that PromptLock appears to be a work in progress and does not pose a serious threat to organizations. “That said,” he added, “we believe it is crucial to raise awareness within the cybersecurity community about such emerging risks to spark discussion, preparedness, and further research across the industry.”

Oasis Security’s Luz agrees that AI-borne threats are in the very early stages.

Near term, the risk is emerging but credible. PromptLock has not yet been seen in attacks, and running sizable local models carries resource and packaging hurdles. Those realities buy defenders time.

Elad Luz

Medium term, however, the technique of AI-generated, prompt-driven code execution poses a high risk of copycat adoption because it scales cheaply, adapts easily, and degrades static controls. “Combined with a ransomware ecosystem that continues to grow and evolve, organizations should plan for AI-assisted intrusions to become more common, not less,” Luz said.

Acuvity’s Sinha said that while PromptLock is not yet operational, the trajectory is serious. “Once code-generating implants circulate, we’ll see ransomware that mutates too quickly for traditional signature-based defenses to catch,” he said. “For enterprises, that means you can’t just wait for ‘known bad’ indicators. You need runtime visibility into how AI is behaving in your environment.”

Bob Erdman, associate vice president of development at the security firm Fortra, said that, by itself, the initial version of PromptLock is not much more concerning than the problems ransomware actors already pose today.

This new trend will most likely result in more sophisticated ransomware and a leveling of the field among malicious actor capabilities, potentially fragmenting the ecosystem even more as other new actors acquire the more advanced capabilities that they currently are paying for from others.

Bob Erdman

Advantage, good guys?

Roger Grimes, a defense evangelist at KnowBe4, said that from now on, every hacker and malware attack will become AI-enabled. “That's because AI-enabled hacking will be more pervasive and more successful,” he explained. “The hackers and their malware creations will need to convert to AI-enabled methods to remain competitive in the hacking world. What hacker wants to use a non-AI, traditional tool and be less successful?”

In a world of AI-driven hacking and malware bots, the best defense is AI-driven cyber-defenses with strong human loops, Grimes said. “And for once, I have some moderate hope that the good guys will finally win.”

In the past, when the bad guys did something, it took the cyber-defenders a while to catch up and respond. But in this case, the good guys invented AI, and they have been improving and using it more than the bad guys, he said.

Now, everything the bad guys are doing was invented by the good guys first. For once, the bad guys are the followers. And that gives me a little hope for the future. I think there is a great chance that our algorithms will be better than theirs.

Roger Grimes