Spectra Assure Free Trial
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free Trial
The runaway adoption of OpenClaw and the subsequent fallout have sparked responses ranging from resigned to alarmed. One screed from an AI security researcher, laid out the situation starkly:
Disesdi Shoshana CoxSomebody took a bunch of autocomplete-on-adderall bots, trained them on Reddit until they were basically reddit-autocomplete-adderallbots, then some capital-J Jeeniuses made a fake Reddit for them ... and then certain parts of the tech world lost their collective mind because it ‘proved’ we’re in a singularity or something.
What it does proves to some security researchers is that they have been right to caution against a headlong rush into agentic AI deployments — and is now an important case study that demonstrates how agents can increase software supply chain risks.
Here’s why you need to learn the lessons from OpenClaw now — and get ahead of risk when the next agentic AI sensation comes along.
[ See webinar: Trust But Verify: Secure the AI You Build, Buy and Deploy ]
First, how did we get here? OpenClaw, a vibe-coded AI agent platform, was created late last year and named Clawdbot, which prompted a trademark complaint in January from Anthropic , maker of the popular Claude platform. Over the next three days, Clawdbot was rebranded, first as Moltbot, then as OpenClaw.
By the time the name issue had been settled, OpenClaw was being widely used — and abused. Quickly garnering 200,000 GitHub stars, OpenClaw became the center of an entire ecosystem that popped up nearly overnight and that includes ClawHub, a skills marketplace, and Moltbook, a social network where OpenClaw agents congregate and interact.
The appeal is easy to understand. Most chatbots just answer questions, but OpenClaw actually does things: triage emails, book reservations, write code, execute shell commands, and more. It’s an AI personal assistant with agency — like Apple's Siri with a streak of independence.
But this collection of AI coding assistants shipped without basic security controls. The result? A cascade of risk exposures that has escalated just as steeply as the adoption curve.
In early February, the project drew three high-impact security advisories in three days. More than 386 malicious skills flooded ClawHub, the platform’s skills marketplace. Wiz researchers found that Moltbook’s database was wide open, exposing 1.5 million API tokens, 35,000 email addresses, and plaintext credentials that had been included in agent-to-agent direct messages. Then Molt Road launched: an AI-agent-only black market trading stolen credentials and weaponized skills.
Researchers demonstrated zero-click backdoors, C2 implant deployment, and supply chain attacks, all using OpenClaw’s intended capabilities — with no software vulnerabilities required.
Security researcher Jamieson O’Reilly reported that in one week of part-time work spent digging through ClawHub, he found hundreds of exposed control servers leaking credentials, discovered backdoors in the top downloaded skill as well as many others, and unearthed a worm-friendly XSS vulnerability that enabled one-click account takeover. Included in his report are several warnings:
Jamieson O’ReillyWe’re accelerating into a world where AI writes code faster than humans ever could and features ship in days instead of months. … Attackers move fast too. … The attack surface is expanding at the same rate as the codebase. The AI ecosystem is speedrunning software development. It needs to speedrun security alongside it.
In a recent interview on the ResilientCyber podcast, O’Reilly said that even OpenClaw creator Peter Steinberger would have to agree that OpenClaw is a project that’s not enterprise-ready. As if to prove that point, Steinberger announced a week later that he’s joining OpenAI and relinquishing control of OpenClaw to an OpenAI-backed foundation so it can get the kind of security attention it needs.
Steinberger said the goal of the move is “to build an agent that even my mum can use” but with “a lot more thought on how to do it safely.”
Time will tell whether the foundation brings the security rigor the project needs, but the following three lessons stand out.
The good news for application security (AppSec) teams that now must secure the agentic AI ecosystem is that many of the fundamentals of software supply chain security still apply.
“The foundational concepts of trust, provenance, and dependency risk are identical,” said Dhaval Shah, senior director of product management at ReversingLabs (RL), who noted that third-party large language models (LLMs) are analogous to the open-source libraries in software repositories. The concepts of least privilege and secrets management still apply, and comprehensive software bills of materials (as well as AI-BOMs that enumerate AI-specific components) should be prerequisites for the procurement and deployment of agent-driven software.
Dhaval ShahIf an underlying dependency is compromised, the agent is compromised.
But security pros need to keep in mind that agentic AI changes the nature of supply chain dependencies.
One of the gravest exposures from supply chain components using agentic AI is that even natural-language text becomes vulnerable. OpenClaw skills stored in ClawHub are plaintext instructions written in Markdown files. Legacy AppSec tools such as static and dynamic application security tools and software composition analysis were already proving inadequate in light of modern software development trends, and an agentic platform like OpenClaw makes them completely irrelevant, as O’Reilly saw when testing ClawHub skills.
Jamieson O’ReillyCompletely malicious skills were getting zero out of 100 in the threat score because engines were looking for PowerShell scripts and executables. And this looks like a text file.
This has huge implications for software security, said RL’s Shah.“It completely flips the traditional AppSec paradigm on its head,” he said. “A malicious actor doesn’t need to write malware anymore. They just need to write highly persuasive, manipulative English.”
AppSec leaders will have to rethink how they examine artifacts, shifting from syntax analysis to intent and context analysis, he said. And, clearly it’s no longer enough to just scan the code, Shah added.
Dhaval ShahWe have to analyze the entire artifact in context: the instructions, the model interpreting them, and the permissions granted in runtime.
In some ways, ClawHub is playing out a scenario that software supply chain security professionals know well: A new hub for third-party components launches and is rapidly adopted, and malicious packages flood in before security can catch up.
Paul McCarty and his team at Open Source Malware have spent years hunting for threats in npm, GitHub, and Python Package Index (PyPI). When they turned their attention to ClawHub, they weren’t surprised by what they found.
They discovered the first malicious payload within minutes and eventually cataloged hundreds of malicious skills published between January 27 and February 2, McCarty wrote in his report. The skills masqueraded as cryptocurrency trading tools while delivering information-stealing malware to macOS and Windows systems.
Paul McCartyMany of the payloads we found were visible in plain text in the first paragraph of the SKILL.md file.
Since then, OpenClaw has partnered with VirusTotal to scan for malicious skills, but attackers have already adapted. McCarty’s latest research found a new campaign that bypasses VirusTotal scanning by hosting malware on lookalike OpenClaw websites and using skills purely as decoys rather than embedding payloads directly.
“The shift from embedded payloads to external malware hosting shows threat actors adapting to detection capabilities,” McCarty wrote in an update. “As AI skill registries grow, they become increasingly attractive targets for supply chain attacks.”
And the patterns are already spreading beyond ClawHub. Malicious VS Code extensions impersonating Moltbot have hit Microsoft’s Extension Marketplace — one extension, published by a user named “clawdbot” on January 27, stealthily drops malicious payloads on compromised hosts. And it’s not just OpenClaw; any agent ecosystem that allows third-party extensions inherits such risks, said Christopher Ijams, a cybersecurity engineer who writes about AI risks in his Substack, ToxSec.
Christopher IjamsClaude Code has marketplace skill vulnerabilities. … Your developers are probably already running something with the same attack surface.
The key is to start looking for ways to audit the agent supply chain.
“Know what code is inside your agents before deployment. ClawHub skills are just zip files — they’re auditable if you bother to audit them,” Ijams wrote, pointing to Cisco’s recently released Skill Scanner as a right-now resource to get started. It’s an open-source tool that combines static analysis, behavioral dataflow analysis, LLM-assisted semantic review, and VirusTotal scanning — a multilayered approach that reflects the complexity of detecting threats in natural-language instructions.
The merciless mocking of OpenClaw apparently got quick results, with the solo maintainer realizing he couldn’t remove malicious skills from his registry and needed to seek out institutional support.
But that doesn’t mean that the whole experience won’t be repeated, with another new agentic AI technology that has few guardrails being adopted so fast — followed by such strong pushback — that the creators see the need for guardrails.
Oganizations that treat agentic AI like any other supply chain dependency — auditing artifacts, vetting sources, and monitoring behavior — will be in a better position than those that assume someone else is handling security.
Learn how to develop your own AI security playbook in this webinar with Doug Levin and RL's Tomislav Peričin.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free Trial