Spectra Assure Free Trial
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free Trial
Autonomous AI agents are creating an entirely new category of software supply chain risk that few organizations are equipped to defend against.
The problem is that AI agents are fundamentally different from conventional software components, as Andrew Storms, vice president of security at Replicated, noted in a recent blog post.
Andrew StormsUnlike traditional software dependencies with deterministic behavior, agents operate through instructions interpreted by LLMs at runtime.
To create traditional software, developers import compiled code that behaves in a predictable and predetermined way. The code can be easily scanned for vulnerabilities, verified via cryptographic signatures, and isolated with scoped permissions to minimize security risks, Storms wrote.
AI agents, however, can behave unpredictably because their actions are determined not by the code itself but by how the large language model (LLM) interprets its instructions at runtime. Worse, agents often have administrative access to critical systems but lack the security controls found in traditional software.
The AI agent risk trifecta is completed when agents and skills are distributed via new marketplaces, some of which, like ClawHub, allow publishers with little or even no experience to upload their unvetted software. More often than not, the freely available agents lack the security features typically available for traditional software such as signatures, reputation systems, and audit trails, Storms said.
The result, Storms wrote: More than two decades of effort shoring up supply chain security are being upended virtually overnight. Here’s what you need to know about AI agents’ devastating effects on software supply chain security — and what you can do to fight back.
Get Report: Software Supply Chain Security Report 2026Discussion: Report webinar
The mandates and frameworks that emerged in the wake of the SolarWinds attack, which were bolstered by the widespread adoption of software bills of materials (SBOMs) and secure development practices, suddenly are insufficient to protect supply chains, Storms said, because we’re no longer importing established libraries with code we can inspect. We’re importing instructions that will be interpreted by an LLM, and although the LLM’s actions might be auditable, the reasoning behind the actions can be unknowable. But it gets much worse, he said, because agents often have broad permissions and so can execute commands, modify infrastructure, and take other actions that heighten risk.
Diana Kelley, CISO at Noma Security, agreed with Storms’ assessment of the problem, adding that traditional supply chain controls built for static artifacts such as signed code, scanned dependencies, and trusted repositories come up short when it comes to AI agents and skills. While you can generally understand the intended behavior of code when you review and scan it before deployment, Kelley said, it is impossible to predict what an AI agent will do because its behavior is assembled dynamically at runtime with LLM-generated outputs influencing what steps the agent will take next. “The LLM generates the response, and the agent turns that response into actions using connected tools,” she said. And those tools don’t have to be code. So, if someone hides harmful instructions inside a document or tool, the LLM may interpret those instructions as something to follow, and the agent may act on them.
Diana KelleyThat level of dynamic behavior and connectivity can create a fast-moving path from an untrusted external component to real internal impact.
Bad actors are already taking advantage of the new AI agent environment and populating agent skills repositories with malicious skills and payloads. As an example, Storms pointed to a study by Snyk, which looked at AI agent skills on ClawHub and skills.sh and found that 534 out of 3,984 contained at least one critical security vulnerability. Those vulnerabilities included malware, instructions for exposing secrets, and functions for executing prompt injection attacks. Another study, by Koi, uncovered 824 malicious AI skills on ClawHub that would expose organizations downloading them to a wide range of potential attacks.
What’s troubling, said Randolph Barr, CISO at Cequence Security, is that vulnerabilities in AI agent skills have much greater potential for damage.
Randolph BarrEarly npm or PyPI compromises typically resulted in malicious code executing within defined application boundaries. With AI agents, skills can effectively inherit the full permissions of the agent they are attached to. That changes the impact model materially.
If, for example, a harmful AI skill were integrated into a self-running process and a bad actor were to exploit prompt injection, the skill could enable data theft, unauthorized workflow changes, permissions misuse, and lateral movement within systems, Barr said. “The combination of prompt injection, autonomous action, and high-permission skills creates a multiplier effect that did not exist at scale in earlier package ecosystems,” he said.
Replicated’s Storms said the software supply chain can’t be protected without new controls specifically targeted at AI agents and agent skills. He proposes:
Noma Security’s Kelley said mitigation can’t happen until organizations recognize the dangers that come with AI agents that have access to systems, data, and workflows while being guided by probabilistic LLM output. In short, she said, risk exists anywhere an agent is connected to tools and has meaningful permissions.
We need stronger standards for agent provenance and accountability, she said, including cryptographic signing of skills, clearer publisher trust signals, and better auditability in agent marketplaces, similar to what is now available for traditional software supply chains. But for right now, visibility is essential, she said. “Inventory where agents are being used, which teams are deploying them, what they’re connected to, and what actions they are authorized to take.”
Once organizations acknowledge the problem, they must apply least privilege and make sure AI agents don’t inherit all of a user’s access by default, Kelley said. They should not have broad, standing credentials, especially in production environments or sensitive repositories. Organizations also should enforce runtime controls and monitoring.
Diana KelleyWith agents, the real risk is not just what code they contain; it’s what they are permitted to do at the moment they are invoked, using the tools and credentials they’ve been given.
Frameworks such as the NIST’s AI Risk Management Framework and the OWASP Top 10 for Agentic Applications are good starting points for organizations figuring out how to mitigate AI-specific risk, Cequence’s Barr said.
Organizations also need to enforce strong identity and access management for agents and skills, along with strict least-privilege rules, he said. Other advisable measures, he said, are setting up guardrails and policy engines to manage agent actions, using sandboxing and segmentation for execution environments, monitoring and logging all API and agent interactions, and being able to quickly disable or revoke skills if needed.
And one feature of AI-enabled environments that organizations must keep in mind, Barr said, is that they allow adversaries to experiment, automate, and iterate faster. The speed of exploitation increases because the infrastructure supporting experimentation has also accelerated, he said.
Randolph BarrAI agents extend the existing application attack surface; they do not replace it and should be governed with that reality in mind. The goal is not to slow innovation but to secure it intentionally.
Learn about ReversingLabs' new AI security platform, which secures AI development and deployment from foundation to production.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free Trial