RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabsReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Request a demo
Menu
AppSec & Supply Chain SecurityFebruary 24, 2026

How AI agents upend supply chain security

Here’s what you need to know about their impact on software security — and what you can do to fight back. 

man in suit
Jaikumar Vijayan, Freelance technology journalistJaikumar Vijayan
FacebookFacebookXX / TwitterLinkedInLinkedInblueskyBlueskyEmail Us
AI upends SSCS

Autonomous AI agents are creating an entirely new category of software supply chain risk that few organizations are equipped to defend against.

The problem is that AI agents are fundamentally different from conventional software components, as Andrew Storms, vice president of security at Replicated, noted in a recent blog post. 

Unlike traditional software dependencies with deterministic behavior, agents operate through instructions interpreted by LLMs at runtime.

Andrew Storms

To create traditional software, developers import compiled code that behaves in a predictable and predetermined way. The code can be easily scanned for vulnerabilities, verified via cryptographic signatures, and isolated with scoped permissions to minimize security risks, Storms wrote.

AI agents, however, can behave unpredictably because their actions are determined not by the code itself but by how the large language model (LLM) interprets its instructions at runtime. Worse, agents often have administrative access to critical systems but lack the security controls found in traditional software. 

The AI agent risk trifecta is completed when agents and skills are distributed via new marketplaces, some of which, like ClawHub, allow publishers with little or even no experience to upload their unvetted software. More often than not, the freely available agents lack the security features typically available for traditional software such as signatures, reputation systems, and audit trails, Storms said.

The result, Storms wrote: More than two decades of effort shoring up supply chain security are being upended virtually overnight. Here’s what you need to know about AI agents’ devastating effects on software supply chain security — and what you can do to fight back. 

[ See webinar: Develop Your Playbook for AI-Driven Software Risk ]

Today’s mandates and frameworks are not enough

The mandates and frameworks that emerged in the wake of the SolarWinds attack, which were bolstered by the widespread adoption of software bills of materials (SBOMs) and secure development practices, suddenly are insufficient to protect supply chains, Storms said, because we’re no longer importing established libraries with code we can inspect. We’re importing instructions that will be interpreted by an LLM, and although the LLM’s actions might be auditable, the reasoning behind the actions can be unknowable. But it gets much worse, he said, because agents often have broad permissions and so can execute commands, modify infrastructure, and take other actions that heighten risk.

A new dimension of risk

Diana Kelley, CISO at Noma Security, agreed with Storms’ assessment of the problem, adding that traditional supply chain controls built for static artifacts such as signed code, scanned dependencies, and trusted repositories come up short when it comes to AI agents and skills.  While you can generally understand the intended behavior of code when you review and scan it before deployment, Kelley said, it is impossible to predict what an AI agent will do because its behavior is assembled dynamically at runtime with LLM-generated outputs influencing what steps the agent will take next. “The LLM generates the response, and the agent turns that response into actions using connected tools,” she said. And those tools don’t have to be code. So, if someone hides harmful instructions inside a document or tool, the LLM may interpret those instructions as something to follow, and the agent may act on them. 

That level of dynamic behavior and connectivity can create a fast-moving path from an untrusted external component to real internal impact.

Diana Kelley

Malicious AI skills are already proliferating

Bad actors are already taking advantage of the new AI agent environment and populating agent skills repositories with malicious skills and payloads. As an example, Storms pointed to a study by Snyk, which looked at AI agent skills on ClawHub and skills.sh and found that 534 out of 3,984 contained at least one critical security vulnerability. Those vulnerabilities included malware, instructions for exposing secrets, and functions for executing prompt injection attacks. Another study, by Koi, uncovered 824 malicious AI skills on ClawHub that would expose organizations downloading them to a wide range of potential attacks.

What’s troubling, said Randolph Barr, CISO at Cequence Security, is that vulnerabilities in AI agent skills have much greater potential for damage. 

Early npm or PyPI compromises typically resulted in malicious code executing within defined application boundaries. With AI agents, skills can effectively inherit the full permissions of the agent they are attached to. That changes the impact model materially.

Randolph Barr

If, for example, a harmful AI skill were integrated into a self-running process and a bad actor were to exploit prompt injection, the skill could enable data theft, unauthorized workflow changes, permissions misuse, and lateral movement within systems, Barr said. “The combination of prompt injection, autonomous action, and high-permission skills creates a multiplier effect that did not exist at scale in earlier package ecosystems,” he said.

AI-specific supply chain controls are needed

Replicated’s Storms said the software supply chain can’t be protected without new controls specifically targeted at AI agents and agent skills. He proposes: 

  • A code-signing equivalent to ensure cryptographic provenance for natural-language instructions
  • Runtime monitoring to catch deviant AI agent behavior
  • Just-in-time access provisioning by default and better visibility overall over models and agent environments

Noma Security’s Kelley said mitigation can’t happen until organizations recognize the dangers that come with AI agents that have access to systems, data, and workflows while being guided by probabilistic LLM output. In short, she said, risk exists anywhere an agent is connected to tools and has meaningful permissions.

We need stronger standards for agent provenance and accountability, she said, including cryptographic signing of skills, clearer publisher trust signals, and better auditability in agent marketplaces, similar to what is now available for traditional software supply chains. But for right now, visibility is essential, she said. “Inventory where agents are being used, which teams are deploying them, what they’re connected to, and what actions they are authorized to take.”

Once organizations acknowledge the problem, they must apply least privilege and make sure AI agents don’t inherit all of a user’s access by default, Kelley said. They should not have broad, standing credentials, especially in production environments or sensitive repositories. Organizations also should enforce runtime controls and monitoring. 

With agents, the real risk is not just what code they contain; it’s what they are permitted to do at the moment they are invoked, using the tools and credentials they’ve been given.

Diana Kelley

Where to start

Frameworks such as the NIST’s AI Risk Management Framework and the OWASP Top 10 for Agentic Applications are good starting points for organizations figuring out how to mitigate AI-specific risk, Cequence’s Barr said.

Organizations also need to enforce strong identity and access management for agents and skills, along with strict least-privilege rules, he said. Other advisable measures, he said, are setting up guardrails and policy engines to manage agent actions, using sandboxing and segmentation for execution environments, monitoring and logging all API and agent interactions, and being able to quickly disable or revoke skills if needed.

And one feature of AI-enabled environments that organizations must keep in mind, Barr said, is that they allow adversaries to experiment, automate, and iterate faster. The speed of exploitation increases because the infrastructure supporting experimentation has also accelerated, he said. 

AI agents extend the existing application attack surface; they do not replace it and should be governed with that reality in mind. The goal is not to slow innovation but to secure it intentionally.

Randolph Barr

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain SecurityArtificial Intelligence (AI)/Machine Learning (ML)

More Blog Posts

AI coding racing

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Learn More about Can AppSec keep pace with AI coding?
Can AppSec keep pace with AI coding?
Finger on map

LLMmap puts its finger on ML attacks

Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Learn More about LLMmap puts its finger on ML attacks
LLMmap puts its finger on ML attacks
Vibeware bad vibes

Vibeware: More than bad vibes for AppSec

Threat actors are leveraging the freewheeling vibe-coding trend to deliver malicious software at scale.

Learn More about Vibeware: More than bad vibes for AppSec
Vibeware: More than bad vibes for AppSec
CRA accelerates advantage

The CRA is coming: Are you ready?

Here's how the EU's Cyber Resilience Act will reshape the software industry — and how that accelerates advantages.

Learn More about The CRA is coming: Are you ready?
The CRA is coming: Are you ready?

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBlueskyRSSRSS
Back to Top