RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabsReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Request a demo
Menu
AppSec & Supply Chain SecurityApril 29, 2026

MCP rug-pull attack worries mount

This new class of AI tool supply chain attack highlights how trust of agents can be exploited.

John P. Mello Jr.
John P. Mello Jr., Freelance technology writer.John P. Mello Jr.
FacebookFacebookXX / TwitterLinkedInLinkedInblueskyBlueskyEmail Us
MCP attacks

The rug-pull attack, an exploit targeting Model Context Protocol (MCP) tools, is raising red flags about software supply chain security. Security researcher Nasser Ali Alzahrani recently described the danger in a recent blog post.

“Your AI agent’s tools can change after you approve them, without triggering any notification or re-consent. The MCP spec allows this by design.”
—Nasser Ali Alzahrani

Ali Alzahrani said MCP clients fetch tool definitions from the server at runtime; their trust that those definitions have remained static opens the door to rug-pull attacks. “Between the moment a user approves a tool and the moment the agent calls it, the server can rewrite the tool’s description, parameters, and behavior. The approval references a definition that no longer exists.”

Randolph Barr, CISO of Cequence Security, said the element of trust makes the rug-pull technique a new class of AI supply chain attack.

“What makes this meaningfully different from traditional supply chain attacks is that it exploits trust over time rather than targeting an initial point of compromise. It’s aimed at behavior, not just code, and it operates inside autonomous systems that often have standing access to sensitive data and actions.”
—Randolph Barr

Barr said that, from a security leadership perspective, “this represents the early emergence of a distinct category worth naming: post-deployment drift attacks in AI systems, more colloquially known as ‘rug pulls.’”

Here’s what your team needs to know about this new class of AI-derived supply chain attack.

[ See webinar: Develop Your Playbook for AI-Driven Software Risk ]

How AI redefines supply chain security

Boris Cipot, a security engineer at Black Duck Software, said organizations should expect to see more MCP rug-pull attacks.

“The open-source ecosystem has shown us that attackers continuously find new ways to exploit trust, ownership changes, and implicit assumptions. There’s no reason to expect the AI ecosystem to be any different.”
—Boris Cipot

As AI systems, agents, and tool chains evolve — and they are doing so rapidly — new classes of vulnerabilities are inevitable, Cipot said. “Users and organizations must anticipate these risks rather than react to them. They must assume that AI supply chains, like traditional software supply chains, will be actively targeted and must be secured accordingly.”

As ReversingLabs (RL) researchers noted in a recent report, "AI is not just a tool in the software supply chain — it is the supply chain."

Why hashing and version pinning matters

Alzahrani wrote that in following a client/server model, the MCP server exposes tools, and the client — the user’s agent framework — fetches the tool list, presents it to the user or the model, and calls tools on behalf of the agent.

The critical assumption is that the tool definition that the user approved is the tool definition the agent executes, he said. But MCP does not enforce this. There is no versioning, no content hash, no approval-time snapshot stored on the client side.

That’s the single biggest gap in the protocol’s security posture, said Dan Graves, chief product officer at WitnessAI. 

“Without hashing or version-pinning, there is no mechanism to detect that a tool changed between the moment you approved it and the moment your agent called it. We solved this for traditional software supply chains with checksums and signed packages decades ago. MCP shipped without either.”
—Dan Graves

MCP’s flexibility is owed to these dynamic tool definitions, he said, but MCP provides zero primitives for verifying integrity after the initial handshake. “Implementations inherited that blind spot and built approval flows that check once and never look again,” Graves said.

“Think about what MCP tools actually touch in production,” said Riyaz Walikar, hacker-in-chief at Appsecco: patient health records through APIs of the FHIR health care data standard, financial transaction data and trading signals, people’s personally identifiable information, API keys, and secrets pulled from environment variables.

“A mutated tool with access to any of those can exfiltrate data to an external endpoint and still return a perfectly normal response. Your agent keeps running. No errors. No alerts.”
—Riyaz Walikar

And the regulatory side is just as bad, Walikar said. “HIPAA needs audit trails proving [protected health information] wasn’t improperly accessed. SOC 2 Type II auditors need proof that logs weren’t modified. [The Securities and Exchange Commission and the Financial Industry Regulatory Authority] need nonrewritable records of automated trading decisions. Without hashing or versioning, you can'’ produce any of that evidence.”

Black Duck’s Cipot said researchers have already seen this problem play out in other ecosystems. For example, in package managers or container image workflows, a dependency can be referenced by both name and version or it can simply use the name and rely on a tag such as “latest.” The latter is well known to be dangerous because control is lost over what’s actually being installed and can unknowingly pull in compromised or malicious code.

“A rug-pull scenario follows the same pattern. The difference is that instead of installing an unexpected package, your MCP continues executing a tool whose underlying behavior may have changed, making the system inherently unpredictable and harder to trust.”
—Boris Cipot

Cequence Security’s Barr said this is functionally equivalent to running unsigned, unverified code in production with privileged access, “and that should be an uncomfortable comparison for any organization that takes its security posture seriously.”

How ease of execution factors in

Making matters worse, executing a rug-pull attack is easy. “Following a basic MCP tool publishing guide, it took me about 10 minutes to get most of the way to publishing an MCP tool that I could use to manipulate an agent calling my tool,” said Collin Abidi, a machine learning engineer on the AI cyber-effects team at Carnegie Mellon University’s Software Engineering Institute.

“It would not be difficult to update such a package with AI-specific malicious content that traditional malware scanners would struggle to detect.”
—Collin Abidi

Barr said that If there’s no hashing, signing, or behavioral monitoring in place, “it's trivially easy.”

“A compromised maintainer or a poisoned upstream dependency can quietly modify tool metadata or behavior, and that change propagates through the system without anyone noticing. There’s nothing actively validating integrity after initial deployment, so the attack surface essentially remains wide open for the entire lifecycle of the integration.”
—Randolph Barr

AI demands new supply chain security tooling

Countering rug-pull attacks requires significantly deeper visibility than what traditional logging provides, Barr said. That means maintaining versioned snapshots of tool definitions, both the schema and the metadata, along with cryptographic hashes or signed manifests that capture approved states at a point in time.

You also need comprehensive audit logs of agent-to-tool interactions, including inputs, outputs, and data destinations, as well as behavioral baselines that document expected endpoints, data flows, and action patterns, he said. “Without all of that, you can’t definitively prove that something changed. You can only observe that something unexpected happened, which is a much weaker position to defend from,” Barr said.

The real danger isn’t the attack itself; it’s the trust architecture underneath, said WitnessAI’s Graves.

“Organizations are connecting AI agents to MCP servers the same way they connected apps to APIs 15 years ago: approve once, assume forever. Until we treat tool definitions as untrusted input on every single invocation — the way we learned to treat user input after SQL injection — rug pulls will keep working.”
—Dan Graves

JPMorgan Chase CISO Pat Opet’s call to action on third-party software last year brought attention to the problem of blind trust. In that open letter, Opet noted that many SaaS models rely on implicit trust of the provider while dismantling traditional security boundaries that had kept organizations protected from attacks and using authentication protocols such as OAuth that enable direct links between external, third-party services and sensitive internal resources.  

Opet said recently at the RSAC Conference that JPMC is creating a new architecture for AI-powered agents to run on that will limit their access to sensitive information and IT assets. Such a controlled architecture gives JPMC the confidence to scale AI coding assistance and other AI-powered desktop tools because unintended consequences such as identity theft and abuse are greatly curbed.

Saša Zdjelar, chief trust officer at ReversingLabs, said trust is now the central problem — and “trust debt” is the big problem now facing AppSec teams.

“What Pat is describing is the unwinding of decades of trust debt. The industry defaulted to implicit trust in vendors because verifying was hard and expensive. JPMorgan is proving that when you actually inspect what’s inside the software you’re buying — the components, the dependencies, the threat models — vendors respond.”
—Saša Zdjelar

Join Spectra Assure Community to leverage binary analysis to secure your software development lifecycle (SDLC) – for free.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain Security

More Blog Posts

AI coding racing

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Learn More about Can AppSec keep pace with AI coding?
Can AppSec keep pace with AI coding?
Finger on map

LLMmap puts its finger on ML attacks

Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Learn More about LLMmap puts its finger on ML attacks
LLMmap puts its finger on ML attacks
Vibeware bad vibes

Vibeware: More than bad vibes for AppSec

Threat actors are leveraging the freewheeling vibe-coding trend to deliver malicious software at scale.

Learn More about Vibeware: More than bad vibes for AppSec
Vibeware: More than bad vibes for AppSec
CRA accelerates advantage

The CRA is coming: Are you ready?

Here's how the EU's Cyber Resilience Act will reshape the software industry — and how that accelerates advantages.

Learn More about The CRA is coming: Are you ready?
The CRA is coming: Are you ready?

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBlueskyRSSRSS
Back to Top