RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
AppSec & Supply Chain SecurityOctober 7, 2025

The Postmark MCP server attack: 5 key takeaways

A malicious Model Context Protocol package was found in the wild last week. Here are lessons from the compromise of the AI interface tool.

John P. Mello Jr., Freelance technology writer.John P. Mello Jr.
FacebookFacebookXX / TwitterLinkedInLinkedIn

More Blog Posts

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / Twitter
LinkedInLinkedIn
FacebookFacebook
InstagramInstagram
YouTubeYouTube
blueskyBluesky
RSSRSS
Back to Top
blueskyBluesky
Email Us

The recent discovery of malicious MCP (Model Context Protocol) server code has some sobering ramifications for security teams. The code was discovered by Koi Security in an npm package called postmark-mcp that copied an official Postmark Labs library with the same name. The legitimate postmark-mcp library allows users to send emails using AI assistants. The malicious version Bcc’d every developer email to “phan@giftshop[.]club.” Before the malicious package was taken down, it was downloaded 1,643 times.

MCP is an interface standard that enables machine-learning (ML) models for AI tools such as GPT, Claude, and Gemini to interact with external tools, data sources, and services in a structured and actionable manner. Instead of simply generating text, MCP servers let the AI tools perform tasks such as accessing documents from a file system, querying a database, creating calendar events, and using APIs to trigger workflows or retrieve live data.

Abhay Bhargav, CEO of SecurityReviewAI, said that before the advent of MCP, building AI agents involved using various agent frameworks, each of which worked only with specific LLMs. “This was seen as problematic because of the high churn of users and companies for different frontier large language models and different versions of those same LLMs,” Bhatgav said.

MCP burst onto the scene as a neutral and easy solution. You could expose any tools to agents through MCP, and it would work across different LLMs, unrestricted by language and framework. This makes MCP extremely compelling as a way to expose your LLMs to tools, which is essential for any AI agent.

Abhay Bhargav

Crystal Morin, a senior cybersecurity strategist at Sysdig, said MCP servers have become “the next big thing” for AI innovation. “In essence, MCP servers act like a universal adapter, enabling AI models to easily communicate with databases, applications, and services without having to reinvent the wheel,” Morin said.

With an MCP server and relatively little effort, an AI model can get hooked into just about anything, leaving the users’ creativity as the only limit.

Crystal Morin

But while MCP may be a compelling tool for ML, the interface can be a headache for security teams, wrote Koi Security’s chief technology officer, Idan Dardikman, in a company blog post:  

These MCP servers run with the same privileges as the AI assistants themselves — full email access, database connections, API permissions — yet they don’t appear in any asset inventory, skip vendor risk assessments, and bypass every security control, from DLP to email gateways. By the time someone realizes their AI assistant has been quietly Bcc’ing emails to an external server for months, the damage is already catastrophic.

Idan Dardikman

Josh Devon, co-founder and former chief operating officer of Flashpoint, said the discovery of the Postmark MCP attack is momentous. “For the past year, the security community has been discussing the theoretical risks of the agentic layer of AI. This incident moves the threat from the theoretical to a tangible, real-world attack,” Devon said.

It is the canary in the coal mine, providing the first concrete proof that the AI agent supply chain is a high-stakes, unmanaged attack surface. For every CISO and GRC leader, this incident validates their concerns about the loss of control that comes with deploying autonomous agents.

Josh Devon

Here are five key lessons from the Postmark MCP incident.

Get Guide: How the Rise of AI Will Impact Supply Chain Security

1. Adversaries have a new attack surface

Dave Ferguson, director of product for software supply chain security at ReversingLabs (RL), said the incident continues a trend of the npm ecosystem being targeted for malicious campaigns. “It is not a sophisticated supply chain attack by any means, but leveraging an MCP server is a new wrinkle,” Ferguson said.

This particular attack may have been just a test, because the legitimate postmark-mcp project on GitHub is not popular — only one watch and two stars. Nevertheless, it is unlikely to be the last such incident because of the widespread interest in MCP at the moment.

Dave Ferguson

2. MCP can turn AI into malicious assistants

Dhaval Shah, senior director of product management at RL, said that while traditional malicious packages require developer awareness at the time of installation, “MCP servers … are designed for autonomous AI execution. A single backdoored line executes hundreds of times daily without human review.” 

The attack surface isn’t just broader. It’s automated and invisible. MCP servers turn AI assistants into unwitting accomplices, executing malicious code with God Mode permissions 24/7, often with deeper system access than a human developer would use in a single session.

Dhaval Shah

3. Supply chain governance is essential for AI

The problem with MCP servers — and the AI agents that rely on them — is that they’re designed for speed and autonomy. “That creates a tension with governance, which depends on visibility, ownership, and auditability,” said Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions. “When tools bypass human review, they create a blind spot. Actions are taken faster than they can be traced. Security teams then face an uphill battle trying to reconstruct intent after the fact,” he said.

Without proactive governance models — like assigning ownership, applying behavioral auditing, and enforcing circuit breakers — you risk losing the chain of accountability entirely. In short, we need to govern MCP servers the way we govern identity providers — because in practice, that’s what they are.

Rosario Mastrogiacomo

Trey Ford, chief strategy and trust officer for Bugcrowd, a crowdsourced bug bounty platform, said that MCP is a new technology under rapid development and adoption. 

Logging, visibility, and monitoring generally follow a new capability. This is the golden hour for malicious actors to target these technologies, as security teams do not yet have the telemetry to see in and investigate.

Trey Ford 

4. Compromised MCP servers pose big upstream risk

Dependencies typically impact one library or application, said Randolph Barr, CISO of Cequence Security. 

A malicious MCP server can sit inside live workflows and silently intercept sensitive transactions across multiple trusted tools like emails, patient data, and financial records, giving attackers far broader leverage.

Randolph Barr

Sysdig’s Morin said that a poisoned dependency is typically a one-off compromise that impacts everyone who installs it. An MCP server, in contrast, offers continuous integration within an environment and has access to APIs, data, and applications. 

Rather than just impacting a single codebase, which can be patched or rolled back, a rogue MCP server can modify or manipulate an entire AI supply chain and all of its connected workflows.

Crystal Morin

Ensar Seker, CISO of SOCRadar, said that what makes compromised MCP servers particularly dangerous is that they don’t just poison one node of the supply chain. “They operate as autonomous upstream control planes. They issue instructions, retrieve data across environments, and can adapt based on environmental feedback,” Seker said.

This is a step beyond static malware or backdoors. It’s programmable persistence at the supply chain level.

Ensar Seker

5. New identity attribution is needed

Morin said that threats move a lot faster when humans are hands-off, significantly shrinking the window for defenders to react. “While traditional threats can be identified with a single suspicious email or file modification, an MCP server and its AI agents can autonomously gather data or send emails. The potential blast radius is, therefore, much larger,” she said.

Security teams often have little visibility into how MCP servers operate. They need new forms of identity attribution for AI agents, task-based and ever-changing privilege for autonomous systems, and real-time AI-aware monitoring to keep pace with the changes — malicious or otherwise — that AI agents can make in seconds.

Crystal Morin

To defend against these threats, more rigorous agent validation pipelines, better behavioral monitoring of CI/CD environments, and a shift in mindset are needed, Morin said.

Treat every package or agent as untrusted until proven otherwise. Just as we moved from perimeter firewalls to zero trust in networking, we now need zero-trust AI agent architectures to secure the next-gen software supply chain.

Tags:AppSec & Supply Chain Security
John P. Mello Jr.
Postmark MCP attack

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabsReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Menu
Request a demo
Trust model flips

How agentic AI flips the trust model

As AppSec shifts focus from the components to data, your strategy needs updating. Are you on top of your trust debt?

Learn More about How agentic AI flips the trust model
How agentic AI flips the trust model
MCP attacks

MCP rug-pull attack worries mount

This new class of AI tool supply chain attack highlights how trust of agents can be exploited.

Learn More about MCP rug-pull attack worries mount
MCP rug-pull attack worries mount
AI coding racing

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Learn More about Can AppSec keep pace with AI coding?
Can AppSec keep pace with AI coding?

LLMmap puts its finger on ML attacks

Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Learn More about LLMmap puts its finger on ML attacks
LLMmap puts its finger on ML attacks
Finger on map