Break Free from VirusTotal with ReversingLabs Threat IntelWatch AMA Replay

The Postmark MCP server attack: 5 key takeaways

A malicious Model Context Protocol package was found in the wild last week. Here are lessons from the compromise of the AI interface tool.

Postmark MCP attack

The recent discovery of malicious MCP (Model Context Protocol) server code has some sobering ramifications for security teams. The code was discovered by Koi Security in an npm package called postmark-mcp that copied an official Postmark Labs library with the same name. The legitimate postmark-mcp library allows users to send emails using AI assistants. The malicious version Bcc’d every developer email to “phan@giftshop[.]club.” Before the malicious package was taken down, it was downloaded 1,643 times.

MCP is an interface standard that enables machine-learning (ML) models for AI tools such as GPT, Claude, and Gemini to interact with external tools, data sources, and services in a structured and actionable manner. Instead of simply generating text, MCP servers let the AI tools perform tasks such as accessing documents from a file system, querying a database, creating calendar events, and using APIs to trigger workflows or retrieve live data.

Abhay Bhatgav, CEO of SecurityReviewAI, said that before the advent of MCP, building AI agents involved using various agent frameworks, each of which worked only with specific LLMs. “This was seen as problematic because of the high churn of users and companies for different frontier large language models and different versions of those same LLMs,” Bhatgav said.

MCP burst onto the scene as a neutral and easy solution. You could expose any tools to agents through MCP, and it would work across different LLMs, unrestricted by language and framework. This makes MCP extremely compelling as a way to expose your LLMs to tools, which is essential for any AI agent.

Abhay Bhatgav

Crystal Morin, a senior cybersecurity strategist at Sysdig, said MCP servers have become “the next big thing” for AI innovation. “In essence, MCP servers act like a universal adapter, enabling AI models to easily communicate with databases, applications, and services without having to reinvent the wheel,” Morin said.

With an MCP server and relatively little effort, an AI model can get hooked into just about anything, leaving the users’ creativity as the only limit.

Crystal Morin

But while MCP may be a compelling tool for ML, the interface can be a headache for security teams, wrote Koi Security’s chief technology officer, Idan Dardikman, in a company blog post:  

These MCP servers run with the same privileges as the AI assistants themselves — full email access, database connections, API permissions — yet they don’t appear in any asset inventory, skip vendor risk assessments, and bypass every security control, from DLP to email gateways. By the time someone realizes their AI assistant has been quietly Bcc’ing emails to an external server for months, the damage is already catastrophic.

Idan Dardikman

Josh Devon, co-founder and former chief operating officer of Flashpoint, said the discovery of the Postmark MCP attack is momentous. “For the past year, the security community has been discussing the theoretical risks of the agentic layer of AI. This incident moves the threat from the theoretical to a tangible, real-world attack,” Devon said.

It is the canary in the coal mine, providing the first concrete proof that the AI agent supply chain is a high-stakes, unmanaged attack surface. For every CISO and GRC leader, this incident validates their concerns about the loss of control that comes with deploying autonomous agents.

Josh Devon

Here are five key lessons from the Postmark MCP incident.

Get Guide: How the Rise of AI Will Impact Supply Chain Security

1. Adversaries have a new attack surface

Dave Ferguson, director of product for software supply chain security at ReversingLabs (RL), said the incident continues a trend of the npm ecosystem being targeted for malicious campaigns. “It is not a sophisticated supply chain attack by any means, but leveraging an MCP server is a new wrinkle,” Ferguson said.

This particular attack may have been just a test, because the legitimate postmark-mcp project on GitHub is not popular — only one watch and two stars. Nevertheless, it is unlikely to be the last such incident because of the widespread interest in MCP at the moment.

Dave Ferguson

2. MCP can turn AI into malicious assistants

Dhaval Shah, senior director of product management at RL, said that while traditional malicious packages require developer awareness at the time of installation, “MCP servers … are designed for autonomous AI execution. A single backdoored line executes hundreds of times daily without human review.” 

The attack surface isn’t just broader. It’s automated and invisible. MCP servers turn AI assistants into unwitting accomplices, executing malicious code with God Mode permissions 24/7, often with deeper system access than a human developer would use in a single session.

Dhaval Shah

3. Supply chain governance is essential for AI

The problem with MCP servers — and the AI agents that rely on them — is that they’re designed for speed and autonomy. “That creates a tension with governance, which depends on visibility, ownership, and auditability,” said Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions. “When tools bypass human review, they create a blind spot. Actions are taken faster than they can be traced. Security teams then face an uphill battle trying to reconstruct intent after the fact,” he said.

Without proactive governance models — like assigning ownership, applying behavioral auditing, and enforcing circuit breakers — you risk losing the chain of accountability entirely. In short, we need to govern MCP servers the way we govern identity providers — because in practice, that’s what they are.

Rosario Mastrogiacomo

Trey Ford, chief strategy and trust officer for Bugcrowd, a crowdsourced bug bounty platform, said that MCP is a new technology under rapid development and adoption. 

Logging, visibility, and monitoring generally follow a new capability. This is the golden hour for malicious actors to target these technologies, as security teams do not yet have the telemetry to see in and investigate.

Trey Ford 

4. Compromised MCP servers pose big upstream risk

Dependencies typically impact one library or application, said Randolph Barr, CISO of Cequence Security. 

A malicious MCP server can sit inside live workflows and silently intercept sensitive transactions across multiple trusted tools like emails, patient data, and financial records, giving attackers far broader leverage.

Randolph Barr

Sysdig’s Morin said that a poisoned dependency is typically a one-off compromise that impacts everyone who installs it. An MCP server, in contrast, offers continuous integration within an environment and has access to APIs, data, and applications. 

Rather than just impacting a single codebase, which can be patched or rolled back, a rogue MCP server can modify or manipulate an entire AI supply chain and all of its connected workflows.

Crystal Morin

Ensar Seker, CISO of SOCRadar, said that what makes compromised MCP servers particularly dangerous is that they don’t just poison one node of the supply chain. “They operate as autonomous upstream control planes. They issue instructions, retrieve data across environments, and can adapt based on environmental feedback,” Seker said.

This is a step beyond static malware or backdoors. It’s programmable persistence at the supply chain level.

Ensar Seker

5. New identity attribution is needed

Morin said that threats move a lot faster when humans are hands-off, significantly shrinking the window for defenders to react. “While traditional threats can be identified with a single suspicious email or file modification, an MCP server and its AI agents can autonomously gather data or send emails. The potential blast radius is, therefore, much larger,” she said.

Security teams often have little visibility into how MCP servers operate. They need new forms of identity attribution for AI agents, task-based and ever-changing privilege for autonomous systems, and real-time AI-aware monitoring to keep pace with the changes — malicious or otherwise — that AI agents can make in seconds.

Crystal Morin

To defend against these threats, more rigorous agent validation pipelines, better behavioral monitoring of CI/CD environments, and a shift in mindset are needed, Morin said.

Treat every package or agent as untrusted until proven otherwise. Just as we moved from perimeter firewalls to zero trust in networking, we now need zero-trust AI agent architectures to secure the next-gen software supply chain.

Back to Top