The fledgling Model Context Protocol standard has generated lots of headlines and discussion among developers, who like that MCP makes it easy to connect large language models (LLMs) directly with tools and data. The question is whether development teams and organizations are aware of the risks.
Unveiled by Anthropic AI in November 2024, MCP’s benefits are tantalizing, from allowing agents to easily connect to tools via standardized APIs, maintaining persistent sessions, running commands, and sharing context across workflows, Elena Cross, an AI DevOps researcher working on LLM security and protocol design, wrote in a recent post on Medium.
The big problem, said Cross, is that MCP is not secure by default, which opens up a lot of dangerous possibilities via code. She wrote, "If you’ve plugged your agents into arbitrary servers without reading the fine print — congrats, you may have just opened a side-channel into your shell, secrets, or infrastructure."
“MCP is powerful. But we’re seeing history repeat itself — with all the speed of AI agents, and none of the maturity of API security.”
—Elena Cross
Here's what you need to know about the power of MCP for AI coding — and how to make sure your organization has a handle on the risks before rolling it out.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
What's missing is what matters
For MCP to continue in use, said Cross, it will need secure-by-default protocols. Without such improvements, developers using MCP are exposing their work to security risks including prompt injection vulnerabilities in AI tooling, tool poisoning attacks, silent definition mutations after installation, and cross-server tool shadowing, where a malicious server can override or intercept calls made to a trusted server. “Yes, it’s as bad as it sounds," she wrote.
These things are all possible because MCP does not include authentication standards, context encryption, tool integrity verification, and other critical security methods, Cross said.
“There’s no mechanism to say: ‘This tool hasn’t been tampered with.’ And users don’t see the full tool instructions that the agent sees.”
—Elena Cross
Understanding these shortcomings is important. Developers can ensure better security by using input validation, pinning versions of MCP servers and tools, and sanitizing tool descriptions. And platform builders can display their full tool metadata, use integrity hashes for server updates, and enforce session security. Lastly, users can help bolster MCP security by not connecting to random servers, monitoring session behavior such as product logs, and watching for unexpected tool updates, Cross stressed.
Cross's warning is one developers need to hear. In a recent article, Julien Chaumond, chief technology officer at Hugging Face, shared how his team implemented agents on MCP by using a loop from the TypeScript extension of JavaScript.
“It's going to make agentic AI way simpler going forward."
–Julien Chaumond
Chaumond's post highlights the power of MCP — but he did not raise any security issues. And that is where the worry lies. With the rise of AI coding (including agentic AI) and trends such as vibe coding, MCP in its default state is a risky proposition.
MCP in default mode is a red flag for enterprises
Dor Sarig, CEO and co-founder of Pillar Security, wrote a recent blog post on the security dangers of MCP recently. He told ReversingLabs that his company strongly advises exercising significant caution, particularly when considering deploying MCP in live business production environments.
“The core issue is that MCP, in its current stage, shares a characteristic with many foundational internet protocols like Telnet or early HTTP: It wasn't built with a security-first mindset, making it insecure by design from a practical standpoint.”
—Dor Sarig
He said the critical danger with MCP today is that data is executable. “Configurations and context provided through MCP are not merely passive information; they are interpreted by LLMs as live instructions,” Sarig said.
“This paradigm shift means that vulnerabilities can have more direct and immediate consequences, potentially leading to unintended actions or system compromise. Therefore, rushing MCP into production without a deep understanding of these risks and robust mitigating controls would be imprudent.”
—Dor Sarig
Luca Beurer-Kellner, CTO and co-founder of AI agent vendor Invariant Labs, said MCP needs more work before it can be used safely by enterprises.
“MCP itself isn’t insecure by design, but it increases the attack surface by pushing LLMs closer to tools and sensitive data. The real issue is that current LLMs remain vulnerable to prompt injections, and most popular agent systems lack strong guardrails. So yes, I recommend using caution and thinking carefully about trust and access boundaries.”
—Luca Beurer-Kellner
Eventually, the protocol’s security can be improved, at least partially, Beurer-Kellner said. “We’re making good progress, but securing agentic systems is hard — much harder than static LLM security. ... It’s clear that new threats will keep emerging as complexity grows.”
Kevin Swiber, CEO at Layered System, a consulting firm specializing on AI and APIs, recently posted on LinkedIn that the security concerns around MCP are very real, but he holds out hope that the standard will be a useful tool that eventually can be tamed.
Because MCP is based on Language Server Protocol (LSP) — and because integrated developer environments (IDEs) use LSP servers all the time — security concerns are high, Swiber said.
“The big difference is blast radius. Instead of only impacting the technology team, MCP has the potential to impact everyone in the business. Hijacking the CEO’s thoughts and plans from the AI host of their choice offers a new opportunity for bad actors.”
—Kevin Swiber
It's early days for MCP — keep that in mind
The security concerns with MCP should not be surprising since the protocol is still actively being created, Swiber said. The development community is continuing to provide useful feedback to Anthropic, and the company has responded with urgency to address enterprise-grade security requirements, he said. “We should always exercise caution with new technology, but that shouldn't stop us from exploring potential. MCP is in its early days."
“There have been concerns around security, and those concerns are actively being addressed. It would be understandable for an organization to be hesitant to adopt MCP today, but they should be preparing to adopt it in the near future if it could be helpful to their business goals.”
—Kevin Swiber
While developers should be cautious about exposing company-owned data, they can start experimenting with MCP today, said Swiber. “With open-source tooling and locally executable LLMs, it is possible to use MCP in a self-contained way today,” he said.
Swiber said MCP is an important protocol that bridges conversational interfaces to data and behavior that impacts the business. He suggested that developers stay up to date on the specifications and tooling while learning the security controls that are needed.
“With the widespread adoption of MCP, including the commitments of OpenAI and Microsoft, MCP is here to stay. The future may have more protocols to consider, but MCP is going to be relevant for a long time to come. It's worth investing the energy today in researching how this technology can help your business.”
—Kevin Swiber
A work in progress — security and all
Even Docker recently acknowledged the security concerns around MCP by releasing a Docker MCP Catalog and Toolkit in beta to provide a curated set of popular, containerized MCP servers to jump-start agentic AI development The Docker tools manage credentials, enforce access control, and secure the runtime environment, according to the company.
Invariant Labs' Beurer-Kellner said that to improve MCP today, his team has already released two open-source tools, MCP-scan and Guardrail, which can help developers audit and secure their MCP setups — and raise awareness of risk at this critical stage.
Pillar Security's Sarig said he believes that the security challenges inherent in MCP are solvable, but doing so will require "dedicated and multifaceted effort from the industry.” The needed protections, he said, include robust security standards, best practices, highly granular permission models, advanced monitoring, anomaly detection, response capabilities, and dedicated protection against AI-specific attack vectors.
“As the MCP specification matures and the security community scrutinizes it more deeply, we anticipate the evolution of these controls, enabling more secure integration with enterprise security frameworks. However, this will be an ongoing process, not an overnight fix.”
—Dor Sarig
But even as all this is happening in the background, enterprises and developers must be careful as they navigate the early days of MCP, he said.
“There's a significant risk that the undeniable cleverness and powerful capabilities of MCP could lead to a situation where enthusiasm outpaces the necessary caution and adherence to rigorous security practices among development teams. The ease with which MCP promises to connect AI to diverse tools can be a strong siren call.”
—Dor Sarig
To counteract this, a proactive and continuous educational effort is paramount. Security teams must take the lead in evangelizing the specific security flaws and the expanded attack surface that MCP introduces, Sarig said. “This isn't just about providing documentation; it's about fostering a security-aware culture where developers understand the implications of, for example, managing OAuth tokens within MCP servers or the potential for prompt injection through data fetched via MCP."
"Without this deep awareness and a commitment to secure development lifecycles, the allure of rapid feature development using MCP could indeed overshadow critical security considerations.”
—Dor Sarig
Keep learning
- Read the 2025 Gartner® Market Guide to Software Supply Chain Security. Plus: Join RL's May 28 webinar for expert insights.
- Get the white paper: Go Beyond the SBOM. Plus: See the Webinar: Welcome CycloneDX xBOM.
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: See our Webinar for discussion about the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and learn how RL discovered the novel threat in this
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.