
MCP credential weakness raises red flags

More than half of Model Context Protocol servers were found to rely on static, long-lived credentials. With AI agents on the rise, that’s a problem.


MCP servers, which are important to the fast-developing AI stack, have a credentials problem. A recent analysis of more than 5,200 open-source Model Context Protocol server implementations by Astrix Security found that, while the vast majority of servers (88%) require credentials, more than half (53%) use credentials that rely on insecure, long-lived, static secrets, such as API keys and personal access tokens (PATs).

Relying on long-lived secrets as credentials, especially when they are stored on the user’s endpoint in an insecure manner, creates two serious security risks, Astrix Security researcher Tomer Yahalom explained.

The first: If an attacker manages to obtain a long-lived secret, they can keep using it until it is rotated or revoked. And the risk doesn’t stop there.

The first risk is further amplified by the second risk, which is storing secrets in an unsafe manner. That can significantly increase the chances of an attacker gaining access to these credentials in the first place.

Tomer Yahalom
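
To make that anti-pattern concrete, here is a hypothetical illustration (not drawn from the Astrix dataset): a local MCP client configuration that writes a long-lived personal access token in plaintext to the user’s endpoint. The file path, server name, and token value are all placeholders.

```python
# Hypothetical illustration of the anti-pattern described above (not taken from
# the Astrix report): a long-lived personal access token written in plaintext
# to a local MCP client configuration file on the user's endpoint.
import json
from pathlib import Path

config = {
    "mcpServers": {
        "github": {
            "command": "example-github-mcp-server",  # placeholder server binary
            "env": {
                # Long-lived static secret, stored unencrypted on the endpoint.
                # If this file leaks (backup, sync tool, infostealer), the token
                # stays valid until someone remembers to revoke it.
                "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_exampleStaticTokenValue"
            },
        }
    }
}

config_path = Path.home() / ".example-mcp-client" / "config.json"
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))  # plaintext on disk
print(f"Wrote static credential to {config_path}")
```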

Rosario Mastrogiacomo, vice president of strategy and solutions engineering at Sphere Technology Solutions, said that static secrets such as API keys and PATs are the equivalent of permanent passwords that never expire. “Once leaked, they can be reused indefinitely and often go undetected in code repositories or logs. For AI systems, where agents can autonomously call APIs, this risk can be compounded,” he said. 

A single exposed key can grant persistent access to model weights, training data, or even production systems. Credentials sprawl in these environments creates invisible, systemic risk that’s difficult to contain.

Rosario Mastrogiacomo

The Astrix Security study also found some other serious concerns — and they come at a time when the use of AI coding and AI agents is on the rise. Here’s what your team needs to know about the risk — and what you can do about it.


Authentication security is still lagging for MCP

The Astrix team also found that adoption of more modern and secure authentication methods, such as OAuth, is lagging for MCP servers. Only 8.3% of the servers analyzed supported OAuth. While adoption is growing, the researchers said, it still lags far behind, even though OAuth is the best approach for security.

As with any new technology, developers rush to utilize MCP without considering security, since adoption of secure authentication methods is often thought to be more complicated and time-consuming.

Tomer Yahalom
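
For contrast, the short-lived alternative looks roughly like the sketch below: a standard OAuth 2.0 client-credentials exchange in which the server requests a narrowly scoped token that expires on its own. The token endpoint, client ID, client secret, and scope are assumptions, not part of any particular MCP implementation.

```python
# A minimal sketch of the OAuth 2.0 client-credentials flow the researchers
# favor over static keys. Endpoint, client ID, client secret, and scope below
# are hypothetical placeholders.
import time

import requests  # third-party: pip install requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # hypothetical authorization server
CLIENT_ID = "example-mcp-server"
CLIENT_SECRET = "example-client-secret"             # ideally itself vault-managed
SCOPE = "mcp.tools.read"                            # narrowly scoped, not account-wide

_cache = {"access_token": None, "expires_at": 0.0}


def get_access_token() -> str:
    """Return a short-lived bearer token, refreshing it shortly before expiry."""
    if _cache["access_token"] and time.time() < _cache["expires_at"] - 60:
        return _cache["access_token"]
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": SCOPE,
        },
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    _cache["access_token"] = payload["access_token"]
    _cache["expires_at"] = time.time() + float(payload.get("expires_in", 3600))
    return _cache["access_token"]
```

Even if a token issued this way leaks, it expires on its own, unlike the static keys and PATs the study found in most servers.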

MCP implementations were designed for speed and interoperability, not zero trust, said Mastrogiacomo. “Many organizations still treat these servers as internal developer infrastructure rather than production assets. As a result, they don’t enforce token rotation, scoped credentials, or vault-based retrieval,” he said.

Culturally, security teams are still catching up to the reality that machine-to-machine and agent-to-agent authentication requires the same rigor as human access control. Legacy practices are hard to unlearn.

Rosario Mastrogiacomo

Another insecure practice, discovered in 79% of servers, was the storage of API keys in environment variables. Abhay Bhargav, CEO of AppSecEngineer, said that environment variables have no access controls, typically hold long-lived credentials, and offer no encryption or other protection.

They can be read by any process on the local machine and apps running in the environment. In addition, there’s no audit logging.

Abhay Bhargav 

Mastrogiacomo noted that environment variables are not secure boundaries: They are readable by any process on the host, logged in crash dumps, and often copied into build pipelines. “In shared or containerized environments, that exposure is magnified,” he said.

For AI agents that operate across multiple execution layers, environment variables become unintentional broadcast channels for credentials. It’s a silent but pervasive form of credentials leakage.

Rosario Mastrogiacomo
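
To see why that boundary is so porous, consider the following Linux-only sketch: Any process running as the same user can typically read another process’s environment through /proc, which is exactly the kind of exposure Bhargav and Mastrogiacomo describe. The variable-name markers below are assumptions.

```python
# Linux-only sketch of the exposure described above: any process running as the
# same user can typically read another process's environment via
# /proc/<pid>/environ, so a secret exported for one MCP server is visible to
# everything else in the session. The name markers are assumptions.
from pathlib import Path

SUSPECT_MARKERS = ("API_KEY", "TOKEN", "SECRET")

for proc in Path("/proc").iterdir():
    if not proc.name.isdigit():
        continue
    try:
        raw = (proc / "environ").read_bytes()  # denied for other users' processes
    except OSError:
        continue
    for entry in raw.split(b"\0"):
        name, _, value = entry.partition(b"=")
        label = name.decode(errors="replace")
        if any(marker in label.upper() for marker in SUSPECT_MARKERS):
            print(f"pid {proc.name}: {label} is readable ({len(value)} bytes)")
```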

What’s at risk with MCP server security?

Getting MCP server security right is critical because a compromised MCP server provides direct access to sensitive resources, data, and tools and can amplify prompt-injection attacks into full-system compromises. Gal Moyal, of the CTO office at Noma Security, said that as organizations deploy multiple MCP servers to enable different AI capabilities, they expand their attack surface while introducing authentication vulnerabilities, supply chain risks from third-party servers, and visibility gaps that make it difficult to monitor what AI agents are actually doing. 

MCP servers represent a critical control point in the AI stack where proper security controls determine whether AI agents remain safely constrained or become pathways for data exfiltration, unauthorized access, and lateral movement across enterprise systems.

Gal Moyal

Traditional security tools can’t see MCP servers as distinct risk entities, Moyal said. “Endpoint detection treats them as legitimate processes. CNAPP [Cloud-native application protection platform] solutions don’t understand agent-to-server communication patterns. And manual inventories are obsolete the moment a developer spins up a new agent.”

MCP servers amplify both agent productivity and risk. Enterprise cybersecurity organizations require the visibility and runtime protection needed to embrace agentic AI securely while maintaining control over powerful, potentially destructive capabilities.

Gal Moyal

AppSecEngineer’s Bhargav said that in this case, like many others in the recent history of application development, speed of implementation has far outpaced the speed of security.

This will cause more pain for companies in the short term before it gets better. It’s essential for companies to get their developers and engineering teams trained on MCP risks and to apply programmatic and, in many cases, custom defenses to protect themselves.

Abhay Bhargav

New free, open-source tool developed to mitigate risk

In addition to its research findings, Astrix Security released an open-source tool that wraps around any MCP server to pull secrets directly from a secure vault at runtime, ensuring that no sensitive secrets are exposed on host machines. 

Instead of relying on static credentials in configuration files, the tool pulls the relevant secret from a vault — currently, the project supports only AWS Secrets Manager — and starts the designated MCP server with the secret injected into its environment variables. Using the tool ensures that no exposed secrets exist on any machine hosting MCP servers.

Astrix Security’s Yahalom said that because the secret is stored in AWS, the user must be authenticated with the AWS command-line interface to access it. And because AWS credentials automatically expire and require reauthentication, this in practice provides only temporary access to the long-lived secret.
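
The pattern is simple enough to sketch. The following is not Astrix’s tool, just a minimal illustration of the same idea under assumed names: Fetch the secret from AWS Secrets Manager at launch and hand it only to the child MCP server process.

```python
# Not the Astrix tool itself: a minimal sketch of the same pattern, fetching the
# secret from AWS Secrets Manager at launch time and handing it only to the
# child MCP server process. The secret name, environment variable, and server
# command are hypothetical.
import os
import subprocess

import boto3  # third-party: pip install boto3; relies on your authenticated AWS session

SECRET_ID = "mcp/github-token"              # hypothetical Secrets Manager entry
ENV_VAR = "GITHUB_PERSONAL_ACCESS_TOKEN"
SERVER_CMD = ["example-github-mcp-server"]  # placeholder MCP server command


def main() -> None:
    client = boto3.client("secretsmanager")
    secret = client.get_secret_value(SecretId=SECRET_ID)["SecretString"]

    child_env = dict(os.environ)
    child_env[ENV_VAR] = secret  # lives only in the child's process environment,
                                 # never in a config file on the host
    subprocess.run(SERVER_CMD, env=child_env, check=True)


if __name__ == "__main__":
    main()
```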

Mastrogiacomo said the wrapping approach Astrix proposes is solid. “It enforces just-in-time secrets retrieval, minimizing static exposure,” he said. But it’s not a complete solution to the problem, he added.

[One] single tool alone won’t solve this. The real progress will come from governance — defining ownership for every server, enforcing credential rotation policies, and integrating runtime attestation into identity workflows. We need a full identity lifecycle for AI infrastructure — not just better wrappers for secrets.

Rosario Mastrogiacomo

See related post: The Postmark MCP server attack: 5 key takeaways
