
How AI agents can weaponize IDEs

Research shows that AI coding assistants can tap integrated development environments to become privileged insider threats.


Software teams have embraced AI coding assistants to help them develop software faster, but they could be courting a software supply chain disaster.

A Microsoft senior security researcher has uncovered a new attack chain that he’s calling IDEsaster. The method leverages features from the base IDE layer. “It impacts nearly all AI IDEs and coding assistants using the same base IDE, affecting millions of users,” the researcher, Ari Marzuk, wrote.

Jacob Krell, senior director for secure AI solutions and cybersecurity at Suzu Labs, said IDEsaster demonstrates that the base IDE itself has become part of the attack surface.

“Every AI coding assistant built on the same IDE platform inherits the same exposure. Prior disclosures targeted individual applications. This targets the shared foundation underneath all of them.”
Jacob Krell

Here’s what you need to know about how AI agents can weaponize IDEs — and what your application security (AppSec) team can do about it.

[ See webinar: Trust But Verify: Secure the AI You Build, Buy and Deploy ]

Weaponized IDEs are a big problem

IDEsaster represents a significant threat expansion because AI coding assistants sit directly inside the environments where software and infrastructure are created, said Rajeev Raghunarayan, head of go-to-market at Averlon.

“That effectively turns the developer environment into a control point for large parts of the software supply chain, including code, dependencies, and deployment configurations.”
Rajeev Raghunarayan

Brett Smith, a distinguished software developer at SAS, enumerated the scope of the problem.

“I have 3,000 developers. They all use an IDE of some type. I’d estimate that 85% of them use an IDE with AI capabilities. That’s 2,550 chances for an AI agent to exfiltrate secrets, data, and IP that I did not have to account for before.”
Brett Smith

And IDE-based attacks mean the threat from AI agents no longer stops at the development pipeline. “As AI tools in IDEs increasingly generate infrastructure definitions, deployment policies, and application code, automated changes can quickly propagate into production environments,” Smith said.

Stronger guardrails needed

Smith said organizations will need guardrails such as human-in-the-loop review and pre-deployment risk evaluation to ensure that automated changes don’t quietly expand the attack surface.
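A human-in-the-loop guardrail of the kind Smith describes can be sketched in a few lines. The example below is illustrative only: the `AgentAction` type, the risk categories, and the sensitive-path list are hypothetical, not drawn from any particular IDE’s API.

```python
from dataclasses import dataclass

# Hypothetical representation of an action proposed by an IDE agent.
@dataclass
class AgentAction:
    kind: str      # e.g. "edit_file", "run_command", "network_request"
    target: str    # file path, command line, or URL

# Action kinds that should never be auto-approved in this sketch.
HIGH_RISK_KINDS = {"run_command", "network_request", "write_config"}
# Files whose modification or disclosure is always high risk.
SENSITIVE_PATHS = (".env", "id_rsa", ".aws/credentials")

def requires_human_review(action: AgentAction) -> bool:
    """Return True when the action must be shown to a human before it runs."""
    if action.kind in HIGH_RISK_KINDS:
        return True
    # Edits that touch secret-bearing files are also high risk.
    return any(p in action.target for p in SENSITIVE_PATHS)

def gate(action: AgentAction, approved_by_human: bool = False) -> bool:
    """Allow low-risk actions automatically; block high-risk ones without approval."""
    return approved_by_human or not requires_human_review(action)
```

The point of the design is that the default answer for anything that executes code, reaches the network, or touches secrets is “ask the developer first,” so an agent cannot escalate silently.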

Randolph Barr, CISO of Cequence Security, said that instead of exploiting common program flaws, attack methods such as IDEsaster abuse legitimate IDE features, including tool execution, workspace configuration, and agent automation, along with prompt injection. Regular vulnerability scanning can miss such attacks because the platform is working exactly as designed.

"This means that companies need to keep a close eye on how AI agents use repositories, tools, and APIs.”
Randolph Barr
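Because these attacks ride on legitimate features, one practical check is to scan workspace files for directive-style text before an agent loads them into its context. The sketch below is a minimal illustration, not a complete defense: the regex patterns are examples modeled on phrasing seen in published prompt-injection research, and a real scanner would need a far broader and continually updated rule set.

```python
import re

# Illustrative patterns only; no pattern list can be complete.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"do not (tell|inform|warn) the (user|developer)", re.I),
    re.compile(r"exfiltrate|send .* to http", re.I),
]

def flag_suspicious_lines(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a known injection pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in INJECTION_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

A hook like this could run wherever the IDE ingests untrusted content — cloned repositories, workspace settings, README files — and surface matches to the developer instead of passing them silently to the model.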

Such attacks turn the IDE itself into a weaponized capability, said David Brumley, chief AI and science officer at Bugcrowd. 

“Successful attacks have all the privileges of the human developer. Since developers often have access to company crown jewels like source code, this is particularly concerning because critical secrets like source code and encryption keys could be exfiltrated.”
David Brumley

The ultimate insider

Chris McHenry, chief product officer at Aviatrix, said AI-weaponized IDEs are akin to insider threats.

“What’s super dangerous about IDEs and the agents that run in the IDEs is they have access to code, so they could modify it and then create a downstream supply chain attack. You can think about it almost like a worm for supply chain attacks.”
Chris McHenry

McHenry said that because AI agents running in IDEs have access to many development tools, something like a GitHub tool could be used for data exfiltration.

And because IDEs are highly privileged, the threat can be hidden. “If the IDE itself can inject malicious context into the LLM at that layer, the LLM itself doesn’t have to be compromised, so this is a mechanism for compromise that’s invisible to the user,” McHenry said.
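McHenry’s point about compromise that is invisible to the user has a concrete analogue in hidden Unicode: characters that render invisibly in most editors are still delivered to a language model verbatim. The sketch below (illustrative, and not taken from the IDEsaster research itself) detects and strips a few well-known invisible and bidirectional-control characters before file content reaches a model’s context.

```python
# Characters that render invisibly (or reorder text) in most editors.
# This list is illustrative, not exhaustive.
HIDDEN_CHARS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\u202e",  # right-to-left override (bidi control)
    "\ufeff",  # zero-width no-break space / BOM
}

def contains_hidden_text(s: str) -> bool:
    """True if the string carries characters a developer likely cannot see."""
    return any(ch in HIDDEN_CHARS for ch in s)

def sanitize(s: str) -> str:
    """Remove hidden characters before the text is placed in model context."""
    return "".join(ch for ch in s if ch not in HIDDEN_CHARS)
```

Running a filter like this at the boundary between the workspace and the model narrows one channel through which malicious context could be injected without the developer ever seeing it.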

Secure by Design for AI

Having identified the threat posed by IDEsaster, Microsoft’s Marzuk proposed a new security principle he calls “Secure for AI,” which extends Secure by Design principles to explicitly account for AI components. Using the Secure for AI principle, he said, systems must be designed and configured with explicit consideration for how existing and planned AI components can be used or misused.

Willy Leichter, CMO of PointGuard AI, said the idea behind Secure for AI “is important and overdue.”

“Secure for AI recognizes that systems must be evaluated, not just as traditional software, but as environments where AI can be manipulated, overtrusted, or granted unsafe access to powerful capabilities.”
Willy Leichter

Ensar Seker, CISO at SOCRadar, said Secure for AI is a useful evolution of Secure by Design thinking. 

“Development tools now have to assume that AI systems are active participants in the workflow, not just passive assistants. That means applying security controls around prompt integrity, plugin trust models, context isolation, and auditability of AI-generated changes.”
Ensar Seker

Secure for AI is a necessary evolution of architectural honesty, said Ram Varadarajan, CEO of Acalvio. “If we can’t verify the soul of the AI agent, we have to instead harden the environment it inhabits,” he said. “This is best done by leveraging the known model-based behaviors of AI attackers.” 

Suzu Labs’ Krell said Secure for AI aligns with the earlier warning from the U.S. Cybersecurity and Infrastructure Security Agency that AI is no exception to Secure by Design. “Marzuk’s contribution is making the gap specific. Existing features that were safe in a human-operated environment become unsafe when AI agents can interact with them autonomously,” he said.

“Vendors need to reassess every feature in their base IDE through the lens of what an autonomous agent could do with it.”
Jacob Krell

Bugcrowd’s Brumley said developer tools are written with the understanding that only humans would be using them — an assumption now seen to be false. 

“The [Secure for AI] principle should prompt every IDE feature to be evaluated from the perspective of a malicious AI acting on behalf of the developer, and how damage can be limited or audited.”
David Brumley

How real is the risk?

Roger Grimes, CISO advisor at KnowBe4, said IDEs have always been targets for exploitation, but exploits, even with AI, haven’t been common.

“I’m unaware of a single real-world attack involving both AI and an IDE. But theoretically, IDEs containing AI agents do pose an increased risk of abuse.”
Roger Grimes

Grimes recommended threat modeling for all programmers and defenders protecting development environments.

He added that mitigation begins with educating developers and IDE users about potential abuse. “You can’t be on the lookout and stop what you don’t know about,” he said. “So start with appropriate security-awareness training, particularly around AI abuse of IDEs, if that’s what you are worried about. Then implement protections and controls that minimize the abuse.”

The IDEsaster threat might be thought of as the ultimate phishing ruse, said Suzu Labs’ Krell. “In a phishing attack, the attacker manipulates the human into taking an action they should not take. With AI IDEs, developers are effectively social engineering themselves. They auto-approve agent actions, grant broad permissions, and trust output without scrutiny because it is faster.”

“The attack surface is not new. What AI tools change is the blast radius, because an autonomous agent operating with broad default permissions can trigger these features at scale without the developer ever approving a single action.”
Jacob Krell

It’s an old story, he said: build the capability first and retrofit the security. “That pattern has never aged well in software. Organizations need to treat AI IDE training with the same urgency they treat phishing awareness, because the underlying problem is the same,” Krell said.

Another hit to supply chain security

IDEsaster shows why AppSec teams must secure the entire agentic AI ecosystem. The good news is that the fundamentals of software supply chain security, such as least privilege and secrets management, still apply, said Dhaval Shah, senior director of product management at ReversingLabs. Comprehensive software bills of materials, along with AI-BOMs that enumerate AI-specific components, should be prerequisites for the procurement and deployment of agent-driven software, he said.

“The foundational concepts of trust, provenance, and dependency risk are identical.”
Dhaval Shah
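An AI-BOM entry of the kind Shah describes can be as simple as a structured record of each AI component and the privileges it has been granted. The field names below are illustrative, not a formal standard; a real inventory would follow an established schema such as CycloneDX.

```python
import json

def make_ai_bom_entry(name: str, component_type: str, version: str,
                      supplier: str, permissions: list[str]) -> dict:
    """Describe one AI component and the privileges it has been granted."""
    return {
        "name": name,
        "type": component_type,      # e.g. "coding-assistant", "model", "plugin"
        "version": version,
        "supplier": supplier,
        "permissions": sorted(permissions),  # what the agent may touch
    }

# Hypothetical inventory entry for an IDE-resident coding agent.
bom = {
    "components": [
        make_ai_bom_entry("example-ide-agent", "coding-assistant", "1.4.2",
                          "ExampleVendor", ["read:workspace", "exec:terminal"]),
    ]
}
print(json.dumps(bom, indent=2))
```

Even a minimal record like this makes the trust and provenance questions answerable: which agents are deployed, who supplied them, and what each one is permitted to do.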
