How AWS averted an AI coding supply chain disaster

Here are six lessons learned from the near-miss that was the Amazon Q Developer incident. Don't let luck be your security strategy.


Amazon Web Services recently averted a potential software supply chain disaster when it discovered that malicious code had been inserted into an open-source repository accessed by a generative AI-powered assistant widely used to supercharge the software development workflow inside a popular source code editor.

The AWS security team discovered that the AI assistant — the Amazon Q Developer extension for Visual Studio Code — had a GitHub token with excessive permissions in the configuration of CodeBuild, the AWS service used to compile source code, run tests, and produce software packages. “With that access token, the threat actor was able to commit malicious code into the extension’s open-source repository that was automatically included in a release,” AWS explained in a security bulletin.
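
AWS has not published the offending configuration, but the hygiene check behind this lesson is straightforward to automate. Below is a minimal sketch, assuming a classic GitHub personal access token (which reports its grants in the X-OAuth-Scopes response header); the GITHUB_TOKEN environment variable and the allowed-scope set are illustrative assumptions, not details from the bulletin.

```python
# Sketch: fail fast if a pipeline token carries broader scopes than it needs.
# Assumes a classic GitHub personal access token; fine-grained tokens do not
# return the X-OAuth-Scopes header and would need a different check.
import os
import requests

def audit_token_scopes(token: str, allowed_scopes: set) -> None:
    resp = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    granted = {s.strip() for s in resp.headers.get("X-OAuth-Scopes", "").split(",") if s.strip()}
    excessive = granted - allowed_scopes
    if excessive:
        raise RuntimeError(f"Token is over-scoped for this pipeline: {sorted(excessive)}")
    print(f"Token scopes look appropriately narrow: {sorted(granted)}")

if __name__ == "__main__":
    # Example policy: a build job that only needs access to one repository.
    audit_token_scopes(os.environ["GITHUB_TOKEN"], allowed_scopes={"repo"})
```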

After inspecting the malicious code, AWS Security determined that it had been distributed with the extension but failed to execute because of a syntax error, the bulletin noted. This prevented it from making changes to any services or customer environments.

Had the code executed, the damage to AWS users could have been disastrous. Here’s how the crisis was averted — and six key lessons about AI code security.


Amazon Q incident: Luck or designed failure?

Neil Carpenter, principal solution architect at Minimus, a maker of secure container images and tools for vulnerability management, compared the Q Developer incident to the 2020 SolarWinds attack. It shows, he said, that if attackers can compromise developers’ desktops, they can potentially move farther down the supply chain to insert code into the projects those developers are working on, a step that can lead to broad compromises of IT and OT systems. “Depending on the threat actor, this may result in the exfiltration of sensitive data, in ransomware and data-wiping incidents, and in the widespread disruption of business processes,” he said.

Ensar Seker, CISO of the threat intelligence company SOCRadar, said AWS was extremely fortunate that the malicious code failed to run.

AWS basically dodged a bullet here. The only thing standing between this attack and a full-blown incident was the attacker’s error or perhaps a deliberate kill switch in the payload.

Ensar Seker

A 404 Media report said the hacker behind the malicious code was seeking to expose what they called Amazon’s “AI security theater.”

Had the malicious prompt been formatted correctly, we’d likely be talking about a major disaster, Seker said. “If that code had run properly, it would have tried to delete everything — local data, cloud data, even the logs of its own actions. You can imagine the fallout. A developer could have lost their entire project files and environment, and any connected AWS accounts could be stripped of critical assets — storage, servers, user accounts — without warning,” he said.

If no backups existed, the potential damage could have included lost code, downtime, and permanent loss of critical data, Seker said. “Essentially, it was a near-factory reset of both the computer and the cloud account, a nightmare scenario for any individual or business relying on those resources,” he said.

Here are the six key lessons from the incident.

1. Prompt and thorough action helps avoid downstream problems

After discovering the compromise, Amazon immediately revoked and replaced the compromised credentials used in the attack. It also removed the malicious code from the codebase used by Q Developer and released a new version of the tool, and it boosted security for CodeBuild by adding protections against memory dumps within container builds using unprivileged mode.
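
For teams responding to a similar compromise in their own pipelines, the “revoke and replace” step can be scripted. The sketch below uses boto3 to deactivate and reissue IAM access keys; the ci-build-user identity is a hypothetical example, and real remediation would also rotate credentials held outside IAM, such as a GitHub token embedded in a build configuration.

```python
# Sketch: deactivate every access key on a pipeline identity, then issue a fresh one.
# "ci-build-user" is a hypothetical user name, not anything from the AWS bulletin.
import boto3

def rotate_access_keys(user_name: str) -> None:
    iam = boto3.client("iam")
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        iam.update_access_key(
            UserName=user_name,
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",
        )
        print(f"Deactivated {key['AccessKeyId']}")
    new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
    print(f"Issued replacement key {new_key['AccessKeyId']}; store the secret in a secrets manager")

if __name__ == "__main__":
    rotate_access_keys("ci-build-user")
```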

Swift action shouldn’t be limited to the target of an attack, cautioned Rosario Mastrogiacomo, chief strategy officer of Sphere Technology Solutions.

AWS took the right first steps by revoking credentials and releasing a fixed version, but customers still need to upgrade immediately and audit developer environments for excessive privileges.

Rosario Mastrogiacomo

2. AI fragility stems from the complexity and interconnectedness of systems

The issue with the Q Developer incident wasn’t the AI itself but the surrounding infrastructure, an improperly scoped GitHub token in the CodeBuild configuration, said Casey Ellis, founder of Bugcrowd.

It’s a reminder that AI systems are only as secure as the weakest link in their development and deployment pipelines. This underscores the importance of treating AI as part of a broader software ecosystem, where traditional cybersecurity concerns like supply chain vulnerabilities still apply.

Casey Ellis

Satyam Sinha, CEO and co-founder of Acuvity, explained that Amazon Q Developer relied on several connected parts: the VS Code extension, its build pipeline, credentials, and the code repository. "A single misconfigured GitHub token in that chain allowed an attacker to add malicious code to an official release," he said.

Because AI coding assistants often have deep access to files, credentials, and other systems, even a small operational mistake can quickly become a serious security problem.

Satyam Sinha

3. AI agents expand the supply chain attack surface

AI systems often operate autonomously and at scale, which means that a single vulnerability can have far-reaching consequences, Ellis said. “In this case, the compromised extension could have acted as a vector for a supply chain attack, distributing malicious code to countless users,” he said.

The AWS report on the vulnerability explains that the threat actors used a memory dump to extract the access token for the source code repository used to automate and execute builds, said Karen Walsh, CEO of Allegro Solutions. “Essentially, the threat actors committed malicious code into the open-source repository, and AWS removed the malicious code from the codebase.”

AI agents are applications that leverage open-source components that expand the software supply chain attack surface. Even with a new technology, malicious actors will rely on time-tested exploit methodologies.

Karen Walsh

4. Prompt injection attacks are amplified by AI agents

Acuvity's Sinha explained that Q Developer is an AI agent, an AI-powered coding assistant that turns human language into actions using tools such as the AWS CLI and local file commands. In this case, the malicious prompt told the AI to delete files, erase configurations, and remove AWS resources, he said. “With nearly a million installations, a successful attack could have triggered those destructive actions almost instantly across many environments,” Sinha said.

What the Amazon Q Developer incident shows is that when AI agents have broad access, compromising them can turn them into powerful tools for large-scale attacks, Sinha said.

Diana Kelley, CISO of Noma Security, said the incident demonstrates the reality of AI risk today.

Prompt injection is not a theoretical risk, it’s a reality. Indirect prompt injection attacks in an agentic AI system like this can trick the AI into executing unintended actions.

Diana Kelley

5. Treat AI extensions and developer tools as privileged software

Sphere Technology’s Mastrogiacomo recommended that security teams maintain a complete inventory of every extension and agent with system access, ensure that each one has a named human owner accountable for updates and incident response, and actively monitor for high-risk behaviors, such as mass file deletions or credential harvesting.
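
One concrete way to start that inventory, sketched below under the assumption that developer workstations expose the VS Code command-line interface, is to enumerate installed extensions and compare them against an approved list. The approved-extension set here is purely illustrative.

```python
# Sketch: inventory installed VS Code extensions against an approved list.
# The APPROVED set is an illustrative placeholder, not a recommended policy.
import subprocess

APPROVED = {"amazonwebservices.amazon-q-vscode"}  # hypothetical allow-list entry

def installed_extensions() -> list:
    # `code --list-extensions --show-versions` prints one "publisher.name@version" per line.
    result = subprocess.run(
        ["code", "--list-extensions", "--show-versions"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for ext in installed_extensions():
        name = ext.split("@", 1)[0]
        status = "approved" if name in APPROVED else "NEEDS REVIEW"
        print(f"{ext:60} {status}")
```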

He also advised organizations to lock down build pipelines with tightly scoped tokens, branch protections, mandatory code reviews, signed releases, binary analysis, and reproducible builds.
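
Branch protection, at least, is easy to verify mechanically. The sketch below, with a hypothetical organization, repository, and branch name, asks the GitHub REST API whether the release branch is protected.

```python
# Sketch: confirm that a release branch actually has branch protection enabled.
# The owner, repo, and branch values are illustrative placeholders.
import os
import requests

def branch_is_protected(owner: str, repo: str, branch: str, token: str) -> bool:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("protected"))

if __name__ == "__main__":
    ok = branch_is_protected("example-org", "example-extension", "main", os.environ["GITHUB_TOKEN"])
    print("main branch is protected" if ok else "WARNING: main branch is not protected")
```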

At runtime, he continued, permissions should be minimized through short-lived credentials, read-only developer profiles, allow-lists for API calls, and sandboxing that blocks destructive actions by default. He added that organizations must conduct regular access reviews, revoke unused credentials, and rehearse kill-switch playbooks with engineering and security operations.

Modern agents aren’t just text generators—they’re operators. Once an agent can invoke tools, it inherits the identity and entitlements of the environment it’s running in. Compromise the agent’s prompt path or update channel, and you can commandeer those entitlements. That’s why we argue AI agents must be governed as first-class identities with least privilege, not treated like passive IDE plugins.

Rosario Mastrogiacomo
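
The allow-list idea is the easiest of those runtime controls to prototype. A minimal sketch follows: a gate that an agent's tool-execution layer could consult before running a shell command. The permitted commands and blocked patterns are illustrative assumptions, not Amazon Q's actual guardrails.

```python
# Sketch: allow-list gate for commands an AI agent wants to execute.
# The allowed commands and blocked substrings are illustrative, not a real policy.
import shlex

ALLOWED_COMMANDS = {"git", "ls", "cat", "npm", "pytest"}           # read/build-only tools
BLOCKED_SUBSTRINGS = ("rm -rf", "aws ec2 terminate", "aws s3 rb")   # obviously destructive actions

def is_permitted(command: str) -> bool:
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        return False
    return not any(bad in command for bad in BLOCKED_SUBSTRINGS)

if __name__ == "__main__":
    for cmd in ("git status", "rm -rf ~/", "aws s3 rb s3://prod-bucket --force"):
        print(f"{cmd!r:45} -> {'allow' if is_permitted(cmd) else 'block'}")
```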

6. Make sure all AI tools are vetted and have access controls

As more agentic AI products emerge, and as businesses and individuals increasingly integrate them into sensitive environments, threat actors will find opportunities to hide malicious code in sneaky ways, said Anna Burkholder, a vulnerability researcher in the CERT division at Carnegie Mellon University’s Software Engineering Institute.

I don’t know that there is a cure-all answer to mitigate this risk, but part of the solution might be to understand that this threat exists, properly vet any AI application before it is incorporated into a sensitive environment, and impose clear access controls on it, such as ensuring that any code developed using an extension such as Amazon Q Developer is first run in a sandboxed or otherwise restricted environment.

Anna Burkholder
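
Burkholder's sandboxing suggestion can also be made concrete. The sketch below, which assumes Docker is available and uses an illustrative image and script path, runs a piece of assistant-generated code in a disposable container with no network access and a read-only filesystem before it is allowed anywhere near a real environment.

```python
# Sketch: run AI-generated code in a throwaway, network-isolated container.
# Assumes Docker is installed; the image, limits, and script path are illustrative.
import subprocess

def run_sandboxed(script_path: str) -> subprocess.CompletedProcess:
    return subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",      # no outbound access
            "--read-only",            # immutable container filesystem
            "--memory", "256m", "--cpus", "0.5",
            "-v", f"{script_path}:/sandbox/script.py:ro",
            "python:3.12-slim",
            "python", "/sandbox/script.py",
        ],
        capture_output=True, text=True, timeout=60,
    )

if __name__ == "__main__":
    result = run_sandboxed("/tmp/generated_snippet.py")  # hypothetical path to generated code
    print(result.stdout or result.stderr)
```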

Luck is not a sustainable security strategy

AWS was fortunate that the malicious code failed to execute due to a syntax error, Bugcrowd’s Ellis said.

This was essentially a near miss, and if the code had executed, the potential harm could have been catastrophic — ranging from data exfiltration to widespread compromise of AWS accounts. This highlights the need for rigorous code review and automated testing processes to catch such issues before they reach production.

Casey Ellis
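
One small example of the kind of automated gate Ellis describes, with an assumed source layout, is a pre-release check that refuses to package anything that does not even parse. It would not have identified the injected instructions as malicious, but it is the sort of mechanical test that catches broken releases before they ship.

```python
# Sketch: pre-release gate that fails if any Python file in the tree has a syntax error.
# The "src" directory is an assumed layout for illustration.
import pathlib
import py_compile
import sys

def check_tree(root: str) -> int:
    failures = 0
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError as err:
            failures += 1
            print(f"SYNTAX ERROR in {path}: {err.msg}")
    return failures

if __name__ == "__main__":
    sys.exit(1 if check_tree("src") else 0)
```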