AppSec & Supply Chain Security | August 26, 2025

How AWS averted an AI supply chain disaster

Here are six lessons learned from the near-miss that was the Amazon Q Developer incident. Don't let luck be your security strategy.

John P. Mello Jr., freelance technology writer

Amazon Web Services recently averted a potential software supply chain disaster when it discovered that malicious code had been inserted into an open-source repository accessed by a generative AI-powered assistant widely used to supercharge the software development workflow inside a popular source code editor.

The AWS security team discovered that the AI assistant — the Amazon Q Developer Extension for Visual Studio Code — had a GitHub token with excessive permissions in the configuration of CodeBuild, a service used to compile source code, run tests, and produce software packages. “With that access token, the threat actor was able to commit malicious code into the extension’s open-source repository that was automatically included in a release,” AWS explained in a security bulletin.
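To make the "excessive permissions" problem concrete: GitHub classic personal access tokens report their granted scopes in the `X-OAuth-Scopes` response header, so teams can audit whether a CI token holds write access it doesn't need. Below is a minimal, illustrative sketch of that kind of check — the scope list and the sample header are assumptions for illustration, not taken from the AWS bulletin.

```python
# Minimal sketch: flag overly broad scopes on a GitHub classic personal
# access token. Classic tokens echo their granted scopes in the
# "X-OAuth-Scopes" response header; the write-scope list here is
# illustrative, not exhaustive.
WRITE_SCOPES = {"repo", "workflow", "admin:org", "write:packages", "delete_repo"}

def excessive_scopes(scopes_header: str) -> set[str]:
    """Return the granted scopes that allow writing to code or releases."""
    granted = {s.strip() for s in scopes_header.split(",") if s.strip()}
    return granted & WRITE_SCOPES

# A build token that only needs to read metadata should come back empty here.
print(sorted(excessive_scopes("repo, read:org, workflow")))  # ['repo', 'workflow']
```

A token that can commit to the release branch — as in this incident — would trip exactly this kind of audit.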

After inspecting the code, the bulletin noted, AWS Security determined that it was distributed with the extension but failed to execute due to a syntax error, which prevented it from making changes to any services or customer environments.

Had the code executed, the damage to AWS users could have been disastrous. Here’s how the crisis was averted — and six key lessons about AI code security.


Amazon Q incident: Luck or designed failure?

Neil Carpenter, principal solution architect at Minimus, a maker of secure container images and vulnerability management tools, compared the Q Developer incident to the 2020 SolarWinds attack. It shows, he said, that if attackers can compromise developers’ desktops, they can move farther down the supply chain and insert code into the projects those developers are working on, a step that can lead to broad compromises of IT and OT systems. “Depending on the threat actor, this may result in the exfiltration of sensitive data, in ransomware and data-wiping incidents, and in the widespread disruption of business processes,” he said.

Ensar Seker, CISO of the threat intelligence company SOCRadar, said AWS was extremely fortunate that the malicious code failed to run.

AWS basically dodged a bullet here. The only thing standing between this attack and a full-blown incident was the attacker’s error or perhaps a deliberate kill switch in the payload.

Ensar Seker

A 404 Media report said the hacker behind the malicious code was seeking to expose Amazon's AI security theater.

Had the malicious prompt been formatted correctly, we’d likely be talking about a major disaster, Seker said. “If that code had run properly, it would have tried to delete everything — local data, cloud data, even the logs of its own actions. You can imagine the fallout. A developer could have lost their entire project files and environment, and any connected AWS accounts could be stripped of critical assets — storage, servers, user accounts — without warning,” he said.

If no backups existed, the potential damage could have included lost code, downtime, and permanent loss of critical data, Seker said. “Essentially, it was a near-factory reset of both the computer and the cloud account, a nightmare scenario for any individual or business relying on those resources,” he said.

Here are the six key lessons from the incident.

1. Prompt and thorough action helps avoid downstream problems

After discovering the compromise, Amazon immediately revoked and replaced the compromised credentials used in the attack. It also removed the malicious code from the codebase used by Q Developer and released a new version of the tool. In addition, it boosted security for CodeBuild, adding protections against memory dumps within container builds by using unprivileged mode.
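Teams can apply the same hardening proactively by linting their own build configurations. In CodeBuild's project API, the environment's `privilegedMode` flag is what grants a build elevated (Docker-in-Docker) access; the sketch below shows the general shape of such a lint over config dictionaries. The project names and structure are made up for illustration — a real audit would pull live project definitions via the AWS API.

```python
# Sketch of a pipeline-config lint: flag CodeBuild-style build projects
# whose environment enables privileged mode, which widens what a
# compromised build can access. Field names follow CodeBuild's
# create-project API; the sample projects are hypothetical.
def privileged_projects(projects: list[dict]) -> list[str]:
    return [
        p["name"]
        for p in projects
        if p.get("environment", {}).get("privilegedMode", False)
    ]

projects = [
    {"name": "extension-release", "environment": {"privilegedMode": True}},
    {"name": "unit-tests", "environment": {"privilegedMode": False}},
]
print(privileged_projects(projects))  # ['extension-release']
```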

Swift action shouldn’t be limited to the target of an attack, cautioned Rosario Mastrogiacomo, chief strategy officer of Sphere Technology Solutions.

AWS took the right first steps by revoking credentials and releasing a fixed version, but customers still need to upgrade immediately and audit developer environments for excessive privileges.

Rosario Mastrogiacomo

2. AI fragility stems from the complexity and interconnectedness of systems

The issue with the Q Developer incident wasn’t the AI itself, but the surrounding infrastructure — an improperly scoped GitHub token in the CodeBuild configuration — said Casey Ellis, founder of Bugcrowd.

It’s a reminder that AI systems are only as secure as the weakest link in their development and deployment pipelines. This underscores the importance of treating AI as part of a broader software ecosystem, where traditional cybersecurity concerns like supply chain vulnerabilities still apply.

Casey Ellis

Satyam Sinha, CEO and co-founder of Acuvity, explained that Amazon Q Developer relied on several connected parts: the VS Code extension, its build pipeline, credentials, and the code repository. "A single misconfigured GitHub token in that chain allowed an attacker to add malicious code to an official release," he said.

Because AI coding assistants often have deep access to files, credentials, and other systems, even a small operational mistake can quickly become a serious security problem.

Satyam Sinha

3. AI agents expand the supply chain attack surface

AI systems often operate autonomously and at scale, which means that a single vulnerability can have far-reaching consequences, Ellis said. “In this case, the compromised extension could have acted as a vector for a supply chain attack, distributing malicious code to countless users,” he said.

The AWS report on the vulnerability explains that threat actors obtained an access token through a memory dump, extracting the source code repository credentials used to automate and execute builds, said Karen Walsh, CEO of Allegro Solutions. “Essentially, the threat actors committed malicious code into the open-source repository, and AWS removed the malicious code from the codebase.”

AI agents are applications that leverage open-source components that expand the software supply chain attack surface. Even with a new technology, malicious actors will rely on time-tested exploit methodologies.

Karen Walsh

4. Prompt injection attacks are amplified by AI agents

Acuvity's Sinha explained that Q Developer is an AI agent, an AI-powered coding assistant that turns human language into actions using tools such as the AWS CLI and local file commands. In this case, the malicious prompt told the AI to delete files, erase configurations, and remove AWS resources, he said. “With nearly a million installations, a successful attack could have triggered those destructive actions almost instantly across many environments,” Sinha said.

What the Amazon Q Developer incident shows is that when AI agents have broad access, compromising them can turn them into powerful tools for large-scale attacks, Sinha said.
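One common mitigation for the attack pattern Sinha describes is to put an allow-list gate between the agent and its tools: the model may propose shell commands, but only explicitly permitted binaries run, and destructive tokens are refused outright. The sketch below is a deliberately simple illustration of that idea — the allow-list and deny-list are assumptions, and string checks alone are not sufficient in production, where sandboxing is also needed.

```python
# Minimal sketch of an allow-list gate in front of an agent's shell tool.
# Only explicitly permitted binaries may run, and known-destructive tokens
# are refused anywhere in the command. Illustrative only: real agent
# frameworks pair this with sandboxing, not string checks alone.
import shlex

ALLOWED_BINARIES = {"ls", "cat", "git", "grep"}
DENIED_TOKENS = {"rm", "dd", "mkfs", "shutdown", "aws"}  # e.g., block cloud mutations

def permit(command: str) -> bool:
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWED_BINARIES:
        return False
    return not any(t in DENIED_TOKENS for t in tokens)

print(permit("git status"))                        # True
print(permit("rm -rf ~ && aws s3 rb s3://bucket"))  # False: not an allowed binary
```

Under this model, the Q Developer payload's "delete everything" instructions would have been refused at the tool boundary regardless of whether the prompt parsed correctly.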

Diana Kelley, CISO of Noma Security, said the incident demonstrates the reality of AI risk today.

Prompt injection is not a theoretical risk, it’s a reality. Indirect prompt injection attacks in an agentic AI system like this can trick the AI into executing unintended actions.

Diana Kelley

5. Treat AI extensions and developer tools as privileged software

Sphere Technology’s Mastrogiacomo recommended that security teams maintain a complete inventory of every extension and agent with system access, ensure that each one has a named human owner accountable for updates and incident response, and actively monitor for high-risk behaviors, such as mass file deletions or credential harvesting.

He also advised organizations to lock down build pipelines with tightly scoped tokens, branch protections, mandatory code reviews, signed releases, and binary analysis and reproducible builds.
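The "signed releases" piece of that advice has a simple core: before an artifact is installed, its digest is compared against the value published with the release, so a build-time injection changes the digest and fails the check. Below is a minimal sketch of that integrity check — the artifact bytes and version string are illustrative, and real pipelines would verify a cryptographic signature over the digest as well.

```python
# Sketch of a release-integrity check: compare an artifact's SHA-256
# digest against the value published with the signed release. Any code
# injected after the digest was published changes the hash and fails.
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"extension-v1.2.3 contents"           # illustrative payload
published = hashlib.sha256(artifact).hexdigest()  # normally ships with the release

print(verify_artifact(artifact, published))                # True
print(verify_artifact(artifact + b"injected", published))  # False
```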

At runtime, he continued, permissions should be minimized through short-lived credentials, read-only developer profiles, allow-lists for API calls, and sandboxing that blocks destructive actions by default. He added that organizations must conduct regular access reviews, revoke unused credentials, and rehearse kill-switch playbooks with engineering and security operations.

Modern agents aren’t just text generators—they’re operators. Once an agent can invoke tools, it inherits the identity and entitlements of the environment it’s running in. Compromise the agent’s prompt path or update channel, and you can commandeer those entitlements. That’s why we argue AI agents must be governed as first-class identities with least privilege, not treated like passive IDE plugins.

Rosario Mastrogiacomo

6. Make sure all AI tools are vetted and have access controls

As more agentic AI products emerge, and as businesses and individuals increasingly integrate them into sensitive environments, threat actors will find opportunities to hide malicious code in sneaky ways, said Anna Burkholder, a vulnerability researcher in the CERT division at Carnegie Mellon University’s Software Engineering Institute.

I don’t know that there is a cure-all answer to mitigate this risk, but part of the solution might be to understand that this threat exists, properly vet any AI application before it is incorporated into a sensitive environment, and impose clear access controls on it, such as ensuring that any code developed using an extension such as Amazon Q Developer is first run in a sandboxed or otherwise restricted environment.

Anna Burkholder
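Burkholder's suggestion — running code produced by an AI extension in a restricted environment first — can start as simply as executing it in a throwaway working directory with a hard timeout. The sketch below shows that minimal shape; it limits blast radius but is not real isolation, and a production setup would add containers, seccomp, or similar OS-level controls.

```python
# Sketch of running untrusted, AI-generated code in a throwaway working
# directory with a hard timeout. NOT real isolation: production setups
# would add containers, seccomp filters, or jails on top of this.
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    with tempfile.TemporaryDirectory() as workdir:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: Python isolated mode
            cwd=workdir,                          # scratch dir, deleted after
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout.strip()

print(run_untrusted("print('hello from the sandbox')"))
```

Even this crude gate would have forced the Q Developer payload's file-deletion attempts into a disposable directory rather than a developer's real environment.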

Luck is not a sustainable security strategy

AWS was fortunate that the malicious code failed to execute due to a syntax error, Bugcrowd’s Ellis said.

This was essentially a near miss, and if the code had executed, the potential harm could have been catastrophic — ranging from data exfiltration to widespread compromise of AWS accounts. This highlights the need for rigorous code review and automated testing processes to catch such issues before they reach production.

Casey Ellis

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
