AppSec & Supply Chain Security | April 8, 2026

Claude Mythos: Get your AppSec game on

Anthropic's new AI is a 'step change' for exposing software flaws — but also ramps up exploits. Are you ready?

Ericka Chickowski, freelance writer

Anthropic has provided a frothy couple of weeks for teams following the AI space. First came a leak of marketing materials for the company’s impending release of its most advanced model to date, Claude Mythos (code name: Capybara). That was followed almost immediately by the exposure of the source code for Claude Code. Much of the coverage has felt a bit like TMZ for tech — light on details and heavy on hot takes. 

Some conspiracy-minded tech folk even wondered whether the leaks were intentional, a bid for Mythos publicity. Most security strategists, however, were more interested in the substance of what was released — and what it means for security road maps.

For application security (AppSec) veterans, Mythos looms as a big boost in the acceleration of vulnerability discovery — and exploit development. In confirming the leak, Anthropic told Fortune that Mythos will be a “step change” in reasoning and cybersecurity research capabilities. 

Anthropic’s leaked blog post said:

“Although Mythos is currently far ahead of any other AI models in cyber capabilities, it presages an upcoming wave of models that can exploit vulnerabilities in ways that outpace the efforts of defenders.” 

Anthropic has now launched Claude Mythos Preview to select partners as part of a responsible disclosure process, which includes Project Glasswing, "an effort to use Mythos Preview to help secure the world’s most critical software, and to prepare the industry for the practices we all will need to adopt to keep ahead of cyberattackers," the company said.

Here’s what you need to know about Claude Mythos — and what it portends for AppSec teams.

[ See webinar: Stop Trusting Packages — Start Verifying | See also: Mythos FAQ]

Vulnpocalypse: What it is — and why it matters

The Anthropic leaks sparked concern among researchers at the recent [un]prompted AI security conference. Heather Adkins, vice president of security engineering at Google, was among those alarmed about “something close to a cataclysmic increase” in vulnerability discovery and disclosure. John “Four” Flynn, vice president of security and privacy at Google DeepMind, said that what he dubbed the “vulnpocalypse” has already begun.

Also at [un]prompted, Adam Laurie, CISO at Alpitronic, demonstrated how he used Claude to automate a hardware hacking lab — and own an LPC chip in seven minutes. Adam Křivka, AI security engineer at AISLE, showcased an AI system that discovered 12 zero-day vulnerabilities in the OpenSSL codebase. And Sergej Epp, CISO at Sysdig, demoed an AI-assisted attack that moved from stolen credentials to full administrator access in a target AWS environment in just eight minutes.

Anthropic researchers were also on the bill, giving attendees a peek at what Mythos would bring. Nicolas Carlini, an Anthropic research scientist, said new state-of-the-art AI models are finding zero-days even in software projects that have been extensively tested for decades.

“LLMs can autonomously, and without fancy scaffolding, find and exploit zero-days in critical software. And they are getting good scarily fast. These new capabilities will alter the threat landscape and require [that] we rethink security in the coming years.”
—Nicolas Carlini

In short, seasoned security researchers and big thinkers say we are on the precipice of a huge shakeup in how vulnerabilities are found, exploited, and remediated.

AI is shifting into high gear

Security researcher and software developer Thomas Ptacek wrote in a think piece recently, Vulnerability Research Is Cooked:

“You can’t design a better problem for an LLM agent than exploitation research. Vulnerabilities are found by pattern-matching bug classes and constraint-solving for reachability and exploitability. Precisely the implicit search problems that LLMs are most gifted at solving. Agents are uncannily skilled at software development, and vulnerabilities are at the apex of that skill.”
—Thomas Ptacek
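Ptacek’s point about “pattern-matching bug classes” can be made concrete with a deliberately crude sketch. The regex and the sample C snippet below are invented for illustration; flagging unbounded C string calls is the static-analysis equivalent of a single memorized pattern — the kind of check that LLM agents generalize far beyond fixed rules:

```python
import re

# Toy bug-class matcher: flag calls to unbounded C string functions.
# The function list and sample code are illustrative assumptions only.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\(")

def flag_risky_lines(c_source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that contain unbounded string calls."""
    return [
        (n, line.strip())
        for n, line in enumerate(c_source.splitlines(), start=1)
        if RISKY_CALLS.search(line)
    ]

sample = """
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* no bounds check */
}
"""
print(flag_risky_lines(sample))
```

A regex like this catches only one memorized surface pattern; the argument in the quote is that LLM agents perform the same kind of matching over semantic bug classes, plus the reachability reasoning a regex cannot do.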

Phil Venables, a former Google CISO and now a partner at Ballistic Ventures, said that a year ago he had expected AI to impact cybersecurity only incrementally. Now he thinks the negative impacts will be bigger and more immediate. Nonetheless, he expects an even larger positive impact as defenses are improved by applying AI models and agentic capabilities to automated vulnerability remediation. 

“I am short-term pessimistic but wildly long-term optimistic.”
—Phil Venables

Others were also optimistic. DeepMind’s Flynn told the [un]prompted gathering that tools such as Google’s Code Mender could turn back the vulnpocalypse. Code Mender is an autonomous agent designed to debug and fix complex vulnerabilities, and it is just one example of the automated remediation tools that defenders expect to augment the AppSec stack soon.

Such defensive tools will help, wrote Marcus Hutchins, principal threat researcher for Expel. But even more effective may be the economics of finding and fixing bugs.

“Defenders are the ones with all the resources. They’re the ones building multi-billion dollar AI models specifically for auditing software, which criminals can’t even come close to finding the funding to build.”
—Marcus Hutchins

The agentic software factory is near

The Anthropic leaks also highlighted that the agentic AI attack surface is large and growing larger. AI security researcher Jiten Oswal wrote that the leaked code included multiple feature flags.

“The leak unveiled that Anthropic is sitting on a treasure trove of unreleased, fully-built agentic features.”
—Jiten Oswal

It all points to the next generation of large language models being purpose-built for agentic action, and Nipun Gupta, founder of the agentic AI security firm Optimus Labs, thinks the agentic software factory can’t be too far off.

“Which also means your agents become a new attack surface when they have so much capability, when they have so much to do with not just the setup and the builder collaboration, but also making and taking actions on your behalf. So your software supply chain is at much, much greater risk.”
—Nipun Gupta

Challenge your assumptions

The spread of agentic action in development pipelines makes many security road map assumptions nonviable, said Chris Hughes of ResilientCyber in a recent post. 

“The human-in-the-loop is not functioning as a meaningful safety control. It is a formality that users power through to maintain their workflow.”
—Chris Hughes

Advanced models and agent autonomy are going to open up whole new classes of risks, Gupta said, especially in organizations that have granted agents the same level of access to systems that an experienced red teamer might have.

“We used to compromise the machine and install a lot of these products that would allow us to maintain persistent access to the victim’s machine. Now we don’t need to because agents have already done that. So all I need to do is have a prompt injection, compromise the agent, and then I have persistent access forever.”
—Nipun Gupta
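One way to blunt the scenario Gupta describes is a deterministic gate between the agent and its tools. The sketch below is hypothetical — the tool names, policy shape, and ToolGate class are invented, not from any real agent framework — but it shows the core idea: an injected prompt cannot invoke a capability the policy never granted, so compromising the agent does not yield open-ended persistence.

```python
# Hypothetical policy: tools the agent may call, with per-session budgets.
# Persistence-granting capabilities are deliberately absent.
ALLOWED_TOOLS = {
    "read_file": {"max_calls": 50},
    "run_tests": {"max_calls": 10},
    # not granted: "write_ssh_key", "install_package", "open_socket"
}

class AgentActionError(Exception):
    pass

class ToolGate:
    """Deterministic boundary between an agent and its tool implementations."""

    def __init__(self, policy: dict):
        self.policy = policy
        self.calls: dict[str, int] = {}

    def invoke(self, tool: str, fn, *args):
        if tool not in self.policy:
            raise AgentActionError(f"tool {tool!r} not in policy")
        used = self.calls.get(tool, 0)
        if used >= self.policy[tool]["max_calls"]:
            raise AgentActionError(f"tool {tool!r} exceeded call budget")
        self.calls[tool] = used + 1
        return fn(*args)

gate = ToolGate(ALLOWED_TOOLS)
print(gate.invoke("read_file", lambda p: f"contents of {p}", "README.md"))
try:
    gate.invoke("install_package", lambda n: None, "backdoor")
except AgentActionError as e:
    print("blocked:", e)
```

The design choice is that the gate, not the agent, holds the policy: even a fully compromised prompt context can only spend the budgets it was given.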

Observability and visibility matter more than ever

Ultimately, what’s needed is stronger security fundamentals. Applied to agentic environments, the familiar patterns still hold: establish strong visibility, institute controls that restrict agentic privileges or actions, and build layered security mechanisms that backstop one another.

Dhaval Shah, senior director of product management for ReversingLabs, said security leads should be thinking about this through the lens of zero trust and deep artifact assessment. 

“They need to accept that they cannot rely entirely on preventative scanning for AI agents. Because the inputs and outputs are natural language and highly dynamic, signature-based detection will fail.”
—Dhaval Shah

ResilientCyber’s Hughes said that no matter the agentic use case, organizations should be doubling down on visibility and observability to understand where agents exist within the infrastructure and what they’re authorized to do. From there, they should institute a mix of both deterministic and probabilistic controls.

“Build hard boundaries, layer deterministic and probabilistic controls, invest in runtime visibility and treat agent permissions as an infrastructure problem, not a user behavior problem. We need to build security programs for how humans actually interact with them, not how we wish they would.”
—Chris Hughes
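Hughes’s layering of deterministic and probabilistic controls can be sketched in a few lines. Everything here is an illustrative assumption, not a product feature: the hard boundary rejects disallowed actions outright, while the probabilistic layer scores how far a session’s mix of allowed actions drifts from the agent’s historical baseline.

```python
from collections import Counter

# Hypothetical action allowlist and historical baseline for one agent.
ALLOWLIST = {"read_file", "run_tests", "open_pr"}

def hard_boundary(action: str) -> bool:
    """Deterministic layer: reject anything outside the allowlist."""
    return action in ALLOWLIST

def anomaly_score(actions: list[str], baseline: dict[str, float]) -> float:
    """Probabilistic layer: L1 distance (0..2) between the observed
    action distribution and the agent's historical baseline."""
    counts = Counter(actions)
    total = len(actions) or 1
    observed = {a: c / total for a, c in counts.items()}
    keys = set(observed) | set(baseline)
    return sum(abs(observed.get(k, 0.0) - baseline.get(k, 0.0)) for k in keys)

baseline = {"read_file": 0.7, "run_tests": 0.25, "open_pr": 0.05}
session = ["read_file"] * 2 + ["open_pr"] * 8   # allowed, but very unusual
assert all(hard_boundary(a) for a in session)    # layer 1 passes everything
print(round(anomaly_score(session, baseline), 2))  # layer 2 flags the drift
```

The point of the layering: every action in the session is individually allowed, so the deterministic control alone sees nothing wrong; only the distribution-level check surfaces the anomaly.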

FAQ: Mythos AI and What It Means

What is Claude Mythos? Anthropic's most advanced AI model to date. It was not explicitly trained for cybersecurity, but its gains in reasoning and coding give it the ability to find and exploit software vulnerabilities at a level that surpasses all but the most skilled human researchers.

How did we first learn about it? It leaked. In late March 2026, Anthropic accidentally made draft marketing materials publicly accessible, followed days later by the unintended exposure of the Claude Code source code via an npm packaging error. Anthropic has since formally announced Mythos through Project Glasswing.

What is the "vulnpocalypse"? A term describing the inflection point where AI can discover and exploit zero-days faster than defenders can patch them. Google's Heather Adkins called it "something close to a cataclysmic increase" in vulnerability discovery. Anthropic's own Mythos research suggests that point has arrived.

What has Mythos already found? Thousands of previously unknown, high-severity vulnerabilities across every major operating system and web browser, including a bug in OpenBSD that had gone undetected for 27 years. Anthropic researcher Nicolas Carlini said he found more bugs in a few weeks with Mythos than in the rest of his career combined.

Can Mythos chain vulnerabilities to build exploits? Yes — and that's what makes it particularly significant. It can combine multiple low-risk flaws into sophisticated attack chains, changing both the speed and complexity of what automated offensive tooling can produce.

What is Project Glasswing? Anthropic's controlled initiative to put Mythos Preview to work for defenders before equivalent capabilities reach adversaries. Launch partners include AWS, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, NVIDIA, and Palo Alto Networks, plus roughly 40 additional organizations. Anthropic is backing the effort with $100 million in usage credits and $4 million in donations to open-source security organizations.

Why not release Mythos publicly? Because the same capabilities that help defenders find vulnerabilities help attackers exploit them. Anthropic concluded the model is too dangerous for general release until the industry has time to act on what it surfaces.

What should AppSec teams do now? Treat Mythos as a planning event, not a news item. Audit your software inventory, prioritize patching of high-severity known vulnerabilities, invest in behavioral detection tools that go beyond signature-based scanning, and track Project Glasswing's coordinated disclosures as they are published — that is where Mythos-discovered vulnerabilities will surface first.
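The patch-prioritization step in that answer can be sketched as a simple triage sort. The records and field names below are invented for illustration; a real inventory would draw CVSS scores and known-exploited status from vulnerability feeds.

```python
# Hypothetical findings from a software inventory scan (all records invented).
findings = [
    {"component": "libfoo", "cve": "CVE-2026-0001", "cvss": 9.8, "known_exploited": True},
    {"component": "barjs",  "cve": "CVE-2026-0002", "cvss": 5.3, "known_exploited": False},
    {"component": "bazpy",  "cve": "CVE-2026-0003", "cvss": 8.1, "known_exploited": False},
]

def patch_order(items):
    # Known-exploited issues first, then by descending CVSS score.
    return sorted(items, key=lambda f: (not f["known_exploited"], -f["cvss"]))

for f in patch_order(findings):
    print(f["cve"], f["component"])
```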

Learn how RL's free Spectra Assure Community can help your development and AppSec teams get deep insights into your software supply chain via binary analysis.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: AppSec & Supply Chain Security
