RL Blog

AppSec & Supply Chain Security | April 23, 2026

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Jaikumar Vijayan, Freelance technology journalist

The sheer speed and scale of AI-generated software is overwhelming many of the security teams tasked with assessing software packages for things such as vulnerabilities and logic flaws — creating what many see as a dangerous and growing imbalance between code creation and security validation. 

When QA and security teams can't keep up, their organizations are left exposed to mounting technical debt, heightened supply chain risk, and a greater likelihood of vulnerabilities reaching production environments. 

At the same time, the expanding use of AI to hunt for vulnerabilities — an activity engaged in by both researchers and adversaries — and the emergence of next-generation tools such as Claude Mythos to accelerate the hunt are compounding the issue by forcing security teams to contend with far more flaws than they can realistically remediate.

Here’s what you need to know about the imbalance — and what you can do about it, including leveraging AI to fight AI.

[ See webinar: Stop Trusting Packages. Start Verifying Them. ]

A flood of vulnerabilities

Security experts observe that even well-resourced teams with mature processes are caught in a bottleneck and have had to shift their focus from vulnerability discovery to triage and remediation. 

The situation is particularly acute in the open-source ecosystem. One effect of the deluge of newly found vulnerabilities on open source was seen when, effective March 27, HackerOne paused new submissions to its Internet Bug Bounty program, citing the difficulty maintainers have validating and fixing so many. The cURL Project took the same step in January because AI-generated submissions had overwhelmed its security team.

In cloud environments, human-driven vulnerability remediation has become unsustainable. A recent Sysdig survey found what appears to be a plateau in organizations' ability to remediate critical and high-severity vulnerabilities in their cloud environments, despite mature tools, processes, and proper prioritization techniques.

The report said that AI is enabling proofs of concept and vulnerability exploits at speeds faster than humans can respond:

“We must face an uncomfortable truth: Organizations have optimized human workflows as far as they can, but have reached a vulnerability ceiling despite mature processes. The problem isn’t from a lack of effort but a shift in the battlefield.”
—Sysdig survey report

AI is eating development

Tools such as ChatGPT and Claude and coding-focused platforms such as GitHub Copilot and Amazon CodeWhisperer are enabling everyone — from experienced developers and software engineers to so-called vibe coders — to rapidly generate functional code with minimal oversight — or no oversight, as the prevalence of what is being referred to as shadow AI suggests. Such activity happens outside formal software development lifecycles and bypasses established security reviews, code repositories, and governance controls.

How widespread is the usage of AI coding by developers? Sonar Source reported earlier this year that its survey of about 1,150 software developers found that 72% were using AI tools on a daily basis to write code. Respondents said AI is currently generating 42% of their code and that they expect that percentage to increase by 50% by 2027. 

Developers, the report added, are using AI to build prototypes, to develop production-grade software for internal use, for customer-facing applications, and in mission-critical environments. And although nearly all respondents (96%) expressed doubts about the functional correctness of AI-generated code, only 48% review their code before committing it to production.

Incorrect functionality isn’t the only problem. Tests that Veracode conducted last year on AI-generated code across Java, Python, C#, and JavaScript environments showed that AI models introduced a risky vulnerability in nearly half the tests.
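The Veracode report doesn't publish its test harness, but the class of flaw it describes is familiar. As a hedged illustration only (not drawn from the tests themselves), here is one of the most common risky patterns AI assistants emit — string interpolation into SQL — next to the parameterized version a reviewer should insist on:

```python
import sqlite3

# Minimal in-memory database to demonstrate the difference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.executemany("INSERT INTO users (username) VALUES (?)",
                 [("alice",), ("bob",)])

def find_user_unsafe(username: str):
    # Pattern assistants frequently suggest: interpolating input into SQL.
    # A username like "x' OR '1'='1" matches every row (SQL injection).
    query = f"SELECT username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(username: str):
    # Parameterized query: the driver binds the value, closing the hole.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "x' OR '1'='1"
leaked = find_user_unsafe(payload)   # every user in the table
blocked = find_user_safe(payload)    # no rows: payload is just a string
```

Both functions "work" on happy-path input, which is exactly why this kind of flaw sails through a cursory review of AI-generated code.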

Randolph Barr, CISO of Cequence Security, said the game has changed for application security (AppSec), even though the principles of secure development haven’t really changed. Practices such as “shift left,” threat modeling, and secure code review are all still very relevant.

“[AI-assisted] coding just blew up the throughput. A developer using Copilot or Cursor can produce in an afternoon what used to take a week. Our security review processes were never built for that pace.”
—Randolph Barr

He said reviewers can be lulled into trusting AI-generated code that looks clean and confident and not realize that it’s just wrong because there are no obvious red flags. And developers are now more prone to shipping logic they haven’t fully reasoned through because they accepted a suggestion rather than wrote it. 

Barr is concerned by another big issue: AI doesn’t know an organization’s systems as well as a human developer does. It knows public patterns, but it doesn’t know, for instance, an organization’s specific tenant-isolation model or authorization boundaries — and that gap between “generically correct” and “correct for our architecture” is exactly where security problems live, he said.
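The gap Barr describes can be made concrete with a small, hypothetical sketch (all names and the data model here are illustrative, not from any real system): a by-primary-key lookup is "generically correct" and matches every public tutorial, but in a multi-tenant system it silently crosses the tenant boundary the model knows nothing about.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: int
    tenant_id: int
    body: str

# Illustrative in-memory store with documents from two tenants.
DOCS = {
    1: Document(doc_id=1, tenant_id=100, body="tenant 100 data"),
    2: Document(doc_id=2, tenant_id=200, body="tenant 200 data"),
}

def get_document_generic(doc_id: int) -> Document:
    # "Generically correct": fetch by primary key, exactly what public
    # training data shows -- but it ignores tenant boundaries entirely.
    return DOCS[doc_id]

def get_document_scoped(doc_id: int, caller_tenant_id: int) -> Document:
    # "Correct for our architecture": enforce the tenant-isolation
    # invariant the AI had no way to know about.
    doc = DOCS[doc_id]
    if doc.tenant_id != caller_tenant_id:
        raise PermissionError("cross-tenant access denied")
    return doc
```

A reviewer who only checks that the generic version compiles and returns a document will never see the missing check; the flaw lives entirely in an invariant outside the code.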

Security needs to ramp up to machine speed

Barr said that the short timespan between vulnerability discovery and exploitation is something new in his 20 years of experience. “When I started, a new threat would emerge and you had months to study it, respond, and adapt. That’s gone,” he said. “The gap between a capability appearing and it being widely adopted before anyone fully understands the risk is now weeks.” 

There’s only one way to fight that, Barr said. 

“The organizations that handle this well won’t be the ones that slow AI adoption down; they’ll be the ones whose security teams are running at the same speed as their developers.”
—Randolph Barr

That means using AI to battle AI coding, an approach trumpeted by others who have watched the threat landscape evolve. Jeff Williams, CTO at Contrast Security, said forward-thinking organizations will continuously produce software with strong and verifiable security properties.

“The future belongs to whoever can build automated software factories that reliably produce secure code and generate the assurance case to prove it. That is the real shift coming into view.”
—Jeff Williams

It’s his belief that organizations and the industry in general have spent far too long treating security as an endless penetrate-and-patch exercise, where software producers find some flaws, fix a few, and call the rest risk management. “As AI makes insecurity more visible, that model starts to look inadequate,” he said.

The race to manage AppSec risk is on

What this new security challenge boils down to is this: Development and AppSec teams have to discover and remediate their flaws before someone else does. And Williams said that means using AI to prevent, find, and remediate vulnerabilities. Vulnerability prioritization will fade in importance, with the focus shifting to minimizing potential exposure windows.

“If defenders cannot find and fix their own vulnerabilities incredibly quickly, AI-assisted attackers will find and exploit them instead.”
—Jeff Williams

Sysdig’s survey found that many organizations are responding to an environment where threats are coming at them at machine speed by deploying agentic AI to triage alerts, investigate risks, and even initiate automated remediation action with minimal human intervention. 

Humans remain important for overseeing autonomous agents and setting guardrails and policies for their safe operation.

“Autonomous remediation, executed within human‑driven guardrails, is how organizations will keep pace with shrinking exploit timelines.”
—Sysdig survey report
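One way to read "autonomous remediation within human-driven guardrails" is as a policy gate: the agent may act on its own only when a finding matches rules humans approved in advance, and everything else is escalated. The sketch below is a hedged illustration of that idea — the policy keys, severities, and fix types are all invented for the example, not taken from Sysdig or any product.

```python
# Humans pre-approve which (severity, fix_type) combinations an agent
# may remediate with no one in the loop. Everything else escalates.
GUARDRAILS = {
    ("critical", "dependency_upgrade"),
    ("high", "dependency_upgrade"),
}

def triage(finding: dict) -> str:
    """Route a finding: auto-remediate inside the guardrails, else escalate."""
    key = (finding["severity"], finding["fix_type"])
    if key in GUARDRAILS:
        return "auto_remediate"   # agent acts autonomously
    return "human_review"         # outside policy: a human decides

findings = [
    {"id": "F1", "severity": "critical", "fix_type": "dependency_upgrade"},
    {"id": "F2", "severity": "critical", "fix_type": "code_change"},
]
decisions = {f["id"]: triage(f) for f in findings}
```

The point of the design is that speed and safety are split: the agent supplies the machine-speed response, while the guardrail set — owned and edited by humans — bounds what it is allowed to do.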

Learn how to leverage ML-BOMs for immediate visibility into every LLM in your environment.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain Security
