RL Blog

AppSec & Supply Chain Security | October 30, 2025

AI is ramping up coding velocity — and risk

AI is producing code up to four times faster — but with 10 times more AppSec lapses. Here’s what you need to know.

John P. Mello Jr., freelance technology writer
[Image: blurred red lines indicating a traffic timelapse along an AI cityscape]

Developers using AI coding assistants are producing code as much as four times faster than their unassisted peers, but that code contains 10 times more security problems.

That’s one of the findings from Apiiro’s recent analysis of tens of thousands of code repositories and the AI-assisted output of several thousand developers at Fortune 50 enterprises, Apiiro product manager Itay Nussbaum wrote recently.

Pull requests are ballooning, vulnerabilities are multiplying, and shallow syntax errors are being replaced with costly architectural flaws.

Itay Nussbaum

Nussbaum said AI-assisted developers produce three to four times more commits than their non-AI-using peers, but those commits don’t land as small, incremental merges. They are packaged into fewer pull requests (PRs) overall, each one significantly larger in scope, touching more files and services per change.

That packaging is the problem. Bigger, multitouch PRs slow review, dilute reviewer attention, and raise the odds that a subtle break slips through.

Itay Nussbaum
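
Nussbaum's point about PR size can be made concrete with a simple review-gate heuristic. The function and thresholds below are illustrative assumptions, not Apiiro's methodology — a minimal sketch of flagging oversized, multitouch PRs before they reach a reviewer:

```python
# Hypothetical heuristic for flagging oversized, multitouch pull requests.
# The thresholds are illustrative defaults, not any vendor's actual policy.

def flag_oversized_pr(files_changed, lines_changed, services_touched,
                      max_files=10, max_lines=400, max_services=1):
    """Return a list of reasons a PR deserves extra review scrutiny."""
    reasons = []
    if files_changed > max_files:
        reasons.append(f"touches {files_changed} files (limit {max_files})")
    if lines_changed > max_lines:
        reasons.append(f"changes {lines_changed} lines (limit {max_lines})")
    if services_touched > max_services:
        reasons.append(f"spans {services_touched} services (limit {max_services})")
    return reasons

# A large AI-assisted PR vs. a small incremental one.
big = flag_oversized_pr(files_changed=42, lines_changed=2500, services_touched=4)
small = flag_oversized_pr(files_changed=3, lines_changed=80, services_touched=1)
print(big)    # three reasons to escalate review
print(small)  # [] — fine to review normally
```

A check like this can run in CI and post its reasons as a PR comment, nudging authors to split multitouch changes into the smaller merges Nussbaum describes.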

Dwayne McDaniel, developer advocate at GitGuardian, said reviews of new code should always include human feedback, and that takes time. 

The more code that needs to be read and understood by a person, the more likely it is that something will slip through undetected thanks to human error. At the same time, those same reviewers are being pushed to work faster, increasing the chance that they might skim and rubber-stamp the PR rather than doing the due diligence to ensure it does not introduce new vulnerabilities.

Dwayne McDaniel

Here’s what you need to know about the risk coming from higher-velocity coding.

Get Guide: How the Rise of AI Will Impact Supply Chain Security

How AI coding expands the blast radius

Derek Rush, managing senior consultant at Bishop Fox, an offensive security testing and consulting firm, said PRs should generally be single-purpose, focused on one feature or bug fix. “Failing to properly restrict AI code assistants to pursue clear, objective-based goals and test-driven development will lead to code bloat,” he said. 

The risk increases when developers accept code output from an LLM [large language model] without review. Just because an LLM identifies a way to accomplish something doesn’t mean it’s the only or best way, especially without full context. Larger, multipurpose PRs compound that risk, making it harder to assess quality, functionality, and security.

Derek Rush

Melody (MJ) Kaufmann, an author and instructor with O’Reilly Media, said that when AI-generated commits are bundled into large PRs, a single flaw can simultaneously impact multiple services. 

That expanded blast radius makes vulnerabilities far more dangerous, since one missed issue during review can ripple across an entire system.

Melody (MJ) Kaufmann

Diana Kelley, CISO at Noma Security, said that when AI assistants multiply developer throughput, the rhythm of the software development lifecycle (SDLC) may change.

In a secure SDLC, incremental change can be a very positive thing, because smaller changes can be carefully reviewed and assessed for impact on the overall codebase, but when a single merge suddenly touches dozens of files and services, the impact goes way up.

Diana Kelley

AI coding is a double-edged sword

Apiiro’s Nussbaum said the finding that AI-assisted developers open roughly a third as many PRs as other developers translates to more emergency hotfixes, more incident response, and a higher probability that issues slip into production before review catches them.

The reason that happens, Nussbaum said, is that big, multitouch PRs tend to introduce multiple issues at once, so every merge carries more potential for damage. The faster AI accelerates output, the faster unreviewed risk accumulates.

By June 2025, AI-generated code was introducing over 10,000 new security findings per month across the repositories in the Apiiro study — a tenfold spike in just six months compared with December 2024. “And the curve isn’t flattening; it’s accelerating,” Nussbaum wrote.

These flaws span every category of application risk, including open-source dependencies, insecure coding patterns, exposed secrets, and cloud misconfigurations, Nussbaum said.

AI is multiplying not one kind of vulnerability, but all of them at once. For security teams, that means a shift from managing issues to drowning in them.

Itay Nussbaum
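
One of the categories Nussbaum lists — exposed secrets — is also the easiest to gate mechanically. A minimal sketch of a pre-merge secret scan over diff text; the patterns are illustrative and far from exhaustive (production scanners add entropy checks and hundreds of rules):

```python
import re

# Illustrative patterns only — real secret scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_diff(diff_text):
    """Return (line_number, rule_name) pairs for added lines matching a pattern."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+"):   # only inspect lines being added
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

diff = '+api_key = "abcd1234abcd1234abcd"\n+print("hello")\n'
print(scan_diff(diff))   # [(1, 'generic_api_key')]
```

Wired into a pre-commit hook or CI step, a scan like this catches one class of AI-introduced finding at the speed the code is written, rather than after it ships.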

How reliable is this new data?

Jeff Williams, CTO and co-founder of Contrast Security, said he was surprised by the claim of four times the velocity. Williams said a study by Google showed a 10% increase in velocity, while another by METR found a 19% decrease. He was also surprised by the claim of 10 times more vulnerabilities, he said.

The studies I’m reading are suggesting that AI-generated code has roughly the same amount of vulnerabilities. I’m frankly put off by the FUD, and it makes me question whether these aren’t the results of noisy and weak detectors.

Jeff Williams

But Neil Carpenter, principal solution architect at security firm Minimus, said a tenfold increase in security findings is believable. It’s really a matter of context, he said. “AI assistants are like an entry-level programmer. They have the technical skills, but they lack the context and experience that allow them to use those skills effectively in all cases. The tasks that require context — architecture decisions, how to size a PR for effective review, different levels of sensitivity for tokens — are where you see AI failing,” he said. And AI assistants can operate in a number of different ways, he noted.

Less mature orgs are going to have developers with personal accounts using GPT-5 or Claude, while more mature organizations will have centralized control and guardrails. The more guardrails that are in place to give AI the context and the decision-making capabilities, the less you’ll see of the 10x stats and other security issues.

Neil Carpenter

For example, he said, an AI assistant with no context might pull nginx from DockerHub because it’s an easily available public source. But savvy organizations are going to give the AI rules that require using containers from a trusted set of images, which gives the AI clear instructions and context.
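
The trusted-image rule Carpenter describes can be enforced with a small check. A minimal sketch, assuming a hypothetical hardened registry at registry.example.com, that flags Dockerfile base images pulled from anywhere else:

```python
# Sketch of the guardrail Carpenter describes: restrict base images to a
# trusted registry. The registry name and Dockerfile content are hypothetical.

TRUSTED_REGISTRY = "registry.example.com/hardened/"

def check_base_images(dockerfile_text):
    """Return FROM images that do not come from the trusted registry."""
    violations = []
    for line in dockerfile_text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("FROM "):
            image = stripped.split()[1]
            if image != "scratch" and not image.startswith(TRUSTED_REGISTRY):
                violations.append(image)
    return violations

dockerfile = """\
FROM nginx:latest
COPY site/ /usr/share/nginx/html
"""
print(check_base_images(dockerfile))  # ['nginx:latest'] — pulled from a public hub
print(check_base_images("FROM registry.example.com/hardened/nginx:1.27\n"))  # []
```

The same allowlist can be handed to the AI assistant as a rule, so it reaches for the hardened image in the first place instead of defaulting to DockerHub.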

Carpenter said that effective rules should greatly reduce security findings. Nginx on DockerHub has 83 total CVEs, 27 of them high or critical, but a hardened nginx image has none. Increases in architectural flaws also highlight this challenge of context, he added.

AI assistants, when not given proper context, often rebuild or rewrite functionality, instead of calling out to other functions or modules in the application. More code to do the same amount of work, which results in increased attack vectors and lower reliability.

Neil Carpenter

What kind of risk management changes are needed for AI coding?

Eran Kinsbruner, application security (AppSec) evangelist at Checkmarx, argued that AI assistants are fundamentally reshaping software risk by moving security from a reactive to a proactive stance. 

Instead of detecting vulnerabilities after code is deployed, intelligent agents can now identify and remediate issues as code is written, providing prevention in real time.

Eran Kinsbruner 

Kinsbruner said that because AI-driven assistants can operate autonomously across IDEs and CI/CD pipelines while interpreting context, enforcing policies, and continuously learning from new threats, they create a preventative layer of protection that scales with development speed and complexity. 

Intelligent agents have the capability, when created and utilized correctly, to secure AI-generated code at the speed it’s created, specifically tailored for developers, AppSec leaders, and CISOs. When integrating within AppSec platforms, they can optimize vulnerability detection and prevention, while maintaining development speed in the AI-generated code age.

Eran Kinsbruner

But Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions, is concerned that AI is reshaping software risk by blurring accountability. He also advocates more oversight.

We’re moving from code written by humans to code written on behalf of humans. Governance, not generation, will determine whether that shift strengthens or weakens security.

Rosario Mastrogiacomo

Kevin Gaffney, CTO of the cyber-incident response company CYGNVS, said organizations need to establish clear guidelines when using AI coding. “AI-generated code is still your code,” he cautioned. “Developers must understand what they’re shipping, even if AI helped write it. Code reviews must remain small and focused. Security practices can’t be compromised for speed.”

The future isn’t about replacing human judgment with AI. It’s about augmenting human capabilities while maintaining human responsibility. AI should make developers more productive, not less accountable. Knowledge is knowing a tomato is a fruit. Wisdom is not putting it in a fruit salad.

Kevin Gaffney

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: AppSec & Supply Chain Security
