
BSIMM16: AI redefines the AppSec landscape

AI coding is the new reality — and it will further destabilize software supply chain security. So step up your AppSec.

AI coding and AppSec

It has become impossible to discuss software development without considering the effects of artificial intelligence, and the 16th edition of Black Duck’s annual Building Security in Maturity Model (BSIMM) report makes it clear that AI is the defining challenge for application security (AppSec) today.

The BSIMM16 report portrays organizations in 2026 as needing to secure their AI coding while defending against AI-enabled attacks. It identifies three major organizational shifts: 

  • A 10% rise in teams using attack intelligence to track emerging AI vulnerabilities
  • A 12% increase in the use of risk-ranking methods to determine where code generated by large language models (LLMs) is safe to deploy
  • A 10% increase in applying custom rules to automated code-review tools to better catch AI-generated code issues

Caroline Wong, chief strategy officer at Axari, said AI coding is “already shaping how software is built” — and that has broad implications.

Security teams now have to account for code that was generated, modified, or reviewed by systems that do not reason the way humans do. That affects threat modeling, code review expectations, and accountability.

Caroline Wong

And, she added, AppSec teams must be prepared to question code provenance, prompt handling, training-data exposure, and whether existing controls are up to the new AI challenges. “Pretending this is still a future problem puts teams behind before they even start,” she said.

Here are key takeaways from the BSIMM16 report.

Shifting left — and up

The BSIMM16 report finds that AI-assisted development and LLMs have become part of everyday development workflows. That can have significant ramifications for security teams, said Dag Flachet, co-founder of Codific.

With AI-assisted code, the job of security teams moves both up and left.

Dag Flachet 

Shifting left — meaning tackling things such as security requirements formulation and threat modeling earlier in the software development lifecycle — is well known in the industry, he said, but the emerging trend is shifting up, where raw coding mistakes are less likely, but business logic vulnerabilities, architectural vulnerabilities, and supply chain issues arise. “This stems from the way AI-assisted development patches together reliable working components but may miss some of the big-picture issues,” he said.

Black Duck CEO Jason Schmitt said the real risk with AI-generated code is the illusion of correctness, with code looking polished and professional but containing serious security flaws. 

We’re witnessing a dangerous paradox: developers increasingly trust AI-produced code that lacks the security instincts of seasoned experts.

Jason Schmitt

Organizations are going to need new architectures to deal with the sorts of issues addressed in the BSIMM16 report, said Saumitra Das, vice president of engineering at Qualys, because they are being inundated with AI-generated code but lack people who can really understand that code. They will need to review that code using AI models trained on diverse datasets, he said. 

We need automation via, for example, MCP, that can take any code being compiled and send it to vendor A for security reviews, understand the findings, and use vendor B to automate the patching of the issues found.

Saumitra Das

“Even if we find issues with large generated codebases, we will need agentic workflows to fix them with minimal human intervention,” Das added.
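The vendor-A-review, vendor-B-patch flow Das describes (via MCP, the Model Context Protocol) can be sketched as a simple orchestration loop. This is an illustrative sketch only: `vendor_a_review` and `vendor_b_patch` are hypothetical stand-ins for real vendor integrations, and the single `eval`-to-`ast.literal_eval` rule is a toy example of a finding and its automated fix.

```python
def vendor_a_review(source: str) -> list[dict]:
    """Hypothetical review service: flag risky calls, line by line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Toy rule: raw eval() of untrusted input is a classic AI-suggested flaw.
        if "eval(" in line and "literal_eval(" not in line:
            findings.append({"rule": "no-eval", "line": lineno})
    return findings

def vendor_b_patch(source: str, findings: list[dict]) -> str:
    """Hypothetical remediation service: apply a fix for each finding."""
    lines = source.splitlines()
    for f in findings:
        if f["rule"] == "no-eval":
            idx = f["line"] - 1
            lines[idx] = lines[idx].replace("eval(", "ast.literal_eval(")
    return "\n".join(lines)

def review_and_remediate(source: str) -> tuple[str, list[dict]]:
    """Orchestrate: send code out for review, route findings to patching."""
    findings = vendor_a_review(source)
    patched = vendor_b_patch(source, findings) if findings else source
    return patched, findings

code = "import ast\nvalue = eval(user_input)"
patched, findings = review_and_remediate(code)
# findings -> [{"rule": "no-eval", "line": 2}]
```

In a real deployment the two vendor functions would be remote tools (for example, exposed over MCP), and the "agentic" part is the loop that interprets findings and decides which remediation to apply.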

A new focus on supply chain security

BSIMM16 also found that software supply chain security is getting much-needed attention, with organizations looking beyond their internally developed code to secure the entire software supply chain ecosystem.

That attention is warranted, said Jason Soroko, a senior fellow at Sectigo. “Organizations should assume that AI-generated code expands their software supply chain risk, not just their development speed.” 

AI can amplify dependency sprawl and introduce opaque third-party components that traditional AppSec programs were not built to inventory or govern at rapid-release cadence. The result is a widening gap where shipping gets easier while accountability and assurance get harder, and the downstream cost shows up as security exposure, compliance friction, and slower incident response when something breaks.

Jason Soroko

He said that to close that gap, security teams need to treat AI output like third-party software and enforce their third-party controls on the developer workflow. He recommended starting with dependency management because organizations that are on top of their open-source dependencies report far higher preparedness.

Next, organizations should harden the pipeline with automated, continuous monitoring to accelerate remediation, because teams that do so fix critical vulnerabilities far more often and far more quickly.

Third, Soroko said, organizations should make the validation of software bills of materials non-optional for suppliers because teams that always validate supplier SBOMs report stronger third-party readiness and faster one-day remediation. 
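Making supplier SBOM validation non-optional implies an automated check at intake. The sketch below shows the idea for a CycloneDX-style JSON SBOM; the required-field list and the per-component checks are simplified assumptions, not the full CycloneDX schema, which a real pipeline would validate against.

```python
import json

# Simplified assumption of what "valid enough" means; real validation
# should check the SBOM against the full CycloneDX (or SPDX) schema.
REQUIRED_TOP_LEVEL = ("bomFormat", "specVersion", "components")

def validate_sbom(raw: str) -> list[str]:
    """Return a list of problems found; an empty list means the SBOM passes."""
    try:
        bom = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    for field in REQUIRED_TOP_LEVEL:
        if field not in bom:
            problems.append(f"missing field: {field}")
    # Every component should at least be identifiable by name and version.
    for i, comp in enumerate(bom.get("components", [])):
        if not comp.get("name") or not comp.get("version"):
            problems.append(f"component {i} lacks name/version")
    return problems
```

Wiring a check like this into supplier onboarding and build intake is what turns "always validate supplier SBOMs" from policy language into an enforced control.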

Fourth, they should boost their compliance maturity by implementing multiple controls because having three or more controls is known to increase one-day remediation by more than 50%, Soroko said.

Finally, organizations should put all the requirements into CI with clear pass/fail gates, codified policy, and audit-ready evidence so security becomes repeatable at AI speed instead of having to be negotiated release by release, he said.
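A codified pass/fail gate of the kind Soroko describes can be as simple as version-controlled thresholds plus a function that turns scan findings into a decision and an audit record. The policy keys and finding fields below are assumptions for illustration, not any particular vendor's format.

```python
# Codified policy: lives in version control alongside the pipeline,
# so every release is judged by the same rules. Keys are illustrative.
POLICY = {"max_critical": 0, "max_high": 2, "require_sbom": True}

def evaluate_gate(findings: list[dict], sbom_present: bool,
                  policy: dict = POLICY) -> dict:
    """Return a pass/fail decision plus the reasons, for audit evidence."""
    counts = {"critical": 0, "high": 0}
    for f in findings:
        sev = f.get("severity")
        if sev in counts:
            counts[sev] += 1
    failures = []
    if counts["critical"] > policy["max_critical"]:
        failures.append(f"critical findings: {counts['critical']}")
    if counts["high"] > policy["max_high"]:
        failures.append(f"high findings: {counts['high']}")
    if policy["require_sbom"] and not sbom_present:
        failures.append("SBOM missing")
    return {"passed": not failures, "failures": failures, "counts": counts}
```

The returned record doubles as the "audit-ready evidence": each build's decision, counts, and reasons can be archived without any manual write-up.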

SBOM adoption on the rise

Another sign in the BSIMM16 report that supply chain security is becoming a core priority is a significant increase in SBOM adoption for deployed software. It reported a rise of more than 40% in organizations using SBOMs to establish standardized technology stacks.

This surge will help organizations understand exactly what’s in their software — whether written by humans, AI, or third parties — and respond quickly when vulnerabilities surface, Black Duck’s Schmitt said. “As regulatory mandates expand, SBOMs are moving beyond compliance. They’re becoming foundational infrastructure for managing risk in an AI-driven development landscape,” he said.

Codific’s Flachet agreed that the regulatory environment has much to do with the increase in SBOM adoption.

“SBOMs become both a legal and a practical obligation. If you are selling a product in the EU, the new European regulations, such as NIS2 and CRA, force you to generate SBOMs for everything you do.”

Dag Flachet

From a practical perspective, however, we are also seeing an increased prevalence of supply chain issues, he said. “More issues stem from vulnerable components, so checking for this earlier in the SDLC makes sense.”

“Don’t waste time building with a broken component only to change it later,” he advised. “And always maintain a good inventory of everything in your systems so that if a new issue emerges (think Log4j) you know exactly where it is.”
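The "know exactly where it is" inventory lookup Flachet recommends is straightforward once SBOMs exist for each application. A minimal sketch, assuming CycloneDX-style JSON documents (the application names, components, and versions below are made up for illustration):

```python
import json

# Made-up SBOM fragments, one per application; real CycloneDX documents
# carry far more metadata (purls, licenses, hashes, dependency graphs).
SBOMS = {
    "billing-service": json.dumps({
        "bomFormat": "CycloneDX",
        "components": [
            {"name": "log4j-core", "version": "2.14.1"},
            {"name": "jackson-databind", "version": "2.15.2"},
        ],
    }),
    "reporting-service": json.dumps({
        "bomFormat": "CycloneDX",
        "components": [{"name": "commons-text", "version": "1.10.0"}],
    }),
}

def find_component(sboms: dict[str, str], name: str) -> list[tuple[str, str]]:
    """Return (application, version) pairs wherever the component appears."""
    hits = []
    for app, raw in sboms.items():
        bom = json.loads(raw)
        for comp in bom.get("components", []):
            if comp.get("name") == name:
                hits.append((app, comp.get("version")))
    return hits

# The Log4j drill: which applications ship log4j-core, and at what version?
affected = find_component(SBOMS, "log4j-core")
# affected -> [("billing-service", "2.14.1")]
```

When the next Log4j-class disclosure lands, this query is the difference between a targeted patch list and a fleet-wide scramble.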

Why AppSec maturity matters

For security professionals, BSIMM16 offers guidance for building a well-structured AppSec program. “Using a maturity model, as opposed to a compliance framework, gives much more depth to each one of the practices as it acknowledges a wider range of possible implementations,” Flachet said.

“Maturity models, by definition, also recognize that your ideal target for an activity may not be the highest level of maturity,” he said. “Instead, you can calibrate your security posture based on the risk profile, risk appetite, and resources available in your context.”

Axari’s Wong said BSIMM16 shows what organizations are actually doing, not what a framework says they should be doing. “That distinction matters when you are accountable for outcomes, not just intent,” she said.

“BSIMM gives security leaders a reality check. It helps you calibrate maturity, explain tradeoffs, and justify priorities using evidence from peers who are dealing with the same constraints. I have always valued it as a grounding mechanism. It pulls the conversation out of theory and back into practice.”
Caroline Wong

It’s helpful, Wong said, to have a tool that reflects where software security is headed, and the BSIMM16 report reinforces a shift many security leaders are already feeling.

Progress now depends less on adding tools and more on integration, judgment, and execution. That shift is uncomfortable, but it is also where real improvement happens.

Caroline Wong