To the dismay of many security pros, the U.S. Department of Commerce’s National Institute of Standards and Technology has announced that it is scaling back the enrichment of Common Vulnerabilities and Exposures (CVEs) in the National Vulnerability Database (NVD).
In the past, NIST’s NVD program analyzed all CVEs to add details such as severity scores and product lists that help cybersecurity professionals prioritize and mitigate vulnerabilities, the agency explained in a statement. Going forward, NIST will add details to, or “enrich,” only those CVEs that meet certain criteria: They must either appear in CISA’s Known Exploited Vulnerabilities (KEV) catalog or affect software categorized as critical under Executive Order 14028.
Executive Order 14028 was issued in May 2021 following a series of major cyberattacks, including the SolarWinds supply chain attack and the Colonial Pipeline ransomware attack. Aiming to modernize and strengthen the federal government’s cybersecurity posture, the order included a number of new security standards, including a mandate that software vendors provide software bills of materials (SBOMs) so agencies could know what components are in the software they use.
In its statement, NIST explained that the enrichment changes are being driven by a surge in CVE submissions — they increased 263% between 2020 and 2025. “We don’t expect this trend to let up anytime soon,” the agency wrote. Submissions during the first three months of 2026 were about one-third higher than the same period last year, NIST noted.
“We are working faster than ever. We enriched nearly 42,000 CVEs in 2025 — 45% more than any prior year. But this increased productivity is not enough to keep up with growing submissions.”
—NIST
Here's what you need to know about NVD's move to selective enrichment — and why it matters.
[ See webinar: Stop Trusting Packages. Start Verifying Them. ]
The selective enrichment policy is expected to create many challenges for security teams, including large backlogs of CVEs with no actionable context, increased manual analysis to determine relevance and risk, greater variability in data quality across public sources, and blind spots in cloud, SaaS, open-source, and third-party ecosystems that fall outside NIST’s prioritization criteria.
It will also be more difficult to match CVEs to the versions of the programs they affect, since scanners and other tools use the product-matching enrichment field to determine which versions of software are affected, Dan Lorenc, CEO and founder of Chainguard, explained in a recent edition of the Resilient Cyber podcast.
“If a new vulnerability in one of those pieces of software appears today, the NVD won’t be able to match that to the software.”
—Dan Lorenc
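To make the product-matching issue concrete, here is a minimal sketch of how a scanner might use that enrichment field to decide whether a given product version is affected. The field names loosely mirror the cpeMatch entries in NVD’s API responses, but the vulnerability record, product names, and version logic below are simplified and hypothetical.

```python
# A minimal sketch of what NVD's product-matching enrichment enables.
# The dictionary at the bottom is a simplified, hypothetical stand-in for
# one cpeMatch entry; real CPE matching (wildcards, update/edition fields,
# messy version schemes) is considerably more involved.

def parse_version(v: str) -> tuple:
    """Naive dotted-version parser; real tools handle far messier schemes."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def is_affected(product_cpe: str, version: str, cpe_match: dict) -> bool:
    """Return True if product/version falls inside the vulnerable range."""
    if not cpe_match.get("vulnerable", False):
        return False
    # Compare the vendor and product components of the CPE 2.3 string.
    if product_cpe.split(":")[3:5] != cpe_match["criteria"].split(":")[3:5]:
        return False
    v = parse_version(version)
    start = cpe_match.get("versionStartIncluding")
    end = cpe_match.get("versionEndExcluding")
    if start and v < parse_version(start):
        return False
    if end and v >= parse_version(end):
        return False
    return True

# Hypothetical enrichment data for an imaginary CVE:
match = {
    "vulnerable": True,
    "criteria": "cpe:2.3:a:examplecorp:widget:*:*:*:*:*:*:*:*",
    "versionStartIncluding": "2.0.0",
    "versionEndExcluding": "2.4.7",
}

print(is_affected("cpe:2.3:a:examplecorp:widget", "2.3.1", match))  # True
print(is_affected("cpe:2.3:a:examplecorp:widget", "2.4.7", match))  # False
```

Without that enrichment data for new CVEs, every tool that depends on it has to reconstruct these ranges on its own.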
Resilient Cyber podcast host Chris Hughes said this presents a serious problem for teams managing AppSec risk.
“If you can’t map vulnerabilities to products in your ecosystem, you’re kind of SOL.”
—Chris Hughes
Nick (Ning) Mo, CEO and co-founder of Ridge Security Technology, said reducing enrichment exemplifies how current vulnerability management (VM) is failing.
“Security teams get thousands of CVEs with scores that don’t really tell them what is exploitable in their own environment. A high CVSS number doesn’t [necessarily] mean an attacker can use it in your setup. A medium one can be the one that breaks you. So teams patch by score or theory, and the list keeps growing.”
—Nick (Ning) Mo
Re-prioritizing CVEs at the framework level helps, but it doesn’t fix the bigger problem, Mo said. “What’s missing is validation on top of VM findings. You need to actually prove in your own environment which vulnerabilities chain into a real attack path and which ones don’t. That’s how you turn thousands of CVEs into the handful that actually matter,” he said.
Denis Calderone, principal and CTO at Suzu Labs, thinks an overhaul of the system was probably inevitable given the increased volume of new CVE submissions.
“We suspect that AI-assisted discovery is probably already pushing that number higher. After all, Microsoft just had its second-largest Patch Tuesday ever, and even the Zero Day Initiative says their incoming submissions have tripled thanks to AI tools.”
—Denis Calderone
He supports the new policy’s treatment of CISA’s KEV catalog as the top-priority tier. “That is the right call and something we’ve been doing with our clients for some time now, so we’re very happy to see that becoming the official model.” But, he added, the announcement leaves gaps that need attention.
“The biggest concern for us is that NIST will no longer independently score CVEs when the submitting authority has already provided a CVSS score. That sounds efficient until you remember that the submitting authority is often the vendor, and vendors don’t always get their own bugs right.”
—Denis Calderone
He cited a recent incident involving F5, in which a BIG-IP vulnerability was scored as an 8.7 denial-of-service issue for five months before being reclassified as a 9.8 remote code execution flaw. “For organizations using CVSS to drive patching priority, that miscategorization meant the real risk sat in the wrong queue for five months while attackers were already exploiting it,” Calderone said. NIST’s scaling back of independent validation makes that kind of error more likely, he added.
He also thinks that addressing the processing volume problem without modifying the scoring methodology is a mistake. For example, he said, “CVSS still scores vulnerabilities in isolation. It doesn’t model chainability, where an attacker combines a medium-severity information disclosure with a medium-severity privilege escalation and ends up with critical impact.”
“Neither bug scores as urgent on its own, but together they give you full system compromise.”
—Denis Calderone
NIST’s scoring methodology doesn’t incorporate the newer Exploit Prediction Scoring System (EPSS), which estimates real-world exploitation probability. And the “critical software” definition under EO 14028 is narrower than most people realize, Calderone cautioned. “It covers operating systems, browsers, endpoint security, and identity systems, but a vulnerability in your SaaS platform or your industrial control system may not qualify for enrichment at all.”
He advised organizations that have been relying on the NVD as their primary source for vulnerability context to build their own prioritization stack, one that combines the KEV catalog and EPSS scores with their own environmental context.
“The days of waiting for NIST to tell you what matters are over.”
—Denis Calderone
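As a rough illustration of that advice, the sketch below cross-references a list of CVE IDs against CISA’s public KEV feed and FIRST’s EPSS API and buckets them into patch tiers. The feed URLs are the publicly documented ones, but the 0.1 EPSS threshold is an arbitrary example cutoff, not a recommended policy.

```python
# Sketch of a homegrown prioritization stack: KEV membership first,
# then EPSS exploitation probability, then everything else.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")
EPSS_URL = "https://api.first.org/data/v1/epss"

def load_kev_ids() -> set:
    """Fetch CISA's KEV catalog and return the set of listed CVE IDs."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    return {item["cveID"] for item in catalog["vulnerabilities"]}

def load_epss_scores(cve_ids: list) -> dict:
    """Fetch EPSS exploitation-probability scores for a batch of CVEs."""
    resp = requests.get(EPSS_URL, params={"cve": ",".join(cve_ids)},
                        timeout=30).json()
    return {row["cve"]: float(row["epss"]) for row in resp.get("data", [])}

def triage(cve_ids: list, epss_threshold: float = 0.1) -> list:
    """Rank CVEs: KEV-listed first, then by descending EPSS score."""
    kev = load_kev_ids()
    scores = load_epss_scores(cve_ids)

    def tier(cve: str) -> str:
        if cve in kev:
            return "patch now (on KEV)"
        if scores.get(cve, 0.0) >= epss_threshold:
            return "patch soon (high EPSS)"
        return "backlog"

    ranked = sorted(cve_ids,
                    key=lambda c: (c in kev, scores.get(c, 0.0)),
                    reverse=True)
    return [(cve, tier(cve)) for cve in ranked]

if __name__ == "__main__":
    for cve, action in triage(["CVE-2021-44228", "CVE-2017-0144"]):
        print(f"{cve}: {action}")
```

The missing third ingredient, per Calderone, is the organization’s own context: asset criticality, exposure, and compensating controls, which no public feed can supply.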
David Lindner, chief information security and data privacy officer at Contrast Security, said NIST’s decision to prioritize high-impact vulnerabilities will force security teams to shift from reactive compliance based on raw CVSS scores to proactive risk management driven by threat intelligence. “Modern defenders must move beyond the noise of total CVE volume and instead focus their limited resources on the CISA KEV list and exploitability metrics,” he said.
“While this transition may disrupt legacy auditing workflows, it ultimately matures the industry by demanding that we prioritize actual exposure over theoretical severity.”
—David Lindner
Relying on a curated subset of actionable data is far more effective for national resilience than maintaining a comprehensive but unmanageable archive of every minor bug, he added.
AI spam has become so prevalent, said Jim Sherlock, vice president for AI and cybersecurity R&D at ProCircular, that the community has dubbed it “death by a thousand slops.”
“Now that anyone with access to a frontier model can just point it at a codebase and say, ‘Go hunt,’ they’re … slamming maintainers with endless reports, hoping to score some cash, a little clout, or just a pat on the back. No maintainer of open-source projects is immune to it.”
—Jim Sherlock
He likened AI spam to a crude DDoS attack on a code maintainer’s free time. “It’s causing them to either lose their minds or, like the guy maintaining the popular cURL project, just shutter their bug bounty programs entirely,” Sherlock said.
“Burning out an open-source dev is one thing, but the impact this noise is having on national security infrastructure is a whole different ballgame. Between projects pulling the plug on bounties and federal databases giving up on comprehensive analysis just to survive the backlog, the entire cybersecurity industry is being forced to radically re-engineer how it handles vulnerability reporting in the AI era.”
—Jim Sherlock
All of this is happening while the next class of hyper-capable frontier large language models, such as Claude Mythos, are “lining up on the tarmac, engines running, ready to flood the zone faster than any human could ever hope to patch,” Sherlock said.
In the Resilient Cyber podcast, Josh Bressers, vice president of security at Anchore, called for patience as the industry searches for solutions to the vulnerability-overload problem. “I know this is chaos, and it’s difficult, and it’s easy to yell at people and accuse people of being dumb or mean or evil or whatever, but everyone in this space is horribly overworked,” he said.
“I think we’ve reached the point where some of the structure that was holding everything up is collapsing. It’s scary, and it’s terrible, but just have some patience. We’re working on it.”
—Josh Bressers
He added that the severity of the problem makes it incumbent on security pros to be constructive. “Don’t just complain; find somewhere to help. Goodness knows there’s plenty to do,” he said.
Jeremy Long, a principal engineer at ServiceNow and founder and project lead of the OWASP Dependency Check Program, said that from an attacker’s perspective, targeting the software development supply chain makes the most sense given the large attack surface and evolving complexity of today’s software.
In a 2025 Black Hat conference talk, "Reflections on Trust in the Software Supply Chain," Long explained how the threat of software supply chain attacks has changed, making organizations’ current defenses inadequate. This gap in security coverage is the result of changes in both the intention and methodology of software supply chain attacks, Long explained.
This comes down to the difference between the two ways supply chain threats can be described: vulnerable and malicious. Tools that organizations are currently using to secure their supply chains, including software composition analysis (SCA) and static application security testing (SAST), pinpoint only the threats that can be classed as vulnerable.
The tracking of CVEs is an example of a software supply chain security practice that centers on vulnerable threats rather than malicious ones, meaning intentional campaigns by threat actors to cause harm. Long stressed that while these vulnerabilities at times have the potential to cause damage, they are not the major threat to supply chains.
"I do think that in a lot of these cases, some of these vulnerabilities might be overhyped."
—Jeremy Long
Long said that early software supply chain attacks relied on vulnerable threats that could be detected and patched, but more recent supply chain attacks, such as SolarWinds and 3CX, were catastrophic not because of vulnerabilities but because of malicious threats such as malware insertion or the abuse of leaked secrets.
Focusing only on finding vulnerable threats, which tools such as SCA and SAST can spot, will leave a gap in organizations’ defenses against supply chain attacks, Long said. Organizations that want to properly defend against today's software supply chain attacks will have to adopt tooling and measures that detect and mitigate malicious threats, he said.
Long recommends modern tooling such as binary analysis, which can detect threats such as malicious build-time dependencies. This approach can compare build versions, surfacing anomalies that traditional testing misses and that further analysis may deem malicious.
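As a crude sketch of the build-comparison idea (not the specific tooling Long describes), the snippet below hashes every file in two build outputs and flags anything added, removed, or changed between versions. The directory paths are hypothetical, and real binary analysis inspects far more than file digests.

```python
# Diff two build outputs by content hash to surface unexplained changes.
# Real binary analysis goes much deeper (behaviors, code signing,
# embedded secrets); this only shows the version-comparison concept.
import hashlib
from pathlib import Path

def digest_tree(root: str) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in base.rglob("*") if p.is_file()
    }

def diff_builds(old_root: str, new_root: str) -> dict:
    """Report files added, removed, or modified between two builds."""
    old, new = digest_tree(old_root), digest_tree(new_root)
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "modified": sorted(f for f in set(old) & set(new) if old[f] != new[f]),
    }

if __name__ == "__main__":
    # Hypothetical build directories for two consecutive releases.
    report = diff_builds("dist/v1.4.2", "dist/v1.4.3")
    for kind, files in report.items():
        for f in files:
            print(f"{kind}: {f}")  # entries nobody can explain warrant a closer look
```

An artifact that appears in the new build with no corresponding source change is exactly the kind of anomaly that, per Long, merits deeper analysis.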
Recent federal guidance on software supply chain security from the Enduring Security Framework recommends shifting from legacy AST to binary analysis and reproducible builds to tackle supply chain threats.
Learn how RL's Spectra Assure Community can help AppSec teams secure their SDLC.