Assessing software risk is a crucial task for security operations (SecOps) teams, who are bombarded by more than 4,000 alerts a day. A key tool historically has been the Common Vulnerability Scoring System (CVSS). However, while the CVSS has proven valuable in identifying risks associated with software bugs, it is less effective when applied to the complexities of artificial intelligence (AI) systems. With that in mind, OWASP has launched a new AI Vulnerability Scoring System (AIVSS) project.
Ken Huang, co-chair of the Cloud Security Alliance's AI Safety Working Group, wrote in his Agentic AI Substack that the AIVSS should be thought of as an extension of the existing CVSS framework: a new way to understand and score AI-specific risks.
"It's an initiative born out of a critical need to standardize how we identify, assess, and communicate security vulnerabilities specific to AI systems," Huang wrote.
Traditional vulnerability scoring systems were never designed with the intricacies of AI in mind, said Marko Simeonov, founder and CEO of the cybersecurity services firm Plainsea.
"We need the AIVSS because the threats posed by AI vulnerabilities aren’t always obvious or directly exploitable in the conventional sense," Simeonov said. As examples of this shift in risk, he cited data poisoning and model inversion, which "might not crash a system or open a port, but they can alter decision outcomes, compromise privacy, or degrade the integrity of entire AI-driven services."
"AI systems don’t just execute code. They learn, adapt, and behave in ways that are often unpredictable and opaque, even to their creators," Simeonov said. "The AIVSS is an essential response to such shifts."
Organizations urgently need the AIVSS because existing frameworks like CVSS and EPSS are insufficient for the dynamic, adaptive, and autonomous nature of AI agents, said Rosario Mastrogiacomo, chief strategy officer for Sphere Technology Solutions.
Unlike traditional software vulnerabilities, AI systems can learn, drift from intended behavior, and autonomously escalate privileges, requiring new scoring methods that reflect the unpredictability and agency of these digital actors, Mastrogiacomo said.
"Without an AIVSS, organizations risk blind spots in their security posture — failing to identify, prioritize, and remediate vulnerabilities uniquely associated with AI systems, such as cognitive instability, prompt injection, and delegation drift," he said.
Neil Carpenter, principal solution architect for the security firm Minimus, explained that CVSS and related vulnerability intelligence are used today to evaluate the potential impact of software vulnerabilities and make decisions on how to mitigate them. "In practice, that almost always comes down to, 'If you have version ABC of this software package installed, upgrade to ABD,'" he said.
With artificial intelligence and, particularly, with generative AI, vulnerabilities may not be so deterministic, Carpenter said.
"Many of the problems defined in the OWASP Top 10 for LLM applications and machine learning are tied to the data, models, and process flow used in AI applications rather than specific software packages," Carpenter said. "A CVSS-like framework that recognizes this may be useful to organizations in evaluating vulnerability risk if they are heavy users of AI."
However, Carpenter cautioned that the success of the AIVSS framework will hinge on its ability to accurately assess risk and to prescribe remediation. "The industry's experience with vulnerability management has taught us that it's not enough to just point out the risk — there must be a clear path to addressing the risk," he said.
Allen Householder, a principal engineer in the CERT Division at Carnegie Mellon University's Software Engineering Institute, said that software-based systems have flaws and vulnerabilities that require coordination and response among stakeholders to ensure mitigations and fixes are developed, distributed, and widely deployed in a timely manner. These actions mitigate the risk of negative consequences from adversarial exploitation or accidental AI misuse, he said. "Although the process that creates AI can look very different, AI is still software, and these concerns remain," Householder said.
"Flaws in AI systems can be broader than what many folks think about with software vulnerabilities," Householder said. "At the CERT Division, our approach is that it is less useful to argue whether a flaw in an AI system is a vulnerability in the traditional cybersecurity sense. Instead, we consider, 'Does the response to this knowledge require coordination? With whom? And with what urgency?'"
"No organization we have encountered has enough capacity to resolve every problem they become aware of. There is a need to prioritize that coordination and response, both in the overall ecosystem and for each stakeholder — software suppliers, service providers, system deployers, security response teams, and end users," Householder said. "Tools like AIVSS are an attempt to turn a set of facts about a particular vulnerability in an AI system into an ordered scale that can help stakeholders prioritize their respective responses."
Traditional vulnerability response tends to focus on fixes, but flaws in AI systems can be harder to decisively fix because many of their undesired behaviors are a result of the statistical nature of the models they are built on, Householder stressed.
"AI vulnerability response might need to focus more on risk mitigation," Householder said. "So it's appropriate for an AI vulnerability response prioritization framework to accommodate a different and broader set of concerns than a traditional vulnerability prioritization scheme might include."
However, he warned that AIVSS, as it is currently described, has some legacy baggage from CVSS. "AIVSS continues the use of arbitrary weights on categorical factors to produce a numerical score, which can, in our opinion, obfuscate the salient details that might vary among stakeholders. It also allows common errors such as averaging CVSS scores," he observed.
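Householder's point about averaging can be made concrete with a quick sketch. The component names and scores below are invented for illustration, but they show the general pitfall: averaging the CVSS scores of several findings washes out a single critical issue that should dominate any prioritization decision.

```python
# Hypothetical component scores on the standard 0.0-10.0 CVSS scale.
# Values are invented for illustration only.
component_scores = {
    "auth-service": 9.8,   # critical: remote code execution
    "logging-lib": 2.1,    # low: local information disclosure
    "ui-widget": 3.3,      # low
    "parser": 2.4,         # low
}

# Averaging collapses the critical finding into a "medium" number.
average = sum(component_scores.values()) / len(component_scores)

# The worst single score is what actually drives the risk.
worst = max(component_scores.values())

print(f"average score: {average:.2f}")  # reads as medium severity
print(f"worst score:   {worst:.1f}")    # the critical RCE
```

Here the average lands at 4.40, which a triage dashboard would bucket as medium, even though one component carries a 9.8 critical flaw. This is why CVSS guidance treats the scores as ordinal labels rather than quantities that can be meaningfully averaged.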
In his post on Substack, Huang outlined the roadmap for the AIVSS project. It includes addressing the specific security challenges of AI deployed on mobile devices, developing specialized scoring calculators for specific industries, establishing educational programs and professional certifications to build expertise in AI vulnerability management, and expanding the AIVSS framework beyond agentic AI to encompass other forms of AI.
Sphere Technology's Mastrogiacomo said that expanding OWASP's AIVSS into a comprehensive scoring framework is desirable because it aligns vulnerability assessment with real-world AI governance needs. "Organizations need a holistic framework that doesn't just list vulnerabilities but also contextualizes them, addressing issues like identity drift, autonomous privilege escalation, and ethical risks," he said.
A comprehensive framework provides standardized metrics and clear guidance, allowing security teams to proactively manage AI-specific vulnerabilities rather than reacting post-incident, Mastrogiacomo said. "It also facilitates regulatory compliance by explicitly addressing accountability, explainability, and human oversight — key requirements under regulations like the EU AI Act and NYC Local Law 144."
Plainsea's Simeonov stressed that the AIVSS supports more complete AI governance.
"To put it simply, a scoring system alone helps you classify issues," Simeonov said. "A comprehensive framework helps you govern them."