The AI security landscape has become a maze of overlapping vendor claims and made-up categories, leaving organizations struggling to distinguish between products that can actually help and those that are just marketing noise.
A new report on the AI security market by Latio outlines how confusion comes from lumping together very different security challenges under broad labels such as "AI trust," "risk," and "security management." In reality, AI security breaks down into four separate problem areas, each requiring its own approach, tools, and expertise.
Organizations need to understand these distinctions to avoid blind spots, wasted resources, and ineffective protection against real-world risks, the report states. Here are the key takeaways from the report so that your team can make solid tooling decisions that will provide better security outcomes.
[ Report: How AI Impacts Supply Chain Security | Blog: How to Secure your AI with an ML-BOM ]
AI security domains, defined
Latio identified the four domains of AI security as end-user data control, AI posture management, application runtime protection, and AI for security operations. Each category, according to Latio, requires its own specific technologies, skills, and implementation approaches. Yet a tendency by many vendors to blur these boundaries has made it harder for organizations to implement the right tools for their specific security gaps.
The report encourages organizations to think about the AI risks that are specific to each category when developing strategies for addressing those risks.
End-user data control
End-user data control is about IT teams ensuring that employees use AI tools securely, to protect against data leaks. For the moment, the top risks in this area involve employees sharing sensitive data via AI chatbots and AI tools that are configured to allow the use of sensitive data for training.
End-user data control has three key subcategories, the report states:
- 1. Prevention of data loss via AI chatbots
- 2. SaaS access control for AI tooling within productivity platforms
- 3. Secure code creation, particularly with developer-facing tools
AI posture management
Risks from AI posture management involve issues such as misconfigurations in and unauthorized access to the infrastructure that supports AI. For good posture management, it is essential to know what AI exists in a particular environment and how it is configured.
Just as software bills of materials (SBOMs) can aid software composition analysis, ML-BOMs and AI-BOMs can track model provenance and dependencies for machine learning and AI components, the report notes:
"The goal is to prevent tampering or unintentional biases, such as tweaking a model to favor a specific product in a shopping platform. In short, AI posture tools aim to secure the infrastructure and lifecycle of AI systems, helping organizations identify, manage, and harden what powers their AI."
Application runtime protection
Runtime application protection is relevant mostly for organizations building their own AI-powered applications. The focus is on protecting internally grown apps against common AI threats such as prompt injection, bias injection, and model reconnaissance, where a threat actor might probe an AI model to discover its training data, behavior and structure for use in future attacks.
AI for security operations
The fourth category centers on using AI-powered security capabilities to strengthen existing security operations. This refers to a rapidly emerging breed of products built from the ground up with AI security in mind. Latio identified multiple vendors and products that are currently available to help organizations address risk associated with each of the other three AI risk areas.
Strategic considerations for AI security
To effectively address AI risks, security leaders need to understand these distinctions and figure out how to prioritize and implement them within existing budgets and organizational structures. The key is to not approach AI security as a standalone category. Rather it is more "a gluing together of use cases that intersect with nearly every aspect of modern enterprise security," Latio said.
"From data loss prevention to application protection to infrastructure posture, new risks are emerging just as quickly as new tools are being introduced to tackle them."
Tooling helps
Many existing security tools offer capabilities for mitigating at least some early AI risk. And a slew of new AI-native security tools is available to address more in-depth security requirements. "Ultimately, the organizations that thrive in this new landscape will be the ones that treat AI security as an extension of their broader security strategy, built on visibility, informed by context, and ready to evolve as the technology does," Latio said.
Dave Tyson, chief intelligence officer at Apollo Information Systems, said Latio's assessment that AI is not a standalone category is spot on. Rather, it’s a very powerful new technology class that organizations must evaluate and understand before making risk decisions, he said.
Tyson likened the frenetic pace of AI adoption to what enterprise adoption of cloud and mobile technologies.
"The challenge with AI is that it imposes multiple security problems in one paradigm."
—Dave Tyson
Security teams have to consider what AI has access to and how that data is protected and used. They also must consider all the different ways that adversaries can abuse and manipulate AI models and tools into taking unintended action, he said. "Then you have the complexity of connecting multiple models together to create multidisciplinary advantages with agentic AI and finally how its raw computing power can be leveraged outside of its native functionality as a weapon against others," Tyson said.
Keep pace with the velocity of change
Security controls must keep pace with the velocity at which an organization might be deploying AI-enabled capabilities. And the technologies they choose to implement these controls need to be targeted at the specific risk they are meant to address. The Latio report itself pointed to existing tools as offering at least some foundational capabilities for mitigating AI-related risk.
While an emerging slew of AI-native security tools offers superior capabilities for mitigating AI risk, there are instances where an organization might be able to extend these existing tool sets to AI security use cases, Latio said in its report:
"While incumbent vendors in these spaces may not offer the same feature depth as newer, AI-specific players, it is a viable strategy for security leaders to wait for these vendors to catch up rather than rushing to adopt a new stack."
Make decisions based on risk exposure
Tyson advocates that organizations make that decision based solely on risk exposure and risk tolerance. What security decision makers need to consider when weighing whether to adapt existing tools or adopt new AI-native capabilities is that adversaries aren't waiting around.
"Adversaries are already fully weaponizing AI to exploit the gaps created by this indecision. The inflection point for the selection of new security controls and countermeasures in contemporary security strategy has been based on the risk your organization faces from definable threats that are beyond your tolerance for that risk."
—Dave Tyson
Gal Moyal, of the CTO office at Noma Security, also argued that waiting around for existing tools to catch up might be a dangerous strategy. CISOs and cybersecurity experts alike agree that security for AI cannot wait for incumbent vendors to catch up, especially with the overnight emergence of agentic AI, Moyal said.
"The amount of unmanaged AI security debt and risk that will accumulate in an enterprise will be unmanageable if an enterprise waits for the dev cycle of a legacy cyber ISV to address [AI risks.] The demand and adoption curve … is already faster than any incumbent ISV can realistically support. Shadow AI is a very real risk to innovative companies."
—Gal Moyal
Moyal said another reason why organizations might want to deploy AI-native security tools sooner rather than later is that they can act as a catalyst for AI. "Instead of security holding AI release cycles back, an AI security platform provides a company the needed risk controls, allowing R&D to move quickly. They need AI security guardrails, not gates, as soon as possible," Moyal said
James McQuiggan, security awareness advocate at KnowBe4, said he recommends that organizations consider the specific risks that each AI tool addresses. Look for mappings to AI risks, such as prompt injection, data poisoning, or misconfigured agents. Ask whether the product can integrate existing security workflows and ensure that when implementing the tool, it shouldn't be standalone and should support various cyber and IT technologies, such as SIEM, SOAR, or XDR, he said.
Verify how the product handles visibility and logging and determine if it offers insights into its runtime model decisions, input/output tracking, and access. "What's the maturity of the threat-detection logic?" McQuiggan said. "Has the vendor conducted any red-team exercises, threat assessments, or modeling on the service to identify vulnerabilities?"
"AI is advancing at a faster rate than cybersecurity tools can be implemented and brought to market. Organizations must balance proactive risk management with forward-looking investments to minimize the risks associated with incorporating AI into their services or products."
—James McQuiggan
When assessing tools, risk assessment and threat modeling are key
One big mistake organizations make is confusing shopping for tools with solving a problem, said Neil Carpenter, security strategist at Minimus. As with shopping for any security capability, organizations need to start with risk assessment and threat modeling. They need to figure out what the problems they need to solve are before they enter the marketplace.
An organization’s velocity in building, deploying, and consuming AI services along with the sensitivity of the data and the threat models for those services are the only factors that should determine whether it’s prudent to wait for existing security solutions to embrace security AI workloads, he said.
"For those on the bleeding edge, adopting AI-native tools may be a necessity, but this isn’t universal. Many organizations are already struggling with addressing more universal security concerns. Shifting resources and budget from core security problems to AI-specific solutions may cause more risk than it solves for [if done incorrectly.]"
—Neil Carpenter
New NIST guidance identifies key AI and ML challenges. Learn why ReversingLabs Spectra Assure should be an essential part of your solution.
Keep learning
- Read the 2025 Gartner® Market Guide to Software Supply Chain Security. Plus: See RL's webinar for expert insights.
- Get the white paper: Go Beyond the SBOM. Plus: See the Webinar: Welcome CycloneDX's xBOM.
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: See our Webinar for discussion about the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and learn how RL discovered the novel threat,
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.