Spectra Assure Free Trial
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free TrialThe National Institute of Standards and Technology’s latest guidance, on how to secure artificial intelligence (AI) applications against manipulation and attacks achieved with adversarial machine learning (ML), represents a major step toward establishing a standard framework for understanding and mitigating the growing threats to AI applications, but it's still insufficient. Fortunately, there are six steps your organization can take right now to address adversarial ML vulnerabilities.
AI application security should be a priority. AI use is already widespread, permeating most development workflows. In a 2024 GitHub survey, more than 97% of respondents said they have used AI coding tools at work, and a 2025 Elite Brains study concluded that AI now generates 41% of all code — 256 billion lines were written by AI last year alone.
Dhaval Shah, senior director of product management at ReversingLabs (RL), said attacks may be designed to “exploit capabilities during the development, training, and deployment phases of the ML lifecycle,” as the NIST guidance states.
Dhaval ShahThis prevalence makes understanding adversarial machine learning threats particularly urgent, as vulnerable AI systems are increasingly embedded throughout the software supply chain.
Model sharing is another area fraught with risk, especially with regard to issues within ML models, such as serialization and deserialization, said Shah. Pickle, commonly used to compress AI models, is inherently unsafe because it allows embedded Python code to run when the model loads, and that opens the door to malicious actors, who can use it to inject harmful code into the model files, he said.
Dhaval ShahWhen you serialize an ML model, you're essentially packing it into a file format that can be shared. It's similar to compressing a complex software application into a single file for easy distribution. But certain file formats allow code execution during deserialization.
Legacy application security testing (AST), both static and dynamic, as well as software composition analysis (SCA), miss such threats, Shah said. “These security risks are hidden, and they’re not covered by traditional SAST tools because those tools don’t analyze code for intent, only weaknesses and known vulnerabilities,” he said.
Malcolm Harkins, chief security and trust officer at the AI security firm HiddenLayer, said that to deal with modern supply chain threats, organizations need to incorporate better tooling and visibility into their entire development ecosystems. Many organizations have already suffered adversarial ML attacks, but only 25% of security and IT teams have the awareness and a level of acumen that they need to start to secure AI, he said.
Malcolm HarkinsThe existing enterprise security stack does not protect AI — particularly AI models — from being attacked.
Here's what you need to know about NIST's adversarial ML guidance — and six key actions every organization should be taking right now.
Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security
RL’s Shah said the 2025 edition of the NIST guidance is a good place for enterprises to get their feet wet on preparing for adversarial ML. It provides a taxonomy, arranged in a conceptual hierarchy, that includes key types of ML methods, lifecycle stages of attack, and attacker goals, objectives, capabilities, and knowledge. “This organizational approach helps companies systematically assess their vulnerabilities,” he said.
The guidance also explicitly addresses securing AI supply chains, managing risks posed by autonomous AI agents, and securing enterprise-grade generative AI (gen AI) integrations through detailed reference architectures. However, Shah emphasized NIST’s own acknowledgment of the guidance’s limitations: "[There] are theoretical problems with securing AI algorithms that simply haven't been solved yet," and available defenses currently lack robust assurances of complete risk mitigation.
Dhaval ShahThe guide is best viewed as an essential starting point rather than a comprehensive solution.
Shah provided a breakdown of the good and bad aspects of NIST’s adversarial ML guidance.
Shah stressed that the guidance is useful — but not a comprehensive solution.
Dhaval ShahUnfortunately, the framework doesn’t solve the fundamental challenges of secure AI, but it does provide a structured approach to understanding, categorizing, and beginning to address them.
Here are six key actions every organization should be taking right now to protect AI applications and the supply chain that surrounds them.
While these measures will significantly improve your organization’s security posture with respect to AI application threats, organizations need to stay on the alert as attacks continue to evolve, and they must keep up with the latest mitigation approaches — especially since 70% of CISOs say their organizations are on the bleeding edge as innovators, early adopters, or early majority adopters of new AI technologies, as a 2024 Evanta Community Pulse survey found.
For example, agentic AI — autonomous AI systems that can take action based on high-level goals — present their own set of risks. This up-and-coming AI technology may be vulnerable to agent hacking, a type of prompt injection where attackers insert malicious instructions into data ingested by AI agents, and may also be vulnerable to remote code execution, database exfiltration, and automated phishing attacks.
Also, recent studies have shown that advanced AI models sometimes resort to deception when faced with losing scenarios. “In a security context, that could mean misrepresenting capabilities or gaming internal metrics,” Shah said, “In the next 12 months, organizations should approach agentic AI with caution.
Harkins said that a survey HiddenLayer did of 250 senior IT and security folks found that about three-quarters had already seen some sort of AI incident or breach — "and 45%, indicated that issue was because of malware embedded in a model they got from a public repository.” That means the time to start taking action is now, Harkins said.
Malcolm HarkinsIdentify and catalog your AI assets, do risk assessments and threat modeling for the attack vectors for AI, perform model robustness testing and validation, and make sure your models are strengthened to withstand adversarial attacks.
While the NIST framework now includes guidance on securing AI supply chains, dealing with risks posed by autonomous AI agents and securing enterprise-grade gen AI integrations through detailed reference architectures requires a new set of tooling, including binary analysis, Shah said.
Dhaval ShahReversingLabs’ focus on detecting malware, tampering, malicious implants, and embedded threats helps organizations better manage the complexity and unpredictability of agentic and AI-driven systems.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free Trial