RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabsReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Request a demo
Menu
AppSec & Supply Chain SecurityApril 17, 2025

NIST's adversarial ML guidance: 6 action items for your security team

ML attacks are evolving, putting mitigation a step behind. Here’s what to focus on — and why traditional AppSec tooling is not up to the job.

robert l mitchell headshot
Robert L. MitchellRobert L. Mitchell
FacebookFacebookXX / TwitterLinkedInLinkedInblueskyBlueskyEmail Us
decapitated robot toy

The National Institute of Standards and Technology’s latest guidance, on how to secure artificial intelligence (AI) applications against manipulation and attacks achieved with adversarial machine learning (ML), represents a major step toward establishing a standard framework for understanding and mitigating the growing threats to AI applications, but it's still insufficient. Fortunately, there are six steps your organization can take right now to address adversarial ML vulnerabilities.

AI application security should be a priority. AI use is already widespread, permeating most development workflows. In a 2024 GitHub survey, more than 97% of respondents said they have used AI coding tools at work, and a 2025 Elite Brains study concluded that AI now generates 41% of all code — 256 billion lines were written by AI last year alone.

Dhaval Shah, senior director of product management at ReversingLabs (RL), said attacks may be designed to “exploit capabilities during the development, training, and deployment phases of the ML lifecycle,” as the NIST guidance states.

This prevalence makes understanding adversarial machine learning threats particularly urgent, as vulnerable AI systems are increasingly embedded throughout the software supply chain.

Dhaval Shah

Model sharing is another area fraught with risk, especially with regard to issues within ML models, such as serialization and deserialization, said Shah. Pickle, commonly used to compress AI models, is inherently unsafe because it allows embedded Python code to run when the model loads, and that opens the door to malicious actors, who can use it to inject harmful code into the model files, he said.

When you serialize an ML model, you're essentially packing it into a file format that can be shared. It's similar to compressing a complex software application into a single file for easy distribution. But certain file formats allow code execution during deserialization.

Dhaval Shah

Legacy application security testing (AST), both static and dynamic, as well as software composition analysis (SCA), miss such threats, Shah said. “These security risks are hidden, and they’re not covered by traditional SAST tools because those tools don’t analyze code for intent, only weaknesses and known vulnerabilities,” he said.

Malcolm Harkins, chief security and trust officer at the AI security firm HiddenLayer, said that to deal with modern supply chain threats, organizations need to incorporate better tooling and visibility into their entire development ecosystems. Many organizations have already suffered adversarial ML attacks, but only 25% of security and IT teams have the awareness and a level of acumen that they need to start to secure AI, he said.

The existing enterprise security stack does not protect AI — particularly AI models — from being attacked.

Malcolm Harkins

Here's what you need to know about NIST's adversarial ML guidance — and six key actions every organization should be taking right now.

Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security

NIST guidance: A good place to start

RL’s Shah said the 2025 edition of the NIST guidance is a good place for enterprises to get their feet wet on preparing for adversarial ML. It provides a taxonomy, arranged in a conceptual hierarchy, that includes key types of ML methods, lifecycle stages of attack, and attacker goals, objectives, capabilities, and knowledge. “This organizational approach helps companies systematically assess their vulnerabilities,” he said.

The guidance also explicitly addresses securing AI supply chains, managing risks posed by autonomous AI agents, and securing enterprise-grade generative AI (gen AI) integrations through detailed reference architectures. However, Shah emphasized NIST’s own acknowledgment of the guidance’s limitations: "[There] are theoretical problems with securing AI algorithms that simply haven't been solved yet," and available defenses currently lack robust assurances of complete risk mitigation.

The guide is best viewed as an essential starting point rather than a comprehensive solution.

Dhaval Shah

Shah provided a breakdown of the good and bad aspects of NIST’s adversarial ML guidance.

The good

  • The guidance provides standardized terminology in adversarial ML that the ML and cybersecurity communities can both agree upon.
  • It includes a comprehensive taxonomy of attack types (evasion, data poisoning, privacy attacks, misuse attacks, supply chain model attacks, and direct prompt and indirect prompt attacks) across both predictive and gen AI systems.
  • It addresses attacks against all viable learning methods (supervised, unsupervised, semi-supervised, federated, reinforcement) across multiple data modalities.
  • It includes an index and glossary to help with understanding, navigating and referencing the taxonomy.

The bad

  • The guidance acknowledges that "at this stage with the existing technology paradigms, the number and power of attacks are greater than the available mitigation techniques."
  • It also states that there are "theoretical limits on the general strength of current mitigation techniques" such as data sanitization and model guardrails.
  • It also calls the defenses AI experts have devised for adversarial attacks thus far are "incomplete at best."
  • It advises that organizations must still "apply traditional cybersecurity measures to harden the model and the platform it runs on" and develop a risk budget they can accept.

Shah stressed that the guidance is useful — but not a comprehensive solution.

Unfortunately, the framework doesn’t solve the fundamental challenges of secure AI, but it does provide a structured approach to understanding, categorizing, and beginning to address them.

Dhaval Shah

6 steps to protect your organization from adversarial ML

Here are six key actions every organization should be taking right now to protect AI applications and the supply chain that surrounds them.

  • Inventory AI use. Know where and how AI-generated code, models, or decisions are being introduced in your organization. Be sure to include ML bills of materials (ML-BOMs) to highlight dependencies from third-party models and packages.
  • Scan beyond the source code. Traditional AST misses binary, container, and model-level tampering. Use binary analysis tools to detect hidden threats such as malware and embedded secrets.
  • Generate and monitor SBOMs. Include models and datasets in your software bills of materials. Your SBOM needs to go beyond code to include model provenance. “You need the equivalent of an AI-BOM — an AI bill of materials,” HiddenLayer's Harkins said.
  • Secure the tool chain. Protect CI/CD pipelines, training environments, and deployment containers. Think of the entire ML lifecycle, not just the model.
  • Align with NIST lifecycle stages. Use the NIST taxonomy to stress-test your development stages against known threat vectors.
  • Establish a response plan. Have a dedicated incident response playbook for AI-related attacks, including rollback and retraining strategies.

Be vigilant — and be ready to adapt as attacks evolve

While these measures will significantly improve your organization’s security posture with respect to AI application threats, organizations need to stay on the alert as attacks continue to evolve, and they must keep up with the latest mitigation approaches — especially since 70% of CISOs say their organizations are on the bleeding edge as innovators, early adopters, or early majority adopters of new AI technologies, as a 2024 Evanta Community Pulse survey found.

For example, agentic AI — autonomous AI systems that can take action based on high-level goals — present their own set of risks. This up-and-coming AI technology may be vulnerable to agent hacking, a type of prompt injection where attackers insert malicious instructions into data ingested by AI agents, and may also be vulnerable to remote code execution, database exfiltration, and automated phishing attacks.

Also, recent studies have shown that advanced AI models sometimes resort to deception when faced with losing scenarios. “In a security context, that could mean misrepresenting capabilities or gaming internal metrics,” Shah said, “In the next 12 months, organizations should approach agentic AI with caution.

Harkins said that a survey HiddenLayer did of 250 senior IT and security folks found that about three-quarters had already seen some sort of AI incident or breach — "and 45%, indicated that issue was because of malware embedded in a model they got from a public repository.” That means the time to start taking action is now, Harkins said.

Identify and catalog your AI assets, do risk assessments and threat modeling for the attack vectors for AI, perform model robustness testing and validation, and make sure your models are strengthened to withstand adversarial attacks.

Malcolm Harkins

While the NIST framework now includes guidance on securing AI supply chains, dealing with risks posed by autonomous AI agents and securing enterprise-grade gen AI integrations through detailed reference architectures requires a new set of tooling, including binary analysis, Shah said.

ReversingLabs’ focus on detecting malware, tampering, malicious implants, and embedded threats helps organizations better manage the complexity and unpredictability of agentic and AI-driven systems.

Dhaval Shah

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain Security

More Blog Posts

AI coding racing

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Learn More about Can AppSec keep pace with AI coding?
Can AppSec keep pace with AI coding?
Finger on map

LLMmap puts its finger on ML attacks

Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Learn More about LLMmap puts its finger on ML attacks
LLMmap puts its finger on ML attacks
Vibeware bad vibes

Vibeware: More than bad vibes for AppSec

Threat actors are leveraging the freewheeling vibe-coding trend to deliver malicious software at scale.

Learn More about Vibeware: More than bad vibes for AppSec
Vibeware: More than bad vibes for AppSec
CRA accelerates advantage

The CRA is coming: Are you ready?

Here's how the EU's Cyber Resilience Act will reshape the software industry — and how that accelerates advantages.

Learn More about The CRA is coming: Are you ready?
The CRA is coming: Are you ready?

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBlueskyRSSRSS
Back to Top