The National Institute of Standards and Technology’s latest guidance on how to protect applications from adversarial machine learning (ML) should serve as a solid starting point for understanding and addressing the risks of adversarial ML, but it doesn’t offer a total solution: the fundamental challenges of securing AI remain a work in progress.
But there are six key steps every organization should be taking right now (as outlined in this recent post on RL Blog) to protect AI applications and the underlying supply chains upon which those applications are built.
Here are three ways ReversingLabs Spectra Assure can ensure that your AI applications are safe to use, whether you’re incorporating an AI model into your own applications or purchasing software with AI embedded within.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
ML: A Rising Threat
As the use of AI as a coding tool grows, so do the risks of adversarial ML attacks. Most recently, malware slipped undetected into an ML model uploaded to the Hugging Face model repository, evading the platform's built-in detection mechanisms. The nullifAI malware was only discovered after ReversingLabs threat researchers analyzed the model with Spectra Assure.
How did this ML malware get past Hugging Face's defenses? To be shared on the Hugging Face platform, models must first be stored in a portable data serialization format: a binary format that application security tools, including software composition analysis (SCA) tools, can't process.
Serialization is the process of converting a trained model into a shareable file format; deserialization is the process of unpacking that file so the model can be loaded back into memory and used. In this case, the model had been serialized using Pickle, and the data included Python code that executes automatically upon deserialization. That allowed the malware to create new processes and run arbitrary commands on any system that attempted to deserialize the AI model data.

Serialized files (such as Pickle files) can contain more than just model data: they can also include hidden malicious code that runs automatically when deserialized. That's why it is dangerous to load serialized files without checking them first.
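To see why Pickle deserialization is risky, consider this minimal, deliberately harmless Python sketch. It is an illustration of the technique, not the actual nullifAI payload: the object's `__reduce__` hook tells Pickle to call `os.system` the moment the bytes are loaded.

```python
import os
import pickle

# Illustrative only: a class whose __reduce__ hook instructs pickle to call
# os.system with an attacker-chosen argument during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # Harmless command for demonstration; real malware could run arbitrary
        # shell commands, spawn processes, or open network connections.
        return (os.system, ("echo 'code ran during deserialization'",))

serialized = pickle.dumps(MaliciousPayload())

# Simply loading the bytes executes the embedded command -- no method on the
# resulting "model" object ever needs to be called.
pickle.loads(serialized)
```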
1. Scan Beyond the Source Code
Spectra Assure can take a fully compiled binary that contains an ML model and detect hidden threats and vulnerabilities; it essentially deserializes the file to see what's inside. In this case, Spectra Assure detected the malware because it recognizes popular serialized model formats: it identified the file format, extracted and deconstructed the data, and flagged the presence of malware. Spectra Assure also detects vulnerabilities, secrets, license issues, and tampering. It then compared the data against ReversingLabs' threat repository, one of the largest such databases in the world, which contains signatures for known-bad ML code and includes threat hunting policies specific to AI.
Spectra Assure also has other engines, models, and heuristics that enable it to detect malware, vulnerabilities, and other threats. It can perform behavioral analysis to identify attempts to make unsafe function calls, create new processes, execute commands, open network connections to exfiltrate data, or exhibit an array of other unusual behaviors that might indicate malicious intent. It also assigns each discovered risk a priority and risk category for prioritization and gives you a full report. (Learn more: Detecting Malware in ML and LLM Models with Spectra Assure)
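Spectra Assure's analysis engines and signatures are proprietary, but the general idea of inspecting serialized model data without ever executing it can be sketched with Python's standard pickletools module. The deny-list and heuristics below are illustrative assumptions, not RL's detection logic.

```python
import pickletools

# Illustrative deny-list only; real detection relies on far richer signatures
# and threat intelligence. Note os.system may appear as posix.system/nt.system.
SUSPICIOUS_IMPORTS = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "Popen"),
    ("builtins", "exec"),
    ("builtins", "eval"),
}

def scan_pickle(path: str) -> list[str]:
    """Statically walk pickle opcodes (without deserializing) and flag
    imports of callables commonly abused for code execution."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    recent_strings = []  # rough heuristic for STACK_GLOBAL operands
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS_IMPORTS:
                findings.append(f"GLOBAL import of {module}.{name}")
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings = (recent_strings + [arg])[-2:]
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) == 2:
            module, name = recent_strings
            if (module, name) in SUSPICIOUS_IMPORTS:
                findings.append(f"STACK_GLOBAL import of {module}.{name}")
    return findings

# Example usage: print(scan_pickle("model.pkl"))
```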

2. Inventory Your AI Use
Knowing where and how ML models exist in your organization is key to getting a handle on areas of potential risk, but a traditional software bill of materials (SBOM) is not enough when it comes to securing ML models. Spectra Assure offers multiple xBOM capabilities that go beyond the traditional SBOM, including a machine learning BOM (ML-BOM) that creates a bill of materials for all AI and ML data sets and models, and a SaaSBOM that maps the software's relationships to SaaS components, including anything the code reaches out to and touches.
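The format Spectra Assure emits is its own, but the CycloneDX standard's machine-learning-model component type gives a sense of what an ML-BOM entry can capture. The Python dict below is a hedged illustration with assumed field values, not Spectra Assure output.

```python
# Hedged illustration of an ML-BOM entry, loosely modeled on the CycloneDX
# "machine-learning-model" component type. All values here are placeholders.
ml_bom_component = {
    "type": "machine-learning-model",
    "name": "sentiment-classifier",
    "version": "1.2.0",
    "hashes": [{"alg": "SHA-256", "content": "<model file digest>"}],
    "properties": [
        {"name": "serialization-format", "value": "pickle"},
        {"name": "source", "value": "huggingface.co/<org>/<model>"},
        {"name": "training-dataset", "value": "internal-reviews-2024"},
    ],
}
```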

3. Secure the Development Toolchain
Spectra Assure analysis protects your entire CI/CD pipeline, training environments, and deployment containers by alerting you when software has been tampered with anywhere along the software supply chain. It can quickly identify which components present a risk.
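Spectra Assure's tamper detection rests on its own analysis, but the underlying principle of checking artifacts against a known-good baseline at each pipeline stage can be sketched in a few lines of Python. The manifest file name and artifact paths below are illustrative assumptions.

```python
import hashlib
import json
import sys

def sha256(path: str) -> str:
    """Compute a SHA-256 digest of a build artifact."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative manifest produced at build time: {"artifact path": "expected digest"}
with open("release-manifest.json") as f:
    manifest = json.load(f)

tampered = [path for path, digest in manifest.items() if sha256(path) != digest]
if tampered:
    # Fail the pipeline stage if any artifact no longer matches its baseline.
    sys.exit(f"Tampering suspected in: {', '.join(tampered)}")
```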
One Surefire Way to Minimize ML Model Risk
AI is here to stay, and its use is growing — you can’t avoid it, nor do we recommend you do. But if you are going to embed an ML model into your product and sell it to the world, or release it to internal constituents, you need to ensure that it’s secure. Likewise, if you’re planning to use third-party software with embedded AI features, you need to ensure that it’s clean.
Before allowing a third-party LLM into your development environment or authorizing the use of any third-party software with embedded ML models, use Spectra Assure to check for embedded malware, vulnerabilities or other potentially risky behaviors. It’s the only way you can thoroughly vet the software as you would any other application. Only then can you adopt it with confidence.
Learn more about how Spectra Assure detects malware in ML and LLM models from Dhaval Shah, Senior Director of Product Management at ReversingLabs.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.