Products & Technology | May 20, 2025

NIST Adversarial ML Guidance: How RL Can Secure Your Organization

New NIST guidance identifies ML challenges. Here’s why ReversingLabs Spectra Assure should be an essential part of your solution.

Dan Petrillo

The National Institute of Standards and Technology's latest guidance on protecting applications from adversarial machine learning (ML) is a solid starting point for understanding and addressing adversarial ML risk, but it doesn't offer a total solution: the fundamental challenges of securing AI remain a work in progress.

But there are six key steps every organization should be taking right now (as outlined in this recent post on RL Blog) to protect AI applications and the supply chains those applications are built on.

Here are three ways ReversingLabs Spectra Assure can ensure that your AI applications are safe to use, whether you're incorporating an AI model into your own applications or purchasing software with AI embedded in it.

Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security

ML: A Rising Threat

As the use of AI as a coding tool grows, so do the risks of adversarial ML attacks. Most recently, malware slipped undetected into an ML model uploaded to the Hugging Face model repository, getting past the platform's built-in detection mechanisms. The nullifAI malware was only discovered after ReversingLabs threat researchers analyzed the model with Spectra Assure.

How did this ML malware get past Hugging Face's defenses? To be shared on the Hugging Face platform, models must first be stored in a portable data serialization format: a binary format that application security tools, including software composition analysis (SCA) tools, can't process.

(Serialization is the process of converting a trained model into a shareable file format; deserialization is the process of unpacking the file so that the model can be loaded back into memory and used. In this case, the model had been serialized using Pickle, and the data included Python code that could execute automatically upon deserialization. In this way, the malware could create new processes and execute arbitrary commands on any system that attempted to deserialize the AI model data.)

[Figure: serialization process flow chart]

Serialized files, such as Pickle files, can contain more than just model data; they can also include hidden malicious code that runs automatically when deserialized. That's why it's dangerous to load serialized files without checking them.
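To make that concrete, here is a minimal, self-contained sketch of how a Pickle payload can execute code on load. The class name and the harmless echo command are invented for illustration; real payloads such as nullifAI hide far less obvious commands inside model data.

```python
import os
import pickle


class EvilPayload:
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild this object on load.
        # Returning (callable, args) makes pickle.loads() call os.system()
        # the moment the bytes are deserialized. No method call is needed.
        return (os.system, ("echo 'arbitrary code ran during deserialization'",))


malicious_bytes = pickle.dumps(EvilPayload())

# Simply loading the serialized data runs the embedded command:
pickle.loads(malicious_bytes)
```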

1. Scan Beyond the Source Code

Spectra Assure can take a fully compiled binary containing an ML model and detect hidden threats and vulnerabilities; it essentially deserializes the file to see what's inside. In this case, Spectra Assure detected the malware because it analyzed the binary file itself: it recognizes popular serialized model formats, so it identified the file format, extracted the data, deconstructed it, and detected the presence of malware. Spectra Assure also detects vulnerabilities, secrets, license issues, and tampering. It then compared the data against ReversingLabs' threat repository, one of the largest such databases in the world, which also contains signatures for known-bad ML code and includes threat hunting policies specific to AI.
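As a rough illustration of what inspecting a serialized format (rather than executing it) looks like, the sketch below walks a Pickle file's opcode stream with Python's pickletools and flags imports of modules commonly abused for code execution. The file name and module list are assumptions for the example, and this is not how Spectra Assure is implemented.

```python
import pickletools

# Modules whose import inside a pickle is a strong signal of code execution.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "posix", "nt"}


def find_suspicious_imports(path):
    """Walk the opcode stream of a Pickle file without deserializing it."""
    with open(path, "rb") as f:
        data = f.read()

    findings = []
    pushed_strings = []  # rough tracking so STACK_GLOBAL can be resolved
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("UNICODE", "BINUNICODE", "SHORT_BINUNICODE", "BINUNICODE8"):
            pushed_strings.append(arg)
        elif opcode.name == "GLOBAL":
            # Older protocols: arg is "module name", e.g. "os system".
            module = str(arg).split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(str(arg))
        elif opcode.name == "STACK_GLOBAL" and len(pushed_strings) >= 2:
            # Newer protocols: the module and name were pushed as the two
            # preceding strings (approximate, but fine for a sketch).
            module, name = pushed_strings[-2], pushed_strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"{module}.{name}")
    return findings


print(find_suspicious_imports("model.pkl"))  # assumed local file name
```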

Spectra Assure also has other engines, models, and heuristics that enable it to detect malware, vulnerabilities, and other threats. It can perform behavioral analysis to identify attempts to make unsafe function calls, create new processes, execute commands, open network connections to exfiltrate data, or exhibit an array of other unusual behaviors that might indicate malicious intent. It also classifies each discovered risk by priority and risk category so you can prioritize remediation, and gives you a full report. (Learn more: Detecting Malware in ML and LLM Models with Spectra Assure)


2. Inventory Your AI Use

Knowing where and how ML models exist in your organization is key to getting a handle on areas of potential risk, but a traditional software bill of materials (SBOM) is not enough when it comes to securing ML models. Spectra Assure has multiple xBOM capabilities that go beyond the traditional SBOM. It includes a machine learning BOM (ML-BOM) that creates a bill of materials for all AI and ML data sets and models, and a SaaSBOM that identifies the software's relationships to SaaS components, including anything the code reaches out to and touches.

[Figure: SaaSBOM and ML-BOM comparison]
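To show the basic inventory idea in miniature, here is a toy sketch that walks a source tree, records every file that looks like a serialized model, and hashes it so later scans can tell whether it changed. The extension list is an assumption for the example; a real ML-BOM or SaaSBOM captures far more, including data sets, licenses, and service relationships.

```python
import hashlib
from pathlib import Path

# Common serialized-model extensions; illustrative, not exhaustive.
MODEL_EXTENSIONS = {".pkl", ".pickle", ".pt", ".pth", ".onnx", ".h5", ".safetensors"}


def inventory_models(root="."):
    """Return a minimal inventory of model artifacts under a directory."""
    entries = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in MODEL_EXTENSIONS:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries.append({
                "path": str(path),
                "sha256": digest,
                "size_bytes": path.stat().st_size,
            })
    return entries


for entry in inventory_models():
    print(entry)
```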

3. Secure the Development Toolchain

Spectra Assure analysis protects your entire CI/CD pipeline, training environments, and deployment containers by alerting you when software has been tampered with anywhere along the software supply chain. It can quickly identify which components present a risk.
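As a generic illustration (not Spectra Assure's tamper detection), a CI/CD stage can at minimum recompute an artifact's digest and fail the pipeline if it no longer matches the value recorded at build time. The file paths below are assumptions for the example.

```python
import hashlib
import sys


def sha256_of(path):
    """Stream a file through SHA-256 so large artifacts aren't loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


ARTIFACT = "release/app-bundle.tar.gz"     # assumed build output
DIGEST_FILE = "release/app-bundle.sha256"  # digest recorded at build time

expected = open(DIGEST_FILE).read().split()[0]
if sha256_of(ARTIFACT) != expected:
    sys.exit("Digest mismatch: artifact may have been tampered with; failing the pipeline.")
print("Artifact digest verified.")
```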

One Surefire Way to Minimize ML Model Risk

AI is here to stay, and its use is growing — you can’t avoid it, nor do we recommend you do. But if you are going to embed an ML model into your product and sell it to the world, or release it to internal constituents, you need to ensure that it’s secure. Likewise, if you’re planning to use third-party software with embedded AI features, you need to ensure that it’s clean.

Before allowing a third-party LLM into your development environment or authorizing the use of any third-party software with embedded ML models, use Spectra Assure to check for embedded malware, vulnerabilities or other potentially risky behaviors. It’s the only way you can thoroughly vet the software as you would any other application. Only then can you adopt it with confidence.

Learn more about how Spectra Assure detects malware in ML and LLM models from Dhaval Shah, Senior Director of Product Management at ReversingLabs.

Learn more about Spectra Assure | Talk with an expert

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: Products & Technology
