AppSec & Supply Chain Security · April 8, 2025

The race to secure the AI/ML supply chain is on — get out front

Software supply chain risks from artificial intelligence and machine learning are getting real. Here are key insights from RL’s new report.

Carolynn van Arsdale, Writer, ReversingLabs

The explosive growth in the use of generative artificial intelligence (gen AI) has overwhelmed enterprise IT teams. To keep up with the demand for new AI-based features in software — and to deliver software faster in general — development teams have embraced machine learning-based AI coding tools.

Hugging Face, a leading AI development platform, said in September 2024 that it had hit a milestone by hosting 1 million ML models — up from just 300,000 in 2023. That fast growth comes with a price. Increasing complexity makes software supply chain security essential.

With the rise of AI coding, the race to secure the software supply chain is heating up. AI/ML models are now distributed and consumed like any other software package, giving threat actors more avenues for attack. Here are key takeaways from the AI/ML risk section of ReversingLabs' 2025 Software Supply Chain Security Report.

Download: 2025 Software Supply Chain Security Report | See the SSCS Report Webinar

AI and supply chain risk gets real, fast

When AI began making headlines with the introduction of OpenAI’s ChatGPT in 2022, it became clear to security practitioners that threat actors would tap the technology to improve long-standing attack methods, such as spearphishing and malware. In 2024, security teams started to process the new opportunities that AI and ML — and the technology ecosystem that supports them — are creating for malicious actors.

Attackers now have multiple avenues to choose from when targeting software supply chains, leveraging weak links to infiltrate sensitive development or IT organizations where AI technology is in use. In February 2025, ReversingLabs threat researcher Karlo Zanki discovered two malicious ML models residing on Hugging Face that managed to evade the platform's security scanning feature.

The malicious ML models found on Hugging Face were using Python’s popular Pickle format, which allows for serialization of the ML model. As Dhaval Shah, senior director of product management at RL, recently wrote in a technical blog post, Pickle files are “inherently unsafe” because they allow embedded Python code to run when the model is loaded. Despite this, Pickle is still a widely used file format that won’t be going away anytime soon.

In his post, Shah stressed that the hidden Python code in the ML model on Hugging Face could have serious consequences: executing malicious commands, inserting malware onto internal systems, sending unauthorized communications, or even corrupting other locally installed Pickle files.
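To see why loading an untrusted Pickle file is so dangerous, consider this minimal sketch. The `Malicious` class and its payload are hypothetical illustrations: any Python class can define `__reduce__`, which instructs Pickle to call an arbitrary callable during deserialization — no opt-in from the loading code is required.

```python
import pickle

# A hypothetical malicious class for illustration. __reduce__ tells pickle
# to call an arbitrary callable when the object is deserialized.
class Malicious:
    def __reduce__(self):
        # A real attack would invoke os.system or similar; print stands in
        # here to show that merely loading the data executes code.
        return (print, ("payload executed during pickle.loads()",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # triggers the embedded call on load
```

This is the mechanism that lets a "model file" carry executable code: the victim only calls `pickle.loads()` (directly or via a framework's model-loading helper), and the attacker's callable runs anyway.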

RL researchers have also documented a steady string of open-source software (OSS) supply chain attacks on platforms such as npm and the Python Package Index (PyPI), the primary package repositories that AI/ML developers frequent. The recent discoveries by Zanki and the RL research team show how Picklescan, the tool Hugging Face uses to detect suspicious Pickle files, failed to flag the two malicious ML models as unsafe.
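Scanners in this space generally work by inspecting a Pickle stream's opcodes without executing it. The sketch below uses Python's standard-library `pickletools` to do a simplified version of that kind of audit; the opcode list and `audit_pickle` helper are assumptions for illustration, not Picklescan's actual logic.

```python
import pickle
import pickletools

# Opcodes that can import or call code during unpickling. A simplified
# allow/deny heuristic, not a complete or production-grade list.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def audit_pickle(data: bytes) -> list[str]:
    """Disassemble a pickle stream (without executing it) and report
    opcodes that could trigger code execution on load."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPS:
            findings.append(opcode.name)
    return findings

benign = pickle.dumps({"weights": [0.1, 0.2]})
print(audit_pickle(benign))  # → [] — a plain dict needs no dangerous opcodes
```

The limitation RL's research highlighted is that such static opcode scanning can be evaded — for example, by corrupting the stream in ways the scanner rejects but the loader still partially executes — which is why behavioral analysis of ML files matters.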

OWASP leads the charge on AI/ML development best practices

While the supply chain threats tied to AI and ML infrastructure seem to be outpacing the security community's ability to manage such risks, the Open Worldwide Application Security Project (OWASP) foundation has undertaken important efforts to get a handle on risk. In November 2024, OWASP released its Top 10 Risks for LLM Applications. The resource lists the most prominent risks facing AI and ML infrastructure today, such as prompt injection; unbounded consumption; vector and embedding weaknesses; system prompt leakage; and excessive agency.

OWASP also released CycloneDX v1.6 last year, which introduced a machine-readable format for software bills of materials (SBOMs) that can be applied to ML models. Shortly after, OWASP released its LLM AI Security and Governance Checklist, which raises the bar for development teams by promoting AI and ML security best practices.
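In practice, recording an ML model in a CycloneDX SBOM means emitting a component with the spec's `machine-learning-model` type. The snippet below builds a minimal such document; the component name and version are hypothetical, and a real ML-BOM would typically add hashes, provenance, and model-card metadata.

```python
import json

# A minimal sketch of a CycloneDX 1.6 BOM declaring one ML model.
# "machine-learning-model" is a component type defined by the CycloneDX
# spec; the name and version below are made up for illustration.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",
            "version": "1.0.0",
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```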

These resources from OWASP are a great place for organizations to start. However, discoveries such as nullifAI, RL's name for the Pickle-based evasion technique found on Hugging Face, make it increasingly clear that more advanced tools for assessing software supply chain security are now required to get a handle on risk. The Enduring Security Framework working group recommends application security (AppSec) tools that employ binary analysis and reproducible builds.

AI risk makes software supply chain security essential

With legacy AppSec tooling lagging behind supply chain risk, enterprises are now exposed to AI/ML risks across both ML infrastructure and commercial software products that feature AI capabilities. This exposure makes it essential for development, AppSec, and third-party cyber-risk management (TPCRM) teams to vet their AI and ML infrastructure for supply chain risks.

AI and ML are now fully interconnected with the software supply chain. That makes tooling that can identify unsafe function calls and suspicious or malicious behaviors in ML files, particularly in risky formats such as Pickle and in packages from the primary OSS repositories, critical for managing software risk across your enterprise.

Dive deeper into the state of AI/ML risk with RL’s 2025 Software Supply Chain Security Report.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: AppSec & Supply Chain Security
