
Products & Technology | December 11, 2025

NDAA puts AI cyber risk in the crosshairs

What does the future of AI security look like? The latest National Defense Authorization Act gives us a glimpse.

Saša Zdjelar

As the applications and adoption of generative AI explode, the sideline conversation about the cybersecurity risks posed by AI systems has grown much louder. That’s especially true as threats and attacks such as AI-generated malware, malicious AI models and attacks on the AI development pipeline shift from hypothetical to actual to (coming soon) routine. 

What’s the “fix” for these growing AI risks? That’s a tough question to answer, but I’ll say this: if your organization is eager to embrace AI, but worried about the cyber risks that come with it, you might want to take a close look at the proposed National Defense Authorization Act (NDAA) for ideas about the kinds of controls and requirements that you will need to embrace. 

The NDAA, for those of you who don’t follow defense policy closely, is a massive piece of legislation passed annually by the U.S. Congress that authorizes funding for the U.S. Department of Defense (lately: “the Department of War”) and other national security activities. In this age of political polarization and stalemate on Capitol Hill, the NDAA is notable as one of the few pieces of legislation that routinely passes with strong bipartisan support. That makes it a reliable indicator of Congressional priorities and concerns.

That’s why this year’s NDAA is so important. Among the mountain of traditional and mundane spending authorizations is a wide range of new requirements specific to the military’s use of artificial intelligence. Here are some of my takeaways after reviewing the (636-page!) Joint Explanatory Statement issued by Congress on the NDAA (PDF).

See webinar: AI Redefines Software Risk: Develop a New Playbook

SBOMs are needed for AI systems

Section 1512 of the NDAA calls for “any policy, regulation, guidance, or requirement issued by the Department of Defense relating to the use, submission, or maintenance of a software bill of materials” to apply also to “artificial intelligence systems, models, and software used, developed, or procured by the Department." 

This shouldn’t be a surprise. RL wrote back in June about the DoD’s introduction of the Software Fast Track (SWFT) program, an initiative that is part of DoD’s drive to modernize its software procurement process and IT infrastructure. DoD CIO Katie Arrington wrote in a memo announcing SWFT that the DoD would fast-track suppliers that offer usable software bills of materials (SBOMs) and continuous risk assessments and that the SBOM expectations would extend to AI/ML systems. 

The latest NDAA puts Congress squarely in line with DoD calls for AI SBOMs (aka “AI-BOMs”) and greater transparency into the AI supply chain:

“We believe that any policy, regulation, guidance, or requirement issued by the Department of Defense relating to the use, submission, or maintenance of a software bill of materials should also apply…to artificial intelligence systems, models, and software used, developed, or procured by the Department."
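What might such an AI-BOM look like in practice? The CycloneDX SBOM standard (version 1.5 and later) added a machine-learning-model component type, which offers one plausible shape. The sketch below is illustrative only: the model name, version, and URL are hypothetical, and the entry is not a compliance-checked document.

```python
import json

# A minimal, illustrative ML-BOM entry in the spirit of CycloneDX 1.5+,
# which introduced a "machine-learning-model" component type.
# All component values here are hypothetical placeholders.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-sentiment-classifier",  # hypothetical model
            "version": "2.1.0",
            "hashes": [
                # Pins the exact model artifact that was procured
                {"alg": "SHA-256", "content": "<digest of the model file>"}
            ],
            "externalReferences": [
                # Records where the model was sourced (e.g., a model hub repo)
                {"type": "distribution", "url": "https://example.org/models/sentiment"}
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```

The point is the same one the NDAA makes for traditional SBOMs: a procuring organization can verify what model it actually received, where it came from, and whether it has changed.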

The NDAA also calls for the Secretary of Defense to develop cybersecurity and governance policies to address threats such as AI model tampering, adversarial attacks, and AI supply chain vulnerabilities, along with physical and cybersecurity procurement requirements for AI systems (Section 1513).

Threats to the AI supply chain

Given the growing list of attacks targeting the AI supply chain, that makes sense. As far back as 2023, researchers were warning about AI supply chain threats like the compromise of 1,500 Hugging Face API tokens, which left millions of AI users exposed.

In the last year, RL researchers documented a steady string of open-source software (OSS) supply chain attacks on platforms such as npm and the Python Package Index (PyPI), the primary package repositories that AI/ML developers frequent. That includes the Shai-hulud worm, which compromised thousands of npm packages and the accounts of open source maintainers, including developers at leading AI companies.

AI-centric open-source platforms have also fallen into the crosshairs of malicious actors. In February, for example, RL threat researcher Karlo Zanki discovered “nullifAI” — a campaign in which malicious ML models were deployed on the Hugging Face open source directory while evading the platform’s “Picklescan” security feature.
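Why are pickle-serialized models such an attractive target? Because Python's pickle format lets an object dictate how it is reconstructed, deserializing an untrusted model file can execute arbitrary code. A minimal demonstration (the "model" class and its benign payload are invented for illustration):

```python
import pickle

class MaliciousModel:
    # pickle calls __reduce__ to learn how to rebuild this object.
    # An attacker can return any callable plus its arguments, and
    # pickle.loads() will invoke that callable during deserialization.
    def __reduce__(self):
        return (eval, ("6 * 7",))  # benign stand-in for arbitrary code

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # runs eval("6 * 7") just by loading the file
print(result)  # 42 -- no MaliciousModel object is ever reconstructed
```

Simply loading such a file is enough to trigger the payload, which is why downloading an ML model in pickle format is closer to running an untrusted executable than to opening a data file.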

AI: Shields Up!

The NDAA makes clear: the days of simply hypothesizing about AI and ML threats are over. It’s time for a more proactive, “shields up” approach to AI security.

At RL, we’re focused on empowering that transition by providing development and end-user organizations with critical insights into the makeup of AI systems, as well as the tools needed to detect threats that may lurk in AI and ML technology. That includes RL's ability to scan AI and ML model file formats such as Python pickle (PKL) and Open Neural Network Exchange (ONNX) for evidence of tampering, malware, or unexplained behaviors, without access to the underlying source code.

With the ML-BOM capability in RL's Spectra Assure product, the Spectra Assure SAFE Report can provide visibility into every ML model in your environment. A SAFE report can identify more than 8,000 publicly available models from sources like Hugging Face and offer detailed insights — without requiring access to the underlying source code.

No silver bullets in sight

Let’s be clear: there’s no “silver bullet” for the many cyber risks attached to generative AI and ML technologies. But it's also too late to simply close our eyes to the risks that already exist. The NDAA’s clear emphasis on AI transparency via AI-BOMs — and its call to monitor and prevent attacks that rely on malicious or tampered-with AI models and other AI supply chain risks — is a signal to all of us that the days of magical thinking about AI are over, and that a period of strategic thinking has finally arrived.


Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: Products & Technology
