
NDAA puts AI cyber risk in the crosshairs

What does the future of AI security look like? The latest National Defense Authorization Act gives us a glimpse.


As the applications and adoption of generative AI explode, the sideline conversation about the cybersecurity risks posed by AI systems has grown much louder. That’s especially true as threats and attacks such as AI-generated malware, malicious AI models and attacks on the AI development pipeline shift from hypothetical to actual to (coming soon) routine. 

What’s the “fix” for these growing AI risks? That’s a tough question to answer, but I’ll say this: if your organization is eager to embrace AI, but worried about the cyber risks that come with it, you might want to take a close look at the proposed National Defense Authorization Act (NDAA) for ideas about the kinds of controls and requirements you will need to put in place.

The NDAA, for those of you who don’t follow defense policy closely, is a massive piece of legislation passed annually by the U.S. Congress that authorizes funding for the U.S. Department of Defense (lately: “The Department of War”) and other national security activities. In this age of political polarization and stalemates on Capitol Hill, the NDAA is notable because it is one of the few pieces of legislation that routinely passes with strong bipartisan support. That makes it a reliable indicator of Congressional priorities and concerns.

That’s why this year’s NDAA is so important. Among the mountain of traditional and mundane spending authorizations is a wide range of new requirements specific to the military’s use of artificial intelligence. Here are some of my takeaways after reviewing the (636-page!) Joint Explanatory Statement issued by Congress on the NDAA (PDF).

See webinar: AI Redefines Software Risk: Develop a New Playbook

SBOMs are needed for AI systems

Section 1512 of the NDAA calls for “any policy, regulation, guidance, or requirement issued by the Department of Defense relating to the use, submission, or maintenance of a software bill of materials” to apply also to “artificial intelligence systems, models, and software used, developed, or procured by the Department.”

This shouldn’t be a surprise. RL wrote back in June about the DoD’s introduction of the Software Fast Track (SWFT) program, an initiative that is part of DoD’s drive to modernize its software procurement process and IT infrastructure. DoD CIO Katie Arrington wrote in a memo announcing SWFT that the DoD would fast-track suppliers that offer usable software bills of materials (SBOMs) and continuous risk assessments, and that the SBOM expectations would extend to AI/ML systems.

The latest NDAA puts Congress squarely in line with DoD calls for AI SBOMs (aka “AI-BOMs”) and greater transparency into the AI supply chain:

“We believe that any policy, regulation, guidance, or requirement issued by the Department of Defense relating to the use, submission, or maintenance of a software bill of materials should also apply…to artificial intelligence systems, models, and software used, developed, or procured by the Department.”

The NDAA also calls on the Secretary of Defense to develop cybersecurity and governance policies that address threats such as AI model tampering, adversarial attacks, and AI supply chain vulnerabilities, along with physical and cybersecurity procurement requirements for AI systems (Section 1513).

Threats to the AI supply chain

Given the growing list of attacks targeting the AI supply chain, that makes sense. As far back as 2023, researchers were warning about AI supply chain threats like the compromise of 1,500 Hugging Face API tokens, which left millions of AI users vulnerable.

In the last year, RL researchers documented a steady string of open-source software (OSS) supply chain attacks on platforms such as npm and the Python Package Index (PyPI), the primary package repositories that AI/ML developers frequent. That includes the Shai-hulud worm, which compromised thousands of npm packages and the accounts of open-source maintainers, including developers at leading AI companies.

AI-centric open-source platforms have also fallen into the crosshairs of malicious actors. In February, for example, RL threat researcher Karlo Zanki discovered “nullifAI,” a campaign in which malicious ML models were distributed on the Hugging Face open-source model hub while evading the platform’s “Picklescan” security feature.
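
To see why pickle-based model formats are such an attractive target, consider the sketch below. Python’s pickle format lets a serialized object name any importable callable to run when it is loaded, so simply opening an untrusted model file can execute attacker-supplied code. The class name and harmless print() payload here are purely illustrative, not the actual nullifAI payload.

```python
import pickle

class MaliciousStub:
    """Stand-in for a booby-trapped object embedded in a model file."""

    def __reduce__(self):
        # Tells pickle how to "rebuild" this object at load time:
        # call print(...) with the given argument. An attacker could
        # just as easily reference os.system or another callable.
        return (print, ("arbitrary code ran while loading the model",))

# Serialize the stub, as an attacker would when planting it in a .pkl model.
blob = pickle.dumps(MaliciousStub())

# Merely deserializing the file runs the embedded call.
pickle.loads(blob)
```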

AI: Shields Up!

The NDAA makes clear: the days of simply hypothesizing about AI and ML threats are over. It’s time for a more proactive, “shields up” approach to AI security.

At RL, we’re focused on empowering that transition by providing development and end-user organizations with critical insights into the makeup of AI, as well as the tools needed to detect threats that may lurk in AI and ML technology. That includes RL’s ability to scan AI and ML model files, such as Python pickle (PKL) and Open Neural Network Exchange (ONNX) files, for evidence of tampering, malware, or unexplained behaviors, all without access to the underlying source code.
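
The kind of static inspection this implies can be illustrated with Python’s standard pickletools module, which walks a pickle’s opcode stream without executing it. The opcode list and helper below are a simplified sketch of the idea, not RL’s scanning engine.

```python
import pickletools

# Opcodes that allow a pickle to import modules or call objects at load time.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def flag_suspicious_pickle(path: str) -> list[str]:
    """Disassemble a pickle file without loading it and report opcodes
    that can trigger imports or calls during deserialization."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte offset {pos}: {arg!r}")
    return findings

# Usage (hypothetical file name):
# for finding in flag_suspicious_pickle("model.pkl"):
#     print(finding)
```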

With the ML-BOM capability in RL’s Spectra Assure product, a SAFE report can provide visibility into every ML model in your environment, identifying more than 8,000 publicly available models from sources like Hugging Face and offering detailed insights without requiring access to the underlying source code.
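
As a rough illustration of what an ML-BOM entry can capture, the snippet below builds a minimal, CycloneDX-style bill of materials for a single model dependency. The field values and the example model name are hypothetical, and this is not the actual SAFE report schema.

```python
import json

# Minimal CycloneDX-style ML-BOM fragment describing one model dependency.
# Illustrative only; field values and the model name are placeholders.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-org/sentiment-classifier",  # hypothetical model
            "version": "1.2.0",
            "externalReferences": [
                {
                    "type": "distribution",
                    "url": "https://huggingface.co/example-org/sentiment-classifier",
                }
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```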

No silver bullets in sight

Let’s be clear: there’s no “silver bullet” for the many cyber risks attached to generative AI and ML technologies. But it’s also too late to simply close our eyes to the risks that already exist. The NDAA’s clear emphasis on AI transparency via AI-BOMs, and its call to monitor and prevent attacks that rely on malicious or tampered-with AI models and other AI supply chain risks, is a signal to all of us that the days of magical thinking about AI are over and that a period of strategic thinking has finally arrived.
