Solutions designed to protect the software supply chain can also be used to protect machine-learning (ML) models from similar attacks. Two such solutions are the Supply-chain Levels for Software Artifacts (SLSA) framework and Sigstore.
SLSA (pronounced "salsa") is a security framework — a checklist of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure. Sigstore is an open-source project focused on improving supply chain security by providing a framework and tooling for securely signing and verifying software artifacts, including release files, container images, binaries, and software bills of materials (SBOMs).
Mihai Maruseac, Sarah Meiklejohn, and Mark Lodato argued in a recent post on the Google Security Blog that ML model makers should extend their use of software supply chain security tools to protect ML supply chains from attack. Using Sigstore, ML model builders can sign a model so that anyone using it can be confident it's the exact one the builder, or trainer, created. The team noted:
Signing models discourages model hub owners from swapping models, addresses the issue of a model hub compromise, and can help prevent users from being tricked into using a bad model.
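To make concrete what such a signature binds, here is a minimal sketch of signing and verifying a model artifact. It uses a raw Ed25519 keypair from the third-party cryptography package purely for illustration; Sigstore's actual keyless flow replaces key management with short-lived certificates tied to an OIDC identity. The file name model.safetensors is a placeholder, not anything from the Google post.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    """Hash the model file in chunks so large artifacts need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Builder side: sign the digest of the released model artifact.
private_key = Ed25519PrivateKey.generate()
digest = file_digest("model.safetensors")   # placeholder file name
signature = private_key.sign(digest)

# Consumer side: recompute the digest and check it against the signature.
public_key = private_key.public_key()
try:
    public_key.verify(signature, file_digest("model.safetensors"))
    print("Model matches what the builder signed.")
except InvalidSignature:
    print("Model was modified after signing; do not load it.")
```

In the Sigstore flow, the keypair is ephemeral: a certificate from Fulcio vouches for the signer's identity and the signature is recorded in the Rekor transparency log, so consumers verify an identity rather than managing public keys themselves.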
Meanwhile, SLSA, which describes how a software artifact is built and implements controls to prevent tampering, can provide information that model signing does not cover, such as a compromised source control system or training process, or vulnerability injection. The team wrote:
Our vision is to include specific ML information in a SLSA provenance file, which would help users spot an undertrained model or one trained on bad data. Upon detecting a vulnerability in an ML framework, users can quickly identify which models need to be retrained, thus reducing costs.
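The sketch below shows what such a provenance file could look like: an in-toto Statement carrying a SLSA v1 provenance predicate. The outer structure (_type, subject, predicateType, buildDefinition, runDetails) follows the published SLSA v1 format, but the ML-specific entries (framework version, epochs, dataset digest) are hypothetical illustrations of the kind of training metadata the Google team proposes to record, not fields defined by the spec.

```python
import hashlib
import json

def sha256_hex(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# In-toto Statement wrapping a SLSA v1 provenance predicate. The subject is
# the trained model; the training metadata under externalParameters and
# resolvedDependencies is hypothetical, illustrating the ML information the
# post proposes to include in provenance.
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{
        "name": "model.safetensors",                        # placeholder artifact
        "digest": {"sha256": sha256_hex("model.safetensors")},
    }],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "buildType": "https://example.com/ml-training/v1",  # hypothetical buildType
            "externalParameters": {
                "framework": "pytorch-2.1",      # illustrative training metadata
                "epochs": 10,
                "hyperparameters": {"lr": 3e-4},
            },
            "resolvedDependencies": [{
                "uri": "https://example.com/datasets/train.tar",  # hypothetical dataset
                "digest": {"sha256": "0" * 64},  # placeholder; real digest of the snapshot
            }],
        },
        "runDetails": {"builder": {"id": "https://example.com/trainer"}},
    },
}

print(json.dumps(statement, indent=2))
```

With a file like this, answering "which models were trained with the vulnerable framework version?" becomes a query over provenance rather than guesswork, which is the cost reduction the quote describes.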
While the tools are a great first step for securing AI applications, they're not a complete solution. Here's what your security team needs to know about using SLSA and Sigstore to secure ML models.
Digital signatures, when used correctly, can ensure that software, including AI platforms, has not been tampered with, said ReversingLabs field CISO Matt Rose. But signatures are no panacea.
Matt Rose: The problem is that the data of the AI platform is typically not secured in the same way. You need to worry about the supply chain for the software itself and the data it uses to function.
Steve Wilson, chief product officer for Exabeam, said that by integrating digital signatures into AI development and deployment processes, organizations can significantly enhance the security and trustworthiness of their ML models and the data they are built upon. This, in turn, contributes to the broader goal of ensuring responsible and trustworthy AI systems.
Steve Wilson: While digital signatures are a powerful tool for enhancing supply chain security, they are not a panacea and come with certain limitations and challenges, particularly in AI and machine-learning models.
Wilson cited a number of issues associated with digital signatures and AI. Dynamic data that is continuously changing or being updated, for example, poses challenges for digital signature verification, since the signatures can become outdated rapidly. This limitation underscores the need for mechanisms beyond the verification of static, foundational model artifacts to secure the training data and dynamic data that play a critical role in the performance and behavior of AI models, as the sketch below illustrates.
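Here is a small sketch of why dynamic data undermines a one-time signature, under the simplifying assumption that the dataset is a single file whose contents keep changing: the digest recorded at signing time simply stops matching the live data. The file name and records are invented for illustration.

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

data = Path("training_data.jsonl")           # hypothetical dataset file
data.write_text('{"text": "example", "label": 1}\n')

signed_digest = digest(data)                 # what a signature would bind to

# The dataset is later updated, a routine event for dynamic data.
data.write_text('{"text": "example", "label": 1}\n'
                '{"text": "new", "label": 0}\n')

if digest(data) != signed_digest:
    print("Signature is stale: the signed snapshot no longer matches the live data.")
```

A signature attests to a frozen artifact, not to a stream; real deployments would need to version dataset snapshots and re-sign on each release.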
Sigstore and SLSA are great for what they were designed to do, which is to secure the software supply chain, Rose said. The problem: Even if the AI software package itself is not compromised, the data that the AI platform uses may still be tainted.
Matt Rose: These approaches need to be extended beyond just securing the software itself.
Exabeam's Wilson said that the nuanced nature of ML systems brings a distinct set of challenges and considerations to supply chain security. The SLSA framework serves as a solid foundation, he explained, but adapting it to the unique landscape of AI and large language models (LLMs) requires a deeper level of contemplation and, potentially, the evolution of the framework itself.
Steve Wilson: While SLSA lays a strong groundwork for supply chain security, the distinctive aspects of AI systems call for a tailored approach. This might involve extending SLSA, integrating it with other standards like ML-BOM, and fostering a broader understanding and community engagement to ensure supply chain security in the rapidly evolving landscape of AI and large language models.
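Wilson's mention of ML-BOM points at CycloneDX, whose 1.5 specification added component types for models and data. Below is a minimal, schema-trimmed sketch of such a BOM as a Python dict; the component names and versions are invented for illustration, and a real ML-BOM would carry far more detail (including a model card).

```python
import json

# Minimal CycloneDX 1.5-style ML-BOM: the model and the data it was trained
# on appear as first-class components alongside software dependencies.
# Names and versions are invented for illustration; see the CycloneDX
# specification for the full schema.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",    # hypothetical model
            "version": "1.2.0",
        },
        {
            "type": "data",
            "name": "reviews-corpus-2023-q4",  # hypothetical training dataset
        },
        {
            "type": "library",
            "name": "torch",
            "version": "2.1.0",
        },
    ],
}

print(json.dumps(ml_bom, indent=2))
```

Listing the dataset and training framework next to the model is what lets a consumer ask supply chain questions (which models used this corpus, which depend on this library version) with ordinary SBOM tooling.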
Jeremy Newberry, a cybersecurity architect and strategist with Merlin Cyber, said that SLSA and Sigstore are good starts on the overall requirements, but they don't answer the question of how to secure models that keep growing or self-improving.
Jeremy Newberry: They feel like a legacy approach to a new problem, and I believe a more modular and adaptive approach needs to be taken.
Google's approach to AI supply chain security is a good first step toward securing ML models, but it is fundamentally flawed, said Merlin Cyber solutions engineer Dean Webb.
Dean Webb: The Foundation Model Transparency Index rated Google's AI at only 40% transparent, so we need more from the AI vendors than their instructions on how we, the customers, can shoulder the full security load. We need their transparency and cooperation in sharing that security load.