
Secure AI development guidance: What software teams need to know

With AI's spread, it's time to get a handle on security. U.K. and U.S. cyber-watchdogs say to start with Secure by Design, but don't stop there. Here are key takeaways from their new guidelines.

John P. Mello Jr., freelance technology writer

The use of generative AI systems has been spreading like wildfire, and if systems are not developed securely, the blaze could end up burning your organization. To help organizations tackle the problem, the United Kingdom's National Cyber Security Centre (NCSC) and the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recently released "Guidelines for Secure AI System Development." In it, they note:

"AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realized, it must be developed, deployed, and operated in a secure and responsible way."

The agencies' guidelines are significant not only because they represent an effort to get ahead of security problems with AI, but also because they've garnered the support of cyber-watchdogs from 16 other nations, including France, Germany, Italy, Japan, Australia, and New Zealand. The guidelines also represent input from 19 AI-centered organizations, including Amazon, Anthropic, Google, IBM, Microsoft, OpenAI, and RAND.

Michael Leach, legal compliance manager at Forcepoint, said the guidelines take aim at the primary concern at the heart of AI adoption and use: security. The timing is critical, he said, with teams now recognizing that security is paramount in all phases of the software development lifecycle (SDLC) if AI is to be used effectively and safely.

"Responsible government cooperation on the secure development, deployment, and operation of AI between countries like the U.S. and U.K. is what I believe most of us have been waiting for before we adopt and readily use current and future AI capabilities as a pioneering technology to move society forward in the right direction."
—Michael Leach

With AI now infiltrating almost every organization, it's time to get a handle on AI system security. Here's what your software team needs to know about the new NCSC and CISA guidance.


Secure by Design is the first step

Hitesh Sheth, president and CEO of security firm Vectra AI, said the new guidelines represent a genuine effort to deliver a much-needed global standard on secure AI design, and CISA's Secure by Design is a critical building block.

“With AI evolving at an unprecedented rate and businesses increasingly keen to adopt it, it’s vital that developers fully consider the importance of cybersecurity when creating AI systems at the earliest opportunity. Therefore, this Secure by Design approach should be welcomed."
—Hitesh Sheth

The Secure by Design guidelines should be used in conjunction with established cybersecurity, risk management, and incident response best practices. Those principles prioritize:

  • Taking ownership of security outcomes for customers
  • Embracing radical transparency and accountability
  • Building organizational structure and leadership so that Secure by Design is a top business priority

The new AI guidelines acknowledge that following Secure by Design principles requires significant resources throughout a system’s development lifecycle, investment in prioritizing features and mechanisms, and the implementation of tools that protect customers at each layer of the system design and across all stages of the SDLC.

But by following the new AI guidelines and Secure by Design, organizations can prevent costly redesigns later — and safeguard customers and their data in the process.

The importance of transparency

Sheth said that cooperation will empower developers across the globe to make more informed cybersecurity decisions about AI. “It’s encouraging to see the U.K. and U.S. work hand in hand, and with agencies from 16 other countries confirming they will endorse and co-seal the guidelines,” he said.

“Transparency is vital when it comes to AI development, so these guidelines should act as a springboard for the delivery of reliable and secure innovation that can transform how we live and work.”
—Hitesh Sheth

A key contributor to that transparency will be documentation. Producing comprehensive documentation supports both transparency and accountability, the AI guidelines note.

The guidelines urge AI system developers to document the creation, operation, and lifecycle management of any models, datasets, and meta or system prompts. That documentation should include security-relevant information such as the sources of training data (including fine-tuning data and human or other operational feedback), intended scope and limitations, guardrails, cryptographic hashes or signatures, retention time, suggested review frequency, and potential failure modes.

To facilitate that documentation, the guidance suggests using model cards, data cards, meta or system prompts, and software bills of materials (SBOMs).
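As a concrete illustration of that kind of record, here is a minimal sketch in Python that hashes a model artifact and its training data and writes a model-card-style JSON file. The field names, file paths, and values are assumptions for illustration, not a schema from the guidelines.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_model_card(model_path: Path, data_path: Path) -> dict:
    """Assemble a minimal model-card-style record covering the
    security-relevant fields the guidelines call out: data sources,
    scope and limitations, guardrails, hashes, retention, and
    review cadence. Field names here are illustrative."""
    return {
        "model_artifact": model_path.name,
        "model_sha256": sha256_of(model_path),
        "training_data_source": data_path.name,
        "training_data_sha256": sha256_of(data_path),
        "intended_scope": "internal support-ticket triage only",
        "known_limitations": ["not evaluated on non-English input"],
        "guardrails": ["output length cap", "PII redaction filter"],
        "retention_days": 365,
        "suggested_review_frequency": "quarterly",
        "documented_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    card = build_model_card(Path("model.safetensors"), Path("train.csv"))
    Path("model_card.json").write_text(json.dumps(card, indent=2))
```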

Four pillars of AI system security

The guidelines, which are closely aligned with the software lifecycle practices defined by the NCSC and CISA, are organized around four key areas:

  • Secure by Design: This encompasses raising staff awareness of AI security threats and mitigations, designing systems for security and functionality, assessing risks to the system through threat modeling, and considering security trade-offs when selecting AI models.
  • Secure development: This entails tracking assets and securing the supply chain; documenting data, models, and prompts; and managing technical debt throughout the system lifecycle.
  • Secure deployment: This includes securing infrastructure, developing incident response procedures, and releasing AI systems responsibly after security evaluations.
  • Secure operations: This includes monitoring system behaviors and inputs, updating security procedures, and sharing learned security lessons.

The how and why of AI is essential

Chaitanya Belwal, a senior director at the security firm Tanium, said that while the guidelines touch on the transparency of AI models, more should have been included on interpretability, providing some insight or explanation into how and why a model makes the predictions or decisions that it does. “While the document is intended for use at a high level, and it is not supposed to give specifics, one thing it should address is the interpretability of the models.”

“Right now, there are special notes on building machine-learning models, and it also discusses some extra procedures to handle adversarial machine learning (AML), including prompt injection attacks and handling data corruption. But if a model is not interpretable, the developers cannot address several of the challenges mentioned in the document.”
—Chaitanya Belwal
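The guidelines do not prescribe specific AML countermeasures. As one hedged illustration of the prompt-injection handling Belwal mentions, the Python sketch below screens untrusted input against a deny-list and keeps it clearly delimited from trusted instructions. The patterns, function names, and tags are hypothetical, and a real defense would layer on model-side guardrails, output filtering, and least-privilege design.

```python
import re

# Illustrative deny-list of phrases common in prompt-injection attempts.
# This is a sketch, not a vetted defense.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (the )?(system|hidden) prompt",
    r"disregard your (rules|guidelines|instructions)",
]

def screen_user_input(text: str) -> list[str]:
    """Return the list of suspicious patterns matched by the input."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, text, re.IGNORECASE)]

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Reject flagged input and keep untrusted text clearly delimited
    from trusted instructions so logs and reviews can tell them apart."""
    hits = screen_user_input(user_input)
    if hits:
        raise ValueError(f"input rejected, matched: {hits}")
    return (f"{system_prompt}\n\n"
            f"<untrusted_user_input>\n{user_input}\n</untrusted_user_input>")
```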

Deep neural networks are notorious for being black box–like, and the reasons a model assigns particular weights to specific inputs can be determined only by tracing all the steps in the system's development, Belwal explained.

“Guidance on interpretability of a model will help align the industry and force it to innovate new techniques and come up with an interpretability score for each model."
—Chaitanya Belwal
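To make the interpretability point concrete, one widely used model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much held-out accuracy drops. The Python sketch below applies it with scikit-learn on synthetic data; it is one illustrative approach, not a method the guidelines endorse, and no standard interpretability score exists today.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real training set.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out
# accuracy: a rough, model-agnostic signal of which inputs drive
# the model's decisions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```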

Software producers are on the hook

Securing AI development is going to be critical, especially since the consequences of not doing so could be very painful for providers.

The complexity of modern software supply chains makes it harder for end users to understand where responsibility for secure AI lies, the guidelines explained. The agencies added that users — whether end users or providers incorporating an external AI component — do not typically have sufficient visibility or expertise to fully understand, evaluate, or address risks associated with the AI systems they are using.

For that reason, the guidelines' authors reasoned, providers of AI components should bear the security burden for their products.

Software teams should implement security controls and mitigations where possible within their models, pipelines, and systems, the guidelines recommend, and, where settings are used, implement the most secure option as the default. Where risks cannot be mitigated, the provider should be responsible for informing users further down the supply chain of the risks that they and their own users are accepting and advising them on how to use the component securely.
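To illustrate what "the most secure option as the default" can look like in practice, here is a minimal sketch of a hypothetical AI inference-service configuration whose defaults are the restrictive choices, so any weakening of the posture is an explicit, reviewable decision. All names and settings are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceServiceConfig:
    """Hypothetical AI service settings in which every default is the
    most restrictive option; deployers must opt out explicitly."""
    require_auth: bool = True             # no anonymous access
    log_prompts: bool = True              # audit trail on by default
    allow_external_plugins: bool = False  # no third-party tool calls
    pii_redaction: bool = True            # redact before logging/returning
    max_output_tokens: int = 1024         # bound output by default

def load_config(overrides: dict | None = None) -> InferenceServiceConfig:
    """Apply explicit overrides on top of secure defaults, so weakening
    the posture is a deliberate, visible decision."""
    return InferenceServiceConfig(**(overrides or {}))

# Example: a deployer consciously enables plugins; everything else
# stays locked down.
config = load_config({"allow_external_plugins": True})
```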

Paul Brucciani, a cybersecurity advisor with WithSecure (formerly F-Secure), said that puts a lot of responsibility on software teams.

“It is interesting to note that the responsibility to develop secure AI lies with the provider, who is not only responsible for data curation, algorithmic development, design, deployment, and maintenance, but also for the security outcomes of users further down the supply chain."
—Paul Brucciani

With a system compromise potentially leading to tangible or widespread physical or reputational damage, significant loss of business operations, leakage of sensitive or confidential information, and legal implications, AI cybersecurity risks should be treated as critical, the AI guidance stresses.

