
RL Blog

Can AI-based software supply chain risk be tamed by NeMo Guardrails?

Nvidia's tool is among the first to promise to manage the risk from generative AI. Here's what it can do — and an analysis of the scope of risk from AI.

John P. Mello Jr., freelance technology writer

Given the pressure on developers to push the latest and greatest application into production, it's no surprise that they've turned to tools that employ artificial intelligence and large language models (LLMs) to accelerate their productivity.

OpenAI's ChatGPT has almost become a household name, and it now offers developers Codex, which powers GitHub Copilot. Amazon is offering CodeWhisperer, and the BigCode Project, a joint venture of Hugging Face and ServiceNow, recently introduced StarCoder, which, unlike the proprietary OpenAI and Amazon tools, is available under an Open Responsible AI License (OpenRAIL).

The use of generative AI tools to develop software was top of mind for many security professionals at RSA Conference 2023. The internet is rife with anecdotes about generative AI screwups in the consumer sphere. Arjan Durresi, a professor of computer science at Indiana University-Purdue University Indianapolis (Purdue-Indy), is concerned about this first wave of generative AI.

"You can get some very wrong answers with these GPT-type tools. If you're applying the tools to a critical application, you can create big trouble. Mark my words: Sooner or later there will be harm involved."
Arjan Durresi

To avoid the potential harm that could be caused by applications developed with generative AI tools, Nvidia has introduced NeMo Guardrails, one of the first tools meant to keep programs built with LLMs accurate, appropriate, on topic — and secure.

Here's a look at this early attempt at managing the risk from generative AI — along with analysis of the scope of that risk to the software supply chain.

[ See special report: The Evolution of App Sec | Get eBook: Why Traditional App Sec Testing Fails on Supply Chain Security ]

NeMo Guardrails gets out front

NeMo Guardrails allows developers to set up three types of boundaries for AI-based integrations with developer tools, as sketched in the example following this list:

  • Topical guardrails: These prevent apps from wandering into irrelevant areas. A retailer, for instance, wouldn't want its AI customer service assistant to start discussing the weather with a customer.
  • Safety guardrails: These ensure that an app using generative AI provides accurate and appropriate information. They can be used to prevent the app from using inappropriate language and to require it to draw on credible sources.
  • Security guardrails: These restrict apps from making connections to third-party programs known to be unsafe.
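
For context, rails like these are written in Colang, the modeling language that ships with NeMo Guardrails. Here is a minimal sketch of a topical rail for the retail example above; the flow and message names are illustrative choices, not taken from Nvidia's documentation:

    define user ask about weather
      "What's the weather like today?"
      "Is it going to rain this weekend?"

    define bot deflect off topic
      "I can help with questions about your order or our products, but not the weather."

    define flow weather chitchat
      user ask about weather
      bot deflect off topic

When a customer's message matches the "ask about weather" intent, the flow steers the bot to the canned deflection instead of letting the LLM improvise a response.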

NeMo Guardrails is an open-source offering designed to work with a broad range of LLM-enabled applications and the tools that enterprise app developers already use. One example Nvidia cited in its launch announcement is Zapier, an automation platform used by more than 2 million businesses.

It can also run on top of LangChain, an open-source toolkit that developers are rapidly adopting to connect third-party applications to LLMs.
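
To make the integration concrete, here is a short Python sketch using the open-source nemoguardrails package. The config directory path and the user message are placeholders, and the exact return shape may vary between package versions:

    from nemoguardrails import LLMRails, RailsConfig

    # Load the Colang flows plus model settings from a config directory
    # (the "./config" path is illustrative)
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config)

    # Every prompt now passes through the configured topical, safety,
    # and security rails before and after the underlying LLM call
    response = rails.generate(messages=[
        {"role": "user", "content": "What's the weather like today?"}
    ])
    print(response["content"])

An existing LangChain model can also be passed to LLMRails via its llm argument, which is the hook behind the LangChain integration described above.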

NeMo Guardrails is also being incorporated into the Nvidia NeMo framework, which includes everything users need to train and tune language models using a company's proprietary data. The framework is part of Nvidia AI Foundations, a family of cloud services for businesses that want to create and run custom generative AI models based on their own datasets and domain knowledge.

Much of the framework is already available as open-source code on GitHub, Nvidia added, and enterprises can get it as a complete, supported package as part of the Nvidia AI Enterprise software platform. It is also available as a service.

Good first step, but no guarantees on security

NeMo Guardrails provides developers with a way to establish boundaries and limitations on how generative AI works in their applications, but it offers no guarantees of security, said Michael Erlihson, a principal data scientist at Salt Security. "This tool may help developers in establishing ethical guidelines and mitigating harmful or malicious outcomes resulting from AI models, but the effectiveness of these guidelines depends on the developers' knowledge of potential risks and their capability to implement suitable measures of control," he said.

"While guardrails can help to mitigate certain risks, they do not guarantee complete protection and safety of your application." 
Michael Erlihson

Reece Hayden, a research analyst at ABI Research, said tools such as NeMo Guardrails will be effective for low-code/no-code application development by putting structural and quality guarantees on the code generated by the LLM or a fine-tuned model.

"Given that AI will increasingly democratize application development, guardrails that ensure effective LLM output will have a large impact on the accuracy, quality, and security of new applications."
Reece Hayden

Purdue-Indy's Durresi said NeMo Guardrails is better than nothing, but he worries that the tool could create a "false feeling of safety."

"Developers may use them and think they're safe. That's not true. It boils down to who is building these applications. They have to guarantee the quality of the output, not the tool itself."
Arjan Durresi

[ Related read: Could code-writing AI wipe out humans via software backdoors? ]

Why comprehensive supply chain security is key

As more generative AI tools are introduced into the development cycle to automate the building of applications, it is important that organizations think about their overall security strategy, advised Kern Smith, a vice president at Zimperium.

"While AI is a powerful tool, it falls very much into the category of 'trust but verify' with regards to the output it produces. It is important that organizations conduct assessments using third-party tooling to validate that what was created meets best practice standards and is secure."
Kern Smith

Smith said that's even more true for AI tools that could be susceptible to third-party manipulation or to the introduction of supply chain-style attacks, similar to what has been seen with third-party software development kits (SDKs).

"The introduction of AI into development is exciting but also proves that regardless of the methods or tools used to develop apps, the same security principles and external validation requirements still apply."
Kern Smith

ReversingLabs Field CISO Matt Rose said the risk of generative AI extends beyond the immediate development lifecycle, noting, "AI is great if a query includes nonsensitive data, and the AI is creating something that's not proprietary to anybody, but if you're creating something that includes proprietary data, that's very concerning."

"Software is all about speed of delivery of new products, features, and capabilities. I worry that people are putting sensitive data into an AI engine to generate a document or white paper or something like that. You could be giving away the keys to the castle by trying to solve a problem quickly."
Matt Rose

Roger Grimes, a defense evangelist at KnowBe4, said organizations need to recognize the limitations of AI up front. "Human programmers innately understand thousands of things that don't have to be put in a scoping document," he said.

"Every human involved understands these cultural requirements without them having to be said. AI, until it is better trained, will simply do what it is told, and if it isn't told everything correctly and completely, it's going to make mistakes that were driven by a lack of inclusive specifications."
Roger Grimes

Get up to speed on key trends and learn expert insights with The State of Software Supply Chain Security 2024. Plus: Explore RL Spectra Assure for software supply chain security.
