Given the pressure on developers to push the latest and greatest application into production, it's no surprise that they've turned to tools that employ artificial intelligence and large language models (LLMs) to accelerate their productivity.
OpenAI's ChatGPT has become almost a household name, and the company also offers developers Codex, the model that powers GitHub Copilot. Amazon offers CodeWhisperer, and the BigCode Project, a joint venture of Hugging Face and ServiceNow, recently introduced StarCoder, which, unlike the proprietary OpenAI and Amazon tools, is available under an Open Responsible AI License (OpenRAIL).
The use of generative AI tools to develop software was top of mind for many security professionals at RSA Conference 2023. The internet is rife with anecdotes about generative AI screwups in the consumer sphere. Arjan Durresi, a professor of computer science at Indiana University-Purdue University Indianapolis (Purdue-Indy), is concerned about the first wave of generative AI.
Arjan Durresi: "You can get some very wrong answers with these GPT-type tools. If you're applying the tools to a critical application, you can create big trouble. Mark my words: Sooner or later there will be harm involved."
To avoid the potential harm that could be caused by applications developed with generative AI tools, Nvidia has introduced NeMo Guardrails, one of the first tools designed to keep programs built with LLMs accurate, appropriate, on topic, and secure.
Here's a look at this early attempt at managing the risk from generative AI — along with analysis of the scope of that risk to the software supply chain.
NeMo Guardrails allows developers to set up three types of boundaries for AI-based integrations with developer tools:

- Topical guardrails, which keep an application's responses within a defined domain and prevent it from veering off into unwanted areas
- Safety guardrails, which ensure responses are accurate and appropriate, filtering out unwanted language
- Security guardrails, which restrict the LLM to making connections only with external applications known to be safe
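For a sense of how this works in practice, here is a minimal sketch of a topical rail built with the open-source nemoguardrails Python package. The Colang flow, the example utterances, and the OpenAI model named in the YAML are illustrative assumptions, not details from Nvidia's launch materials.

```python
# A minimal topical guardrail, sketched with the open-source nemoguardrails
# package. The flow below intercepts off-topic (political) questions before
# they reach the underlying LLM.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

colang_content = """
define user ask about politics
  "what do you think about the government?"
  "which party should I vote for?"

define bot refuse politics
  "I'm a coding assistant, so I don't discuss politics."

define flow politics
  user ask about politics
  bot refuse politics
"""

# Build the rails configuration from the inline YAML and Colang definitions.
config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# The off-topic prompt is matched by the flow and answered with the canned
# refusal instead of being passed through to the model unfiltered.
response = rails.generate(messages=[
    {"role": "user", "content": "Which party should I vote for?"}
])
print(response["content"])
```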
NeMo Guardrails is an open-source offering that can work with all the tools that enterprise app developers use. It is designed to work with a broad range of LLM-enabled applications, Nvidia noted in its launch, such as Zapier, an automation platform used by more than 2 million businesses.
It can also run on top of LangChain, an open-source toolkit that developers are rapidly adopting to plug third-party applications into the power of LLMs.
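As a sketch of what that layering can look like, the snippet below wraps a simple LangChain chain with the RunnableRails class from the nemoguardrails LangChain integration. The prompt, the model choice, and the config directory are assumptions for illustration.

```python
# A sketch of running NeMo Guardrails on top of a LangChain chain via the
# RunnableRails wrapper from the nemoguardrails LangChain integration.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

# An ordinary LangChain chain: prompt template piped into a chat model.
prompt = ChatPromptTemplate.from_template("Answer this developer question: {input}")
chain = prompt | ChatOpenAI()

# Wrapping the chain routes every exchange through the rails, so off-topic
# or unsafe inputs and outputs are intercepted before they reach the app.
config = RailsConfig.from_path("./config")  # a config.yml plus Colang (.co) files
chain_with_rails = RunnableRails(config) | chain

print(chain_with_rails.invoke({"input": "How should I store API keys?"}))
```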
NeMo Guardrails is also being incorporated into the Nvidia NeMo framework, which includes everything users need to train and tune language models using a company's proprietary data. NeMo is part of Nvidia AI Foundations, a family of cloud services for businesses that want to create and run custom generative AI models based on their own datasets and domain knowledge.
Much of the framework is already available as open-source code on GitHub, Nvidia added, and enterprises can get it as a complete, supported package as part of the Nvidia AI Enterprise software platform. It is also available as a service.
NeMo Guardrails provides developers with a way to establish boundaries and limitations on how generative AI works in their applications, but it offers no guarantees of security, said Michael Erlihson, a principal data scientist at Salt Security. "This tool may help developers in establishing ethical guidelines and mitigating harmful or malicious outcomes resulting from AI models, but the effectiveness of these guidelines depends on the developers' knowledge of potential risks and their capability to implement suitable measures of control," he said.
Michael Erlihson: "While guardrails can help to mitigate certain risks, they do not guarantee complete protection and safety of your application."
Reece Hayden, a research analyst at ABI Research, said tools such as NeMo Guardrails will be effective for low-code/no-code application development by putting structural and quality guarantees on the code generated by the LLM or a fine-tuned model.
Reece Hayden: "Given that AI will increasingly democratize application development, guardrails that ensure effective LLM output will have a large impact on the accuracy, quality, and security of new applications."
Purdue-Indy's Durresi said NeMo Guardrails is better than nothing, but he worries that it could create a "false feeling of safety."
Arjan Durresi: "Developers may use them and think they're safe. That's not true. It boils down to who is building these applications. They have to guarantee the quality of the output, not the tool itself."
As more generative AI tools are introduced into the development cycle to automate building applications, it is important that organizations think about their overall security strategy, advised Kern Smith, a vice president for Zimperium.
Kern Smith: "While AI is a powerful tool, it falls very much into the category of 'trust but verify' with regards to the output it produces. It is important that organizations conduct assessments using third-party tooling to validate that what was created meets best practice standards and is secure."
Smith said that's even more true for AI tools that could be susceptible to third-party manipulation or the introduction of supply chain-style attacks, similar to what has been seen with third-party software development kits (SDKs).
Kern Smith: "The introduction of AI into development is exciting but also proves that regardless of the methods or tools used to develop apps, the same security principles and external validation requirements still apply."
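As a hypothetical illustration of that "trust but verify" approach, the sketch below gates AI-generated Python code on a third-party static analyzer, here the open-source Bandit scanner, before it can be merged. The file path and the zero-findings threshold are assumptions for illustration.

```python
# Hypothetical "trust but verify" gate: run AI-generated Python code through
# Bandit, a third-party static analyzer, and block any code with findings.
import json
import subprocess
import sys


def verify_generated_code(path: str) -> bool:
    """Return True only if Bandit reports no security findings for the file."""
    result = subprocess.run(
        ["bandit", "-f", "json", "-q", path],  # JSON report, quiet logging
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout)
    for issue in report.get("results", []):
        print(f"{issue['filename']}:{issue['line_number']} "
              f"[{issue['issue_severity']}] {issue['issue_text']}")
    return not report.get("results")


if __name__ == "__main__":
    generated_file = "generated/handler.py"  # wherever the AI output was written
    if not verify_generated_code(generated_file):
        sys.exit("AI-generated code failed security review; do not merge.")
```

In a real pipeline, a check like this would sit alongside human review and software supply chain analysis rather than replace them.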
ReversingLabs Field CISO Matt Rose said the risk of generative AI extends beyond the immediate development lifecycle, noting, "AI is great if a query includes nonsensitive data, and the AI is creating something that's not proprietary to anybody, but if you're creating something that includes proprietary data, that's very concerning."
Matt Rose: "Software is all about speed of delivery of new products, features, and capabilities. I worry that people are putting sensitive data into an AI engine to generate a document or white paper or something like that. You could be giving away the keys to the castle by trying to solve a problem quickly."
Roger Grimes, a defense evangelist at KnowBe4, said organizations need to recognize the limitations with AI up front. "Human programmers innately understand thousands of things that don't have to be put in a scoping document," he said.
Roger Grimes: "Every human involved understands these cultural requirements without them having to be said. AI, until it is better trained, will simply do what it is told, and if it isn't told everything correctly and completely, it's going to make mistakes that were driven by a lack of inclusive specifications."