RL Blog

AppSec & Supply Chain Security | May 30, 2023

Can AI-based software supply chain risk be tamed by NeMo Guardrails?

Nvidia's tool is among the first to promise to manage the risk from generative AI. Here's what it can do — and an analysis of the scope of risk from AI.

John P. Mello Jr., freelance technology writer

Given the pressure on developers to push the latest and greatest application into production, it's no surprise that they've turned to tools that employ artificial intelligence and large language models (LLMs) to accelerate their productivity.

OpenAI's ChatGPT has almost become a household name, and it now offers developers Codex, which powers GitHub Copilot. Amazon is offering CodeWhisperer, and the BigCode Project, a joint venture of Hugging Face and ServiceNow, recently introduced StarCoder, which, unlike the proprietary OpenAI and Amazon tools, is available under an Open Responsible AI License.

Use of generative AI tools to develop software was top of mind for many security professionals at RSA Conference 2023. The internet is rife with anecdotes about generative AI screwups in the consumer sphere. Arjan Durresi, a professor of computer science at Indiana University-Purdue University Indianapolis (Purdue-Indy), is concerned about the first wave of generative AI.

You can get some very wrong answers with these GPT-type tools. If you're applying the tools to a critical application, you can create big trouble. Mark my words: Sooner or later there will be harm involved.

Arjan Durresi

To avoid the potential harm that could be caused by applications developed with generative AI tools, Nvidia has introduced NeMo Guardrails, which is one of the first tools available that is meant to keep programs built with LLMs accurate, appropriate, on topic — and secure.

Here's a look at this early attempt at managing the risk from generative AI — along with analysis of the scope of that risk to the software supply chain.

Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security

NeMo Guardrails gets out front

NeMo Guardrails allows developers to set up three types of boundaries for AI-based integrations with developer tools:

  • Topical guardrails: These prevent apps from wandering into irrelevant areas. A retailer, for instance, wouldn't want its AI customer service assistant to start discussing the weather with a customer.
  • Safety guardrails: These ensure that accurate and appropriate information is provided by an app using generative AI. These guardrails can be used to prevent the app from using inappropriate language and require it to use information from credible sources.
  • Security guardrails: These restrict apps from making connections to third-party programs known to be unsafe.
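
In practice, these boundaries are declared in configuration files rather than application code. The sketch below, loosely following the Colang syntax documented in the NeMo Guardrails project, shows roughly how a topical rail for the retailer example above might look; the message texts and flow name are illustrative, not taken from the article or from any shipped configuration.

```colang
define user ask about weather
  "What's the weather like today?"
  "Will it rain this weekend?"

define bot refuse off topic
  "Sorry, I can only help with questions about our store and products."

define flow weather is off topic
  user ask about weather
  bot refuse off topic
```

A developer writes a handful of example utterances per user intent; the toolkit matches incoming messages against them and steers the conversation along the defined flow instead of letting the model improvise.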

NeMo Guardrails is an open-source offering that can work with all the tools that enterprise app developers use. It is designed to work with a broad range of LLM-enabled applications, such as Zapier, Nvidia noted in its launch. Zapier is an automation platform used by over 2 million businesses.

It can also run on top of LangChain, an open-source toolkit that developers are rapidly adopting to plug third-party applications into the power of LLMs.
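
Conceptually, a rails layer sits between the application and the LLM, vetting input before the model sees it and vetting output before the user does. The toy Python sketch below illustrates only that interception pattern; it is not NeMo Guardrails' implementation, and the topic list, fallback message, and function names are invented for the example.

```python
from typing import Callable

BLOCKED_TOPICS = ("weather", "politics")  # illustrative topical rail
FALLBACK = "Sorry, I can only help with questions about our products."

def with_topical_rail(llm: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an LLM call so off-topic prompts never reach the model."""
    def guarded(prompt: str) -> str:
        if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
            return FALLBACK  # input rail: short-circuit before the LLM
        return llm(prompt)   # on-topic: pass through to the model
    return guarded

# Stand-in for a real model call (e.g., a LangChain chain or an API client).
def fake_llm(prompt: str) -> str:
    return "LLM answer to: " + prompt

guarded_llm = with_topical_rail(fake_llm)
```

The same decorator shape is why such rails compose naturally with toolkits like LangChain: the guarded function has the same call signature as the unguarded one.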

NeMo Guardrails is also being incorporated into the Nvidia NeMo framework, which includes everything users need to train and tune language models using a company's proprietary data. It is part of Nvidia AI Foundations, a family of cloud services for businesses that want to create and run custom generative AI models based on their own datasets and domain knowledge.

Much of the framework is already available as open source code on GitHub, Nvidia added, and enterprises can get it as a complete and supported package as part of the Nvidia AI Enterprise software platform. It is also available as a service.

Good first step, but no guarantees on security

NeMo provides developers with a way to establish boundaries and limitations on how generative AI works in their applications, but it offers no guarantees of security, said Michael Erlihson, a principal data scientist at Salt Security. "This tool may help developers in establishing ethical guidelines and mitigating harmful or malicious outcomes resulting from AI models, but the effectiveness of these guidelines depends on the developers’ knowledge of potential risks and their capability to implement suitable measures of control," he said.

While guardrails can help to mitigate certain risks, they do not guarantee complete protection and safety of your application.

Michael Erlihson

Reece Hayden, a research analyst at ABI Research, said tools such as NeMo Guardrails will be effective for low-code/no-code application development by putting structural and quality guarantees on the code generated by the LLM or a fine-tuned model.

Given that AI will increasingly democratize application development, guardrails that ensure effective LLM output will have a large impact on the accuracy, quality, and security of new applications.

Reece Hayden

Purdue-Indy's Durresi said NeMo Guardrails is better than nothing, but he worries that it could create a "false feeling of safety."

Developers may use them and think they're safe. That's not true. It boils down to who is building these applications. They have to guarantee the quality of the output, not the tool itself.

Arjan Durresi

Related read: Could code-writing AI wipe out humans via software backdoors?

Why comprehensive supply chain security is key

As more generative AI tools are introduced into the development cycle to automate building applications, it is important that organizations think about their overall security strategy, advised Kern Smith, a vice president for Zimperium.

While AI is a powerful tool, it falls very much into the category of 'trust but verify' with regards to the output it produces. It is important that organizations conduct assessments using third-party tooling to validate that what was created meets best practice standards and is secure.

Kern Smith

Smith said that's even more true for AI tools that could be susceptible to third-party manipulation or the introduction of supply chain-style attacks, similar to those seen with third-party software development kits (SDKs).

The introduction of AI into development is exciting but also proves that regardless of the methods or tools used to develop apps, the same security principles and external validation requirements still apply.

Kern Smith

ReversingLabs Field CISO Matt Rose said the risk of generative AI extends beyond the immediate development lifecycle, noting, "AI is great if a query includes nonsensitive data, and the AI is creating something that's not proprietary to anybody, but if you're creating something that includes proprietary data, that's very concerning."

Software is all about speed of delivery of new products, features, and capabilities. I worry that people are putting sensitive data into an AI engine to generate a document or white paper or something like that. You could be giving away the keys to the castle by trying to solve a problem quickly.

Matt Rose

Roger Grimes, a defense evangelist at KnowBe4, said organizations need to recognize the limitations of AI up front. "Human programmers innately understand thousands of things that don't have to be put in a scoping document," he said.

Every human involved understands these cultural requirements without them having to be said. AI, until it is better trained, will simply do what it is told, and if it isn't told everything correctly and completely, it's going to make mistakes that were driven by a lack of inclusive specifications.

Roger Grimes

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: Watch the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: AppSec & Supply Chain Security


All rights reserved ReversingLabs © 2026