RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabsReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Request a demo
Menu
AppSec & Supply Chain SecurityAugust 19, 2025

OWASP GenAI IR Guide 1.0: How to put it to work

Here's how to integrate AI-specific risks into your existing security incident response (IR) playbook.

John P. Mello Jr.
John P. Mello Jr., Freelance technology writer.John P. Mello Jr.
FacebookFacebookXX / TwitterLinkedInLinkedInblueskyBlueskyEmail Us
OWASP AI incident response

Artificial intelligence (AI) is invading organizations — and with that comes a raft of security risks, that are unlike the typical threats security teams are equipped to tackle. The Open Worldwide Application Security Project's (OWASP) GenAI Security Project has released a new incident response guide to help teams better monitor and secure AI applications.

Generative AI and its large language model (LLM) outputs combine with a push to grant GenAI applications agency.And with that access, attackers can elicit critical information by simply altering the semantics of an input, the OWASP guide explained. "The 2025 McKinsey State of AI survey notes that fewer than 50% of organizations are working to mitigate security risks associated with GenAI, suggesting that there is still substantial work to be done in understanding how best to approach GenAI security," the 82-page guide notes.

Here's what you need to know about the expanding risk landscape coming from AI — and how you can use OWASP's guide to take action.

Get Report: How AI Impacts Supply Chain Security

Why the OWASP AI incident response guide is needed

Kevin Bocek, chief innovation officer at Venafi, said the OWASP GenAI Incident Response Guide is urgently needed because AI agents are now working across business, connecting to sensitive data — and taking actions. "Security teams and developers should look at this guide as a resource into the future of agents working autonomously built on LLMs," he said. 

Attackers are moving fast, and understanding exploits and monitoring for them will be a significant challenge in the years to come.

Kevin Bocek

MJ Kaufmann, an author and instructor at the technology publisher O'Reilly Media, said organizations need GenAI-specific response strategies to match the risks at hand today. "This guide is about building institutional muscle memory before a high-profile incident occurs," she said.

AI is not just a feature anymore — it’s infrastructure.

MJ Kaufmann

What makes an AI incident different?

The OWASP guide begins with how to define an AI incident — which is not an easy task. There are no widely accepted definitions of what constitutes an AI Incident, nor have any authoritative governmental bodies issued a definition, the guide explained.

Arvind Parthasarathi, founder and CEO of the cyber incident response firm CYGNVS, said one questions he likes to ask is, "How do you really define an AI incident if everything that we are doing in the world is starting to get pervaded by AI? "

Johnathon Miller, CISO at Lumifi Cyber, said that traditional cybersecurity incident investigations often follows a predictable path that has been guided by a wealth of knowledge by security researchers, security operations teams, and various publicly documented examples from incidents that are shared by Information Sharing and Analysis Centers (ISACS), intelligence agencies, security providers and others, explained , a managed detection and response services company.

These are frequently updated and adjusted over time on frameworks like MITRE that allow security investigators to follow a general guideline for investigations, he said. However, when it comes to Generative AI incidents, they can be more challenging, as the definition of what is normal is still evolving.

An AI model will still produce hallucinated or nonsensical output, and it's often difficult to determine if this is a benign quirk or a malicious injection into the output from the prompt. This ambiguity and uncertainty are largely due to a lack of telemetry and metrics that are available for traditional cybersecurity incidents.

Johnathon Miller

These incidents are tricky to detect because they don’t follow traditional attack patterns of code execution, system compromise, or traditional indicators of compromise," Kaufmann said. "Instead, the attack can live entirely in user inputs, model behavior, or data leakage through outputs, which makes them easier to miss with standard tools.

How to deal with AI supply chain attacks

After discussing the definition of AI incidents, the guide offers advice on preparing for and dealing with specific events, such as attacks on AI systems, third-party model providers, and AI supply chains. "AI supply chains can get complicated because every vendor that you rely on is now using AI, and they're all using different AI," said CYGNVS's Parthasarathi. "If you've got a CRM system and HR system and a financial system, like every one of them is using some kind of AI, and they're all using different LLMs or maybe some combination."

Kaufmann said that AI supply chains are inherently more opaque than software supply chains.

It’s difficult to trace where training data came from, who modified a model, or how its outputs were influenced. That lack of transparency creates a trust gap, and attackers can exploit it by embedding risks where traditional tools don’t look, in weights, tokens, or data itself, not just code.

MJ Kaufmann

Unlike code that can be easily scanned today for vulnerabilities and attacks, supply chain attacks may not be observable until an AI system is running and targeted, Venafi's Bocek said. "Attacks on AI supply chain may be specific to manipulating training data to a long-term social engineering attack on models that are only executed under certain prompts and with certain data."

How to get the most out of the OWASP GenAI guide 

Bocek said the guide can be very useful to security teams because it gives specific indicators of compromise to look for and compares them to traditional attacks where security teams have controls and response mechanisms in place. "This enables security teams to understand risks to how their business is using AI and plan for detection and response," he said. "It helps prepare security teams for the roles and outcomes that will be required, and how they can begin to train teams." 

Security teams now have indicators of compromise that they can begin to monitor and build on. "They can then establish the level and type of risks their organization can accept in the new agent AI world," Bocek explained. "It’s a huge step as we head to a world of AI agents working across businesses, connecting to sensitive data, and taking actions."

Kaufmann said that by using this guide, teams can integrate GenAI-specific risks into their existing incident response playbooks, train their teams on new attack types, and develop detection and escalation processes tailored to AI-driven systems.

Developers, too, can benefit from the guide. "It helps developers understand the paths attackers are and will take to attack their AI systems," Bocek said. "With this knowledge, developers can assess how they are using AI, make changes, and also partner with security teams to monitor AI systems for attack."

The guide is an excellent awareness tool for devs, as it outlines attack types and failure modes they might not have encountered yet, such as model poisoning, jailbreak chaining, or prompt-based data exfiltration, Kaufmann said.

This early exposure can help teams build safer GenAI features from the start, not just patch them after a breach.

MJ Kaufmann

New risks demand more focus

Bocek noted that the OWASP GenAI guide is timely because AI is creating new risks. "AI is not deterministic and flaws in training or misuse can quickly emerge," he explained. "AI systems drift over time, as we see from hallucinations, and don’t get back to a good working state."

"We need the response guide to be able to identify attacks on AI systems and differentiate from training or operational flaws," he said. Bocek cited the example of an attacker seeking to extract training data or seeking to take a system offline by attacking the model to deny service. "This is made even more complicated, since AI systems aren’t deterministic, so they can be difficult to assess exactly where an incident occurred, why, and how to remediate," he said.

As we head into a world where AI agents are making decisions, connecting to databases and systems of record like ERP and HR, and working increasingly autonomously. This guide is an important step in arming security teams and developers in improving systems and preparing responses.

Kevin Bocek

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain Security

More Blog Posts

AI coding racing

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Learn More about Can AppSec keep pace with AI coding?
Can AppSec keep pace with AI coding?
Finger on map

LLMmap puts its finger on ML attacks

Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Learn More about LLMmap puts its finger on ML attacks
LLMmap puts its finger on ML attacks
Vibeware bad vibes

Vibeware: More than bad vibes for AppSec

Threat actors are leveraging the freewheeling vibe-coding trend to deliver malicious software at scale.

Learn More about Vibeware: More than bad vibes for AppSec
Vibeware: More than bad vibes for AppSec
CRA accelerates advantage

The CRA is coming: Are you ready?

Here's how the EU's Cyber Resilience Act will reshape the software industry — and how that accelerates advantages.

Learn More about The CRA is coming: Are you ready?
The CRA is coming: Are you ready?

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBlueskyRSSRSS
Back to Top