OWASP GenAI Incident Response Guide 1.0: How to put it to work

Here's how to integrate AI-specific risks into your existing security incident response (IR) playbook.

OWASP AI incident response

Artificial intelligence (AI) is invading organizations — and with that comes a raft of security risks, that are unlike the typical threats security teams are equipped to tackle. The Open Worldwide Application Security Project's (OWASP) GenAI Security Project has released a new incident response guide to help teams better monitor and secure AI applications.

Generative AI and its large language model (LLM) outputs combine with a push to grant GenAI applications agency.And with that access, attackers can elicit critical information by simply altering the semantics of an input, the OWASP guide explained. "The 2025 McKinsey State of AI survey notes that fewer than 50% of organizations are working to mitigate security risks associated with GenAI, suggesting that there is still substantial work to be done in understanding how best to approach GenAI security," the 82-page guide notes.

Here's what you need to know about the expanding risk landscape coming from AI — and how you can use OWASP's guide to take action.

Get Report: How AI Impacts Supply Chain Security

Why the OWASP AI incident response guide is needed

Kevin Bocek, chief innovation officer at Venafi, said the OWASP GenAI Incident Response Guide is urgently needed because AI agents are now working across business, connecting to sensitive data — and taking actions. "Security teams and developers should look at this guide as a resource into the future of agents working autonomously built on LLMs," he said. 

Attackers are moving fast, and understanding exploits and monitoring for them will be a significant challenge in the years to come.

Kevin Bocek

MJ Kaufmann, an author and instructor at the technology publisher O'Reilly Media, said organizations need GenAI-specific response strategies to match the risks at hand today. "This guide is about building institutional muscle memory before a high-profile incident occurs," she said.

AI is not just a feature anymore — it’s infrastructure.

MJ Kaufmann

What makes an AI incident different?

The OWASP guide begins with how to define an AI incident — which is not an easy task. There are no widely accepted definitions of what constitutes an AI Incident, nor have any authoritative governmental bodies issued a definition, the guide explained.

Arvind Parthasarathi, founder and CEO of the cyber incident response firm CYGNVS, said one questions he likes to ask is, "How do you really define an AI incident if everything that we are doing in the world is starting to get pervaded by AI? "

Johnathon Miller, CISO at Lumifi Cyber, said that traditional cybersecurity incident investigations often follows a predictable path that has been guided by a wealth of knowledge by security researchers, security operations teams, and various publicly documented examples from incidents that are shared by Information Sharing and Analysis Centers (ISACS), intelligence agencies, security providers and others, explained , a managed detection and response services company.

These are frequently updated and adjusted over time on frameworks like MITRE that allow security investigators to follow a general guideline for investigations, he said. However, when it comes to Generative AI incidents, they can be more challenging, as the definition of what is normal is still evolving.

An AI model will still produce hallucinated or nonsensical output, and it's often difficult to determine if this is a benign quirk or a malicious injection into the output from the prompt. This ambiguity and uncertainty are largely due to a lack of telemetry and metrics that are available for traditional cybersecurity incidents.

Johnathon Miller

These incidents are tricky to detect because they don’t follow traditional attack patterns of code execution, system compromise, or traditional indicators of compromise," Kaufmann said. "Instead, the attack can live entirely in user inputs, model behavior, or data leakage through outputs, which makes them easier to miss with standard tools.

How to deal with AI supply chain attacks

After discussing the definition of AI incidents, the guide offers advice on preparing for and dealing with specific events, such as attacks on AI systems, third-party model providers, and AI supply chains. "AI supply chains can get complicated because every vendor that you rely on is now using AI, and they're all using different AI," said CYGNVS's Parthasarathi. "If you've got a CRM system and HR system and a financial system, like every one of them is using some kind of AI, and they're all using different LLMs or maybe some combination."

Kaufmann said that AI supply chains are inherently more opaque than software supply chains.

It’s difficult to trace where training data came from, who modified a model, or how its outputs were influenced. That lack of transparency creates a trust gap, and attackers can exploit it by embedding risks where traditional tools don’t look, in weights, tokens, or data itself, not just code.

MJ Kaufmann

Unlike code that can be easily scanned today for vulnerabilities and attacks, supply chain attacks may not be observable until an AI system is running and targeted, Venafi's Bocek said. "Attacks on AI supply chain may be specific to manipulating training data to a long-term social engineering attack on models that are only executed under certain prompts and with certain data."

How to get the most out of the OWASP GenAI guide 

Bocek said the guide can be very useful to security teams because it gives specific indicators of compromise to look for and compares them to traditional attacks where security teams have controls and response mechanisms in place. "This enables security teams to understand risks to how their business is using AI and plan for detection and response," he said. "It helps prepare security teams for the roles and outcomes that will be required, and how they can begin to train teams." 

Security teams now have indicators of compromise that they can begin to monitor and build on. "They can then establish the level and type of risks their organization can accept in the new agent AI world," Bocek explained. "It’s a huge step as we head to a world of AI agents working across businesses, connecting to sensitive data, and taking actions."

Kaufmann said that by using this guide, teams can integrate GenAI-specific risks into their existing incident response playbooks, train their teams on new attack types, and develop detection and escalation processes tailored to AI-driven systems.

Developers, too, can benefit from the guide. "It helps developers understand the paths attackers are and will take to attack their AI systems," Bocek said. "With this knowledge, developers can assess how they are using AI, make changes, and also partner with security teams to monitor AI systems for attack."

The guide is an excellent awareness tool for devs, as it outlines attack types and failure modes they might not have encountered yet, such as model poisoning, jailbreak chaining, or prompt-based data exfiltration, Kaufmann said.

This early exposure can help teams build safer GenAI features from the start, not just patch them after a breach.

MJ Kaufmann

New risks demand more focus

Bocek noted that the OWASP GenAI guide is timely because AI is creating new risks. "AI is not deterministic and flaws in training or misuse can quickly emerge," he explained. "AI systems drift over time, as we see from hallucinations, and don’t get back to a good working state."

"We need the response guide to be able to identify attacks on AI systems and differentiate from training or operational flaws," he said. Bocek cited the example of an attacker seeking to extract training data or seeking to take a system offline by attacking the model to deny service. "This is made even more complicated, since AI systems aren’t deterministic, so they can be difficult to assess exactly where an incident occurred, why, and how to remediate," he said.

As we head into a world where AI agents are making decisions, connecting to databases and systems of record like ERP and HR, and working increasingly autonomously. This guide is an important step in arming security teams and developers in improving systems and preparing responses.

Kevin Bocek
Back to Top