ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why

GenAI Security Project ramps up guidance

With AI ramping up risk, OWASP stepped up its project to help AppSec teams get up to speed — and take action.

AI ramps up risk

New resources for providing practical guidance and tools for securing generative and agentic AI have been released by OWASP's GenAI Security Project.

The new resources expand the project's Q2 2026 Updated Landscape Guide by updating its vendor and tooling ecosystem documentation and by adding an agentic red-teaming taxonomy that provides a structured, lifecycle-wide framework for identifying, measuring, mitigating, and governing AI risk.

OWASP project co-chair and co-founder Scott Clinton said:

"With the pace of change around AI and agentic architectures and related risks, the landscape guide is updated two to three times a year to capture the latest risk and coverage areas mapped to solutions that are helping organizations to address these new risks."

Clinton added that the Q2 update has been revised based on solutions submitted to the project. The solutions are then mapped to the risks and mitigations. "This edition captures even more solutions that are focused on addressing the OWASP Top 10 Risks for Agentic Security we released in December last year."

Here are the key updates to OWASP's GenAI Security Project that matter.

[ See webinar: Develop Your Playbook for AI-Driven Software Risk ]

1. Project now connects risks to solutions

The landscape guide's purpose is simply to connect documented risks and mitigations to emerging or existing open-source or commercial solutions, Clinton explained.

"It documents what risks they cover, what mitigations they provide, and how they fit into the evolving secure SDLC for AI and agentic applications. The result is a guide that is community and practitioner driven, free of vendor bias, that goes beyond simply a list of solutions but mapping to specific capabilities across the SLDC."
Scott Clinton

After the initial release of the OWASP Top 10 for LLMs, the project realized that while it was great to identify the risks of GenAI, it also raised many questions, Clinton said. Were there open-source or commercial solutions that could help with this?  What risks did they cover? What mitigations do they provide? How did these new risks impact the secure SLDC process and team roles, if any?

Since the guide was introduced, its listings have steadily grown. The first publication identified fewer than 30 solutions that address these risks. They now number nearly 200.

2. Red teaming outcomes in focus

Clinton expects the update to the documentation for the vendor and tooling ecosystem to help security practitioners and teams understand which solutions they may want to implement and what risks and mitigations their current tooling covers.

At the same time, he continued, they can gain an understanding of how to extend their existing SDLC processes to meet the requirements laid out to more securely support the development, deployment, and operation of GenAI and agentic applications and systems.

He added that the new red teaming landscape is a response to the community's need to better understand and educate one another about which solutions are the best fit for red teaming agentic and AI apps.

It also allows them to consider how to leverage AI and agentic capabilities to accelerate and improve red teaming outcomes.

"The working group looked across red, blue, and purple teaming roles and identified what key needs and capabilities were necessary. The red teaming taxonomy captures that. The landscape applies the taxonomy to a community-sourced list of solutions that meet some or all of the criteria, making it easier for red teams to improve their red teaming programs."
Scott Clinton

3. AI ramps up need for education and resources

Since its founding in 2023, the project's ranks have swelled to 25,000 members. One driver behind membership growth, Clinton noted, is the rapid pace of technology, adoption, and change of GenAI and agentic deployments, which is driving a continuous need for education.

Another driver Clinton cited is the risk amplified by AI technology. Security practitioners on the front lines are facing AI and agentic attacks with increasing velocity, he explained, while CISOs and IT leaders are trying to manage their companies through that threat environment.

Yet another driver, he continued, is the desire of practitioners for help addressing their frontline needs. "They're looking to learn, looking to work with peers and want practical guidance free of vendor bias," he said.

"In short, we see continued growth because of the community. It is one of the leading places practitioners can learn, share, and collaborate to solve the immediate day-to-day problems they are facing with trusted, open, peer-reviewed guidance."
Scott Clinton

AI redefines software risk

ReversingLabs’ Software Supply Chain Security Report 2026 focuses on how AI has fundamentally changed software development – and software supply chain risk – in 2025. With the rise of shadow AI in the form of AI-assisted coding and the popularity of public platforms such as Hugging Face, it became clear that enterprise governance is needed to manage the rising risks from AI.

One incident RL researchers discovered this past year showed how AI can heighten risk. The malicious campaign took place on Hugging Face after threat actors exploited the Python ML model file format known as Pickle to distribute malware. RL dubbed this new technique “nullifAI,” and it proved a fruitful one for threat actors, who used it again on PyPI to target users of Alibaba AI Labs – demonstrating that malware-embedded ML models have entered the threat landscape. 

Other threats linked to AI expanded in 2025. For example, prompt injection, which is a form of model corruption where an attacker attempts to manipulate an AI model, was listed as the No. 1 threat in OWASP’s Large Language Model (LLM) Top 10 list.

And, with model context protocol (MCP) servers booming in popularity, researchers discovered the first-ever instance of a malicious MCP server spotted in the wild — and distributed via npm. The malicious package, postmark-mcp, showed how the fast evolving MCP infrastructure can be exploited by attackers to extend their malicious reach.

Learn how RL's free Spectra Assure Community can help your development and AppSec teams get deep insights into your software supply chain via binary analysis.

Back to Top