AppSec & Supply Chain Security | April 17, 2024

Where GenAI intersects with threat modeling: 3 key benefits for AppSec

Generative AI can ease the burdens of threat modeling — and speed it up. But it's not a panacea. Here's what security teams can realistically expect.

Ericka Chickowski, Freelance writer

As application security (AppSec) leaders seek to drive Security by Design initiatives in 2024, threat modeling is becoming more prevalent. In one recent study, 73% of companies said they do threat modeling of their software at least annually, and half said they do it for every release. And 74% of the surveyed organizations said they'll grow their threat modeling programs in the coming year.

While the rise of software supply chain attacks has made the need for threat modeling clear to a growing number of companies, it remains a labor-intensive practice that's difficult to automate and requires many person-hours. Many practitioners hope that generative AI (GenAI) and large language models (LLMs) can help ease those burdens and speed the process.

Here are three major benefits that security teams can realistically expect from the intersection of GenAI and threat modeling — and what not to expect.

See: 10 tips for building a threat modeling program | Special: The State of Software Supply Chain Security

1. Get a handle on threat modeling's subtasks

The prevailing opinion among AppSec and threat modeling experts is that GenAI is a long way from offering any kind of end-to-end automation of the threat modeling process. But they believe that when GenAI is targeted and limited in scope, it can help threat modeling teams, both experienced and beginner, crush the subtasks of threat modeling.

Chris Romeo, co-founder and CEO of threat modeling firm Devici (and a co-author of the Threat Modeling Manifesto), said in a recent roundtable on AI in threat modeling that GenAI can be a useful tool for threat modelers, but it's not a panacea:

I don't see a world where we just have the AI do the threat model and we would all sign off on it and say, 'Yeah, that's perfect!' It's not going to replace anything we do. But there's a world where that AI can help us be better at what we do. And I think that's the near-term value [proposition].

Chris Romeo

Brook Schoenfield, author of many books on threat modeling and CTO of Resilient Software Security, said in the same roundtable discussion that AI shouldn't be the "great, grand replacement" of human threat modelers. The subtasks that AI could be used to assist or help automate could include creating data flow diagrams (DFDs) and generating potential threat scenarios.
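To make the DFD subtask concrete, here is a minimal sketch of the kind of structured diagram representation an LLM-assisted tool might generate from a system description. The element names and fields are illustrative assumptions, not any specific tool's schema; the point is that a DFD reduces to a small data structure an assistant can draft and a human can review.

```python
# Hedged sketch: a data flow diagram (DFD) as plain data. Flows that
# cross a trust boundary are typically the first candidates for threat
# scenarios, which is why the helper below singles them out.
from dataclasses import dataclass, field


@dataclass
class DataFlow:
    source: str
    dest: str
    data: str
    crosses_trust_boundary: bool = False


@dataclass
class DFD:
    processes: list = field(default_factory=list)
    stores: list = field(default_factory=list)
    external_entities: list = field(default_factory=list)
    flows: list = field(default_factory=list)

    def boundary_crossings(self) -> list:
        """Flows crossing a trust boundary: the usual first place to look."""
        return [f for f in self.flows if f.crosses_trust_boundary]


# A toy system a GenAI assistant might draft for human review:
dfd = DFD(
    processes=["web app", "auth service"],
    stores=["user db"],
    external_entities=["browser"],
    flows=[
        DataFlow("browser", "web app", "credentials", crosses_trust_boundary=True),
        DataFlow("web app", "auth service", "credentials"),
        DataFlow("auth service", "user db", "password hash"),
    ],
)
```

In this framing, the AI drafts the structure and flags the boundary crossings; the human threat modeler corrects the diagram and decides which crossings actually matter.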

Schoenfield explained that AI can not only help veterans speed up their tasks, but also hold the hand of the less experienced.

Let's look at really discrete problems and solve some of the things that will help. More importantly, help the hundreds of thousands or millions of developers who don't have all that attack knowledge and don't have the time to go get it.

Brook Schoenfield

Abhay Bhargav, chief research officer at the training firm AppSecEngineer, said this is exactly the approach he advocates. He's developing trainable methodologies for developers and security teams to speed up threat modeling using this approach. In a recent webinar, Bhargav said he believes this is the path for drastically cutting down on the time it takes for threat modeling teams to generate usable models.

The approach I take and I teach is to go from this big, massive, contiguous task — we need to do a threat model — to breaking it down into component patterns and passing a pattern, along with all of your other input, into an LLM. Then the LLM generates the output for that pattern. For example, you could say, 'Please generate the security objectives for this system.' And then, 'Please generate the threat scenarios for those security objectives and these (additional) information assets.'

Abhay Bhargav
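Bhargav's pattern-by-pattern approach can be sketched as a small prompt chain: each subtask gets its own focused prompt, and earlier outputs feed later ones. The pattern list, prompt wording, and the `llm` callable below are illustrative assumptions (a stand-in for whatever model API a team uses), not Bhargav's actual methodology or any real API.

```python
# Hedged sketch of pattern-based threat modeling prompts. Each pattern
# is one focused LLM call; its output is added to the context that
# later patterns draw on (objectives feed the threat-scenario prompt).
from typing import Callable

PATTERNS = [
    ("security_objectives",
     "Please generate the security objectives for this system:\n{system}"),
    ("threat_scenarios",
     "Please generate the threat scenarios for these security objectives "
     "and information assets:\nObjectives:\n{security_objectives}\n"
     "System:\n{system}"),
]


def run_patterns(system_description: str, llm: Callable[[str], str]) -> dict:
    """Run each pattern in order, feeding prior outputs into later prompts."""
    context = {"system": system_description}
    for name, template in PATTERNS:
        prompt = template.format(**context)  # extra context keys are ignored
        context[name] = llm(prompt)          # one small, scoped LLM call
    return context


# Usage with a stand-in "LLM" that just echoes the prompt's first line:
fake_llm = lambda prompt: "[model output for] " + prompt.splitlines()[0]
result = run_patterns("Internet-facing payments API", fake_llm)
```

The design point is the one the experts above keep making: no single call produces "the threat model." Each call produces a reviewable draft for one subtask, and a human decides what survives into the model.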

2. Eliminate blank-page syndrome

One of the big benefits that GenAI can bring to threat modeling is kick-starting thought processes around potential attacks and vulnerable surfaces, said Kim Wuyts, a privacy engineer and threat modeling advocate who works as a manager of cyber and privacy for PwC Belgium. She noted in the recent threat modeling roundtable:

For people suffering from the blank-page syndrome, you get something to get you going. It's not really automation as people like to think about AI. I put in this prompt with three sentences about a scenario, and look, I've got 20 useful threats. That's great, because it saves you some time, but it's just the low-hanging fruit.

Kim Wuyts

Schoenfield said threat modelers often "get stuck in only the places they know" and GenAI can help a threat modeling team get out of thinking ruts.

Just getting started is a big deal for people, and looking comprehensively after that. Asking the AI, 'What are all the domains I should look at for this system?' might actually be a huge win.

Brook Schoenfield

3. Hand-hold your team with knowledge and well-timed guidance

AI in threat modeling could also act as a particularly informed assistant offering deep access to knowledge and well-timed guidance, said threat modeling advocate Izar Tarandach, senior principal security architect at SiriusXM and a participant in the threat modeling roundtable.

I would love to see, not one LLM doing the whole thing, but small agents here and there being almost like Microsoft Clippy. It's like having a copilot while you're doing threat modeling that's helping you in those small tasks that you need to get a good threat model done. But it's not doing it for you; it is bringing you information and knowledge.

Izar Tarandach

Devici's Romeo said he likes to call this "AI-infused threat modeling," adding that hopefully, if it is done right, it will be something more than an irritating chatbot. This is what he's currently exploring in his own work.

What we're doing is figuring out how we can infuse AI in certain points of the threat modeling system to make results better for you as the person or faster to generate. In a lot of cases, you won't even know where there's an LLM generating it. That's my goal.

Chris Romeo

This kind of hand-holding can be especially important for developers on the team who are not used to thinking like attackers, Tarandach said. Having GenAI as a resource could allow such developers to ask questions such as, "Given certain system parameters, how would you attack the system?" and that could be invaluable for kicking off threat modeling discussions, he said.

It is not going to create a threat model for you, but it might very well inform a threat model for you.

Izar Tarandach

Keep GenAI's limitations firmly in mind

As AppSec teams and developers seek to glean these benefits from AI-infused threat modeling, they need to keep in mind that the benefits come with significant caveats. GenAI, in particular, is not very explainable, even by experts in the LLM world — and it is difficult to verify the integrity and accuracy of its outputs.

Romeo said that means you need to treat AI assistance like a new member of the threat modeling team who doesn't yet have a ton of experience. That machine-driven team member can uncover new ideas and bring them to the table, but it is the diversity and knowledge base of the rest of the team that should lead decisions and shape the final output of each threat model. "AI is not ready for prime time," he said. "AI is not ready to be in the critical path of security decisions."

PwC Belgium's Wuyts said that, for now, AI is relegated in threat modeling to the role of junior assistant.

Go for AI as an assistant, not as a decision maker. If we don't understand it, then we cannot trust it.

Kim Wuyts

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain Security
