Spectra Assure Free Trial
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free Trial
A new guide on threat modeling for the cloud in the era of artificial intelligence has been released by the Top Threats Working Group of the Cloud Security Alliance (CSA). The guide notes that security practices, including threat modeling, were primarily developed for static, on-premises systems. However, the adoption of both cloud computing and AI introduced architectural changes, dynamic behaviors, and attack surfaces that legacy approaches do not fully address.
As a result, existing security and threat modeling practices are inadequate for handling the risks of cloud-native and AI-integrated environments, the CSA writes, adding that updating security approaches to reflect the evolving technological landscape has beome imperative.
The new guide, Cloud Threat Modeling v2.0, builds on the version released in 2021 and outlines a practical methodology for cloud threat modeling. “Our approach enables organizations to initiate or advance their threat modeling practices, assess security controls and gaps, and facilitate architecture and mitigation decisions in today’s cloud-first, AI-enabled landscape,” the guide’s authors write.
The 56-page document covers threat modeling frameworks, core threat modeling activities, how to create cloud threat models, and modern threat modeling tools. Here’s what you need to know — and why it matters.
Traditional threat modeling approaches were designed for traditional, deterministic software systems with relatively fixed boundaries, predictable trust zones, and infrastructure that changed slowly, said Diana Kelley, CISO of Noma Security. “Cloud-native and AI-driven environments do not behave that way,” she said.
Diana KelleyThe cloud replaces static perimeters with layers of abstraction, global distribution, and rapid provisioning, while AI introduces software that is nondeterministic, can leverage context, generate content, guide autonomous actions, and evolve based on inputs.
When organizations apply older threat modeling techniques to these modern environments, they can miss the risks created by ephemeral infrastructure and the sometimes unpredictable nature of generative AI, she said.
Patrick Enderby, senior product marketing manager at ReversingLabs (RL), said cloud threat modeling is evolving and the newest CSA guide makes one point unmistakably clear: no cloud architecture is secure if you cannot trust the software running inside it.
Patrick EnderbyNew architectural realities make it harder, not easier, to understand the true behavior of the software deployed across multi-cloud environments.
Derek Fisher, director of the Cyber Defense and Information Assurance Program at Temple University and author of Threat Modeling Best Practices: Proven Frameworks and Practical Techniques to Secure Modern Systems, agreed that conventional threat modeling may not cover all threat possibilities in cloud environments, which are by nature changeable.
Derek FisherIt’s not difficult to swap out a service or configuration with a few clicks of a button or a push of modified environment code. New containers and new serverless functions can be created and removed before a threat model can be reviewed.
For AI environments, the technology is moving rapidly and the use cases are not always clear, Fisher said. “There is also a fundamental difference between the syntactic and semantic execution paths. Traditional systems focus on controlled syntactical paths, where the flow of logic is defined. In AI environments, attackers can manipulate the meaning derived by the model rather than exploiting technical vulnerabilities in code.”
AI coding also presents challenges for threat modelers. “When an AI generates most of the code and design, with minimal human input, it disrupts the typical threat modeling process, which relies on a human developer who understands the system’s design and intent,” said Larry Maccherone, founder and CTO of Transformation.dev.
Larry MaccheroneThis shift means security specialists may need to reverse-engineer the system, possibly with their own AI tools, just to get a baseline understanding.
The CSA guide notes that AI systems introduce threat vectors such as adversarial inputs, model theft, data poisoning, and inference leakage that are not typically covered in conventional threat models. In cloud-hosted AI environments, these threats may arise through compromised storage feeding training pipelines (enabling data poisoning) or from insecure serverless inference functions that can be exploited for model extraction.
The complexity of AI pipelines, opaque inference behavior, and reliance on large-scale training data further necessitate updated modeling approaches that explicitly account for these risks, the guide says.
Tim Freestone, chief strategy and marketing officer at Kiteworks, said AI systems introduce attack vectors that target the models themselves — such as adversarial attacks that manipulate inputs to deceive AI outputs, data poisoning that corrupts training datasets, and model extraction that steals proprietary algorithms through systematic querying.
Beyond model-targeted threats, AI creates unique input manipulation risks like prompt injection, where malicious inputs trick large language models into unauthorized actions, and context manipulation that exploits how AI processes information to influence decisions, he said.
Tim FreestoneThese vulnerabilities require specialized defenses including adversarial testing, secure model development practices, and AI-specific governance frameworks that go far beyond traditional cybersecurity controls focused on network perimeters and data protection.
While new attack vectors are a concern, the bigger issue is that the software-creation process itself has fundamentally changed, Transformation’s Maccherone said. “Secure by Design, which once meant safe defaults, is now more about integrating security reviews into the design phase,” he said. “The good news is that new AI-driven development practices still often start with design documents. This is our chance to adapt.”
Larry MaccheroneThe key is to have the developer — or AI nudger — perform a threat model on the design before any code is written. This ongoing discipline is even more critical now. Every product increment should have its design documented and reviewed for security, even if the only security expert involved is an AI.
Threat modeling aligns directly with Secure by Design principles, said Frank Sclafani, director of cybersecurity enablement at Deepwatch.
Frank SclafaniIt enables organizations to identify and address potential threats proactively during the design phase, rather than reacting to them post-deployment. This preventative approach is applicable and effective across traditional IT, cloud architectures, and AI-driven applications.
Abhay Bhargav, CEO of AppSecEngineer, sees an important and symbiotic relationship between threat modeling and Secure by Design.
Abhay BhargavThreat modeling done early on in the requirement stage can lead to Secure by Design. Secure by Design starts with threat modeling.
John Carberry, chief marketing officer at Xcape, maintains that new frameworks are needed to prioritize cloud control planes, identity fabrics, data flows, and AI pipelines.
John CarberryWhen implemented effectively, modern threat modeling embodies Secure by Design principles, translating the shift-left philosophy into actionable design choices, safeguards, and testable assumptions. However, threat modeling must also become a live discipline that is integrated into your CI/CD workflow and architectural reviews, constantly reflecting new services, new dependencies, and new threats, since cloud environments, [infrastructure as code], and AI stacks evolve often.
Organizations that approach cloud and AI threat modeling as a continuous security measure will fare better than those that have a yearly exercise, he predicted. “If your threat model looks like a static network diagram from 2010, you’re bringing a whiteboard to a gunfight.”
Threat modeling is how Secure by Design becomes operational, said Rosario Mastrogiacomo, chief strategy officer for Sphere Technology Solutions. “It identifies where guardrails are needed before systems go live.”
Rosario MastrogiacomoAnd because cloud and AI systems are always changing, threat modeling must be continuous, not a one-time exercise.
The biggest takeaway from CSA is that cloud threat modeling must become identity-centric and autonomy-aware, he said. “Static models can’t keep up with dynamic infrastructures and intelligent systems.”
RL's Enderby said the CSA recognized threats to cloud systems, applications, and environments are unique, making attacks "viable against them with varying impacts," the guide noted.
Patrick EnderbyFor organizations developing or consuming SaaS, this means threat modeling must extend beyond diagrams and IAM boundaries. It must inspect the software (cloud-native containers) itself.
He said binary-level analysis and tamper detection was critical to produce evidence-backed assurance — something the CSA explicitly calls for: “Each mitigation should be directly related to a specific threat… Evidence and assurance must be attached or linked to validate controls.”
In its guide, the CSA argues that threat modeling is no longer optional. It is a foundational practice for building trust, enabling secure adoption, and ensuring resilience in modern cloud and AI-driven environments, the alliance declares.
As organizations expand into multi-cloud, hybrid, and AI-enabled ecosystems, the call to action is clear:
CSA guideBegin or advance your cloud threat modeling practice now. Start small if necessary, scale iteratively, and embed modeling into development and operations pipelines. Doing so will not only strengthen resilience against today’s most pressing threats but also prepare organizations for the emerging risks of tomorrow’s interconnected, AI-driven digital landscape.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
Get your 14-day free trial of Spectra Assure
Get Free TrialMore about Spectra Assure Free Trial