Developing trustworthy AI: 9 key threat categories

CSA’s AI Controls Matrix can help development and AppSec teams distill priorities for securing the AI software supply chain.

AICM trustworthy AI threat categories

Most software engineering teams are now expected to build AI applications, and they’re going to need security architects and application security professionals to help guide them toward building trustworthy ones. For the architects and application security (AppSec) pros, governance and controls frameworks will be essential — but which ones?

Tech leaders and standards bodies have started flooding the field with new sets of guidelines on security, ethics, and privacy that range from extremely detailed to very high-level and theoretical. Security leaders have to sift through it all and decide which guidelines they will rely on when establishing priorities for engineering teams.

The Cloud Security Alliance (CSA) has just released a valuable addition to these guidelines with its AI Controls Matrix. There’s nothing gauzy about the AICM; it’s a comprehensive spreadsheet that organizes well over 200 controls across 18 security domains, including application and interface security and supply chain security.

Here’s how the AICM can help set the stage for securing AI development — and how to take your AI supply chain security to the next level.

Get Report: How AI Impacts Supply Chain Security

Shared responsibility for AI security is key

Ken Huang, co-chair of the AI Safety Working Groups for the CSA, said the alliance was “laser-focused” on developing the AICM for an audience tasked with building and running AI in the cloud. One special emphasis: a shared responsibility model.

“It provides much-needed clarity on control applicability and ownership across the different layers of the AI stack — from the cloud service provider to the model provider and the final application provider,” Huang wrote recently on Substack.

Chris Hughes, CEO of Aquia, wrote on his Substack that the AICM’s real differentiator is that it harmonizes these controls with all the big frameworks, from the United States’ National Institute of Standards and Technology (NIST), the British Standards Institution (BSI), and the International Standards Organization (ISO).

Its ability to tie together the various leading frameworks from a mapping perspective also makes it a great resource for organizations to measure their maturity across leading AI frameworks in an effective manner.

Chris Hughes

The AICM piggybacks on the work the CSA already has done with its Cloud Controls Matrix. To differentiate the controls for AI — and for tailoring it specifically to AI deployment risks — the AICM cross-references all of the domains and controls against nine AI threat categories:

Model manipulation: Threats that attempt to evade detection by manipulating the model to produce inaccurate or misleading results use techniques such as prompt injection, which exploit flaws in the model’s logic and decision making.

Data poisoning: Threats that manipulate the training data that shapes a model’s logic can include malicious and intentional injection of data points or unintentional corruption of data. Such threats can teach the model incorrect patterns and produce untrustworthy results.

Sensitive data disclosure: Some threats can cause unauthorized access, exposure, or leakage of sensitive information processed and stored by a large language model’s service provider. 

Model theft: Malicious actors who gain unauthorized access to or replication of an LLM  can then reverse engineer a model’s architecture or proprietary algorithms. 

Model/service failure/malfunctioning: This broad group of threats includes a range of bugs, hardware failures, hallucinations, and such that could cause the AI model to malfunction or produce unreliable outputs.

Insecure supply chain: AI-specific components of the software supply chain, including AI software libraries, open-source or proprietary models, datasets, and hardware and other infrastructure, can all contain flaws that worsen the insecurity of the software supply chain.

Insecure apps/plugins: AI expands the application threat surface with novel vulnerabilities, which will grow rapidly as agentic AI adds connections between AI systems and traditional enterprise software.

Denial of service: Threats in this category disrupt the availability of functionality of the AI service or models that power an AI-driven application.

Loss of governance/compliance: AI applications could be at risk of breaking governance or compliance policy, giving rise to new liabilities and the potential for regulatory penalties.

With new AI threats, it’s time for mature controls

For development and AppSec teams trying to find the biggest AI security gaps in their software and systems, thoroughly understanding these nine major AI threats is essential to building trustworthy AI software and systems, Faisal Khan, director of quality assurance at Academian and co-chair of AICM’s working group, said in this podcast episode.

If you are building and deploying AI applications, there are a lot of new threats that you should be aware of.

Faisal Khan

Khan explained that using the context of those threats and mapping existing controls to the AICM can help software teams ascertain which threats their applications may be most at risk from. This is crucial because many of these threats bring with them huge financial, legal, and reputational consequences, he said.

Sam Washko, who heads one of AICM’s task groups, said in the same podcast that builders should also focus on new control domains that are specific to AI-driven applications.

A lot of these threats can result in arbitrary code execution on your system, which could be disastrous. It’s important to note that we added a whole new domain for model security, and that covers a lot of attacks on machine-learning models and what controls you should be following.

Sam Washko

As software teams mature their development process with software artifact scanning for the AppSec domain, AI model security should include model artifact scanning. This will be important for securing the AI supply chain, Washko said.

It’s important for the model provider to show after training that it’s secure, but it’s probably more important for application providers and orchestrated service providers and consumers if they’re getting their models from third parties.

Sama Washko

Why modern AppSec tooling is key

Dhaval Shah, senior director of product management at ReversingLabs, wrote recently that developers building AI-enhanced applications need comprehensive visibility into their entire AI supply chain. 

One way to achieve that is a machine learning bill of materials (ML-BOM), which builds on software BOMs to help you identify potentially malicious open-source models before they can be integrated into your products. As regulatory requirements evolve, an ML-BOM automatically generates comprehensive inventories of all AI components, streamlining compliance documentation, Shah wrote.

The question isn’t whether AI will become more prevalent in your organization — it’s whether you’ll have the right tools to secure it.

Dhaval Shah

Want to secure your AI supply chain with an ML-BOM? Dhaval Shah explains how it works.

Back to Top