RL Blog

AppSec & Supply Chain Security | September 16, 2025

Trustworthy AI: 9 key threat categories

CSA’s AI Controls Matrix can help development and AppSec teams distill priorities for securing the AI software supply chain.

Ericka Chickowski, freelance writer

Most software engineering teams are now expected to build AI applications, and they’re going to need security architects and application security professionals to help guide them toward building trustworthy ones. For the architects and application security (AppSec) pros, governance and controls frameworks will be essential — but which ones?

Tech leaders and standards bodies have started flooding the field with new sets of guidelines on security, ethics, and privacy that range from extremely detailed to very high-level and theoretical. Security leaders have to sift through it all and decide which guidelines they will rely on when establishing priorities for engineering teams.

The Cloud Security Alliance (CSA) has just released a valuable addition to these guidelines with its AI Controls Matrix. There’s nothing gauzy about the AICM; it’s a comprehensive spreadsheet that organizes well over 200 controls across 18 security domains, including application and interface security and supply chain security.

Here’s how the AICM can help set the stage for securing AI development — and how to take your AI supply chain security to the next level.


Shared responsibility for AI security is key

Ken Huang, co-chair of the AI Safety Working Groups for the CSA, said the alliance was “laser-focused” on developing the AICM for an audience tasked with building and running AI in the cloud. One special emphasis: a shared responsibility model.

“It provides much-needed clarity on control applicability and ownership across the different layers of the AI stack — from the cloud service provider to the model provider and the final application provider,” Huang wrote recently on Substack.

Chris Hughes, CEO of Aquia, wrote on his Substack that the AICM's real differentiator is that it harmonizes these controls with the leading frameworks, including those from the U.S. National Institute of Standards and Technology (NIST), Germany's Federal Office for Information Security (BSI), and the International Organization for Standardization (ISO).

Its ability to tie together the various leading frameworks from a mapping perspective also makes it a great resource for organizations to measure their maturity across leading AI frameworks in an effective manner.

Chris Hughes

The AICM piggybacks on the work the CSA has already done with its Cloud Controls Matrix. To differentiate the controls for AI — and to tailor them specifically to AI deployment risks — the AICM cross-references all of the domains and controls against nine AI threat categories:

Model manipulation: Threats that manipulate a model into producing inaccurate or misleading results, often to evade detection, using techniques such as prompt injection that exploit flaws in the model’s logic and decision making.

Data poisoning: Threats that manipulate the training data that shapes a model’s logic can include malicious and intentional injection of data points or unintentional corruption of data. Such threats can teach the model incorrect patterns and produce untrustworthy results.

Sensitive data disclosure: Some threats can cause unauthorized access, exposure, or leakage of sensitive information processed and stored by a large language model’s service provider. 

Model theft: Malicious actors who gain unauthorized access to an LLM, or manage to replicate it, can then reverse engineer the model’s architecture or proprietary algorithms. 

Model/service failure/malfunctioning: This broad group of threats includes bugs, hardware failures, hallucinations, and similar problems that can cause an AI model to malfunction or produce unreliable outputs.

Insecure supply chain: AI-specific components of the software supply chain — including AI software libraries, open-source or proprietary models, datasets, and hardware and other infrastructure — can all introduce flaws that weaken the security of the software supply chain.

Insecure apps/plugins: AI expands the application threat surface with novel vulnerabilities, which will grow rapidly as agentic AI adds connections between AI systems and traditional enterprise software.

Denial of service: Threats in this category disrupt the availability or functionality of the AI services or models that power an AI-driven application.

Loss of governance/compliance: AI applications could be at risk of breaking governance or compliance policy, giving rise to new liabilities and the potential for regulatory penalties.
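To make one of these categories concrete, here is a minimal sketch of a heuristic input filter for prompt injection. The patterns and function name are illustrative assumptions, not anything specified by the AICM, and pattern matching alone is far from a complete defense — real deployments layer model-side guardrails and output filtering on top of input checks.

```python
import re

# Illustrative phrasings only -- real prompt-injection attacks are far
# more varied, so heuristics like these are one layer among several.
INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now in developer mode",
    r"reveal your (system )?prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches known prompt-injection phrasings."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A filter like this would sit in front of the model call, rejecting or quarantining flagged inputs for review rather than passing them straight into the prompt.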

With new AI threats, it’s time for mature controls

For development and AppSec teams trying to find the biggest AI security gaps in their software and systems, a thorough understanding of these nine threat categories is essential to building trustworthy AI software, Faisal Khan, director of quality assurance at Academian and co-chair of the AICM working group, said in a recent podcast episode.

If you are building and deploying AI applications, there are a lot of new threats that you should be aware of.

Faisal Khan

Khan explained that using the context of those threats and mapping existing controls to the AICM can help software teams ascertain which threats their applications may be most at risk from. This is crucial because many of these threats bring with them huge financial, legal, and reputational consequences, he said.

Sam Washko, who heads one of AICM’s task groups, said in the same podcast that builders should also focus on new control domains that are specific to AI-driven applications.

A lot of these threats can result in arbitrary code execution on your system, which could be disastrous. It’s important to note that we added a whole new domain for model security, and that covers a lot of attacks on machine-learning models and what controls you should be following.

Sam Washko

As software teams mature their development process with software artifact scanning for the AppSec domain, AI model security should include model artifact scanning. This will be important for securing the AI supply chain, Washko said.
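As a hedged illustration of what model artifact scanning can involve: serialized models in Python's pickle format can execute arbitrary code when loaded, so one basic check is to flag the pickle opcodes that import and invoke objects. The sketch below uses the standard library's pickletools and is a simplified heuristic for illustration, not a substitute for a dedicated scanning product.

```python
import io
import pickle
import pickletools

# Opcodes that import callables or invoke them during unpickling --
# the mechanism abused by malicious pickle-based model files.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def suspicious_opcodes(model_bytes: bytes) -> set:
    """Return the risky pickle opcodes found in a serialized blob."""
    found = set()
    for opcode, _arg, _pos in pickletools.genops(io.BytesIO(model_bytes)):
        if opcode.name in SUSPICIOUS_OPCODES:
            found.add(opcode.name)
    return found

# Plain data pickles without any import/call opcodes...
benign = pickle.dumps({"weights": [0.1, 0.2]})
# ...while pickling a reference to a callable requires a GLOBAL-family opcode.
risky = pickle.dumps(print)
```

In practice a scanner would run a check like this (among many others) over every model artifact pulled from a registry or third-party hub before it is loaded.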

It’s important for the model provider to show after training that it’s secure, but it’s probably more important for application providers and orchestrated service providers and consumers if they’re getting their models from third parties.

Sam Washko

Why modern AppSec tooling is key

Dhaval Shah, senior director of product management at ReversingLabs, wrote recently that developers building AI-enhanced applications need comprehensive visibility into their entire AI supply chain. 

One way to achieve that is a machine learning bill of materials (ML-BOM), which builds on software BOMs to help you identify potentially malicious open-source models before they can be integrated into your products. As regulatory requirements evolve, an ML-BOM also provides a comprehensive inventory of all AI components, streamlining compliance documentation, Shah wrote.
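As a sketch of what an ML-BOM entry can look like: the CycloneDX BOM standard (from version 1.5) defines a machine-learning-model component type. The model name, version, and hash below are made-up placeholders for illustration.

```python
import json

# A minimal CycloneDX-style ML-BOM fragment. The model name, version,
# and digest are hypothetical placeholders, not real artifacts.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",  # component type added in CycloneDX 1.5
            "name": "example-sentiment-classifier",
            "version": "2.1.0",
            "hashes": [
                {"alg": "SHA-256", "content": "0" * 64},  # placeholder digest
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```

Recording each model's identity and hash this way lets downstream consumers verify exactly which artifacts shipped in a release and cross-check them against threat intelligence.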

The question isn’t whether AI will become more prevalent in your organization — it’s whether you’ll have the right tools to secure it.

Dhaval Shah

Want to secure your AI supply chain with an ML-BOM? Dhaval Shah explains how it works.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: Watch the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain Security
