How to secure AI in container workloads

Use of AI in container workloads is growing — but security is not native. That makes additional controls essential. Here’s what you need to know.

Containers and AI security

As AI use expands across enterprise applications and business operations, it was inevitable that it would also show up in container workloads. But alongside intriguing possibilities, AI in containers introduces hidden security dangers.

When AI is used in the applications and code that run inside containers, it can be harder to observe what is happening in those environments — even as the AI could be expanding attack surfaces and introducing unmonitored automation.

A recent webinar, “Where AI Security Really Happens: Inside the Container,” sheds some light on the security dangers of AI use in containers and how to address them. The webinar featured two experts from cloud-native security vendor Aqua Security: chief marketing officer Matthew Richards and director of security research Assaf Morag.

Richards and Morag discussed some of the key security challenges with AI use in containers: the dynamic and interconnected nature of containers and the increased risks from threats such as prompt injection, model manipulation, and unauthorized access to sensitive inference data.

Here are key action items for securing AI in container workloads. 

How do you minimize risk from AI in containers?

Richards told RL Blog that enterprises and software teams can minimize the threats from AI use within containers, and solidify its benefits, by taking steps to ensure that their applications adhere to the highest security standards.

Do not let the team fire with AI before they are aimed in the right direction. Security does not have to slow down development. Take a moment and make sure you know what you need, and then take full advantage of what you have before you add more tools.

Matthew Richards

Richards said that while tools will need to be added to fully protect AI workloads, those already in use can, if implemented properly, go a long way toward protecting them. Morag said one critical tool that isn’t employed enough is threat modeling.

You need to do threat modeling to understand the threats to whatever you are trying to defend, get the visibility, and then create a [defense] plan and back it with funds.

Assaf Morag

Morag said enterprises must also know about and track all the AI models or AI-related workloads that are running in their organizations — and too many don’t. He said that when he asks IT leaders about those things, most say they do not know. “When you can see or visualize them, or understand where your AI workloads are in the organization, I think that is the most basic step. And I think organizations still struggle with that,” he said. 
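As a first pass at that visibility step, a simple inventory scan can at least surface where model artifacts live. The sketch below is a minimal Python illustration, not something from the webinar: it walks a directory tree (for example, an unpacked container image layer or a shared model volume) and flags files whose extensions suggest a serialized model. The extension list and the example path are assumptions.

```python
import os

# Common serialized-model file extensions; an illustrative assumption,
# not an exhaustive or authoritative list.
MODEL_EXTENSIONS = {".pt", ".pth", ".onnx", ".safetensors", ".pkl", ".gguf", ".h5"}

def find_model_artifacts(root: str) -> list:
    """Walk a directory tree (e.g., an unpacked container image layer)
    and return paths that look like ML model files."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in MODEL_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    # Hypothetical mount point; point this at wherever you unpack images.
    for path in find_model_artifacts("/var/lib/containers/unpacked"):
        print(path)
```

A scan like this says nothing about what the models do, but it answers Morag’s most basic question: where the AI workloads in the organization actually are.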

With AI workloads in containers growing, controls are essential

Richards said it is increasingly common to deploy AI workloads in containers, which means the related security concerns should be getting far more attention.

Why are most people not thinking about AI security in these containers? As far as I am concerned, secure those containers and you will help secure AI for your enterprise.

Matthew Richards

To increase the security and safety of AI use in containers, Richards said, enterprises must incorporate a behavior-detection engine that can build profiles of normal AI behavior and recognize when behavior slips outside of those norms. “If done right, you will be able to see and act on all of this new information in real time so action can be taken to prevent a successful attack before it happens,” he said.
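Richards did not name a specific engine, but the core idea of behavioral profiling can be sketched in a few lines. The Python toy below, using a hypothetical (process, destination) event format of my choosing, learns which events are normal during a training window and then flags anything it has never seen. A production engine would use far richer features and statistics.

```python
from collections import Counter

class BehaviorBaseline:
    """Toy behavioral profile: learn which (process, destination) events
    are normal for a workload, then flag anything unseen afterward."""

    def __init__(self):
        self.profile = Counter()
        self.learning = True

    def observe(self, event: tuple) -> bool:
        """Feed one event; return True if it deviates from the baseline."""
        if self.learning:
            self.profile[event] += 1
            return False
        return event not in self.profile

    def freeze(self):
        """End the learning window and start detecting deviations."""
        self.learning = False

# Learn from normal inference traffic, then alert on anything new.
baseline = BehaviorBaseline()
for ev in [("python", "model-registry:443"), ("python", "feature-store:5432")]:
    baseline.observe(ev)
baseline.freeze()
print(baseline.observe(("curl", "attacker.example:80")))  # True -> deviation
```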

Steps to secure AI use in containers

Harden AI workloads and minimize attack surfaces by removing anything that is not needed, Richards said. Remove unnecessary privileges, drop unused capabilities, and enforce strong isolation between workloads. “Then harden what is left and follow good hygiene,” he said. “Prevent drift to deployed containers — AI or not.”
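As a concrete, if minimal, illustration of those hardening checks, here is a Python sketch that audits a Kubernetes-style container spec for the gaps Richards mentions: privileges, added capabilities, and a writable filesystem that invites drift. The field names follow the Kubernetes securityContext schema; the audit logic itself is an illustrative assumption, not a substitute for a real policy engine or admission controller.

```python
def audit_container(spec: dict) -> list:
    """Flag common hardening gaps in a Kubernetes-style container spec."""
    findings = []
    sc = spec.get("securityContext", {})
    if sc.get("privileged"):
        findings.append("privileged mode enabled")
    if sc.get("capabilities", {}).get("add"):
        findings.append(f"extra capabilities added: {sc['capabilities']['add']}")
    if not sc.get("runAsNonRoot"):
        findings.append("may run as root (set runAsNonRoot: true)")
    if sc.get("allowPrivilegeEscalation", True):
        # Treat an unset field as allowed, the conservative reading.
        findings.append("privilege escalation not disabled")
    if not sc.get("readOnlyRootFilesystem"):
        findings.append("writable root filesystem (makes drift easier)")
    return findings

# Example: an AI inference container that was never hardened.
print(audit_container({"name": "inference", "securityContext": {"privileged": True}}))
```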

Protecting runtimes is also important, Richards said, adding that it is best to find a runtime-monitoring tool that augments what you already have, so as to avoid adding more software. Ideally, vendors’ new AI capabilities will include container security at no additional cost. Regardless, enterprises must go beyond static vulnerability scanning and establish behavioral baselines for AI workloads. “Watch for deviations. Build policies and enforce them,” Richards said.
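Once a baseline exists, “build policies and enforce them” can be as simple in concept as an allowlist derived from observed behavior. The sketch below, again a hypothetical illustration with invented process and destination names, turns that idea into a runtime allow/deny decision.

```python
import fnmatch

# Hypothetical policy distilled from an observed baseline: which binaries
# an AI inference container may run and where it may connect.
POLICY = {
    "allowed_processes": ["python*", "gunicorn"],
    "allowed_destinations": ["model-registry:443", "feature-store:5432"],
}

def enforce(process: str, destination: str) -> str:
    """Return 'allow' or 'deny' for a runtime event checked against the policy."""
    proc_ok = any(fnmatch.fnmatch(process, p) for p in POLICY["allowed_processes"])
    dest_ok = destination in POLICY["allowed_destinations"]
    return "allow" if proc_ok and dest_ok else "deny"

print(enforce("python3", "model-registry:443"))  # allow
print(enforce("bash", "attacker.example:80"))    # deny
```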

Enterprises should also integrate the DevSecOps tool chain into their security operations, including instrumentation and container-native controls that deploy without code changes or SDKs, to make their systems more secure as they adopt containers and AI. “It is always easier to successfully deploy if there is no change in behavior required for successful deployment,” Richards said.

To make these security controls a reality, many larger organizations have been creating and implementing AI governance roles within their operations, he said. “It is essentially setting those policies, deciding what you want to do with it, and then making sure of what you allow or disallow.”

Container security has always been a problem

Securing containers in the age of AI is difficult because containers were never designed with security in mind, Richards said.

They were actually designed to allow microservices to co-reside inside a single virtual machine with no native security boundaries. So you have to go in there and be able to put tools in place to secure those containers. But the container is a super-lightweight way to package up and deploy and move around a workload.

Matthew Richards

Complicating matters, he said, is that an application may comprise many containers working together. Such bundles are faster, easier to adapt, easier to reuse, and easier to develop, he said.

And each of those containers has their own CI/CD-like release process. So one container might update every day because there are new patches every day, and another container might need to be updated every week. There is all of this happening.

Matthew Richards

Ultimately, the importance, value, and power of AI and containers will make all of this extra security work worthwhile, Richards said. “We are in the middle of a digital transformation. We need to reduce costs. We need to be more agile. We need to get to market faster. How can we do that? By using … containers on the backend to make it super-easy to deploy, update, and manage.”

Ensuring the security of it all will be the key to success with AI in containers, Richards said.

You can secure AI applications by understanding what is going on in those container endpoints and making sure that what you want to happen is actually what is happening in those endpoints. And if not, then you do something to fix it.

Matthew Richards

ML-BOMs are one critical control to consider for AI

Dhaval Shah, Director of Product Management at ReversingLabs, wrote recently about how critical visibility is for the AI supply chain. "Without proper visibility into these AI integrations, organizations face data exfiltration, regulatory non-compliance, and intellectual property exposure," he wrote.

Shah said that if you're familiar with a software bill of materials (SBOM), which provides a detailed inventory of all software components in your applications, you can think of an ML-BOM as an extension that adds visibility into AI/ML components. While an SBOM helps you track traditional software dependencies, an ML-BOM focuses on the unique components and risks introduced by AI and machine learning. This visibility helps identify risks, such as backdoored models and unauthorized AI service connections, that standard security tools miss, Shah said.
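To make the ML-BOM idea concrete, here is a minimal Python sketch that emits a CycloneDX-style document. CycloneDX 1.5 added a machine-learning-model component type; the model name, version, and package URL below are hypothetical placeholders, and a real ML-BOM would carry considerably more detail (model cards, datasets, licenses).

```python
import json

# Minimal CycloneDX-style ML-BOM. The entries are illustrative, not a
# complete or validated document.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",  # hypothetical model
            "version": "2.3.0",
            "purl": "pkg:huggingface/acme/sentiment-classifier@2.3.0",
        },
        {
            "type": "library",
            "name": "torch",  # the runtime dependency serving the model
            "version": "2.2.1",
        },
    ],
}

print(json.dumps(ml_bom, indent=2))
```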

The integration of AI into enterprise software is accelerating, and the threats targeting these components are increasing too. One recent attack, nullifAI, represents just the beginning of what security researchers expect to be a new frontier in software supply chain attacks, Shah said.

ML-BOMs give you the foundation to navigate this evolving landscape with confidence. By maintaining comprehensive visibility into your AI components, you'll be better equipped to detect emerging threats early, adapt to changing regulations, and make informed decisions about AI adoption.

Dhaval Shah