Spectra Assure Free Trial
Get your 14-day free trial of Spectra Assure for Software Supply Chain Security
Get Free TrialMore about Spectra Assure Free Trial
AI is changing how application security (AppSec) decides who and what to trust. To secure software, AppSec teams are moving from static, identity-based trust to dynamic, probabilistic, behavior-based trust. They’re also adjusting to a shift in security from the component layer of applications to the data layer.
Because AI pushes a meaningful portion of security risk up — from static components such as libraries, services, containers into the data and context the system consumes at runtime — classic AppSec controls, while still necessary, are no longer sufficient.
Pat Opet, CISO at JPMorgan Chase (JPMC), in his call-to-action open letter on third-party software risk last year, warned that many SaaS models rely on implicit trust of the provider while dismantling traditional security boundaries that had kept organizations protected from attacks. That problem is just as acute with AI coding and vibe coding.
Saša Zdjelar, chief trust officer at ReversingLabs (RL), said the problem extends to what he calls “trust debt,” a type of technical debt whose accumulation AI is accelerating.
“What Pat is describing is the unwinding of decades of trust debt. The industry defaulted to implicit trust in vendors because verifying was hard and expensive.”
—Saša Zdjelar
Here’s why agentic AI is upending how AppSec handles risk — and why managing your trust debt is now essential.
[ See webinar: Stop Trusting Packages. Start Verifying Them. ]
The core issue is that systems based on large language models (LLMs) lack a reliable security boundary between instructions and data inside the prompt, said Christopher Jess, senior R&D manager at Black Duck Software. That means that “when developers concatenate trusted instructions with untrusted content, they can accidentally grant attacker-controlled data the ability to steer decisions,” he said.
“Indirect prompt injection is specifically defined as malicious prompts ingested from separate data sources, like web content or plugins, during normal operation and can lead to outcomes such as data exfiltration or unauthorized actions without the attacker ever logging into the application.”
—Christopher Jess
This new reality has operational ramifications, he said.
“Data security becomes application security. Joint government guidance on AI data security states plainly that machine-learning models learn their decision logic from data, so an attacker who can manipulate data can manipulate the logic of the AI-based system.”
—Christopher Jess
JPMC is experimenting with ways to reduce its implicit trust in third parties and regain control over data and execution environments, Opet said in a recent discussion at the RSAC Conference. The bank is using tokenization to limit data exposure, implementing “confidential computing” models that greatly reduce SaaS vendors’ access to sensitive data, and testing a “bring your own cloud” model, in which SaaS vendors operate from infrastructure deployed within JPMC’s protected environment.
Opet said JPMC is creating a new architecture for AI-powered agents to run on that will limit their access to sensitive information and IT assets. In effect, he said, this will separate the employee desktop from the agent desktop. “Ideally we would want these agents to run an ecosystem where they [have] an identity but no entitlements,” Opet said.
The way AI agents behave is what lies behind the attack surface’s expansion to the data layer, said Elad Luz, head of research at Oasis Security
“Agents don’t execute predefined code paths. They reason over data and use it to decide what to do next. That means untrusted data is no longer passive input. It can steer an agent’s behavior, turning observation into control.”
—Elad Luz
New forms of prompt injection, poisoned context windows, and manipulated sources are constantly being discovered, he said, “and the attack surface is enormous and very difficult to fully secure.”
Tim Freestone, chief strategy and marketing officer at Kiteworks, said that while the greatest software application risks today are at the data layer, risk still exists at network perimeters and endpoints and inside application components. So component-layer security remains necessary, but it’s insufficient when AI agents can reach across repositories and workflows at machine speed, without the ethical hesitation a human employee might exercise.
“The strategic ramification is clear: Organizations that fail to enforce governance at the data layer — controlling what is accessed, by whom or what, under what policy — will find themselves securing infrastructure that AI simply bypasses. The security perimeter hasn’t been breachedt; it’s been made irrelevant.”
—Tim Freestone
Jeff Williams, CTO and co-founder of Contrast Security, noted that with AI, the code no longer decides which data should stay data and which should be turned into behavior.
“It’s incredibly dangerous to pass any untrusted data to an LLM, and we don’t really have any great ways to solve this yet. All we can do is establish boundaries — least privilege, policy enforcement, and runtime controls — that minimize attacks and control exploitation.”
—Jeff Williams
Williams also recommended putting strict policy checks around tool use, separating trusted from untrusted data, requiring human approval for high-impact actions, and using runtime monitoring and protection on everything that matters: prompts, retrieved context, tool calls, and actions.
“If you cannot see what the agent actually did in runtime, you’re basically guessing.”
—Jeff Williams
AI systems exposed to untrusted data must have their privileges reduced to match the author of that data, said David Brauchler, technical director and head of AI and ML security at the NCC Group.
“We also need to better manage data provenance. We don’t have to know where every piece of data comes from, but we do have to know how much we trust that data relative to the intended execution context.”
—David Brauchler
Brauchler questioned the wisdom of tools such as OpenClaw that take a “one agent to rule them all” approach, adding that “we need to split tasks between low-privilege, sandboxed models and then convert their responses to safe data types.”
Aner Gelman, vice president of product at Salt Security, said organizations also need to secure the LLM that powers the agents and the action layer. “On top of validating and securing an organization’s MCPs, having a robust API security program helps organizations secure the underlying infrastructure that allows agents to operate,” he said.
For his part, Chris McHenry, chief product officer at Aviatrix, recommends controlling internet egress. “Full default-deny may not be practical for every environment, but it’s powerful where you can apply it. And even where you can’t, you should be layering.”
He advised blocking command-and-control, access to file-sharing sites, and POST and PUT requests to external services. “Apply URL category filtering and threat blocking — the same web-filtering controls you’d put in front of users,” he said.
“These are containment-based approaches and good hardening practices that every security practitioner already understands. AI agents aren’t as unfamiliar as they seem. Treat the workloads like users on the network and you’re already ahead.”
—Chris McHenry
A control that organizations most consistently underestimate is the governance gate between AI reasoning and execution, said Yogesh Thanvi, a member of the ISACA Emerging Trends Working Group and a senior software development engineer at Akamai Technologies. Last year’s Chevrolet chatbot incident, where a hacker manipulated a dealership AI agent into agreeing to sell a car for $1, showed what can happen when an autonomous actor has no policy checkpoint between reasoning and action, he said.
“An agent that can call APIs, write to databases, or trigger transactions without that checkpoint is operating with privileged access and no supervision. Every sensitive downstream action should require an explicit authorization decision, not an implicit one inherited from the model’s confidence level.”
—Yogesh Thanvi
Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions and author of AI Identities: Governing the Next Generation of Autonomous Actors, said organizations need to treat AI agents as identities with ownership, access limits, and lifecycle controls. He also recommended data sanitization pipelines to help remove sensitive or malicious inputs, continuous adversarial testing to expose manipulation risks, tight scoping of access to APIs and tools, and frequent rotation of short-lived credentials. Audit trails must capture inputs, outputs, and decisions, not just access events, he stressed.
Kiteworks’ Freestone said the controls that will make the most difference now are containment capabilities: purpose binding that limits what an AI agent is authorized to do, kill switches that can rapidly terminate a misbehaving agent, and network isolation that prevents lateral movement into sensitive systems. Layered beneath those are data-layer enforcement mechanisms, which include attribute-based access control that evaluates every AI request against policy in real time, input validation that guards against prompt injection and data poisoning, and tamper-evident audit logging that creates the evidentiary trail regulators and counsel will demand, he said.
“The common thread is shifting from observation to action. Most organizations can watch an AI agent do something unexpected, but far fewer can actually stop it.”
—Tim Freestone
Eran Kinsbruner, vice president of product marketing at Checkmarx, said risk arises from agentic applications that bring both faster development cycles and more complex system behavior. Careful management is needed, but uncertainty still reigns about how to secure these systems effectively, he said.
“Organizations that regularly test their assumptions, refine their approach, and stay flexible in how they respond to new risks will be in a stronger position as the landscape continues to evolve.”
—Eran Kinsbruner
Contrast Security’s Williams said formulating a practical strategy for securing agentic AI applications is not complicated, but it does require discipline, “which is not how I’d describe most of the AI efforts I’ve seen.”
“I think the most important thing is to watch what [the agent] actually does in production. Base security decisions on runtime truth, not just architecture diagrams or source code. AI agents increase the speed and complexity of software behavior, which makes runtime visibility and control even more important.”
—Jeff Williams
Brett Smith, a distinguished software developer at SAS, said teams must always remember their AppSec foundations.
“Sanitize all incoming data, grant minimal necessary permissions, and validate outputs before use. These concepts are not new, but they are critically important when you have autonomous agents operating in your infrastructure.”
—Brett Smith
While Opet’s open letter to software providers centered on JPMC’s intention to send noncompliant firms packing, doing that is only part of a broader software supply chain risk program at the bank. JPMC has also boosted its ability to assess the risks across the supply chain, using deep, third-party security assessments of suppliers’ software, threat intelligence to enable proactive vendor risk detection, and business context-focused risk analyses that assess the sensitivity of data and operational impacts associated with specific applications.
Opet said at the RSAC Conference that systems that scale with generative AI can give technologists “a great way to make the business much more effective,” but human-assisted AI tools such as AI coding agents present a challenge. They can greatly increase worker productivity, but their effectiveness depends on their access to employee data. That creates severe cybersecurity and data privacy risks, he said.
JPMC’s move to a controlled architecture should give the bank the confidence to scale AI coding assistance and other AI-powered desktop tools because unintended consequences such as identity theft and abuse will be greatly curbed, RL’s Zdjelar said
Noting that AI coding is putting trust debt in the spotlight, Zdjelar said JPMC’s position on supply chain risk is a good model for organizations to work toward.
“The question every CISO should be asking now isn’t whether they can afford to do what JPMC is doing. It’s whether they can afford not to.”
—Saša Zdjelar
This new class of AI tool supply chain attack highlights how trust of agents can be exploited.
AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.
Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Threat actors are leveraging the freewheeling vibe-coding trend to deliver malicious software at scale.


