RL Blog

AppSec & Supply Chain Security | April 30, 2026

How agentic AI flips the trust model

As AppSec shifts focus from the components to data, your strategy needs updating. Are you on top of your trust debt?

John P. Mello Jr., freelance technology writer

AI is changing how application security (AppSec) decides who and what to trust. To secure software, AppSec teams are moving from static, identity-based trust to dynamic, probabilistic, behavior-based trust. They’re also adjusting to a shift in security from the component layer of applications to the data layer.

Because AI pushes a meaningful portion of security risk up from static components such as libraries, services, and containers into the data and context the system consumes at runtime, classic AppSec controls, while still necessary, are no longer sufficient.

Pat Opet, CISO at JPMorgan Chase (JPMC), in his call-to-action open letter on third-party software risk last year, warned that many SaaS models rely on implicit trust of the provider while dismantling traditional security boundaries that had kept organizations protected from attacks. That problem is just as acute with AI coding and vibe coding.

Saša Zdjelar, chief trust officer at ReversingLabs (RL), said the problem extends to what he calls “trust debt,” a type of technical debt whose accumulation AI is accelerating. 

“What Pat is describing is the unwinding of decades of trust debt. The industry defaulted to implicit trust in vendors because verifying was hard and expensive.”
—Saša Zdjelar

Here’s why agentic AI is upending how AppSec handles risk — and why managing your trust debt is now essential. 

[ See webinar: Stop Trusting Packages. Start Verifying Them. ]

Who are you going to trust?

The core issue is that systems based on large language models (LLMs) lack a reliable security boundary between instructions and data inside the prompt, said Christopher Jess, senior R&D manager at Black Duck Software. That means that “when developers concatenate trusted instructions with untrusted content, they can accidentally grant attacker-controlled data the ability to steer decisions,” he said.

“Indirect prompt injection is specifically defined as malicious prompts ingested from separate data sources, like web content or plugins, during normal operation and can lead to outcomes such as data exfiltration or unauthorized actions without the attacker ever logging into the application.”
—Christopher Jess

This new reality has operational ramifications, he said.

“Data security becomes application security. Joint government guidance on AI data security states plainly that machine-learning models learn their decision logic from data, so an attacker who can manipulate data can manipulate the logic of the AI-based system.”
—Christopher Jess
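Jess's concatenation point can be sketched in a few lines of Python. This is a minimal illustration, not any specific framework's API; the message structure and the `<untrusted>` delimiter are assumptions made for the example:

```python
SYSTEM_PROMPT = "Summarize the following page for the user."

def unsafe_prompt(untrusted_page: str) -> str:
    # Anti-pattern: instructions and untrusted data share one string, so
    # text in the page can masquerade as an instruction to the model.
    return SYSTEM_PROMPT + "\n" + untrusted_page

def safer_prompt(untrusted_page: str) -> list[dict]:
    # Partial mitigation: keep untrusted content in its own, clearly
    # delimited message so downstream controls can treat it as data.
    # This reduces, but does not eliminate, indirect injection risk.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": "<untrusted>\n" + untrusted_page + "\n</untrusted>"},
    ]
```

Delimiting alone is not a security boundary, which is why the runtime controls discussed below still matter.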

JPMC is experimenting with ways to reduce its implicit trust in third parties and regain control over data and execution environments, Opet said in a recent discussion at the RSAC Conference. The bank is using tokenization to limit data exposure, implementing “confidential computing” models that greatly reduce SaaS vendors’ access to sensitive data, and testing a “bring your own cloud” model, in which SaaS vendors operate from infrastructure deployed within JPMC’s protected environment.

Opet said JPMC is creating a new architecture for AI-powered agents to run on that will limit their access to sensitive information and IT assets. In effect, he said, this will separate the employee desktop from the agent desktop. “Ideally we would want these agents to run [in] an ecosystem where they [have] an identity but no entitlements,” Opet said.

The new home for AppSec risk

The way AI agents behave is what lies behind the attack surface’s expansion to the data layer, said Elad Luz, head of research at Oasis Security.

“Agents don’t execute predefined code paths. They reason over data and use it to decide what to do next. That means untrusted data is no longer passive input. It can steer an agent’s behavior, turning observation into control.”
—Elad Luz

New forms of prompt injection, poisoned context windows, and manipulated sources are constantly being discovered, he said, “and the attack surface is enormous and very difficult to fully secure.”

Tim Freestone, chief strategy and marketing officer at Kiteworks, said that while the greatest software application risks today are at the data layer, risk still exists at network perimeters and endpoints and inside application components. So component-layer security remains necessary, but it’s insufficient when AI agents can reach across repositories and workflows at machine speed, without the ethical hesitation a human employee might exercise. 

“The strategic ramification is clear: Organizations that fail to enforce governance at the data layer — controlling what is accessed, by whom or what, under what policy — will find themselves securing infrastructure that AI simply bypasses. The security perimeter hasn’t been breached; it’s been made irrelevant.”
—Tim Freestone

Jeff Williams, CTO and co-founder of Contrast Security, noted that with AI, the code no longer decides which data should stay data and which should be turned into behavior.

“It’s incredibly dangerous to pass any untrusted data to an LLM, and we don’t really have any great ways to solve this yet. All we can do is establish boundaries — least privilege, policy enforcement, and runtime controls — that minimize attacks and control exploitation.”
—Jeff Williams

Control access and egress

Williams also recommended putting strict policy checks around tool use, separating trusted from untrusted data, requiring human approval for high-impact actions, and using runtime monitoring and protection on everything that matters: prompts, retrieved context, tool calls, and actions. 

“If you cannot see what the agent actually did in runtime, you’re basically guessing.”
—Jeff Williams
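Williams' recommendations boil down to a policy gate between the agent and its tools. Here is one hedged sketch of that idea; the tool names, risk tiers, and approval flag are invented for illustration:

```python
ALLOWED_TOOLS = {"search_docs", "read_ticket", "send_email", "transfer_funds"}
HIGH_IMPACT = {"send_email", "transfer_funds"}   # actions needing human sign-off

audit_log = []  # runtime visibility: record what the agent actually did

def gate_tool_call(tool: str, human_approved: bool = False) -> str:
    """Return the policy decision for one proposed tool call."""
    if tool not in ALLOWED_TOOLS:
        decision = "deny"            # unknown tools are never executed
    elif tool in HIGH_IMPACT and not human_approved:
        decision = "hold"            # high-impact actions wait for approval
    else:
        decision = "allow"
    audit_log.append((tool, decision))  # log every decision, not just denials
    return decision
```

The audit log is the point Williams stresses: without a record of what the agent did at runtime, "you're basically guessing."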

AI systems exposed to untrusted data must have their privileges reduced to match the author of that data, said David Brauchler, technical director and head of AI and ML security at the NCC Group.

“We also need to better manage data provenance. We don’t have to know where every piece of data comes from, but we do have to know how much we trust that data relative to the intended execution context.”
—David Brauchler

Brauchler questioned the wisdom of tools such as OpenClaw that take a “one agent to rule them all” approach, adding that “we need to split tasks between low-privilege, sandboxed models and then convert their responses to safe data types.” 

Aner Gelman, vice president of product at Salt Security, said organizations also need to secure the LLM that powers the agents and the action layer. “On top of validating and securing an organization’s MCPs, having a robust API security program helps organizations secure the underlying infrastructure that allows agents to operate,” he said.

For his part, Chris McHenry, chief product officer at Aviatrix, recommends controlling internet egress. “Full default-deny may not be practical for every environment, but it’s powerful where you can apply it. And even where you can’t, you should be layering.”

He advised blocking command-and-control, access to file-sharing sites, and POST and PUT requests to external services. “Apply URL category filtering and threat blocking — the same web-filtering controls you’d put in front of users,” he said.

“These are containment-based approaches and good hardening practices that every security practitioner already understands. AI agents aren’t as unfamiliar as they seem. Treat the workloads like users on the network and you’re already ahead.”
—Chris McHenry
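McHenry's layered egress controls can be expressed as a simple proxy-side decision function. The hosts and category labels below are placeholder assumptions, not any vendor's configuration:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example.com"}        # assumed internal allowlist
BLOCKED_CATEGORIES = {"file-sharing", "pastebin"}   # URL category filtering

def egress_decision(method: str, url: str, category: str) -> str:
    """Layered check on one outbound request from an AI agent workload."""
    host = urlparse(url).hostname or ""
    if host in ALLOWED_HOSTS:
        return "allow"               # explicitly vetted destination
    if category in BLOCKED_CATEGORIES:
        return "block"               # classic web-filtering layer
    if method.upper() in {"POST", "PUT"}:
        return "block"               # no writes to unvetted external services
    return "allow"                   # plain reads pass the remaining layers
```

This mirrors his "treat the workloads like users" framing: the same category and method filters already applied to people.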

Containment is essential

A control that organizations most consistently underestimate is the governance gate between AI reasoning and execution, said Yogesh Thanvi, a member of the ISACA Emerging Trends Working Group and a senior software development engineer at Akamai Technologies. Last year’s Chevrolet chatbot incident, where a hacker manipulated a dealership AI agent into agreeing to sell a car for $1, showed what can happen when an autonomous actor has no policy checkpoint between reasoning and action, he said. 

“An agent that can call APIs, write to databases, or trigger transactions without that checkpoint is operating with privileged access and no supervision. Every sensitive downstream action should require an explicit authorization decision, not an implicit one inherited from the model’s confidence level.”
—Yogesh Thanvi

Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions and author of AI Identities: Governing the Next Generation of Autonomous Actors, said organizations need to treat AI agents as identities with ownership, access limits, and lifecycle controls. He also recommended data sanitization pipelines to help remove sensitive or malicious inputs, continuous adversarial testing to expose manipulation risks, tight scoping of access to APIs and tools, and frequent rotation of short-lived credentials. Audit trails must capture inputs, outputs, and decisions, not just access events, he stressed.

Kiteworks’ Freestone said the controls that will make the most difference now are containment capabilities: purpose binding that limits what an AI agent is authorized to do, kill switches that can rapidly terminate a misbehaving agent, and network isolation that prevents lateral movement into sensitive systems. Layered beneath those are data-layer enforcement mechanisms, which include attribute-based access control that evaluates every AI request against policy in real time, input validation that guards against prompt injection and data poisoning, and tamper-evident audit logging that creates the evidentiary trail regulators and counsel will demand, he said.

“The common thread is shifting from observation to action. Most organizations can watch an AI agent do something unexpected, but far fewer can actually stop it.”
—Tim Freestone
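Two of Freestone's containment capabilities, purpose binding and a kill switch, fit naturally in a small wrapper around the agent. A toy sketch, with invented names throughout:

```python
class BoundAgent:
    """Toy agent wrapper: fixed purpose, allowlisted actions, kill switch."""

    def __init__(self, purpose: str, permitted_actions: set):
        self.purpose = purpose
        self.permitted = set(permitted_actions)  # purpose binding: fixed scope
        self.killed = False
        self.kill_reason = None

    def kill(self, reason: str) -> None:
        self.killed = True           # rapid termination path
        self.kill_reason = reason

    def act(self, action: str) -> str:
        if self.killed:
            return "terminated"      # a killed agent stays stopped
        if action not in self.permitted:
            self.kill("out-of-purpose action: " + action)
            return "terminated"      # stop it, don't just observe it
        return "executed:" + action
```

The key behavior is the last branch: an out-of-purpose request doesn't just fail, it trips the kill switch, moving "from observation to action."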

Get back to the AppSec fundamentals

Eran Kinsbruner, vice president of product marketing at Checkmarx, said risk arises from agentic applications that bring both faster development cycles and more complex system behavior. Careful management is needed, but uncertainty still reigns about how to secure these systems effectively, he said.

“Organizations that regularly test their assumptions, refine their approach, and stay flexible in how they respond to new risks will be in a stronger position as the landscape continues to evolve.”
—Eran Kinsbruner

Contrast Security’s Williams said formulating a practical strategy for securing agentic AI applications is not complicated, but it does require discipline, “which is not how I’d describe most of the AI efforts I’ve seen.” 

“I think the most important thing is to watch what [the agent] actually does in production. Base security decisions on runtime truth, not just architecture diagrams or source code. AI agents increase the speed and complexity of software behavior, which makes runtime visibility and control even more important.”
—Jeff Williams

Brett Smith, a distinguished software developer at SAS, said teams must always remember their AppSec foundations.

“Sanitize all incoming data, grant minimal necessary permissions, and validate outputs before use. These concepts are not new, but they are critically important when you have autonomous agents operating in your infrastructure.”
—Brett Smith
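Smith's fundamentals translate directly to agent I/O. A minimal sketch, assuming a caller that expects a bare ticket ID; the patterns are illustrative, not a complete sanitization scheme:

```python
import re

def sanitize_input(text: str) -> str:
    # Strip control characters that could smuggle instructions past parsers;
    # a real pipeline would also normalize encodings and enforce length limits.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", text)

def validate_output(candidate: str) -> bool:
    # Validate before use: insist the agent's answer matches the exact shape
    # the caller expects (here, an invented TICKET-<digits> ID), nothing more.
    return re.fullmatch(r"TICKET-\d{1,6}", candidate) is not None
```

Strict output validation is what keeps an agent's free-form response from being treated as executable instructions downstream.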

Why trust debt matters

While Opet’s open letter to software providers centered on JPMC’s intention to send noncompliant firms packing, doing that is only part of a broader software supply chain risk program at the bank. JPMC has also boosted its ability to assess the risks across the supply chain, using deep, third-party security assessments of suppliers’ software, threat intelligence to enable proactive vendor risk detection, and business context-focused risk analyses that assess the sensitivity of data and operational impacts associated with specific applications.

Opet said at the RSAC Conference that systems that scale with generative AI can give technologists “a great way to make the business much more effective,” but human-assisted AI tools such as AI coding agents present a challenge. They can greatly increase worker productivity, but their effectiveness depends on their access to employee data. That creates severe cybersecurity and data privacy risks, he said.

JPMC’s move to a controlled architecture should give the bank the confidence to scale AI coding assistance and other AI-powered desktop tools because unintended consequences such as identity theft and abuse will be greatly curbed, RL’s Zdjelar said.

Noting that AI coding is putting trust debt in the spotlight, Zdjelar said JPMC’s position on supply chain risk is a good model for organizations to work toward.

“The question every CISO should be asking now isn’t whether they can afford to do what JPMC is doing. It’s whether they can afford not to.”
—Saša Zdjelar

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: AppSec & Supply Chain Security, Artificial Intelligence (AI)/Machine Learning (ML)
