AppSec & Supply Chain Security | February 27, 2025

Agentic AI and development: How to get ahead of rising risk

Software teams will need to get on board with agentic AI. But AppSec teams need new visibility and controls for the SDLC.

Ericka Chickowski, Freelance writer

As technology leadership pushes ever harder to deeply embed AI agents into software development lifecycles — in some cases, even using agentic AI to replace midlevel developers — application security (AppSec) is about to go from complex to a lot more complicated.

The industry is abuzz with hype about how agentic AI could handle functions either semi- or fully autonomously, but as always with hot new technology, the security implications have yet to be fully assessed, said Aquia chief executive Chris Hughes.

While there is tremendous potential and nearly unlimited use cases, there are also key security considerations and challenges.

Chris Hughes

As with so many transformational advances, security teams will get nowhere by trying to obstruct agentic AI. Security leaders and teams must prepare the organization for these new AI agents with new visibility, controls, and governance for the entire software development lifecycle (SDLC).

Here's what your AppSec team needs to know about what's coming with agentic AI — and how to manage risk with increasing SDLC complexity.

Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security

The agentic AI genie is out of the bottle

Agentic AI, artificial intelligence systems designed to make autonomous decisions and take actions within business systems, is not a new phenomenon. But enhancements to natural-language processing (NLP) and the advanced reasoning of large language models (LLMs) are making agentic AI capable of more complex, chained decisions — and of adapting them to less-defined business use cases.

These increases in the capabilities and versatility of agentic AI are appealing to enterprises. Gartner estimates that by 2028, about 35% of software will utilize AI agents — and that the agents will make it possible to automate at least 15% of today’s day-to-day work decisions. This estimate encompasses automatable tasks across a range of business functions, from sales to project management.

Tom Coshow, senior director analyst at Gartner, recently wrote that "software developers are likely to be some of the first affected, as existing AI coding assistants gain maturity."

Recent coverage by Axios claims that agentic AI is poised to land in 2025. And Meta’s Mark Zuckerberg told Joe Rogan in a recent interview that in 2025, Meta and other companies will have an AI "that can effectively be a sort of midlevel engineer.”

Because these advances are so compelling from an engineering perspective, there is no putting the agentic AI genie back in the bottle, experts agree, even though AI will bring significant technical and business risks to the application stack.

Agentic AI builds on low-code and no-code

In many ways, agentic AI is extending what the low-code and no-code movement started years ago in its push to arm citizen developers and streamline development workflows. Many of today’s coding assistants and automated AI agents evolved from low-code/no-code platforms.

Agentic AI is poised to blow up the business process layer, replacing hand-coded business logic in process workflows, Ed Anuff, chief product officer at DataStax, wrote in a recent think piece about AI agents at The New Stack. In many cases, it will handle a huge chunk of the work that engineering teams do today, whether for integrations or whole new applications.

When agentic AI is applied to business process workflows, it can replace fragile, static business processes with dynamic, context-aware automation systems.

Ed Anuff

Know the risks of agentic AI in development

In many ways, agentic AI will serve to abstract security problems. Organizations will need to build safeguards and governance around how the agents operate, the security of the code, and the security of the models that run them, all while maintaining and improving the traditional guardrails for the security and quality of code and logic that’s produced either by humans or AI, said Dhaval Shah, senior director of product management for ReversingLabs.

Securing AI in development is like playing chess where the pieces move by themselves. With AI in development, not everything that can be secured can be seen, and not everything that can be seen can be secured.

Dhaval Shah

In particular, agentic AI ratchets up software supply chain security risk, Shah said, explaining that adding AI agents to the development workflow challenges traditional models in two big ways.

First, AI agents blur traditional trust boundaries by seamlessly mixing proprietary, open-source, and generated code, making traditional software composition analysis ineffective. Second, they introduce new dependencies we can't easily track or verify, from model weights to training data, creating blind spots in our security monitoring.

Dhaval Shah

Shah said there are three major risks that AppSec pros will need to stay ahead of as agentic AI takes hold within their development organizations: dependency chain opacity, an expanded attack surface, and emergent behaviors.

Dependency chain opacity

As AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies, supply chain blind spots will multiply, Shah said. “Agentic AI creates blind spots in our security visibility. Unlike human developers, who might carefully vet a library, AI can pull from numerous sources simultaneously, making traditional dependency tracking insufficient,” he said.
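The dependency-tracking concern can be made concrete with a small sketch. This is a hypothetical illustration, not an RL tool or API: it diffs the dependency set before and after an AI-generated change against a team-vetted allowlist, flagging anything the agent introduced that no human has reviewed. The package names and the allowlist are invented for the example.

```python
# Hypothetical sketch: flag dependencies an AI agent introduced that are
# not on a team-vetted allowlist. The allowlist and package names below
# are assumptions for illustration only.

VETTED = {"requests", "numpy", "cryptography"}

def diff_dependencies(before: set[str], after: set[str]) -> dict[str, list[str]]:
    """Compare dependency sets before and after an AI-generated change."""
    added = after - before
    return {
        "added": sorted(added),
        "unvetted": sorted(dep for dep in added if dep not in VETTED),
    }

report = diff_dependencies(
    before={"requests"},
    after={"requests", "numpy", "leftpad-ai"},  # agent pulled in two new deps
)
print(report["unvetted"])  # anything here needs human review before merge
```

In practice such a check would sit in CI and read real manifests or lockfiles; the point is simply that every agent-introduced dependency gets diffed against something a human has vetted.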

Expanded attack surface

As agentic AI-driven coding assistants grow more sophisticated in executing multistep, chained software engineering tasks, they’ll be touching and interacting with a broader range of systems, applications, and APIs. This will expand the attack surface of not only the applications but also the development stack itself.

This interconnected nature creates a broader attack surface where a single weak link can compromise the entire workflow. For example, an AI agent coordinating a supply chain could be exploited to inject malicious instructions across multiple systems.

Dhaval Shah

Emergent behaviors

As AI collaborates with human developers, emergent vulnerabilities may arise from unforeseen interactions between AI-generated snippets and hand-crafted code, Shah said. “This blend can create novel, complex failure modes that defy traditional testing and threat models.”

For example, research is already emerging showing how attackers are turning their sights to open AI models to establish novel malware attack techniques. ReversingLabs research recently outlined one such scheme that targeted the machine-learning model-sharing platform Hugging Face with models containing malicious code designed to evade that platform’s security scanning mechanism.

How security teams can come together

Security professionals will need to collaborate to stay abreast of the risks presented by agentic AI and create the right blend of visibility controls over an increasingly complicated SDLC. OWASP recently introduced new threats and mitigations guidance focused on agentic AI, complete with concrete threat modeling information and advice on early mitigation strategies.

Aquia's Hughes said that a recent thought piece titled "Governing AI Agents," by Noam Kolt of the Governance of AI Lab at Hebrew University, should be required reading for AppSec teams.

As we prepare to see pervasive agent use and implementation, we need to address many issues related to agentic governance.

Chris Hughes

ReversingLabs' Shah said security leaders need to balance strategic oversight with immediate controls, because agentic AI is already here. That means deploying AI-aware monitoring that tracks both code generation and dependency inclusion, creating automated security gates that match AI development speed, and establishing clear boundaries for AI tool usage in critical code.
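One way to picture an "automated security gate" of this kind is a policy check that runs on every proposed change, at the same speed the AI works. The sketch below is an assumption-laden illustration, not ReversingLabs guidance: the change fields, the list of critical paths, and the policy itself are all invented for the example.

```python
# Hypothetical CI gate sketch: block a merge when an AI-generated change
# touches security-critical paths without a human sign-off. All field
# names and paths here are illustrative assumptions.

CRITICAL_PATHS = ("auth/", "crypto/", "payments/")

def gate(change: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed change."""
    touches_critical = any(
        path.startswith(CRITICAL_PATHS) for path in change["files"]
    )
    if change["ai_generated"] and touches_critical and not change["human_reviewed"]:
        return False, "AI-generated change to critical code requires human review"
    return True, "ok"

allowed, reason = gate({
    "files": ["auth/token.py"],
    "ai_generated": True,
    "human_reviewed": False,
})
# allowed is False: the gate holds this change for a human reviewer
```

Because the check is automated, it keeps pace with agent-generated changes while still enforcing Shah's "clear boundaries for AI tool usage in critical code."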

On the broader strategic front, Shah said, organizations will need to implement trust-but-verify automated security baseline checks and maintain human-review checkpoints for security-critical changes to code and logic. He also recommended that, wherever possible, teams run AI development in contained environments with defined boundaries.

Think of it like giving AI a sandbox to play in, but with clear rules and constant supervision. The key isn't containing AI — it's channeling its power within secure guardrails.

Dhaval Shah
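Shah's "sandbox with clear rules and constant supervision" could be approximated, in miniature, as an explicit action allowlist plus an audit log: the agent may only perform named actions, and every attempt, permitted or not, is recorded. The action names and policy below are hypothetical illustrations, not any vendor's API.

```python
# Hypothetical sketch of a "sandbox with clear rules": an AI agent may
# only perform actions on an explicit allowlist, and every attempt is
# logged for supervision. Action names are illustrative assumptions.

ALLOWED_ACTIONS = {"read_repo", "run_tests", "open_draft_pr"}

audit_log: list[tuple[str, bool]] = []

def attempt(action: str) -> bool:
    """Permit the action only if it is inside the sandbox's rules."""
    permitted = action in ALLOWED_ACTIONS
    audit_log.append((action, permitted))  # constant supervision: log everything
    return permitted

attempt("run_tests")     # inside the rules: permitted
attempt("push_to_main")  # outside the rules: denied and logged
```

The design choice mirrors Shah's framing: the goal is not to contain the AI's usefulness but to channel it, so denied attempts become review signals rather than silent failures.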

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: AppSec & Supply Chain Security
