Anyone in the software industry who still hasn’t accepted that security bolted on after the fact is weaker than security built in from the start is likely to see the light once AI-aided development increases code volume and multiplies vulnerabilities.
In fact, AI has already prompted even laggard organizations to shift left and adopt Secure by Design principles in their development pipelines, offsetting AI-aided development with AI-native security.
Building AI directly into the architecture of a security platform changes what the platform can do, said Randolph Barr, CISO of Cequence Security.
Randolph Barr: "AI-native means built, not bolted. It should not be a feature enhancement."
Barr compared what’s happening now to what occurred during the shift from running applications on premises to using software as a service (SaaS). “Organizations that re-architected for the cloud achieved scalability and resilience, while those that simply layered SaaS wrappers over legacy systems created long-term operational friction,” he said.
In the case of baked-in AI-native security, Barr continued, it can read code, build context graphs, link findings, and drive workflows for prioritization and remediation. “This lets automated, contextual decision making happen across the SDLC,” he said.
If AI is used only after scans of the code to generate summaries, it’s just adding to output, Barr said. “Event-driven pipelines, graph- or vector-based representations, and learning-centric workflows that really improve signal quality … are all signs of a true AI-native system,” he said.
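One of the patterns Barr names, graph- or vector-based representations that improve signal quality, can be illustrated with a minimal sketch: embed finding descriptions as vectors and link pairs that likely share a root cause so they can be triaged together. The bag-of-words embedding and the 0.5 threshold here are toy assumptions; a real platform would use a learned embedding model.

```python
# Sketch: linking related findings via vector similarity, one "AI-native"
# pattern described above. The embedding is a toy bag-of-words vector;
# a real platform would use a learned code/text embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' of a finding description."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_findings(findings: list[str], threshold: float = 0.5) -> list[tuple[int, int]]:
    """Pair up findings whose descriptions are similar enough to
    plausibly share a root cause, so they can be triaged together."""
    vecs = [embed(f) for f in findings]
    return [(i, j)
            for i in range(len(vecs))
            for j in range(i + 1, len(vecs))
            if cosine(vecs[i], vecs[j]) >= threshold]

findings = [
    "sql injection in user login query",
    "sql injection in admin login query",
    "hardcoded aws secret key in config",
]
print(link_findings(findings))  # -> [(0, 1)]: the two SQL injection findings pair up
```

The same idea scales to event-driven pipelines: each new scan result is embedded on arrival and compared against the existing graph rather than reviewed in isolation.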
Here's what you need to know about AI-native application security (AppSec) — and why it matters for advancing your software risk management.
[ See webinar: Trust But Verify: Secure the AI You Build, Buy and Deploy ]
AppSec teams have to keep pace with modern development velocity, said Melody (MJ) Kaufmann, an author and instructor at O’Reilly Media. “AI-generated code increases both speed and risk,” she said.
Eric Schwake, director of cybersecurity strategy at Salt Security, agreed that building AI into the architecture of security platforms is necessary with AI-driven software development. “AI-driven platforms can continuously discover assets, correlate behavior across environments, and identify patterns that humans would miss," he said.
And while development is accelerating, AI’s benefits are also accruing to the bad guys. “Fight fire with fire,” advised Brett Smith, a software developer at SAS.
Brett Smith: "We have to fight back against aggressive attacks powered by AI, using AI for things like root-cause analysis, anomaly detection, and predictive threat modeling."
With AI, defenses are quickly outmatched by offensive capability, he said. “Attackers are already using AI to fuzz our defenses. They generate polymorphic malware and script complex attacks at machine speed. If your defense relies on human analysts staring at dashboards, the battle is already lost.”
But a note of caution was sounded by Willy Leichter, CMO of PointGuard AI. “As AI becomes core to security, it also expands the attack surface and becomes a prime target," he said.
Nor is AI-native integration the final word in software security, said Steven Swift, managing director of Suzu Testing.
Steven Swift: "Building AI into security platforms is most helpful when it’s another layer added into an existing, well-functioning security stack. It is not a good replacement for a security stack."
AI-native security is most appropriate when a platform naturally requires prompts that fit into existing context windows, he said. “AI is great for speed and when a nondeterministic answer is acceptable.”
Faster attacks require faster responses. Eran Kinsbruner, vice president of product marketing at Checkmarx, noted that an AI coding vulnerability can be exploited in less than an hour. Organizations naturally want to release software to the market faster, but they can’t sacrifice quality and security to speed, he said.
Eran Kinsbruner: "With AI-native security, they can continue driving velocity, but with built-in, baked-in safeguards throughout the entire process. They can continuously deliver software and value to their customers while increasing the velocity."
That helps security teams deal more effectively with the AI threat landscape, he said: looking for prompt injection and system prompt leakage early in the software development lifecycle (SDLC), addressing excessive agency, and tackling new threats such as unbounded consumption and vector and embedding weaknesses.
All of this will take some time to figure out, said Saumitra Das, vice president of engineering at Qualys, but shifting security earlier in the SDLC is going to have to become the norm.
Saumitra Das: "Waiting for SecOps to come back and tell you what to fix later will not work in the new world of AI-generated software."
One of the greatest risks arises because large language models (LLMs) and autonomous agents introduce unpredictability, delegation, and decision making into application environments, said Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions and author of AI Identities: Governing the Next Generation of Autonomous Actors.
Rosario Mastrogiacomo: "Traditional AppSec tools were designed to detect vulnerable code patterns, not to evaluate reasoning systems that can dynamically call APIs, escalate tasks, or chain actions together."
What’s needed, he said, are AI-native AppSec platforms that can model agent behavior over time, detect when an agent’s actions drift from its baseline, and trace complex API chains across services. Such platforms “can analyze intent signals, flag prompt injection attempts, and monitor tool invocation patterns,” he said.
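The behavioral checks Mastrogiacomo describes, baselining an agent and flagging drift, can be sketched as follows. The profile format, the tool names, and the 0.3 drift threshold are illustrative assumptions, not any product's API; a real platform would use richer features than call frequencies.

```python
# Sketch of baseline-vs-drift monitoring for an AI agent's tool calls.
# All names and the drift rule are illustrative assumptions.
from collections import Counter

def baseline_profile(calls: list[str]) -> dict[str, float]:
    """Relative frequency of each tool the agent invoked during a
    trusted observation window."""
    counts = Counter(calls)
    total = sum(counts.values())
    return {tool: n / total for tool, n in counts.items()}

def drifted(baseline: dict[str, float], recent: list[str],
            max_shift: float = 0.3) -> list[str]:
    """Flag tools that are new or whose usage share moved more than
    max_shift away from baseline; a crude behavioral-drift signal."""
    current = baseline_profile(recent)
    flags = []
    for tool, share in current.items():
        if tool not in baseline:
            flags.append(f"new tool: {tool}")
        elif abs(share - baseline[tool]) > max_shift:
            flags.append(f"usage shift: {tool}")
    return flags

# Trusted history: the agent mostly searches and summarizes.
history = ["search", "search", "summarize", "search", "summarize"]
# Recent window: a never-seen destructive tool suddenly dominates.
recent = ["delete_user", "search", "delete_user", "delete_user"]
print(drifted(baseline_profile(history), recent))
# -> ['new tool: delete_user', 'usage shift: search']
```

In practice the same comparison would run continuously per agent identity, with flagged drift feeding the API-chain tracing and intent analysis described above.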
“In dynamic, multi-agent ecosystems, the risk is no longer a single flawed line of code,” Mastrogiacomo said. “It’s a cascade of decisions. AI-native AppSec is built to observe and interpret those cascades before they become incidents.”
Maria Paula Ariza, a senior security engineer at Iru, warned that security teams should not treat everything that AI-native tools produce as fact.
Maria Paula Ariza: "There are many instances where a tool will flag code as an issue simply because it lacks full context of the feature it’s reviewing — even tools that claim to take into account full feature context. In other cases, a finding may be marked as critical when it does not truly warrant that level of severity, again due to limited context."
But such caution has always been warranted, she said. “Most security tools have similar limitations. In my opinion, there will always need to be a security professional involved to validate findings and provide the final layer of judgment and verification.”
Security teams were being overwhelmed by security alerts long before AI-generated code juiced development speed beyond what human security teams can handle, said Goh Ser Yoong, CISO of Ryt Bank in Kuala Lumpur, Malaysia, and a member of the ISACA Emerging Trends Working Group. But AI-native tools don’t suffer from alert fatigue.
Goh Ser Yoong: "An AI-native AppSec tool would be able to detect, generate the exact patched code, and submit a pull request along with it. This will help enable the cybersecurity team to review a proposed PR fix along with the engineering team, thus reducing mean time to remediate."
Nonetheless, he said, security teams can’t blindly accept all the PR submissions without understanding the underlying logic. “Those auto-fixes could introduce a high volume of technical debt,” Goh said.
They could also become an attack surface. “Those fixes may not arrive if the tool itself is vulnerable to being tricked into omitting certain vulnerabilities and skip sending the alerts because it has been told to ignore and mark such vulnerabilities as safe without humans finding it out,” he warned.
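The detect, patch, and pull-request workflow Goh describes, including the human-review gate he and others insist on, can be sketched as below. Every type and function here is a hypothetical stand-in; no real scanner or Git hosting API is implied.

```python
# Sketch: an AI-native tool drafts fixes and opens PRs, but a human
# approves every merge. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    issue: str

@dataclass
class PullRequest:
    finding: Finding
    patch: str
    approved: bool = False  # flipped only by a human reviewer

def propose_fix(finding: Finding) -> PullRequest:
    """The tool drafts a patch and opens a PR; it never merges."""
    patch = f"# patched {finding.issue} in {finding.file}"  # placeholder patch
    return PullRequest(finding=finding, patch=patch)

def review_queue(findings: list[Finding]) -> list[PullRequest]:
    """Auto-generated PRs land in a queue for the security and
    engineering teams to validate before merge."""
    return [propose_fix(f) for f in findings]

prs = review_queue([Finding("auth.py", "hardcoded secret")])
assert all(not pr.approved for pr in prs)  # nothing merges without a human
print(len(prs), "PR(s) awaiting human review")
```

Keeping the approval bit out of the tool's reach is the structural answer to both concerns above: auto-fixes cannot silently accumulate debt, and a compromised tool cannot merge a malicious "fix" on its own.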
Sphere’s Mastrogiacomo said AI-native AppSec marks a shift in how we think about AppSec in an era of autonomous systems.
Rosario Mastrogiacomo: "As AI agents increasingly provision access, generate code, and interact with live production systems, the security perimeter is no longer just infrastructure or software. It includes decision-making entities."
Organizations have to think of AI systems as governed actors, not just tools, he said. “That means embedding identity controls, behavioral monitoring, and lifecycle oversight from day one. Security in the AI era is not about slowing down innovation; it’s about ensuring autonomy does not outpace accountability.”
Learn how to develop your own AI security playbook in this webinar with Doug Levin and RL's Tomislav Peričin.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.