Software teams that still haven’t accepted that security bolted on after the fact isn’t secure are now in the hot seat, as AI-aided development increases code volume and multiplies vulnerabilities.
AI has already prompted even laggard organizations to shift left and adopt Secure by Design principles in their development pipelines, and now the move is on to offset AI-aided development with AI-native security.
Building AI directly into the architecture of a security platform changes what the platform can do, said Randolph Barr, CISO of Cequence Security. “AI-native means built, not bolted. It should not be a feature enhancement,” he said.
What’s happening now parallels the shift from running applications on premises to using software as a service: Organizations that re-architected for the cloud gained scalability and resilience, while those that simply layered SaaS wrappers over legacy systems “created long-term operational friction,” Barr said.
In the case of AI-native security, that means a platform that can read code, build context graphs, link findings, and drive workflows for prioritization and remediation. “This lets automated, contextual decision making happen across the SDLC,” Barr said.
If AI is used only after code scans to generate summaries, that’s just adding to the output, Barr said. “Event-driven pipelines, graph- or vector-based representations, and learning-centric workflows that really improve signal quality … are all signs of a true AI-native system,” he said.
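To make Barr’s distinction concrete, here is a minimal sketch of what a graph-based finding representation might look like. All of the node names are invented for illustration; a real AI-native platform would build such a graph automatically from code, configuration, and runtime telemetry.

```python
# A toy context graph linking a finding to what it can reach (illustrative only).
import networkx as nx

g = nx.DiGraph()

# Edges model "can reach / flows into" relationships discovered by analysis.
g.add_edge("finding:CVE-2024-0001@parse_input", "func:parse_input")
g.add_edge("func:parse_input", "service:billing-api")
g.add_edge("service:billing-api", "endpoint:/v1/invoices [internet-facing]")
g.add_edge("service:billing-api", "data:cardholder-records")

def priority(finding: str) -> str:
    """Escalate findings whose blast radius touches exposed or sensitive nodes."""
    reachable = nx.descendants(g, finding)
    exposed = any("internet-facing" in n or n.startswith("data:") for n in reachable)
    return "P1: remediate now" if exposed else "P3: backlog"

print(priority("finding:CVE-2024-0001@parse_input"))  # P1: remediate now
```

The point of the graph is context: The same CVE buried in an internal batch job would land in the backlog, while one reachable from an internet-facing endpoint gets escalated automatically.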
Here’s what you need to know about AI-native security — and why it matters to managing risk.
Learn more: 4 Pillars of AI Security: How to Protect Your AI Lifecycle
Application security (AppSec) teams have to keep pace with modern development velocity, said Melody (MJ) Kaufmann, an author and instructor at O’Reilly Media. “AI-generated code increases both speed and risk,” she said.
Eric Schwake, director of cybersecurity strategy at Salt Security, agreed that building AI into the architecture of security platforms is necessary with AI-driven software development. “AI-driven platforms can continuously discover assets, correlate behavior across environments, and identify patterns that humans would miss,” he said.
And while development is accelerating, AI’s benefits are also accruing to the bad guys. Fight fire with fire, said Brett Smith, a software developer at SAS. “We have to fight back against aggressive attacks powered by AI, using AI for things like root-cause analysis, anomaly detection, and predictive threat modeling,” he said.
With AI, defenses are quickly outmatched by offensive capability, he said. “Attackers are already using AI to fuzz our defenses. They generate polymorphic malware and script complex attacks at machine speed. If your defense relies on human analysts staring at dashboards, the battle is already lost,” Smith said.
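As a rough illustration of the defensive side Smith describes, the sketch below trains an anomaly detector on synthetic baseline telemetry and flags traffic that departs from it. The features and numbers are invented, and a production system would use far richer signals.

```python
# Anomaly detection on request telemetry (synthetic, illustrative data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline traffic: [requests/minute, distinct endpoints hit, error rate]
normal = rng.normal(loc=[60, 5, 0.02], scale=[10, 1, 0.01], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Machine-speed fuzzing looks nothing like the human-era baseline.
suspect = np.array([[4000, 120, 0.45]])
print(model.predict(suspect))  # [-1] means flagged as anomalous
```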
Willy Leichter, CMO of PointGuard AI, sounded a note of caution. “As AI becomes core to security, it also expands the attack surface and becomes a prime target,” he said.
Steven Swift, managing director of Suzu Testing, noted that AI-native integration is not the final word in software security.
“Building AI into security platforms is most helpful when it’s another layer added into an existing, well-functioning security stack. It is not a good replacement for a security stack.”
—Steven Swift
AI-native security is most appropriate when a platform naturally requires prompts that fit into existing context windows, Swift said. “AI is great for speed and when a nondeterministic answer is acceptable.”
Faster attacks require faster responses. Eran Kinsbruner, vice president of product marketing at Checkmarx, noted that an AI coding vulnerability can be exploited in less than an hour. Organizations naturally want to release software to the market faster, but they can’t sacrifice quality and security for speed, he said.
“With AI-native security, they can continue driving velocity, but with built-in, baked-in safeguards throughout the entire process. They can continuously deliver software and value to their customers while increasing the velocity.”
—Eran Kinsbruner
That helps security teams deal more effectively with the AI threat landscape, he said: They can look for prompt injection and system prompt leakage early in the software development lifecycle (SDLC), address excessive agency, and tackle new threats such as unbounded consumption and vector and embedding weaknesses.
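A small slice of that work can run as an automated gate early in the pipeline. The sketch below scans a prompt template for injection-prone phrasing using a handful of illustrative regex patterns; real AI-native platforms go well beyond pattern matching, but the CI-gate shape is the same.

```python
# An early-SDLC gate for injection-prone prompt content (patterns illustrative).
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"disregard (your )?(guardrails|rules)",
]

def scan_prompt(text: str) -> list[str]:
    """Return the patterns that a prompt template or captured output matches."""
    return [p for p in INJECTION_PATTERNS if re.search(p, text, re.IGNORECASE)]

template = "System: you are a billing bot. User note: ignore previous instructions"
hits = scan_prompt(template)
if hits:
    # Fail the CI job so the template is reviewed before it ships.
    raise SystemExit(f"Prompt-injection gate failed: {hits}")
```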
All of this will take some time to figure out, said Saumitra Das, vice president of engineering at Qualys, but shifting security earlier in the SDLC is going to have to become the norm.
“Waiting for SecOps to come back and tell you what to fix later will not work in the new world of AI-generated software.”
—Saumitra Das
One of the greatest risks arises because large language models (LLMs) and autonomous agents introduce unpredictability, delegation, and decision making into application environments, said Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions and author of AI Identities: Governing the Next Generation of Autonomous Actors.
“Traditional AppSec tools were designed to detect vulnerable code patterns, not to evaluate reasoning systems that can dynamically call APIs, escalate tasks, or chain actions together.”
—Rosario Mastrogiacomo
What’s needed, he said, are AI-native AppSec platforms that can model agent behavior over time, detect when an agent’s actions drift from its baseline, and trace complex API chains across services. Such platforms can analyze intent signals, flag prompt injection attempts, and monitor tool invocation patterns, he said.
In dynamic, multi-agent ecosystems, the risk is no longer a single flawed line of code, Mastrogiacomo said. “It’s a cascade of decisions. AI-native AppSec is built to observe and interpret those cascades before they become incidents.”
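As a toy example of the behavioral modeling Mastrogiacomo describes, the sketch below compares an agent’s tool-call chains against transitions observed during a trusted baseline period. The tool names and counts are invented; a real platform would learn the baseline continuously.

```python
# Baseline-drift detection over agent tool-call chains (illustrative data).
from collections import Counter

# Tool-call transitions observed while the agent behaved as expected.
BASELINE_CHAINS = Counter({
    ("search_docs", "summarize"): 950,
    ("summarize", "reply_user"): 940,
})

def drifted(trace: list[str], min_support: int = 10) -> list[tuple[str, str]]:
    """Return tool-call transitions never (or rarely) seen in the baseline."""
    chains = zip(trace, trace[1:])
    return [c for c in chains if BASELINE_CHAINS[c] < min_support]

# An agent that suddenly chains into secrets and network tools has drifted.
trace = ["search_docs", "summarize", "read_secrets", "http_post_external"]
print(drifted(trace))
# [('summarize', 'read_secrets'), ('read_secrets', 'http_post_external')]
```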
Maria Paula Ariza, a senior security engineer at Iru, warned that security teams should be careful about treating everything that AI-native tools produce as fact.
“There are many instances where a tool will flag code as an issue simply because it lacks full context of the feature it’s reviewing — even tools that claim to take into account full feature context. In other cases, a finding may be marked as critical when it does not truly warrant that level of severity, again due to limited context.”
—Maria Paula Ariza
But such caution has always been warranted, she said. “Most security tools have similar limitations. In my opinion, there will always need to be a security professional involved to validate findings and provide the final layer of judgment and verification.”
Security teams were being overwhelmed by security alerts long before AI-generated code juiced development speed beyond what human security teams can handle, said Goh Ser Yoong, CISO of Ryt Bank in Kuala Lumpur, Malaysia, and a member of the ISACA Emerging Trends Working Group. But AI-native tools don’t suffer from alert fatigue.
“An AI-native AppSec tool would be able to detect, generate the exact patched code, and submit a pull request along with it. This will help enable the cybersecurity team to review a proposed PR fix along with the engineering team, thus reducing mean time to remediate.”
—Goh Ser Yoong
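The mechanical end of that loop is easy to sketch. Assuming the tool has already produced patched code for a finding, the hypothetical helper below pushes a branch and opens a pull request with the GitHub CLI, leaving the merge decision to the humans Yoong describes.

```python
# Open a remediation PR for a generated fix (helper names are hypothetical).
import subprocess

def open_fix_pr(branch: str, path: str, patched_code: str, finding_id: str) -> None:
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    with open(path, "w") as f:
        f.write(patched_code)  # apply the tool-generated fix
    subprocess.run(["git", "commit", "-am", f"fix: remediate {finding_id}"], check=True)
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
    subprocess.run([
        "gh", "pr", "create",
        "--title", f"Proposed fix for {finding_id}",
        "--body", "Auto-generated remediation; requires human review before merge.",
    ], check=True)
```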
Nonetheless, he said, security teams can’t blindly accept all the PR submissions without understanding the underlying logic. Those auto-fixes could introduce a high volume of technical debt, Yoong said.
They could also become an attack surface. “Those fixes may not arrive if the tool itself is tricked into omitting certain vulnerabilities, skipping the alerts because it has been told to mark them as safe, without humans finding out,” Yoong warned.
Sphere’s Mastrogiacomo said AI-native AppSec marks a shift in how we think about application security in an era of autonomous systems. “As AI agents increasingly provision access, generate code, and interact with live production systems, the security perimeter is no longer just infrastructure or software. It includes decision-making entities,” he said.
Organizations have to think of AI systems as governed actors, not just tools, Mastrogiacomo said.
“That means embedding identity controls, behavioral monitoring, and lifecycle oversight from day one. Security in the AI era is not about slowing down innovation; it’s about ensuring autonomy does not outpace accountability.”
—Rosario Mastrogiacomo
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.