Vibe coding, which lets developers generate code from simple AI prompts based on their “vibes” of the moment, is being adopted at a rapid clip. Nonetheless, deep concerns about the practice remain.
Code generated from AI prompts is not written with an eye toward essentials such as code security, regulatory compliance, and privacy guardrails. For businesses, that makes it a dangerous proposition.
But efforts are underway to address those concerns. Vendors are beginning to release purpose-built tools for development teams that are meant to tame the vibe-coding Wild West.
Security expert Chris Hughes wrote about one such tool, OX Security's VibeSec, in a recent blog post. Here’s what you need to know about VibeSec and other developer-focused tools, and why they are not a comprehensive control for managing the risks of vibe coding and AI-assisted coding.
Hughes writes frequently about the seduction of vibe coding and its risks for businesses. What keeps application security (AppSec) leaders up at night: the practice’s inherent lack of security and the worry that developers will lose expertise.
He said in his post that VibeSec addresses some of the biggest concerns about vibe coding: It uses large language models (LLMs) and agents to tackle some of AppSec’s longstanding security challenges, “including friction for developers and their workflows, runaway vulnerability backlogs, and the burden of determining whether remediations will cause breaks in functionality.”
Chris Hughes: “OX’s VibeSec approach involves combining several foundational purpose-built capabilities … to equip AppSec practitioners and security teams to keep pace with the new development velocity.”
Those capabilities include OX Mind, a multi-agent system that uses retrieval-augmented generation (RAG) to gather information and relevant threat models for building code; an AI data lake that leverages OX’s security database; and environmental mapping to create context for an organization’s infrastructure, codebase, needs, and architecture.
VibeSec, which is provided as SaaS, also offers policy integration, to help make security policies actionable, and trusted security practices such as cross-platform threat modeling.
Broadly, VibeSec works to prevent risky code from being written in the first place by putting security requirements in front of the AI agent before it generates code. As AI coding agents switch tasks during development, VibeSec helps them stay focused on security without adding friction, OX said.
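To make that prevention-first idea concrete, here is a minimal, hypothetical sketch of what seeding a coding agent's prompt with organization security requirements could look like. Everything here (the policy store, the rule IDs, the function names) is invented for illustration and is not an OX Security API.

```python
# Hypothetical illustration of a "prevention-first" pattern: organization
# security requirements are resolved up front and injected into the coding
# agent's context before any code is generated. All names are invented.

from dataclasses import dataclass

@dataclass
class SecurityRequirement:
    rule_id: str
    guidance: str

# A stand-in for an organization's policy store. In a real system this might
# be fed by threat models, an ASPM platform, or a security data lake.
POLICY_STORE = {
    "api-handler": [
        SecurityRequirement("SEC-001", "Validate and sanitize all user input."),
        SecurityRequirement("SEC-014", "Never log credentials or tokens."),
    ],
    "database": [
        SecurityRequirement("SEC-007", "Use parameterized queries; never string-build SQL."),
    ],
}

def build_secure_prompt(task_description: str, component_type: str) -> str:
    """Prepend the relevant security requirements to the generation prompt,
    so the coding agent sees the guardrails before it writes any code."""
    requirements = POLICY_STORE.get(component_type, [])
    rule_lines = "\n".join(f"- [{r.rule_id}] {r.guidance}" for r in requirements)
    return (
        "You must follow these organization security requirements:\n"
        f"{rule_lines}\n\n"
        f"Task: {task_description}"
    )

if __name__ == "__main__":
    prompt = build_secure_prompt(
        "Write a login endpoint that accepts a username and password.",
        component_type="api-handler",
    )
    print(prompt)  # This enriched prompt would then go to the coding agent.
```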
Hughes stressed that there are clear positives to VibeSec’s approach.
Chris Hughes: “In the era of agentic coding and exponential developer productivity, OX’s VibeSec platform introduces a new security paradigm, leveraging the same emerging technologies as our development peers, and breaking the legacy model of cybersecurity being a late adopter and laggard. It does all of this while integrating natively with developers’ workflows and tooling and bringing Secure-by-Design from rumor to reality.”
In addition, he said, it helps reduce vulnerability backlogs; minimizes developer toil with context-driven, threat-informed automated remediations and runtime insights; and breaks free from the “shift-left” security model of initiating security reviews earlier in the software development lifecycle, which he said has failed to keep up with modern development practices and tools such as agents.
Hughes told RL Blog that while VibeSec will be helpful, it’s not a comprehensive control.
Chris Hughes: “It is definitely not a total fix for the problems of vibe coding. As in cybersecurity, there is no silver bullet. That said, it is an excellent example of cybersecurity and the vendors in our ecosystem leveraging LLMs and agents to address longstanding systemic challenges in AppSec.”
Ultimately, VibeSec and other vendors’ vibe-coding tools could help make the practice less dangerous because they bake security into the development workflow natively, Hughes said. “And it leverages AI to not just find problems, but to directly provide fixes to the developer while they are writing code before anything ever reaches production. This helps mitigate risks, including in critical enterprise environments. These are the sort of AppSec tools the community needs in the era of vibe coding and AI code generation.”
Hughes said that while he sees the benefits of vibe coding for developer creativity and productivity, it is not workable for enterprises without better security controls. “It cannot be done without some level of security rigor, validation, and the integration of security capabilities into those workflows,” he said.
Chris Hughes: “It has been shown in study after study that AI coding tools and platforms are producing code with inherent vulnerabilities. This is not surprising, given that most of the major models are trained on large bodies of open-source code, which of course brings with it vulnerabilities and weaknesses that the models then perpetuate.”
Neatsun Ziv, the co-founder and CEO of OX Security, told RL Blog that the company created VibeSec because it is dangerous that so many enterprise developers are using vibe coding in their work without integrating their company-required security and safety controls. Many companies do not even know that it is happening inside their operations, he added.
Ziv likened this to shadow IT on development teams.
Neatsun Ziv: “Right now, we are seeing that about 4% of all new codebases from a few hundred customers are being written using AI, and it is accelerating fast. About five months ago, the industry discovered what is called ‘agent mode,’ or ‘you only live once’ (YOLO). It is happening. It is mind-blowing.”
Ziv said he has not seen one developer try vibe coding and then go back to their old methods. “I am talking about thousands of developers in our install base,” he said.
Neatsun Ziv: “I am not saying it is safe. I am saying it is an opportunity to fix something that has been broken in our industry for a long time now. Vibe coding developers are already there; security is just catching up.”
Despite the promises of tools such as VibeSec, leading analysts stress that vendors are racing to catch up to vibe coding’s risks.
Katie Norton, a DevSecOps analyst at IDC, said VibeSec is positioning itself as a security counterpart to this new style of AI-assisted development.
She said the platform embeds dynamic, organization-specific security context directly into AI coding agents and editors, so vulnerabilities can be prevented before or while code is generated. It runs autonomously in the background while developers use AI tools such as Copilot or Cursor, continuously drawing on live data from the organization’s environment, including code, cloud, APIs, and runtime, to guide secure code suggestions in real time.
Katie Norton: “In theory, this allows the tools to produce Secure by Design suggestions, or at least to prevent known insecure patterns from being written in the first place.”
But despite VibeSec’s claim that it is the first vibe-coding security tool to market, other vendors are doing similar things. “This is a general trend happening in AppSec,” she said.
Katie Norton: “ASPM [application security posture management] platforms are particularly well suited for this type of technology because they already specialize in aggregating, normalizing, and contextualizing security data across the entire software ecosystem. That contextual intelligence is what AI coding agents need to make informed decisions.”
By feeding that context into AI development tools, ASPM can become an intelligence layer that allows autonomous agents to enforce security policy and risk prioritization dynamically, at the point of code creation, she said.
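As a rough illustration of that intelligence-layer idea, the sketch below shows how findings from different scanners might be normalized and prioritized before being handed to a coding agent as context for the file it is editing. The field names, severity scale, and scoring are assumptions, not any vendor's actual schema.

```python
# Illustrative-only sketch of "ASPM as an intelligence layer": findings from
# multiple sources are aggregated, normalized, and ranked so an AI coding
# agent can receive the highest-risk context for the file it is about to touch.

from typing import Iterable

def normalize(finding: dict, source: str) -> dict:
    """Map a source-specific finding onto one shared shape."""
    return {
        "source": source,
        "file": finding.get("path") or finding.get("file"),
        "issue": finding.get("rule") or finding.get("cve"),
        "severity": {"low": 1, "medium": 2, "high": 3, "critical": 4}.get(
            str(finding.get("severity", "low")).lower(), 1
        ),
        "reachable_at_runtime": bool(finding.get("runtime_hit", False)),
    }

def context_for_agent(findings: Iterable[dict], target_file: str, top_n: int = 3) -> list[dict]:
    """Return the highest-priority findings for the file the agent is editing;
    runtime-reachable issues are weighted above static-only ones."""
    relevant = [f for f in findings if f["file"] == target_file]
    return sorted(
        relevant,
        key=lambda f: f["severity"] + (2 if f["reachable_at_runtime"] else 0),
        reverse=True,
    )[:top_n]

if __name__ == "__main__":
    raw = [
        normalize({"path": "app/login.py", "rule": "SQLI-01", "severity": "high",
                   "runtime_hit": True}, source="sast"),
        normalize({"file": "app/login.py", "cve": "CVE-2024-0001",
                   "severity": "medium"}, source="sca"),
    ]
    for item in context_for_agent(raw, "app/login.py"):
        print(item)  # Highest-risk findings first, ready to feed to the agent.
```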
Because of the speed and volume of AI-generated code, Norton said, the broader movement across the industry is bringing security back into developer environments. “Earlier IDE-based security plugins were essentially reactive, adding the squiggly lines under issues, but they only had the context of the file being worked on and often just pointed the developer to a training course or website that generally described the vulnerability and offered generic advice on how to fix it,” Norton said.
Katie Norton: “That model helped raise awareness, but it relied on the developer to stop, interpret the finding, and manually fix it, which does not scale in the era of AI-accelerated development. Automated remediation was the next step, where actual code fixes that are specific to the code the developer was working on would be suggested by AI … but these were often still scoped to only the file in use.”
Today, there is a shift from detection to co-creation, where security is no longer just commenting on what is written but is participating in the writing process itself, Norton said. “Agent-assisted or even agent-to-agent collaboration is what we are seeing now, where security agents work alongside coding agents to automatically make edits, suggest safer alternatives, or apply organization-specific guardrails in real time,” she said.
But the bigger question is whether developers are ready for this, Norton said. “I do not have a good sense of the accuracy or the willingness of developers to have agents autonomously co-work with them yet. I imagine there would be concerns around changes impacting the functionality of the code, being aware of the changes being made, etc.,” she said.
Paul Nashawaty, principal analyst for application development and modernization at theCUBE Research, said that while VibeSec takes direct aim at one of the biggest headaches in modern development — insecure AI-generated or vibed code — it is still too early to declare that such tools will be able to deliver true safety and security to developers and enterprises.
Paul Nashawaty: “They are pitching it as a prevention-first solution that embeds an organization’s own security rules and context directly into the code-generation process. It is worth keeping the enthusiasm in check until the claims are verified. Prevention at the generation layer is the right architectural approach, but the proof will be in how reliably it works across diverse environments and models and how much friction it adds for developers.”
Also needed is insight into how tools such as VibeSec handle context and data privacy, since embedding organization-specific knowledge into AI workflows introduces its own governance challenges, he said.
“VibeSec looks like a promising evolution in securing AI-driven development, and OX is thinking in the right direction,” Nashawaty said. But he recommends that organizations do a controlled pilot first, then test it in a low-risk environment, measure its impact, and validate its prevention claims with their own data before expanding. If it performs as advertised, this could become a strong tool in the “shift-left without slowing down” toolbox, he said.
The rise of agentic AI for coding introduces the next wave of software supply chain risk, said Dhaval Shah, senior director of product management for ReversingLabs. Security leaders, he said, need to balance strategic oversight with immediate controls: deploying AI-aware monitoring that tracks both code generation and dependency inclusion, creating automated security gates that match AI development speed, and establishing clear boundaries for AI tool usage in critical code.
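One of those controls, an automated gate on dependency inclusion, might look something like this minimal sketch. The allowlist and the check itself are assumptions for illustration; a real gate would run in CI and consult the organization's actual policy source.

```python
# A minimal sketch of an automated security gate: inspect a change for newly
# introduced dependencies and fail fast if any are not on an approved list.
# The allowlist and file format are invented for illustration.

import sys

APPROVED = {"requests", "flask", "sqlalchemy"}  # assumed org-approved packages

def new_dependencies(before: set[str], after: set[str]) -> set[str]:
    """Dependencies present after the change but not before it."""
    return after - before

def gate(before: set[str], after: set[str]) -> int:
    """Return a nonzero exit code if the change adds unvetted dependencies."""
    unvetted = {d for d in new_dependencies(before, after) if d not in APPROVED}
    if unvetted:
        print(f"BLOCKED: unvetted dependencies introduced: {sorted(unvetted)}")
        return 1
    print("PASS: no unvetted dependencies introduced.")
    return 0

if __name__ == "__main__":
    # Example: an AI-generated change adds 'leftpad-ng', which is not approved.
    sys.exit(gate(before={"requests"}, after={"requests", "leftpad-ng"}))
```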
On the broader strategic front, Shah said, organizations will need to implement trust-but-verify automated security baseline checks and to maintain human-review checkpoints for security-critical changes to code and logic. He also recommended that, wherever possible, teams should be running AI development in contained environments with defined boundaries.
Dhaval Shah: “Think of it like giving AI a sandbox to play in, but with clear rules and constant supervision. The key isn't containing AI; it's channeling its power within secure guardrails.”
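A toy sketch of that sandbox idea follows: agent-proposed commands run only if they match an allowlist, with a stripped-down environment and a hard timeout serving as the “defined boundaries.” Real containment would rely on containers, seccomp profiles, or purpose-built sandboxes; everything here is illustrative.

```python
# Rough illustration of "a sandbox with clear rules and constant supervision":
# agent-proposed shell commands run only if the executable is allowlisted,
# with a minimal environment and a hard timeout. Not a substitute for real
# isolation (containers, seccomp, dedicated sandboxes).

import shlex
import subprocess

ALLOWED_COMMANDS = {"pytest", "python", "ruff"}  # assumed permitted tools

def run_contained(command: str, timeout_s: int = 30) -> subprocess.CompletedProcess:
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not permitted in sandbox: {command!r}")
    # A minimal PATH keeps secrets out of the child environment; the timeout
    # bounds runaway work. Output is captured so a supervisor can review it.
    return subprocess.run(
        argv,
        env={"PATH": "/usr/bin:/bin"},
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )

if __name__ == "__main__":
    result = run_contained("python -c \"print('hello from the sandbox')\"")
    print(result.stdout)
```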
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.