Coding done with artificial intelligence has rapidly changed how software is created. Netcorp recently compiled a compendium of statistics from a variety of surveys and sources showing that nearly half of all code written during the first six months of 2025 was AI-generated, and that 82% of developers say they use AI tools weekly.
Contributing to the proliferation of AI-generated code has been the rise of vibe coding, which enables code to be created not only faster, but also by people with limited coding experience. One experienced systems engineer wrote on Reddit that by using vibe coding, his organization had seen a 30% decrease in the time it takes for a feature to go from first proposal to production. “This is huge for us,” noted the author, who posts under the handle TreeTopologyTroubado.
But applications written by people with limited coding experience can create problems for security teams, said Marty Barrack, CISO of XiFins, a health care information technology company, and a member of the ISACA Emerging Trends Working Group.
Marty Barrack: “[Nontechnical people] are not likely to bring the understanding of security issues that well-trained developers bring when using AI tools to help them write code.”
Dwayne McDaniel, developer advocate at GitGuardian, said the biggest risks come from skipping the security basics. Because teams don’t want to lose the advantage of vibe coding’s huge speed gains, they may assume that the code the AI generates is secure enough and does not require further review, McDaniel said.
Dwayne McDaniel: “Experienced developers have the background knowledge to read the code and do a gut check that a pattern or an authentication method is bad for a production environment, but I don’t think that describes the majority of people vibe-coding projects right now.”
Here’s what you need to know about the new risks emerging from vibe coding — and five lessons to consider before releasing vibed code into production.
Vibe coding introduces security risks that are distinct from and more immediate than those arising from traditional AI-assisted coding, said Eran Kinsbruner, evangelist for application security (AppSec) at Checkmarx.
Eran Kinsbruner: “While using AI to write code in general raises concerns around insecure patterns, hallucinated dependencies, or license violations, vibe coding amplifies these risks because of how developers interact with AI, not just what AI generates.”
In many cases, vibe coders have different skills from those of a developer who uses AI coding tools within the IDE, Kinsbruner said. “In vibe coding, developers rely heavily on natural-language prompts and continuous code suggestions, often skipping traditional guardrails like peer reviews, linting, or manual validation,” he said. “This flow encourages trust without verification, meaning code is merged or executed faster, but often without context, traceability, or security checks.”
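One way teams restore those guardrails without giving up speed is to make the checks automatic rather than optional. The sketch below is not something Kinsbruner prescribes; it is a minimal pre-merge gate in Python, and the tool choices (bandit for static security checks, pytest for tests) and the src directory are assumptions for illustration.

```python
# Minimal pre-merge gate (illustrative sketch): run the checks that vibe-coded
# changes often skip, and block the merge if any of them fail. Tool choices
# (bandit, pytest) and the "src" path are assumptions, not from the article.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src"],  # static security scan of the source tree
    ["pytest", "-q"],         # the test suite AI-generated code must still pass
]

def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"check failed: {' '.join(cmd)} -- do not merge")
            return result.returncode
    print("all guardrail checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```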
Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions, said that when developers rely on what feels right rather than verifying what’s correct, they strip away the controls that make AI-generated code safe.
Rosario Mastrogiacomo: “The distinctive risk is that no one owns the decision making. You get code in production that works but can’t be explained, with inconsistent patterns for things like authentication, input validation, and encryption. That loss of traceability and accountability is what makes vibe coding uniquely dangerous.”
Here are five essential lessons to consider about vibe coding.
Just because something works functionally doesn’t mean that it’s also secure. “AIs and LLMs [large language models] must be given explicit instructions and goals for security,” said Naomi Buckwalter, senior director of product security at Contrast Security.
Naomi Buckwalter: “For example, in many cases, authentication and authorization checks are an absolute requirement for service calls to backend data stores. But without broad application insight or explicit instructions, how will the AI be aware enough to include this in the code? The AI needs all the context in order to code something functional and secure. A vibe coder might not be aware of this.”
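To make that concrete, here is a hypothetical sketch in Python (Flask), with invented route, helper, and data-model names. The two marked checks are exactly the kind of thing an LLM tends to omit unless the prompt or surrounding context demands them.

```python
# Hypothetical endpoint illustrating Buckwalter's point. The route, helpers
# (current_user, load_record), and data model are invented for this sketch.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

def current_user(req):
    """Placeholder: resolve the caller's identity from the request (assumption)."""
    return {"id": "demo-user"} if req.headers.get("Authorization") else None

def load_record(record_id):
    """Placeholder for a call to a backend data store (assumption)."""
    return {"id": record_id, "owner_id": "demo-user", "data": "..."}

@app.route("/records/<int:record_id>")
def get_record(record_id):
    # The two checks below are what "functional" AI output often leaves out
    # unless they are explicitly requested.
    user = current_user(request)
    if user is None:
        abort(401)  # authentication: who is calling?
    record = load_record(record_id)
    if record["owner_id"] != user["id"]:
        abort(403)  # authorization: may this caller see this record?
    return jsonify(record)
```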
However, TreeTopologyTroubado said that one key step his team takes is to always start with a solid design document and architecture and then build in smaller pieces, always writing their tests first. He wrote:
You still always start with a technical design document. This is where a bulk of the work happens. The design doc starts off as a proposal doc. If you can get enough stakeholders to agree that your proposal has merit, you move on to developing out the system design itself. This includes the full architecture, integrations with other teams, etc.
TreeTopologyTroubado also said to build review into the design phase before launching the development effort. “This is where you have your team’s design doc absolutely shredded by Senior Engineers. This is good. I think of it as front-loading the pain,” he wrote.
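As a rough illustration of the “always writing their tests first” step, the sketch below is a pytest file with hypothetical module and function names (invites, create_invite) that would be committed before any implementation exists. It will fail until code is written or generated to satisfy it, which is the point: the acceptance criteria, including the security ones, are fixed before the AI produces anything.

```python
# test_invites.py -- a hypothetical test written before any implementation.
# The invites module, create_invite, and InviteError are invented for this
# sketch; nothing here will pass until real code is written against it.
import pytest

from invites import create_invite, InviteError

def test_invite_requires_authenticated_requester():
    # Security expectation stated up front, not bolted on after generation.
    with pytest.raises(InviteError):
        create_invite(requester=None, email="new.user@example.com")

def test_invite_rejects_malformed_email():
    with pytest.raises(InviteError):
        create_invite(requester="admin-1", email="not-an-email")

def test_invite_returns_single_use_token():
    invite = create_invite(requester="admin-1", email="new.user@example.com")
    assert invite.token
    assert invite.single_use is True
```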
“By generating more code and configurations faster than review and ops can absorb, vibe coding inflates remediation queues and mean time to recovery,” said Iftach Ian Amit, founder and CEO of Gomboc.AI, a provider of automated cloud infrastructure security solutions.
Iftach Ian Amit: “This generates the need to address misconfigurations and inaccuracies later in the cycle, piling even more rework on engineers.”
Sphere Technology’s Mastrogiacomo added that each AI-generated snippet might solve a problem slightly differently, so one flaw can appear in dozens of places. “Without documentation or rationale behind those choices, triage becomes guesswork,” he said. “Security teams end up spending more time understanding the code than fixing it. That’s how fast coding turns into slow recovery.”
Vibe-coded solutions add a lot of new functionality to existing applications or build new ones, said Amit. “Naturally, this necessitates the use of third-party libraries and third-party modules in your application,” he continued. “When you are using third-party libraries and modules in your application, there are also a lot of insecure libraries that are automatically pulled into your project because of the vibe-coded IDE [integrated development environment]. As a result, you have major supply chain security risks, as well as application security issues like those produced by insecure code.”
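One modest way to keep those automatically pulled dependencies visible is to inventory what actually ends up in the environment and compare it against an approved list. The Python sketch below uses the standard library’s importlib.metadata; the allowlist file name and format are assumptions, and in practice this would complement, not replace, a dedicated software composition analysis tool and pinned, hash-verified dependencies.

```python
# Sketch: list installed packages and flag anything not explicitly approved.
# The allowlist file name and one-package-per-line format are assumptions.
from importlib import metadata
from pathlib import Path

def load_allowlist(path: str = "approved_packages.txt") -> set[str]:
    lines = Path(path).read_text().splitlines()
    return {line.strip().lower() for line in lines if line.strip()}

def unapproved_packages(allowlist: set[str]) -> list[str]:
    installed = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name:
            installed[name.lower()] = dist.version
    return sorted(
        f"{name}=={version}"
        for name, version in installed.items()
        if name not in allowlist
    )

if __name__ == "__main__":
    for pkg in unapproved_packages(load_allowlist()):
        print(f"not on the approved list: {pkg}")
```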
GitGuardian’s McDaniel said that technical debt was originally supposed to mean “do what it takes to get to production, and we will then do the work to pay off this debt and do it correctly as soon as possible,” but “with vibe coding, many projects are speeding up delivery, and then just moving ahead to the next feature or product, with no plan to ever clean up the debt.”
“Secure coding practices demand rigorous control of token handling, session expiration, encryption, and identity validation — details that vibe coding tends to gloss over,” said Checkmarx’s Kinsbruner. When AI-generated code “just works,” he noted, it’s easy to ship it without realizing that secrets or keys may be hardcoded, improperly scoped, or stored in plaintext.
In addition, session or token lifetimes may be insecurely set, enabling replay or hijacking attacks. Authentication flows may bypass multifactor authentication or skip validation under specific conditions, he added, and developers may not fully understand the AI-suggested code paths or the dependencies they introduce, leading to fragile authentication logic that attackers can exploit.
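As a rough, standard-library-only sketch of the details Kinsbruner describes: the signing key comes from the environment rather than the source, tokens carry a short explicit lifetime, and verification rejects both tampering and expired tokens. The names, the token format, and the 15-minute lifetime are illustrative assumptions, not a recommended production design.

```python
# Illustrative token handling with Python's standard library. The environment
# variable name, token format, and 15-minute lifetime are assumptions.
import base64
import hashlib
import hmac
import json
import os
import time

TOKEN_TTL_SECONDS = 15 * 60  # short-lived by default

def _signing_key() -> bytes:
    key = os.environ.get("SESSION_SIGNING_KEY")
    if not key:
        raise RuntimeError("SESSION_SIGNING_KEY is not set")  # fail closed, never hardcode
    return key.encode()

def issue_token(user_id: str) -> str:
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + TOKEN_TTL_SECONDS})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(_signing_key(), body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> dict:
    body, _, sig = token.rpartition(".")
    expected = hmac.new(_signing_key(), body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    payload = json.loads(base64.urlsafe_b64decode(body.encode()))
    if payload["exp"] < time.time():
        raise ValueError("token expired")  # enforce the lifetime, not just set it
    return payload
```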
David Svoboda, a software security engineer at Carnegie Mellon University’s Software Engineering Institute, acknowledged that the hazards of vibe coding can also arise from traditional development practices, especially when the developers are inexperienced. He warned, though, that vibe coding can add some hazards that are unlikely even with novice developers. For example, vibe coding can leak sensitive information outside the company, he noted.
David Svoboda: “Most developers have a test process that they use to determine when the code is done. This may involve compiling the code and running it on some test cases. Vibe coding can only produce code that passes the test process, and this requires the coder to communicate the entire test process to the LLM — a step many coders will skimp on.”
Svoboda added that all developers, including novices, do learn and get better over time. “In contrast, LLMs do not improve. Or rather, they only improve in incremental versions,” he said.
LLMs are typically limited by the number of words, in English or code, that they can process at a time, Svoboda noted. And while simple programs are small enough for the LLM to handle at once, most production software contains too many words for the LLM to understand all at once. “This limits the LLM’s ability to understand the code, in much the same way as examining only a few of the code files would limit even the most seasoned developer,” he said.
Casey Ellis, founder of Bugcrowd, said that organizations need to first recognize the inevitability of AI-assisted coding and then try to empower and guide it.
Casey Ellis: “That’s going to mean something different for every organization. However, I think it’s important as a guiding principle.”
Beyond that, Ellis said, organizations need to focus on several other areas.