Vibe coding is having its moment as the latest hyped-up AI technology, but busy enterprise development and security operations teams need to be aware of its risks.
In a recent blog post for ShiftMag, Frantisek Lucivjansky, a principal developer at Infobip, wrote that vibe coding’s greatest danger is that the uninformed have illusions about it, thinking that if they can use it to build something that works, all is well. But the resulting code "might also be a black box, brittle and opaque," Lucivjansky wrote.
“That’s not engineering. That’s hoping.”
—Frantisek Lucivjansky
Developers have to understand the code they produce, and that requires working through the whole process of building applications, Lucivjansky said, including writing, debugging, and refactoring code — often discarding partial solutions along the way. “It is not just mechanical work. It’s cognitive training. It teaches us how things fit together. It’s through this very struggle that we build confidence in our understanding and acquire the mental models necessary to work with complexity,” he wrote.
For Lucivjansky, a big problem with vibe coding is that it takes responsibility and trust out of the process, exposing enterprises to code errors, incompatibilities, security gotchas, and worse.
“Trust in software isn’t just about whether it runs. It’s about whether we understand it well enough to take responsibility for it. And if no one can honestly say, ‘I know how this works,’ then we’ve built a liability, not a system.”
—Frantisek Lucivjansky
Going beyond blind trust (read: visibility) is key to software risk management. Here's what you need to know about the risks associated with vibe coding.
[ Report: How AI Impacts Supply Chain Security | Blog: How to Secure your AI with an ML-BOM ]
Vibe coding: Good for experimentation — not production
Dylan Beattie, founder and director of software development consultancy Ursatile, wrote a recent blog post about vibe coding's shortcomings. It may be a great tool to experiment with, he wrote, but if you are writing software that you intend to ship to paying customers, “that’s a whole different ball game.”
"One of the genuinely positive things about tools like Copilot and ChatGPT is that they empower people with minimal development experience to create their own programs. Little programs that do useful things — and that’s awesome. More power to the users. But that’s not product development, it’s programming. They aren’t the same thing. Not even close."
—Dylan Beattie
To Beattie, the rise of vibe coding is just the latest indication that a lot of people working in tech don’t understand the difference between programs and products. “Probably the single most important lesson I’ve learned in my career, the thing that I would argue is the hallmark of experience, is understanding just how much work it takes to turn a working program into a viable product,” he wrote.
How vibe coding can get ugly in enterprises
Scott Germaise, a longtime digital product management executive, wrote in a recent LinkedIn post about what he called "the good, the bad, and the tragically ugly" in the world of AI-generated code. “In some ways, there's potentially legitimate value that will come out of this,” wrote Germaise. “At the same time, I think it's likely we'll see some seriously tragic outcomes.”
“Newer developers will lose critical thinking skills needed to build great and safer code if they rely on vibe coding. ... Even with great AI coding tools, we'll still likely need senior level coding skills and system architects.”
—Scott Germaise
But leveling up developers' skills can be difficult even today, he wrote, when remote work makes mentoring hard to come by.
For mission-critical industries such as health care, aerospace, and public infrastructure, AI-generated code may not meet the rigorous safety and reliability standards required, wrote Germaise. “A single unnoticed AI-generated error could have catastrophic consequences. Will individuals or teams rushing to market have the background and sense to do all the proper regression testing needed?”
In addition, governance, risk, and compliance (GRC) teams may struggle to keep up, he wrote. “AI is certainly being used and planned for in safety and mission critical applications from wildfire detection to homeland security. In these cases though, the tech is being used as a tool, not necessarily a full solution.”
There are huge financial risks to enterprises as well. “Businesses relying too heavily on AI-generated code without oversight may find themselves exposed to major security vulnerabilities, compliance failures, or lawsuits, ultimately damaging their bottom line,” Germaise wrote.
The dangers of vibe coding may outweigh the rewards
Kevin Breen, senior director of cyberthreat research at Immersive Labs, said that for developers using vibe coding today, "the dangers currently outweigh the rewards.” AI can be a powerful tool in the development process, but it does not have the maturity and level of understanding needed to build secure code, Breen told RL Blog.
“I urge developers not to use GenAI to 'vibe code' full applications, but they should be empowered to use it as part of the developer workflow. All of this, of course, needs to be supported by human review to ensure code integrity and security across any business.”
—Kevin Breen
Using vibe coding to build an entire tech stack with front-end and back-end components today is highly risky for enterprises, Breen said. “The AI will write poor code and not understand authentication or data flow logic, and business logic flaws will likely creep into the code base. On the other hand, vibe-coding small modules or stand-alone scripts, while still prone to including bad or vulnerable code, has a lower likelihood as the AI doesn't hit complexity or context window limits that are a large contributor to hallucinations.”
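The kind of business logic flaw Breen warns about can be sketched with a hypothetical example (the function names and data here are invented for illustration, not taken from any real incident): an AI-generated handler that fetches a record by ID, runs fine, and demos fine, but never checks who is asking — a classic insecure direct object reference.

```python
# Hypothetical sketch of the business-logic flaw Breen describes:
# an AI-generated lookup that works but skips authorization entirely.
# All names and data are invented for illustration.

DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's tax records"},
    2: {"owner": "bob", "body": "bob's contract"},
}

def get_document_vibe_coded(doc_id):
    """Typical AI output: runs, passes a happy-path demo, no auth check."""
    return DOCUMENTS.get(doc_id)

def get_document_reviewed(doc_id, requesting_user):
    """Human-reviewed version: enforces ownership, denies by default."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != requesting_user:
        return None
    return doc

# The flaw only surfaces when the *wrong* user asks:
assert get_document_vibe_coded(2)["body"] == "bob's contract"  # anyone can read it
assert get_document_reviewed(2, "alice") is None               # reviewed version denies
assert get_document_reviewed(2, "bob")["body"] == "bob's contract"
```

The point of the sketch is that both versions "work" under casual testing; only a reviewer asking who is allowed to call this catches the difference.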
As with most things in security, it's about understanding the tools and how they work, understanding your risks, and then putting the right steps, tools, and processes in place to mitigate the danger as much as possible. Finally, as with any code that is written, a human must remain in the loop, especially when it comes to reviewing any code written in full or in part by AI before it goes live. The dangers are real, and saying “The AI did it” is not going to get anyone off the hook.
Vibe coding: Review before sending
Georgia Weidman, founder and CEO of Bulb Security, said vibe coding is like autocorrect for software, and it calls for the same precautions.
“It can save time, but if you trust it unquestioningly, you’re going to ship bugs, vulnerabilities, or worse. The issue isn’t the use of AI; it’s the abdication of engineering judgment. The risk isn’t in the tool — it’s in how developers use it without a deep understanding of the output. That’s where vibe coding becomes a real liability.”
—Georgia Weidman
But there is no going back to application development without AI, she said. “Organizations need a balanced approach. Acknowledge the benefits of AI-assisted development. Blocking it entirely is a losing battle. This isn’t about banning AI. It’s about managing the risks.”
Managing risks must include the setting of clear guardrails, with AI output receiving reviews with at least the same rigor that human-generated code receives, Weidman said. And for audits and attribution, developers should disclose when they have used AI tools to generate code, just as they would cite a third-party library, she added.
“In security, we train developers to think like attackers, to ask, ‘How could someone abuse this code? Where could this logic be exploited?' That same level of scrutiny should apply to AI-generated suggestions. Instead of asking, ‘Does this code run?’ we need to ask, ‘Is it safe? Does it introduce assumptions I can’t verify? Could this create a vulnerability or leak information?’”
—Georgia Weidman
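Weidman's two questions — "Does this code run?" versus "Could someone abuse it?" — can be illustrated with a hypothetical sketch (table, data, and function names are invented here): the same database lookup written the way AI assistants often suggest it, via string formatting, and the way a security review would demand it, via a parameterized query.

```python
import sqlite3

# Hypothetical sketch of Weidman's review questions. The table and
# data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # "Does it run?" -- yes. "Could someone abuse it?" -- also yes:
    # input like  x' OR '1'='1  turns into SQL and returns every row.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: user input stays data, never becomes SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
assert lookup_unsafe(payload) == [("s3cret",)]  # injection leaks the secret
assert lookup_safe(payload) == []               # reviewed version does not
```

Both functions pass a "does it run" check with normal input; only the attacker-mindset question exposes the difference.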
Is vibe coding destined to fail?
So are things hopeless when it comes to vibe coding? Not at all, Lucivjansky, the ShiftMag author, told RL Blog in an interview. “AI is changing our industry, and it will change it more dramatically over [the] next years," he said.
"If enterprises have individuals or teams who are interested in the AI world in general, they should not block them [from exploring] new ways of software development and delivery. They must simply ensure that the quality bar is not lowered and that it fulfills all existing compliance which already applies for people or existing processes.”
—Frantisek Lucivjansky
Lucivjansky called vibe coding “just current hype" that will slowly go away because developers will see its flaws and other people will soon realize that writing software is not just about code itself. Other timesaving tools in the past, including drag-and-drop editors, tried to mimic developers, and they didn’t succeed, he noted.
What will be critical, Lucivjansky said, is that any vibe coding work be rigorously vetted. “If engineering teams are willing to experiment with vibe coding, they should understand vibe-coded code the same way as any other code which is manually written and apply the same quality and security processes for it as well.”
The most important lesson: Less experienced developers should not fully rely on AI, he said. “They should continue building their knowledge the hard way — trying to write something from scratch without using AI, learn how to read and debug code, understand deeply the programming language of their choice and all its capabilities, and so on,” said Lucivjansky.
“AI in this case should be used as a helper tool to achieve it, not as a tool which does everything for us.”
—Frantisek Lucivjansky
Vibe coding should be a reminder to developers that trading quick progress for knowledgeable coding is not a strategy for success, Lucivjansky said. “Vibe coding generates tons of code in seconds, which someone needs to review, understand, and build a connection to. If that is omitted from the process, then we don’t know how the system works, and we don’t have anyone who we can ask about it besides AI. And if AI struggles, then we are screwed.”
That is why knowing how to code step by step is better than asking AI to do the work for you, Lucivjansky said.
“It’s far easier and faster to write code by hand than try to understand someone else’s code — in this case AI-generated code — because coding is not just about code itself. It is also about designing architecture, communicating decisions in efficient ways, [and] getting deep understandings of the system. If we let everything be vibe-coded, we will lose what we value the most: knowledge.”
—Frantisek Lucivjansky
Learn about NIST's adversarial ML guidance — and how ReversingLabs can secure your organization.
Keep learning
- Read the 2025 Gartner® Market Guide to Software Supply Chain Security. Plus: See RL's webinar for expert insights.
- Get the white paper: Go Beyond the SBOM. Plus: See the Webinar: Welcome CycloneDX's xBOM.
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: See our Webinar for discussion about the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and learn how RL discovered the novel threat.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.