
Vibeware: More than bad vibes for AppSec

Threat actors are leveraging the freewheeling vibe-coding trend to deliver malicious software at scale.


Many developers have been vibe coding their way to higher productivity, and now the producers of malicious software are doing the same.

That trend was outlined by Bitdefender researcher Radu Tudorică in a blog post last month.

“Pakistan-based threat actor APT36, also known as Transparent Tribe, has pivoted from off-the-shelf malware to ‘vibeware,’ an AI-driven development model that produces a high-volume, mediocre mass of implants. Using niche languages like Nim, Zig, and Crystal, the actor seeks to evade standard detection engines while leveraging trusted cloud services, including Slack, Discord, Supabase, and Google Sheets, for command and control.”
Radu Tudorică

Bitdefender technical solutions director Martin Zugec explained how the evasion works. 

“Most behavioral detection modules are trained on common languages like C++ or Go. Using niche languages like Nim or Zig tests the depth of these engines, often resetting detection baselines and bypassing signature-based performance layers.”
Martin Zugec

Those niche languages produce compact binaries, have fewer established detection signatures, and are less familiar to traditional security tooling, said Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions. “This reduces the likelihood of static and heuristic detection while enabling rapid iteration,” he said.

“More importantly, these languages support efficient compilation pipelines, allowing attackers to generate high volumes of slightly varied artifacts — each unique enough to evade signature-based defenses but inexpensive to produce. The goal isn’t elegance. It’s entropy.”
Rosario Mastrogiacomo
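
The economics Mastrogiacomo describes rest on a simple property of signature-based matching: changing even one byte of a binary produces an entirely unrelated file hash. A minimal sketch (the payload bytes here are illustrative stand-ins, not real implant content):

```python
import hashlib

# Two hypothetical build artifacts that differ by a single byte --
# say, a per-compile junk constant or embedded build timestamp.
artifact_a = b"\x7fELF...implant logic..." + b"\x01"
artifact_b = b"\x7fELF...implant logic..." + b"\x02"

hash_a = hashlib.sha256(artifact_a).hexdigest()
hash_b = hashlib.sha256(artifact_b).hexdigest()

# One changed byte yields a completely different SHA-256 digest,
# so a hash blocklist built from sample A never matches sample B.
print(hash_a == hash_b)  # False
```

This is why cheap, automated variation defeats hash and signature lookups outright, while behavioral detection, which keys on what the binary does rather than what it is, is unaffected by the mutation.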

Here’s what you need to know about vibeware — and how to stop it from bringing bad vibes to your organization.

[ See webinar: Stop Trusting Packages — Start Verifying Them ]

How vibeware works

Modern endpoint security uses multiple behavioral layers, with signatures serving mainly as a performance optimization to quickly identify known threats, said Zugec. That doesn’t work with vibeware.

“Depending on how a security product is implemented, it may be less effective at profiling the intent of a binary written in a less common language. The priority for security teams is to ensure their tools are not just language-agnostic but capable of identifying functional outcomes regardless of the runtime environment.”
—Martin Zugec

Phil Wylie, chief security evangelist at the cybersecurity services firm Suzu Labs, noted the parallel to DDoS attacks. 

“Conceptually, this is similar to a denial-of-service attack, but instead of targeting infrastructure availability, it targets the defender’s ability to detect real threats. It becomes a denial-of-detection problem.”
Phil Wylie

Zugec explained how a distributed denial-of-detection attack works.

“The goal is to throw diverse, disposable malware samples against a target with the hope that one will eventually stick. This is particularly effective against unmonitored environments, which unfortunately still describes most businesses. By using AI to scale mediocrity, an attacker can find gaps in the defenses of companies that have historically flown under the radar while neglecting basic hygiene.”
—Martin Zugec

It’s all about scale

Vibeware highlights a shift in attacker strategy from sophistication to scale, said Rajeev Raghunarayan, head of go-to-market at Averlon. 

“By generating large volumes of varied malware, attackers are creating more noise than most security teams can realistically triage.”
Rajeev Raghunarayan

Raghunarayan added that the volume overwhelms decision making by compressing response time.

Collin Hogue-Spears, senior director of solution management at Black Duck Software, said that flooding detection systems with poor-quality malware forces defenders to spend analyst hours triaging broken code instead of hunting the one functional implant that has already established persistence.

“The goal is not bypassing your defenses. It is exhausting the people who run them.”
Collin Hogue-Spears

Hogue-Spears said APT36 deploys four or five implants per endpoint, each written in a different language with a different C2 channel: a Nim loader for Cobalt Strike, a Crystal-based Warcode instance loading Havoc, a Rust-based SupaServ backdoor, and a Zig-based ZigShell exfiltrator. 

“Neutralize one, and the others keep running. Triage all five, and the incident-response team loses the rest of their shift chasing samples that were designed to be disposable in the first place.”
—Collin Hogue-Spears

Compositional opacity is the point

Roger Grimes, CISO advisor at KnowBe4, said producing high-quality malware is more trouble than it’s worth when malicious chum will do. “It’s low quality because they are using AI to write it, and they aren’t inherently very good programmers themselves, or, even just as likely, don’t care,” he said.

“Even low-quality malware is successful in some percentage of cases. So if you can build lots of it and push it out, enough of it is going to get through for the malicious hackers to believe they have achieved success. But I think if the AI could write high-quality and sophisticated malware for them, they’d use it. I don’t buy the assumption that they are intending to create low-quality malware as part of their flooding campaign.”
Roger Grimes

Vibeware is being used to achieve what’s known as compositional opacity, said Ram Varadarajan, CEO of Acalvio. 

“It blends malicious intent with legitimate cloud-native behaviors and API calls, effectively engineering a cognitive denial of service that obscures the attack’s true signature within a high-volume fog of seemingly benign functional logic.”
Ram Varadarajan

Suzu Labs’ Wylie recommended that organizations respond by focusing on behavioral detection, strong outbound traffic controls, application allow-listing, and automation to reduce alert fatigue. 

“The biggest lesson vibeware teaches us is that AI is changing attacker economics. The real risk isn’t smarter malware. It’s cheaper malware produced at a scale security teams were never designed to handle.”
—Phil Wylie

While a typical DoS attack targets network bandwidth, vibeware targets the triage capacity of unmonitored environments, said Bitdefender’s Zugec. 

“In an environment where every detection should trigger a response, a flood of unique samples is only effective if the organization lacks active [security operations center] or [managed detection and response] monitoring. The goal is to find a functional opening while the defender is overwhelmed by automated noise.”
—Martin Zugec

AppSec fundamentals are essential

We are no longer fighting hackers — we are fighting an automated assembly line that never sleeps, said Noelle Murata, a senior security engineer at Xcape. 

“If your defense strategy relies on recognizing a known bad file, you are trying to win a war of attrition against an adversary with an infinite supply of ammunition.”
Noelle Murata

Defending against automated malware-assembly lines requires abandoning reactive security models, said Jason Soroko, a senior fellow at Sectigo. Defenders should turn to tools that evaluate what a process is doing rather than how its code is structured. They also must audit their cloud services and strictly enforce zero-trust controls to contain unauthorized outbound communication, he said.

“As the barrier to generating malware continues to fall, resilience depends on a methodical architecture that anticipates industrialized tactics and neutralizes their core behaviors before volume wins.”
Jason Soroko

Zugec said that AI does not introduce new or novel attacks — it simply adds scalability to existing threats. “Organizations that have neglected basic security for a long time are now exposed by this volume, and they must quickly catch up on foundational hygiene,” he said. What's needed: network segmentation, the principle of least privilege, active endpoint monitoring, and making the environment hostile and unpredictable to attackers.

But Averlon’s Raghunarayan said changes are needed.

“The focus has to shift from trying to analyze every signal to understanding which exposures actually matter. The priority is reducing the pathways attackers can use to reach critical systems and remediating high-risk exposures before they can be exploited.”
—Rajeev Raghunarayan

David Brumley, chief AI and science officer at Bugcrowd, said vibeware doesn’t require new defenses, but it does punish outdated ones. 

“The fundamentals still apply: layers of defense, good detection, and tested remediation plans for when the inevitable compromise actually happens.”
David Brumley 

The shift toward vibeware marks the end of the era when defenders could defeat an adversary by “breaking” their code, Murata said. “By leveraging AI to mass-produce implants in niche languages like Nim and Zig, threat actors have moved from handcrafted excellence to a model of infinite, mediocre repetition. This isn’t just a technical change. It is a fundamental shift in the economics of defense,” she said.

“When an attacker can generate a new, unique variant every five minutes, the cost of being caught drops to zero. To survive this, organizations must abandon the hope of detecting the file and instead focus on the immutable behaviors of an attack — unauthorized data staging, unusual API calls to Slack or Google Sheets, and identity anomalies.”
—Noelle Murata
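
The behaviors Murata lists can be turned into simple detection logic. A minimal sketch of one such rule, flagging traffic to trusted cloud APIs from processes that have no business reaching them (the domains and process names here are illustrative assumptions, not a vendor ruleset):

```python
# Hypothetical behavioral rule: a C2 channel hidden in "trusted" cloud
# traffic still stands out when the originating process is not one that
# normally talks to that service on this host.

TRUSTED_CLOUD_DOMAINS = {
    "slack.com", "discord.com", "supabase.co", "sheets.googleapis.com",
}
ALLOWED_PROCESSES = {"slack.exe", "chrome.exe", "firefox.exe"}

def is_suspicious(process_name: str, dest_domain: str) -> bool:
    """Flag outbound connections to cloud collaboration APIs from
    processes outside the per-host allow list."""
    reaches_cloud_api = any(
        dest_domain == d or dest_domain.endswith("." + d)
        for d in TRUSTED_CLOUD_DOMAINS
    )
    return reaches_cloud_api and process_name.lower() not in ALLOWED_PROCESSES

# An unknown loader beaconing to Slack is flagged; the Slack client is not.
print(is_suspicious("svc_update.exe", "hooks.slack.com"))  # True
print(is_suspicious("slack.exe", "slack.com"))             # False
```

Because the rule keys on who is talking to what rather than on the binary itself, it holds up regardless of whether the implant was compiled from Nim, Zig, Crystal, or anything else.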

Risk management in the AI coding age

JPMorgan Chase CISO Patrick Opet discussed his third-party risk call-to-action letter at the RSAC Conference recently, noting that systems that scale with generative AI can give technologists “a great way to make the business much more effective,” but human-assisted AI tools such as AI coding agents present a challenge.

Opet said JPMC is creating a new architecture for AI-powered agents to run on that will limit their access to sensitive information and IT assets. In effect, he said, this separates the employee desktop from the agent desktop.

“Ideally we would want these agents to run [in] an ecosystem where they [have] an identity but no entitlements,” Opet said. That could be a virtual machine or container. When AI-powered tools need access to a resource outside that controlled environment, IT administrators need to understand the need for that access, who that agent is working on behalf of, and what events led to the request for access.
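
The pattern Opet describes — an agent with an identity but no standing entitlements — can be sketched as an access broker that grants nothing by default and requires the full request context before releasing any resource. Everything below (names, fields, the approval scheme) is a hypothetical illustration, not JPMC's implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent_id: str          # the agent's identity
    on_behalf_of: str      # the human the agent is acting for
    resource: str          # what it wants outside its sandbox
    triggering_event: str  # what led to the request

def review(req: AccessRequest, approvals: set[tuple[str, str]]) -> bool:
    """Grant access only when an administrator has pre-approved this
    (principal, resource) pair. The agent itself holds no entitlements;
    every grant is scoped to the human it acts for."""
    return (req.on_behalf_of, req.resource) in approvals

approvals = {("analyst@corp.example", "ticketing-api")}
req = AccessRequest(
    agent_id="codegen-agent-7",
    on_behalf_of="analyst@corp.example",
    resource="ticketing-api",
    triggering_event="user asked the agent to file a bug",
)
print(review(req, approvals))  # True: approved principal and resource
```

The design choice is that denial is the default state: an agent whose credentials are stolen or whose prompt is hijacked can assert an identity but cannot exercise any entitlement that was not explicitly brokered for the human it serves.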

Such a controlled architecture gives JPMC the confidence to scale AI coding assistance and other AI-powered desktop tools because unintended consequences such as identity theft and abuse are greatly curbed.

The problem is much bigger than enterprise SaaS deployments, said Saša Zdjelar, chief trust officer at ReversingLabs (RL).

“What Pat is describing is the unwinding of decades of trust debt. The industry defaulted to implicit trust in vendors because verifying was hard and expensive. JPMorgan is proving that when you actually inspect what’s inside the software you’re buying — the components, the dependencies, the threat models — vendors respond.”
Saša Zdjelar

JPMC’s position on supply chain risk is forward-thinking, said Zdjelar, and other firms should consider the same strategy.

“The question every CISO should be asking now isn’t whether they can afford to do what JPMC is doing. It’s whether they can afford not to.”
—Saša Zdjelar

Learn how RL's free Spectra Assure Community can help your development and AppSec teams get deep insights into your software supply chain via binary analysis.
