Security Operations | July 31, 2023

WormGPT: Business email compromise amplified by ChatGPT hack

Selling for $1,000 on the dark web, the email fraud tool leverages generative AI to improve cybercriminals' effectiveness.

John P. Mello Jr., Freelance technology writer

Since OpenAI introduced ChatGPT to the public last year, generative AI large language models (LLMs) have been popping up like mushrooms after a summer rain. So it was only a matter of time before online predators, frustrated by the guardrails deployed by developers to keep abuse of the LLMs in check, cooked up their own model for malevolent purposes.

Such a model was recently discovered by cybersecurity services company SlashNext. It's called WormGPT. Daniel Kelley, a reformed black-hat hacker who works with SlashNext to identify threats and tactics employed by cybercriminals, wrote in a company blog post:

As the more public GPT tools are tuned to better protect themselves against unethical use, the bad guys will create their own. The evil counterparts will not have those ethical boundaries to contend with. [W]e see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes.

Daniel Kelley

Kelley said that in addition to building custom modules, WormGPT's creators are advertising their wares to fellow bad actors. According to one source, WormGPT is selling for $1,000 on the dark web.

Here's what researchers know about WormGPT — and what your team can do to fight back against this new AI-fueled threat.


Generative AI hack trained for mischief

WormGPT is believed to be based on GPT-J, an open-source LLM developed in 2021 by EleutherAI. GPT-J isn't as powerful as OpenAI's latest models, but for an adversary's purposes, it doesn't have to be. GPT-J has 6 billion parameters and was trained on the Pile, an 825GB text dataset. By comparison, GPT-3, the model that originally powered ChatGPT, has 175 billion parameters.

Kelley said WormGPT is believed to have been trained on a diverse array of data sources, with an emphasis on malware-related data. The specific datasets used in training the model, though, have been kept confidential by the model's author, he added.

Experiments with WormGPT to produce an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice were "unsettling," Kelley said.

WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

Daniel Kelley

Business email compromise (BEC) fraud occurs when an email that appears to originate with the higher-ups in an organization is sent to a lower-level employee, usually requesting a money transfer into an account controlled by a hacker. According to the FBI, BEC losses by businesses totaled more than $2.7 billion in 2022.

Phishing emails

Kelley said the development of AI technologies has introduced a new vector for BEC attacks, with tools such as ChatGPT making it easy to generate humanlike text based on the input it receives. Generative AI enables cybercriminals to automate the creation of highly convincing fake emails, personalized to the recipient, which improves the chances of an attack's success.

Timothy Morris, chief security advisor at Tanium, said tools such as WormGPT will make phishing more effective — and open doors for more cybercriminals.

Not only are the emails more convincing, with correct grammar, but the ability to also create them almost effortlessly has lowered the barrier to entry for any would-be criminal. Not to mention, [the tools add] the ability to increase the pool of potential victims, since language is no longer an obstacle.

Timothy Morris

Mike Parkin, a senior technical engineer at Vulcan Cyber, said AI tools such as ChatGPT are good at sounding like a real person because the LLMs behind them are trained on vast amounts of text from the Internet.

That makes it a lot easier for a criminal operator who might have English as their second or third language to write convincing hooks.

Mike Parkin

While early concerns about AI tools such as ChatGPT focused on their being used to write malicious code, WormGPT highlights generative AI's value for making fraud more effective, Parkin said.

Conversational AI's real threat is with social engineering. With a little data scraping and some dedicated AI training, it's possible to automate much, if not all, of the process to enable threat actors to phish at scale.

Mike Parkin

Jailbreaking ChatGPT

While generative AI models can lower the barriers to becoming a cybercriminal, don't expect hordes of threat actors to start appearing on the immediate horizon, said Mika Aalto, co-founder and CEO of Hoxhunt.

For now, the misuse of ChatGPT for BEC, phishing, and 'smishing' attacks will likely be focused on improving the capabilities of existing cybercriminals more than activating new legions of attackers.

Mika Aalto

SlashNext researchers found another disturbing trend among cybercriminals and ChatGPT. "We’re now seeing an unsettling trend among cybercriminals on forums, evident in discussion threads offering 'jailbreaks' for interfaces like ChatGPT," Kelley said in his blog post.

These jailbreaks are specialized prompts that are becoming increasingly common. They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code.

Daniel Kelley

For example, one jailbreak, called the Grandma Exploit, tricked ChatGPT into revealing how to make napalm. It asked the chatbot to pretend to be a deceased grandmother who had been a chemical engineer at a napalm production factory and then asked the chatbot to explain how napalm is made.

Another jailbreak cooked up by Reddit users prompted ChatGPT to pretend it was in a role-playing game in which it was given the persona of DAN, short for Do Anything Now. That freed the model from adhering to some of the rules related to racist, sexist, and violent content.

How organizations can fight AI-fueled attacks

What can organizations do to thwart AI-powered attacks? Kelley recommends developing extensive training programs that focus on BEC threats and updating them regularly.

Such programs should educate employees on the nature of BEC threats, how AI is used to augment them, and the tactics employed by attackers. This training should also be incorporated as a continuous aspect of employee professional development.

Daniel Kelley

He also recommended that organizations implement stringent email verification processes, including automatic alerts when emails originating outside the organization impersonate internal executives or vendors, and that they flag messages containing specific keywords linked to BEC attacks, such as "urgent," "sensitive," or "wire transfer."
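The two controls described above can be sketched as a simple rule-based screener. This is an illustrative sketch, not a real product API: the domain, executive names, and keyword list are assumptions chosen for the example, and a production system would layer this on top of authentication checks such as SPF/DKIM/DMARC.

```python
import re

# Hypothetical values for illustration only -- swap in your own.
INTERNAL_DOMAIN = "example.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # display names worth protecting
BEC_KEYWORDS = re.compile(r"\b(urgent|sensitive|wire transfer)\b", re.IGNORECASE)

def screen_email(from_display: str, from_address: str, body: str) -> list:
    """Return a list of alert reasons for a single inbound message."""
    alerts = []
    sender_domain = from_address.rsplit("@", 1)[-1].lower()
    is_external = sender_domain != INTERNAL_DOMAIN

    # Rule 1: external sender using an internal executive's display name
    if is_external and from_display.strip().lower() in EXECUTIVE_NAMES:
        alerts.append("external sender impersonating internal executive")

    # Rule 2: keywords commonly seen in BEC lures
    if BEC_KEYWORDS.search(body):
        alerts.append("message contains BEC-linked keywords")

    return alerts
```

For example, a message from "Jane Doe" at a free webmail address asking for an urgent wire transfer would trigger both alerts, while routine internal mail would pass clean. Keyword matching alone is noisy, which is why Kelley pairs it with sender verification rather than relying on either signal in isolation.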

Aalto said such measures could ensure that potentially malicious emails are subjected to thorough examination before any action is taken.

Be sure to focus on your people and their email behavior, because that is what our adversaries are doing with their new AI tools.

Mika Aalto

Aalto said organizations should embed security as a shared responsibility throughout the organization. He recommended ongoing training that enables users to spot suspicious messages, and rewards for staff reporting threats.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: Security Operations
