RL Blog

Dev & DevSecOps · August 2, 2023

FraudGPT / WormGPT: Scammy for now — but a worrying signpost for software security

Your app sec team should factor in more capable malicious AI tools, coming soon.

Richi Jennings, independent industry analyst, editor, and content strategist.
[Image: four robots lined up looking at laptops]

Dark web AI models that can phish and write malware have been exercising minds in recent weeks. But the so-called WormGPT and FraudGPT LLMs do seem to be pretty limited, once you scratch the surface — they even feel like scams, to some researchers.

Nevertheless, they show where this technology is headed. In this week’s Secure Software Blogwatch, we shore up defenses against business email compromise (BEC) and software supply chain attacks (SSCA).

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: Choose your fighter (no, not that one).

Learn why you need to upgrade your app sec: Tools gap leaves organizations exposed to supply chain attacks

ScamGPT?

What’s the craic? John P. Mello Jr. reports — “WormGPT: Business email compromise amplified by ChatGPT”:

“Unsettling”

Since OpenAI introduced ChatGPT to the public last year, generative AI large language models (LLMs) have been popping up like mushrooms after a summer rain. So it was only a matter of time before online predators, frustrated by the guardrails deployed by developers … cooked up their own model for malevolent purposes. … Here's what researchers know:

…

WormGPT is believed to be based on the GPT-J LLM, which isn't as powerful as OpenAI's GPT-4. But … it doesn't have to be. GPT-J [was] developed in 2021 by EleutherAI.

…

WormGPT is believed to have been trained on a diverse array of data sources, with an emphasis on malware-related data. … Experiments with WormGPT to produce an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice were "unsettling."

It’s just the beginning. Bill Toulas has more — “Cybercriminals train AI chatbots for phishing, malware attacks”:

“Growing”

In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard.

…

[The developers] said that they were working on DarkBART - a "dark version" of Google's conversational generative artificial intelligence chatbot. [They] also had access to another large language model named DarkBERT developed by South Korean researchers and trained on dark web data but to fight cybercrime.

…

The trend of using generative AI chatbots is growing. … It can provide an easy solution for less capable threat actors or for those that want to expand operations to other regions and lack the language skills.

“Less capable threat actors”? elmomle puts it another way:

Writing a convincing email is one of the more time-consuming parts of a spearphishing attack. Any competent cybercriminal would have their own script that finds a closest-available match to the actual CEO's email and use that. If they can automate the part that used to take research, the average script kiddie now isn't that far from being able to brute-force scam most companies.

…

That said, I don't want to evoke too much alarm. The business side will evolve as well; that's how these things go. Maybe by enforcing very strict protocols on link-clicking and money-sending, maybe by something that automates such enforcement. Or maybe something stupidly simple like your email warning you that this email address is one that you haven't seen before but looks like a near-clone of one you have seen. To which the scammers would then adapt, etc.

Also, it shows a worrying path forward. u/SPHAlex shares some concerns:

The true concern for AI is the possibility to combine two things: The mass data that we currently have and collect, and the ability to construct unique, targeted scams with AI with growing capabilities.

…

The real concern [is] that scammers … will use AI to analyze data to target scams at people. Most of the knowledge to mimic a site/email comes from personal use, but with scraping and more advanced AI it becomes easier to filter for who is most vulnerable and create a template that is harder to detect as a scam.

…

I'm not really concerned about the idiots trying to do refund scams, the random texts from "girls" who think you're their friend, or stuff like that. I'm worried about the complex scams that rely on them faking a human connection to get you to drop your guard or slip up.

I’m confused — panic or don’t panic? Kyle Wiggers advises, “There’s no reason to panic”:

“Scammers”

The dark web creators of … WormGPT and FraudGPT advertise their creations as being able to perpetrate phishing campaigns, generate messages aimed at pressuring victims into falling for business email compromise schemes and write malicious code. [But] the threat of AI-accelerated hackers isn’t quite as dire as some headlines would suggest.

…

In the AI world … GPT-J is practically ancient history — and certainly nowhere near as capable as the most sophisticated LLMs today, like OpenAI’s GPT-4. … FraudGPT’s creator describes it as “cutting-edge,” claiming the LLM can “create undetectable malware” and uncover websites vulnerable to credit card fraud. But … there’s not much to go on besides the hyperbolic language.

It’s the same sales move some legitimate companies are pulling: Slapping “AI” on a product to stand out or get press attention, preying on customers’ ignorance. … Realistically, they’ll at most make a quick buck for the … scammers who built them.

As does Melissa Bischoping — “The new tools are just rudimentary apps that generate the kind of code a teenager could write”:

“Scam”

I haven’t seen my industry peers overly concerned about either [FraudGPT or WormGPT]. And I have seen nothing to suggest that this is scary.

…

[The creators] are preying on people who are not sophisticated enough to actually write their own malware, but want to make a quick buck. … It’s all in clear text, so there’s no attempt to be evasive here. [It wouldn’t be] something the average person is even going to run on their own. … This is something that your average high schooler could write. You don’t need [AI] to write this.

…

The real scam is the fact that someone out there is trying to sell this as a wonder tool. This is someone who is capitalizing on the same hype that we all have been paying attention to, and going after the people who lack the technical ability to write their own effective malware. But if something sounds too good to be true, it probably is.

It does. Nothing to see here, thinks a slightly sarcastic eur0pa:

Yes, truly groundbreaking. … It's just skiddiots scamming skiddiots, as it's always been.

It was ever thus. u/blu3tu3sday has seen it all before:

This reminds me of the folks who spend 5 weeks automating a task that takes 5 mins to do.

Meanwhile, JMZero says we don’t need to worry — yet:

Prompt: Could you write a joke about a squirrel and an umbrella?

GPT4: Why did the squirrel share his umbrella with a friend? Because he didn't want to be the only one going nuts in the rain!

And Finally:

I choose: Bush, Hopper and Minsky. How about you?

Previously in And finally


You have been reading Secure Software Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi, @richij or ssbw@richi.uk. Ask your doctor before reading. Your mileage may vary. Past performance is no guarantee of future results. Do not stare into laser with remaining eye. E&OE. 30.

Image sauce: Mohamed Nohassi (via Unsplash; leveled and cropped)

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: Dev & DevSecOps
