Dark web AI models that can phish and write malware have been exercising minds in recent weeks. But scratch the surface and the so-called WormGPT and FraudGPT LLMs seem pretty limited; to some researchers, they even feel like scams.
Nevertheless, they show where this technology is headed. In this week’s Secure Software Blogwatch, we shore up defenses against BEC and SSCA.
Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: Choose your fighter (no, not that one).
[ Learn why you need to upgrade your app sec: Tools gap leaves organizations exposed to supply chain attacks ]
What’s the craic? John P. Mello Jr. reports — “WormGPT: Business email compromise amplified by ChatGPT”:
Since OpenAI introduced ChatGPT to the public last year, generative AI large language models (LLMs) have been popping up like mushrooms after a summer rain. So it was only a matter of time before online predators, frustrated by the guardrails deployed by developers … cooked up their own model for malevolent purposes. … Here's what researchers know:
WormGPT is believed to be based on the GPT-J LLM, which isn't as powerful as OpenAI's GPT-4. But … it doesn't have to be. GPT-J [was] developed in 2021 by EleutherAI.
WormGPT is believed to have been trained on a diverse array of data sources, with an emphasis on malware-related data. … Experiments with WormGPT to produce an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice were "unsettling."
It’s just the beginning. Bill Toulas has more — “Cybercriminals train AI chatbots for phishing, malware attacks”:
In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard.
[The developers] said that they were working on DarkBART - a "dark version" of Google's conversational generative artificial intelligence chatbot. [They] also had access to another large language model, named DarkBERT, developed by South Korean researchers and trained on dark web data, but built to fight cybercrime.
The trend of using generative AI chatbots is growing. … It can provide an easy solution for less capable threat actors, or for those who want to expand operations into other regions but lack the language skills.
“Less capable threat actors”? elmomle puts it another way:
Writing a convincing email is one of the more time-consuming parts of a spearphishing attack. Any competent cybercriminal would have their own script that finds the closest available match to the actual CEO's email and uses it. If they can automate the part that used to take research, the average script kiddie now isn't that far from being able to brute-force scam most companies.
That said, I don't want to evoke too much alarm. The business side will evolve as well; that's how these things go. Maybe by enforcing very strict protocols on link-clicking and money-sending, maybe by something that automates such enforcement. Or maybe something stupidly simple like your email warning you that this email address is one that you haven't seen before but looks like a near-clone of one you have seen. To which the scammers would then adapt, etc.
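elmomle's "stupidly simple" defense — warning when a sender's address is a near-clone of a known contact — can be sketched in a few lines of Python. This is purely illustrative (the function names, contact set, and distance threshold are invented here, not taken from any real mail client):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lookalike_warning(sender: str, known_contacts: set[str],
                      max_distance: int = 2) -> bool:
    """Flag a sender that is *almost* a known contact, but not exactly one."""
    sender = sender.lower()
    if sender in known_contacts:
        return False  # exact match: no warning
    return any(edit_distance(sender, known) <= max_distance
               for known in known_contacts)

known = {"ceo@example.com", "finance@example.com"}
print(lookalike_warning("ce0@example.com", known))   # near-clone of a contact
print(lookalike_warning("ceo@example.com", known))   # exact match, no warning
```

As the comment thread predicts, scammers would adapt (e.g., by registering whole lookalike domains), so a real deployment would combine this with domain-age checks and authentication standards like DMARC rather than relying on string distance alone.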
Also, it shows a worrying path forward. u/SPHAlex shares some concerns:
The true concern for AI is the possibility of combining two things: the mass data that we currently have and collect, and the ability to construct unique, targeted scams with AI of growing capabilities.
The real concern [is] that scammers … will use AI to analyze data to target scams at people. Most of the knowledge to mimic a site/email comes from personal use, but with scraping and more advanced AI it becomes easier to filter for who is most vulnerable and create a template that is harder to detect as a scam.
I'm not really concerned about the idiots trying to do refund scams, the random texts from "girls" who think you're their friend, or stuff like that. I'm worried about the complex scams that rely on them faking a human connection to get you to drop your guard or slip up.
I’m confused — panic or don’t panic? Kyle Wiggers advises, “There’s no reason to panic”:
The dark web creators of … WormGPT and FraudGPT advertise their creations as being able to perpetrate phishing campaigns, generate messages aimed at pressuring victims into falling for business email compromise schemes and write malicious code. [But] the threat of AI-accelerated hackers isn’t quite as dire as some headlines would suggest.
In the AI world … GPT-J is practically ancient history — and certainly nowhere near as capable as the most sophisticated LLMs today, like OpenAI’s GPT-4. … FraudGPT’s creator describes it as “cutting-edge,” claiming the LLM can “create undetectable malware” and uncover websites vulnerable to credit card fraud. But … there’s not much to go on besides the hyperbolic language.
It’s the same sales move some legitimate companies are pulling: Slapping “AI” on a product to stand out or get press attention, preying on customers’ ignorance. … Realistically, they’ll at most make a quick buck for the … scammers who built them.
As does Melissa Bischoping — “The new tools are just rudimentary apps that generate the kind of code a teenager could write”:
I haven’t seen my industry peers overly concerned about either [FraudGPT or WormGPT]. And I have seen nothing to suggest that this is scary.
[The creators] are preying on people who are not sophisticated enough to actually write their own malware, but want to make a quick buck. … It’s all in clear text, so there’s no attempt to be evasive here. [It wouldn’t be] something the average person is even going to run on their own. … This is something that your average high schooler could write. You don’t need [AI] to write this.
The real scam is the fact that someone out there is trying to sell this as a wonder tool. This is someone who is capitalizing on the same hype that we all have been paying attention to, and going after the people who lack the technical ability to write their own effective malware. But if something sounds too good to be true, it probably is.
It does. Nothing to see here, thinks a slightly sarcastic eur0pa:
Yes, truly groundbreaking. … It's just skiddiots scamming skiddiots, as it's always been.
It was ever thus. u/blu3tu3sday has seen it all before:
This reminds me of the folks who spend 5 weeks automating a task that takes 5 mins to do.
Meanwhile, JMZero says we don’t need to worry — yet:
Prompt: Could you write a joke about a squirrel and an umbrella?
GPT4: Why did the squirrel share his umbrella with a friend? Because he didn't want to be the only one going nuts in the rain!
You have been reading Secure Software Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi, @richij or firstname.lastname@example.org. Ask your doctor before reading. Your mileage may vary. Past performance is no guarantee of future results. Do not stare into laser with remaining eye. E&OE. 30.