Who hasn't used ChatGPT at this point? It's fun, absolutely fascinating if you have any interest in artificial intelligence, and free (for now).

Though it's usually described as a chatbot, ChatGPT is much more than that. It can generate copy, explain complex topics, act as a translator, come up with jokes, and write code. But it can also be weaponized by threat actors.

How ChatGPT Works and Why It's Attractive to Cybercriminals

ChatGPT (the "GPT" stands for Generative Pre-trained Transformer) was developed by the artificial intelligence research laboratory OpenAI and launched in November 2022. It is built on a large language model fine-tuned with a combination of supervised learning and reinforcement learning from human feedback (RLHF).

Perhaps more importantly, ChatGPT is constantly refined using feedback from its users, who can upvote or downvote its responses. That feedback flows back into training, making the model all the more accurate and powerful over time.

That is what separates ChatGPT from other chatbots. And if you've ever used it, you know that the difference is noticeable immediately: unlike other similar products, it is able to actively participate in a conversation and complete complex tasks with stunning accuracy, while delivering coherent and human-like responses.

If you were shown a short essay written by a human, and one written by ChatGPT, you'd probably struggle to determine which is which. As an example, here's a part of the text ChatGPT generated when told to write a short essay about The Catcher in the Rye.

Screenshot of an essay about The Catcher in the Rye, generated by ChatGPT

This is not to say that ChatGPT doesn't have its limitations—it most certainly does. The more you use it, the more you'll notice this being the case. As powerful as it is, it can still struggle with elementary logic, make mistakes, share false and misleading information, misinterpret instructions in a comical way, and be manipulated into drawing the wrong conclusion.

But ChatGPT's power doesn't lie in its ability to converse. Rather, it lies in its near-unlimited capacity to complete tasks en masse, more efficiently and much faster than a human could. With the right inputs and commands, and a few creative workarounds, ChatGPT can be turned into a disturbingly powerful automation tool.

With that in mind, it's not difficult to imagine how a cybercriminal could weaponize ChatGPT. It's all about finding the right method, scaling it, and making the AI complete as many tasks as possible at once, with multiple accounts and on several devices if necessary.
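To illustrate just how little effort that scaling takes, here is a minimal, deliberately benign sketch of bulk text generation using OpenAI's official Python library. It assumes the `openai` package (v1.x) and an API key in the environment; the model name and topics are placeholder assumptions, not anything tied to a real attack:

```python
# A minimal, benign sketch of bulk text generation via the OpenAI API.
# Assumes the official `openai` Python package (v1.x) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

topics = ["productivity apps", "home gardening", "budget travel"]

for topic in topics:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat model would do
        messages=[{"role": "user",
                   "content": f"Write a 100-word blog intro about {topic}."}],
    )
    print(response.choices[0].message.content)
```

A three-line loop yields three finished pieces of copy. Swap that short list for thousands of entries, and you have exactly the kind of mass automation this article is worried about.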

5 Things Threat Actors Could Do With ChatGPT

There are already a few real-world examples of threat actors using ChatGPT, and it is more than likely being weaponized in a number of other ways too, or will be at some point in the future. Here are five things hackers can do (and probably are doing) with ChatGPT.

1. Write Malware

Screenshot of code generated by ChatGPT

If ChatGPT can write code, it can write malware. No surprise there. But this is not a mere theoretical possibility. In January 2023, cybersecurity firm Check Point Research discovered that cybercriminals were already using ChatGPT to write malware—and bragging about it on underground forums.

One threat actor Check Point Research discovered had used the chatbot rather creatively, prompting it to recreate Python-based malware described in certain research publications. When the researchers tested the malicious program, it turned out the cybercriminal had been telling the truth: the ChatGPT-generated malware did exactly what it was designed to do.

2. Generate Phishing Emails

Screenshot of a phishing email generated by ChatGPT

As powerful as spam filters have become, dangerous phishing emails still slip through the cracks, and there isn't much the average person can do except report the sender to their provider. But there is a lot a capable threat actor could do with a mailing list and access to ChatGPT.

With the right commands and suggestions, ChatGPT can generate convincing phishing emails, potentially automating the process for the threat actor and allowing them to scale their operations.

3. Build Scam Websites

ChatGPT-generated HTML code for a website

If you just Google the term "build a website with ChatGPT," you'll find a bunch of tutorials explaining in great detail how to do just that. Though this is good news for anyone who'd like to make a website from scratch, it's also great news for cybercriminals. What's stopping them from using ChatGPT to build a bunch of scam sites, or phishing landing pages?

The possibilities are almost endless. A threat actor could clone an existing website with ChatGPT and then modify it, build fake e-commerce websites, run a site with scareware scams, and so on.
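To give a sense of how short the path from prompt to hosted page is, here's a hypothetical sketch, benign by design and built on the same assumed `openai` package and model as above, that asks the model for a complete page and writes it straight to disk:

```python
# A hypothetical sketch: one API call yields a complete, hostable page.
# Same assumptions as before: `openai` package v1.x, OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[{
        "role": "user",
        "content": "Generate a single-file HTML landing page "
                   "for a generic online store.",
    }],
)

# The generated markup can be uploaded to any web host as-is.
with open("index.html", "w") as f:
    f.write(response.choices[0].message.content)
```

The output here is an ordinary, generic page, but the same one-call pattern is what makes churning out lookalike storefronts or landing pages so cheap for an attacker.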

4. Create Spam Content

Fake giveaway post generated by ChatGPT

To set up a fake website, run a scam social media page, or build a copycat site, you need content—a lot of it. And it needs to look as legitimate as possible for the scam to work. Why would a threat actor hire content writers, or write blog posts on their own, when they can just have ChatGPT do that for them?

Granted, a website stuffed with AI-generated spam content would likely get penalized by Google fairly quickly and buried in the search results, but there are many other ways a hacker could promote a website, send traffic to it, and scam people out of their money or steal their personal information.

5. Spread Disinformation and Fake News

Screenshot of a fake news story generated by ChatGPT

Online disinformation has become a major issue in recent years. Fake news spreads like wildfire on social media, and people who don't know any better often fall for misleading—and sometimes literally made up—stories. This can have dire, real-life consequences, and it seems as though nobody has any idea how to stop the spread of fake news without violating free speech laws.

Tools like ChatGPT could exacerbate this problem. Threat actors having access to software that is able to generate thousands of fake news stories and social media posts every day seems like a recipe for disaster.

Don't Take Our Word For It

If you're not convinced, consider this: we asked ChatGPT itself how a cybercriminal would use it, and it appears to agree with the crux of this article.

Screenshot of ChatGPT explaining how a cybercriminal would use it

In the Wrong Hands, ChatGPT Becomes Dangerous

One can only imagine what artificial intelligence will be capable of five or 10 years from now. For the time being, it's best to ignore the hype and the panic, and assess ChatGPT rationally.

Like all technology, ChatGPT is neither inherently helpful nor harmful. Despite some shortcomings, it is by far the most capable chatbot ever released to the public.