In January 2023, just two months after launch, ChatGPT (short for Chat Generative Pre-trained Transformer) became the fastest-growing consumer application in history, amassing more than 100 million users.

OpenAI's advanced chatbot may have reinvigorated the public's interest in artificial intelligence, but few have seriously contemplated the potential security risks associated with this product.

ChatGPT: Security Threats and Issues

The technology underpinning ChatGPT may be similar to that of other chatbots, but ChatGPT is in a category of its own. This is great news if you intend to use it as a kind of personal assistant, but worrying when you consider that threat actors can use it too.

Cybercriminals can use ChatGPT to write malware, build scam websites, generate phishing emails, create fake news, and more. As Bleeping Computer put it in an analysis, this may make ChatGPT a bigger cybersecurity risk than a benefit.

At the same time, there are serious concerns that ChatGPT itself has unaddressed vulnerabilities. In March 2023, for example, reports emerged that some users could view the titles of other users' conversations. As The Verge reported at the time, OpenAI CEO Sam Altman explained that "a bug in an open source library" had caused the issue.

This underscores how important it is to limit what you share with ChatGPT, which collects a staggering amount of data by default. Tech behemoth Samsung learned this the hard way when a group of employees who had been using the chatbot as an assistant accidentally leaked confidential information to it.

Is ChatGPT a Threat to Your Privacy?

Security and privacy are not one and the same, but they are closely related and often intersect. If ChatGPT is a security threat, then it is also a threat to privacy, and vice versa. But what does this mean in more practical terms? What are ChatGPT's security and privacy policies like?

ChatGPT was trained on billions of words scraped from the internet, and the data it holds is continually expanding, since OpenAI stores what users share with the chatbot. The US-based non-profit Common Sense gave ChatGPT a privacy evaluation score of 61 percent, noting that the chatbot collects Personally Identifiable Information (PII) and other sensitive data. Most of this data is stored or shared with certain third parties.

In any case, you should be careful when using ChatGPT, especially at work or when processing sensitive information. As a general rule of thumb, don't share anything with the bot that you wouldn't want the public to know.
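
To make this concrete, here is a minimal sketch in Python of one such precaution: scrubbing obvious personally identifiable information from a prompt before it ever leaves your machine. The redact helper and its patterns are hypothetical and deliberately simple, not an exhaustive redaction tool.

    import re

    # Illustrative patterns for two common kinds of PII. Real redaction
    # tools cover many more categories (names, addresses, card numbers).
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text):
        # Replace email addresses and phone numbers with neutral placeholders.
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    prompt = "Summarize this ticket from jane.doe@example.com, phone +1 555 123 4567."
    print(redact(prompt))
    # Summarize this ticket from [EMAIL], phone [PHONE].

Even a simple filter like this catches the most careless leaks; for anything truly confidential, the safer rule remains not to paste it at all.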

Addressing the Security Risks Associated With ChatGPT

Artificial intelligence will be regulated at some point, but it's difficult to imagine a world in which it doesn't pose a security threat. Like all technology, it can—and will—be abused.

In the future, chatbots will become an integral part of search engines, voice assistants, and social networks, according to Malwarebytes. And they will have a role to play in various industries, ranging from healthcare and education to finance and entertainment.

This will radically transform security as we know it. But as Malwarebytes also noted, ChatGPT and similar tools can be used by cybersecurity professionals as well, for example to look for bugs in software or "suspicious patterns" in network activity.
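
As a rough illustration of that defensive use case, the Python sketch below asks a chatbot to review a short log excerpt for suspicious patterns. It assumes the official openai package (pip install openai) and an API key in the OPENAI_API_KEY environment variable; the model name and log lines are purely illustrative.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Three fabricated web-server log lines: repeated failed logins from
    # one address, followed by a success.
    log_excerpt = (
        '198.51.100.23 - - "POST /login HTTP/1.1" 401\n'
        '198.51.100.23 - - "POST /login HTTP/1.1" 401\n'
        '198.51.100.23 - - "POST /login HTTP/1.1" 200\n'
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Point out suspicious patterns."},
            {"role": "user", "content": log_excerpt},
        ],
    )
    print(response.choices[0].message.content)

A pattern like this, repeated failed logins followed by a success from the same address, is exactly the kind of signal such a prompt should surface, though a human analyst still needs to verify the answer.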

Raising Awareness Is Key

What will ChatGPT be capable of five or 10 years from now? We can only speculate, but what we do know for sure is that artificial intelligence is not going anywhere.

As even more advanced chatbots emerge, entire industries will have to adjust and learn how to use them responsibly. This includes the cybersecurity industry, which is already being shaped by AI. Raising awareness about the security risks associated with AI is key, and will help ensure these technologies are developed and used in an ethical way.