Unless you've been living under a rock, you've probably heard of and even used ChatGPT or a similar AI chatbot. Although using these chatbots is certainly fun, it can also be dangerous if you don't know what you're doing.

These chatbots do have limitations, and it's important to understand what those are. So, here are six things you should avoid asking an AI chatbot.

1. Medical Diagnosis

AI chatbots are good at a lot of things, but providing accurate medical diagnoses isn't one of them. They may not be wrong every time, but trusting a chatbot to diagnose your illness instead of seeing a doctor is no better than googling your symptoms.

A chatbot will simply summarize the info it has pulled from the internet, which may or may not be true. More importantly, it doesn't know your medical history or your tolerance to certain substances, so it can't judge what kind of treatment would be right for you.

The internet has a tendency to exaggerate things, so it's no wonder that many people who look up their symptoms end up with hypochondria and severe anxiety. Consulting an AI chatbot can replicate the same effect and make matters worse. For your own sake, don't ask a chatbot for medical advice.

2. Product Reviews


An AI chatbot can easily list the price and specs of a product you might be interested in, but it can never write a genuine product review. Why? Because a review, by definition, includes the personal opinion, experience, and judgment of the reviewer.

AI can't see, smell, touch, taste, or hear, so any claims it makes about a product (any product, not just tech) in its "review" would be disingenuous.

For instance, an AI chatbot can't experience the in-hand feel of a phone, the clarity of its speakers, the quality of its vibration motor, the reliability of its fingerprint sensor, the fluidity of its software, or the versatility of its camera system.

If you're looking for recommendations on which products to buy, ask a chatbot to summarize existing reviews instead. You'll need an AI bot that can search the web, such as Bing with ChatGPT built in.

3. Legal Advice

You might have seen headlines about ChatGPT passing US law school exams, and about AI chatbots being on their way into our courtrooms in the coming years. That might sound shocking, but a quick search shows we've actually been trying to bring AI into the legal system since 2015!

Some companies have already created specialized AI legal assistants, even marketing them as an affordable alternative to human lawyers. The concept clearly has merit for people who can't afford the latter, but the problems with these so-called AI lawyers are too real to ignore.

The obvious difference is that AI chatbots lack common sense. Things that are instinctively obvious to us humans have to be explicitly programmed into a chatbot, which makes it less capable of logical reasoning.

More importantly, a human lawyer can search for new evidence and think outside the box to come up with clever solutions and loopholes, whereas an AI chatbot can only use the data you provide and process it in a predetermined manner. An AI lawyer would also be a perfect target for hackers looking to steal the sensitive data of many people at once.

4. News


Using AI chatbots as a news source poses three major problems: accountability, context, and power. First, accountability. When you read a news story, you know the publication and the journalist behind it. That means someone is accountable for the information you're given, with a clear incentive to be accurate to protect their reputation and credibility.

But with an AI chatbot, there's no individual person writing the news story. The bot is just summarizing stories already available on the web. That means there's no primary source, and you don't know where the information is coming from, so it's not as reliable.

Second, context. If the news you're reading is just a summary of info pulled from various sources, it can quickly paint a false narrative because you lose the deeper context of each individual story. The bot doesn't know whether what it's claiming is true; it's only blending things together in a way that seems true at surface level.

And finally, power. We know that ChatGPT can be biased, among other problems. So, if we all turn to AI chatbots as a news source, the companies behind them gain the power to suppress negative coverage and highlight positive coverage of the causes they believe in and want to promote. That's extremely dangerous for obvious reasons.

5. Political Opinion

It should be pretty obvious why asking a chatbot for political opinions is a big no-no. When you ask ChatGPT for its take on any political matter, you'll usually get a restricted response that goes something like this: "As an AI language model, I do not have personal preferences or desires..."

This restriction is understandable, but Reddit users have already found all sorts of creative loopholes to "jailbreak" the bot into responding without restrictions. Doing that for fun is one thing, but asking an unrestricted AI chatbot for political opinions is like letting your toddler access the dark web; it can't end well for anyone involved.

6. Commercial Content


The biggest selling point of AI chatbots is that they can produce content instantly. What would take a human a couple of hours to write, ChatGPT can do in seconds. Except, it can't.

Even if you overlook the questionable ethics of using AI chatbots to write content, you can't ignore that they're simply not good enough to replace human writers. Not yet, at least. ChatGPT, for example, often delivers inaccurate, outdated, and repetitive content.

Since these bots don't have their own opinions, judgment, imagination, or prior experience, they can't draw intelligent conclusions, think of examples, use metaphors, or craft narratives the way a professional human writer can.

In other words, AI-generated content doesn't add any new value to the internet. Using chatbots for personal purposes is perfectly fine, but using them to write your articles, news stories, social media captions, web copy, and more is probably a very bad idea.

AI Chatbots Are Fun, But Still Have Some Flaws

It's hard to say how AI chatbots will influence the tech industry and modern society at large. As revolutionary as the tech is, chatbots pose many problems that need to be addressed.

What's interesting is that Microsoft is now integrating a ChatGPT-like chatbot directly into its search engine Bing, and Google will soon do the same with its own chatbot called Bard. It's going to be interesting to see how these new AI-powered search engines change the internet.