The tell-tale signs of a deepfake image used to be easy to spot, but generative AI is making us question just about everything we see and hear now. With each new AI model that is released, the tell-tale signs of a fake image are diminishing, and to add to the confusion, you can now create deepfake videos, voice clones of your loved ones, and fabricated fake articles in mere seconds. To avoid being fooled by AI deepfakes, it's worth knowing what kind of dangers they pose.

The Evolution of Deepfakes

A deepfake shows a person doing something that never happened in real life. It's completely fake. We laugh at deepfakes when they are shared on the internet as a meme or joke, but very few people find it funny when they are used to mislead us.

In the past, deepfakes were created by taking an existing photo and altering it in image editing software like Photoshop. But what sets an AI deepfake apart is that it can be generated from scratch using deep learning algorithms.

The Merriam-Webster dictionary defines a deepfake as:

An image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.

But with advances in AI technology, this definition is beginning to look outdated. With the use of AI tools, deepfakes now include images, text, videos, and voice cloning. Sometimes, all four modes of AI generation are used at once.

Because it's an automated process that is incredibly quick and cheap to use, it's the perfect tool for churning out deepfakes at a rate we've never seen before—all without needing to know a single thing about how to edit photos, videos, or audio.

The Big Dangers of AI Deepfakes

A host of AI video generators already exist, alongside plenty of AI voice generators. Throw in a large language model like GPT-4 and you have a recipe for creating the most believable deepfakes that we have seen in modern history thus far.

Being aware of the different kinds of AI deepfakes, and how they might be used to trick you, is one way to avoid being misled. Here are just a few serious examples of how AI deepfake technology poses a real threat.

1. AI Identity Theft

You may have seen them. Among the first truly viral AI deepfakes to spread across the world were an image of Donald Trump being arrested and one of Pope Francis in a white puffer jacket.

AI-generated image of Pope Francis in a puffer jacket, posted to the Midjourney forum on Reddit.

While one seems like an innocent re-imagining of what a famous religious figure might throw on to wear on a chilly day in Rome, the other image, showing a political figure in a serious situation with the law, has far greater consequences if taken to be real.

So far, people have mainly targeted celebrities, political figures, and other famous individuals when creating AI deepfakes. In part, this is because there are plenty of photos of famous individuals on the internet, which likely helped train the models in the first place.

In the case of an AI image generator like Midjourney—used in both the deepfake of Trump and the Pope—a user simply needs to input text describing what they want to see. Keywords can be used to specify the art style, such as a photograph or photorealism, and results can be fine-tuned by upscaling the resolution.

You can just as easily learn to use Midjourney and test this out yourself, but for obvious moral and legal reasons, you should avoid posting these images publicly.

Unfortunately, being an average, non-famous human being won't guarantee that you're safe from AI deepfakes either.

The problem lies with a key feature being offered by AI image generators: the ability to upload your own image and manipulate it with AI. And a tool like Outpainting in DALL-E 2 can extend an existing image beyond its borders by inputting a text prompt and describing what else you would like to generate.

If someone else were to do this with your photos, the dangers could be significantly greater than those posed by the deepfake of the Pope in a white jacket—they could use the images anywhere, pretending to be you. While most people use AI with good intentions, there are very few restrictions stopping people from using it to cause harm, especially in cases of identity theft.

2. Deepfake Voice Clone Scams

With the help of AI, deepfakes have crossed a line most of us weren't prepared for: fake voice clones. With just a small amount of original audio—perhaps from a TikTok video you once posted, or a YouTube video you appear in—an AI model can replicate your one-and-only voice.

It's both uncanny and frightening to imagine receiving a phone call that sounds just like a family member, friend, or colleague. Deepfake voice clones are a serious enough concern that the Federal Trade Commission (FTC) has issued a warning about them.

Don’t trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can’t reach your loved one, try to get in touch with them through another family member or their friends.

The Washington Post reported a case of a couple in their 70s who received a phone call from someone who sounded just like their grandson. He was in jail and urgently needed money for bail. Having no other reason to doubt who they were talking to, they went ahead and handed over the money to the scammer.

It's not only the older generation that is at risk either. The Guardian reported another example of a bank manager who approved a $35 million transaction after a series of "deep-faked calls" from someone they believed to be a bank director.

3. Mass-Produced Fake News

Large language models like ChatGPT are very good at producing text that sounds just like a human, and we currently don't have effective tools to spot the difference. In the wrong hands, fake news and conspiracy theories will be cheap to produce and take longer to debunk.

Spreading misinformation isn't anything new of course, but a research paper published on arXiv in January 2023 explains that the problem lies in how easy it is to scale up the output with AI tools. They refer to it as "AI-generated influence campaigns", which they say could, for example, be used by politicians to outsource their political campaigns.

Combining more than one AI-generated source creates a high-level deepfake. As an example, an AI model can generate a well-written and convincing news story to go alongside the fake image of Donald Trump being arrested. This gives it more legitimacy than if the image was shared on its own.

Fake news isn't limited to images and writing either. Developments in AI video generation mean we are seeing more deepfake videos cropping up. Here's one of Robert Downey Jr. grafted onto a video of Elon Musk, posted by the YouTube channel Deepfakery.

Creating a deepfake can be as simple as downloading an app. An app like TokkingHeads, for example, lets you upload your own image and audio to turn a still image into an animated avatar, making it seem like the person is talking.

For the most part, it's entertaining and fun, but there's also potential for trouble. It shows us just how easy it is to use anyone's image to make it seem as if that person uttered words that they never spoke.

Don't Get Fooled by an AI Deepfake

Deepfakes can be rapidly deployed at very little cost and with a low bar of expertise or computing power required. They can take the shape of a generated image, a voice clone, or a combination of AI-generated images, audio, and text.

It used to be a lot more difficult and labor-intensive to produce a deepfake, but now, with plenty of AI apps out there, just about anyone has access to the tools used to create deepfakes. As AI deepfake technology grows ever more advanced, it's worth keeping a close eye on the dangers it poses.