Deepfakes and AI-generated videos are here to stay. Over the past few years, they've grown in both quality and quantity, raising serious concerns about national security and personal privacy.

Still, no matter how hard anonymous online users tried to make their fake videos realistic, they could never get past advanced facial recognition software. Until now.

Fooling Face Recognition APIs

Researchers at Sungkyunkwan University in Suwon, South Korea, put the quality of current deepfake technology to the test. They generated videos with open-source, commonly used deepfake software and measured how well Amazon's and Microsoft's facial recognition APIs held up against them.

The researchers used the faces of Hollywood celebrities. To create solid deepfakes, the software needs a lot of high-quality images of the same person from different angles, which are far easier to acquire for celebrities than for ordinary people.

The researchers also chose Microsoft's and Amazon's APIs as the benchmarks for their study, as both companies offer celebrity face recognition services. Using publicly available datasets, they created just over 8,000 deepfakes. From each deepfake video, they extracted multiple face shots and submitted them to the APIs in question.

With Microsoft's Azure Cognitive Services, the researchers were able to fool the system 78 percent of the time using deepfakes. Amazon's results were slightly better from a security standpoint, though its service still identified 68 percent of the submitted deepfake faces as the real celebrity.
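The evaluation loop the researchers describe — submit face shots from a deepfake video, then tally how often the service returns a confident match for the impersonated celebrity — can be sketched roughly as follows. Everything here is illustrative: the response tuples, the celebrity name, and the 0.5 confidence threshold are stand-ins, not details from the study or from the real Azure or Rekognition APIs.

```python
# Hypothetical sketch of measuring how often a celebrity-recognition
# API is fooled by deepfake face shots. In a real test, each response
# would come from uploading an image to Azure Cognitive Services or
# Amazon Rekognition; here they're mocked.

def fooling_rate(responses, target_celebrity, threshold=0.5):
    """Fraction of deepfake face shots the API matched to the
    impersonated celebrity with confidence at or above `threshold`."""
    fooled = sum(
        1 for name, confidence in responses
        if name == target_celebrity and confidence >= threshold
    )
    return fooled / len(responses)

# Mocked (name, confidence) responses for one deepfake video:
responses = [
    ("Tom Cruise", 0.91),   # confident match -> fooled
    ("Tom Cruise", 0.42),   # low confidence  -> not counted
    ("Unknown", 0.00),      # rejected        -> not counted
    ("Tom Cruise", 0.77),   # confident match -> fooled
]
print(fooling_rate(responses, "Tom Cruise"))  # 0.5
```

Scaled up across thousands of extracted frames, this is the kind of tally behind the 78 and 68 percent figures above.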

What About Deepfake Detectors?

Deepfake detectors work in more or less the same way deepfakes do: they're software trained with machine learning models to spot deepfake videos.

But instead of focusing on creating a hyper-realistic video to fool the detectors, attackers can now embed adversarial examples in every frame to confuse the AI system. Deepfake attacks of this type have success rates ranging from 78 to 99 percent.
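The idea behind those adversarial examples can be shown in miniature. Below is a minimal sketch of the fast gradient sign method (FGSM), one common way of crafting such perturbations, applied to a toy linear "detector" — the model, weights, and epsilon value are all illustrative, not taken from the study or from any real detector.

```python
import numpy as np

# Toy "deepfake detector": a logistic model that outputs the
# probability a frame is fake. A real detector is a deep network,
# but the FGSM attack idea is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=16)          # stand-in for learned weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect_fake(x):
    """Probability that frame x is a deepfake."""
    return sigmoid(w @ x + b)

# A fake frame the toy detector currently flags (probability >= 0.5).
x = rng.normal(size=16)
if detect_fake(x) < 0.5:
    x = -x                       # flip so the detector starts out catching it

# FGSM: step the input in the direction that increases the detector's
# loss for the "fake" label. For a logistic model that gradient is
# (p - y) * w with y = 1.
p = detect_fake(x)
grad = (p - 1.0) * w
epsilon = 0.5                    # per-element perturbation budget
x_adv = x + epsilon * np.sign(grad)

print(f"before: {detect_fake(x):.2f}  after: {detect_fake(x_adv):.2f}")
```

Applied to every frame of a video with a small enough epsilon, the perturbation stays invisible to viewers while pushing the detector's "fake" score down.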

It’s Getting Worse


Deepfakes are a machine learning application. To create one that’s even remotely convincing, you need hundreds of images of the same person’s face from different angles and displaying various emotions.

Because of the need for massive amounts of data, one would think that only people with a large online presence are at risk, like celebrities and politicians. But that’s no longer the case.

According to Deeptrace, the number of deepfakes online increased by 330 percent in under a year—from October 2019 to June 2020. Not to mention, the software and algorithms deepfake makers use are becoming more powerful and more widely available.

Who’s at Risk of Deepfakes?

When deepfakes first became mainstream, the primary worries were privacy and national security. People feared that video footage of politicians and government officials couldn't be trusted anymore.

But while it'd be irresponsible to disregard the security risk deepfakes pose, multiple surveys found that deepfake makers aren't that interested in disturbing politics just yet. The majority of deepfake videos online fall into two categories: funny videos of celebrity interviews and movies, and pornographic material.

While the recent study used celebrity faces to ensure the deepfakes were of high enough quality to fool the APIs, that doesn't mean you can't make deepfakes with less data. Sure, they might not stand a chance at fooling advanced facial recognition systems, but they can be convincing enough to trick other people.

Nowadays, convincing deepfakes can be made of anyone with a social presence. All a deepfake maker needs are a few photos of you and maybe a video you appear in. The resulting deepfake may be low in quality, but it's still doable and can be damaging.

The Future Is Still Unknown

Deepfakes aren't going away anytime soon, and predictions about where they're headed contradict one another.

Some expect an apocalyptic cyber future where you can't trust any footage you come across online. Others are more optimistic, comparing deepfakes to animation and saying they may have a future in content production.