
The biggest threat of deepfakes isn’t the deepfakes themselves

The mere idea of AI-synthesized media is already making people stop believing that real things are real.
October 10, 2019
An image of Hitler giving a speech with his face replaced by a question mark. Ms. Tech

It was late 2018, and the people of Gabon hadn’t seen their president, Ali Bongo, in public for months. Some began to suspect that he was ill, or even dead, and the government was covering it up. To stop the speculation, the government announced that Bongo had suffered a stroke but remained in good health. Soon after, it released a video of him delivering his customary New Year’s address.

Rather than assuaging tensions, however, the video did precisely the opposite. As uncovered by the digital rights organization Internet Without Borders, many people, thinking Bongo looked off in the footage, immediately suspected it was a deepfake—a piece of media forged or altered with the help of AI. The belief fueled their suspicions that the government was hiding something. One week later, the military launched an unsuccessful coup, citing the video as part of the motivation.

Subsequent forensic analysis never found anything altered or manipulated in the video. That didn’t matter. The mere idea of deepfakes had been enough to accelerate the unraveling of an already precarious situation.

In the lead-up to the 2020 US presidential election, increasingly convincing deepfake technology has fueled fears about how such faked media could influence political opinion. But a new report from Deeptrace Labs, a cybersecurity company focused on detecting deepfakes, found no known instances in which deepfakes have actually been used in disinformation campaigns. What has had the more powerful effect is the knowledge that they could be used that way.

“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake,” says Henry Ajder, one of the authors of the report. “The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”

Documentation is no longer evidence

Human rights activists and disinformation experts have sounded the alarm on these separate yet intertwined threats since deepfakes appeared on the scene. In the past two years, US tech companies and policymakers have focused almost exclusively on the first problem Ajder mentions: the ease with which the technology can make fake things look real. But it’s the second that troubles experts more. While the barriers to creating deepfakes may be falling rapidly, calling the veracity of something into question requires no tech at all.

“From the very beginning, it’s been my biggest worry in this space,” says Aviv Ovadya, a disinformation expert who now runs the nonprofit Thoughtful Technology Project.

Undermining trust in the media can have deep repercussions, particularly in fragile political environments. Sam Gregory, the program director of Witness, a nonprofit that helps people document human rights abuses, offers an example. In Brazil, which has a long history of police violence, citizens and activists now worry that any video they film of an officer killing a civilian will no longer be sufficient grounds for investigation. This fear that real evidence can plausibly be dismissed as fake, says Gregory, has become a recurring theme in workshops he hosts around the world.

“It’s an evolution of the claim that something is ‘fake news,’” he says. “It gives another weapon to the powerful: to say ‘It’s a deepfake’ about anything that people who are out of power are trying to use to show corruption, to show human rights abuses.”

Proving the real is real and the fake is fake

Solving these problems will require understanding both types of threat. “At a high level, you want to make it as easy as possible to show that a real thing is real and that a fake thing is fake,” says Ovadya.

In recent months, many research groups and tech companies like Facebook and Google have focused on tools for exposing fakes, such as databases for training detection algorithms and watermarks that can be built into digital photo files to reveal whether they have been tampered with. Several startups have also been working on ways to build trust through consumer applications that verify photos and videos when they’re taken, creating a baseline for comparison if versions of the content circulate later. Gregory says tech giants should integrate both kinds of checks directly into their platforms to make them widely available.
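To make the capture-time verification idea concrete, here is a minimal sketch assuming the simplest possible approach: a cryptographic fingerprint recorded when a clip is filmed, checked later against copies circulating online. The filenames and function names are illustrative, not drawn from any particular product, and real systems add signing, timestamping, and more robust matching.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a media file's raw bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_unaltered(path: str, recorded_digest: str) -> bool:
    """Check a circulating copy against the digest recorded at capture."""
    return fingerprint(path) == recorded_digest

# Hypothetical usage: the capture app records the digest (ideally signed
# and timestamped by a trusted third party) the moment the video is shot...
recorded = fingerprint("clip_at_capture.mp4")
# ...and anyone can later test whether a shared copy matches it.
print(is_unaltered("copy_found_online.mp4", recorded))
```

A note on the design: an exact-match hash like this only proves a file is byte-for-byte unchanged, so verifying re-encoded or cropped copies, as these verification apps must, would require perceptual fingerprints rather than purely cryptographic ones.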

But tech companies also need to employ human content moderators, and media organizations need to train journalists and fact-checkers in detection and verification. On-the-ground reporting can confirm whether or not a video reflects reality and add an important layer of nuance. “Technical models cannot interpret the content of the faked video across cultural contexts or imagine how it could be further recontextualized,” says Britt Paris, an information studies expert who recently published a report on deepfakes.

As an example, Paris points to altered videos of Nancy Pelosi and Jim Acosta that went viral over the past year. Both were so-called “cheapfakes” rather than deepfakes—their speed had simply been tampered with to mislead viewers. “There would be no way to catch these fakes with technical methods for catching deepfakes,” Paris says. Instead, journalists had to debunk them—which meant people had to trust the journalists.

Finally, all the experts agree that the public needs greater media literacy. “There is a difference between proving that a real thing is real and actually having the general public believe that the real thing is real,” Ovadya says. He says people need to be aware that falsifying content and casting doubt on the veracity of content are both tactics that can be used to intentionally sow confusion.

Gregory cautions against placing too large a burden on news consumers, however. Researchers, platforms, and journalists should do as much of the work as possible to help make clear what is real and what is fake before news reaches the public.

The ultimate goal, Ovadya says, is not to instill overall skepticism but to build “social, educational, inoculating infrastructure” for neutralizing the impact of deepfakes. “What should we be trying to avoid?” he asks. “It is valuable to be questioning evidence. But what [disinformation actors] really want is not for you to question more, but for you to question everything.”

He adds, “That is the antithesis of what we’re looking for.”
