June 20, 2018

Making it look like someone did or said something that never happened is harder than it looks.

That’s what Poynter learned over the past several weeks while trying to create a “deepfake” video of Alex Jones, Infowars host and frequent conspiracy theorist, giving remarks that were actually made by Facebook CEO Mark Zuckerberg. Deepfakes, named after the Reddit user who popularized the method, are created by extracting a large number of face frames from two videos and training a machine learning model to superimpose one person’s face onto the other’s.

The goal: Make it look like someone is doing or saying something they’re not. And that’s what Poynter tried to do in order to see how easy this technology actually is to use.

Using FakeApp, a free downloadable tool that lets users create their own deepfakes, we tried to create a rudimentary video of Zuckerberg spouting Jones’ baseless conspiracy theories. Below is the video, created with help from Poynter’s part-time web developer Warren Fridy.

Clearly that’s not a good version of something that porn aficionados, people on fringe websites and even political parties have become alarmingly proficient at. While Zuckerberg’s face is indeed overlaid onto Jones’, FakeApp included his entire face, making him appear like some kind of blurry demon without discernible eyes. Then there’s the fact that the audio didn’t carry over.

In short: We should keep our day jobs.

Jokes aside, it took us weeks to come up with something that’s far from even a passable deepfake video. And that’s because — despite frequent doomsday media coverage of deepfake technology — it’s actually really hard to create one.

“Deepfake requires quite some manual effort,” said Matthias Niessner, a professor at the Technical University of Munich whose lab researches deepfakes, in an email to Poynter. “In a way you could see it as similar to Photoshop. It's just a little overhyped.”

The rise of deepfakes — discussed during a panel at the Global Fact-Checking Summit in Rome today — went from being a creepy internet thing to a genuine threat to democracy, according to Rolling Stone. Vox wrote that manipulated videos could alter our memories, The Verge said creating deepfake porn videos will soon be super easy and Wired even covered a blockchain startup that has dedicated itself to combating the format.

At the same time, some have reported that general deepfakes are already over and “deep video portraits” are what we should all be worrying about.

Meanwhile, after more than a month of trying, we were barely able to produce something that looked even marginally realistic. Sure, that could (rightfully) be chalked up to our own ignorance of machine learning models — other journalists have had more success — but the point here is that creating significantly altered video isn’t yet as easy as posting a bogus news story on Facebook (and probably not as profitable).

“I don't think that people without any type of experience in deep learning can create a convincing deepfake video using the deepfake app available online,” said Annalisa Verdoliva, an assistant professor at the University of Naples Federico II, in an email to Poynter. “Even for expert people it is rather difficult.”

Verdoliva tried to create her own deepfake video with a research team and said it turned out poorly because they didn’t get enough variation in images to accurately train the model that creates the deepfake. That’s a key challenge for would-be hoaxers like Poynter, too.

Here are the steps we took:

  1. We downloaded FakeApp and installed it on a Windows remote desktop (Poynter almost exclusively uses Macs, with which FakeApp isn’t compatible.)
  2. We downloaded two video clips — one of Zuckerberg speaking about the investigation of Russian meddling in the 2016 U.S. election, and another of Jones talking about the “deep state” and President Donald Trump’s political agenda. This gives FakeApp plenty of face pictures to choose from.
  3. We let the “artificial intelligence” that powers FakeApp find and crop the faces in each video to create a machine learning model that would blend Jones and Zuckerberg to make it look like the former was speaking like the latter. This took an entire weekend, or about three days.
  4. Finally, FakeApp generated the video. Fridy said that he could have trained the model longer and with more photos of Zuckerberg’s face to make it more accurate, but that would have taken longer.
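The core trick behind step 3 is widely reported to be an autoencoder with one shared encoder and a separate decoder per face: each decoder learns to reconstruct one person, and swapping decoders at generation time produces the fake. As a rough sketch of that idea only (FakeApp’s actual internals aren’t documented here, and the toy data, sizes and training loop below are all our own assumptions), here is a minimal linear version in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for cropped face frames: 200 "images" of 64 pixels each.
D, K = 64, 8                          # pixel dimension and latent dimension
faces_a = rng.normal(size=(200, D))   # stand-in for Zuckerberg frames
faces_b = rng.normal(size=(200, D))   # stand-in for Jones frames

# One shared encoder, one decoder per identity -- the deepfake architecture.
W_enc = rng.normal(scale=0.1, size=(D, K))
W_dec_a = rng.normal(scale=0.1, size=(K, D))
W_dec_b = rng.normal(scale=0.1, size=(K, D))

def train_step(X, W_dec, lr=1e-3):
    """One gradient step on the reconstruction error ||(X W_enc) W_dec - X||^2."""
    global W_enc
    Z = X @ W_enc                     # encode faces into the shared latent space
    err = Z @ W_dec - X               # decode and compare to the originals
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec            # updated in place
    W_enc -= lr * grad_enc
    return float((err ** 2).mean())

first_loss_a = train_step(faces_a, W_dec_a)
first_loss_b = train_step(faces_b, W_dec_b)
for _ in range(500):                  # alternate identities so the encoder is shared
    loss_a = train_step(faces_a, W_dec_a)
    loss_b = train_step(faces_b, W_dec_b)

# The swap: encode a "Jones" frame, then decode it with the other decoder,
# rendering it as a "Zuckerberg" frame.
fake = (faces_b[:1] @ W_enc) @ W_dec_a
```

Real deepfakes use deep convolutional networks and many thousands of aligned face crops, which is why the training in step 3 ran for days rather than the seconds this toy takes.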

Now for the fun part. Here are some of the problems we encountered while using FakeApp:

  1. You can’t run FakeApp on Apple’s macOS — it only worked on Windows for us.
  2. You must have an NVIDIA-based graphics card, which is essentially a piece of hardware that allows you to process all the images and morph them together.
  3. You have to have a lot of memory on your computer — about 10 GB or so.
  4. We ran into compatibility issues with the different versions of TensorFlow, a software library that allows FakeApp to create its machine learning model.
  5. We pulled the audio from the original video to try to piece it into the deepfake, but FakeApp didn’t do anything with it.
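Several of these snags could have been caught before training started. As a sketch of our own preflight checklist (this is not a feature FakeApp provides, and the ~10 GB threshold is the rough figure from our experience), a few lines of standard-library Python can flag the obvious blockers:

```python
import platform
import shutil

def fakeapp_preflight(workdir="."):
    """Return a list of problems that would likely stop a FakeApp run."""
    problems = []
    if platform.system() != "Windows":
        problems.append("not Windows: FakeApp only worked for us on Windows")
    if shutil.which("nvidia-smi") is None:
        problems.append("nvidia-smi not found: an NVIDIA graphics card is required")
    free_gb = shutil.disk_usage(workdir).free / 1e9
    if free_gb < 10:
        problems.append(f"only {free_gb:.1f} GB free: plan on roughly 10 GB")
    try:
        import tensorflow  # FakeApp builds its model on TensorFlow
    except ImportError:
        problems.append("TensorFlow missing or incompatible with this FakeApp build")
    return problems

issues = fakeapp_preflight()
```

On our Mac desktops this would have reported every one of the problems above except the TensorFlow version mismatch, which only surfaced once training actually began.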

“FakeApp sounds good, but when you sit yourself down to try it, it doesn’t 100 percent work,” Fridy said. “It’s not a tool out of the box.”

FakeApp has reportedly been downloaded more than 100,000 times. There are entire 4chan and Reddit channels dedicated to the creation of deepfakes. But for now, the technology is rudimentary at best — even for those who know how to use it properly.

(Screenshots from FakeApp)

Fridy said technology like Adobe After Effects and Lyrebird, as well as Face2Face and Pix2Pix, would probably be better bets for creating a deepfake video. Sp.a, a Belgian political party that created a deepfake of Trump last month, used After Effects.

Despite the tool complications, deepfake videos are still worrisome — just not in the immediate future, and with a few caveats.

“We are just at the beginning of this technology and I think that it will soon improve and become easy-to-use for everybody,” Verdoliva said. “At the moment the resolution is still too low, but this will be an issue in the near future, because there has been a stunning progress in the generation of high-quality synthetic faces.”

At the same time, misunderstanding and overhyping the impact of and potential for deepfakes could pose a greater threat in the short term.

“My biggest concern at the moment is the misconception of many of these approaches,” Niessner said. “The movie industry has been doing quite realistic 'fake' images for the last 30 years. But now adding AI as a buzzword, it suddenly is an issue — even though it doesn't even work that well.”

Deepfake videos will be discussed during a panel at Global Fact 5 today. Follow @factchecknet and #GlobalFactV for live updates.

Daniel Funke is a staff writer covering online misinformation for PolitiFact. He previously reported for Poynter as a fact-checking reporter and a Google News Lab…