June 13, 2019

Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here.

The problem with deepfakes

Today, the U.S. House Permanent Select Committee on Intelligence is holding a hearing on deepfake videos. Researchers will testify about the potential threat manipulated content poses to national security — particularly in the lead-up to the 2020 election.

And based on the past few weeks, the committee will have plenty to talk about.

Deepfake videos, which leverage artificial intelligence to make it look like someone is doing or saying something they never did or said, have been a popular misinformation topic since Motherboard first reported on them in December 2017. But they regained traction in the news cycle a couple of weeks ago, when a manipulated video of House Speaker Nancy Pelosi (D-Calif.) went viral on Facebook.

The video, which tried to make it look like Pelosi was drunk and slurring her speech, wasn’t a deepfake — it was simply slowed down. Researchers call these types of videos “cheapfakes” or “shallowfakes.” But its massive reach set off a panic in the media about the potential effect of manipulated videos heading into 2020.

And that panic didn’t die down this week.

On Monday, The New York Times ran an alarmist opinion article, pegged to the Pelosi hoax, suggesting that the rise of AI-manipulated videos will make it impossible for people to believe their own eyes on the internet. The Hill asked whether a deepfake would be 2020’s “Comey moment.”

To many, that kind of coverage seemed to blow the problem out of proportion.

Digitally altered videos have been around for a long time (look no further than Hollywood’s CGI capabilities), so the problem isn’t coming out of left field. And deepfakes are still pretty hard to make: It takes several days and a lot of server space just to extract all the source photos needed to create a believable visual illusion. When Poynter tried to create one, we failed miserably.

But someone also posted an actual deepfake of Mark Zuckerberg this week.

On Wednesday, The Washington Post reported that two artists and a tech startup created the video of Facebook’s chief executive. Posted on Instagram, the deepfake made it look like Zuckerberg was bragging on CBS about stealing data from users. In actuality, the video was created using AI and a voice actor. The same artists created other deepfakes depicting Donald Trump and Kim Kardashian West.

https://www.instagram.com/p/ByaVigGFP2U/


The discovery set off a firestorm of speculation over whether Facebook-owned Instagram would remove the videos, since one depicted Zuckerberg. Facebook hadn’t removed the manipulated Pelosi video, instead relying on its fact-checking partners to debunk and flag the post as false. (Disclosure: Being a signatory of Poynter’s International Fact-Checking Network code of principles is a necessary condition for joining the project.)

An Instagram spokesperson reiterated that strategy to TechCrunch, saying the platform would remove the deepfake videos from the Explore tab and hashtag result pages if fact-checkers found them to be false. Lead Stories, one of Facebook’s fact-checking partners, debunked the Zuckerberg video and labeled it as satire. That means its future distribution won’t be affected, but it will appear with a disclaimer on Facebook.

These deepfake videos will no doubt come up at today’s Congressional hearing. And, as with the bogus video of Pelosi (who still hasn’t returned Zuckerberg’s calls about the incident — awkward!), it’s a safe bet that they’ll be blown out of proportion to make a broader point about the dangers of misinformation online.

It’s true that deepfake videos present a big problem for journalists and news consumers — but the threat is still somewhat far off. And both lawmakers and journalists would do well to remember that.


As Tim Hwang noted for Poynter in December, it’s still much easier for someone to take a photo out of context and post it on social media with a bogus claim than it is to create deepfakes. And, aside from the few high-profile examples we’ve analyzed in this newsletter, The Verge wrote recently that there’s still virtually no evidence that such fakes will be shared widely in the near future.

The development of media manipulation technology poses more immediate challenges. This week, The Verge wrote about how some Facebook engineers managed to clone Bill Gates’ voice using AI (and how remarkably easy that is to do). And researchers have found a way to edit what people say in videos simply by typing what they want to hear.

On the flip side, deepfakes can also serve as a PSA about, well, deepfakes. With that in mind, below are five articles and resources that best sum up the ongoing challenges of deepfakes — and how journalists and the public can best prepare to meet them.

“We Can Learn to See Through ‘Deepfake’ Video Hoaxes” — Lux Alptraum

“Prepare, Don’t Panic: Synthetic Media and Deepfakes” — Witness

“Six lessons from my deepfakes research at Stanford” — Tom Van de Weghe

“How do we work together to detect AI-manipulated media?” — Sam Gregory and Eric French

“How Could Deepfakes Impact the 2020 US Elections?” — Nicholas Diakopoulos and Deborah Johnson

. . .  technology

  • One of the primary ways misinformation reporters and fact-checkers investigate online content is by using open-source tools like Facebook graph searches. But now the company is turning off some of those search features, BuzzFeed News reported — “a potential disaster.”
  • Instagram has long been among the least-used social media platforms for journalists and fact-checkers — but no more. In Turkey, Teyit amassed more than 80,000 followers in only eight months by offering fact-checking content in a variety of formats. Meanwhile, the fact-checking site has also released a sticker pack for messaging platforms that makes it easier for users to gently warn their contacts about misinformation.
  • Pinterest has banned a popular anti-abortion group that was sharing misinformation and conspiracies, BuzzFeed News reported. Meanwhile, Facebook banned notorious anti-vaccine website Natural News from publishing on its platform for violating the company’s spam policies, and YouTube has given the boot to Sandy Hook truthers and Holocaust deniers.

. . .  politics

  • Writing for Poynter, Julianna Rennie and Bill Adair commented on the trend of “embedded fact checks” in American journalism — and how more mainstream political reporters should do them for politicians other than Trump. The president has made 10,796 false or misleading claims over 869 days, according to The Washington Post Fact Checker’s latest tally.
  • The U.S. congressman who represents Silicon Valley doesn’t think tech giants are doing enough to combat disinformation, The Washington Post reported. And he wants to create a consortium where they can share disinformation threats, similar to how banks share data about fraud.
  • European Union parliamentary elections are over — and so is the collaborative fact-checking project FactCheckEU, which was backed by the IFCN. So what have the organizations behind the initiative learned? Here’s a Q&A with FactCheckEU director Jules Darmanin.

. . .  the future of news

  • Next week is the sixth annual Global Fact-Checking Summit, hosted by the IFCN in Cape Town, South Africa. Here’s a look at some of this year’s new attendees, here’s a preview of what will happen during the conference and here’s the event’s full agenda.
  • Speaking of new fact-checkers, the Duke Reporters’ Lab has published another update to its census of organizations worldwide. It found that there are now 188 fact-checking organizations in more than 60 countries — a 26% increase from the last census in February 2018. The largest growth came in Asia.
  • Here’s a fun new fact-checking format: (Poynter-owned) PolitiFact is launching a Mueller Report book club to help readers better understand the 448-page document about 2016 U.S. election interference. Each week this summer, fact-checkers will suggest a set amount of reading and publish analyses and reader insights both on the PolitiFact website and in its weekly newsletter.

Around the world, governments, companies and civil society organizations have singled out plastic straws as public enemy No. 1 when it comes to ocean pollution. And while the jury’s still out on whether banning straws will have any meaningful impact in the long run, there are plenty of false claims to go around.

Full Fact debunked one such claim at the end of May. In the article, the fact-checker wrote that several mainstream news organizations were reporting inflated or out-of-date statistics about straw use in the United Kingdom. The reporting came after the Department for Environment, Food and Rural Affairs announced a near-complete ban on the use of plastic straws and stirrers.

After Full Fact published its fact check, which found that The Independent and The Guardian used faulty numbers, both outlets corrected their stories.

What we liked: Statistics about straw use aren’t necessarily as important as, say, numbers about teen birth rates in sub-Saharan Africa, but it’s still significant that Full Fact was able to set the record straight. The Guardian and The Independent have large audiences, and Full Fact was able to provide the full context by doing some good old-fashioned number-crunching.

  1. Forbes, The Hill, the Daily Caller and The Federalist all published articles written by a fake Iranian opposition activist, a persona that existed only as a bogus Twitter account.
  2. Snopes has published another update to its public GoFundMe page, saying it has filed a new petition in its ongoing legal battle over the ownership of the website. Meanwhile, Snopes has raised more than $1 million from the crowdfunding effort. The Seattle Times covered the story.
  3. TechCrunch wrote about new AI technology developed by researchers at the University of Washington that can both write and detect fake news stories.
  4. The Washington Post noted that a Google search for the Mueller report was turning up results calling it fiction. Yikes.
  5. Writing for Wired, misinformation researcher Renee DiResta explained how attempts to limit falsehoods on social media could borrow from what the platforms have done to weed out spam.
  6. Reuters reported on a series of Russian-backed YouTube channels that were publishing false reports without contextual labels from the platform.
  7. A New York University professor wrote a report on what social media companies should do to cut down on domestic disinformation.
  8. Mozilla published the results of a survey it conducted on misinformation — and most people said that education is the primary way to address the phenomenon.
  9. In Brazil, Agência Lupa has created ethics and business councils that will help the fact-checking organization make high-level decisions about its content and business model.
  10. Finally, a plug from Poynter: Applications for our Leadership Academy for Diversity in Digital Media are due tomorrow. It’s a free program that’s designed “to train journalists of color working in digital media to thrive, professionally and personally.” Apply here.

That’s it for this week! Feel free to send feedback and suggestions to factchecknet@poynter.org.

Daniel and Susan
