Misinformation doesn’t have to be altered to go viral

This week, nearly 20,000 people fell for a video hoax. And it wasn’t even altered — the video came from a mainstream news organization.

In a fact check published Thursday, Liputan 6, an Indonesian news site, wrote that Bloomberg published the video in question about the potential presidential candidacy of Prabowo Subianto. But the video made the rounds on Facebook with a false caption that claimed the foreign press had reported Subianto would become the next president.

Liputan 6’s fact check got more than three times as many engagements as the false claim. That’s good news for the fact-checking outlet, which partners with Facebook to decrease the spread of false claims on the platform. (Disclosure: Being a signatory of the International Fact-Checking Network’s code of principles is a necessary condition for joining the project.)

But the out-of-context Bloomberg video illustrates the importance of context on Facebook, and shows that manipulated images and videos are still rarely the most viral form of misinformation.

Below is a chart with other top fact checks since last Tuesday in order of how many likes, comments and shares they got on Facebook, according to data from BuzzSumo and CrowdTangle. Read more about our methodology here.

On social media, misinformers often post real news articles out of context or with new, false context during breaking news or political events. These posts take virtually no effort to create but easily amass thousands of likes, shares and comments. And they spread false, potentially dangerous information about real news content.

That phenomenon was front and center last week following the fire at the Notre Dame cathedral in Paris.

Fact-checkers across Europe debunked several out-of-context news articles following the fire, which produced a torrent of hoaxes on social media. Spanish fact-checking site Maldito Bulo wrote about how people were sharing an El Mundo story about four people being detained near Notre Dame.

That story is factual — but it occurred in 2016. And social media users were sharing it as if it had happened in the aftermath of the Notre Dame fire.

In another case debunked by Maldito Bulo, people shared a 2016 Telegraph story about how police had found Arabic documents and gas tanks near Notre Dame. The story was factual, but it was outdated — the newspaper even added a topper to the article saying that it was unrelated to last week’s fire.

One of the most viral hoaxes about the Notre Dame fire falsely claimed that a video from CNBC depicted a person in Muslim attire walking in the cathedral during the fire. But (Poynter-owned) PolitiFact debunked that Islamophobic hoax, reporting that, in fact, the person was simply a firefighter.

Those hoaxes all borrow real stories or footage from fact-based news organizations to spread a bogus narrative. That’s very different from manipulated or “deepfake” content, which relies on editing software like Adobe Photoshop and After Effects to fundamentally alter an article, image or video.

Deepfake videos are a looming threat. But out-of-context news coverage poses several challenges for journalists and fact-checkers that are much more immediate.

First, posting a factual news story on Facebook isn’t technically distributing misinformation unless the user adds false context or strips the original context altogether. That creates a gray area in which fact-checkers have to decide to what extent a caption constitutes a serious hoax versus just a partisan opinion. And fact-checkers have already faced several controversies about distinguishing between satire and fake news stories.

Second, logistically speaking, out-of-context videos could be harder for fact-checkers to isolate. The tool that they use to sift through potentially false content on Facebook draws from a combination of user reports and machine learning models that place a lot of value on the credibility of the sources that posts link to. So if a user publishes a New York Times article with a bogus caption, Facebook’s system may just see the NYT link and think the entire post is fine.
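To make that failure mode concrete, here is a toy sketch in Python. This is an assumption for illustration only, not Facebook’s actual model: the credibility table, the `score_post` function and the domains are all hypothetical. The point is that a filter scoring posts chiefly by the credibility of the linked domain never sees the caption at all.

```python
from urllib.parse import urlparse

# Hypothetical credibility scores, invented for this illustration.
DOMAIN_CREDIBILITY = {
    "nytimes.com": 0.95,
    "example-hoax-site.net": 0.10,
}

def score_post(link: str, caption: str) -> float:
    """Naive trust score based only on the linked domain.

    Note that the caption argument is accepted but never used —
    mirroring a system that judges the link, not the framing.
    """
    domain = urlparse(link).netloc.removeprefix("www.")
    return DOMAIN_CREDIBILITY.get(domain, 0.5)

# A real NYT link paired with a fabricated caption still scores high,
# because the misleading caption never enters the calculation.
score = score_post(
    "https://www.nytimes.com/2016/some-old-story.html",
    "BREAKING: this proves everything they told you was a lie!",
)
print(score)  # 0.95
```

Under this (assumed) design, the only signal that could catch the post is the user-report side of the pipeline, which is exactly why out-of-context posts are harder to isolate automatically.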

Finally, scaling the work of Facebook’s fact-checking partners to posts with false context could prove more difficult than simply applying existing fact checks to links, photos or videos. While those three are relatively easy for machine learning to identify, identifying duplicate bogus captions could be much harder, particularly if users share the same false claim with different wording.
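The rewording problem can be sketched with a few lines of Python. This is a minimal illustration, not how Facebook actually matches captions: a simple word-overlap (Jaccard) similarity catches a paraphrased caption that exact string comparison misses, at the cost of needing a tuned threshold.

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two captions, in [0, 1]."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

original = "police found gas tanks near notre dame before the fire"
reworded = "gas tanks were found by police near notre dame before the fire"

# Exact comparison misses the duplicate claim...
print(original == reworded)  # False

# ...while a fuzzy similarity threshold can still flag it.
print(jaccard(original, reworded) > 0.6)  # True
```

Even this toy version hints at the real difficulty: the threshold that catches paraphrases also risks flagging unrelated captions that merely share common words, so scaling fact checks to freeform text is a much noisier matching problem than scaling them to a fixed link or video.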

In short: Before sharing something on Facebook, always make sure that the caption actually aligns with what the news content shows.