April 10, 2019

Facebook is starting the second quarter of the fiscal year by rolling out changes to the way it combats misinformation on its platform.

In a nearly 2,000-word blog post sent to Poynter on Wednesday, Facebook announced a slew of new measures to combat false news stories, images and videos.

Among the changes:

  • Facebook is now reducing the reach of groups that repeatedly spread misinformation.
  • It is exploring the use of crowdsourcing as a way to determine which news outlets users trust most.
  • And the company is adding new indicators to Messenger, groups and News Feed in an effort to inform users about the content they’re seeing.

“We have a number of product updates vs one or two major changes,” said Facebook spokeswoman Mari Melguizo in an email to Poynter. “These efforts focus on keeping people safe and maintaining the integrity of information that flows through our apps.”

Here’s what each of the biggest changes specifically has to do with misinformation, with context from journalists, academics and technologists, as well as additional reading about each topic. Read Facebook’s blog post in full here.

Demoting groups that spread misinformation

Among the biggest changes Facebook announced Wednesday was that it would start reducing the reach of groups that repeatedly spread false news stories, images and videos.

“When people in a group repeatedly share content that has been rated false by independent fact-checkers, we will reduce that group’s overall News Feed distribution,” wrote Guy Rosen, Facebook’s vice president of integrity, and Tessa Lyons, head of news feed integrity, in the blog post.

Facebook has caught flak over the past few months for the spread of anti-vaccine conspiracy theories, many of which started in groups and then spread to the rest of the platform. In response to pressure from both the media and American politicians, the company outlined a plan in early March to curb anti-vaccine content.

In it, Facebook announced that groups and pages that share anti-vaccine misinformation would be removed from its recommendation algorithm — but not removed altogether. The move was a tacit acknowledgment of the power that groups have in spreading bogus content.

Facebook is taking action against anti-vaccine conspiracies. But bogus medical cures are still getting massive reach.

BuzzFeed News reported in March 2018 that groups — a feature often lauded by Facebook leadership and prioritized in News Feed — had become “a global honeypot of spam, fake news, conspiracies, health misinformation, harassment, hacking, trolling, scams and other threats to users.” Why?

“Propagandists and spammers need to amass an audience, and groups serve that up on a platter,” Renee DiResta, a security researcher, told BuzzFeed. “There’s no need to run an ad campaign to find people receptive to your message if you can simply join relevant groups and start posting.”

And, while the company has taken several steps to limit the spread of fakery in News Feed, until Wednesday, it was doing little to combat misinformation specifically in groups.

“There’s no concerted effort to get rid of false news, misinformation, whatever,” a former Facebook employee who worked on groups told Poynter in January. “It’s so much worse because it sits there and it’s hidden … it’s just as bad as a false news misinformation generation machine as it ever was on News Feed.”

Leonard Lam, a spokesman for Facebook groups, told Poynter that the same anti-misinformation policies that govern products like News Feed apply to the entire platform. That means bogus articles, images and videos debunked by Facebook’s fact-checking partners will appear with the relevant fact check displayed below them — even in groups.

Those signals will also be used to determine which groups are repeat misinformers, marking one of the first steps Facebook has taken specifically to combat misinformation in groups.

Hyperpartisan Facebook groups are the next big challenge for fact-checkers

Crowdsourcing trust in news

Wednesday’s announcement comes as Facebook expands its partnership with fact-checking outlets around the world — arguably the company’s most visible effort to combat misinformation on the platform.

Facebook launched the program in December 2016 with American fact-checkers like (Poynter-owned) PolitiFact, Snopes and Factcheck.org. The goal: To identify, debunk and reduce the reach of false news stories on the platform. Once a hoax is flagged as false, its future reach in the News Feed is decreased and a fact check is appended to it. (Disclosure: Being a signatory of Poynter’s International Fact-Checking Network’s code of principles is a necessary condition for joining the project.)

Since then, it has expanded to let fact-checkers debunk false images and videos. The partnership has grown to 47 projects writing in 23 languages around the world. And while projects like Snopes and CBS have pulled out for different reasons, outlets like the Associated Press have recently expanded their commitment to the program.

One new anti-misinformation feature could help bolster that work.

How Facebook deals with misinformation, in one graphic

“There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time,” Rosen and Lyons wrote in the blog post. “One promising idea to bolster their work, which we’ve been exploring since 2017, involves groups of Facebook users pointing to journalistic sources to corroborate or contradict claims made in potentially false content.”

CEO Mark Zuckerberg aired that idea in a Facebook video in February — a little more than a year after he first floated it. The move wasn’t popular among journalists, who said that everyday Facebook users aren’t able to set aside their biases to judge which news outlets are credible.

But a study published in February 2018 suggests otherwise.

“What we found is that, while there are real disagreements among Democrats and Republicans concerning mainstream news outlets, basically everybody — Democrats, Republicans and professional fact-checkers — agree that the fake and hyperpartisan sites are not to be trusted,” said David Rand, an associate professor at the Massachusetts Institute of Technology, in a press release.

According to Wednesday’s blog post, Facebook will continue exploring the idea by consulting academics, fact-checking experts, journalists and civil society organizations.

“Any system we implement must have safeguards from gaming or manipulation, avoid introducing personal biases and protect minority voices,” Rosen and Lyons wrote.

Crowdsourcing trustworthy sources on Facebook isn’t as far-fetched as you think

More context on Facebook

In the past, tech companies have turned to websites like Wikipedia to provide more context about the sources that publish on their platforms. On Wednesday, Facebook announced several similar indicators of its own.

“We’re investing in features and products that give people more information to help them decide what to read, trust and share,” Rosen and Lyons wrote in the blog post.

Facebook has updated its context button, launched in April last year, to include information from The Trust Project about publishers’ ethics policies, ownership and funding structure. The company is starting to show more information in its page quality tab, which launched in January to show page owners which of their posts were debunked by Facebook’s fact-checking partners. And, in Messenger, the company is adding a verified badge to cut down on impersonations and scams.

Facebook is also starting to label forwarded messages in Messenger — a tactic seemingly borrowed from sister company WhatsApp, which rolled out a similar feature in July in an attempt to cut down on the spread of misinformation.

WhatsApp launches a feature that labels forwarded messages

While indicators like Facebook’s context button are an easy way to give users more information about publishers on social media, and thereby help keep users from sharing misinformation, they also have the potential to be gamed.

Over the summer, someone vandalized the Wikipedia page for the California Republican Party to say that it supported Nazism. While most cases of Wikipedia vandalism are caught fairly quickly, this one made its way to Google, which surfaced the false edit high up in search results.

That’s rare. But given the volume of edits that are made to Wikipedia each day, it can be hard for tech platforms to catch all instances of vandalism.

“Of course it is a pretty weak way to combat fake news because Wikipedia is not a reliable source of information — as even Wikipedia acknowledges,” Magnus Pharao Hansen, a postdoctoral researcher at the University of Copenhagen, told Poynter in June. “Wikipedia is very vulnerable to hoaxes and contains all kinds of misinformation, so it is not a very serious way to combat the problem of fabricated news.”

Wikipedia vandalism could thwart hoax-busting on Google, YouTube and Facebook

At the same time, features like Facebook’s page quality tab have had a more demonstrable effect on the spread of misinformation.

After Factcheck.org debunked a false meme about U.S. Rep. Alexandria Ocasio-Cortez (D-N.Y.) in March, the page that published the photo deleted it. And it wasn’t the first time; other repeat misinforming pages on Facebook have taken down content debunked by the company’s fact-checking partners, and some have rebranded their operations altogether.

Correction: A previous version of this article misspelled Leonard Lam’s last name.
