How Facebook deals with misinformation, in one graphic

Donald Trump is confused about how Facebook is fighting misinformation.

In a tweet over the weekend, the President of the United States said the technology company has set up a system to “purge themselves of fake news.” In keeping with the continued misuse of the term “fake news,” that statement was immediately followed by an attack on CNN.

While it’s unclear what exactly the president was talking about (Facebook’s recent removal of inauthentic pages and accounts? Its partnership with fact-checkers? An NBC Nightly News segment on the company’s “war room” that aired shortly before his tweet?), his misrepresentation of how Facebook tackles misinformation is not anomalous. (Disclosure: Being a signatory of the International Fact-Checking Network’s code of principles is a necessary condition for joining Facebook’s fact-checking project.)

Over the past several months, it’s been hard for journalists to distinguish among the tech company’s myriad policies on misinformation, inauthentic behavior and advertising transparency. Is the company’s continued purge of fake accounts and pages an anti-misinformation policy, or an enforcement of its Community Standards? Is Facebook censoring conservative publishers’ content? And can fact-checkers do whatever they want on the platform?

Facebook itself shares at least some of the blame for this confusion. Over the past year and a half, the company has released precious little data on how its fact-checking efforts are working, maintained often contradictory policies on misinformation and regularly shied away from on-the-record media interviews that could shed more light on the problem.

In light of the ongoing confusion, Poynter created a simple flowchart showing what Facebook does and doesn’t do to limit the spread of misinformation on the platform. Read more about the company’s third-party fact-checking program here and here.
