Disinformation researchers argue that Facebook should do more than indefinitely block President Donald Trump: the company, they say, should also do away with its exemption of politicians from its Third-Party Fact-Checking program.
The program partners with independent fact-checking organizations from around the world that are signatories to the International Fact-Checking Network's Code of Principles. These fact-checkers submit articles and ratings to Facebook, which the platform appends to a flagged post. Facebook independently decides whether to limit the distribution of that post.
Posts from politicians are not eligible for fact-checking. A help page explaining the company’s policy says, “by limiting political speech, we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words.”
The one exception to this rule is if politicians share content that has previously been debunked by fact-checkers. In that case, the help page explains, Facebook "will demote that content, display a warning and reject its inclusion in ads. This is different from a politician's own claim or statement."
“I think the exemption is effectively pulling the biggest sharpest teeth from the whole point of fact-checking as a means of controlling disinformation and the damage it can cause,” said Alexi Drew, a postdoctoral research associate at King’s College London.
She argued that disinformation does the most damage when it's spread by those with perceived legitimacy, adding that one of the greatest sources of legitimacy is a political office.
"If you don't fact-check that, you're ignoring one of the greatest potential risk factors for spreading misinformation that exists on your platform," Drew said.
Masato Kajimoto, an associate professor of journalism at the University of Hong Kong and founder of the fact-checking organization Annie Lab, echoed Drew’s statements.
“In many Asian countries, politicians are one of the major sources and disseminators of misinformation and disinformation who have a wide reach and influence,” Kajimoto wrote in an email to the IFCN. “They should be held more accountable for what they say, in my view.”
Lucas Graves, associate professor of journalism at the University of Wisconsin-Madison, empathized with Facebook’s effort to balance the public’s right to hear from its leaders with the platform’s goals of stopping disinformation.
“There are legitimate questions raised around the treatment of public figures when by definition their statements have news value,” Graves said. However, he argued the current policy gives political figures across the globe license to spread falsehoods with impunity.
“It’s really well established anecdotally, and through systematic studies, that public figures, high follower accounts are crucial nodes in disseminating misinformation,” Graves said. He added that this is an argument that political figures’ accounts deserve more scrutiny, and reasoned that fact-checking strikes a balance between the public’s right to hear from their leaders and efforts to combat harmful disinformation.
“The good thing about fact-checking as a solution is that it doesn’t actually suppress speech,” Graves said. “It annotates speech, it qualifies speech.”
The IFCN reached out to representatives from Facebook for comment, but did not receive a reply before publication. We will update this story if we receive comments from Facebook.
Drew acknowledged that any effort to moderate content on social media will be seen by some as censorship. However, she argued this demonstrates the importance of having independent organizations like fact-checkers involved in checking the veracity of a politician’s posts.
“If those independent organizations check the content of any politician’s tweet or message is false or wrong, then (platforms) should label it as false, and then link to credible sources,” Drew said.
All three researchers acknowledged the complex challenges that companies like Facebook face when it comes to content moderation. However, Drew and Kajimoto said this complexity calls for independent entities to adjudicate these issues outside the influence of the tech platforms.
“They’re technologists, and technologists are not ensconced with the second order issues they need to understand to make these discussions effective,” Drew said.
“Even though those platforms are private entities, their business models rely on providing public spaces where people gather,” Kajimoto wrote. “If there is a dispute, it has to be an impartial body that should decide if the speech is harmful or not in a fair manner with a proper appeals process.”