December 15, 2017

Exactly one year ago, one of the largest technology companies in the world turned to fact-checkers for help with its fake news problem.

Since then, Facebook’s partnership with truth sleuths like PolitiFact and Snopes has been the subject of many a headline. The Atlantic used a study on the effects of tagging fake news to lament the project’s impending doom, and several progressive websites criticized the recent decision to add the conservative publication The Weekly Standard’s Fact Check to the partnership. (The International Fact-Checking Network oversaw the verification of TWS Fact Check, a prerequisite for joining Facebook’s fact-checking partnership.)

But what do fact-checkers themselves think about the partnership, which allows them to fact-check and label fake stories directly on Facebook?

In interviews with Poynter, three of the tech company’s original partner organizations said the initiative has had some clear benefits, but there’s still work to do.

Aaron Sharockman, executive editor of PolitiFact (a project of the Poynter-owned Tampa Bay Times), said that while many journalism thought leaders and experts might have negative things to say about the project, from his perspective it has been mostly positive.

“I look at it as a fact-checker, and as a fact-checker my goal is really simple: to help people find the truth,” he said. “Is the methodology perfect? Is the timeline perfect? No — but at the end of the day, people on Facebook can ask if a story on Facebook is true and fact-checkers can provide that service back.”

The criticism Sharockman mentioned stems from several articles published about Facebook’s partnership throughout the year.

In October, BuzzFeed’s Craig Silverman reported that, based on a leaked email from Facebook to its fact-checking partners, the tech company’s fake news label reduces the spread of fake news stories by 80 percent once at least two fact-checkers have debunked them. But it typically takes more than three days for a label to appear, and by that time, most of the impressions on a Facebook post will probably have already been made.

Other critiques of Facebook’s approach to the fact-checking partnership stem from the company’s opacity when it comes to data sharing.

Dartmouth College’s Brendan Nyhan penned an October column in The New York Times’ Upshot in which he drew on existing research to outline why it’s so important for Facebook to share data on how its fact-checking initiative is working (the IFCN has consistently made similar appeals). Several fact-checkers even told The Guardian anonymously last month that they thought the project needed a significant reboot.

But is it too early to pass judgment on a project that’s only been around for a year?

“Those are great debates to have, but to me that’s almost a phase two and this is very much a beta,” Sharockman said. “I think to really declare a judgment that this is successful or not successful is premature.”

Facebook declined to comment for this story pending the publication of its own review of the fact-checking partnership later this month.

Despite the criticism, it’s clear that Facebook’s tool for fact-checkers has been at least somewhat successful in practice. Sharockman said PolitiFact has used it to check about 2,000 URLs since the partnership began — which is a lot, considering the outlet has published about 15,000 fact checks in its entire 10-year history.

“To cover that much ground in one year with this Facebook tool is a sign of success,” he said.

Eugene Kiely, director of FactCheck.org, agreed. He said the beauty of using Facebook’s tool is that his organization can fact-check hoaxes faster and more frequently. FactCheck.org has submitted nearly 600 links to stories since the initiative began, and Facebook often flags questionable content within 24 hours, he said.

The fact that the tech giant helps foot the bill is nice, too.

“Once we got some funding from Facebook, we were able to go out and hire somebody to do this on a full-time basis and devote more of our resources to it,” he said. “Since June is really when we’ve started to see an increase in the kinds of work we were doing in debunking these viral deceptions.”

In exchange for weeding through user flags and pumping out fact checks, Facebook’s partner organizations each receive about $100,000 annually.

But not all of Facebook’s fact-checking partners think the initiative has been largely successful. Brooke Binkowski, managing editor of Snopes, said that while surfacing fact checks is a worthwhile effort, it’s ultimately not how Facebook should be trying to address online misinformation.

“It’s not like there’s a weed in the garden and you need to go pull it — you have to uproot everything, and by that I mean the fake news ecosystem that’s been created,” she said. “Technology and algorithms cannot make up for human intervention. They’re catching on — it’s just really slow.”

Last year, Facebook infamously replaced the human editors behind its trending section with an algorithm, and fake news went viral on the feature within days. Binkowski said that overreliance on algorithms and technology is a key contributor to Facebook’s blunders with fact-checking and misinformation, and that the only way for the platform to truly prevent the future spread of viral hoaxes is to disincentivize the creation of fake news.

“It’s going to keep lingering until they hire people back,” she said. “I just don’t think they want to admit that human errors can make their way into algorithms, or that perfect technology can’t exist when imperfect humans are making it.”

For Kiely and Sharockman, the chief concern about Facebook’s fact-checking efforts is less about overarching strategy and more about fine-tuning processes. Sharockman said he would like to see the platform develop a way to notify users when they’ve interacted with a post that was later debunked, and Kiely said he thinks Facebook should invest more time and money in developing ways to stop the spread of misinformation online. As Binkowski noted, paying fact-checkers is probably a lot cheaper than paying a boutique public relations firm to address the problem.

In 2018, all said they’d like to see the tech company share data about how the disputed tag is working, as well as communicate more openly with the public.

“In terms of working with Facebook, what I would like to see is more transparency and more information on the effectiveness of this experiment that we’re doing,” Kiely said. “At one point, Facebook said that they’re very pleased about how it’s going and it’s reducing impressions on these pages, but there’s no data behind that — there’s no information that would satisfy me as a fact-checker that that’s an accurate statement.”

“If we were writing about this, I would have to say this claim is unsupported.”
