December 21, 2018

Tessa Lyons, a product manager at Facebook, speaks during the Fifth Global Fact-Checking Summit in Rome on Thursday, June 21, 2018. (Photo/Giulio Riotta)

2018 was a big year for Facebook’s partnership with fact-checking organizations.

The program — which enables fact-checkers like Snopes to find and debunk hoaxes in an online dashboard, reducing their spread in News Feed — expanded to 35 partners in 24 countries. Fact-checkers started debunking false images, videos and headlines in addition to stories. CEO Mark Zuckerberg and COO Sheryl Sandberg both cited the project as a key part of the company’s anti-misinformation strategy in their testimony to Congress.

But the project also faced tremendous challenges — some of which the company will have to answer for in 2019. (Disclosure: Being a signatory of the International Fact-Checking Network’s code of principles is a necessary condition for joining the project.)

After partnering with Facebook, fact-checkers in Brazil and the Philippines were harassed, doxxed and threatened online. Notorious misinformers still found a big audience on the platform. And the company’s fact-checking partners are still unsure if the project is actually reducing the spread of hoaxes, despite some preliminary evidence that it is.

“I think in the last two years, we’ve made a lot of progress in our fight against misinformation — and we obviously still have a lot to do,” said Tessa Lyons, a product manager in charge of Facebook’s anti-misinformation efforts, in an interview with Poynter. “But a lot of that progress, and a lot of what informs what we know we still have to do, has been the partnerships that we have with fact-checkers.”

Poynter caught up with Lyons to talk about the past two years of Facebook’s fact-checking partnership, some of the challenges the program has faced and how the company is planning to address them in 2019. This Q-and-A has been edited for clarity and brevity.

So it’s been two years since Facebook launched its partnership with fact-checking organizations. What has been the biggest value added, from Facebook’s point of view? And what has been the biggest drawback?

In the last year, we expanded from four to 24 countries. In expanding to that many countries, we’ve learned that this problem is global but manifests differently in different places. We’ve also, in the past year, expanded from links to photos and videos, and so we’ve learned a lot about the different challenges of predicting, surfacing and fact-checking misinformation in photos and videos.

And we’ve also seen how we can be more effective in deploying technology. When fact-checkers debunk an individual claim, it takes them time, right? Because that’s real work that they have to do to report and to understand the context and what’s true and what’s false. What we’re able to do now, because we’re learning from fact-checkers, is use technology to identify duplicates, which improves the whole system and gives us better leverage from this combination of technology and human review.
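(Lyons doesn’t detail how that duplicate matching works. As a rough illustration only, and not a description of Facebook’s system, a minimal sketch of extending one fact check to near-identical copies of a claim could use simple text similarity; the normalization, word n-grams and threshold below are assumed values.)

```python
# Hypothetical illustration only: match near-duplicates of an already-debunked claim
# with simple word n-gram similarity. Facebook's real matching is not public;
# the normalization, shingle size and threshold here are assumptions.
import re


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()


def shingles(text: str, n: int = 3) -> set:
    """Overlapping word n-grams of the normalized text."""
    words = normalize(text).split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}


def jaccard(a: set, b: set) -> float:
    """Set similarity: 0 means disjoint, 1 means identical."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def find_duplicates(debunked_claim: str, candidate_posts: list, threshold: float = 0.3) -> list:
    """Return posts similar enough to a debunked claim to inherit its fact check."""
    reference = shingles(debunked_claim)
    return [p for p in candidate_posts if jaccard(reference, shingles(p)) >= threshold]


if __name__ == "__main__":
    claim = "Onions left in the room absorb flu viruses overnight"
    posts = [
        "Onions left in a room absorb flu viruses overnight, doctors say",
        "Local team wins the championship after a dramatic final",
    ]
    print(find_duplicates(claim, posts))  # only the first post is flagged as a duplicate
```

A real system would presumably go well beyond exact-text matching, to images, videos and paraphrases, which is part of the scale challenge Lyons describes next.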

RELATED ARTICLE: How Facebook deals with misinformation, in one graphic

And we’ve also learned from all of those things how much we have to do going forward, which gets to the second part of your question. What I’d say there is that misinformation isn’t confined to any one country, any one content type or any one time. And the bad actors who spread it are highly motivated, and we have a platform of 2 billion people. And so when we think about the work we have ahead of us, a lot of it is really about scale: how we can continue to internationalize, how we can continue to improve our efficacy across other content types and how, given the sheer number of claims and the amount that’s being posted, we can improve our ability to predict and prioritize and increase the throughput of potentially viral false content that we can get to.

And I think those are all areas where we’ve been working, but we realize from the work we’ve done, how much more we have to do and why we don’t see this as an investment that is time-bound in any way — this is an ongoing investment.

That’s an interesting jumping off point, and the scalability question is one I think about a lot. As you mentioned, in 2018, you started letting fact-checkers debunk images, videos and even false headlines. How do you expect the project to change in the new year to help increase its reach and efficacy?

We have nothing specific to announce right now, but I can talk to you about some of the themes here and some of the things that we’ve seen.

So one thing that we’re excited about, which we developed from feedback from our fact-checking partners and which has gotten good feedback so far (we’ve only piloted it in the U.S.), is a faster way to notify them when we have a piece of content that we think has a high probability of being false and is predicted to get broad viral distribution. And by signaling that and prioritizing it for them, we can make sure that we’re helping them focus their time on the things that can have the most impact. And so one of the ways you can help with the scale problem is through prioritization, and that’s an area where we’ve made some progress and gotten good feedback, and that we’ll continue to invest in.

Another area is going to be continuing to figure out the combination of technology and human review. So we already use technology to prioritize things for fact-checkers, we already use it to find duplicates of content, we already use it — as we shared last year — to identify questionable foreign actors who are spreading misinformation to other audiences. And continuing to think about how we can leverage technology to help with this problem is important if we’re going to solve the scale challenge.

The third thing is we constantly have to be researching and understanding and testing new approaches so that we are getting ahead of this emerging challenge and making sure that we’re doing all we can to live up to that responsibility.

Back to the first point, because I thought that was particularly interesting: making the notification of potentially false content faster for fact-checkers. I have two questions about it: What kind of signals do you use to determine if something is potentially false in this new method? And have you measured how much faster that is than the old method?

This was really a test we were doing in the lead-up to the midterms. I think what we learned from this is having a way to signal priorities for fact-checkers is really, really helpful in reducing the amount of time it takes for them.

If you think about latency and the delay, the first opportunity is in the time it takes us to predict something. The signals we use there are feedback from our community and the history of the domain or the page. The second is, once we’ve predicted it, the time until the fact-checker starts looking into it, because they get so much potentially false content. And the third stage is the actual latency for them to do their research and reporting. And we can work at each of those three stages to try to reduce latency.

This test that I mentioned was really focused on that middle stage. We’re taking the signals that we have, things like feedback from the community about content they think is likely to be false, and things that are getting broad distribution (or are predicted to get broad distribution, where they’re going to go viral in the near term and every minute counts), and we were using those signals to say, “OK, look at these ones first.” And they were very responsive to that and we’ve gotten good feedback from them about that process.

By being able to reduce latency in that middle stage, we were able to reduce the number of people who could have been exposed or seen those false stories overall, but we need to do this at each of those three stages.
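(To make that concrete, the “look at these ones first” step Lyons describes amounts to ranking the dashboard queue by a score built from the signals she names: community feedback, the history of the domain or page, and predicted distribution. The sketch below is a hypothetical illustration under those assumptions, not Facebook’s actual model; the field names, weights and scaling are invented for the example.)

```python
# Hypothetical illustration only: rank a fact-checking queue using the three signals
# Lyons mentions. The dataclass fields, weights and normalization are assumptions.
from dataclasses import dataclass


@dataclass
class FlaggedPost:
    post_id: str
    user_false_reports: int          # community feedback signal
    domain_false_rating_rate: float  # share of past posts from this domain rated false (0-1)
    predicted_views_next_24h: int    # crude proxy for "likely to go viral soon"


def priority_score(post: FlaggedPost) -> float:
    """Combine the three signals into one ranking score (higher = look at it first)."""
    report_signal = min(post.user_false_reports / 100, 1.0)           # saturate heavy reporting
    history_signal = post.domain_false_rating_rate                    # repeat-offender domains
    reach_signal = min(post.predicted_views_next_24h / 1_000_000, 1.0)
    # Weight predicted reach most heavily: every minute counts for near-viral content.
    return 0.25 * report_signal + 0.25 * history_signal + 0.5 * reach_signal


def build_queue(posts: list) -> list:
    """Order the fact-checkers' dashboard queue by descending priority."""
    return sorted(posts, key=priority_score, reverse=True)


if __name__ == "__main__":
    queue = build_queue([
        FlaggedPost("a", user_false_reports=12, domain_false_rating_rate=0.1, predicted_views_next_24h=5_000),
        FlaggedPost("b", user_false_reports=80, domain_false_rating_rate=0.7, predicted_views_next_24h=900_000),
    ])
    print([p.post_id for p in queue])  # ['b', 'a']
```

In a production system the weights would be learned rather than hand-set; here they only show how predicted reach can dominate when a post is about to go viral.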

Fast-forwarding to the new year, we recently conducted a survey of 19 partners, and they were particularly concerned about both transparency about their reach and improving the dashboard in terms of what it surfaces. Did you read that survey and do you plan to address those things in the new year?

I definitely saw the survey and the article. It really tracks with what we’ve heard from them.

I think one thing that you recognize that a lot of people don’t recognize is that we talk to our fact-checking partners very regularly. We want to get their constant feedback; we spend time meeting with them as a group, we spend time having one-on-one conversations with them, we spend time traveling to meet with them in their countries.

RELATED ARTICLE: We asked 19 fact-checkers what they think of their partnership with Facebook. Here’s what they told us.

On the topic of transparency, one of the things that we have done based on their feedback is started to share with each partner a report of their impact and activity. We haven’t shared that with all of the partners yet; we’ve started to roll it out, and we’ll continue to expand it as we head into 2019. And our goal in doing that was to say, “OK, we’ve heard this request for transparency and we really do want to increase transparency. So let’s start by sharing with partners more detail on the data and getting feedback from them on what data points are helpful to have, what data points they think are missing, so we can continue to iterate on the sort of shared dashboard that we’ll all be looking at.”

The second thing we’ve seen on transparency in the last year is the work that’s been done by outside researchers, like Stanford (University), (the University of) Michigan and Le Monde, to measure the overall prevalence of misinformation. And across those three studies, what they found is that the work we’re doing, of which fact-checking is a component, is having an impact on meaningfully reducing the overall volume of false news. We know that there is a need and desire for more data transparency with academics to be able to do this research in a more rigorous way, and that’s why we’ve been working on the Social Science One partnership. As you know, that has its own independent structure for lots of important reasons, but I believe they’re planning on sharing in January which (requests for proposals) and academic grants were selected. And I’m really excited to see what we learn from that additional transparency and research.

And I think, again, just like we’re going to continue to talk about misinformation at these milestones and throughout the year, we’re going to continue to talk about transparency. Because as we continue to share more, we’re going to have more questions, fact-checkers are going to have more questions, and we’re all going to need to continue to make sure that the conversation we’re having is responsive to where we are in terms of solving and addressing and fighting this problem — and responsive to the fact that we’re going to have to continue to do so in new ways, right? We started with links and now we’re working with photos and videos, and I think with transparency, a lot of the work today has been on links. We’re going to need to start understanding what that’s going to look like for photos and videos.

I’m glad you brought up the personalized reports you’re sending to fact-checkers and Social Science One. But I have to ask: Could you share any more data that’s more of an aggregate look based on those reports?

Not right now. Look, as you know, one of the challenges of talking about data and the impact of the work of fact-checking specifically is that fact-checking is a component, an important component, but a component of the overall work we’re doing to fight misinformation. And so trying to tease out the impact of any one lever is really challenging.

Now at a partner level, at an individual fact-checking partner level, we can get into more detail about their activity and the impact of their activity, and that’s what we’re doing. Trying to understand how fact-checking contributes, versus the other pieces of work we’re doing, to our overall efforts to fight misinformation, in order to come up with the kind of stat you’re talking about, is harder. We’ve shared in the past the stat about the percentage reduction in views and the time that it takes, and the 80 percent stat holds true today.

But we’re focused, really, on two things. 1.) Making sure that our partners understand their activity and the impact of their work. And the reason we’re focused on that, to be really clear, is because they want transparency, but also because we believe that by sharing more data with them, we can all get better and faster and more effective. And so we have a shared interest in making sure that they have that data so we can all be iterating together.

And then 2.) Making sure that we measure the overall prevalence of misinformation and its reduction over time. Teasing apart the contribution of any one thing to that decline is harder, but by measuring the overall decline, we can see if our collective efforts are being effective. And that’s why we were so encouraged by the studies from Michigan, Le Monde and Stanford, and why we’re also really excited about the work that will come out of Social Science One.

Shifting gears a bit, there are some ramifications of this project beyond just how the project itself is working. A few fact-checking partners were attacked this year for working with Facebook, notably in Brazil and the Philippines. What plans do you have for the new year to prevent those kinds of incidents?

This is something that I take incredibly seriously. We are conducting safety training with all new partners who join the program. And through the Facebook Journalism Project, we provide additional online safety resources for journalists, because we know that this is a challenge for fact-checkers, but it’s also a challenge for journalists around the world. And we take it really, really seriously and want to make sure that they have resources, including information about how they can best use our tools to keep themselves safe, whether it’s two-factor authentication, controlling location-sharing, or reporting abusive content and impersonation; it runs across the board. And we’re going to continue to invest in that work because it is so critical.

RELATED ARTICLE: These fact-checkers were attacked online after partnering with Facebook

And we also make sure our Community Standards protect journalists and other vulnerable people from threats of credible violence. We remove content and work with local authorities whenever we become aware of any content that poses a genuine risk of physical harm or direct threats to safety.

Closer to home, The Weekly Standard announced last week that it was folding. Obviously, they were one of the fact-checking partners. And I’m curious, from Facebook’s perspective, if you’re concerned about the perceived ideological bent of the project?

They did great work as a partner and we will definitely be sad to lose them as a partner because the quality of their fact checks was fantastic. But I don’t have any other — we continue to care a lot about scaling the work that we’re doing to fight misinformation, as I said, and that’s going to remain a focus internationally and here in the U.S.

Finally, as you continue to leverage technology to scale fact-checking, do you foresee artificial intelligence replacing fact-checkers to fight misinformation in the future? Working with fact-checkers has obviously opened Facebook up to some liability.

I think that, for the foreseeable future, the fight against misinformation is going to require a lot of approaches. It’s going to require technology being used, but it’s also going to require human review. I wish that there was some single thing, whether technology or human review, that could solve this problem globally tomorrow. But that’s just not the reality — this is a complex, adversarial space, and so we’re going to continue to invest in solutions across the board so that, collectively, they can help us reduce the volume of misinformation.
