(Mis)informed podcast: Is fact-checking the best way to fight misinformation?

Fact-checking is often touted as the antidote to the epidemic of misinformation on the internet. But with so much fakery out there, it’s unrealistic to expect a relatively small band of fact-checkers to debunk every bogus claim online.

In the third episode of Poynter’s limited-run podcast about fact-checking and misinformation, we tried to answer the question: Is fact-checking really the best way to fight misinformation?

“With social media, it’s easy to deliver content to a lot of people,” said Giovanni Luca Ciampaglia, an assistant professor of computer science at the University of South Florida, during the episode. “So by the time something is going viral, that’s when the fact-checkers actually get to see it, so it’s already too late.”

It’s not all bad news: Also on the show, Clara Jiménez Cruz, founder of Spanish fact-checking site Maldito Bulo, talks about how she’s copying the format of hoaxes to get fact checks in front of more people — and seeing results.

Listen to the show below, or wherever you get your podcasts. And let us know what you think by emailing dfunke@poynter.org, tweeting @factchecknet or filling out this form.

Below is a transcript of the full episode, edited for clarity and brevity. Read more transcripts for other episodes of (Mis)informed here.

The Bump – 00:42

Daniel Funke: If you were to take a look inside my inbox on any given day, you’d see a neverending list of press releases. Here are a few subject lines: “Helping fact-checking with authenticated images,” “Pressland Uses Blockchain To Combat Fake News,” “Tech startup cracks down on misinformation ahead of midterm elections.”

Some of these projects merit more attention than others. In the year and a half that I’ve covered this beat, I’ve started to notice that most of the work in this space is trying to increase the reach of fact checks.

Fact-checking sites like Factcheck.org used to focus simply on figuring out whether what politicians were saying was true or false. But for the past couple of years, they've also been going after junky viral hoaxes on social media. It makes sense — fact-checkers are well-equipped to debunk this kind of misinformation.

But there’s a problem: It’s way easier to post a fake on Facebook and get a ton of clicks than it is to publish a fact check debunking that same hoax.

DF: Today on the show, we’ll hear from two people who are trying to improve the reach of fact-checking on social media.

First, we’ll talk to Giovanni Luca Ciampaglia, an assistant professor of computer science at the University of South Florida. He’s published several studies on how the reach of fact-checking compares to online hoaxes — and how tech platforms could play a key role.

Then, we’ll speak with Clara Jiménez Cruz, founder of Spanish fact-checking site Maldito Bulo. She’s found that, by mimicking the format of misinformation on social media, she can actually get more people to read fact checks.

The Set – 2:50

DF: During this year’s elections in Brazil and the U.S., hoaxes still made it through to the mainstream in spite of robust fact-checking from sites like PolitiFact and Aos Fatos. And they had real-life consequences.

Giovanni Luca Ciampaglia at the University of South Florida has actually quantified how a lot of this stuff spreads. And he says that fact-checkers need an assist from tech platforms to more effectively combat misinformation in the future.

DF: Hey Giovanni, how’s it going? It’s been a long time since we talked to each other.

Giovanni Luca Ciampaglia: Hey Danny, how are you? I’m good.

DF: So today’s episode is all about, you know, whether or not fact-checking can scale to misinformation. And I know you’ve spent a lot of time thinking about this question, you know, in the abstract. I think it’s easy for us to think about why it’s harder to publish a fact check than a fake news story, but what does that look like in practice? What does your research say about this question?

GLC: In practice, what we're seeing is that there's still a big lag, both in terms of the reach of fact-checking — how many people actually get to see it — and the temporal gap — how fast fact-checkers are able to put out a verification of something that is spreading. So this means that the misinformation is always reaching too many people and the fact-checking is just lagging behind.

DF: And you've actually done studies in which you've analyzed the effect of specific fact checks on misinformation and quantified that over time. Why don't you walk us through some of those studies and what they reveal about this problem?

GLC: Yeah. So the first study I was referring to comes from our platform called Hoaxy. Hoaxy is a system that tracks all the traffic that is generated by a number of sources that we call low-credibility sources. Fundamentally, these are outlets that various fact-checkers, like you, and media organizations have listed as untrustworthy — because they have previously published fake news or hoaxes, or because they are just very misleading partisan outlets, from both sides of the political spectrum.

And Hoaxy also tracks all the traffic to fact-checking websites. So in a certain sense, it lets us see how the tweets that share these two classes of content spread over Twitter. And that's where we've been seeing these lags, both in terms of exposure and time. We estimated that there was a lag of 13 hours, which is an eternity in terms of social media time.

DF: For our listeners that might not know a lot about how much traffic misinformation and fact-checkers usually get, walk us through, like you mentioned, that some of your studies have compared those two metrics. What would it look like for a typical fact check compared to the misinformation in terms of the audience that it gets?

GLC: Yeah. So a good example is the claim about the 3 million votes cast by illegal aliens during the 2016 presidential election. Probably many people have heard this claim made by Donald Trump. But maybe not many know that Donald Trump took it from a website called InfoWars, which ran it and started spreading the claim, actually fueling it sometime before the election.

RELATED ARTICLE: Here’s what the spread of misinformation on Twitter looks like

So Donald Trump somehow popularized it, let's say. When you actually look at that inaccurate story from InfoWars, the reach that we've been able to collect so far on Hoaxy is on the order of tens of thousands of people — I think around 60,000 tweets.

The fact-checking websites debunked it a few days later, and I think the overall reach is 10 times less — about 6,000 tweets overall. That gives you an idea: For every tweet sharing the URL to Snopes, there are 10 tweets sharing the URL to InfoWars.

DF: Why do you think that is — why is misinformation reaching more people?

GLC: Well, it has a lot to do with the way social media work today and, in a certain sense, the shrinking life cycle and attention span. With social media, it's easy to deliver content to a lot of people, and this happens very quickly. So by the time something is going viral, that's when the fact-checkers actually get to see it, and it's already too late. And at the same time, there's also the problem of selective exposure.

Oftentimes, for the people who are consuming information from low-credibility sources, the platforms do not include fact-checking information in their media diet, because the algorithms are trained to feed them what they will consume. And so there's a complete split in terms of content consumption.

DF: Yeah. I mean, I think you're hitting on a really important problem that fact-checkers have increasingly told me about over the past year or so, right? It takes way longer to research and write a fact check than it takes for the misinformation itself to spread on a platform. Which is another aspect of this that I'm glad you brought up: What role do platforms play in making it easier for fact-checkers to address misinformation, do you think?

GLC: Yeah, that's a very good question, because I think that, at least until the previous presidential election, platforms were not really aware of this — especially of the role that they have now. They've since started trying to work and engage more with fact-checkers, simply because this kind of content has consequences, and they see it not just in terms of their own business but also in terms of the overall well-being of society.

So we're seeing that some platforms are engaging directly with fact-checkers, as you know Facebook does. Others prefer to engage in the sense that they welcome the content produced by fact-checkers alongside other media organizations — hopefully as a way to somehow drown out the bad, inaccurate information with correct information.

DF: And what are some good ways you think we should do that?

GLC: Well, that's the interesting part. Right now, the prevalent approach in machine learning and AI is to rely on so-called "supervised learning," meaning that you train a machine-learning algorithm on previous examples of what you want to classify, and then the system tries to generalize to new, unlabeled samples. But experience with this approach is showing that oftentimes the training data encodes biases — the biases of the people who label the data, the biases of the people who design the algorithms, and so on.

And I think instead that, when it comes to fact-checking, we probably need systems that really include the input of humans — not just in development and deployment, but also in the actual everyday processing. So I think systems should talk more to humans, and draw more on resources that have been curated by fact-checkers. There's an exciting trend around harnessing all the work that has already been done by fact-checkers, so that we can try to match new claims with previous claims. Oftentimes, hoaxes are just rehashes of previous claims. So there's this idea that we can harness information that has already been fact-checked before.
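The claim-matching idea Ciampaglia describes — checking whether a new claim is a rehash of something already fact-checked — can be sketched with a simple token-overlap matcher. This is only an illustrative toy (the function names, threshold, and example claims are hypothetical; real systems use far more sophisticated semantic matching):

```python
import re

def tokenize(text):
    """Lowercase the text and extract word tokens as a set."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    """Jaccard similarity between two token sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def match_claim(new_claim, fact_checked, threshold=0.4):
    """Return the best-matching (claim, verdict) pair from the archive
    of previously fact-checked claims, or None if nothing is similar
    enough to count as a rehash."""
    tokens = tokenize(new_claim)
    best, best_score = None, 0.0
    for claim, verdict in fact_checked:
        score = jaccard(tokens, tokenize(claim))
        if score > best_score:
            best, best_score = (claim, verdict), score
    return best if best_score >= threshold else None

# A tiny hypothetical archive of already-debunked claims.
archive = [
    ("million votes were cast by illegal aliens in the election", "false"),
    ("drinking bleach cures the flu", "false"),
]

hit = match_claim("illegal aliens cast a million votes in the election", archive)
```

Here `hit` would surface the first archived debunk, letting a human fact-checker confirm the match instead of writing a new check from scratch — the human-in-the-loop design Ciampaglia advocates.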

The Spike – 11:38

DF: Fake news travels so quickly, in part, because it’s so expertly packaged. Bold fonts, striking images, and salacious headlines appealing to readers’ basic emotions spread across social media much faster than fact checks.

Clara Jiménez Cruz is trying to change that. She’s the creator of the Spanish debunking project Maldito Bulo, and she’s trying to beat hoaxers at their own game.

DF: Hey Clara, how's it going? It's been a long time.

Clara Jiménez Cruz: Hi, yeah. Since Rome, right?

DF: Yeah. Yeah. So let me just start by saying that I think some of the fact-checking work you do on social media is really innovative. But do you ever feel like you’re playing a game of whack-a-mole? Because it’s obviously way easier to publish a fake news story on Facebook and get a ton of clicks than it is to go through the rigmarole of producing a fact check. So do you ever feel like it’s just kind of not worth it?

CJC: I mean, it does get kind of hard sometimes, and you think it's a huge effort. But it's also very valuable when you see that your community is responding to you and asking for your help to fact-check something. We have 200 people writing to us on our WhatsApp service each day asking for answers. So you're really there for them — not only to be valued for what you do, right?

DF: Tell me about the format that you use because I know it’s pretty different from what most fact-checkers use. So tell me what it looks like and how you got there.

CJC: So what we do is basically an image that you can publish on social media. It's very small, so people can share it very easily through WhatsApp or Telegram or whatever. It's the same size as your cell phone screen, and when you see it, you don't need to zoom in to read it. You basically get all the information at a glance.

Why did we decide to do this? Because we thought that most of the misinformation and disinformation we were seeing on social media — which is basically what pops up — was in that sort of format. So we decided that we needed to use the same weapons the bad guys are using to fact-check this information.

DF: Right, you’re basically beating the hoaxers at their own game.

CJC: Yeah, that's it. We realized that most people are not willing to dedicate two hours — or even two minutes — to read an article that debunks something false. All they want to know is that something is fake and why it is fake.

DF: And what has the reception been like?

CJC: I mean, I think it's working very well. One of the goals of the format we use is for it to go as viral as the disinformation itself, and that doesn't always happen. It helps a lot when you ask people to share it with their contacts, because one of the things we found with disinformation is that it's OK if we debunk something on social media, but we need people sharing it — and especially sharing it on WhatsApp.

RELATED ARTICLE: How do you make fact-checking viral? Make it look like misinformation.

So we cannot get to WhatsApp by ourselves; we need our community — our malditos and malditas — to be the ones who go back to the groups where their families have shared the hoaxes and share the debunk with them. And it's also because research has shown that when your brother or a friend tells you something is false, you take it in much more than if I show up — someone you don't know, with no credibility for you — and tell you that something is fake.

DF: Yeah. So basically there are these groups of people — the malditos and malditas, like you said — who are loyal to your brand and sharing your content. Is that right? Are there entire groups of them on WhatsApp dedicated to this?

CJC: Yeah, that’s it. What they do is basically they get our debunking and they go back to whoever shared it with them and tell them, “Well, this has been debunked and it’s fake for this, for this and for this. Maldita has done the research and you can trust them because I do.” That’s what happens.

DF: What are some other ways you're trying to scale your efforts, to tackle more misinformation and address it faster?

CJC: Well, one of the things that we discovered with Maldito Bulo is that it's very simple to adapt the format of what we do to different media platforms. So we are on social media, but we also run a TV show where we do a Maldito Bulo segment, and we have two radio shows where we come on to debunk stuff every week.

And somehow we are reaching different audiences that way, right? We just started a new section on — I don't know how you call this in English — a satire TV show. It's not really where fact-checkers usually are, but it's working pretty well, because it's a show that a lot of people watch on YouTube and it's for young people. So we're reaching beyond the audience we traditionally reached through Twitter, for example. I think that's the way.

DF: What do you think would be an easy way for more people to do the kind of debunking work that you’re doing?

CJC: I've talked to the AFP group and they told us, "Oh, we saw what you did with the images, where you put the debunk on top of the hoax to disseminate it on social media. It's good, so we are going to copy it." And we're happy about it.

I mean, I think the key issue with misinformation is that we need to reach as many people as possible so that we can stop misinformation from moving around. And also because what we do as fact-checkers is basically show our audiences the path to fact-check something themselves.

DF: At the end of the day, I think a lot of people have concerns about the ability of fact-checking to address all the misinformation out there — or even tackle a good bulk of it. I'm curious what you think about that. Do you feel like your work is making a dent in the amount of fakery that's out there?

CJC: I'm sure we have a long way to go. And I think there are two basics on that side. One is technology: We need to improve our technological tools to tackle disinformation. We spend a lot of time thinking about which tech tools we can build to help people distinguish true from false.

And on the other hand, I think we need to educate on fact-checking, basically. We need to reach very young audiences, get into the schools, and start telling them: "OK, when you get something and you're not very sure whether it's true, there are some questions you need to ask of that piece of information in order to decide if you can trust it or not." We really need to work much more on that.

This episode of (Mis)informed was produced by Vanya Tsvetkova, an interactive learning producer at Poynter’s News University. It was edited by Alexios Mantzarlis, with additional editing and creative direction from Alex Laughlin.