March 30, 2016

Jeb Bush (remember him?) revised a factual claim in a speech in order not to “get PolitiFacted.”

Gabriela Michetti, then a vice presidential candidate in Argentina, responded to a fact check by saying “I saw that on Chequeado. Which is why we corrected ourselves and never repeated it.”

Luigi Di Maio, vice president of Italy’s Chamber of Deputies, apologized for a claim when confronted on live radio with a fact check by Pagella Politica: “I admit my error […] However, I appreciate fact-checking a lot.”

Fact-checkers proudly display these anecdotes as a badge of honor. They are also useful counterpoints when exasperation with politicians’ fluid relationship with the truth on the campaign trail leads to editorials asking whether fact-checking is dead.

But could fact-checkers measure impact in a more quantifiable and less anecdotal manner? A roundup of recent academic studies on the question helpfully summarized the effects on three groups: politicians, readers and the media.

I emailed the 131 members of the International Fact-Checking Network Google group (mostly active fact-checkers around the world plus a few academics) and asked them what they thought of six possible metrics.

The sample of 12 people who answered the survey is small and unrepresentative. Moreover, as I only asked respondents to rate each metric (from one, “that’s stupid,” to five, “a good idea”) but not to justify their votes, I can only hypothesize why some metrics were more popular than others.

No metric will be perfect, nor can the impact of fact-checking be easily turned into a set of numbers. Promoting a critical approach to consuming information is a cherished goal for many fact-checkers — and is essentially immeasurable.

However, this shouldn’t halt the conversation about what each fact-checking initiative could be doing to assess its impact:

1. Measure the number of times a fact check was used by other media as the final word to adjudicate an issue. (Average rating in survey: 4.6)

This was the metric closest to receiving unanimous support among the small group of respondents. Existing tools can track and flag any mention of your organization’s name, so it is a relatively easy metric to start quantifying. It becomes harder when fact-checkers’ work is mentioned vaguely and with scant attribution (as Barack Obama did in his remarks at the Toner Prize Ceremony with this fact-checking effort by POLITICO Magazine).
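For illustration, here is a minimal Python sketch of that kind of counting, assuming article texts have already been collected from a media-monitoring feed or alert service; the outlet name and the sample articles are placeholders, not real data.

```python
import re

# Placeholder: your organization's name as it appears in coverage.
ORG_NAME = "Pagella Politica"

# Placeholder articles; in practice these would come from a monitoring feed.
articles = [
    "According to Pagella Politica, the unemployment figure was overstated.",
    "The minister repeated the claim at a rally yesterday.",
    "Pagella Politica rated the senator's statement as false.",
]

# Case-insensitive match on the outlet's name.
pattern = re.compile(re.escape(ORG_NAME), re.IGNORECASE)
citing = [text for text in articles if pattern.search(text)]

print(f"{len(citing)} of {len(articles)} articles cite {ORG_NAME}")
```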

2. Measure how much a certain claim continues spreading after the publication of a fact check via Twitter or LexisNexis. (Average rating in survey: 4.0)

This is complicated but intriguing. Much like tracking media references, this metric would require a certain amount of sophistication to set up, in order to ensure that paraphrases of a claim are tracked alongside the claim itself. Yet tools like RumorLens and recent studies (Starbird et al, 2014; Guess, 2015; Zubiaga et al, 2016) have looked at the spread of rumors and their subsequent corrections on Twitter. Perhaps researchers in this field could team up with fact-checkers to develop some form of tailored, semi-automated tracking system? A noticeable fall in the spread of a false claim after publication would not in itself prove the impact of the fact check, but it would be a start. British fact-checkers Full Fact have developed a similar tool, still in beta, but for the moment they are mostly interested in using it to target their efforts at obtaining corrections.
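As a rough illustration of the before-and-after comparison, here is a Python sketch that uses simple string similarity as a crude stand-in for paraphrase detection; the claim, the publication date and the tweets are all invented, and a real system would need something far more sophisticated.

```python
from datetime import date
from difflib import SequenceMatcher

# Placeholder claim and fact-check publication date.
CLAIM = "unemployment has doubled in the last two years"
FACT_CHECK_DATE = date(2016, 3, 1)
SIMILARITY_THRESHOLD = 0.6  # crude cutoff for treating a tweet as a repeat

# Placeholder tweets as (date, text) pairs; in practice these would come
# from a Twitter archive or search.
tweets = [
    (date(2016, 2, 20), "Unemployment has doubled in the last two years!"),
    (date(2016, 3, 5), "unemployment has doubled in the past two years, he says"),
    (date(2016, 3, 10), "Lovely weather in Rome today"),
]

def repeats_claim(text: str) -> bool:
    """Very rough paraphrase check based on character-level similarity."""
    return SequenceMatcher(None, CLAIM.lower(), text.lower()).ratio() >= SIMILARITY_THRESHOLD

before = sum(1 for day, text in tweets if repeats_claim(text) and day < FACT_CHECK_DATE)
after = sum(1 for day, text in tweets if repeats_claim(text) and day >= FACT_CHECK_DATE)

print(f"Claim-like tweets before the fact check: {before}")
print(f"Claim-like tweets after the fact check:  {after}")
```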

3. Measure whether readers’ beliefs are more accurate after reading fact checks. (Average rating in survey: 4.0)

This question is being analyzed in an upcoming study by Brendan Nyhan of Dartmouth College and Jason Reifler of Exeter University, focusing on the Italian fact-checking site I used to edit, Pagella Politica. Nyhan and Reifler randomly exposed some Italian respondents to fact checks and then compared their assessments of the accuracy of those claims to those of people who didn’t see the fact checks. They hope to measure whether fact checks by Pagella Politica improved the public’s understanding of those topics. The randomized survey they conducted is an expensive and time-consuming effort that may be difficult to replicate. Nonetheless, such a study could be conducted across organizations in collaboration with outside experts and help assess whether fact checks are clear, improve understanding and do not reinforce flawed beliefs.
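To make the underlying logic concrete, here is a toy Python sketch of the treatment-versus-control comparison at the heart of such a study; the accuracy scores are made up (1 if a respondent judged the claim correctly, 0 if not) and this is not Nyhan and Reifler’s actual analysis.

```python
from statistics import mean

# Placeholder data: 1 = respondent judged the claim's accuracy correctly.
saw_fact_check = [1, 1, 0, 1, 1, 1, 0, 1]  # randomly assigned treatment group
no_fact_check = [0, 1, 0, 0, 1, 0, 1, 0]   # randomly assigned control group

treatment_accuracy = mean(saw_fact_check)
control_accuracy = mean(no_fact_check)

print(f"Accuracy with fact check:    {treatment_accuracy:.2f}")
print(f"Accuracy without fact check: {control_accuracy:.2f}")
print(f"Estimated effect:            {treatment_accuracy - control_accuracy:+.2f}")
```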

4. Measure the times fact checks are quoted in parliament or other transcripts of legislative bodies. (Average rating in survey: 3.6)

I have done this in a previous article for ABC Fact Check in Australia, and before that Mark Stencel of Duke Reporters’ Lab looked at their use in the U.S. Congress. These efforts do help give an understanding of the political ramifications of fact-checkers’ work. Noting whether all parties quote fact checks also helps gauge perceived neutrality. Still, the total number of mentions doesn’t shed any light on whether fact checks are improving public discourse. If the 52 citations logged in Australia are any model of what happens in other countries, fact-checkers are more often quoted to attack opponents than to correct one’s own mistakes. Finally, in countries where the legislative body is mostly ornamental and power is concentrated in an authoritarian executive branch, success on this metric may not mean much at all in terms of impact.
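Here is a minimal Python sketch of what counting those citations might look like, assuming transcripts are already available as text tagged with each speaker’s party; the outlet name and the sample speeches are placeholders.

```python
from collections import Counter

# Placeholder: the fact-checking outlet to track.
OUTLET = "ABC Fact Check"

# Placeholder transcript excerpts tagged with the speaker's party.
speeches = [
    {"party": "Government", "text": "As ABC Fact Check found, the figure is wrong."},
    {"party": "Opposition", "text": "ABC Fact Check rated the minister's claim false."},
    {"party": "Opposition", "text": "The budget will be debated tomorrow."},
]

# Count citations per party to gauge whether all sides quote fact checks.
citations_by_party = Counter(
    speech["party"]
    for speech in speeches
    if OUTLET.lower() in speech["text"].lower()
)

print(dict(citations_by_party))  # e.g. {'Government': 1, 'Opposition': 1}
```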

5. Measure the number of retractions per fact check published. (Average rating in survey: 3.5)

How often do public figures retract or correct their claims following a fact check? Not very often; politicians rarely apologize, though they more regularly tone down or quietly drop references to an erroneous “fact.” This metric may also be better suited to countries where a culture of corrections is more integral to the media and political ecosystem. Perhaps for this reason, half of the survey respondents rated it highly and the other half average or worse.

6. Measure the traffic of a fact check compared to that of the original claim being checked. (Average rating in survey: 3.0)

This metric received the worst average rating from fact-checkers. Presumably the doubters question the extent to which a fact-checking operation can be expected to match the traffic of a claim that may be repeated by several different sources and that has been in the public domain for longer than the fact check itself.

None of these metrics is a silver bullet. Yet especially when so many are wondering whether we are in a “post-fact” age, fact-checkers need to lead the conversation on understanding the extent to which facts still do matter.

Do you have any suggestions for possible metrics of impact? Tweet them @factchecknet or email factchecknet@poynter.org
