If you think science-based claims do not need solid fact-checking, think again. Just look at the misinformation that spread around the world last week about the perils of eating red meat.
SciCheck, launched by FactCheck.org in January, is one example of fact-checkers dedicating themselves to debunking claims made about science. In many cases, politicians and the media misquote scientific findings, as with the WHO research on the carcinogenic nature of red meat. In these instances, fact-checkers need only point out that there is a basic problem with the interpretation of the research. Other times, fact-checkers show that certain claims are based on blatantly flawed data.
But what can they do about the limitations inherent to rigorous scientific research? Two recent articles in Nature portray these limits effectively. In one, Raphael Silberzahn and Eric L. Uhlmann, of IESE and INSEAD, two business schools, gave the exact same data set and question to 29 different research groups and got back dramatically different results. In another, Regina Nuzzo, a statistician and science writer, shines a spotlight on four common cognitive fallacies that undermine the scientific process, from “hypothesis myopia” to the “Texas sharpshooter.”
I reached out to Nuzzo and with her help compiled a non-exhaustive list of precautions that fact-checkers could take when dealing with scientific research as evidence.
1. Assess a study’s design and reward transparency
Fact-checkers tend to be familiar with basic parameters of a study’s reliability. Dave Levitan, science writer for FactCheck.org, notes that though “it isn’t written in stone, there is certainly a hierarchy of data and science that I adhere to,” with peer-reviewed research published in leading journals generally at the top. The best fact-checkers also tend to reward studies that share all their data publicly.
Nuzzo says a further step is analyzing whether the researchers under scrutiny have designed their study in a way that clearly considers competing hypotheses, or at least discussed the results of their study in light of other possible explanations. Preferential treatment should also be given to pre-registered studies, i.e. ones whose hypothesis and data collection techniques are published before the actual collection and analysis take place. A similar preference could be given to researchers who disclose everything about how the study was conceived and designed in supplemental information. Fact-checkers grappling with how to assess transparency in science could start by consulting the Center for Open Science’s TOP guidelines. Nuzzo adds, “Even though the fact-checkers themselves won’t be going in to attempt to replicate their analyses to see if they’re right, the very act of researchers posting their information online is an extra incentive for the researchers to examine their own biases and stay honest and unbiased.”
2. Prize replication, where possible
The Silberzahn and Uhlmann paper cited above illustrates the risk of relying on non-replicated studies. Had any one of the 29 analyses been published on its own, a fact-checker might have found “confirmation” that darker-skinned soccer players are three times more likely to get red cards than lighter-skinned ones, or that they are no more likely at all. Nuzzo notes that fact-checkers should also rely on “the informal post-publication peer review […] many blogs serve as watchdogs to scrutinize, discuss, and try to replicate published findings.” Of course, the perfect should not get in the way of the good. As Levitan notes, “realities of funding, time, logistics, and other factors get in the way” and “it doesn’t make a lot of sense to ignore science that hasn’t been subjected to precise replication.”
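To see how the same numbers can honestly support opposite conclusions depending on the analyst’s choices, consider a classic teaching example (not from the Silberzahn and Uhlmann study): the kidney-stone treatment data often used to illustrate Simpson’s paradox. Comparing success rates within each subgroup favors treatment A; pooling the subgroups favors treatment B. Both analyses are defensible, yet they disagree:

```python
# Two defensible analyses of one data set, reaching opposite conclusions.
# Classic kidney-stone figures used to illustrate Simpson's paradox:
# (successes, patients) for treatments A and B, split by stone size.
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

# Analysis 1: compare success rates within each subgroup -> A wins both.
for stratum, arms in data.items():
    rates = {t: s / n for t, (s, n) in arms.items()}
    print(stratum, {t: f"{r:.0%}" for t, r in rates.items()})

# Analysis 2: pool the subgroups and compare overall rates -> B wins.
pooled = {t: (sum(data[g][t][0] for g in data),
              sum(data[g][t][1] for g in data))
          for t in ("A", "B")}
for t, (s, n) in pooled.items():
    print(f"pooled {t}: {s}/{n} = {s / n:.0%}")
```

A fact-checker quoting only the pooled figures, or only the stratified ones, would report a true statistic and still mislead.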
3. Explain what is unknown as extensively as what is known
Levitan says that at SciCheck he regularly explains “any uncertainties in what evidence is available.” Fact-checkers, like academics, should not be scared of admitting our collective ignorance on a subject. Some fact-checkers already use ratings such as El Sabueso’s “No se puede probar” (This cannot be proven). This allows them to punish an unsubstantiated claim and inform the reader, while recognizing that an issue cannot yet be judged on a spectrum that goes from true to false.
4. Understand the power and limitations of the p-value (or make friends with a statistician)
Most fact-checkers have a basic knowledge of the p-value, the standard statistical tool scientists use to gauge how likely results at least as extreme as theirs would be if chance alone were at work. They should not trust it blindly. Nuzzo has written about the perils of “p-hacking” in another Nature article and warns, “p-values are often abused, distorted, misunderstood, and misinterpreted.” For fact-checkers without the time or resources to assess whether a study has been “p-hacked,” an alternative solution is to befriend a statistician. Common personality traits should make this easier: as Nuzzo notes, “statisticians are notoriously skeptical.”
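The core danger of p-hacking is easy to demonstrate with a simulation. The sketch below (a hypothetical setup, not drawn from any study mentioned here) tests 20 completely random "outcomes" per study at the conventional 0.05 threshold and keeps only the best one. Even though every effect is pure noise, roughly half of the simulated studies produce at least one "significant" result:

```python
import math
import random

random.seed(42)

N = 100  # coin flips per outcome (all outcomes are pure chance)

# Exact two-sided binomial p-value for each possible head count k,
# precomputed once: the sum of all outcome probabilities no larger
# than the probability of observing k heads out of N fair flips.
probs = [math.comb(N, k) * 0.5 ** N for k in range(N + 1)]
pval = [sum(q for q in probs if q <= probs[k] * (1 + 1e-9))
        for k in range(N + 1)]

def study(n_outcomes=20):
    """One study that tests 20 unrelated null outcomes and reports
    only the smallest p-value -- a simple form of p-hacking."""
    return min(pval[sum(random.random() < 0.5 for _ in range(N))]
               for _ in range(n_outcomes))

trials = 1000
false_hits = sum(study() < 0.05 for _ in range(trials))
print(f"'Significant' finding in {false_hits / trials:.0%} "
      f"of pure-noise studies")
```

Each individual test is honest; it is the unreported multiplicity that manufactures the finding. This is exactly the pattern a skeptical statistician friend can help a fact-checker spot in a paper’s methods section.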
5. Watch out for hypothesis myopia in others – and yourself
Scientists’ hypothesis myopia may sound familiar to many fact-checkers too. Some of the mistakes fact-checkers make are the product of a similar tunnel vision: once a plausible justification for a politician’s error is found, no other possible explanations are sought. This sometimes leads fact-checkers to disregard improbable but truthful justifications for a puzzling claim.
Silberzahn and Uhlmann write: “Under the current system, strong storylines win out over messy results.” Fact-checkers see this daily in the claims they check. They, too, should be careful not to fall into the same trap when dealing with scientific evidence.