Science journalism has seen a resurgence during the pandemic. Every day seems to bring a new study, a new paper, a new finding to break down and interpret for readers.
But how well do journalists communicate those findings?
A study from the University of Michigan found that journalists tend to understate the claims of scientific papers. As part of their research into how certainty is expressed in science communication, School of Information Ph.D. student Jiaxin Pei and assistant professor David Jurgens compared hundreds of thousands of paper abstracts with corresponding news articles reporting on those papers’ findings.
“The certainty of findings presented in science news is actually lower than the certainty of the same findings presented in the paper abstracts,” Pei said.
Pei and Jurgens found that journalists are often cautious in their science reporting, sometimes downplaying the certainty of findings. Their research contradicts claims that journalists exaggerate scientific discoveries.
To reach their conclusions, Pei, Jurgens and a team of human annotators calculated certainty levels in scientific abstracts and news articles pulled from Altmetric, which tracks news stories mentioning scientific papers. They then built a computer model that could replicate these calculations, allowing them to analyze hundreds of thousands of articles and papers.
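The pipeline described above, in which human-annotated certainty scores are used to fit a model that then reproduces those scores at scale, can be sketched in miniature. This is an illustrative toy only, not the authors' actual code; the example sentences, the 1-to-6 scale, and the choice of a TF-IDF regression model are all invented for demonstration:

```python
# Toy sketch: fit a regression model on a handful of human-annotated
# certainty scores (1 = very uncertain, 6 = very certain), then use it
# to score unseen sentences. Data and scale are invented, not the study's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

annotated = [
    ("The drug definitely reduces mortality in all patients.", 6.0),
    ("The drug clearly reduces mortality.", 5.5),
    ("The drug appears to reduce mortality.", 3.5),
    ("The drug may possibly reduce mortality.", 2.0),
    ("It is unclear whether the drug affects mortality.", 1.0),
]
texts, scores = zip(*annotated)

# TF-IDF features over unigrams and bigrams feed a ridge regressor.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(texts, scores)

# Hedging language should pull the predicted certainty down.
print(model.predict(["The treatment may possibly reduce symptoms."])[0])
```

Once such a model tracks the annotators closely enough, it can score far more abstract-article pairs than humans could annotate by hand, which is what made the large-scale comparison possible.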
Those discrepancies became especially evident when they analyzed different aspects of certainty, such as “number.” For example, their results, along with prior research, indicate that journalists may be replacing precise numbers found in scientific papers with language like “roughly” to make their writing more accessible.
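As a rough illustration of how the "number" dimension might be detected in text, one could flag numerals preceded by an approximator word. This is a simplified heuristic invented here for illustration, not the annotation scheme used in the study:

```python
import re

# Simplified heuristic for the "number" aspect of certainty: flag a
# numeral that is hedged by an approximator such as "roughly" or "about".
APPROXIMATORS = r"(?:roughly|about|around|approximately|nearly|some)"
APPROX_NUM = re.compile(rf"\b{APPROXIMATORS}\s+[\d,.]+", re.IGNORECASE)

def has_approximate_number(sentence: str) -> bool:
    """Return True if the sentence hedges a numeric quantity."""
    return bool(APPROX_NUM.search(sentence))

print(has_approximate_number("The effect was seen in 23.4% of patients."))
print(has_approximate_number("The effect was seen in roughly 25% of patients."))
```

A precise figure like "23.4%" passes through unflagged, while "roughly 25%" is caught, mirroring the kind of substitution the researchers observed between papers and news coverage.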
While they don’t have definitive answers as to why journalists understate scientific findings, Jurgens hypothesized that one reason may be that reporters believe it is better to err on the side of caution. He noted that journalists’ work can be difficult: they must translate scientific work so that it is comprehensible to a general audience.
While some believe that overclaiming scientific findings is worse than underclaiming them, Jurgens said the latter can also have negative effects. He pointed to reporting on COVID-19 vaccines as an example.
“Scientists are fairly certain that the vaccines are safe,” Jurgens said. “But I think bringing up the uncertainty regarding that could lead people to be less vaccinated or to maybe not seek out health care. In this case, within the pandemic setting, it could mean a loss of life, which is a fairly serious outcome.”
Pei and Jurgens also examined how “journal impact factor” — their proxy for measuring quality of science — affects the way journalists present scientific conclusions. They found that the journal where a study originated did not seem to influence how reporters described scientific uncertainty.
That can be a problem, Pei said, since higher-impact journals have a more rigorous review process, and knowing how prestigious or trustworthy the journal behind a paper is could be useful information for readers.
For journalists who wish to improve how they describe scientific findings, Pei advises that they talk to the scientists behind the study they are trying to cover. Jurgens noted, however, that scientists can also be “pretty bad communicators.”
“It’s an open question,” Jurgens said. “How do we effectively communicate this in a way that is accessible?”
Asked how certain they were about their own results, Jurgens and Pei said they were “pretty certain.” The model they built produced certainty levels that were very similar to the ones calculated by the annotators, and their analysis included hundreds of thousands of data points. Their paper was published in the Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Pei and Jurgens noted, however, that they could always use more data and that their study didn’t look at other areas where people might perceive exaggeration in news — such as headlines.
The next step in their research is to talk to journalists and determine what tools they could use to improve their reporting. They have been thinking about ways to help reporters translate scientists’ work for general audiences.
One step Pei and Jurgens have already taken is publishing code that allows journalists and scientists to calculate certainty levels in their writing.
“There are a lot of open questions in this field (natural language processing),” Pei said. “With more efforts in this area, we’ll be able to provide tools and systems for journalists to cover science.”