How do news audiences actually feel about Elon Musk-style credibility scores?

News readers are constantly challenged to make nuanced decisions as they scroll through social media feeds filled with news stories from brands they have never heard of.

Perhaps as a testament to that, Facebook recently announced plans to hire news credibility specialists. These specialists would be tasked with creating a list of news brands with reputations for credible news, hinting that the company may be “whitelisting” certain brands in the future.

Labeling certain news brands as credible, even internally, looks like the next step the company will take to help audiences navigate the wide range of news quality available online. This, along with the reactions to Elon Musk’s tweets about creating a credibility labeling system for online news, and the news earlier this year that NewsGuard plans to evaluate and assign credibility labels to 7,500 news brands, suggests that systems rating the credibility of news may be the next trend in news media literacy.

In one of his tweets, Musk almost off-handedly raised the question of how much audiences would pay attention to credibility labels.

That got my attention.

I’ve been interested in how audiences update their perceptions of credibility as new forms of journalism emerge, and I wanted to test how audiences respond to credibility labels. In two experiments I’ll be presenting at the annual conference of the Association for Education in Journalism and Mass Communication in August, I tested how credibility labels might work on unfamiliar news brands. Both experiments were embedded in online surveys sent through Survey Sampling International to adults who were told they were market testing a new mobile news app. I analyzed the responses of 350 adults in the first experiment and 254 adults in the second.

Some of the results are encouraging for advocates of improved news media literacy.

In the first experiment, I tested whether audiences would change their perception of credibility based on a news credibility label. I found that when audiences are shown a negative credibility label indicating that a story is not credible, they generally lower their perception of its credibility. In this experiment, some participants saw the news on a web page that had either “liberal” or “conservative” in the news brand’s name. I was trying to replicate the experience of someone browsing Facebook before the 2016 election and happening upon an unfamiliar partisan site.

When people read news on either of these sites, the match or mismatch between an audience member's ideology and the ideology of the news brand made a difference in how well the credibility label worked.

So if a conservative saw a news story on a conservative news site, they were less willing to accept a negative credibility label than if they saw the same story on a liberal news site. Among strong partisans, the negative credibility label did little to discourage sharing the news with others. I didn’t ask in this experiment what motivated them to share the story, though. That will be an important part of this puzzle moving forward.

In a second experiment, I used a positive news credibility label that indicated that the story was legitimate. I wanted to find the best way to teach audiences to be able to interpret news credibility labels. For a labeling system to be effective, audiences need to learn to incorporate them into the equations they use to make final judgments.

Think of the blue check mark next to a verified Twitter handle. Fifteen years ago, that symbol next to a news story would have had no meaning and no impact on how any audience member viewed the credibility of that story. Today, not only is the meaning of that symbol commonly understood, but people have worked out how much weight to give the presence or absence of that symbol in their final credibility decisions.

In this experiment, I included several symbols on the web page and gave participants instructions about the meanings of those symbols in different ways. I found that a pop-up page before the participants read the news story was the most effective aid for participants to recall the meaning of those symbols.

However, audiences didn’t seem to factor the label’s information into their mental math when rating the credibility of the story. The method used to teach the meaning of the credibility labels made no difference in perceptions of credibility.

Instead, the partisanship of the audience was again driving how they rated the credibility of the news story.

These two studies are just the beginning of understanding how credibility labels might work. There’s very little research specifically on credibility labels. One exception is a study of the now-defunct Facebook labeling system. The authors found that news stories on the social media site labeled as false were perceived as less credible, but other stories around them got a credibility bump for their lack of a negative label.

We can also learn a lot about how audiences might respond to credibility labels from studies of fact-checking. However, there are fundamental differences between what fact-checkers do and what news credibility labels would do for individual stories or news brands. Where fact-checking usually involves a journalist investigating a claim made by a newsmaker, credibility labels as currently discussed would mean assessing far more information to assign a rating at either the story or the news brand level. And while fact-checking journalists usually carefully choose a claim that is factually verifiable, credibility labels will involve more gray area where the truth is harder to determine. That nuance is going to be more difficult to navigate.

In the past, audiences were advised to use checklists to determine the credibility of unfamiliar sources. Facebook gave users a list of 10 tips. News articles, too, advised readers to go through a list of cues.

But the ever-changing nature of online and social media design requires audiences to constantly update their understanding of these cues and to rework the mental equation that helps them make a final credibility decision.

Looking forward, we need more research into how credibility labels would work, and we need to give audiences practical tools to determine the veracity of the information they use to navigate the world around them.
