May 2, 2019

This post was originally published here by Journalist’s Resource at the Shorenstein Center on Media, Politics and Public Policy at Harvard University.

On April 25, 2019, former Vice President Joe Biden became the latest big-name politician to join the race for the 2020 Democratic Party presidential nomination. Among Democratic voters, he leads the field over the next most popular candidate, Vermont Sen. Bernie Sanders, by 7 percentage points — with a sampling margin of error of 5.4 percentage points — according to a recent poll from Monmouth University.
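One caveat worth keeping in mind when reading a lead like that: a poll's reported margin of error applies to each candidate's share individually, and the uncertainty on the gap between two candidates is larger. A rough sketch of that arithmetic (the vote shares and sample size below are illustrative assumptions, not the Monmouth poll's published figures):

```python
import math

def lead_margin_of_error(p1, p2, n, z=1.96):
    """Approximate 95% margin of error for the lead (p1 - p2)
    when both shares come from the same multinomial sample."""
    # Variance of the difference of two shares from one sample
    # includes a covariance term, so it exceeds either share's
    # variance alone.
    var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(var)

# Assumed for illustration: shares of 27% and 20% and a sample
# of 330 respondents (not the poll's actual figures).
moe_lead = lead_margin_of_error(0.27, 0.20, 330)
print(round(moe_lead * 100, 1))  # margin of error on the lead, in points
```

Under these assumed numbers, the margin of error on the lead itself comes out at roughly 7 points — about the size of the lead — which is why a gap only slightly larger than a single-candidate margin of error can still be statistically ambiguous.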

But the public and the media alike have been burned by polls before — see the 2016 presidential election — and there’s still a long, long way to go before the Democratic field is settled. Donald Trump officially became the Republican Party nominee for president in July 2016, but a year prior there were still 16 other candidates angling for the nomination.

Precisely because there are still so many town halls and county fairs to come for the Democratic contenders, we’re rounding up some recent academic research that can inform coverage of political opinion polls in this early presidential contest. This research digs into bias in evaluating political polling, polling errors across time and space, the relationship between media coverage and polling, and more.

All the Best Polls Agree with Me: Bias in Evaluations of Political Polling

Madson, Gabriel J.; Hillygus, D. Sunshine. Political Behavior. February 2019.

The credibility of a poll comes down to survey methods, the pollster’s reputation and how transparent the pollster is with their data. Does the public care about any of that? The authors conducted two surveys with a total of 2,048 participants — 600 recruited from Amazon Mechanical Turk and 1,448 from the national Cooperative Congressional Election Study. They found participants perceived polls to be more credible when polls agreed with their opinions, and less credible when polls disagreed.

“Polls are not treated as objective information,” the authors write.

Disentangling Bias and Variance in Election Polls

Shirani-Mehr, Houshmand; et al. Journal of the American Statistical Association. July 2018.

Margins of error indicate the precision of polling estimates: how closely a poll’s results are likely to match reality. A larger sample typically comes with a smaller margin of error, while a smaller sample means a larger one.
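That relationship between sample size and precision can be sketched with the standard formula for the sampling margin of error of a proportion — a simplification that assumes a simple random sample, whereas real polls use weighting and more complex designs:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error for a proportion p
    estimated from a simple random sample of size n."""
    # p = 0.5 gives the widest (most conservative) margin.
    return z * math.sqrt(p * (1 - p) / n)

# Larger samples shrink the margin of error, but only as 1/sqrt(n):
print(round(margin_of_error(400) * 100, 1))   # ~4.9 points
print(round(margin_of_error(1000) * 100, 1))  # ~3.1 points
print(round(margin_of_error(2000) * 100, 1))  # ~2.2 points
```

Note the diminishing returns: quadrupling the sample only halves the margin of error, which is one reason most national polls settle for roughly 1,000 respondents.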

Confidence intervals and margins of error go hand in hand. The final Gallup poll before the 2012 election showed Mitt Romney with 49% of the popular vote and Barack Obama with 48%, with a 2 percentage point margin of error at a 95% confidence level. In other words, Gallup was 95% confident that Romney’s true support fell between 47% and 51%, and Obama’s between 46% and 50%. In the end, Obama outperformed Gallup’s confidence interval, winning 51% of the popular vote, while Romney got 47%.

Political polls typically report margins of error that reflect sampling error alone. For that reason, they often underestimate their total uncertainty, according to the authors. For example, pollsters can’t know exactly which people in their target population will actually turn out to vote.

The authors analyzed 4,221 polls across 608 state-level presidential, senatorial and gubernatorial elections from 1988 to 2014. The polls were conducted within the last three weeks of campaigns.

On average, they find a 3.5 percentage point difference between poll results and election outcomes, “about twice the error implied by most confidence intervals,” the authors write.

“At the very least, these findings suggest that care should be taken when using poll results to assess a candidate’s reported lead in a competitive race.”

Election Polling Errors Across Time and Space

Jennings, Will; Wlezien, Christopher. Nature Human Behaviour. March 2018.

The authors look at more than 30,000 national polls from 351 elections across 45 countries from 1942 to 2017. They find that national polls taken from 2015 to 2017 performed in line with historical norms, but that polls asking about the largest political parties tended to be less accurate than those asking about smaller parties.

“These errors are most consequential when elections are close, as they can be decisive for government control,” the authors write.

The reputation of an individual pollster matters when evaluating poll results, but so does timing: the authors find that polls conducted 200 days out from a presidential election were generally less accurate than those conducted closer to election day.

Partisan Mathematical Processing of Political Polling Statistics: It’s the Expectations That Count

Niemi, Laura; et al. Cognition. May 2019.

Polling results bombard the public during presidential campaigns, and it can be difficult for voters to process that information. The authors surveyed 437 participants recruited from MTurk and find that for the 2012 and 2016 presidential elections, those who had committed to a particular candidate underestimated polling support for the opposing candidate — even in the face of conflicting polling information. Those who didn’t actually expect their candidate to win did not succumb to the same cognitive dissonance.

Mass Media and Electoral Preferences During the 2016 U.S. Presidential Race

Wlezien, Christopher; Soroka, Stuart. Political Behavior. June 2018.

Does the dog wag the tail, or is it the other way around? The authors compare polling data and nearly 30,000 stories in nine major newspapers across the United States leading up to the 2016 presidential election, to clarify the relationship between media coverage and voter preferences. Their most robust finding indicates that coverage at these media outlets followed public opinion: as polls shifted toward or away from a candidate, the tone of coverage turned correspondingly more positive or negative.

“Results speak to the importance of considering media not just as a driver, but also a follower of public sentiment,” the authors write.

Don’t look to polls for yes-or-no answers

For as much as journalists and the public may want political polls to indicate yes-or-no answers, they don’t, they won’t, and they never have. University of Minnesota journalism professor Benjamin Toff put it like this in a March 2018 essay in Political Communication:

“Polls are more pointillism than photorealism; their results are meant to be observed from a distance. One should never mistake these impressionistic representations of public sentiment for the actual thing.”

If you’re curious what went wrong with polling during the 2016 presidential race, check out this postmortem from the American Association for Public Opinion Research. The upshot? National polls were generally correct, but at the state level polls showed a closer race whose outcome was more uncertain.

For more guidance on covering polls, check out 11 questions journalists should ask about public opinion polls and 7 tips related to margin of error. Plus, political involvement during the 2016 presidential election wasn’t very different from previous elections. FiveThirtyEight also offers a good rundown of trustworthy pollsters. Finally, this is how the press failed voters in the 2016 presidential election.
