On the morning after the 2016 presidential election, many opinion pollsters and pundits picked at their scrambled eggs with egg on their faces. Somehow, Donald Trump had upset Hillary Clinton, the favored candidate in their polls. Journalists who provided misleading reporting to the public should have been eating humble pie.
Before this year’s Nov. 3 elections, we’ll be inundated again with polls and punditry. With national and state polls proliferating, we risk drowning in bad data, said Tom Rosenstiel, executive director of the American Press Institute.
Journalists shouldn’t make matters worse by superficial and careless reporting. Improving the quality of coverage isn’t rocket science if you know a few essentials.
Let’s start here: Journos should make clear that all opinion polls come with tricky challenges and a margin of error. Getting accurate poll data this year is complicated by the pandemic, widespread mail-in voting, hyper-polarized constituencies and daily news surprises.
Many in the news media compound the situation with misleading reporting. Too many reports, for example, ignore that each poll carries a margin of error, or fail to explain what that means. Adding fine print at the bottom of a graphic doesn’t cut it.
Here’s how it works: Say a reputable polling firm — call it Company A — surveys 1,000 people and finds that 55% of respondents oppose reopening schools. The company says the poll’s margin of sampling error is plus or minus 3 percentage points. According to the American Association for Public Opinion Research and other experts, a correctly calculated margin of error means that Company A would get the same results — plus or minus 3 percentage points — 95 times if it repeated the poll 100 times. Thus, factoring in the margin of error, the poll’s results would fall somewhere between 52% and 58%. (The size of the stated margin of error depends on the number of poll respondents and other factors.)
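For the numerically curious, the arithmetic behind that figure can be sketched in a few lines of Python. This uses the standard sampling-error formula for a simple random sample at the 95% confidence level (the 1.96 multiplier); an actual polling firm’s method may differ in its details.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of sampling error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Company A's hypothetical poll: 55% of 1,000 respondents oppose reopening schools.
moe_points = margin_of_error(0.55, 1000) * 100
print(f"Margin of error: +/- {moe_points:.1f} points")  # roughly +/- 3 points

# Factoring that in, true support falls somewhere in this range:
low, high = 55 - moe_points, 55 + moe_points  # about 52% to 58%
```

Note how the formula depends on the sample size: quadrupling the number of respondents only cuts the margin of error in half, which is why most national polls settle for roughly 1,000 people and a margin near 3 points.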
It’s instructive to go back to 2016. On Election Eve, many polls favored Clinton at around 47% to 43%. The Bloomberg/Selzer poll, for example, had Clinton ahead 46%-43%. Factoring in that poll’s margin of error of 3.5 percentage points, its results really showed Clinton could get as much as 49.5% of the vote, or as little as 42.5%. Trump, meanwhile, could receive anywhere from 39.5% to 46.5%.
Given those numbers, journalists should have reported the poll results as “within the margin of error” and too close to call. That might not sound like a catchy headline, but it comes with that element of mystery—who’ll win?—that attracts readers. And, most important, accuracy counts.
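The “too close to call” judgment is simple interval arithmetic: if the two candidates’ possible ranges overlap, the poll cannot separate them. A minimal sketch, using the Bloomberg/Selzer numbers cited above:

```python
def candidate_range(share, moe):
    """Interval a candidate's true support could fall in, given the margin of error."""
    return share - moe, share + moe

clinton = candidate_range(46.0, 3.5)  # (42.5, 49.5)
trump = candidate_range(43.0, 3.5)    # (39.5, 46.5)

# The two intervals overlap, so the race is within the margin of error.
too_close_to_call = clinton[0] <= trump[1] and trump[0] <= clinton[1]
print(too_close_to_call)  # True
```

A stricter test sometimes used in coverage compares the gap between candidates to twice the margin of error; by either standard, this race was not a safe call.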
Some newspapers, but not all, did a better job of poll reporting. Television generally did not. Without mention of the margin of error, many viewers assumed Clinton was headed to certain victory.
As it turned out, Clinton won the popular vote 48% to 46%. That didn’t matter, of course, because Trump prevailed in the Electoral College. And yes, journalists had failed to remind us that the Electoral College, not the popular vote, decides it all.
The stated margin of sampling error is just one source of potential miscalculation. A news report shouldn’t turn off readers and viewers by venturing into the statistical weeds, and we won’t go there now. But journos can inform the public that polls are an imprecise snapshot of people’s views at a given time. If they mention that a result falls “within the margin of error,” they can explain what that means. The American Association for Public Opinion Research offers guides on polling for journalists.
As a veteran newspaper editor, I come down hard on my news colleagues. The journalistic mission is not to spew out enticing headlines but to evaluate, skeptically, the accuracy and limitations of polls and to educate and inform the public. In 2016, most journalists failed.
Given this year’s “super-challenging” polling environment, journalists can aid public understanding “by being more careful in phrasing of information” and exercising caution in projecting winners, said Angie Holan, editor-in-chief of PolitiFact, the Poynter Institute’s increasingly vital fact-checking website.
A few researchers and journalists aggregate a large number of polls to develop statistically driven “probabilistic” forecasts. Nate Silver’s “FiveThirtyEight” website, for example, gave Clinton a 71% chance of winning. The New York Times’ “Upshot” predicted Clinton’s chances of winning at 85%.
Social scientist Natalie Jackson, who botched the 2016 results by giving Clinton a 98% chance of winning, issued a mea culpa and wrote: “I have concluded that marketing (of) probabilistic poll-based forecasts to the general public is at best a disservice … and at worst could impact voter turnout and outcomes.”
Clinton told New York Magazine: “I don’t know how we’ll ever calculate how many people thought it was in the bag, because the percentages kept being thrown at people— Oh, she has an 88% chance to win.’’
My bottom line to journalists: Start covering polls responsibly.
Frank Sotomayor, a Los Angeles Times editor for 35 years, co-edited the 1983 series on Latinos that won the 1984 Pulitzer Prize for Public Service. He lives in Tucson and can be reached at email@example.com.