Study about PolitiFact — OK to call it a study?

PolitiFact Editor Bill Adair was careful to call a study that claimed his shop “rates Republicans as less trustworthy than Democrats” a “press release” when I asked him for comment about it last week.

“The authors of this press release seem to have counted up a small number of our Truth-O-Meter ratings over a few months, and then drew their own conclusions,” Adair wrote. (Poynter owns the Tampa Bay Times, which owns PolitiFact.) I asked the spokesperson for George Mason University’s Center for Media and Public Affairs for a copy of the full study, about which I had indeed received a press release. In return, CMPA spokesperson Kathryn Davis sent me the following tables:


CMPA combined “Mostly False,” “False” and “Pants on Fire!” ratings “into a single ‘dishonesty’ rating,” Davis told me.
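In practice, the aggregation Davis describes amounts to collapsing PolitiFact’s six-point Truth-O-Meter scale into a binary one. A minimal sketch of that step in Python, using entirely hypothetical ratings (CMPA has not released its underlying data):

```python
from collections import Counter

# Hypothetical (party, rating) pairs -- CMPA has not published its raw data.
ratings = [
    ("R", "False"), ("R", "Mostly False"), ("D", "Half True"),
    ("D", "Pants on Fire!"), ("R", "True"), ("D", "Mostly True"),
]

# CMPA's stated method: fold "Mostly False," "False" and "Pants on Fire!"
# into a single "dishonesty" category.
DISHONEST = {"Mostly False", "False", "Pants on Fire!"}

tally = Counter()
for party, rating in ratings:
    tally[(party, rating in DISHONEST)] += 1

for party in ("D", "R"):
    dishonest = tally[(party, True)]
    total = dishonest + tally[(party, False)]
    print(f"{party}: {dishonest}/{total} rated dishonest")
```

Note that with samples this small, a single ruling can swing a party’s “dishonesty” share substantially, which is the heart of Wright’s objection below.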

The press release, she said, “is the study and announcement combined.”

CMPA President Robert Lichter told me in an email that CMPA “does publish more extensive research reports as warranted.”

“The short-term studies, which emphasize brevity and timeliness, usually appear first on the CMPA website,” he wrote. “Eventually they are combined/collected into larger data sets and subjected to additional data analysis, producing results that are published in scholarly journal articles and books.”

Here’s a page with CMPA studies listed.

Dr. James Wright of the University of Central Florida’s sociology department is also the editor-in-chief of the journal Social Science Research. He told me via email that press-release studies are “frowned upon in academic circles.”

Wright said that for this study, he’d like to see “a much longer time series of data.” Looking at CMPA’s tables, he noted that in March “Dems lied more than Republicans.”

“Is the overall trend for these 5 months indicative of a long-term pattern?” he asked. “Or is it a temporary aberration? Would other five-month series show the same thing? Does the pattern vary by which party controls the White House?”

He also said he’d prefer a “larger sample size. 100 cases is not much. 500 would be better. How were these 100 cases chosen?”

I attempted to duplicate CMPA’s results myself: I counted all of PolitiFact’s national rulings between Jan. 20 and May 22, 2013, arriving at 113, 17 of which were either clearly or arguably not about statements by politicians. Three were by National Rifle Association Executive Vice President and CEO Wayne LaPierre.

In another email, Lichter said the study “did include assertions from groups aligned with one of the parties, which are picked up by party supporters in current policy debates. In this study period, which of course featured a debate over gun control legislation, that means statements by…Wayne LaPierre were included.”

CMPA says it will next turn its attention to “the Washington Post Fact-Checker’s ratings.” That feature is written by Glenn Kessler, whose work CMPA has previously compared to PolitiFact’s. “My basic principle is that politicians in both political parties will stretch the truth if they think it gives them an edge,” Kessler said in an emailed statement.

“I always strive to be fair-minded and nonpartisan in evaluating claims, and aim to be consistent in applying the Pinocchio ratings,” he continued. “Depending on the time period studied, the results may favor one party or another, simply because I decide what claims to study based on newsworthiness and overall reader interest.”

Related: Why Fact-Checkers Find More GOP Lies (The Atlantic Wire) | Is Politifact Biased Against Republicans? (The Monkey Cage) | Who’s Checking the Fact Checkers? (U.S. News & World Report)


  • Bryan

    **Looking at CMPA’s tables, [Wright] noted that in March “Dems lied more than Republicans.”**

    And suddenly a sample size of 25 was okay. :-)

  • Bryan

After answering Dr. Wright’s question, you could have done him the favor of omitting the quotation where he complains about the sample size.

    There’s nothing wrong with the sample size of the CMPA study. It looks at the entire relevant pool of data (plus Wayne LaPierre), since it’s specifically looking at a time frame where the flow of the news cycle has been less favorable for Democrats. The limitations on the sample size come from PolitiFact’s end of things. Plus there’s PolitiFact’s selection bias, which is probably the point of the study in the first place.

    As was pointed out at the blog PolitiFact Bias, how can Poynter and Bill Adair expect us to dismiss conclusions about PolitiFact based on 100 ratings while at the same time giving us a “report card” about Michele Bachmann based on 59 ratings? It’s inconsistent. It’s hypocritical.