April 7, 2003

By Philip Meyer
Knight Chair in Journalism, University of North Carolina-Chapel Hill

As Esther Thorson shows (in her accompanying article), the connection between news-editorial quality and business success is pretty clear. But we can’t stop there.

Let me show you a simplified version of Thorson’s model with a new arrow thrown in.

Capacity → Quality → Success → Capacity

Correlation doesn’t prove causation, and it doesn’t tell us which way the causation runs. Maybe quality leads to success, or maybe success pays for quality. I’m arguing that it goes both ways. This is a loop. A reinforcing loop. When things go well, it’s a virtuous cycle. When things go badly, as they are now, it’s a vicious cycle. How did this happen?

Thorson hit upon a key point when she mentioned the time lag. The bottom-line benefits of cost cutting are immediate; the costs in reduced reader satisfaction are long term.
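
To make that lag concrete, here is a toy sketch of the loop, in Python. Every number in it (the three-year lag, the 3 percent squeeze on capacity) is an assumption invented for illustration, not a figure from Thorson’s model or from any newspaper.

    # Toy sketch of the capacity -> quality -> success -> capacity loop,
    # with a lag between a change in quality and readers' response.
    # All numbers are invented assumptions, not data from the study.
    LAG = 3      # assumed years before readers react to a quality change
    GAIN = 0.97  # capacity set at 97% of last year's success: a steady squeeze
    YEARS = 12

    capacity = [1.0]  # newsroom capacity index, year 0
    success = [1.0]   # circulation/revenue index, year 0
    for year in range(1, YEARS + 1):
        capacity.append(GAIN * success[-1])  # budget follows the bottom line
        # Readers respond to the quality (capacity) of LAG years ago.
        lagged_quality = capacity[max(0, year - LAG)]
        success.append(lagged_quality)
        print(f"year {year:2d}: capacity {capacity[-1]:.2f}, circulation {success[-1]:.2f}")

Run it and the bottom line holds steady for the first three years after the squeeze begins; only then does circulation start stepping down, and by then another cut looks like the sensible response.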

Here’s another thing: The relationship between journalistic quality and profitability is probably not represented by a straight line. I’m betting that it looks more like the bell curve shown below:

[Chart: a bell-shaped curve relating investment in quality to the return it produces]

From a zero start, the benefits of improving quality accelerate. The line gets really steep. But then it levels off. After a certain point, the cost of increasing quality is more than the return. You might plug 20 more reporters into the city hall beat, and it won’t matter.

The top of the hump is “the sweet spot.”
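
Here is a small sketch of that logic in code. The curve in it is made up: it assumes a bell shape that peaks at an arbitrary spending level of 50. The point is only the procedure, which is to keep adding resources until one more unit stops paying for itself.

    # Illustration only: assume a bell-shaped net-return curve and walk up it
    # until one more unit of spending on quality no longer pays for itself.
    # The peak at 50 and the width of 15 are arbitrary assumptions.
    import math

    def net_return(spend: float) -> float:
        """Hypothetical bell-shaped payoff from spending on quality."""
        return 100 * math.exp(-((spend - 50) ** 2) / (2 * 15 ** 2))

    spend = 0.0
    while net_return(spend + 1) > net_return(spend):
        spend += 1  # each added unit still earns more than the last

    print(f"sweet spot at a spend of about {spend:.0f}")
    print(f"marginal return just past it: {net_return(spend + 1) - net_return(spend):+.2f}")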

It’s vital to know where you are on the curve. If you think that you’re on the downhill side, it’s rational to want to cut costs because you are spending more on quality than you need.

But what if you’re really on the uphill side? I think most, if not all, newspapers are. It’s hard to tell, because the effect of cost-cutting on reader loyalty doesn’t kick in right away. If you think you have enough quality and cut, you could be sliding backward down this slippery slope. And you wouldn’t know it.

What to do? Let’s shine a light on the sweet spot: Measure the quality of content and its effects on business success. We do not have that answer now. We might not have it a year from now. It might be necessary to collect measurements of quality over a period of years. But we have to start somewhere. The first issue is what to measure.

Standing on the shoulders of Leo Bogart and asking ASNE members, we have found these things measurable and worth measuring:


  • Accuracy
  • Ease of use
  • Localism
  • Editorial vigor
  • News quantity
  • Interpretation

Scott Maier of the University of Oregon and I measure accuracy by sending clips to news sources and asking them to point out any errors. So far, we have completed 10 of the 20 newspapers in the study. In the first pass at analysis, I have used seven kinds of very simple, objective errors:

  • Misspelling source’s name
  • Error in job title
  • Wrong address
  • Person’s age wrong
  • Location of event wrong
  • Wrong time
  • Wrong date

Treating these as indicators of accuracy, we find that our 10 newspapers vary a lot. In the most accurate paper, almost 7 percent of stories had at least one of these objective errors. In the least accurate, the rate was 18 percent.
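
The arithmetic behind that comparison is simple. Here is a sketch of it; the story records below are invented stand-ins, not responses from any of the 10 papers.

    # Accuracy measure: share of stories in which the source flagged at least
    # one of the seven objective error types. The sample data are invented.
    ERROR_TYPES = {
        "misspelled_name", "wrong_title", "wrong_address",
        "wrong_age", "wrong_location", "wrong_time", "wrong_date",
    }

    # Each story is represented by the set of error types its source reported.
    stories = [
        {"misspelled_name"},
        set(),                        # no objective errors flagged
        {"wrong_age", "wrong_date"},
        set(),
    ]

    flagged = sum(1 for errors in stories if errors & ERROR_TYPES)
    print(f"{flagged / len(stories):.0%} of stories had at least one objective error")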

On to ease of use. A good indicator here is readability. The 40 newspapers that we’ve looked at are written for grade levels ranging from 5th to 10th.
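
One common yardstick for grade level is the Flesch-Kincaid formula, and a rough version of it fits in a few lines. This is not necessarily the formula behind the numbers above, and the syllable counter here is a crude approximation.

    # Flesch-Kincaid grade level, one common readability yardstick.
    # The syllable count is a rough approximation (runs of vowels).
    import re

    def count_syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def grade_level(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    sample = "The council met Tuesday. It voted to raise the water rate by ten percent."
    print(f"approximate grade level: {grade_level(sample):.1f}")  # roughly 5th grade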

[Chart: distribution of readability at the individual story level, first grade through college senior]

This picture shows the distribution at the individual story level, from first grade to college senior. How do we relate these differences to business success? Indicators of success are hard to get or hard to measure. But circulation is an exception. The numbers are public, and they’re audited.

But we have to be careful. Some circulation isn’t cost-effective. Every market is different. A circulation size that would be considered a success in one place might mean failure in another.

To minimize these problems, I deal with household penetration — that’s circulation divided by households — in the home county. And I let each newspaper serve as its own benchmark, judging its success by its ability to hold on to that home county penetration over a period of time. I call this ability to resist the forces of decline “robustness”:

Robustness = 2000 home county penetration ÷ 1995 home county penetration
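
In code, those two definitions come down to a pair of divisions. The circulation and household figures below are invented placeholders, not numbers for any actual paper.

    # Penetration = home-county circulation / home-county households.
    # Robustness  = 2000 penetration / 1995 penetration.
    # The figures below are invented placeholders.
    def penetration(circulation: int, households: int) -> float:
        return circulation / households

    pen_1995 = penetration(circulation=60_000, households=100_000)  # 0.60
    pen_2000 = penetration(circulation=58_000, households=105_000)  # about 0.55
    robustness = pen_2000 / pen_1995
    print(f"robustness: {robustness:.1%}")  # about 92% of 1995 penetration retained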

Using ASNE data, I have looked at the effect of news-editorial staffing in 1995 on robustness over the period 1995-2000. The results:

Reduced or level staff: 93.5%
Growing staff: 97.0%

Those newspapers that reduced staff size or held it constant retained significantly less of their home county penetration, by 3.5 percentage points, than did those that grew staff. So capacity does matter.

Another thing to watch is reader trust. ASNE has been studying credibility off and on since 1985.

[Chart: newspaper credibility (Knight Foundation survey) vs. circulation robustness, 21 counties]

On the chart above, you see 21 counties arrayed by how much people believe their newspapers — as measured by a Knight Foundation survey. That’s on the horizontal scale. The vertical scale shows circulation robustness. As you can see, there is an upward slope — the higher the credibility, the greater the robustness. People read what they believe and/or believe what they read.
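
That upward slope is an ordinary least-squares slope across the counties. Here is the calculation in miniature, on five made-up (credibility, robustness) pairs rather than the 21 real counties.

    # Least-squares slope of robustness on credibility.
    # The five data points are invented; the real chart has 21 counties.
    counties = [(0.60, 0.91), (0.65, 0.93), (0.70, 0.94), (0.75, 0.97), (0.80, 0.98)]

    n = len(counties)
    mean_x = sum(x for x, _ in counties) / n
    mean_y = sum(y for _, y in counties) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in counties)
             / sum((x - mean_x) ** 2 for x, _ in counties))
    print(f"slope: {slope:.2f}")  # positive: higher credibility, higher robustness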

That, of course, leads to a series of questions. How does capacity affect believability and robustness? What options are open to editors, within the constraints of their capacity, to improve believability and robustness? Stay tuned! Seeking those answers is my current task. I wish I had started earlier.


Phil Meyer has been a Miami Herald reporter, Knight Ridder Washington correspondent, and Knight Ridder’s director of news and circulation research. In 1981, he joined the journalism faculty at the University of North Carolina in Chapel Hill and now holds the Knight Chair in Journalism.
