Most of your headline writing tricks don't work, apart from these two
Analytics firm Chartbeat has published a new study finding that most of the conventions for writing catchy digital headlines don't work very well.
Author Chris Breaux, a Chartbeat data scientist, summarized results of testing a dozen styles for optimal click-through rates as follows:
“Use terse, punchy headlines”; “Ask questions”; “Name drop.” None of these properties show much predictive power.
That’s right, writers: We’ve proven that "5 Ways To Write The Best Headline Ever" isn’t actually that effective.
Breaux did find one exception. Headlines using demonstrative adjectives like "this," "that" and "these" had a substantially higher click-through rate than the norm. Long headlines also did modestly better.
Breaux writes that demonstrative words can create a bit of clickbait intrigue, as in "These simple tricks will leave you speechless." Even a much simpler specifier like "GOP debate this evening" can be effective.
The data is drawn from a compendium of A-B testing Chartbeat does for its clients -- more than 100 publishers and 10,000 individual tests. A second finding is that testing more than one variation maximizes the chance of a big improvement.
In a single A-B test, the winning headline did about 25 percent better. But publishers who tested six variants got a winner that did 2.6 times better than the average of those tested.
Breaux concedes that more elaborate tests require more time writing alternate headlines, are statistically more complex to analyze, and may delay settling on a best version to leave posted. But he thinks the benefits can outweigh the disadvantages.
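The intuition behind that finding -- that the winner of a six-way test beats the average variant by a wider margin than the winner of a two-way test -- can be sketched with a toy simulation. This is purely illustrative: the distribution, its parameters and the function names below are my own assumptions, not Chartbeat's data or methodology.

```python
import random

def simulate_best_of_n(n_variants, trials=10_000, seed=42):
    """Toy model: each headline variant's click-through lift is drawn from a
    normal distribution (assumed, not from the study); we keep the best of
    n variants per trial and compare it to the average of those tested."""
    rng = random.Random(seed)
    total_best = 0.0
    total_avg = 0.0
    for _ in range(trials):
        lifts = [rng.gauss(1.0, 0.25) for _ in range(n_variants)]
        total_best += max(lifts)
        total_avg += sum(lifts) / n_variants
    # Winner's performance relative to the average variant in the test
    return total_best / total_avg

# More variants widen the expected gap between the winner and the average
for n in (2, 6):
    print(n, round(simulate_best_of_n(n), 2))
```

Whatever the true distribution of headline quality, the mechanism is the same: the expected maximum of a sample grows with sample size, so the more variants you write, the better your best one is likely to be.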
A chart in the Chartbeat study shows the effectiveness of various types of headlines.
In the opening of his article, Breaux writes that "blindly following guidelines can lead to copy that sounds cliché at best, and actively off-putting at worst." I asked whether once-effective approaches might by now have worn out their appeal through overuse.
It's an intriguing question, he replied, but "our headline testing product was released just this year, so we haven't been able to see if headline effectiveness has changed over time. But it's something we'd love to look into toward the end of 2016."
He continued: "The convention-fatigue principle is reasonable ... we certainly wouldn't suggest that publishers go big on demonstrative adjectives. The main result we found was that there's little evidence that most headline-writing conventions work, so publishers should always concentrate on writing great headlines (and ideally running tests over several options) as opposed to following specific style formulae."
For a second opinion I checked with John Schlander, digital general manager of the Tampa Bay Times, who also teaches a popular Poynter News University course on digital headline writing.
"A-B testing is a fantastic tool," he told me, but also limited. It mainly works for choosing headlines that will work best for users already on the home page. But it is much less effective for testing alternate approaches that would draw traffic from search or aggregation sites (which often write their own headlines).
Schlander said that he teaches his students that meaning, reader interest and searchability are the strongest elements of effective headlines. But, like Breaux, he thinks the winning headline depends both on the site and on the story's characteristics -- it can't be reduced to a one-size-fits-all formula.
As someone who writes most of my own headlines, I come away from this pleased to learn that finding wording that is effective and inviting is part art, part science -- and probably a moving target as well. Maybe the robots are not on the verge of making live practitioners in this corner of the business redundant after all.