March 28, 2012

Many news organizations are pushing into a new frontier of technology in 2012 — training computers to analyze thousands of written messages and interpret how their authors feel.

The Washington Post, Yahoo News, Time and ABC News are gauging social media activity and sentiment in the presidential campaign; The Boston Globe tracks the mood of Red Sox fans; and the Los Angeles Times monitored fan favorites to win the Oscars.

Whatever the applied topic, these news organizations are on a similar quest for a grail-like achievement — a computer system that can process thousands of natural-language messages, determine what they mean, and draw valid conclusions and connections.

That’s the dream, but what’s actually possible now? And where will this technology take journalists in the future?

The most celebrated achievement in the field so far has been IBM’s Watson, a free-standing computer that was able to learn, listen and “think” well enough to play and win a game of Jeopardy in 2011.

The Jeopardy game was just one demonstration of language-recognition and artificial intelligence technology that the company is applying in many industries — including media, says Steve Canepa, IBM’s general manager of media & entertainment.

That language-recognition technology — wielded by the University of Southern California’s Signal Analysis and Interpretation Lab — powered the Los Angeles Times’ Senti-Meter that tracked the popularity (on Twitter) of the actors and films nominated for an Academy Award. (Trendrr and Bluefin Labs are behind the Time and ABC News apps.)

Impressive, yes. But when it comes to computers trying to understand the meaning of human language, the challenges are just as fascinating as the successes.

Context complicates meaning

Written messages are full of little tricks that can confound a reading machine. It can’t simply make checklists of positive and negative words to evaluate sentiment, because some words mean entirely different things in different contexts.

“When you’re doing strictly social data, there are a lot more mistakes that get made [by computers], because the problems are trickier,” said Jeff Catlin, CEO of Lexalytics. The company’s technology can read, analyze and extract information from written text in websites or tweets. Its text-analysis engine powers services provided by social enterprise software companies like Radian6 and Salesforce.

“There is no grammar” in social media, Catlin said. “The capitalization, the punctuation are either nonexistent or don’t mean what you think they mean. You just have a whole host of problems.”

Intensifiers are one problem. The computer must learn the difference between phrases like these:

“It was spectacular” vs. “It was a spectacular failure.”

In the first case, the word “spectacular” is a straightforward description; in the second, it intensifies the complete opposite meaning.
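To make the point concrete, a toy scorer might look something like the sketch below. The word weights and the adjective-noun rule are illustrative assumptions, not Lexalytics’ actual engine; the point is only that a bag-of-words checklist misreads the intensifier while a phrase-aware rule does not.

```python
# Toy sketch: why a word checklist misreads intensifiers.
# The weights and the adjective-noun rule are illustrative, not a real engine's.

POSITIVE = {"spectacular": 0.8}
NEGATIVE = {"failure": -0.7}

def naive_score(text):
    """Checklist approach: just add up word scores."""
    words = text.lower().rstrip(".").split()
    return sum(POSITIVE.get(w, 0.0) + NEGATIVE.get(w, 0.0) for w in words)

def phrase_aware_score(text):
    """Treat a positive adjective that modifies a negative noun as an
    intensifier of that noun, not as praise."""
    words = text.lower().rstrip(".").split()
    score, i = 0.0, 0
    while i < len(words):
        if words[i] in POSITIVE and i + 1 < len(words) and words[i + 1] in NEGATIVE:
            # "spectacular failure": amplify the negative rather than add a positive
            score += NEGATIVE[words[i + 1]] * (1.0 + POSITIVE[words[i]])
            i += 2
        else:
            score += POSITIVE.get(words[i], 0.0) + NEGATIVE.get(words[i], 0.0)
            i += 1
    return score

print(naive_score("It was spectacular"))                    # 0.8: positive, correct
print(naive_score("It was a spectacular failure"))          # about 0.1: wrongly near neutral
print(phrase_aware_score("It was a spectacular failure"))   # about -1.26: strongly negative
```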

Then there’s the case of negation, Catlin explained. Consider this phrase:

“The candidate’s debate performance wasn’t horrible.”

What’s a computer to make of that? It’s not a positive statement, but it’s not altogether negative either. Lexalytics’ system addresses the issue by scoring each message on a continuum from positive to negative, so there’s room for gray areas in between.
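A minimal sketch of that idea, with made-up weights and a simple damping rule (not Lexalytics’ actual implementation), shows how negation can push a score into the gray middle of the continuum rather than flipping it outright:

```python
# Continuum scoring with simple negation handling -- illustrative only.
# Negation flips and dampens a word's score, so "wasn't horrible" lands in
# the gray area between positive and negative instead of reading as praise.

LEXICON = {"horrible": -0.9, "great": 0.8}
NEGATORS = {"not", "wasn't", "isn't", "never"}

def continuum_score(text):
    """Return a score on a continuum from -1.0 (negative) to +1.0 (positive)."""
    score, negated = 0.0, False
    for word in text.lower().rstrip(".").split():
        if word in NEGATORS:
            negated = True
        elif word in LEXICON:
            value = LEXICON[word]
            if negated:
                value = -value * 0.4   # flip and dampen, don't just invert
                negated = False
            score += value
    return max(-1.0, min(1.0, score))

print(continuum_score("The candidate's debate performance was horrible."))     # -0.9
print(continuum_score("The candidate's debate performance wasn't horrible."))  # about 0.36: mildly positive
```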

That’s also helpful in measuring relative sentiment — a statement that expresses sentiment for one object in relation to another.

“‘I love the Red Sox even more than the Bruins.’ What is that for the Bruins?” Catlin posed. “It’s clear for the Red Sox, but what about the Bruins? It’s good for them; you as a human know that it’s good for them — but how good? It’s not quite as good as for the Red Sox.”
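One way to capture that, sketched below with made-up weights and an assumed “more than” rule (not any vendor’s actual model), is to give each named entity its own score and dampen the one on the lesser side of the comparison:

```python
# Hedged sketch of relative (entity-level) sentiment: one message, two scores.
# The 0.8 base weight and the "more than" damping factor are assumptions.

def relative_sentiment(text, entities):
    """Give each entity its own score; an entity mentioned after 'more than'
    still gets positive sentiment, just less of it."""
    lower = text.lower()
    base = 0.8 if "love" in lower else 0.0
    pivot = lower.find("more than")
    scores = {}
    for entity in entities:
        pos = lower.find(entity.lower())
        if pos == -1:
            continue
        dampen = 0.5 if pivot != -1 and pos > pivot else 1.0
        scores[entity] = base * dampen
    return scores

print(relative_sentiment("I love the Red Sox even more than the Bruins",
                         ["Red Sox", "Bruins"]))
# {'Red Sox': 0.8, 'Bruins': 0.4}: good for both, but not quite as good for the Bruins
```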

When a normally positive word like “good” (or “like,” for that matter) appears in a message, the computer must determine whether it is actually a signal of sentiment or just context for the words around it.

“A lot of engines, if they saw ‘Good morning world’ as a tweet, they would get that wrong,” Catlin said. “They would say that’s a good thing, and it’s in fact not. ‘Good’ in that context is not expressing sentiment, it’s just a contextual statement.”

So the computer must run a preliminary filter on each incoming message to look for catches like this and determine whether the message actually expresses sentiment at all.
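A hedged sketch of such a pre-filter appears below; the greeting patterns are illustrative assumptions, not an actual engine’s rule set, and a real system would screen for far more than greetings.

```python
# Minimal sketch of a pre-filter: decide whether a message expresses sentiment
# at all before scoring it. Greeting patterns here are illustrative assumptions.

import re

GREETING_PATTERNS = [
    re.compile(r"^good (morning|night|evening|afternoon)\b", re.IGNORECASE),
    re.compile(r"^(hi|hello|hey)\b", re.IGNORECASE),
]

def expresses_sentiment(text):
    """Return False when 'good', 'like', etc. are just conversational context
    rather than an opinion about something."""
    stripped = text.strip()
    if any(p.match(stripped) for p in GREETING_PATTERNS):
        return False
    # A real engine would also check for opinion targets, subjectivity cues,
    # questions, and so on; this sketch only screens out obvious greetings.
    return True

for tweet in ["Good morning world", "The debate was good"]:
    print(tweet, "->", "score it" if expresses_sentiment(tweet) else "skip: no sentiment")
# Good morning world -> skip: no sentiment
# The debate was good -> score it
```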

The limits of social media inference

Even when scientists nail all the language-specific problems, we still have the question of what inference we can actually draw from the data. To date, most of these studies use data from Twitter because it is publicly available through approved vendors and rather easy to get, for a price.

But Twitter is not representative of public opinion. These technologies will be more telling when we can apply them to data from bigger networks, like Facebook, or across the Web as a whole.

Online sentiment, even if accurately measured, does not predict offline action. Talking Points Memo said that when it comes to politics, tweets are the digital version of campaign lawn signs: they’re nice to have on your side, but they don’t equate to votes:

“Journalists sometimes use Twitter to help make sense of where voters stand and why — the ultimate goal of political coverage. Spoiler alert: it doesn’t.

Widgets like the Washington Post’s Mention Machine and Time magazine’s Campaign Buzz Meter track mentions on Twitter and other social media to determine who’s up and who’s down on any given day of the campaign. As of [yesterday] afternoon, Rep. Ron Paul (R-TX) and former Massachusetts Gov. Mitt Romney were jockeying for the top spot on TIME’s Campaign Buzz Meter. But Paul has yet to win a GOP primary contest.”

Despite all these issues, Canepa of IBM has a lot of ideas and hopes about how journalists could use the technology in the future.

It could help close the feedback loop between journalists and the audience, so a computer can analyze all the social media and blog feedback on a reporter’s story, summarize it for her and highlight anything worthy of a follow-up or response.

It could also power what Canepa called a “discovery portal” that journalists could use when preparing to write a story.

“In this discovery portal, we have the idea of putting people, places, things onto a palette and using the analytics tool to look at any relationships between those things so you can perhaps come up with a story angle or investigative finding that you otherwise may have never gotten,” he said. “The ability to begin to find relationships, to distill sentiment, to understand context, all of these attributes we think have tremendous ramifications for…content production and distribution.”

Jeff Sonderman (jsonderman@poynter.org) is the Digital Media Fellow at The Poynter Institute. He focuses on innovations and strategies for mobile platforms and social media in…