July 30, 2012

The new issue of Nieman Reports includes an exhaustive cover package about the craft of verification.

The issue includes contributions from journalists at the Associated Press, the BBC, CNN and Storyful, among other outlets. Incoming New York Times public editor Margaret Sullivan also contributed a piece.

The articles deliver advice, case studies and insight into what it’s like to verify information flowing in from social networks and a multitude of other sources.

I was honored to write an introductory essay for the package. I attempted to outline some of the challenges and opportunities presented by today’s decentralized, democratized and socialized media world.

The entire issue is great reading, and I hope you dig in.

Reading through the different articles, I was struck by how much of the advice overlapped; the verification advice from the BBC’s User Generated Content Hub, Storyful’s Mark Little, AP’s Santiago Lyon and CNN iReport’s Lila King hits many of the same points.

That’s a good thing.

It suggests that, even amid the rapidly changing world of journalism and citizen media, new best practices are emerging and being applied across newsrooms. We have a sense of what’s working, and a productive way to do this work.

As a result, this issue’s collection of checklists, case studies and other tips is something of a blueprint for journalists looking to build their skills in this area, or for organizations trying to establish policies and procedures for vetting information. It’s a state-of-the-art crib sheet of best practices.

One key piece of advice for verifying user-generated content comes up over and over again; it may seem obvious, but it’s too often ignored: always contact the person who uploaded or provided the material. In other words, check the source as much as the information.

Here’s what David Turner wrote in his piece about the BBC’s UGC hub:

The golden rule, say Hub veterans, is to get on the phone with whoever has posted the material. Even the process of setting up the conversation can speak volumes about the source’s credibility: unless sources are activists living in a dictatorship who must remain anonymous to protect their lives, people who are genuine witnesses to events are usually eager to talk. Anyone who has taken photos or video needs to be contacted in any case to request their permission, as the copyright holder, to use it.

Another theme in the package is uncertainty. Verification may seem like an exact science, but it usually comes down to collecting as much information as possible and triangulating to make an informed decision. There is always the risk of making the wrong call, or acting too soon. Errors are a constant concern, and a fact of life.

Below, I’ve highlighted a few sections from the issue. Of course, I encourage you to read it for yourself.

Inside the BBC’s UGC team

One interesting point raised in Turner’s look at the BBC’s User Generated Content Hub is whether this brand of verification specialist will always be needed, or whether journalists and newsrooms will eventually catch up and spread these skills across the organization.

Here’s the response from BBC Social Media Editor Chris Hamilton:

“We’re seeing correspondents and producers building up their verification skills, and you’ve got to work out whether it’s something you need specialists for,” Hamilton says. But, he adds, “in some form you’ll always need them,” if only for the sake of efficiency.

Hamilton can, however, foresee a time when the size of the BBC’s Hub team might shrink as verification is “industrialized.” By that, he means that some procedures are likely to be carried out simultaneously at the click of an icon.

He also expects that technological improvements will make the automated checking of photos more effective.

Vetting & “validating” content

Storyful founder Mark Little offered a thoughtful take on what he and his team at the social media news agency have learned about vetting content from social media.

“Reporters are taught never to expose their own ignorance but ‘I don’t know’ is the starting point in any honest investigation of online communities and their content. Internally we consciously use the word ‘validation’ instead of verification,” Little wrote. “Our role is to provide the essential context that will allow newsrooms to make informed judgments about content that may never be completely free of risk.”

Best of all, Little’s piece includes two checklists that his team uses to validate content. There’s also an accompanying case study that shows how their team went about vetting a clip uploaded to YouTube.

Spotting photo manipulations

The Associated Press’ Santiago Lyon examined what it’s like to verify photos at a time when Photoshop and the tools of manipulation are widely available. He also wrote about how the AP educates its staffers and freelancers about internal policies regarding photo alteration. Here’s a look at how the AP spreads and enforces its policy:

One of the important steps to take in this new media ecology is to formulate a policy about what can and cannot be done to imagery. AP’s ethics code is quite clear:

AP pictures must always tell the truth. We do not alter or digitally manipulate the content of a photograph in any way. … No element should be digitally added or subtracted from any photograph. The faces or identities of individuals must not be obscured by Photoshop or any other editing tool. Minor adjustments in Photoshop are acceptable. These include cropping, dodging and burning, conversion into grayscale, and normal toning and color adjustments that should be limited to those minimally necessary for clear and accurate reproduction …

Even these statements need to be supported by training and guidance, as words alone cannot address every possible nuance in tonality, shading and other variables.

We currently have more than 350 staff photographers and photo editors at the AP, and in the past few years we have invested substantially in a global training program designed to teach photographers and editors the best practices for using Photoshop. We have provided clear guidance on how to accurately handle images and what to do when in doubt.

A vetting toolkit

Lila King of CNN’s iReport team wrote a piece about how iReport vets citizen content. (I did a post about that same topic earlier this year.) Part of her contribution addresses the role technology plays in the process, and how the CNN team relies on people spread around the world to help check content:

At iReport we use a variety of tools: CNN-ers in the field, subject-matter experts, affiliate networks, and local media. We cross-check what we learn from citizen journalists with other social media reports.

We also use technology, which can’t prove if a story is reliable but offers helpful clues. For example, we often check photo metadata to find timestamps and sometimes location data about the source photo or ask a photographer to share the previous or next 10 images from her camera. We also occasionally send an image through a service like TinEye to help determine whether it shows signs of alteration.

That’s the journalism part—figuring out what you need to add to a video or photo that you find on the Internet to make sense of it and to help someone else understand why it matters.
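For readers curious what the metadata check King describes looks like in practice, here is a minimal sketch, not drawn from the article, of pulling timestamps, camera details and GPS coordinates out of a photo’s EXIF data. It assumes the Python Pillow library is installed, and the filename is hypothetical:

```python
# Minimal EXIF check, assuming the Pillow imaging library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_photo_metadata(path):
    """Return the EXIF fields most useful for verification: time, device, location."""
    exif = Image.open(path).getexif()
    # Merge the base tags with the Exif sub-IFD (0x8769), where the original
    # capture time usually lives.
    tags = dict(exif)
    tags.update(exif.get_ifd(0x8769))
    info = {}
    for tag_id, value in tags.items():
        name = TAGS.get(tag_id, tag_id)
        if name in ("DateTime", "DateTimeOriginal", "Make", "Model", "Software"):
            info[name] = value
    # GPS coordinates, if the device recorded them, sit in their own IFD.
    gps = exif.get_ifd(0x8825)
    if gps:
        info["GPS"] = {GPSTAGS.get(k, k): v for k, v in gps.items()}
    return info

print(read_photo_metadata("uploaded_photo.jpg"))  # hypothetical file
```

Metadata can be stripped or forged, of course, so as King notes, it offers helpful clues rather than proof.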

As a bonus, also have a look at a slideshow from Nieman Reports’ Jonathan Seitz highlighting some of the biggest media mistakes of all time.

Related: How journalists can do a better job of correcting errors on social media

