Articles about "Verification"



Amnesty International launches video verification tool, website

Amnesty International is in the verification game and that is good news for journalism.

When journalists monitor and search social networks, they’re looking to discover and verify newsworthy content. Amnesty utilizes the same networks and content — but its goal is to gather and substantiate evidence of human rights abuses.

“Verification and corroboration was always a key component of human rights research,” said Christoph Koettl, the emergency response manager in Amnesty USA’s Crisis Prevention and Response Unit. “We always had to carefully review and corroborate materials, no matter if it’s testimony, written documents or satellite imagery.”

Now they’re “confronted with a torrent of potential new evidence” thanks to social networks and cell phones. As with their counterparts in newsrooms, human rights workers and humanitarian organizations must develop and maintain skills to verify the mass of user-generated content.

So it’s no surprise that Amnesty International today launched a new website and tool to help human rights researchers and others with the process of video verification. The site is Citizen Evidence Lab, which offers step-by-step guidance on how to verify user-generated video, as well as other resources. The tool is the YouTube Data Viewer.

The development of the site and tool was led by Koettl, who is one of Amnesty’s lead verification experts. (He also authored a case study about verifying video for the Verification Handbook, a free resource I edited for the European Journalism Centre.)

Here’s an introduction to the site:

YouTube Data Viewer

The YouTube Data Viewer enables you to enter the URL of a YouTube video and automatically extract the correct upload time and all thumbnails associated with the video. These two elements are essential when verifying a YouTube video, and they’re difficult to gather from YouTube itself.

The upload time is critical in helping determine the origin of a video, but finding it can be difficult — it’s not clearly displayed on the video page. The thumbnails are useful because you can plug them into a reverse image search tool such as Google Images or TinEye and see where else online these images appear.

“Many videos are scraped, and popular videos are re-uploaded to YouTube several times on the same day,” said Koettl. “So having the exact upload time helps to distinguish these videos from the same day, and a reverse image search is a powerful way to find other/older versions of the same video.”
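For the curious, here is a rough sketch of what the tool automates: pulling the upload timestamp and thumbnail URLs through the public YouTube Data API v3. The API key and helper name are placeholders for illustration; this is not Amnesty's code.

```python
# A rough illustration, not Amnesty's implementation: fetch the exact
# upload time and thumbnail URLs via the YouTube Data API v3.
import requests

API_KEY = "YOUR_API_KEY"  # assumption: you have a Data API v3 key

def upload_time_and_thumbnails(video_id):
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "snippet", "id": video_id, "key": API_KEY},
        timeout=10,
    )
    items = resp.json().get("items", [])
    if not items:
        raise ValueError("video not found")
    snippet = items[0]["snippet"]
    upload_time = snippet["publishedAt"]  # exact upload timestamp, in UTC
    # Thumbnail URLs can be fed to TinEye or Google Images reverse search.
    thumbnails = [t["url"] for t in snippet["thumbnails"].values()]
    return upload_time, thumbnails
```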

The goal is to offer non-technical users a tool and guidance to help them verify video, without requiring an expert such as Koettl. He said now his colleagues “will be able to do this basic research themselves by using the new tool, so not everything has to go through me for a basic assessment.”

The same goes for journalists. The YouTube Data Viewer should join tools such as an EXIF reader, reverse image search, Spokeo, and Google Maps/Earth as one of the core free tools in the verification toolkit. (For a list of other tools out there, see this section of the Handbook.)

A guide to video verification

Citizen Evidence Lab is also a valuable addition to verification training. Koettl has created a series of videos that offers a step-by-step walkthrough for verifying user-generated video. This is a detailed and easy-to-follow guide, offered by someone who practices this as part of his daily job. (The videos are geared toward human rights workers, but the techniques apply to journalists.)

For Koettl, the tool and the videos are an important step in helping spread the skills of digital content verification within his profession.

“I believe in a couple of years from now, verification of citizen media will be part of the core skills of any human rights researcher, as a consequence of better verification protocol and tools, as well as dedicated training,” he said. “Subsequently, we will only need dedicated staff for more advanced analysis, including more technical and forensic assessments.”

I hope this same dynamic begins to emerge in more newsrooms, whereby basic verification knowledge and skills are spread among as many people as possible, supported by a smaller group of colleagues with specialized expertise.


Mobile trends to watch in second half of 2014; plus, a newsgathering guide to Tweetdeck

Here’s our roundup of the top digital and social media stories you should know about (and from Andrew Beaujon, 10 media stories to start your day, and from Kristen Hare, a world roundup):

— At Poynter, Adam Hochberg explores in depth Gannett’s three-year CMS overhaul to “replace the existing systems and serve every Gannett newsroom – from USA Today to KHOU-TV in Houston to the Fort Collins Coloradoan.”

— Frédéric Filloux runs down three mobile trends to watch for the rest of 2014, including questions about what news sites should do about the market of Android users — which is bigger than the iOS market but less lucrative.

— Joanna Geary, Twitter UK’s head of news, visited the Wall Street Journal in June to share tips on how to use Tweetdeck to gather news. Sarah Marshall turned them into a handy guide.

— Lots of executives have left Twitter lately, Mike Isaac and Vindu Goel write at The New York Times Bits blog, but the company has kept things stable in one area: its advertising team.

— More Poynter digital stories you might have missed last week: Don’t get fooled by fake hurricane photos this summer, how NPR built its Civil Rights Act interactive, and why the Tulsa World’s new sports sites link prominently to competitors.



Don’t get hosed by fake hurricane photos this year

As far as I can tell, the photos circulating of lightning hitting New York Wednesday night are legit.

But as the U.S. hurricane season begins this weekend with Arthur’s approach, it’s a good time to remember that hoaxers, as Craig Silverman wrote during Hurricane Sandy in October 2012, “love nothing more than getting the press to share their handiwork.”

Often, a reverse image search can help you root out bogus pictures. In a chapter in the Silverman-edited Verification Handbook, BBC editor Trushar Barot shared his organization’s four-step process for verifying user-generated content:

  1. Establish the author/originator of the image.
  2. Corroborate the location, date and approximate time the image was taken.
  3. Confirm the image is what it is labeled/suggested to be showing.
  4. Obtain permission from the author/originator to use the image.

One case study in the book looks at photos from Sandy that purported to show sharks swimming in a New Jersey street. Tom Phillips, now with BuzzFeed, shows how he and Atlantic editor Alexis Madrigal tried to verify the images.

“Especially in rapidly developing situations, verification is often less about absolute certainty, and more about judging the level of acceptable plausibility,” Phillips writes. “Be open about your uncertainties, show your work, and make it clear to the reader your estimate of error when you make a call on an image.”

Related training: Getting It Right: Accuracy and Verification in the Digital Age


New service will rate the authenticity of digital images

By the time an image makes its way online, it could have been opened and processed in any number of applications, passed through various hands, and been remixed and manipulated.

Today a new image hosting service, Izitru, is launching to give people new ways to certify the authenticity of a digital image. It’s also a tool that journalists can use to help verify images.

The Izitru website and iOS app can “distinguish an original JPEG file captured with a digital camera from subsequent derivations of that file that may have been changed in some way,” according to the company.

It mixes forensic image analysis with elements of crowdsourcing and human oversight. Izitru also has an API that will enable other services to integrate its technology.

Confirming provenance

The service is a new offering from Fourandsix Technologies, Inc., a company I previously wrote about. It was founded by Kevin Connor, a former vice president of product management for Photoshop, and Dr. Hany Farid, a leading image forensics expert. Their initial product, FourMatch, was a verification extension for Adobe Photoshop.

Anyone can use Izitru as a place to host their images and have their photos subjected to a series of six forensic tests that result in a publicly visible “trust rating.” The Izitru iOS app can also take photos and upload them directly to the site. Watch it in action:

(Video: “izitru: Real photos. Without a doubt.” from Fourandsix on Vimeo.)

One important note about the six tests the site performs: they are geared toward “proving that a file is the original from a camera, rather than trying to prove it has been manipulated,” Connor said. It’s not about determining whether something has been Photoshopped.

These automated tests help with one important element of photo verification: provenance. You want to know who took the image and whether it came directly from a digital camera. Shooting with the Izitru app ensures the photo is an original from the phone’s camera. Journalists can also use the Izitru website to upload and test a photo.
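To make the idea of automated provenance testing concrete, here is a toy illustration (emphatically not one of Izitru's six tests, which aren't public) of the kind of signals such tests can examine, using the Pillow imaging library:

```python
# A toy sketch of provenance signals, not Izitru's actual tests.
# Cameras write EXIF Make/Model tags and often use non-default JPEG
# quantization tables; editing software frequently strips or replaces both.
from PIL import Image

def crude_provenance_signals(path):
    img = Image.open(path)
    exif = img.getexif()
    return {
        # EXIF tag 271 is Make, 272 is Model; often absent in re-saved files
        "has_camera_exif": bool(exif.get(271) or exif.get(272)),
        # JPEG quantization tables (empty for non-JPEG or stripped files)
        "num_quant_tables": len(getattr(img, "quantization", {}) or {}),
    }

# Usage: crude_provenance_signals("photo.jpg")
# A missing Make/Model is only a weak hint, never proof of manipulation.
```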

“From a journalism standpoint, one of the challenges … is that once files get distributed on social media sites, they automatically get re-compressed and modified to the point that we can’t verify them any more,” Connor told me in an email.

Images are also often scraped and altered, making it incredibly difficult to determine the original creator.

Others have recognized this problem. Scoopshot, a crowdsourced photography service, last year launched a photography app with an authenticity rating system. Vice journalist Tim Pool recently launched Taggly, an app that watermarks and attributes images before they get shared online.

A sample image uploaded to Izitru.

A ‘trust rating’ for images

Connor said that with Izitru they want to “encourage people to verify their important photos before they’re distributed. This uses an evolution of the same technology that is in our first product, FourMatch, but with the addition of five additional forensic tests.”

In addition to those tests, which result in a trust rating being added, anyone viewing the image can push a “challenge” button to indicate their view that the image may not be authentic. Enough challenges will result in Connor’s team doing additional analysis. If they determine the image has been manipulated, they will apply a No Trust rating. (The No Trust rating can only be applied after human analysis.)

Their ratings from high to low are: High Trust, Medium Trust, Undetermined File History, Potential File Modification and No Trust.

“Though we can’t commit to looking at every challenged file, we’ll certainly look at any file that gets a significant number of challenges,” Connor said.

He continued:

At that point, we can apply some of our other tests–such as clone detection, lighting analysis, etc. If we see a reason to adjust our rating, then we’ll do so and add a note to this effect on the page. If we see clear evidence the image content has been manipulated, then we’ll apply a No Trust rating. The Challenge button is a community feedback mechanism for us that will allow us to continue to refine our automated testing approach as well.

It’s only by challenging an image and getting the Izitru team to perform additional tests and analysis that possible manipulation can be detected.
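As a minimal sketch of that workflow, under the assumption of a simple challenge threshold (Izitru has not said how many challenges trigger a review), the logic might look like this:

```python
# A minimal sketch of the challenge-and-review workflow described above.
# Not Izitru's implementation; the threshold is an invented placeholder.
RATINGS = ["High Trust", "Medium Trust", "Undetermined File History",
           "Potential File Modification", "No Trust"]
CHALLENGE_THRESHOLD = 25  # assumption: the real threshold is not public

class HostedImage:
    def __init__(self, image_id, automated_rating):
        assert automated_rating in RATINGS
        self.image_id = image_id
        self.rating = automated_rating  # set by the six automated tests
        self.challenges = 0

    def challenge(self):
        """A viewer presses the 'challenge' button; returns True when the
        image has drawn enough challenges to warrant human analysis."""
        self.challenges += 1
        return self.challenges >= CHALLENGE_THRESHOLD

    def apply_human_verdict(self, manipulated):
        # 'No Trust' is applied only after a human analyst confirms it
        if manipulated:
            self.rating = "No Trust"
```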

“Unfortunately, the tests that detect specific signs of manipulation can be more open to interpretation, so they don’t currently lend themselves to automated usage by people who aren’t trained analysts,” Connor told me.

The Izitru iPhone app.

Competitive area

Connor acknowledged that the world of photo apps and upload sites is very competitive. People will need to first know Izitru exists, and then feel inclined to use it in the moment when they’re snapping that important or newsworthy image.

That’s why his team also built an Izitru API to enable other applications to connect to the service and take advantage of its analysis capabilities.

“With [the API], sites that are already getting volumes of images uploaded for sharing could integrate our tests and badge the most trustworthy images as they come in,” he said.

Since the product has only just launched, there aren’t yet any API integrations to share.

He did, however, say that “a stealth citizen journalism startup” has expressed interest in an integration.

It will be interesting to watch whether they can forge partnerships that begin to spread their trust rating, or if partners don’t see this as enough of a value add. Social networks and apps, for example, prefer to verify users rather than play any part in rating or verifying content.

Those aren’t the only possible partners, of course — but they are where images are shared and engaged with on a huge scale. Will any of them see an advantage in building in an additional trust layer?


Anthony De Rosa on verifying news: ‘I take in a lot and I put back out very little’

If some information is already out there, do you need to say so?

This is a conundrum faced by many journalists, though not everyone sees it as a conundrum.

For example, say media in Vietnam report news about a missing flight that is the subject of reports all over the world.

The report is attributed to a fellow news outlet, and an established one to boot, which in turn attributes it to the local navy. So, what do you do?

Well, that report was false, notes Circa editor-in-chief Anthony De Rosa in “The network effect of bad information,” a piece he wrote for Medium about the benefits of waiting — of not passing along everything you see and hear.

His concern is that so many journalists put restraint aside and push on:

The problem comes from people, the worst offenders being people ostensibly working as journalists, that share reports they haven’t followed up on, and they need to tell you about it right! now!

This piece builds on his contribution to the Verification Handbook that came out in January, and which I edited.

De Rosa is a great messenger of restraint, as he’s one of those journalists who seems to have a neuroconnection to news on Twitter. He surfaces an inordinate amount of chatter and information, but still finds a way to wait.

As he wrote in his case study for the Handbook:

Remember that the information on social media should be treated the same as any other source: with extreme skepticism.

For the most part, I view the information the same way I would something I heard over a police scanner. I take in a lot and I put back out very little. I use the information as a lead to follow in a more traditional way. I make phone calls, send emails and contact primary sources who can confirm what I’m hearing and seeing (or not).

That brings us back to the supposed navy report. Where is the original article that mentions it? Where was the navy’s statement? How did the Vietnamese paper get this information if no one else has it? Had the navy been releasing information this way recently? Those are essential questions to answer before relaying anything.

As a rule, local media are often a great resource on a story. They are on the ground, have connections to key sources, and know the terrain.

But they can be wrong, too. So they shouldn’t get a pass — or a pass along. You take what they say, and you follow your verification process to get answers.

But the tweet relaying that report was retweeted hundreds of times, and picked up by media as something out there that they needed to relay.

“This isn’t even reporting, this is second hand sharing of information, it’s the telephone game,” De Rosa writes. “The excuse for this type of reporting is ‘well, we sourced it to xyz,’ as if it’s ok to share information you didn’t bother to follow up on, shrug and say ‘they said it, not me’ when it’s knocked down.”

If you do the digging yourself and make the extra call, then you have something material to offer, rather than having to parrot someone else. In the meantime, that requires staying silent, or sharing that you’re not sharing something because it requires more reporting.

Showing restraint is difficult when you feel the rush of news. But in an information environment where everyone is reporting, retweeting and regurgitating, silence is a damn good strategy.

“During real-time news events, quality sources of information are sometimes characterized by what they aren’t reporting,” I offered in a previous post.

Because, as De Rosa writes, “simply attributing bad information doesn’t absolve you from passing along bad information.”

Related: Social media editor role expands to include fighting misinformation during breaking news


Announcing the release of the free Verification Handbook

A little over a year ago, I suggested to colleagues at Poynter that I write an e-book about verification.

It seemed to me an essential project, but also a reflection of the shift I’ve experienced in my focus for Regret the Error. When I first launched this blog as a standalone site in 2004, I was primarily finding and publishing corrections. Over time, I began to look at errors — their cause, prevalence and effect.

In the past three years, perhaps in part due to the spread of social media, smartphones and viral news, I’ve found myself more and more focused on verification.

With so much misinformation flowing fast and freely, and the ability for anyone to easily shoot, share and/or manipulate images and video, the skills of verification have never been more important. Yet it’s not taught on an ongoing basis in most newsrooms. And it’s not just journalists who need the skills and knowledge to sift real from fake — this is a basic, essential skill of news literacy. We all need it. It’s about cultivating a mindset to question and scratch away at the surface of what we see, hear and read.

Today, I’m proud to announce the publication of the free Verification Handbook. It provides news organizations and others with detailed and valuable guidance for verifying information. It’s live today as a website and we will soon release the handbook as a PDF and Kindle book, along with an Arabic translation. (More languages will follow, along with a print edition.) Sign up at the website to be notified when the other versions are released.


Video, verification, value: Why News Corp’s purchase of Storyful deserves your attention

I first met Storyful CEO Mark Little at the 2011 ONA Conference in Boston. We headed off to find a quiet corner so I could hear more about what exactly a “social news agency” was.

“Three words: it’s discovery, it’s verification, it’s delivery,” Little told me. “I think that’s essentially the three component parts of the new form of social news.”

I was amazed they were basically running an outsourced verification service for other news outlets.

“I see the need,” I wrote. “The question is, can verification form the basis of a viable business?”

On Friday, News Corp announced it paid $25 million to acquire Storyful. Question answered.


AP’s Navy Yard photos unrelated to shooting? D.C. man who says he was in them tells his story

Eric Levenson raises many good questions about two pictures AP pulled from the wire Monday. They purported to show bystanders helping a victim of the Navy Yard shootings. The photographer, Don Andres, told MSNBC: “I don’t know if it’s related” to the violence.

Mandy Jenkins of Digital First Media tweeted her doubts: “Still pretty confused as to how a wounded man was dragged to CVS from the Navy Yard, it’s at least 3 blocks away.” Other questions remained, as well: Why was there no sign of blood? Would people have picked up and moved a gunshot victim to the ground on a concrete street corner?

James Birdsall may hold some answers. Birdsall is a structural engineer at the Parsons Corporation, a firm with an office at 100 M St. SE in Washington, D.C. That’s very close to the CVS in front of which the photos appear to have been taken. Some of his colleagues saw a woman in a violet shirt pull an injured man from her car. Birdsall, who called Poynter to share his story, said he saw that she was performing chest compressions on the man and noticed she didn’t have an automated external defibrillator.

Birdsall grabbed his firm’s AED “and ran over to help out,” he said.

Birdsall was wearing a blue shirt and khaki pants Monday. He sent me a photo of himself after I requested one. His face isn’t visible in the pictures. But his story casts doubt on easy conclusions about the photos’ truth.


New research suggests it’s possible to automatically identify fake images on Twitter

One of the most challenging aspects of social media is figuring out how to efficiently verify information and stop the spread of misinformation during breaking news situations.

Hurricane Sandy gave rise to a variety of efforts to identify and debunk fake images circulating on social media. News outlets like The Atlantic, BuzzFeed and the blog “Is Twitter Wrong?” all attempted to verify images in as close to real time as possible, and spread word about the fakes.

But what if we could automate that process during crisis situations like Sandy?

A recent paper presented by researchers from the Indraprastha Institute of Information Technology, IBM Research Labs and the University of Maryland found that it was possible to identify tweets containing fake Sandy images with up to 97 percent accuracy.

The paper provides interesting data about the way fake images spread during Sandy, and — tantalizingly — it also offers a look at how one day we may be able to flag tweets as potentially containing false information.

The researchers, led by Aditi Gupta, a Ph.D. student at Indraprastha Institute of Information Technology, conclude that, “automated techniques can be used in identifying real images from fake images posted on Twitter.”

In his post about the paper, Patrick Meier, director of social innovation at the Qatar Foundation’s Computing Research Institute, also noted that his group has been working on automating the evaluation of tweets in related areas.

Here’s a look at their notable findings about how fake photos spread, and the promising new way we could automate the detection of fake images on Twitter.

For fake photos, the retweet is king

Not surprisingly, the researchers found the vast majority of tweets containing fake images were retweets (86 percent). For journalists, this reinforces the importance of verifying material before retweeting it. During a crisis situation, the rule of retweets not equaling endorsements doesn’t really apply.

What’s particularly notable is what the researchers saw when examining the follower network of the people who shared fake images. They concluded that the “social network of a user on Twitter had little impact on making these fake images viral.”

Why?

Because there was “just an 11% overlap between the retweet and follower graphs of tweets containing fake images.”

During Sandy, and in other crisis situations, many people — especially journalists — rely on Twitter’s advanced search function and also build lists to discover and track people on the ground, or with access to quality information. This means they go looking outside of the people they usually follow, and so they inevitably retweet people from outside of their Twitter social graph.

“Hence, in cases of crisis, people often retweet and propagate tweets that they find in Twitter search or trending topics, irrespective of whether they follow the user or not,” the researchers write.

This dynamic of out-of-graph retweets helps things spread rapidly, and it also illustrates how during breaking news events, social search can become more important than one’s social graph.
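For illustration, the overlap statistic can be computed along these lines; the edge representation and function name here are assumptions, not the researchers' code:

```python
# A small sketch of the overlap metric described above: the fraction of
# retweet edges (retweeter -> original poster) that also exist as
# follower edges. The paper's exact construction may differ.
def retweet_follower_overlap(retweet_edges, follower_edges):
    """Both arguments are iterables of (user_a, user_b) pairs."""
    retweets = set(retweet_edges)
    if not retweets:
        return 0.0
    return len(retweets & set(follower_edges)) / len(retweets)

# Example: an 11% overlap means 89% of retweets came from outside
# the retweeter's follower graph.
```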

A few people have big influence

Though people were not primarily retweeting fake images from accounts they follow, the retweets still came from a relatively small number of influential users:

Our results showed that top thirty users out of 10,215 users (0.3%) resulted in 90% of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very less (only 11%) to the spread of these fake photos URLs.

It seems that people who went searching for Sandy content still ended up retweeting the same things, possibly thanks to Twitter’s tendency to highlight “top Tweets” within certain hashtags. It wasn’t based on what they saw from the people they follow.

As the folks at Storyful like to say, there is always someone closer to the story. When news breaks, journalists and others go searching for new sources on social media. They add them to lists and retweet them. The act of doing so attracts attention to these sources, thereby bringing more retweets and attention. Once found and amplified, they become hard to ignore.

During a crisis, the sphere of influence can shift to reflect the emergence of new sources specific to the event. The best example of this is Twitter user Sohaib Athar, who happened to live not far from the place where Osama bin Laden was hiding out. He had few followers, but being in the right place at the right time made him instantly influential as a source when the raid went down.

Another related piece of data in the paper is that fake images did not begin to spread rapidly until roughly 12 hours after they were first introduced on Twitter.

The researchers note that “the sudden spike in their propagation via retweets happened only because of a few users.” So a fake will lie dormant until someone with the ability to amplify it comes along and retweets it. That’s what the fakers rely on, in fact.

Content is king

Now, on to the idea of detecting fake images. To test whether they could automate the process of detecting fake images, the researchers used algorithms to analyze two groups of information.

One set of information was about the specific Twitter user or account (“User Features”); the second related to the tweet’s content (“Tweet Features”). The paper lists the specific attributes that fell into each of these two feature groups.

The researchers then used two algorithms and these features to analyze a data set that included 5,767 tweets containing URLs of fake images, and 5,767 tweets containing real images.

They hoped to see if the system could determine which tweets had real photos, and which were offering fakes, and do so with a high degree of reliability.

In the end, they found that the combination of one type of algorithm (called the Decision Tree) with the Tweet Features delivered 97 percent accuracy in predicting fake images.

“Our results, showed that automated techniques can be used in identifying real images from fake images posted on Twitter,” they wrote.

They also concluded that “content and property analysis of tweets can help us in identifying real image URLs being shared on Twitter with a high accuracy.”

So if you have a Decision Tree algorithm working with the Tweet Features group, it can be very effective at spotting fake images. (One caveat they offered is that their high degree of accuracy may be in part due to the fact that so many tweets with fake images were retweets, and therefore had similar content.)
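To make the approach concrete, here is a minimal sketch of the idea: training a decision tree on simple content features. The specific features below are plausible stand-ins, not the exact set from the paper.

```python
# A minimal sketch of the paper's approach, not the authors' code:
# train a decision tree on simple content ("Tweet") features.
from sklearn.tree import DecisionTreeClassifier

def tweet_features(text):
    return [
        len(text),                       # tweet length in characters
        len(text.split()),               # word count
        text.count("!"),                 # exclamation marks
        text.count("?"),                 # question marks
        sum(c.isupper() for c in text),  # uppercase characters
        text.count("http"),              # embedded URLs
        text.count("@"),                 # user mentions
        text.count("#"),                 # hashtags
    ]

def train_fake_image_classifier(labeled_tweets):
    """labeled_tweets: iterable of (tweet_text, label) pairs, where
    label is 1 for a fake-image tweet and 0 for a real one."""
    X = [tweet_features(text) for text, _ in labeled_tweets]
    y = [label for _, label in labeled_tweets]
    return DecisionTreeClassifier().fit(X, y)
```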

Interestingly, User Features — such as the number of followers, the number of times an account is listed by other users, and the length of time that an account has existed — proved less predictive than the content of the message itself.

One of the fundamentals of verifying user-generated content is that you check the content (an image, a video, etc.) and the account/person who created it. For example, this is a cornerstone of how AP verifies user-generated content.

This is still a best practice, and the research in this paper does not argue that user/account details are irrelevant. Perhaps it inadvertently helps reinforce the message that even longtime Twitter users with many followers (and/or a verified account) will also fall for fake images. We certainly know that to be true. But it may also be true that when it comes to machine analysis, the content of a tweet is a better data set upon which to determine reliability of images.

The researchers plan to continue work in this area and, notably, they also talk about developing “a browser plug-in that can detect fake images being shared on Twitter in real-time.”


Associated Press purchases minority stake in Bambuser video service


The Associated Press has purchased a minority stake in Bambuser — a service that lets users watch, share and broadcast video.

AP Director of Global Video News Sandy MacIntyre will join Bambuser’s board as “a non-executive director,” the AP says. In a release about the move, MacIntyre said:

“User-generated video content of live and breaking news is the new frontier of news generation. … Bambuser is the proven platform for eyewitnesses around the world to stream their video content and has been invaluable to the AP over the past year, allowing us to access footage of verifiable breaking news stories that would simply not have been possible before. Moreover, we have always been deeply impressed by the proven technology from the small but very talented team at Bambuser.”
