July 5, 2013

One of the most challenging aspects of social media is figuring out how to efficiently verify information and stop the spread of misinformation during breaking news situations.

Hurricane Sandy gave rise to a variety of efforts to identify and debunk the fake images circulating on social media. News outlets like The Atlantic, BuzzFeed and the blog “Is Twitter Wrong?” all attempted to verify images in as close to real time as possible, and to spread word about the fakes.

But what if we could automate that process during crisis situations like Sandy?

A recent paper presented by researchers from the Indraprastha Institute of Information Technology, IBM Research Labs and the University of Maryland found that it was possible to identify tweets containing fake Sandy images with up to 97 percent accuracy.

The paper provides interesting data about the way fake images spread during Sandy, and — tantalizingly — it also offers a look at how one day we may be able to flag tweets as potentially containing false information.

The researchers, led by Aditi Gupta, a Ph.D. student at Indraprastha Institute of Information Technology, conclude that, “automated techniques can be used in identifying real images from fake images posted on Twitter.”

In his post about the paper, Patrick Meier, director of social innovation at the Qatar Computing Research Institute, also noted that his group has been working on automating the evaluation of tweets in related areas.

Here’s a look at their notable findings about how fake photos spread, and the promising new way we could automate the detection of fake images on Twitter.

For fake photos, the retweet is king

Not surprisingly, the researchers found the vast majority of tweets containing fake images were retweets (86 percent). For journalists, this reinforces the importance of verifying material before retweeting it. During a crisis situation, the rule of retweets not equaling endorsements doesn’t really apply.

What’s particularly notable is what the researchers saw when examining the follower network of the people who shared fake images. They concluded that the “social network of a user on Twitter had little impact on making these fake images viral.”

Why?

Because there was “just an 11% overlap between the retweet and follower graphs of tweets containing fake images.”

During Sandy, and in other crisis situations, many people — especially journalists — rely on Twitter’s advanced search function and also build lists to discover and track people on the ground, or with access to quality information. This means they go looking outside of the people they usually follow, and so they inevitably retweet people from outside of their Twitter social graph.

“Hence, in cases of crisis, people often retweet and propagate tweets that they find in Twitter search or trending topics, irrespective of whether they follow the user or not,” the researchers write.

This dynamic of out-of-graph retweets helps things spread rapidly, and it also illustrates how during breaking news events, social search can become more important than one’s social graph.
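To make that finding concrete, here is a minimal sketch, not drawn from the paper, of one way to measure that kind of overlap: treat retweets and follows as edges and check what fraction of retweet edges also appear in the follower graph. The edge lists and usernames below are made up for illustration.

```python
# Hypothetical edge lists; each pair (a, b) means "a retweeted b" or "a follows b".
retweet_edges = {("alice", "bob"), ("carol", "bob"), ("dave", "erin")}
follower_edges = {("alice", "bob"), ("frank", "bob")}

# Share of retweet edges where the retweeter also follows the original poster.
overlap = len(retweet_edges & follower_edges) / len(retweet_edges)
print(f"Retweet/follower overlap: {overlap:.0%}")  # 33% in this toy data
```

In the paper's Sandy data, that overlap was only about 11 percent, which is what supports the conclusion that the follower network mattered little to the spread of fakes.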

A few people have big influence

Though people were not primarily retweeting fake images from accounts they follow, the retweets still came from a relatively small number of influential users:

Our results showed that top thirty users out of 10,215 users (0.3%) resulted in 90% of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very less (only 11%) to the spread of these fake photos URLs.

It seems that people who went searching for Sandy content still ended up retweeting the same things, possibly thanks to Twitter’s tendency to highlight “top Tweets” within certain hashtags. It wasn’t based on what they saw from the people they follow.
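The “top 30 users drove 90 percent of retweets” figure is simply a concentration measure. Here is a toy sketch, with made-up counts, of how such a share can be computed; the account names and numbers are hypothetical, not the paper's data.

```python
from collections import Counter

# Hypothetical retweet counts per original poster of fake-image tweets.
retweets_per_account = Counter(
    {"user_a": 4000, "user_b": 2500, "user_c": 1500, "user_d": 90, "user_e": 10}
)

top_k = 3
total = sum(retweets_per_account.values())
top_share = sum(n for _, n in retweets_per_account.most_common(top_k)) / total
print(f"Top {top_k} accounts produced {top_share:.0%} of retweets")  # ~99% here
```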

As the folks at Storyful like to say, there is always someone closer to the story. When news breaks, journalists and others go searching for new sources on social media. They add them to lists and retweet them. The act of doing so attracts attention to these sources, thereby bringing more retweets and attention. Once found and amplified, they become hard to ignore.

During a crisis, the sphere of influence can shift to reflect the emergence of new sources specific to the event. The best example of this is Twitter user Sohaib Athar, who just happened to live not far from the place where Osama bin Laden was hiding out. He had few followers, but being in the right place at the right time made him instantly influential as a source when the raid went down.

Another related piece of data in the paper is that fake images did not begin to spread rapidly until roughly 12 hours after they were first introduced on Twitter.

The researchers note that “the sudden spike in their propagation via retweets happened only because of a few users.” So a fake will lie dormant until someone with the ability to amplify it comes along and retweets it. That’s what the fakers rely on, in fact.

Content is king

Now, on to the idea of detecting fake images. To test whether that process could be automated, the researchers used algorithms to analyze two groups of information.

One set of information described the specific Twitter user/account (“User Features”); the second related to the content of the tweet itself (“Tweet Features”).
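The article doesn’t reproduce the paper’s full feature list, but to give a flavor of what content-based “Tweet Features” can look like, here is an illustrative sketch. The specific features below are assumptions for the sake of the example, not the paper’s exact set.

```python
import re

def tweet_features(text: str) -> dict:
    """Illustrative content ("Tweet") features; not the paper's exact feature set."""
    return {
        "length_chars": len(text),
        "num_words": len(text.split()),
        "num_hashtags": len(re.findall(r"#\w+", text)),
        "num_mentions": len(re.findall(r"@\w+", text)),
        "num_urls": len(re.findall(r"https?://\S+", text)),
        "num_exclamations": text.count("!"),
        "has_question_mark": int("?" in text),
    }

print(tweet_features(
    "RT @someone: Shark swimming down a flooded street! #Sandy http://example.com/pic.jpg"
))
```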

The researchers then used two algorithms and these features to analyze a data set that included 5,767 tweets containing URLs of fake images, and 5,767 tweets containing real images.

They wanted to see whether the system could reliably determine which tweets linked to real photos and which were offering fakes.

In the end, they found that the combination of one type of algorithm (called the Decision Tree) with the Tweet Features delivered 97 percent accuracy in predicting fake images.

“Our results, showed that automated techniques can be used in identifying real images from fake images posted on Twitter,” they wrote.

They also concluded that “content and property analysis of tweets can help us in identifying real image URLs being shared on Twitter with a high accuracy.”

So if you have a Decision Tree algorithm working with the Tweet Features group, it can be very effective at spotting fake images. (One caveat they offered is that their high degree of accuracy may be in part due to the fact that so many tweets with fake images were retweets, and therefore had similar content.)
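For readers who want to picture the setup, here is a minimal sketch of a decision-tree classifier trained on tweet-level features, using scikit-learn. The feature function, example tweets and labels below are placeholders for illustration, not the paper’s data or code.

```python
import re
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def tweet_features(text: str) -> dict:
    # Same kind of illustrative content features as in the earlier sketch.
    return {
        "length_chars": len(text),
        "num_words": len(text.split()),
        "num_hashtags": len(re.findall(r"#\w+", text)),
        "num_urls": len(re.findall(r"https?://\S+", text)),
        "num_exclamations": text.count("!"),
    }

# Hypothetical labeled tweets: 1 = links to a fake image, 0 = links to a real image.
tweets = [
    "RT @hoaxer: Shark on the highway!! #Sandy http://example.com/shark.jpg",
    "Flooding on our street, photo taken a minute ago http://example.com/street.jpg",
]
labels = [1, 0]  # the study used 5,767 fake-image and 5,767 real-image tweets

# Vectorize the feature dicts and fit a decision tree on them.
model = make_pipeline(DictVectorizer(sparse=False), DecisionTreeClassifier(random_state=0))
model.fit([tweet_features(t) for t in tweets], labels)

print(model.predict([tweet_features("Another crazy shark photo!! http://example.com/fake.jpg")]))
```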

Interestingly, User Features — such as the number of followers, the number of times an account is listed by other users, and the length of time that an account has existed — proved less predictive than the content of the message itself.

One of the fundamentals of verifying user-generated content is that you check both the content (an image, a video, etc.) and the account/person who created it. For example, this is a cornerstone of how AP verifies user-generated content.

This is still a best practice, and the research in this paper does not argue that user/account details are irrelevant. Perhaps it inadvertently helps reinforce the message that even longtime Twitter users with many followers (and/or a verified account) will fall for fake images. We certainly know that to be true. But it may also be true that, when it comes to machine analysis, the content of a tweet is a better basis for determining the reliability of images.

The researchers plan to continue work in this area and, notably, they also talk about developing “a browser plug-in that can detect fake images being shared on Twitter in real-time.”
