Articles about "Misinformation"



Why the press can’t help but speculate about the missing Malaysia Airlines flight

Did you hear?

A piece of missing Malaysia Airlines Flight MH370 was spotted by the Vietnamese Navy. The plane made an emergency landing in Nanning, China. It may be in North Korea. Or taken over by Iranian terrorists. No, it just completely vanished.

If you want to know how much people are thinking and obsessing about a story, just count the rumors labeled as reporting, the baseless “expert” punditry, and the conspiracy theories it inspires. By that measure, the missing jet is occupying more of our collective consciousness than any other story in the world right now. (Take that, Putin.)

It’s a global story because it connects so many countries: the departure and destination cities, the flight path, and the many nationalities represented among the passengers.

A woman in Sepang, Malaysia, Sunday, in front of a display showing information about possible flight positions of Flight 370 (AP Photo/Wong Maye-E)

The insatiable desire for information is partly because the situation is so mysterious. Couple that with the fact that the flow of new, credible details comes in the form of a drip rather than a firehose. Now mix it all together with fears of terrorism and airplane crashes and you have a perfect recipe for rumor and conspiracy theories. (There is something about air crashes, in particular, that brings out the worst in conspiracy theories.)

A major factor driving speculation is that the central character is missing. The story is the fact that the plane is gone. There is nothing to train a live camera on, to tweet in real-time, or crowdsource.

This story is about something that has disappeared — and what a terrible mismatch that is for the way the news cycle, social media and the human brain work.

No new facts

The result, along with all of the above-mentioned rumors that make their way into the press, is that you get segments like this one on CNN, where experts are trotted out to fill airtime by speculating (emphasis mine):

RICHARD QUEST, CNN: … So, many questions, none of which, frankly, we’re going to be able to answer for you tonight. But many questions are raised by this new development. For instance, not least, how can a plane go like this and no one notices it’s off flight plan?

The former director general of IATA says he finds it incredible that fighter jets were not scrambled as soon as the aircraft went off course. I asked Giovanni Bisignani for his gut feeling about what happened to the plane.

GIOVANNI BISIGNANI, FORMER DIRECTOR GENERAL, IATA: It is difficult to imagine a problem in a structure failure of the plane. The Boeing 777 is a modern plane, so it’s not the case. It’s not a problem of a technical problem to an engine because in that case, the pilot has perfectly time to address this and to inform the air traffic control.

We have no answers for you, so let’s bring in someone who also has no answers or new information, but has a credential that seems relevant. Let’s get a “gut feeling” because we have nothing else.

With only so much actual news to portion out, you end up being proffered the worst kind of angles and programming. Anyone with any kind of connection can expect a phone call. If they can come up with a view of What It Means, then sign ‘em up. Come on down, John Little from the Museum of Flight in Seattle.

“We’ve built this facade that man is in control of all things,” he told The New Republic, (which put an awful headline on the story). “Well, look at this: a Boeing 777 falls off the map and half the world is looking for it and they can’t find anything! If this can happen, well, then, maybe there is a place for the individual.”

Human nature

Though I hate the rumors and overreaching on a story like this, I see the fundamentally human process driving empty quotes and unsubstantiated reporting.

This is how we cope with big, uncertain events: we grasp for ways to relate, to process them through our own lens. And when there is a dearth of information, we push, prod, search and speculate. We fill the empty air of cable news and conversation with anything.

In searching for answers, we reach for anything that can seemingly make sense of what we don’t know. We also engage in this process together, by sharing and communicating.

“[W]e are fundamentally social beings and we possess an irrepressible instinct to make sense of the world,” wrote Nicholas DiFonzo, a professor at Rochester Institute of Technology and author of “The Watercooler Effect: A Psychologist Explores the Extraordinary Power of Rumors.” “Put these ideas together and we get shared sensemaking: We make sense of life together. Rumor is perhaps the quintessential shared sensemaking activity. It may indeed be the predominant means by which we make sense of the world together.”

In this respect, the media coverage mimics our innate desire to fill the empty space with something. People are demanding answers. They are hitting refresh again and again, looking for new information. (Here I am, doing much the same thing — except I’m covering the coverage of the thing that’s disappeared, and trying to make sense of why we’re seeing so much rumor and speculation.)

David Gallo helped lead the team that solved the mystery of an Air France flight that went missing in the Atlantic Ocean in 2009. Speaking to PRI, he cited the pressure and scrutiny that those leading the current search effort will face. But he could have just as easily been talking about the editors, reporters and producers trying to meet audience demands.

“It’s horrible,” he said. “The pressure is building from the families and friends and loved ones of the victims. The pressure is building from the international community. The questions are, ‘Are you confident? Are you hiding something? When are we going to have some answers?’ ”

The difference is the search team won’t proffer answers or speculation until they have something to share.

Tom McGeveran, the co-founder and editor of Capital New York, held up this kind of restraint as a model for the press:

It is simple advice. To practice it, journalists have to channel their innate sense-making and social tendencies into real reporting, and mix in a measure of restraint.

Put another way: Be more of a reporter, and less of a human.


Researchers have 3 tips to help journalists debunk misinformation

Having the truth on your side is necessary when trying to debunk misinformation.

But it’s far from enough.

The truth alone does not change minds or create belief. Convincing people of your argument, or correcting someone else’s lies, requires more than unearthing the truth and reciting the facts.

So what’s a journalist to do?

Brendan Nyhan and Jason Reifler continue to produce work aimed at identifying the best and most effective ways to combat political misinformation. (I also recently wrote about their research into whether politicians fear fact checkers.)

Nyhan, a professor at Dartmouth, and Reifler, a lecturer at the University of Exeter, today added a bit more to their body of debunking work. They published a research paper, “Which Corrections Work,” with the New America Foundation that contains three specific pieces of advice for how journalists can best correct misinformation. That advice is coupled with related experiments they conducted to reinforce the tips.


5 projects take different approaches to promote fact-checking, fight misinformation

This week a different kind of hack day took place at the MIT Media Lab.

These gatherings are usually filled with software developers and other technical folks. Wednesday’s hack day had its share of geeks, but there were also social scientists, journalists, NGO workers and students from Harvard’s Kennedy School, among others.

It was an interdisciplinary mix, and a nice complement to the Truthiness in Digital Media symposium held at Harvard the day before. (See this post from me about an interesting piece of data shared in one of the presentations.)

On Tuesday we discussed problems and outlined possible solutions. Wednesday’s goal was to sketch out ideas that, even on a small scale, could attack some of the challenges.

The result was a variety of suggested projects that looked at misinformation in digital media. Each of the groups will post about its ideas on the event’s blog, but Andrew Phelps at the Nieman Journalism Lab has already written a good overview.

We absolutely need more projects in this area, and I hope the half-day of activity at MIT can spark new efforts. At the same time, I came away with an appreciation for the efforts already underway. Progress is happening.

Below is a quick breakdown of five projects focused on fact-checking and misinformation, all of which were represented in Cambridge this week. Even better, this week marked the first time they were all in the same place at the same time. By the end of the two days, there was a collective sense that they should work together as much as possible — which I believe is the most important development of the event.

Five projects

LazyTruth: I’ll have more on this project when it fully launches soon, but the concept is simple and I think very useful: LazyTruth is a Gmail gadget that will alert you if you receive an email containing a fake chain letter, urban legend, or other type of documented misinformation. That way you won’t be tempted to send money to that Nigerian prince. It also can nudge you to reply to the sender with the appropriate debunking information.

The trio of Matt Stempeck, Justin Nowell and Stefan Fox are working on LazyTruth. One aspect of this system that strikes me as particularly important is that email is frequently used by political misinformation campaigns, not to mention scams and other nefarious information-based attacks. Yet it’s difficult to track the spread of these emails and to fight the misinformation — one reason email is such an effective tactic. If LazyTruth can gain a good-sized user base, it can gather data about the spread of these campaigns. That’s just as important as its ability to debunk misinformation.
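The matching step such a gadget needs can be sketched in a few lines. This is a rough illustration, not LazyTruth’s actual implementation: the hoax “database” entries and similarity threshold are invented for the example.

```python
import difflib

# Tiny stand-in database of documented hoaxes; a real tool like LazyTruth
# would draw on curated debunking sources. Entries here are invented.
KNOWN_HOAXES = {
    "nigerian prince fortune transfer": "Advance-fee scam; do not reply.",
    "forward this email and bill gates will pay you": "Chain-letter hoax.",
}

def check_email(body: str, threshold: float = 0.6) -> list:
    """Return debunking notes for any known hoax the email body resembles."""
    lowered = body.lower()
    hits = []
    for signature, debunk in KNOWN_HOAXES.items():
        # Flag exact substrings, or near-matches via fuzzy similarity.
        similar = difflib.SequenceMatcher(None, signature, lowered).ratio()
        if signature in lowered or similar >= threshold:
            hits.append(debunk)
    return hits
```

A deployed version would also log matches over time, which is where the data about how misinformation campaigns spread would come from.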

TruthGoggles: Dan Schultz has already managed to attract a lot of interest for this project. Last year Nieman Lab did a great overview of what Schultz was trying to build as part of his master’s thesis at the MIT Media Lab. (I often edited Schultz’s posts when I was the managing editor of PBS MediaShift.) From there, the story spread to NPR and other outlets. I even heard him interviewed one Saturday afternoon on CBC Radio in Canada.

Why all the interest? Truth Goggles is a browser plugin that will tell you if something you’re reading online includes claims or information that have been fact-checked. Let’s say you’re reading a story that refers to Mitt Romney’s claim that Iran released its American hostages the same day Ronald Reagan was sworn in as president because of Reagan’s “peace through strength” policy. Since PolitiFact rated that statement as “Pants on Fire,” Truth Goggles would alert you that the statement had been checked and provide you with a link to the verdict. Schultz is due to graduate in May, so by then we should have something to look at and try out, though it will take more time to build out the full tool.
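The core mechanic can be sketched simply: scan a page’s text for claims that appear in a fact-check database and attach the verdict. The pattern below reuses the article’s Romney example, but the database wiring is illustrative, not Truth Goggles’ code or PolitiFact’s real data feed.

```python
import re

# Illustrative claim database: each entry pairs a pattern for a checked
# claim with the verdict it received. Contents are for the sketch only.
FACT_CHECKS = [
    (re.compile(r"iran released its american hostages", re.IGNORECASE),
     "Pants on Fire"),
]

def annotate(text: str) -> str:
    """Append a fact-check verdict after any checked claim found in text."""
    for pattern, verdict in FACT_CHECKS:
        # Default arg pins the verdict for this pattern inside the lambda.
        text = pattern.sub(
            lambda m, v=verdict: f"{m.group(0)} [checked: {v}]", text)
    return text
```

A browser plugin would do the same scan against the live page and render the verdict as a link rather than inline text.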

Fact Spreaders: This project is based at the University of Michigan and led by professor Paul Resnick. The concept is that more and more claims are being checked, but they aren’t being spread to a large audience. So Resnick and his team are working to build a site and community that will help spread accurate information and verified claims.

The idea is to recruit a community of users who would reach out to people on Twitter and other social networks when they see them spreading misinformation. (For instance, a Fact Spreader could send a reply tweet explaining that the information someone just linked to has been debunked.) Resnick and his team also want to engage the crowd to help identify claims on social media and elsewhere that should be checked. That could act as a crowdsourced assignment desk for fact-checkers like PolitiFact.

PolitiFact API: All of the aforementioned projects require a database of fact checks. There’s no way Truth Goggles or LazyTruth can operate if they don’t have content to pull in and display to users. The good news, which I learned this week for the first time, is that PolitiFact has an API. This means their content can be accessed and used in applications (provided, of course, that these applications adhere to the API’s terms of service).
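Consuming such an API mostly means parsing JSON into application objects. The payload shape and field names below are assumptions made for illustration; the real API’s schema and terms of service are defined by PolitiFact.

```python
import json

# A payload shaped the way a fact-check API *might* respond; these field
# names are assumptions, not PolitiFact's actual schema.
SAMPLE_RESPONSE = json.dumps([
    {"speaker": "Candidate A", "claim": "Claim one", "ruling": "Pants on Fire"},
    {"speaker": "Candidate B", "claim": "Claim two", "ruling": "True"},
])

def filter_by_ruling(payload: str, ruling: str) -> list:
    """Return the fact-check entries that carry the given ruling."""
    return [entry for entry in json.loads(payload) if entry["ruling"] == ruling]
```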

On a related note, the hack day group that included Bill Adair of PolitiFact sketched out how fact-checks and verdicts could pop up onscreen when political campaign ads aired on interactive TV. That kind of application would also require access to the PolitiFact API; it shows how such APIs could enable a new realm of applications for fact-checking.

Truthy: I previously wrote about this project at Indiana University, as did CNN. Its team is working to track how memes (and misinformation) spread on Twitter. They had success with an initial focus on political misinformation and astroturfing, and they’re now expanding to other forms of memes and misinformation.

The Truthy website shows the diffusion networks of recent Twitter memes and describes how things spread on that network. They also invite people to flag suspicious memes. This crowdsourcing adds muscle to the system, which uses a “sophisticated combination of text and data mining, social network analysis, and complex networks models.”
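At its simplest, tracking diffusion means building a graph of who retweeted whom and measuring reach from the original poster. Here is a minimal sketch of that idea with invented data; Truthy’s actual analysis layers text mining and network models on top of this kind of structure.

```python
from collections import defaultdict, deque

def reach(origin: str, edges: list) -> int:
    """Count the accounts a meme reached downstream of `origin`,
    given (retweeter, source) edges."""
    children = defaultdict(list)
    for retweeter, source in edges:
        children[source].append(retweeter)
    # Breadth-first walk outward from the original poster.
    seen, queue = set(), deque([origin])
    while queue:
        node = queue.popleft()
        for follower in children[node]:
            if follower not in seen:
                seen.add(follower)
                queue.append(follower)
    return len(seen)

# Invented diffusion data: b and c retweet a; d retweets b; e retweets d.
retweets = [("b", "a"), ("c", "a"), ("d", "b"), ("e", "d")]
```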

Need for collaboration

It’s not hard to see how all of these projects could be working together. In fact, that was the focus of the hack day group I joined. We sketched out the basics of what a credibility API would look like, and how it could help feed the above projects and new ones. In a blog post about our efforts, LazyTruth’s Stempeck explained why it made sense to enable everyone to share data:

There could be many benefits to working together in a federated network. Talented developers are prevented from experimenting freely in this space because of the high barrier to entry that is hiring a team of researchers. Fact-checking outlets can’t possibly know, empirically, which internet fire most needs a dousing. The audience for traditional fact-checkers is also limited to their relatively small web and print readership. Together, we could do much more, and at greater scale.

Amen to that. We need more projects and experiments, but let’s also be sure to get these people to talk to each other and collaborate so that their tools can have the most impact.

Correction: This post originally stated that FactCheck.org has an API. In fact, they do not have an API for their content.


Visualized: Incorrect information travels farther, faster on Twitter than corrections

Many times on Twitter I’ve witnessed what I call The Law of Incorrect Tweets:

Initial, inaccurate information will be retweeted more than any subsequent correction.
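To put a toy number on that imbalance, you can compare the two retweet counts directly. The figures in the example are invented, not drawn from any real dataset.

```python
def correction_reach_ratio(error_retweets: int, correction_retweets: int) -> float:
    """Share of the erroneous tweet's retweet volume that the
    correction managed to match."""
    if error_retweets == 0:
        return 1.0  # nothing to correct against
    return correction_retweets / error_retweets

# Invented example: an error retweeted 800 times against a correction
# retweeted 60 times means the correction matched 7.5% of the spread.
```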

The goal should be to make the correction as viral as the mistake. But that’s a challenge, and Tuesday at Harvard’s Truthiness in Digital Media conference, I saw (for the first time) what it looks like when we fail.

The presentation by Gilad Lotan, the vice president of research and development for SocialFlow, included a chart that compared the Twitter traffic of an incorrect report to the traffic for the ensuing correction. It’s the Law of Incorrect Tweets visualized:

The data for that chart comes from one of three case studies he shared in this blog post. It focused on an incorrect tweet by NBC New York in November that said the NYPD had ordered its helicopter to move away from the site of the Occupy Wall Street protests:

That report was soon corrected by the NYPD Twitter account:

NBC New York and the main NBC News accounts tweeted out corrections, but, as you can see from Lotan’s chart, the new information did not reach as many people.

“People are much more likely to retweet what they want to be true, their aspirations and values,” Lotan wrote.

He also noted that he has seen corrections beat out incorrect information on Twitter, which is encouraging and suggests my “law” is malleable:

Does misinformation always spread further than the correction? Not necessarily. I’ve seen it go either way. But I can safely say that the more sensationalized a story, the more likely it is to travel far. Many times the story about misinformation is what spreads, rather than the false information itself (for example: the Steve Jobs false death tweet which cost Shira Lazar her CBS gig).

If you understand the dynamic, you may be more likely to change it. One cause: incorrect information is bound to be more provocative and interesting than a correction. Another: too little attention is paid to making corrections on Twitter.

Journalists need to make the effort to contact anyone who retweeted the incorrect information and make them aware of the correction; it also helps to ask them to retweet the correction to their followers. I offer other advice for correcting tweets here.
