NYT corrects: Bald eagles’ poop isn’t purple

A New York Times correction delves into the nitty gritty of bald eagle and osprey poop:

An earlier version of this article described bald eagles and ospreys incorrectly. They eat fish, and their poop is white; they do not eat berries and excrete purple feces. (Other birds, like American robins, Eurasian starlings and cedar waxwings, do.)


Monday, Aug. 18, 2014


BuzzFeed’s Ben Smith: ‘We didn’t fully think through’ the removal of old posts

Several months ago, roughly half a dozen early BuzzFeed writers were told to go back through their pre-2012 work and decide what they wanted to save. Anything they didn’t want to keep or update should be removed from the site, they were told.

“Go through your stuff and save what you care about,” is how BuzzFeed Editor-in-Chief Ben Smith summarizes the direction.

The result was that thousands of posts were removed from BuzzFeed, without any notice or disclosure. The removal of content was revealed by Gawker’s J.K. Trotter in two posts, the most recent of which outlined the previously unknown scale of the cull.

Smith was off in the woods on vacation when Trotter’s latest piece was published last week. This morning I spoke to him by phone, and he said BuzzFeed did not handle the purge of old content as well as it should have.

“I don’t suggest that this was a masterpiece of a really well-thought-through process,” he said. “In retrospect, what we should have done is we should have had a pop-up on that page when you hit that URL [of a removed article]. It’s stuff made at a time when people were really not thinking of themselves as doing journalism; they saw themselves as working in a lab.”

Smith views the decision to remove the old content, and the scrutiny and criticism of how it was done, as part of BuzzFeed “growing up.”

He said there was no central process for deciding what stayed and what was deleted; the early BuzzFeed writers decided for themselves. According to Smith, anything they wanted to save had to have its broken links fixed and any missing CMS fields filled in.

“Many of the older stories are technically broken and some of them were kind of done as inside jokes,” he said. Others had jokes that “didn’t age well,” or were early games made in Flash.

Smith believes the biggest category of removed articles was ones that were broken, either in their outside links or in the way they displayed thanks to CMS changes over time. Some stories did not meet the site’s standards for sourcing and attribution, but Smith suggested that was not the primary driver for removal in most cases.

Of course, we have to take his word for that, since all the evidence has disappeared and there is no list of what was disappeared.

“We didn’t fully think through as we should have what the reaction would be,” he said. “We should have thought a bit more about how this would be perceived.”

BuzzFeed is many things

Gawker’s revelation of the scale of deletion came just days after BuzzFeed announced a new investment of $50 million from VC firm Andreessen Horowitz. Investor Chris Dixon wrote about the investment on his blog, and described BuzzFeed as akin to a technology company. But it’s also a news operation, as Smith notes. And as CEO Jonah Peretti has said, for a long time they were nothing like a news org:

BuzzFeed is many things, and is trying to be even more.

Smith described BuzzFeed to me as a “media company that includes a news organization, but that is not solely that.”

It’s a young company in transition. Perhaps nothing has served to communicate the evolution — and contradictions — of BuzzFeed better than the Great Content Cull of 2014.

When it came to older content that was broken or not up to new standards (more on the latter below), Smith said they chose to deal with it in a way that reflected what BuzzFeed was at the time — meaning a lab and not a news organization.

“We returned to it in that spirit, and not as we are now,” he said.

I said in my view it’s problematic for them to apply old standards when part of the motivation for choosing what to delete was the application of new standards. You don’t get to apply new standards and then enforce them using old methods that fail to meet new standards.

Smith said he felt that was “valid criticism.”

He said that one factor that contributed to the problematic way this was done is that he’s always conscious of trying to keep the experimental heart of BuzzFeed alive, even as they transition into a new kind of organization.

“One of the big challenges for me has been maintaining that spirit, which is very central to how we operate,” he said. It can be “a big challenge to maintain that experimental spirit at the moment when a lot of people are looking at us, and it’s more intimidating to try something and fail.”

New standards

BuzzFeed deputy editor in chief Shani Hilton is leading a new initiative to set standards that will apply across the organization, and to also develop specific standards and guidelines for each BuzzFeed group.

Prior to that kicking off, Smith said that editorial standards were not put down on paper in one place; it was more of a matter of asking and expecting employees to adhere to basic ethical behavior.

“The way I have always preferred to operate as a reporter, and I think the ethos we started with when we were a much smaller shop starting in 2012, was that the rules are pretty clear: don’t lie, don’t cheat, don’t steal,” he said. “And that journalist ethics are [basic] ethics, and those are things that no one should do.”

After the Benny Johnson plagiarism scandal, Smith realized that “perhaps these things are not as obvious as I had assumed, and that they were harder to communicate in a bigger organization.”

So now Hilton is leading an initiative to develop clear, written standards for BuzzFeed editorial, whether you are a political writer, within the experimental BuzzTeam group, working for BuzzFeed Food, etc.

“We have a very ambitious journalism organization, but it’s not the only thing we do,” Smith said. “A lot of what we do in editorial is journalism, but we also do stuff that isn’t. We’re trying to make clear to ourselves and to others what are the standards that apply to things that are and that aren’t journalism.”

He expects there will be one set of global standards that apply to everyone, and specific guidelines for each group. He gave the example that a news journalist could never accept a blender from the manufacturer, but that the food team “has to figure out how long they can test the blender for, and whether they can include a photograph in the article,” among other things.

“Most readers see items one by one on the Internet and I think that basically it’s more important that we get things right across the board,” he said. “If you’re not doing news that’s not an excuse for misinforming people. We want to have uniformly high standards.”

The challenge right now is that in attempting to enforce new standards, BuzzFeed applied old, low standards. (And its new, higher standards are also clearly still in development.)

The new process for defining standards at BuzzFeed will hopefully result in some excellent policies. But while their existence is essential, it doesn’t guarantee anything.

People have to live ethics and standards. They must become part of the organizational culture. It’s about how people behave, and the decisions that are made.

That’s one fundamental reason why the undisclosed mass deletion of old BuzzFeed content struck a chord: it’s a concrete example of how BuzzFeed as an organization behaves — right now.

Smith said they are learning from this, especially when it comes to deleting content.

“If anybody didn’t know this before, we absolutely know now that the best way to call attention to something is to delete it,” he said.


Friday, Aug. 08, 2014


Bear attack foiled by Justin Bieber’s music: A story too good to check

Just after 12 p.m. on Tuesday, a story started picking up some serious online momentum.

In the span of about an hour, it appeared on the websites of The Week, Elite Daily, the Daily Mirror, the New York Post, Mediaite and an ABC affiliate, among others.

Here’s how the New York Post’s story began:

Even bears can’t stand Justin Bieber’s music.

A fisherman in Russia was being attacked by a brown bear and escaped death when his Justin Bieber ringtone went off and sent the beast fleeing into the forest.

Animals? Check. A strange and amazing turn of events? Check. Justin Bieber angle when he’s already in the news for a run-in with Orlando Bloom? Check.

Too good to check? Check.

But next thing you know, NPR’s “Morning Edition” covers it, and it ends up in a Seth Meyers monologue:

Here’s the issue: the first story of the bear attack was published in the Russian-language publication Pravda back on July 31 — and it says nothing about Justin Bieber.

At some point in making the leap to English, someone added a detail to the story that transformed it into a viral hit. According to Google Translate, the original Russian version said the bear was scared away when Igor Vorozhbitsyn’s phone began announcing the current time. So, yes, the phone apparently scared off the bear mid-mauling. But no Bieber.

Did Vorozhbitsyn change his story in a subsequent interview and realize it was Bieber all along?

Or did someone insert a seemingly false Justin Bieber angle into the story?

Point of Bieberfication

How did this untamed beast get into a story about a bear? (Bieber photo by Bizuayehu Tesfaye/Invision/AP; Bear: AP Photo/U.S. Fish And Wildlife Service, File)

The bear-and-Bieber stories all carried the same pictures of Vorozhbitsyn, and the same quotes from him explaining that his granddaughter had put the ringtone on his phone.

They quote the same “wildlife expert”: “Sometimes a sharp shock can stop an angry bear in its tracks and that ringtone would be a very unexpected sound for a bear.”

Hey, thanks for that too-perfect quote to round out the story, anonymous wildlife expert with no credentials!

The symmetry in the stories is because they all used the information contained in a single English-language report from a site called the Austrian Times. It’s led by a Brit named Michael Leidig, who also owns the Central European News agency. (His name is listed in the domain ownership records for the sites, as well as for CEN’s affiliate agency, EuroPics.)

After the Austrian Times/CEN published the story, it spread to MailOnline.

Once MailOnline had it, the story was off and running. Bieber and the bear was the real deal, and everyone wanted to plant a flag on it. As of this writing, MailOnline’s story has racked up over 13,000 shares.

The images are the first clue as to where the story really came from. MailOnline cited CEN as the copyright holder of the image it used. But the Austrian Times story credits Pravda with the image on its story. A search on the Pravda website turned up the original article with its images and Bieber-less reporting.

So how did the Austrian Times learn of the Bieber angle that Pravda apparently missed?

No answer from Austrian Times/CEN/EuroPics

I called the offices of the Austrian Times and first asked to speak with David Rogers, who is the only person listed on the site. (He is both its ombudsman and its primary sales contact.)

I spoke with a woman who said Rogers was not in the office. When I asked about the story, she said she would follow up with their office in Russia to get the details and would call me back. I asked if their office there typically rewrites things from wires and local press.

“A lot of stories are found on the wire or in local media but also from local interviews on the ground, or we speak to the reporters who wrote them; we speak to police to get things confirmed,” she said.

I called her again later that day to ask if she had news from the Russian office, and she said they are often hard to get ahold of. She never got back to me.

I also called and emailed Leidig, owner of the Austrian Times, CEN and EuroPics. A man who answered the phone at the CEN office said Leidig is on vacation in Romania.

Leidig, who has lived in Austria for some time, says nothing about himself on the CEN or Times websites, but he has an extensive Wikipedia page. In the section about CEN, it lists MailOnline as one of its clients, though that claim, like much of the page, offers no citation. (In 2013, Leidig self-published a book about Ponzi schemer Bernie Madoff.)

Along with detailing Leidig’s journalistic achievements, the Wikipedia page includes this passage:

Leidig is also a campaigner for greater support for journalism which he describes as the “coalface of democracy.” He has campaigned in favour of more responsibility from search engines like Google to give credit to original source material and also for payment for originators of news, arguing that if the journalists all go out of business nobody will provide the content worth having.

The sole link in that passage goes to… an article on the Austrian Times. Still, one would hope Pravda is therefore earning some revenue from the story and images that CEN and the Austrian Times plucked from its site. (I contacted Pravda to ask about the images and the story, but have not yet heard back.)

For their part, the Austrian Times, CEN and EuroPics have stopped talking to me. After receiving no information from their Russian bureau, I sent a detailed email Thursday with questions about the Bieber version of the Pravda story and its use — and possible resale — of Pravda images. No one has replied or returned my calls.

Meanwhile, the irresistibly Bieberfied version of the story continues to spread. Entertainment websites and news organizations give it the quick rewrite treatment and link to each other’s versions, completely obscuring the dubious origins of the story.

Too good to check, and now, I suspect, too entrenched to ever be really corrected.


Friday, July 25, 2014


The Wall Street Journal fails ‘Monsters of Greek Mythology 101’

Someone at the Wall Street Journal can’t tell a Minotaur from a Cyclops. As a result, the paper published a monstrous correction this week:

The Minotaur is a monster in Greek mythology that is part bull, part human. A travel article in Saturday’s Off Duty section mistakenly called it a one-eyed monster.


Thursday, July 24, 2014

Economist updates article after man’s mother objects to his photo

The Economist published a blog post that tried to show that negative stereotypes of tourists from different countries are often untrue and unfair.

And by stereotypes, they mean:

Germans? Humourless and demanding. Americans? Loud with garish shorts. Chinese? Rude. Canadians? Actually Canadians are all quite nice. And the Brits? Drunken, violent louts

The original version of the article featured a picture of a young British fellow “on a night out in Mallorca.” This upset the man’s mother, and she contacted the publication to ask that it not tar him with the aforementioned “drunken, violent” brush.

Well, The Economist was only too happy to comply. It changed the photo and added this note to the bottom of the story:

Note: This blogpost was originally illustrated with a photograph of a young British man on a night out in Mallorca. His mother called to let us know that he meets none of the negative stereotypes mentioned in the article, so we have replaced the picture with a photo of two innocuous Canadians instead.

As a Canadian, I intend to contact them and object to the labeling of these two true patriots as “innocuous.”



Friday, July 18, 2014

Scottish paper issues correction after it claims prom couple were ‘the envy of their classmates’

Here’s a wonderful backhanded correction from this week’s edition of the Cumbernauld News, of Scotland:

Vine is a broadcaster with the BBC, and the Twitter user he credits for finding the correction told me that it was published in this week’s edition of the paper. I emailed the paper to see if I could get more information about why Mrs. Masterson raised objections, and why the paper decided to issue the correction.


Wednesday, July 16, 2014


Study: Political journalists opt for stenography over fact checking during presidential debates

During the 2012 U.S. presidential debates, political journalists on Twitter primarily repeated candidate claims without providing fact checks or other context, according to new research published in The International Journal of Press/Politics.

Authors Mark Coddington, Logan Molyneux and Regina G. Lawrence analyzed tweets from 430 political journalists during the debates to see how much they engaged in the checking of candidate claims. The resulting paper is “Fact Checking the Campaign: How Political Reporters Use Twitter to Set the Record Straight (or Not).”

They also examined whether the journalists’ tweets fell more into the construct of traditional objectivity or what they call “scientific objectivity,” which eschews he said/she said in favor of empirical statements and analysis, i.e., fact checking.

They found that 60 percent of the journalist tweets “reflected traditional practices of ‘professional’ objectivity: stenography—simply passing along a claim made by a politician—and ‘he said, she said’ repetition of a politician’s claims and his opponent’s counterclaim.”

Journalists largely repeated the claims and statements of candidates, rather than checking or challenging them.

“Our data suggest that fact checking is not the most prominent use to which Twitter was put by reporters and commentators covering the 2012 presidential election,” the authors write. “Indeed, only a fraction of tweets in our sample referenced specific candidate claims at all.”

A missed opportunity

The researchers chose to look at tweets during the debates because debates are “central to the practice of political journalism and fact checking.”

They also wanted to see if fact checking was a big part of political Twitter during debates to get a sense of “how the emerging journalistic practice of fact checking manifests itself in a continually flowing information environment marked at its core by a fading distinction between fact and opinion.”

In the end, 15 percent of the tweets reflected the traditional fact-checking approach. These tweets saw journalists “referencing evidence for or against the claim and, in a few cases, rendering an explicit judgment about the validity of the claim …”

The data showed that checking was done more frequently by those in the data set who identified themselves as commentators rather than reporters. This again suggests that traditional notions of objectivity may be a factor.

Coddington, the lead author and a doctoral student at the University of Texas-Austin, said he and his co-authors believe journalists are missing an opportunity by not challenging and checking claims.

“Debates are a prime opportunity to challenge and confirm factual claims in real-time on Twitter to a public that’s paying real attention — a perfect spot to cut through the rhetoric of the campaign and play the informational role that journalists are capable of doing so well,” Coddington said. “Journalists aren’t, by and large, doing that, and they should, especially in a situation where audiences may be looking for someone to help them sort through the claims that are coming at them at a bewildering pace.”

The lack of checking was something of a surprise to him, as the researchers chose to look at fact checking on Twitter during the debates because they had seen so much of it in their feeds at the time.

I asked him why in the end there was so much stenography.

“Much of the debate analysis on Twitter fell into the category of what’s often called ‘horse-race’ journalism or commentary on strategy,” he said. “In other words, a lot of it was about what a candidate might have been trying to do strategically with statements in the debate, or the likely reception of those statements. As it related to the factual claims the candidates were making, these tweets fell into the stenography category — the journalists were simply passing on the claims, true or not, without any comment on their factual correctness. They weren’t concerned with whether the claims were true, only whether they would help or hurt the candidate.”

Challenge of real-time checking

One other factor may be that political journalists find it difficult to keep up with the real-time flow of a debate and do checking at the same time.

Bill Adair, the founder of PolitiFact and now the Knight Professor of the Practice of Journalism and Public Policy at Duke, said it’s notable that journalists were able to do fact checking during such a fast-moving event.

“It’s important to remember the nature of the event: It is a rapid-fire, largely unscripted free-for-all and reporters are trying to listen with one ear and still produce some tweets with value,” Adair said. “So there isn’t much time for reflection and verification. I’m happy to see that they manage to produce as much fact-checking as they do.”

It is indeed a challenge to do real-time fact-checking when you have no idea what candidates may say at any given moment. In an interview with me in 2012, the Associated Press’ Cal Woodward explained how they scale up their fact checking efforts for debate night:

We have anywhere from three to six or more people who are sitting at home or in the office watching a debate. When they hear something they’ll flag it and tell my editor [Jim Drinkard], who is the gatekeeper, and he will make a call if we think it’s strong enough to be developed. Sometimes they give me an item that’s pretty much already written, and I’ll slip it in.

It takes planning and execution to deliver fact checks at debate speed.

But it must also be said that journalists don’t have to be constantly tweeting during a debate. If you assume that people interested in the debate are watching it live, then your tweets need not be stenography — which is exactly what 60 percent of the ones gathered for this study were.

Why bother repeating what most people just watched and heard the candidate say? It may take a few minutes more to hunt for the source of a claim, or to offer context. But that’s arguably more valuable. So too is waiting until you have something to say, rather than rushing to transcribe something your followers are watching.

“For all the talk about Twitter as revolutionary journalistic tool, what we and others have found is that political journalists tend to use it simply to snark, talk strategy, and link to their work,” Coddington said. “Those are all fine ways to use Twitter, but that’s a big journalistic whiff if it’s not being used for anything more substantial than that.”

***

A final note on methodology for those interested: the final data set included 17,922 tweets sent by the journalists beginning “one hour before each debate began until noon Eastern Time the following day.” The news organizations represented among the 430 journalists included a mix of large print outlets, broadcasters, cable news, online outlets, NPR and the AP. The authors attempted to mix national reporters with regional ones, and 17 percent of the journalists had bios that included words such as “commentator” or “analyst.” The authors felt these journalists might be more inclined to offer opinions, and that was borne out in the data, which showed they did more fact checking than others.


Tuesday, July 08, 2014


Amnesty International launches video verification tool, website

Amnesty International is in the verification game and that is good news for journalism.

When journalists monitor and search social networks, they’re looking to discover and verify newsworthy content. Amnesty utilizes the same networks and content — but their goal is to gather and substantiate evidence of human rights abuses.

“Verification and corroboration was always a key component of human rights research,” said Christoph Koettl, the emergency response manager in Amnesty USA’s Crisis Prevention and Response Unit. “We always had to carefully review and corroborate materials, no matter if it’s testimony, written documents or satellite imagery.”

Now they’re “confronted with a torrent of potential new evidence” thanks to social networks and cell phones. As with their counterparts in newsrooms, human rights workers and humanitarian organizations must develop and maintain skills to verify the mass of user-generated content.

So it’s no surprise that Amnesty International today launched a new website and tool to help human rights researchers and others with the process of video verification. The site is Citizen Evidence Lab, which offers step-by-step guidance on how to verify user-generated video, as well as other resources. The tool is the YouTube Data Viewer.

The development of the site and tool were led by Koettl, who is one of Amnesty’s lead verification experts. (He also authored a case study about verifying video for the Verification Handbook, a free resource I edited for the European Journalism Centre.)

Here’s an introduction to the site:

YouTube Data Viewer

The YouTube Data Viewer enables you to enter the URL of a YouTube video and automatically extract the correct upload time and all thumbnails associated with the video. These two elements are essential when verifying a YouTube video, and the information is difficult to gather from YouTube itself.

The upload time is critical in helping determine the origin of a video. Finding the upload time of a YouTube video can be difficult — it’s not clearly displayed on the video page. The thumbnails are useful because you can plug them into a reverse image search tool such as Google Images or TinEye and see where else online these images appear.
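For readers curious how a tool like this works under the hood, the same two pieces of information are exposed by the public YouTube Data API v3: a video’s `snippet` includes its exact `publishedAt` timestamp and its thumbnail URLs. The sketch below is my own rough illustration of that lookup, not the Data Viewer’s actual implementation; it assumes you have your own Google API key and, for the network call, the third-party `requests` package.

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs


def extract_video_id(url: str) -> Optional[str]:
    """Pull the video ID out of a youtube.com or youtu.be URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host == "youtu.be":
        # Short links carry the ID as the path: https://youtu.be/VIDEO_ID
        return parsed.path.lstrip("/") or None
    if host.endswith("youtube.com"):
        # Standard links carry the ID in the query string: ?v=VIDEO_ID
        return parse_qs(parsed.query).get("v", [None])[0]
    return None


def fetch_upload_metadata(video_id: str, api_key: str) -> dict:
    """Ask the YouTube Data API v3 for a video's upload time and thumbnails."""
    import requests  # deferred so the ID parser above works without it

    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "snippet", "id": video_id, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    if not items:
        return {}  # unknown or deleted video
    snippet = items[0]["snippet"]
    return {
        "published_at": snippet["publishedAt"],  # exact upload time, in UTC
        # Thumbnail URLs, ready to feed into a reverse image search
        "thumbnails": [t["url"] for t in snippet["thumbnails"].values()],
    }
```

Running the returned thumbnail URLs through Google Images or TinEye then surfaces earlier uploads of the same footage — the scrape-detection step Koettl describes.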

“Many videos are scraped, and popular videos are re-uploaded to YouTube several times on the same day,” said Koettl. “So having the exact upload time helps to distinguish these videos from the same day, and a reverse image search is a powerful way to find other/older versions of the same video.”

The goal is to offer non-technical users a tool and guidance to help them verify video, without requiring an expert such as Koettl. He said now his colleagues “will be able to do this basic research themselves by using the new tool, so not everything has to go through me for a basic assessment.”

The same goes for journalists. The YouTube Data Viewer should join tools such as an EXIF reader, reverse image search, Spokeo, and Google Maps/Earth as one of the core, free tools in the verification toolkit. (For a list of other tools out there, see this section of the Handbook.)

A guide to video verification

Citizen Evidence is also a valuable addition to verification training. Koettl has created a series of videos that offers a step-by-step walkthrough for verifying user-generated video. This is a detailed and easy-to-follow guide, offered by someone who practices this as part of his daily job. (The videos are geared toward human rights workers, but the techniques apply for journalists.)

For Koettl, the tool and the videos are an important step in helping spread the skills of digital content verification within his profession.

“I believe in a couple of years from now, verification of citizen media will be part of the core skills of any human rights researcher, as a consequence of better verification protocol and tools, as well as dedicated training,” he said. “Subsequently, we will only need dedicated staff for more advanced analysis, including more technical and forensic assessments.”

I hope this same dynamic begins to emerge in more newsrooms, whereby basic verification knowledge/skills are spread among as many people as possible, and they are also supported by a smaller group of colleagues with specialized expertise.


Sunday, July 06, 2014

L.A. Times corrects report of author’s porn habits, man’s “endowment”

The Los Angeles Times offered a book review correction that’s jam-packed with porn and penis references:

“Big Little Man”: A review in the June 29 Arts & Books section of the book “Big Little Man” said that author Alex Tizon is in his 60s. He is 54. Also, the review described Tizon as an avid consumer of porn, but the book says the viewing was for research. It also described Tizon’s friend’s embarrassment about the size of his endowment, whereas the book states that “he liked being average.” 

Hat tip to Romenesko for spotting this.


Thursday, July 03, 2014

How CNBC got burned by a nonexistent ‘cyberattack’

Two weeks ago, CNBC aired a story and published a detailed article about what it called an “audacious,” “brazen,” “sophisticated” and “unprecedented” cyberattack against a big hedge fund.

A company called BAE Systems Applied Intelligence said it had identified the attack, but declined to name the hedge fund involved. 

CNBC correspondent Eamon Javers wrote the lengthy look at the incident and also appeared on air in a more than two-minute segment.

Maybe you can guess what happened next: Yesterday, Javers wrote a follow-up article to note that BAE subsequently admitted that the attack on the hedge fund never really happened. It was part of a “scenario” the company had laid out. From the company statement given to Javers:

“From the extensive amount of cyber incidents we deal with, we occasionally produce anonymized illustrative scenarios to help inform industry and the media. We now understand that we recently provided CNBC with an example referencing a hedge fund and incorrectly presented it as an actual BAE Systems Applied Intelligence client case study rather than an illustrative scenario.

“Although the example was a plausible scenario, we believe that it does not relate to a specific company client,” the spokesperson added. “We sincerely apologize for this inaccuracy. We are taking the necessary action to ensure this type of error does not occur again.”

Most sources are prone to spin or errors of omission rather than outright misrepresentations. But it happens. When it does, along with outing the source as untrustworthy, it tars the reporter and outlet that didn’t properly confirm the story before running with it.

In this case, a PR firm representing BAE, a publicly traded company, pitched the story. Javers then had a company executive walk him through the incident in an interview and on air. (The company says the executive, Paul Henninger, is now “taking some time away from the business.”)

Javers’ follow-up piece presses the company on how this happened and also notes that “On that day the story was posted on CNBC.com, BAE stock went up 1.6 percent with trading volume higher than usual.” (A report by a Forbes staffer says that “BAE Systems stock dropped 1.8% between closing on July 1 and July 2, the day this updated story broke.”)

The company told him it waited so long to rectify the mistake because it “had attempted to get more information on the incident and ‘it took some time’ to conclude it had never happened.”

Obviously, the company and its executive get a black eye for this. But what about Javers?

His article about the attack included this line, “The details of the attack were provided by BAE Systems and were not independently verifiable by CNBC.”

If it can’t be verified, then maybe it doesn’t warrant a full segment and feature article?

Also notable is that the disclosure doesn’t appear until roughly 800 words into the online story. At that point, the reader has been given ample quotes and other details that treat the attack as real. The broadcast segment, however, doesn’t include any disclosures about CNBC’s inability to confirm the information it was relaying.

The story is also positioned online and on air as an exclusive for CNBC, flagging it as important for the reader/viewer. The opening paragraph of the story uses the phrase “CNBC has learned” and the TV report begins with Javers saying that experts at “BAE Systems … tell CNBC exclusively”:

I give Javers some credit for writing a follow-up article. He also went on the air with the updated information. (I’ve emailed him for comment about the incident and will update if I hear back.)

My experience is that many news organizations would have just added an editor’s note to the offending online piece, rather than do a new article.

There is indeed an editor’s note at the top of the original, incorrect article. It links to the follow up piece. But I find the note too thin on details. People shouldn’t have to click through to be told that the hacking attack at the centre of the article never really happened. That should be stated up front.

The note:

Editor’s Note: BAE Systems admitted that it “incorrectly presented” the facts and circumstances it supplied in this report after its publication. Please see this follow-up report. 

Notice anything else missing? It doesn’t include any apology or expression of regret for CNBC’s role in the debacle. Nor does the follow-up article.
