Craig Silverman

Craig Silverman (craig@craigsilverman.ca) is an award-winning journalist and the founder of Regret the Error, a blog that reports on media errors and corrections, and trends regarding accuracy and verification. The blog moved to The Poynter Institute in December 2011, and he joined as Adjunct Faculty. He also serves as Director of Content for Spundge, a content curation and creation platform used by newsrooms and other organizations. Craig has been a columnist for the Toronto Star, Columbia Journalism Review, The Globe and Mail and BusinessJournalism.org. He’s the former managing editor of PBS MediaShift, and was part of the team that launched OpenFile.ca, a Canadian online news start-up. His journalism and books have been recognized by the Mirror Awards, National Press Club, Canadian National Magazine Awards, and the Canadian Online Publishing Awards.


L.A. Times corrects report of author’s porn habits, man’s “endowment”

The Los Angeles Times offered a book review correction that’s jam-packed with porn and penis references:

“Big Little Man”: A review in the June 29 Arts & Books section of the book “Big Little Man” said that author Alex Tizon is in his 60s. He is 54. Also, the review described Tizon as an avid consumer of porn, but the book says the viewing was for research. It also described Tizon’s friend’s embarrassment about the size of his endowment, whereas the book states that “he liked being average.” 

Hat tip to Romenesko for spotting this.


How CNBC got burned by a nonexistent ‘cyberattack’

Two weeks ago, CNBC aired a story and published a detailed article about what it called an “audacious,” “brazen,” “sophisticated” and “unprecedented” cyberattack against a big hedge fund.

A company called BAE Systems Applied Intelligence said it had identified the attack, but declined to name the hedge fund involved. 

CNBC correspondent Eamon Javers wrote the lengthy look at the incident and also appeared on air in a more than two-minute segment.

Maybe you can guess what happened next: Yesterday, Javers wrote a follow-up article to note that BAE subsequently admitted that the attack on the hedge fund never really happened. It was part of a “scenario” the company had laid out. From the company statement given to Javers:

“From the extensive amount of cyber incidents we deal with, we occasionally produce anonymized illustrative scenarios to help inform industry and the media. We now understand that we recently provided CNBC with an example referencing a hedge fund and incorrectly presented it as an actual BAE Systems Applied Intelligence client case study rather than an illustrative scenario.

“Although the example was a plausible scenario, we believe that it does not relate to a specific company client,” the spokesperson added. “We sincerely apologize for this inaccuracy. We are taking the necessary action to ensure this type of error does not occur again.”

Most sources are prone to spin or errors of omission rather than outright misrepresentations. But it happens, and when it does, it not only outs the source as untrustworthy; it also tars the reporter and outlet that didn’t properly confirm the story before running with it.

In this case, a PR firm representing BAE, a publicly traded company, pitched the story. Javers then had a company executive walk him through the incident in an interview and on air. (The company says the executive, Paul Henninger, is now “taking some time away from the business.”)

Javers’ follow-up piece presses the company on how this happened and also notes that “On that day the story was posted on CNBC.com, BAE stock went up 1.6 percent with trading volume higher than usual.” (A report by a Forbes staffer says that “BAE Systems stock dropped 1.8% between closing on July 1 and July 2, the day this updated story broke.”)

The company told him it waited so long to rectify the mistake because it “had attempted to get more information on the incident and ‘it took some time’ to conclude it had never happened.”

Obviously, the company and its executive get a black eye for this. But what about Javers?

His article about the attack included this line: “The details of the attack were provided by BAE Systems and were not independently verifiable by CNBC.”

If it can’t be verified, then maybe it doesn’t warrant a full segment and feature article?

Also notable is that the disclosure doesn’t appear until roughly 800 words into the online story. At that point, the reader has been given ample quotes and other details that treat the attack as real. The broadcast segment, however, doesn’t include any disclosures about CNBC’s inability to confirm the information it was relaying.

The story is also positioned online and on air as an exclusive for CNBC, flagging it as important for the reader/viewer. The opening paragraph of the story uses the phrase “CNBC has learned” and the TV report begins with Javers saying that experts at “BAE Systems … tell CNBC exclusively”:

I give Javers some credit for writing a follow-up article. He also went on the air with the updated information. (I’ve emailed him for comment about the incident and will update if I hear back.)

My experience is that many news organizations would have just added an editor’s note to the offending online piece, rather than do a new article.

There is indeed an editor’s note at the top of the original, incorrect article. It links to the follow-up piece. But I find the note too thin on details. People shouldn’t have to click through to be told that the hacking attack at the centre of the article never really happened. That should be stated up front.

The note:

Editor’s Note: BAE Systems admitted that it “incorrectly presented” the facts and circumstances it supplied in this report after its publication. Please see this follow-up report. 

Notice anything else missing? It doesn’t include any apology or expression of regret for CNBC’s role in the debacle. Nor does the follow-up article.


Truth Goggles launches as an annotation tool for journalists

When Dan Schultz first described Truth Goggles close to three years ago, he deemed it a “magic button” that could tell you “what is true and what is false on the web site you are viewing.”

That concept – which Schultz refers to as the “fact-check the Internet approach” – attracted a decent amount of press and enthusiasm at the time. Schultz shipped some related code while developing the project at the MIT Media Lab.

Today, nearly three years later, he’s released the first Truth Goggles product — and it’s a departure from that original vision.

The Truth Goggles launching today is a tool to enable anyone to annotate an existing piece of online content to raise and answer questions about what’s been reported/written. It can also be used to offer a layer of personalized commentary.

“It’s still a credibility layer and it’s still very much about challenging the user and prompting the user to think in the moment,” Schultz said.

Schultz said journalists can use it to add more context, and to prompt readers to think more critically about information in an article.

“I think of it more as a storytelling tool being given to the journalist,” Schultz said. “Just like they can embed a YouTube video, they can embed a credibility layer. Or as a media critic or reader [you can highlight] an article that has red flags and can share your layer with your friends by giving a URL.”

Truth Goggles is by no means the only annotation tool out there. There is Scrible, MarkUp.io (which says it will be relaunching), and a plethora of tools to help web designers, educators and others mark up websites with notes and feedback. There are also efforts like Hypothes.is, which aims to create a fact-based annotation layer for the web. Earlier this month, it received a grant of just over $750,000 from the Andrew W. Mellon Foundation to “investigate the use of annotation in humanities and social science scholarship over a two year period.”

Schultz said his project is different in that it enables content creators like journalists to embed their own annotations on their work for all to see, and because it’s oriented to creating public annotations that are “about getting people to ask better questions and be more critical.”

It was after spending this academic year working part-time on Truth Goggles as a non-residential fellow with the Reynolds Journalism Institute that Schultz came to the conclusion that a personalized annotation layer was the best place to start with Truth Goggles.

As for how it connects to his original project goals, he said, “The goal still is to help people cut through their biases and walk away with a more informed sense of what they believe to be true. The point of this iteration on that vision really is to see whether or not a journalist would be willing (and able) to use annotation layers to get them there.”

Two ways to annotate

Truth Goggles annotations can be made visible in two ways. One option is for the author of the content to create annotations and then paste an embed code into the post to automatically display the annotations to all readers. (I’ve done that with this post; look for the yellow highlights.)

Another option enables anyone to create annotations for an existing piece of content, and to generate a custom URL that can be shared with others to show your annotations.

Schultz said his inspiration is “to allow the journalist to be the voice inside their readers’ heads.” For others, it can be a way to “call out bullshit without needing to write a full blog post.”

If journalists are at least initially the primary user group, one obvious question is why they would need to annotate their own work. Shouldn’t important information be contained in the original article?

“My thinking is that interrupting the reader [with additional information/sourcing] every time you say something or make a claim interrupts the flow of the article in a physical sense,” Schultz said.

One example of this approach is ProPublica’s Explore Sources, a tool it developed to enable journalists to easily incorporate snippets of source material into a story. Click here to see it in action in a story. (Be sure to click the ON button at the top of the story to enable Explore Sources.)

Schultz said the Boston Globe plans to test out Truth Goggles to annotate health articles with additional information. (In 2012, Schultz spent a year in the Globe newsroom as a Knight-Mozilla Fellow.)

Why the pivot?

This version of Truth Goggles is being launched to see if it proves valuable to users, and to help Schultz identify how he should evolve the project.

“Maybe it’s not going to be a useful tool, maybe it will be … but I can see if it has legs or not,” he said.

My personal feeling is that journalists are more likely to use the tool to add context to their own work, or to call out notable passages elsewhere.

I asked Schultz what made him realize he had to move away from his original plans. He talked about the challenge of  “needing to have a database that has hundreds of thousands of [facts] before you can get off the ground” with a product that aims to fact check web content in real-time.

I detailed that very challenge in my recent post about Trooclick, a French startup that is aiming to execute on the “fact-check the Internet” vision.

Even with a big database of checked facts in hand, you also have to have enough computational and natural language processing power to analyze web content in real-time and surface the correct, relevant facts for any given piece of content. (Trooclick’s engineering team includes NLP experts.)

“Unrealistic is not a word [I] want to use, but it was frankly a lot harder to gain traction and get to the point where traction was just a feasible thing,” Schultz said.


Ottawa Citizen apologizes to David Bowie for ‘Space Oddity’ accusation

In May, the Ottawa Citizen published an op-ed from a university professor that began with an accusatory opening line:  “David Bowie stole a piece of Canadian culture on Wednesday.”

It was dead wrong.

The piece claimed Bowie was personally responsible for having astronaut Chris Hadfield’s version of “Space Oddity” removed from YouTube. Professor Blayne Haggart wrote that “the world was only allowed to see the video because Bowie had granted Hadfield a one-year license to show it. On May 14, the license expired and Hadfield removed it from public view.”

Today, the paper apologized for the error. It turns out Bowie does not own the copyright for that song, and he in fact made efforts to get the owner to grant the necessary permissions.

The apology reads in part:

One year later, the Citizen erroneously published that Mr. Bowie had granted the original licence but failed to renew the licence after one year. The commentary published by the Citizen also erroneously implied that Mr. Bowie was the reason the video had to be removed from YouTube and questioned how his actions could have “made the world a better place.” The article caused an immediate reaction by thousands of fans worldwide, and this incorrect information was picked up by hundreds of other news sources around the world.

On behalf of Blayne Haggart and ourselves, we regret the error and we sincerely apologize to Mr. Bowie as well as all his fans around the world.

Also of note is the URL of the apology: http://ottawacitizen.com/news/national/edited-dont-alter-apology-to-david-bowie. It clearly needed to run exactly as drafted, perhaps for legal reasons.

So far, the apology has been written up by USA Today.


LinkedIn acquires major fact checking patents

Lucas Myslinski was tired of having to fact check the questionable emails his father often forwarded to him.

“My dad would send these emails where they say something like, ‘Oh the government is stockpiling billions of dollars of ammunition’ and other things like that, where if all you would do is take a little time and look on Snopes you would find it’s not true,” Myslinski said.

That very problem has inspired projects such as LazyTruth, Truth Goggles, and Trooclick, all of which I wrote about last week, as well as the Washington Post’s TruthTeller. There’s a broad consensus that in a world of abundant, and often incorrect, information it would be valuable to have an app that “automatically monitors, processes, fact checks information and indicates a status of the information.”

Myslinski sketched out his ideas and then took the step of patenting them. The above quote is in fact taken from one of his many patent filings and summarizes the core of the systems he has imagined and diagrammed over the last few years.

“I filed the initial ones and then as I had new ideas I attached them to it and kind of kept growing it,” he told me by phone this week.

As a result, since 2012 Myslinski has been awarded eight U.S. patents related to fact-checking systems. It’s arguably the largest portfolio of fact-checking patents in the U.S., and perhaps the world.

Filing for patents is Myslinski’s day job. He began his career as a software engineer and is today a patent attorney with the Silicon Valley firm Haverstock and Owens, L.L.P.

A patent attorney in Silicon Valley holding eight fact-checking patents is interesting enough on its own. But it’s what Myslinski did in March of this year that makes these patents even more notable.

That month, he transferred ownership of all of his fact-checking patents to a major Silicon Valley company, though perhaps not the first one you’d think of: LinkedIn.

Yes, the juggernaut of professional networking and recruiting is now the owner of perhaps the most significant portfolio of fact-checking patents.

I asked Myslinski what LinkedIn plans to do with his former patents.

“You know, I don’t know,” he told me. “I haven’t had any real discussions about what their plans are for it.” (Some entirely speculative thoughts from me are below.)

I contacted LinkedIn for comment and not surprisingly they didn’t offer any specifics, either.

“We are a fast growing Internet company and it’s not uncommon for us to expand our patent portfolio,” said spokesman Doug Madey in an emailed response. He also declined to name the cost of the acquisition.

I asked if LinkedIn planned to use these patents for product development and Madey said, “Our patent acquisitions do not necessarily foreshadow new product innovations.”

Mark Lemley, director of the Stanford Program in Law, Science, and Technology and a partner at Durie Tangri LLP, listed three main reasons why a company like LinkedIn would buy patents:

(1) to try to shore up legal rights in a product space they consider important, (2) to resolve a claim that they are infringing those patents, and (3) because they think the patents will be useful to target a competitor or someone who is in turn threatening to sue them.

Michael Carrier, an intellectual property expert and distinguished professor at Rutgers School of Law, said LinkedIn’s acquisition likely has more to do with its competitors, rather than a specific interest in fact-checking.

“Companies acquire any patents that they think they can use against competitors,” he said. “LinkedIn must believe that it will be able to use these patents against rivals.”

For his part, Myslinski said he sought out a patent broker to sell his portfolio because he realized he wasn’t going to be able to turn the patents into a real product.

“First I focused on the patents and then I did have a developer develop a prototype, a very basic one,” he told me. “But then you know with just life and everything going on I figured it would probably be best to see what I could get out of it in terms of monetizing.”

The Patents

LinkedIn now owns these fact-checking patents (ordered by most recently granted):

  1. Method of and system for fact checking with a camera device
  2. Method of and system for fact checking email
  3. Social media fact checking method and system
  4. Web page fact checking system and method
  5. Method of and system for fact checking rebroadcast information
  6. Fact checking method and system
  7. Fact checking methods
  8. Fact checking method and system

There are also some open applications, including this one, which was just made public last week. It’s for a “Fact checking Graphical User Interface Including Fact Checking Icons,” and builds on the existing patents by introducing claims related to a user interface to display the result of fact checking claims.

Here, for example, is one drawing from that filing, a pair of “fact checking glasses”:

More important than the newly published application is the core patent in the portfolio, “Fact checking method and system,” which was granted in May of 2012.

That patent’s claims, in my view, represent the kind of systems being used, at least in part, by the aforementioned existing efforts in the world of automated/real-time fact checking.

Myslinski said he is aware of Truth Teller. I asked if he felt the project infringes on the patents. He hesitated before answering. “That would be up to [LinkedIn] to decide.”

I also asked LinkedIn. “We do not comment on intellectual property implications outside of the case of an active lawsuit,” was their answer.

That 2012 patent outlines Myslinski’s vision of a fact-checking system. Here’s what he wrote about the benefits of the system:

The fact checking system will provide users with vastly increased knowledge, limit the dissemination of misleading or incorrect information, provide increased revenue streams for content providers, increase advertising opportunities, and support many other advantages.

The patent’s specification includes a myriad of potential applications, from checking basic facts to alerting TV viewers to political bias on the part of a commentator, and imagining ways that viewers could flag items that need to be fact checked. The basics of the system are outlined in this diagram:
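
To make that flow concrete, here is a minimal sketch, in Python, of the monitor/process/fact check/indicate loop the patent text describes. Every name and the toy fact store below are my own invention for illustration; nothing here is drawn from the filing or any real product.

    # Hypothetical sketch of the monitor -> process -> fact check -> indicate
    # flow described in the patent text. The fact store and all names below are
    # invented for illustration; they are not from the filing or any real product.

    FACT_STORE = {
        "the kings are based in los angeles": True,
        "the kings are based in sacramento": False,
    }

    def monitor(stream):
        """Yield incoming pieces of information (articles, captions, posts)."""
        for item in stream:
            yield item

    def process(item):
        """Normalize an item into a checkable claim."""
        return item.strip().lower().rstrip(".")

    def fact_check(claim):
        """Look the claim up in the fact store; None means it can't be verified."""
        return FACT_STORE.get(claim)

    def indicate(claim, verdict):
        """Surface a status flag alongside the original information."""
        label = {True: "confirmed", False: "disputed", None: "unverified"}[verdict]
        print(f"[{label}] {claim}")

    if __name__ == "__main__":
        incoming = ["The Kings are based in Sacramento.", "The hedge fund was hacked."]
        for item in monitor(incoming):
            claim = process(item)
            indicate(claim, fact_check(claim))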

Again, that’s very basic. And, again, it arguably applies to how TruthTeller and others do their work… but that’s my non-legal opinion. (I’ll also state that my hope is these patents would never be used to stop efforts to develop fact-checking applications and systems.)

If Carrier, the patent expert, is correct and LinkedIn wants these patents mainly to use against competitors, then it’s important to consider who falls into their competitive set. Social networks, as well as jobs websites, are certainly competitors. (And when I saw those glasses I of course thought of Google Glass.)

But so too are publishers and other online information providers.

LinkedIn the publisher

LinkedIn has in a very short time become a major online publisher. The first big step in this direction came in the form of the purchase of Pulse, a news reader app that has since been revamped to power LinkedIn Today, a section of the site where the Pulse algorithm helps surface relevant content in a variety of industry and topic areas.

LinkedIn also has a small editorial team led by Dan Roth, formerly of Fortune. (Disclosure: When I was working at Spundge, a start-up, I met with Roth and a member of his team, and demoed our product.)

One of the biggest editorial efforts at LinkedIn is its Influencers program that has influential executives, entrepreneurs and others contribute content to the site. A more recent evolution is the expansion of LinkedIn’s CMS to enable anyone to write and publish content on LinkedIn.

That context makes the acquisition seem more in tune with LinkedIn’s editorial efforts. If they wanted to actually use these patents for innovation, an obvious step would be for LinkedIn to integrate fact-checking into its Pulse content algorithm. Then it could conceivably begin to offer professionals a feed of the most important and accurate information in their given industry.

That would save people time, and saving busy professionals time is a powerful value proposition. Of course, it would also bring people back to the site in a way that’s more effective and less annoying than all the “It’s Jane Doe’s birthday” LinkedIn emails.

And if LinkedIn can build an algorithm and system that reliably surfaces the most accurate content about a given topic, then that’s also a powerful tool to help scale its LinkedIn Today curation efforts – without requiring additional human editors. (Sorry folks!)

But the above is of course speculation on my part. Maybe even wishful thinking, given my affection for fact-checking. It’s entirely possible, and probably more likely, that LinkedIn simply wants to keep these patents in the chamber should they ever need to fire upon competitors.

If that’s the case, I hope the promising efforts in this emerging space don’t end up being collateral damage.


BuzzFeed faceplants in hockey story, then makes an amusing correction

When the Los Angeles Kings won the Stanley Cup at home, lots of significant others, officials, and other folks came down to the ice to celebrate. As reported by BuzzFeed and Uproxx, this resulted in a remarkable faceplant by one woman who unwisely wore high heels on the ice:

That slip-up caused another: BuzzFeed’s post mistakenly said the Kings are based in Sacramento, rather than L.A. (Sacramento’s basketball team is called the Kings.) That resulted in this amusing correction:

This post originally identified the Kings as being from Sacramento, not Los Angeles. The author clearly cares much more about faceplants than sports. We regret the error.

It’s funny and it conforms to the recently implemented BuzzFeed correction policy, which I previously wrote about. Among other things, the style guide advises that a correction to an error in a lighthearted post can match the post’s tone:

The correction’s tone should echo the tone of the item, in keeping with its gravity. For a factual error in, say, a funny list, the language can be fairly colloquial and even humorous as long as it contains the basic building blocks — “we got something wrong, and here is the correct information”; whereas for a news error, the language should be more sober and direct. A dumb mistake on a list of weird facts about Love Actually can begin: “GAH.” An error of fact in a news story should usually be labeled “CORRECTION.”



A new truth layer for the web

Over the years, the idea of a truth layer for the web has attracted entrepreneurs and technologists, but so far no one has been able to figure out a workable, widely adopted product.

The problem to solve is obvious: With so much content being published online, it’s difficult for most people to determine the quality and credibility of a given webpage or other piece of content. How can you know if the article you’re reading has incorrect facts, is incomplete, or was produced by an organization with serious ethical issues? Isn’t there some way to compare all the articles and content on a given topic and surface the best, most accurate and complete version?

A team of 16 people in Paris is the latest to try to solve this problem. Their product is Trooclick, and it will launch an initial version this month. (Today, at the Global Editors Network Summit in Barcelona, I’m moderating a panel about fact-checking that includes Trooclick CEO Stanislas Motte.)

Trooclick is a browser plugin (Chrome and Firefox) that alerts you if an article you’re reading includes what they call “glitches.” A glitch could be an incorrect fact, information that conflicts with other media reports about the same topic, or something about the publisher’s ethics, or the ethics of the article itself, that a reader should be aware of.

“These are warning signs that something in the article doesn’t quite match with a public database, or with other articles that have been written about that same subject,” said Robyn Bligh, a translator with the company who also leads its communications efforts, in a phone interview. “We’re not saying that it’s completely false; it’s a warning sign.”

During a demo, they showed me a VentureBeat article about a company’s IPO. Here’s a look at the Trooclick window that popped up to tell me about the glitches:

Trooclick flagged it because a Wall Street Journal article included a different amount of money that the company was planning to raise in its IPO. Note: the “Is this article reliable” feature is a user-generated voting aspect that wasn’t active when I used it; people will be able to vote an article up or down.

The app noted that the company’s IPO filing with the Securities and Exchange Commission contained a different amount for the IPO raise, as well as a different filing date:

Their strategy is to start by focusing on identifying glitches about financial/business news.

“The main target is professionals in the financial and economic fields because they’re the ones who can benefit the most,” Bligh said.

The thinking is that if Trooclick can help business and financial professionals ensure they always see correct information in news articles, that will give the company a path to gaining a foothold among users. They also see potential in the future to have others use their technology to surface the best content, and pay a licensing fee to do so.

“The ambition is to check all the field of the news,” said Pierre-Albert Ruquier, a former journalist who is the company’s CMO, in the interview. “So for the moment we do it step by step. It’s more a question of priority of which [subject] to start with.”

Business and financial news is important for many professionals, so that’s where they will start.

How it works

The way Trooclick works is relatively simple to explain but harder to execute: it analyzes the text of the webpage you’re on and compares it to the company’s database of facts and figures to see if anything matches between the two. If there is a match, it checks whether the data it has collected differs from what you’re reading. If that’s the case, it alerts you to the glitches.
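
Here’s a minimal sketch, in Python, of that compare-and-flag step as I understand it from the demo. The schema, field names and figures are hypothetical; this is not Trooclick’s actual code or data.

    # Hypothetical compare-and-flag step: match facts extracted from an article
    # against a reference database and report "glitches" where they disagree.
    # The schema and figures are invented for illustration.

    reference_db = {
        # (company, field) -> value drawn from a source such as an SEC filing
        ("ExampleCorp", "ipo_raise_usd"): 100_000_000,
        ("ExampleCorp", "filing_date"): "2014-05-01",
    }

    article_facts = {
        # facts extracted from the article the reader is currently viewing
        ("ExampleCorp", "ipo_raise_usd"): 120_000_000,
        ("ExampleCorp", "filing_date"): "2014-05-01",
    }

    def find_glitches(article, reference):
        """Return facts present in both sets whose values do not agree."""
        return [
            (key, value, reference[key])
            for key, value in article.items()
            if key in reference and reference[key] != value
        ]

    for (company, field), seen, expected in find_glitches(article_facts, reference_db):
        print(f"Glitch: {company} {field}: article says {seen}, source says {expected}")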

As for the ethical glitches, they will have a set of things to look for, such as the use of anonymous sources. For example, a blog post by Trooclick notes that a recent article in TechCrunch would have likely included some “media ethics conflicts” notifications given the number of anonymous sources:

Reading Techcrunch’s article, Trooclick was surprised to see the number of times they mention anonymous sources. The article is full of “we’re hearing from multiple sources”, “we hear that”, “people said”, “will apparently”, “a source tells us” and so on. These expressions of uncertainty are among the criteria which Trooclick will be able to analyze in the future when rating the reliability of a news article.

By the end of this month, they expect their system to capture and build a database of roughly 30 different economic properties that will be used to compare against an article you’re reading.

This data will be drawn from SEC filings, official company press releases, and other data sources they deem reliable. They also extract key data from news articles published by a growing list of close to 100 publishers whose content they scan on a constant basis. (Trooclick doesn’t store the articles themselves; it extracts the key data from the article, such as share price, and stores that in a database.)
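
As a rough illustration of that extraction step, here is a hypothetical sketch that pulls one key figure (a share price) out of article text with a simple pattern and stores just that figure, not the article. The pattern and schema are mine, not Trooclick’s, and real extraction at their scale would require far more sophisticated NLP.

    # Hypothetical extraction-and-storage step: pull a key figure out of an
    # article and store only the figure, not the article text. The pattern and
    # schema are invented for illustration; real extraction needs proper NLP.
    import re
    import sqlite3

    PRICE_PATTERN = re.compile(r"priced at \$(\d+(?:\.\d+)?)", re.IGNORECASE)

    def extract_share_price(text):
        """Return the first 'priced at $X' figure found in the text, if any."""
        match = PRICE_PATTERN.search(text)
        return float(match.group(1)) if match else None

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE facts (source TEXT, field TEXT, value REAL)")

    article = "ExampleCorp said its shares priced at $17.50 ahead of its market debut."
    price = extract_share_price(article)
    if price is not None:
        conn.execute(
            "INSERT INTO facts VALUES (?, ?, ?)", ("example.com", "share_price", price)
        )
        conn.commit()

    print(conn.execute("SELECT * FROM facts").fetchall())
    # [('example.com', 'share_price', 17.5)]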

The challenge of a truth layer

As noted above, there have been several attempts to figure out the right truth/quality layer for the web.

One project that didn’t fulfill its initial promise was NewsTrust, which was meant as a way for people to collaboratively rate the quality of news articles and to come up with the best coverage on specific topics. It ended in 2012, and the product and company domain were transferred to Poynter.

Another attempt in the same vein was NewsCred. It initially launched as a project to surface the best news articles using a mix of technology and user feedback. The company made a major pivot away from that and is today a leader in the content marketing technology space.

In terms of ongoing projects, Truth Goggles will use the PolitiFact database of fact checks, among others, to compare against a given article you’re reading. Like Trooclick, its consumer implementation would be as a browser plugin, but it hasn’t yet launched. There’s also LazyTruth, an effort led by MIT graduate Matt Stempeck to help identify urban legends and scams in your email inbox. It’s currently a Chrome extension, or you can use it by forwarding a suspect email to ask@lazytruth.com to get an analysis.

Another effort is Skeptive, which relies on users to identify conflicting claims online. Skeptive then attempts to determine “which side of a dispute is most supported by the sources that any given User trusts. Put together, these two processes tell you whether there’s a source out there that you trust that disagrees with the sentence that you’re reading.” Their goal is to find a better way to resolve online disputes and differences of opinion.

Trooclick and its ilk typically have two big challenges:

  • Building out a big enough database of quality data to compare against what people see and read.
  • Getting enough people to install their plugin/use their app.

The two elements are obviously connected. You need the data to deliver a good user experience in order to drive adoption. But without adoption, it tends to be hard to raise money to keep building out the data and product.

Trooclick is attacking the scale/adoption issue by focusing on one area where there is potentially real value to users, and where they can access or build databases of relevant facts. Based on the alpha, their technology works and is nicely implemented for the user.

One challenge for them, and anyone else who relies on a browser plugin, is the fact that more and more reading is done on mobile phones, which renders most browser plugins useless.

Hey, nobody said building a truth layer was easy. That’s why people keep trying.


San Francisco Chronicle blog fails video game trivia, issues correction

The San Francisco Chronicle’s pop culture critic had to issue a correction after he misstated the revenge intentions of video game aliens:

CORRECTION: An earlier version of this post suggested that a singular being named Yar was getting his revenge in the Atari 2600 game Yars’ Revenge. In fact, the Yarians were a race of aliens, and were collectively seeking revenge. The Big Event apologizes for the error.  (Thanks to TBE reader Marty for the e-mail pointing this out.)


CNN serial plagiarist primarily lifted from her old employer, Reuters

Editors at CNN were performing a regular spot check of content in the organization’s publishing queue last week when they discovered that a story by London bureau news editor Marie-Louise Gumuchian included material taken without attribution from another source.

Using plagiarism detection software, they quickly turned up more examples and have so far found that Gumuchian plagiarized in roughly 50 articles.
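
For readers curious how such spot checks work in principle, here is a minimal, generic sketch of n-gram overlap scoring, the basic idea behind many plagiarism detectors. It’s an illustration only; CNN hasn’t said which software it uses, and the sample text below is invented.

    # Generic n-gram overlap check: the basic idea behind many plagiarism
    # detectors. Illustration only; not CNN's actual tooling. Sample text invented.

    def ngrams(text, n=6):
        """Return the set of n-word sequences in a piece of text."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(candidate, source, n=6):
        """Fraction of the candidate's n-grams that also appear in the source."""
        cand, src = ngrams(candidate, n), ngrams(source, n)
        return len(cand & src) / len(cand) if cand else 0.0

    wire_copy = ("officials said the talks in geneva would resume early next week "
                 "after both sides agreed to a pause")
    suspect = ("officials said the talks in geneva would resume early next week "
               "according to the ministry")

    print(f"overlap: {overlap_score(suspect, wire_copy):.0%}")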

CNN leadership announced their findings and her firing in an Editor’s Note published today. Gumuchian was on the CNN world desk, and appears to have written frequently about the Middle East, among other topics.

“Most of what we found was [lifted] from Reuters, which she was previously employed by,” says a CNN source who asked not to be identified due to the fact that they were not cleared to speak publicly about the incident. “We also notified [Reuters]. She worked for us for about six months, so if we found that many in six months I can’t imagine the job Reuters has now.”

Reuters is reviewing Gumuchian’s work, a spokesman told Poynter. She worked for Reuters for roughly nine years, according to the CNN source.

“It’s kind of ballsy — don’t you think your old colleagues might look to see what you were doing at your new job?” the source said, adding that, as a longtime journalist, Gumuchian was unlikely to have thought it was okay, or acceptable, to use Reuters wire content without attribution.

The editor’s note about Gumuchian said CNN has gone in and “removed the instances of plagiarism found in her pieces. In some cases, we’ve chosen to delete an entire article.” That’s happened in seven instances, the source said.

Articles that were updated include a note informing readers of the reason for changes. Here’s an example:

Editors’ Note: This article has been edited to remove plagiarized content after CNN discovered multiple instances of plagiarism by Marie-Louise Gumuchian, a former CNN news editor.

CNN has also sent a note out to all CNN wire clients to inform them of the offending articles, so they could add any editors’ notes as needed.

As for the deleted stories, the source said this was done “because the plagiarism was so extensive … we killed the whole article because it was so blatant.” Here’s the text that appears at deleted story URLs:

(CNN) – This article has been removed after CNN discovered multiple instances of plagiarism in this story.

I asked if readers going to the URL of the removed article will see some kind of note explaining why the article has disappeared, and the source wasn’t sure if that was being done or not. (I’ll update with any news on that issue.)

The source said spot checks for attribution and plagiarism are a regular part of CNN’s editorial workflow. “She only worked for us for six months and we identified it.”

It’s encouraging that CNN caught her so soon into her tenure. However, by that time she had already caused a lot of damage. Perhaps the spot checks need to be increased, given the amount of content being produced. I also hope CNN moves the editor’s notes from the bottom of the offending articles up to the top.

If you’re wondering about best practices for handling an incident of plagiarism or fabrication, Kelly McBride and I previously offered a comprehensive guide.


Times public editor calls Joe Nocera column ‘intrinsically flawed,’ calls for more than a correction

New York Times public editor Margaret Sullivan has weighed in on a dispute between two heavyweights. In one corner is high-profile Times columnist Joe Nocera. In the other, billionaire investor Warren Buffett.

In the end, she sides with Buffett, writing that a Nocera column about Buffett is “so intrinsically flawed, a standard correction didn’t get the job done.” She’s right, for the reasons she cites and for another that I’ll note below.

Sullivan’s post focused on a pair of columns (1,2) by Nocera about Buffett and recent decisions related to executive compensation at Coca-Cola, a company in which Buffett’s Berkshire Hathaway is the largest shareholder.

Sullivan notes that both of Nocera’s columns required corrections for factual errors. But even more than the mistakes, the major concern for her is that “The entire premise of the second column is built on a mistake: that Mr. Buffett had changed his tone after ‘licking his wounds’ over the reaction to statements he made on April 23, including Mr. Nocera’s criticism.”

Nocera’s second column played up the apparent change of heart as his reason for writing:

I am returning to this subject because, on Monday, following widespread criticism of his decision, Buffett gave a remarkable interview to Fortune magazine’s Stephen Gandel, an interview that was strikingly different in tone from his remarks of last week.

But that Fortune interview happened before Nocera’s initial column (and other criticism) was published. So Nocera’s stated reason to return to the subject was in fact wrong.

This is the exact situation people often raise with me when expressing their frustration with the way the press handles errors. When an article or opinion column is based on an incorrect fact or mistaken assumption, they expect there to be something more than just a simple correction. They expect the offending party to admit they were wrong, or to significantly alter their original assertions.

Sullivan agrees. Calling the column “intrinsically flawed,” she outlined her preferred remedy:

Mr. Nocera should have devoted at least part of another column to telling his readers what happened and why. In his email to me, Mr. Nocera referred to the second column’s fundamental mistake as “bad/dumb/embarrassing.”

Such a forthright admission should not be confined to an email answer to the public editor’s question, but should be published in the same Times pages where the two columns ran. Ideally, the online version of the second column would provide a clear link to the mea culpa.  That would go a long way toward making this right.

This raises another issue that remains unresolved: most people reading the offending column will likely read all of the incorrect assertions before getting to the correction.

The Times places its corrections at the bottom of an offending story, and it also puts  “Corrections Appended” at the top of the story, and hyperlinks that text to the correction. In most cases, this is great; you can go to the correction right away if you like, or just start with the story.

However, in a situation such as this, the issue is that the Times leaves the original, incorrect text intact even after adding the correction.

Any reader who gets to the bottom of the Nocera column and sees the correction is going to feel like they just read all of this stuff about Buffett’s change of heart, only to discover that it’s not the case. It should be noted right away, or in the text itself.

To Sullivan’s point, when a column is so badly flawed there needs to be something more done for readers (and the aggrieved party). Either put the correction text at the top so that it’s clearly spelled out for everyone before they read, or require Nocera to fix the column, and offer an explanation, as Sullivan suggests.

As it stands, the correction’s placement and the lack of a corrected column exacerbate the mistake.

One final note: Sullivan’s post includes a necessary disclosure that she for years worked as the top editor at a paper owned by Buffett’s company:

(Disclosure: From 1999 to 2012, I was the editor of The Buffalo News, a paper owned by Berkshire and of which Mr. Buffett is the chairman. I own no shares of Berkshire.)

I had a lingering question after reading her post: how much interaction did she have with Buffett while in her role as the top editor of a paper he owned? It occurred to me because her post for the Times includes several quotes from an interview she had with Buffett. I wondered if they had spoken before.

“I’ve met him several times over the years,” she told me after I sent a question via a Twitter direct message. “We’ve had a cordial though not close relationship.”

It’s not an issue for her to have had past interactions with Buffett. But if I wondered about that, perhaps others did, too.

Update May 13: It looks like Sullivan’s post had the desired effect. Nocera has a new column up, and he uses it to offer a more meaningful mea culpa. An excerpt:

“Although The Times published a strong correction, Margaret Sullivan, the public editor, wrote that she didn’t think it went far enough because my column was ‘so intrinsically flawed.’ Upon reflection, I agree with her. I sincerely regret the error.”

Good on Nocera, and nice work by the public editor.
