Craig Silverman

Craig Silverman (craig@craigsilverman.ca) is an award-winning journalist and the founder of Regret the Error, a blog that reports on media errors and corrections, and on trends in accuracy and verification. The blog moved to The Poynter Institute in December 2011, and he joined as Adjunct Faculty. He also serves as Director of Content for Spundge, a content curation and creation platform used by newsrooms and other organizations. Craig has been a columnist for the Toronto Star, Columbia Journalism Review, The Globe and Mail and BusinessJournalism.org. He’s the former managing editor of PBS MediaShift, and was part of the team that launched OpenFile.ca, a Canadian online news start-up. His journalism and books have been recognized by the Mirror Awards, National Press Club, Canadian National Magazine Awards, and the Canadian Online Publishing Awards.


Economist updates article after man’s mother objects to his photo

The Economist published a blog post that tried to show negative stereotypes of tourists from different countries are often untrue and unfair.

And by stereotypes, they mean:

Germans? Humourless and demanding. Americans? Loud with garish shorts. Chinese? Rude. Canadians? Actually Canadians are all quite nice. And the Brits? Drunken, violent louts.

The original version of the article featured a picture of a young British fellow “on a night out in Mallorca.” This upset the man’s mother, and she contacted the publication to ask that it not tar him with the aforementioned “drunken, violent” brush.

Well, The Economist was only too happy to comply. It changed the photo and added this note to the bottom of the story:

Note: This blogpost was originally illustrated with a photograph of a young British man on a night out in Mallorca. His mother called to let us know that he meets none of the negative stereotypes mentioned in the article, so we have replaced the picture with a photo of two innocuous Canadians instead.

As a Canadian, I intend to contact them and object to the labeling of these two true patriots as “innocuous.”



Scottish paper issues correction after it claims prom couple were ‘the envy of their classmates’

Here’s a wonderful backhanded correction from this week’s edition of the Cumbernauld News in Scotland.

Vine is a broadcaster with the BBC, and the Twitter user he credits for finding the correction told me that it was published in this week’s edition of the paper. I emailed the paper to see if I could get more information about why Mrs. Masterson raised objections, and why the paper decided to issue the correction.


Study: Political journalists opt for stenography over fact checking during presidential debates

During the 2012 U.S. presidential debates, political journalists on Twitter primarily repeated candidate claims without providing fact checks or other context, according to new research published in The International Journal of Press/Politics.

Authors Mark Coddington, Logan Molyneux and Regina G. Lawrence analyzed tweets from 430 political journalists during the debates to see how much they engaged in the checking of candidate claims. The resulting paper is “Fact Checking the Campaign: How Political Reporters Use Twitter to Set the Record Straight (or Not).”

They also examined whether the political journalists’ tweets fell more into the construct of traditional objectivity or what they call “scientific objectivity,” which eschews he said/she said in favor of empirical statements and analysis, i.e., fact checking.

They found that 60 percent of the journalist tweets “reflected traditional practices of ‘professional’ objectivity: stenography—simply passing along a claim made by a politician—and ‘he said, she said’ repetition of a politician’s claims and his opponent’s counterclaim.”

Journalists largely repeated the claims and statements of candidates rather than checking or challenging them.

“Our data suggest that fact checking is not the most prominent use to which Twitter was put by reporters and commentators covering the 2012 presidential election,” the authors write. “Indeed, only a fraction of tweets in our sample referenced specific candidate claims at all.”

A missed opportunity

The researchers chose to look at tweets during the debates because debates are “central to the practice of political journalism and fact checking.”

They also wanted to see if fact checking was a big part of political Twitter during debates to get a sense of “how the emerging journalistic practice of fact checking manifests itself in a continually flowing information environment marked at its core by a fading distinction between fact and opinion.”

In the end, 15 percent of the tweets reflected the traditional fact-checking approach. These tweets saw journalists “referencing evidence for or against the claim and, in a few cases, rendering an explicit judgment about the validity of the claim …”

The data showed that checking was done more frequently by those in the data set who identified themselves as commentators rather than reporters. This again suggests that traditional notions of objectivity may be a factor.

Coddington, the lead author and a doctoral student at the University of Texas-Austin, said he and his co-authors believe journalists are missing an opportunity by not challenging and checking claims.

“Debates are a prime opportunity to challenge and confirm factual claims in real-time on Twitter to a public that’s paying real attention — a perfect spot to cut through the rhetoric of the campaign and play the informational role that journalists are capable of doing so well,” Coddington said. “Journalists aren’t, by and large, doing that, and they should, especially in a situation where audiences may be looking for someone to help them sort through the claims that are coming at them at a bewildering pace.”

The lack of checking was something of a surprise to him, as the researchers chose to look at fact checking on Twitter during the debates because they had seen so much of it in their feeds at the time.

I asked him why, in the end, there was so much stenography.

“Much of the debate analysis on Twitter fell into the category of what’s often called ‘horse-race’ journalism or commentary on strategy,” he said. “In other words, a lot of it was about what a candidate might have been trying to do strategically with statements in the debate, or the likely reception of those statements. As it related to the factual claims the candidates were making, these tweets fell into the stenography category — the journalists were simply passing on the claims, true or not, without any comment on their factual correctness. They weren’t concerned with whether the claims were true, only whether they would help or hurt the candidate.”

Challenge of real-time checking

One other factor may be that political journalists find it difficult to keep up with the real-time flow of a debate and do checking at the same time.

Bill Adair, the founder of PolitiFact and now the Knight Professor of the Practice of Journalism and Public Policy at Duke, said it’s notable that journalists were able to do fact checking during such a fast-moving event.

“It’s important to remember the nature of the event: It is a rapid-fire, largely unscripted free-for-all and reporters are trying to listen with one ear and still produce some tweets with value,” Adair said. “So there isn’t much time for reflection and verification. I’m happy to see that they manage to produce as much fact-checking as they do.”

It is indeed a challenge to do real-time fact-checking when you have no idea what candidates may say at any given moment. In an interview with me in 2012, the Associated Press’ Cal Woodward explained how they scale up their fact checking efforts for debate night:

We have anywhere from three to six or more people who are sitting at home or in the office watching a debate. When they hear something they’ll flag it and tell my editor [Jim Drinkard], who is the gatekeeper, and he will make a call if we think it’s strong enough to be developed. Sometimes they give me an item that’s pretty much already written, and I’ll slip it in.

It takes planning and execution to deliver fact checks at debate speed.

But it must also be said that journalists don’t have to be constantly tweeting during a debate. If you assume that people interested in the debate are watching it live, then your tweets need not be stenography — which is exactly what 60 percent of the ones gathered for this study were.

Why bother repeating what most people just watched and heard the candidate say? It may take a few minutes more to hunt for the source of a claim, or to offer context. But that’s arguably more valuable. So too is waiting until you have something to say, rather than rushing to transcribe something your followers are watching.

“For all the talk about Twitter as revolutionary journalistic tool, what we and others have found is that political journalists tend to use it simply to snark, talk strategy, and link to their work,” Coddington said. “Those are all fine ways to use Twitter, but that’s a big journalistic whiff if it’s not being used for anything more substantial than that.”

***

A final note on methodology for those interested: The final data set included 17,922 tweets sent by the journalists beginning “one hour before each debate began until noon Eastern Time the following day.” The news organizations represented among the 430 journalists included a mix of large print outlets, broadcasters, cable news, online outlets, NPR and the AP. The authors attempted to mix national reporters with regional ones, and 17 percent of the journalists had bios that included words such as “commentator” or “analyst.” The authors felt these journalists might be more inclined to offer opinions, and that was borne out in the data, which showed they did more fact-checking than others.
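For the data-minded, here’s a rough sketch of the sampling window and the arithmetic behind those shares. The category counts below are stand-ins I derived from the reported percentages, and the coding itself was done by human researchers, not software; this is illustration, not the authors’ method:

```python
# Illustrative only: categories were assigned by human coders, and these
# counts are hypothetical stand-ins derived from the reported percentages.
from collections import Counter
from datetime import datetime, timedelta

# Sampling window: one hour before the debate until noon ET the next day.
debate_start = datetime(2012, 10, 3, 21, 0)   # first debate, 9 p.m. ET
window_start = debate_start - timedelta(hours=1)
window_end = datetime(2012, 10, 4, 12, 0)

def in_sample(tweet_time):
    """Was a tweet sent inside the study's collection window?"""
    return window_start <= tweet_time <= window_end

# Hypothetical coded data summing to the reported 17,922 tweets.
coded = Counter(stenography=10753, other=4481, fact_check=2688)
total = sum(coded.values())
for category, count in coded.most_common():
    print(f"{category}: {count / total:.0%}")  # roughly 60%, 25%, 15%
```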


Amnesty International launches video verification tool, website

Amnesty International is in the verification game, and that is good news for journalism.

When journalists monitor and search social networks, they’re looking to discover and verify newsworthy content. Amnesty utilizes the same networks and content — but their goal is to gather and substantiate evidence of human rights abuses.

“Verification and corroboration was always a key component of human rights research,” said Christoph Koettl, the emergency response manager in Amnesty USA’s Crisis Prevention and Response Unit. “We always had to carefully review and corroborate materials, no matter if it’s testimony, written documents or satellite imagery.”

Now they’re “confronted with a torrent of potential new evidence” thanks to social networks and cell phones. As with their counterparts in newsrooms, human rights workers and humanitarian organizations must develop and maintain skills to verify the mass of user-generated content.

That’s why it’s no surprise that Amnesty International today launched a new website and tool to help human rights researchers and others with the process of video verification. The site is Citizen Evidence Lab, which offers step-by-step guidance on how to verify user-generated video, as well as other resources. The tool is the YouTube Data Viewer.

The development of the site and tool was led by Koettl, who is one of Amnesty’s lead verification experts. (He also authored a case study about verifying video for the Verification Handbook, a free resource I edited for the European Journalism Centre.)


YouTube Data Viewer

The YouTube Data Viewer enables you to enter the URL of a YouTube video and automatically extract the correct upload time and all thumbnails associated with the video. These two elements are essential when verifying a YouTube video, and this information is difficult to gather from YouTube itself.

The upload time is critical in helping determine the origin of a video. Finding the upload time of a YouTube video can be difficult — it’s not clearly displayed on the video page. The thumbnails are useful because you can plug them into a reverse image search tool such as Google Images or TinEye and see where else online these images appear.

“Many videos are scraped, and popular videos are re-uploaded to YouTube several times on the same day,” said Koettl. “So having the exact upload time helps to distinguish these videos from the same day, and a reverse image search is a powerful way to find other/older versions of the same video.”
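To make that concrete, here’s a minimal sketch of what a tool like the Data Viewer does under the hood, using the YouTube Data API. It’s my own approximation, not the tool’s actual code, and the API key is a placeholder:

```python
# A minimal sketch, not the Data Viewer's actual code. Requires a
# YouTube Data API v3 key; "YOUR_API_KEY" is a placeholder.
import re
import requests

API_KEY = "YOUR_API_KEY"

def video_id(url):
    """Pull the 11-character video ID out of a YouTube URL."""
    m = re.search(r"(?:v=|youtu\.be/)([A-Za-z0-9_-]{11})", url)
    return m.group(1) if m else None

def upload_time_and_thumbnails(url):
    vid = video_id(url)
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"id": vid, "part": "snippet", "key": API_KEY},
    ).json()
    published_at = resp["items"][0]["snippet"]["publishedAt"]  # upload time, UTC
    # YouTube serves a predictable set of thumbnails for every video;
    # these can be fed to a reverse image search.
    thumbnails = [f"https://img.youtube.com/vi/{vid}/{n}.jpg" for n in range(4)]
    return published_at, thumbnails
```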

The goal is to offer non-technical users a tool and guidance to help them verify video, without requiring an expert such as Koettl. He said now his colleagues “will be able to do this basic research themselves by using the new tool, so not everything has to go through me for a basic assessment.”

The same goes for journalists. The YouTube Data Viewer should join tools such as an EXIF reader, reverse image search, Spokeo, and Google Maps/Earth as one of the core, free tools in the verification toolkit. (For a list of other tools out there, see this section of the Handbook.)
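The EXIF step, for instance, takes only a few lines. Here’s a minimal sketch using the Pillow library (the file path is a placeholder); just remember that most social networks strip this metadata on upload:

```python
# A minimal sketch of the "EXIF reader" step, using the Pillow library.
from PIL import Image, ExifTags

def read_exif(path):
    """Return EXIF metadata as a {tag name: value} dict (empty if stripped)."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

meta = read_exif("photo.jpg")  # placeholder path
# Camera model and timestamp can corroborate (or undermine) a claimed origin.
print(meta.get("Model"), meta.get("DateTime"))
```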

A guide to video verification

Citizen Evidence Lab is also a valuable addition to verification training. Koettl has created a series of videos that offer a step-by-step walkthrough for verifying user-generated video. It’s a detailed and easy-to-follow guide, offered by someone who practices this as part of his daily job. (The videos are geared toward human rights workers, but the techniques apply to journalists as well.)

For Koettl, the tool and the videos are an important step in helping spread the skills of digital content verification within his profession.

“I believe in a couple of years from now, verification of citizen media will be part of the core skills of any human rights researcher, as a consequence of better verification protocol and tools, as well as dedicated training,” he said. “Subsequently, we will only need dedicated staff for more advanced analysis, including more technical and forensic assessments.”

I hope this same dynamic begins to emerge in more newsrooms, whereby basic verification knowledge and skills are spread among as many people as possible, supported by a smaller group of colleagues with specialized expertise.


L.A. Times corrects report of author’s porn habits, man’s “endowment”

The Los Angeles Times offered a book review correction that’s jam-packed with porn and penis references:

“Big Little Man”: A review in the June 29 Arts & Books section of the book “Big Little Man” said that author Alex Tizon is in his 60s. He is 54. Also, the review described Tizon as an avid consumer of porn, but the book says the viewing was for research. It also described Tizon’s friend’s embarrassment about the size of his endowment, whereas the book states that “he liked being average.” 

Hat tip to Romenesko for spotting this.


How CNBC got burned by a nonexistent ‘cyberattack’

Two weeks ago, CNBC aired a story and published a detailed article about what it called an “audacious,” “brazen,” “sophisticated” and “unprecedented” cyberattack against a big hedge fund.

A company called BAE Systems Applied Intelligence said it had identified the attack, but declined to name the hedge fund involved. 

CNBC correspondent Eamon Javers wrote the lengthy look at the incident and also appeared on air in a more than two-minute segment.

Maybe you can guess what happened next: Yesterday, Javers wrote a follow-up article to note that BAE subsequently admitted that the attack on the hedge fund never really happened. It was part of a “scenario” the company had laid out. From the company statement given to Javers:

“From the extensive amount of cyber incidents we deal with, we occasionally produce anonymized illustrative scenarios to help inform industry and the media. We now understand that we recently provided CNBC with an example referencing a hedge fund and incorrectly presented it as an actual BAE Systems Applied Intelligence client case study rather than an illustrative scenario.

“Although the example was a plausible scenario, we believe that it does not relate to a specific company client,” the spokesperson added. “We sincerely apologize for this inaccuracy. We are taking the necessary action to ensure this type of error does not occur again.”

Most sources are prone to spin or errors of omission rather than outright misrepresentations. But it happens. Along with outing the source as untrustworthy, an episode like this tars the reporter and the outlet that didn’t properly confirm the story before running with it.

In this case, a PR firm representing BAE, a publicly traded company, pitched the story. Javers then had a company executive walk him through the incident in an interview and on air. (The company says the executive, Paul Henninger, is now “taking some time away from the business.”)

Javers’ follow-up piece presses the company on how this happened and also notes that “On that day the story was posted on CNBC.com, BAE stock went up 1.6 percent with trading volume higher than usual.” (A report by a Forbes staffer says that “BAE Systems stock dropped 1.8% between closing on July 1 and July 2, the day this updated story broke.”)

The company told him it waited so long to rectify the mistake because it “had attempted to get more information on the incident and ‘it took some time’ to conclude it had never happened.”

Obviously, the company and its executive get a black eye for this. But what about Javers?

His article about the attack included this line: “The details of the attack were provided by BAE Systems and were not independently verifiable by CNBC.”

If it can’t be verified, then maybe it doesn’t warrant a full segment and feature article?

Also notable is that the disclosure doesn’t appear until roughly 800 words into the online story. At that point, the reader has been given ample quotes and other details that treat the attack as real. The broadcast segment, however, doesn’t include any disclosures about CNBC’s inability to confirm the information it was relaying.

The story is also positioned online and on air as an exclusive for CNBC, flagging it as important for the reader/viewer. The opening paragraph of the story uses the phrase “CNBC has learned” and the TV report begins with Javers saying that experts at “BAE Systems … tell CNBC exclusively”:

I give Javers some credit for writing a follow-up article. He also went on the air with the updated information. (I’ve emailed him for comment about the incident and will update if I hear back.)

My experience is that many news organizations would have just added an editor’s note to the offending online piece, rather than do a new article.

There is indeed an editor’s note at the top of the original, incorrect article. It links to the follow-up piece. But I find the note too thin on details. People shouldn’t have to click through to be told that the hacking attack at the centre of the article never really happened. That should be stated up front.

The note:

Editor’s Note: BAE Systems admitted that it “incorrectly presented” the facts and circumstances it supplied in this report after its publication. Please see this follow-up report. 

Notice anything else missing? It doesn’t include any apology or expression of regret for CNBC’s role in the debacle. Nor does the follow-up article.


Truth Goggles launches as an annotation tool for journalists

When Dan Schultz first described Truth Goggles close to three years ago, he deemed it a “magic button” that could tell you “what is true and what is false on the web site you are viewing.”

That concept – which Schultz refers to as the “fact-check the Internet approach” – attracted a decent amount of press and enthusiasm at the time. Schultz shipped some related code while developing the project at the MIT Media Lab.

Today, nearly three years later, he’s released the first Truth Goggles product — and it’s a departure from that original vision.

The Truth Goggles launching today is a tool to enable anyone to annotate an existing piece of online content to raise and answer questions about what’s been reported/written. It can also be used to offer a layer of personalized commentary.

“It’s still a credibility layer and it’s still very much about challenging the user and prompting the user to think in the moment,” Schultz said.

Schultz said journalists can use it to add more context, and to prompt readers to think more critically about information in an article.

“I think of it more as a storytelling tool being given to the journalist,” Schultz said. “Just like they can embed a YouTube video, they can embed a credibility layer. Or as a media critic or reader [you can highlight] an article that has red flags and can share your layer with your friends by giving a URL.”

Truth Goggles is by no means the only annotation tool out there. There are Scrible, MarkUp.io (which says it will be relaunching), and a plethora of tools to help web designers, educators and others mark up websites with notes and feedback. There are also efforts like Hypothes.is, which aims to create a fact-based annotation layer for the web. Earlier this month, it received a grant of just over $750,000 from the Andrew W. Mellon Foundation to “investigate the use of annotation in humanities and social science scholarship over a two year period.”

Schultz said his project is different in that it enables content creators like journalists to embed their own annotations on their work for all to see, and because it’s oriented to creating public annotations that are “about getting people to ask better questions and be more critical.”

After spending this academic year working part-time on Truth Goggles as a non-residential fellow with the Reynolds Journalism Institute, Schultz concluded that a personalized annotation layer was the best place to start.

As for how it connects to his original project goals, he said, “The goal still is to help people cut through their biases and walk away with a more informed sense of what they believe to be true. The point of this iteration on that vision really is to see whether or not a journalist would be willing (and able) to use annotation layers to get them there.”

Two ways to annotate

Truth Goggles annotations can be made visible in two ways. One option is for the author of the content to create annotations and then paste an embed code into the post to automatically display the annotations to all readers. (I’ve done that with this post; look for the yellow highlights.)

Another option enables anyone to create annotations for an existing piece of content, and to generate a custom URL that can be shared with others to show your annotations.
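I don’t know the specifics of how Truth Goggles encodes those shared layers, but the general pattern is simple enough to sketch: serialize the annotations and pack them into a link. Everything below is hypothetical, not the product’s actual format:

```python
# Hypothetical illustration of a share-by-URL annotation layer; this is
# not Truth Goggles' actual format. The domain and field names are made up.
import base64
import json

annotations = [
    {"quote": "the sentence being highlighted",  # text to locate in the article
     "note": "Source? The report cited here says the opposite.",
     "author": "jane_doe"},
]

payload = base64.urlsafe_b64encode(json.dumps(annotations).encode()).decode()
share_url = f"https://example.com/article?layer={payload}"

# The recipient's page decodes the payload and re-highlights the quotes.
decoded = json.loads(base64.urlsafe_b64decode(payload))
```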

Schultz said his inspiration is “to allow the journalist to be the voice inside their readers’ heads.” For others, it can be a way to “call out bullshit without needing to write a full blog post.”

If journalists are, at least initially, the primary user group, one obvious question is why they would need to annotate their own work. Shouldn’t important information be contained in the original article?

“My thinking is that interrupting the reader [with additional information/sourcing] every time you say something or make a claim interrupts the flow of the article in a physical sense,” Schultz said.

One example of this approach is ProPublica’s Explore Sources, a tool it developed to enable journalists to easily incorporate snippets of source material into a story. Click here to see it in action in a story. (Be sure to click the ON button at the top of the story to enable Explore Sources.)

Schultz said the Boston Globe plans to test out Truth Goggles to annotate health articles with additional information. (In 2012, Schultz spent a year in the Globe newsroom as a Knight-Mozilla Fellow.)

Why the pivot?

This version of Truth Goggles is being launched to see if it proves valuable to users, and to help Schultz identify how he should evolve the project.

“Maybe it’s not going to be a useful tool, maybe it will be … but I can see if it has legs or not,” he said.

My personal feeling is that journalists are more likely to use the tool to add context to their own work, or to call out notable passages elsewhere.

I asked Schultz what made him realize he had to move away from his original plans. He talked about the challenge of “needing to have a database that has hundreds of thousands of [facts] before you can get off the ground” with a product that aims to fact check web content in real-time.

I detailed that very challenge in my recent post about Trooclick, a French startup that is aiming to execute on the “fact-check the Internet” vision.

Even with a big database of checked facts in hand, you also have to have enough computational and natural language processing power to analyze web content in real-time and surface the correct, relevant facts for any given piece of content. (Trooclick’s engineering team includes NLP experts.)
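Even the lookup step alone is non-trivial. Here’s a toy sketch of the naive approach, fuzzy-matching an extracted claim against a database of checked claims; the database is a stub of my own invention, and real systems need far more sophisticated NLP:

```python
# A toy illustration of the lookup step; the claims database is a
# hypothetical stub. A usable one would need hundreds of thousands of
# entries, which is exactly the bootstrapping problem Schultz describes.
import difflib

CHECKED_CLAIMS = {
    "unemployment fell below 6 percent last quarter": "false",
    "the bill passed the senate with 74 votes": "true",
}

def check_claim(claim, cutoff=0.8):
    """Return the verdict for the closest known claim, or 'unverified'."""
    match = difflib.get_close_matches(claim.lower(), CHECKED_CLAIMS,
                                      n=1, cutoff=cutoff)
    return CHECKED_CLAIMS[match[0]] if match else "unverified"

print(check_claim("Unemployment fell below 6% last quarter"))  # -> false
```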

“Unrealistic is not a word I want to use, but it was frankly a lot harder to gain traction and get to the point where traction was just a feasible thing,” Schultz said.


Ottawa Citizen apologizes to David Bowie for ‘Space Oddity’ accusation

In May, the Ottawa Citizen published an op-ed from a university professor that began with an accusatory opening line: “David Bowie stole a piece of Canadian culture on Wednesday.”

It was dead wrong.

The piece claimed Bowie was personally responsible for having astronaut Chris Hadfield’s version of “Space Oddity” removed from YouTube. Professor Blayne Haggart wrote that “the world was only allowed to see the video because Bowie had granted Hadfield a one-year license to show it. On May 14, the license expired and Hadfield removed it from public view.”

Today, the paper apologized for the error. Turns out Bowie does not own the copyright for that song, and he in fact made efforts to try to get the owner to grant the necessary permissions.

The apology reads in part:

One year later, the Citizen erroneously published that Mr. Bowie had granted the original licence but failed to renew the licence after one year. The commentary published by the Citizen also erroneously implied that Mr. Bowie was the reason the video had to be removed from YouTube and questioned how his actions could have “made the world a better place.” The article caused an immediate reaction by thousands of fans worldwide, and this incorrect information was picked up by hundreds of other news sources around the world.

On behalf of Blayne Haggart and ourselves, we regret the error and we sincerely apologize to Mr. Bowie as well as all his fans around the world.

Also of note is the URL of the apology: http://ottawacitizen.com/news/national/edited-dont-alter-apology-to-david-bowie. It clearly needed to run exactly as drafted, perhaps for legal reasons.

So far, the apology has been written up by USA Today.


LinkedIn acquires major fact checking patents

Lucas Myslinski was tired of having to fact check the questionable emails his father often forwarded to him.

“My dad would send these emails where they say something like, ‘Oh the government is stockpiling billions of dollars of ammunition’ and other things like that, where if all you would do is take a little time and look on Snopes you would find it’s not true,” Myslinski said.

That very problem has inspired projects such as LazyTruth, Truth Goggles, and Trooclick, all of which I wrote about last week, as well as the Washington Post’s Truth Teller. There’s a broad consensus that in a world of abundant, and often incorrect, information it would be valuable to have an app that “automatically monitors, processes, fact checks information and indicates a status of the information.”
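As a toy illustration of that idea (monitor a piece of text, check it against known claims, attach a status), here’s a sketch of my own, not anything drawn from the patent filings:

```python
# A toy sketch of the concept, not code from the patents: scan incoming
# text for known claims and attach a status. The claims table is a
# hypothetical stub (think an automated Snopes lookup).
KNOWN_CLAIMS = {
    "the government is stockpiling billions of dollars of ammunition": "false",
}

def fact_check_email(body):
    """Return (claim, status) pairs for every known claim found in the text."""
    text = body.lower()
    return [(claim, status) for claim, status in KNOWN_CLAIMS.items()
            if claim in text]

email = ("FW: FW: Did you know the government is stockpiling "
         "billions of dollars of ammunition??")
for claim, status in fact_check_email(email):
    print(f"{status.upper()}: {claim}")
```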

Myslinski sketched out his ideas and then took the step of patenting them. The above quote is in fact taken from one of his many patent filings and summarizes the core of the systems he has imagined and diagrammed over the last few years.

“I filed the initial ones and then as I had new ideas I attached them to it and kind of kept growing it,” he told me by phone this week.

As a result, since 2012 Myslinski has been awarded eight U.S. patents related to fact-checking systems. It’s arguably the largest portfolio of fact-checking patents in the U.S., and perhaps the world.

Filing for patents is Myslinski’s day job. He began his career as a software engineer and is today a patent attorney with the Silicon Valley firm Haverstock and Owens, L.L.P.

A patent attorney in Silicon Valley holding eight fact-checking patents is interesting enough on its own. But it’s what Myslinski did in March of this year that makes these patents even more notable.

That month, he transferred ownership of all of his fact-checking patents to a major Silicon Valley company, though perhaps not the first one you’d think of: LinkedIn.

Yes, the juggernaut of professional networking and recruiting is now the owner of perhaps the most significant portfolio of fact-checking patents.

I asked Myslinski what LinkedIn plans to do with his former patents.

“You know, I don’t know,” he told me. “I haven’t had any real discussions about what their plans are for it.” (Some entirely speculative thoughts from me are below.)

I contacted LinkedIn for comment and, not surprisingly, they didn’t offer any specifics, either.

“We are a fast growing Internet company and it’s not uncommon for us to expand our patent portfolio,” said spokesman Doug Madey in an emailed response. He also declined to name the cost of the acquisition.

I asked if LinkedIn planned to use these patents for product development and Madey said, “Our patent acquisitions do not necessarily foreshadow new product innovations.”

Mark Lemley, director of the Stanford Program in Law, Science, and Technology and a partner at Durie Tangri LLP, listed three main reasons why a company like LinkedIn would buy patents:

(1) to try to shore up legal rights in a product space they consider important, (2) to resolve a claim that they are infringing those patents, and (3) because they think the patents will be useful to target a competitor or someone who is in turn threatening to sue them.

Michael Carrier, an intellectual property expert and distinguished professor at Rutgers School of Law, said LinkedIn’s acquisition likely has more to do with its competitors, rather than a specific interest in fact-checking.

“Companies acquire any patents that they think they can use against competitors,” he said. “LinkedIn must believe that it will be able to use these patents against rivals.”

For his part, Myslinski said he sought out a patent broker to sell his portfolio because he realized he wasn’t going to be able to turn the patents into a real product.

“First I focused on the patents and then I did have a developer develop a prototype, a very basic one,” he told me. “But then you know with just life and everything going on I figured it would probably be best to see what I could get out of it in terms of monetizing.”

The Patents

LinkedIn now owns these fact-checking patents (ordered by most recently granted):

  1. Method of and system for fact checking with a camera device
  2. Method of and system for fact checking email
  3. Social media fact checking method and system
  4. Web page fact checking system and method
  5. Method of and system for fact checking rebroadcast information
  6. Fact checking method and system
  7. Fact checking methods
  8. Fact checking method and system

There are also some open applications, including this one, which was just made public last week. It’s for a “Fact checking Graphical User Interface Including Fact Checking Icons,” and builds on the existing patents by introducing claims related to a user interface that displays the results of fact checking claims.

Here, for example, is one drawing from that filing: a pair of “fact checking glasses.”

More important than the newly published application is the core patent in the portfolio, “Fact checking method and system,” which was granted in May of 2012.

That patent’s claims, in my view, represent the kind of systems being used, at least in part, by the aforementioned existing efforts in the world of automated/real-time fact checking.

Myslinski said he is aware of Truth Teller. I asked if he felt the project infringes on the patents. He hesitated before answering. “That would be up to [LinkedIn] to decide.”

I also asked LinkedIn. “We do not comment on intellectual property implications outside of the case of an active lawsuit,” was their answer.

That 2012 patent outlines Myslinski’s vision of a checking system. Here’s what he wrote about the benefits of the system:

The fact checking system will provide users with vastly increased knowledge, limit the dissemination of misleading or incorrect information, provide increased revenue streams for content providers, increase advertising opportunities, and support many other advantages.

The patent’s specification includes a myriad of potential applications, from checking basic facts to alerting TV viewers to political bias on the part of a commentator; it also imagines ways that viewers could flag items that need to be fact checked. The basics of the system are outlined in a diagram in the filing.

Again, that’s very basic. And, again, it arguably applies to how Truth Teller and others do their work… but that’s my non-legal opinion. (I’ll also state that my hope is these patents would never be used to stop efforts to develop fact-checking applications and systems.)

If Carrier, the patent expert, is correct and LinkedIn wants these patents mainly to use against competitors, then it’s important to consider who falls into their competitive set. Social networks, as well as jobs websites, are certainly competitors. (And when I saw those glasses I of course thought of Google Glass.)

But so too are publishers and other online information providers.

LinkedIn the publisher

LinkedIn has in a very short time become a major online publisher. The first big step in this direction came in the form of the purchase of Pulse, a news reader app that has since been revamped to power LinkedIn Today, a section of the site where the Pulse algorithm helps surface relevant content in a variety of industry and topic areas.

LinkedIn also has a small editorial team led by Dan Roth, formerly of Fortune. (Disclosure: when I was working at Spundge, a start-up, I met with Roth and a member of his team, and demoed our product.)

One of the biggest editorial efforts at LinkedIn is its Influencers program that has influential executives, entrepreneurs and others contribute content to the site. A more recent evolution is the expansion of LinkedIn’s CMS to enable anyone to write and publish content on LinkedIn.

That context makes the acquisition seem more in tune with LinkedIn’s editorial efforts. If LinkedIn actually wanted to use these patents for innovation, an obvious step would be to integrate fact-checking into its Pulse content algorithm. Then it could conceivably begin to offer professionals a feed of the most important and accurate information in their given industry.

That would save people time, and saving busy professionals time is a powerful value proposition. Of course, it would also bring people back to the site in a way that’s more effective and less annoying than all the “It’s Jane Doe’s birthday” LinkedIn emails.

And if LinkedIn can build an algorithm and system that reliably surfaces the most accurate content about a given topic, then that’s also a powerful tool to help scale its LinkedIn Today curation efforts – without requiring additional human editors. (Sorry folks!)

But the above is of course speculation on my part. Maybe even wishful thinking, given my affection for fact-checking. It’s entirely possible, and probably more likely, that LinkedIn simply wants to keep these patents in the chamber should they ever need to fire upon competitors.

If that’s the case, I hope the promising efforts in this emerging space don’t end up being collateral damage.


BuzzFeed faceplants in hockey story, then makes an amusing correction

When the Los Angeles Kings won the Stanley Cup at home, lots of significant others, officials, and other folks came down to the ice to celebrate. As reported by BuzzFeed and Uproxx, this resulted in a remarkable faceplant by one woman who unwisely wore high heels on the ice:

That slip-up caused another: BuzzFeed’s post mistakenly said the Kings are based in Sacramento, rather than L.A. (Sacramento’s basketball team is called the Kings.) That resulted in this amusing correction:

This post originally identified the Kings as being from Sacramento, not Los Angeles. The author clearly cares much more about faceplants than sports. We regret the error.

It’s funny, and it conforms to the recently implemented BuzzFeed correction policy, which I previously wrote about. Among other things, the style guide advises that a correction to an error in a lighthearted post can match the post’s tone:

The correction’s tone should echo the tone of the item, in keeping with its gravity. For a factual error in, say, a funny list, the language can be fairly colloquial and even humorous as long as it contains the basic building blocks — “we got something wrong, and here is the correct information”; whereas for a news error, the language should be more sober and direct. A dumb mistake on a list of weird facts about Love Actually can begin: “GAH.” An error of fact in a news story should usually be labeled “CORRECTION.”
