Articles about "User commenting"


Seattle Times columnist can’t stand commenters, retires

Seattle Weekly
Seattle Times sports columnist Steve Kelley has standard reasons for retiring at 63: “I find myself at a lot more games thinking ‘I’ve written this story 411 times now. Isn’t that enough?’” he tells Seattle Weekly contributor Rick Anderson.

But another complaint puts him squarely in league with former Ohio Rep. Steve LaTourette and fans of science writing: “The reader comments section, it’s a free-for-all,” Kelley said.

“The level of discourse has become so inane and nasty. And it’s not just at the Times, it’s ESPN, everywhere – people, anonymous people, take shots at the story, writers, each other. Whatever you’ve achieved in that story gets drowned out by this chorus of idiots.”

Kelley says he won’t write a farewell column. His last column will run near the end of January, Anderson says.

Researchers: Online commenters impair readers’ scientific literacy

Milwaukee Journal Sentinel
People who read newspaper and magazine reports on science “may be influenced as much by the comments at the end of the story as they are by the report itself,” a study by University of Wisconsin-Madison researchers says.

The 2,000 subjects who read “a balanced news report about nanotechnology” saw either civil or rowdy comments, Mark Johnson reports in the Milwaukee Journal Sentinel.

“Disturbingly, readers’ interpretations of potential risks associated with the technology described in the news article differed significantly depending only on the tone of the manipulated reader comments posted with the story,” wrote authors Dominique Brossard and Dietram A. Scheufele.


How the Huffington Post handles 70+ million comments a year

The Huffington Post has accumulated more than 70 million comments so far this year, far surpassing the 2011 total of 54 million.

To take a single example, its post (the first published) with the now-famous video of Mitt Romney’s “47 percent” comments attracted a mind-blowing 171,753 comments.

All news sites struggle with user comments in some way or another, but moderating this enormous volume of messages is a unique test of whether online conversations as we know them — a dozen people making a few points on a blog post or article — can scale infinitely larger without collapsing into cacophony.

User comments on Huffington Post articles have surged over the years, to about 8 million in July and 9 million (not pictured) in August. (Chart and data courtesy of The Huffington Post)

I asked Justin Isaf, director of community at The Huffington Post, some questions about how the site creates valuable discussions and enforces community standards with up to 25,000 comments per hour.

Poynter: One of the big questions is, does community building around online news “scale”? I.e., when 100,000 people comment on a post, can you really still have an exchange of ideas and a true conversation — or do you just have 100,000 people talking in volume too great for anyone to listen? Does it really create a community that knows each other and learns with each other?

Justin Isaf: You can definitely have meaningful community at scale. 70% of our comments are replies to other people, even on articles with 100,000 comments. People are actually paying attention to each other and having interesting discussions.

This is actually a really interesting question and I’d love to nerd out about it for a minute. There are really two main challenges to community at scale. The biggest challenge is the bad apple problem, eloquently stated in “Godwin’s Law” (“As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1”). The second, which only really becomes a problem if you have solved the first, is one of discovery: how do you find the conversation that you, as a reader, will be interested in when there are 185,000 comments on an article?

Justin Isaf worked at Change.org before joining The Huffington Post.

Godwin’s Law is all about that one guy who comes in and just makes things less enjoyable to the point where your good members just leave. It’s a bigger problem at scale, but is an issue at all stages of a community.

For us, the solution has been to work really hard to keep the community safe and enjoyable by investing significant time and energy into pre-moderation to keep those bad actors out… Our belief boils down to a very simple “if you are intentionally or consistently making this a less enjoyable place to be, you and your comments may be removed from it.”

Our approach to the other problem – finding the right discussion for you — has been both communal and technological. First, the “fan” network on HuffPost allows people to watch what their friends are talking about and to easily engage with new people they find interesting. We want to make sure that old friends can find each other easily, but at the same time introduce them to new people so they are always expanding their community and network.

The technology solution to connecting the right people involves LOTS of data crunching, smart algorithms and some elegant design. For example, our Pundit program on the Politics vertical — which highlights comments and conversations that are going to be generally more interesting to a large audience — leads to some of the most engaging and deep conversations on the site even when there is a high volume.

Focusing on these two issues — creating a safe, enjoyable space, and helping people find content that is relevant to them — has allowed us to manage the growth and scale of a true community. It’s also helped us retain users amazingly well. Over 1,000 of the community members who signed up in the first six months of HuffPost’s existence still comment more than once a week today.

Clearly that kind of effort also takes significant resources to maintain, right? Could you tell me more about the comment moderation team — the number of people, when they work, what they do? And what technological solutions like filters and algorithms help automate some of that work?

Isaf: Now, we’re a bigger team with the equivalent of about 30 full time moderators. They work 24/7/365 in six-hour shifts going through hundreds of comments per hour each. They’re all based in the U.S. or in the country of the edition they are working on so that they get local cultural context and conversations. It’s a very specific skill set and takes a certain mentality to do it well. I am constantly in awe of this team.
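The numbers in the interview make the case for machine assistance on their own. A quick back-of-envelope check, using only figures stated in this article (about 30 full-time moderators, each reviewing "hundreds" of comments per hour, against peaks of up to 25,000 comments per hour; the 300-per-hour midpoint is an assumption for the sketch):

```python
# Back-of-envelope throughput check using figures from the interview.
# "Hundreds per hour" per moderator is assumed to be ~300 here.
moderators = 30
comments_per_mod_per_hour = 300
peak_comments_per_hour = 25_000

# Best case: every moderator reviewing simultaneously.
human_capacity = moderators * comments_per_mod_per_hour  # 9,000/hour
shortfall = peak_comments_per_hour - human_capacity      # 16,000/hour

print(f"Human capacity at peak: {human_capacity}/hour")
print(f"Uncovered at peak:      {shortfall}/hour")
```

Even if every moderator were on shift at once, humans alone would cover only about a third of peak volume, which is why the machine-assisted triage described below matters.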

They’re backed up by some of the best tech in the industry. It would take a really long time to explain all the different technologies that go into the moderation flow, but at the core is Julia. She’s an artificial intelligence machine that helps the mods out. She’s a very cool bit of tech that we acquired about 2 years ago and have expanded on a lot since then.

I’d love to hear more about “Julia.” What role does she play in moderating comments? Where did that technology come from, and is it available to anyone else to adopt? Does she do semantic analysis? Does she “learn” over time? What inspired the name?

Isaf: She was part of an acquisition of a company called Adaptive Semantics that created her and the original tech behind it all. It was a two-person team, Jeff Revesz and Elena Haliczer. Elena is now HuffPost’s VP of product, and ongoing Julia development is done by some very smart people with Ph.Ds in stuff I will never understand.

Julia is a machine learning algorithm (JuLiA stands for “Just a Linguistic Algorithm”) that we’ve taught to understand several languages and that we continue to teach on an ongoing basis (yes, she learns over time). She reads everything submitted to HuffPost and helps the moderators do their jobs faster and more accurately. We’ve really done a lot with machine-assisted moderation, allowing us to pre-moderate 9.5 million comments a month, and Julia is core to that.
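Julia itself is proprietary, but the general kind of system Isaf describes — a text classifier trained on labeled comments that scores new submissions — can be sketched in a few lines. This is an illustrative toy only: the training data, tokenizer, and labels below are invented for the example and bear no relation to HuffPost's actual models.

```python
import math
from collections import Counter

# Toy Naive Bayes comment classifier, the general family of technique
# behind machine-assisted moderation. Training data is invented.
TRAIN = [
    ("great point, thanks for sharing", "ok"),
    ("interesting article, I learned a lot", "ok"),
    ("you are an idiot and your writing is garbage", "abusive"),
    ("shut up, nobody wants your stupid opinion", "abusive"),
]

def tokenize(text):
    # Deliberately naive whitespace tokenizer.
    return text.lower().split()

word_counts = {"ok": Counter(), "abusive": Counter()}
class_counts = Counter()
for text, label in TRAIN:
    class_counts[label] += 1
    word_counts[label].update(tokenize(text))

def score(text, label):
    """Laplace-smoothed log-probability of the class given the text."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total = sum(word_counts[label].values())
    logp = math.log(class_counts[label] / sum(class_counts.values()))
    for w in tokenize(text):
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

def classify(text):
    return max(word_counts, key=lambda label: score(text, label))

print(classify("thanks, great article"))           # → ok
print(classify("your stupid opinion is garbage"))  # → abusive
```

A production system like the one described would add per-language models, continuous retraining on moderator decisions ("she learns over time"), and far richer features, but the core loop — score every incoming comment, act on the confident cases — is the same shape.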

I’m a big fan of having machines help us with the lower level tasks, freeing up time, resources and brain power for more interesting and complex tasks. Julia takes that a few steps further and helps us with a lot of other aspects of HuffPost in addition to helping weed out abusive members, including identifying intelligent conversations for promotion, and content that is a mismatch for our advertisers. She has allowed us to do a lot more with a lot less.

So, given the pretty advanced capabilities of “Julia,” what’s left for the human moderators to take care of? Are they dealing with the gray-area comments that the computer wasn’t sure about?

Isaf: Our human team deals with a lot of issues. Those definitely include gray-area comments that Julia isn’t sure about, but also dealing with higher level and special care moderation, user issues, auditing and quality care, etc.

For example:

  • Julia has freed up resources to move into our Community Editor program – Editors who focus on creating content and engaging our audience with the explicit goal of building a more dense community of people around content that they care about, with like-minded people. It’s very hands on, fundamentally human and has created amazing groups of individuals who now have a home where they can share their passion with similar people they otherwise may never have met.
  • When controversial figures die or are hospitalized, we try very hard to make sure that threads are appropriate and aren’t taken over by people who are overly vocal about their dislike of the person. It’s the kind of care that machines don’t quite get yet, but these are still people with feelings, family, friends, loved ones, and we take it as part of our responsibility to make sure that conversations on HuffPost take that into account.
  • We have several stories and sections of the site that are sadly controversial and some that are sensitive – our Voices verticals, Teen, articles about people’s appearance, etc. In many cases on these verticals, we want to make sure that we have more human involvement because they touch so many real lives, and affect real feelings. So we want to make sure people with feelings and lives are involved. Julia is the sensitive type, but she doesn’t always “get it” when it’s emotional.
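The division of labor Isaf describes — the machine handles clear cases, humans take the gray area — corresponds to simple confidence-threshold routing. A hypothetical sketch (the scores and cutoffs are invented; the real system's policies are not public):

```python
# Hypothetical confidence-threshold routing for comment moderation.
# Thresholds are invented for illustration.
def route(abuse_score, auto_reject=0.9, auto_approve=0.1):
    """Route a comment based on a classifier's abuse probability.

    Confident cases are handled automatically; the uncertain middle
    band goes to the human moderation queue.
    """
    if abuse_score >= auto_reject:
        return "auto-reject"
    if abuse_score <= auto_approve:
        return "auto-approve"
    return "human-review"

for score in (0.02, 0.55, 0.97):
    print(score, "->", route(score))
```

Widening or narrowing the middle band is the operational lever: a wider band sends more comments to humans and fewer mistakes get automated, at the cost of moderator time.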

Lastly, how do you think about trying to surface the very best comments out of the thousands you get? Or more broadly, can you design a comment system that is optimized for the people reading the comments, not for the people leaving the comments?

Isaf: There’s often a trade-off in design choices — people reading and people conversing often have different needs.

You can (and should) definitely optimize for reading, especially considering that’s a big part of a community (see the “1-9-90 rule”). But, at the same time if you’re optimizing for the people who don’t converse then you risk reducing the amount of stuff there is to read.

We take the approach that we want to build a platform that is incredibly easy for our community members to engage with each other through, but which also makes it very easy for first time readers to find the best engagement points and most interesting conversations.

It’s a balancing act to be sure, and we’ve tracked both qualitative and quantitative data over the years to make sure that we’re creating the most engaging and robust platform and community we can. Our approach might not work in every case, but for us, it has worked rather well.

Earlier: HuffPost values conversation and community more than content | HuffPost uses badges to empower users as moderators.

How journalists can turn their stories into conversations

Social media have made it easier than ever for journalists to engage their readers in conversation. They’ve also changed the way we think about other, “nonsocial” media.

Maybe that’s why many journalists have given up on monitoring our comment sections. The philosophical justification goes something like this: Journalists can’t justifiably restrict the free speech of their readers while relying on it for their work. The practical argument is more pessimistic: Resources are limited now more than ever, and journalists can’t afford to invest in comment sections without guaranteed returns.

The alternative model has been popularized by Ta-Nehisi Coates, who blogs at The Atlantic. There, he gives his blog away to his readers several times a week (“It’s yours…”), but not before insisting that they live up to a civil standard. Even so, these 13-character posts routinely attract hundreds of insightful comments. Coates also maintains a book club. He crowd-sources stories. He even landed one of his commenters a job as a correspondent for The Atlantic. Unlike other social media, comment sections build strong reader loyalty. So where do we start?

Cultivate an inquisitive culture that appreciates doubt

Arrogance and assertions stifle conversation. Intellectual honesty consists of many questions and few certainties, and sincere commenters should be comfortable admitting their ignorance (“talk to me like I’m stupid”). This demonstrates vulnerability, which other commenters are likely to exploit unless a moderator intervenes. Your readers are relying on you to foster a safe environment for them.

Once readers trust their peers, more questions will be asked, smart conversations will be struck and intimate relationships will be built that bring readers back for more. Over time, they will become stakeholders in this new community and uphold its values themselves. Readers will visit your website because of other readers. But you need to set an example for them first.

Reader-driven websites like Reddit and Quora are developing grassroots models for knowledge dissemination, and journalists should take notice. Redditors keep themselves in line with a harsh social code, whereas Quora employs moderators who help guide discussion, encourage curiosity and positively reinforce good behavior.

Commenters reflect content

Sensationalistic headlines attract sensationalistic readers. If you don’t like your commenters’ attitudes, you should consider what’s bringing them to your website and making them feel the way they do. Your product could be part of the problem.

Journalists are not to blame for everything that happens in their comment sections, but they are responsible for the behavior they allow to persist. If it’s on your website, you own it. The content you publish sets a rhetorical standard, and your readers should be held to it.

When Apple unveiled its iPhone 5 earlier this month, CNET covered the event with a live blog that put readers, who were anxiously speculating and asking questions, in the same discursive space as writers, who were posting photos and updates from the event while responding to readers’ requests. Tearing down the wall between readers and reporters shows our consumers that we expect them to produce too. Commenters reflect content because in this new model comments are content.

Restrict attitudes, not words

Banning hate speech and four-letter words is not enough. Determining what merits deletion and more severe discipline will require discretion. You will make mistakes, and that’s okay. Identifying straw-man arguments, leading questions, misleading statements and tangential topics is not an exact science, yet these things don’t belong in constructive comment sections. Determining how many strikes gets a commenter banned is also up to you, but a strict policy should be enforced. Anything ad hominem is unacceptable, as are intellectual dishonesty and intimidation. Lots of people will fail to follow these rules, and even productive commenters will need occasional reminders.

Ignoring readers is a luxury of privileged writers at prominent news organizations. The best examples of reader engagement are often found on independent blogs — where it’s an existential matter, not abstract policy. John Scalzi has been blogging at Whatever for over a decade, where he literally teaches his readers how to be good commenters. His comment policy is broad in its scope and subjectivity:

A good rule of thumb is to comment as if the person to whom you are commenting is standing in front of you, is built like a linebacker, and has both a short temper and excellent legal representation.

One-time fixes won’t substitute for regular maintenance

When Talking Points Memo switched to Facebook’s commenting system earlier this year, Josh Marshall wrote that he hoped restricting anonymity would instill a new sense of accountability in the website’s commenters, whose posts would be tied directly to their identities:

Out in the real public square, the fact that people know who we are places some limits on how abusive, disruptive or anti-social we might be. The commenting world has very little of that. And it shows.

That logic makes sense, yet it didn’t effect dramatic change. Associating readers’ real-life identities with their comments largely failed to discourage abusive behavior.

Coates echoed the same finding last month on Reddit, adding that identity mandates might put some readers in danger: “One problem is you immediately get into a situation of dudes stalking [a]nd harassing women.”

It’s difficult to anticipate how commenters will react to policy change. Commenting policy is an ongoing experiment within and between news organizations. Whether participation in that experiment happens on the company-wide or individual level, it requires meaningful interaction with readers. Authors should be adding value to their comment sections by engaging readers as they remove detracting posts.

Coates says he spends at least as much time moderating his blog and talking with readers as he does writing for it. That formula may be different for other media. Some may find interns capable of fulfilling that responsibility. Most importantly, any kind of commitment is better than laissez-faire disregard.

Being a 21st century journalist means curating news. It should also mean curating comments. News media have the means to offer an alternative to political polarization — an opportunity to carve out a rhetorical space where substantial and civil conversation dominates. And when journalists invite readers into their professional homes to talk, they get to set the rules.

As Coates put it, “This is not a lunch room. This is a dinner party. Conduct yourselves accordingly.”

Tyler Borchers is a junior studying communication in Ohio University’s Honors Tutorial College. He recently completed a publishing internship at Talking Points Memo. You can follow him on his blog and on Twitter.

NPR, other news orgs tighten comment moderation to improve conversation

NPR.org | MinnPost | Charleston Gazette | Vancouver Sun | MarketWatch
NPR switched its user commenting to the Disqus platform this week, and is increasing its moderation efforts in response to user demand.

It took the unusual step of sending readers an email survey in advance, asking for ideas and feedback about how to improve the commenting system. More than 6,000 responded. The big surprise, social media product manager Kate Myers writes, is that readers called for more comment moderation.

We asked this question in our recent NPR audience survey:


Judge orders Spokesman-Review to ID anonymous commenter

Seattle Weekly | The Spokesman-Review | NPR | Los Angeles Times
An Idaho judge ruled on July 10 that The Spokesman-Review had 14 days to reveal the identity of an online commenter after a Kootenai County politician sued the paper, claiming the commenter libeled her. On July 24 the paper reported that the commenter had revealed herself: Linda Cook, who’s also active in county politics.

Judge John Patrick Luster also waded into the question of whether the staffer who removed Cook’s comment from the newspaper blog was entitled to journalistic protections, Rick Anderson writes in Seattle Weekly:

Idaho doesn’t have a reporter’s shield law to protect sources, and even if it did, Luster said, [Spokesman-Review blogger Dave] Oliveria was not acting as a journalist, in the judge’s view. Oliveria, who removed the comment a few hours after it was posted, was merely the “facilitator of commentary and administrator of the blog.”

Protections thus didn’t apply to the paper, nor to the commenter, the judge said (though he did turn down Jacobson’s request for the names of two other commenters). “While the individuals are entitled to the right of anonymous free speech, this right is clearly limited when abused,” Luster wrote.

The case put the newspaper in the position of defending its website comments, which its columnist Shawn Vestal called a “sewer of stupidity and insults and shallowness.” After the news of the ruling broke, Vestal wrote, the paper’s comments section went nuts.

New study: Real names improve quality of website comments

TechCrunch

A study of South Korean website commenters adds to the debate over whether requiring real names improves online discourse. Gregory Ferenstein writes:

For four years, Koreans enacted increasingly stiff real-name commenting laws: first for political websites in 2003, then for all websites receiving more than 300,000 viewers in 2007; the threshold was finally tightened to 100,000 viewers a year later after online slander was cited in the suicide of a national figure. The policy, however, was ditched shortly after a Korean Communications Commission study found that it only decreased malicious comments by 0.9%. Korean sites were also inundated by hackers, presumably after valuable identities.

The study, he writes, provides some real data to combat the theorizing that using real names fosters better online discourse. His conclusion: “The presence of some phantom judgmental audience doesn’t seem to make us better versions of ourselves.”

Anonymous comments can be ‘a frothing, bubbling cauldron of insanity’

Adweek | Mental Floss
Ryan Broderick has a job I suspect would make me flee the grid after about two days: He’s BuzzFeed’s community manager, responsible for combing through about 22,000 comments a month, reports Adweek’s Charlie Warzel. Broderick says comments, even the worst ones, have a socio-biological explanation:

“There is a social realm where things are rationally sorted and then there’s the anonymous place that brings out a person’s base instincts. It can become a frothing, bubbling cauldron of insanity,” he said. “Yet, you need that animalistic part of yourself. I think of it almost like your sex drive.”

Both Broderick and Huffington Post community manager Justin Isaf defend anonymous commenting, however: “Anonymity can do amazing, extremely creative things if you believe in it,” Broderick says.

Mental Floss’ Chris Higgins spotlights a video from popular vlogger Ze Frank in which he tries to get inside the head of a troll: On a video about optical illusions, Ze Frank says, “Some young gentlemen said they wanted to punch me in the face because my voice was so annoying. I can easily see how someone could find my voice annoying, but an annoying voice doesn’t generally warrant a face-punching.”

Gawker plans a business model based on comments and conversation, not posts and ads

Reuters | GigaOM
Nick Denton is betting that comments will be a big new business for the future of Gawker Media. Felix Salmon explains how Gawker is reinventing comments and plans to sell advertisers the ability to create conversations. In an earlier memo, Denton wrote, “the days of the banner advertisement are numbered. In two years, our primary offering to marketers will be our discussion platform.”

Will it work? Salmon points out the biggest potential flaw: “The problem here, for Denton — and the reason why he got an editorial guy to run this new project — is the old one: how to persuade his websites’ readers to read the sponsored posts and to engage in their comments sections.”

Earlier: Denton’s new advertising system may foreshadow a post-blogger future (Poynter) || Related: The problem with Facebook’s ad model (Technology Review) | Why GM and others fail with Facebook ads (Business Week).