The Huffington Post has accumulated more than 70 million comments so far this year, far surpassing the 2011 total of 54 million.
All news sites struggle with user comments in one way or another, but moderating this enormous volume of messages is a unique test of whether online conversations as we know them — a dozen people making a few points on a blog post or article — can scale vastly larger without collapsing into cacophony.
I asked Justin Isaf, director of community at The Huffington Post, some questions about how the site creates valuable discussions and enforces community standards with up to 25,000 comments per hour.
Poynter: One of the big questions is, does community building around online news “scale”? I.e., when 100,000 people comment on a post, can you really still have an exchange of ideas and a true conversation — or do you just have 100,000 people talking in volume too great for anyone to listen? Does it really create a community that knows each other and learns with each other?
Justin Isaf: You can definitely have a meaningful community at scale. Seventy percent of our comments are replies to other people, even on articles with 100,000 comments. People are actually paying attention to each other and having interesting discussions.
This is actually a really interesting question and I’d love to nerd out about it for a minute. There are really two main challenges to community at scale. The biggest challenge is the bad apple problem, eloquently stated in “Godwin’s Law” (“As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1”). The second, which only really becomes a problem if you have solved the first, is one of discovery – how do you find the conversation that you, as a reader, will be interested in when there are 185,000 comments on an article?
Godwin’s Law is all about that one guy who comes in and just makes things less enjoyable to the point where your good members leave. It’s a bigger problem at scale, but it is an issue at all stages of a community.
For us, the solution has been to work really hard to keep the community safe and enjoyable by investing significant time and energy into pre-moderation to keep those bad actors out… Our belief boils down to a very simple “if you are intentionally or consistently making this a less enjoyable place to be, you and your comments may be removed from it.”
Our approach to the other problem – finding the right discussion for you — has been both communal and technological. First, the “fan” network on HuffPost allows people to watch what their friends are talking about and to easily engage with new people they find interesting. We want to make sure that old friends can find each other easily, but at the same time introduce them to new people so they are always expanding their community and network.
The technology solution to connecting the right people involves LOTS of data crunching, smart algorithms and some elegant design. For example, our Pundit program on the Politics vertical — which highlights comments and conversations that are going to be generally more interesting to a large audience — leads to some of the most engaging and deep conversations on the site even when there is a high volume.
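Isaf doesn’t describe how the Pundit program actually ranks conversations, so as a purely hypothetical illustration, here is the kind of engagement-over-time scoring often used to surface active discussions. The weights, field names, and `gravity` exponent are all invented for this sketch, loosely modeled on the well-known Hacker News ranking formula, and are not HuffPost’s algorithm:

```python
# Hypothetical sketch: surface "interesting" comment threads by scoring
# engagement against an age penalty, so busy, recent conversations rise.
import math

def conversation_score(replies, favorites, age_hours, gravity=1.8):
    """Engagement divided by a time-decay term; higher gravity sinks
    older threads faster."""
    engagement = replies + 2 * favorites  # invented weighting
    return engagement / math.pow(age_hours + 2, gravity)

threads = [
    {"id": "a", "replies": 40, "favorites": 5, "age_hours": 30},
    {"id": "b", "replies": 12, "favorites": 8, "age_hours": 2},
    {"id": "c", "replies": 3,  "favorites": 0, "age_hours": 1},
]
ranked = sorted(
    threads,
    key=lambda t: conversation_score(t["replies"], t["favorites"], t["age_hours"]),
    reverse=True,
)
```

Under this toy scoring, the two-hour-old thread with moderate engagement outranks the day-old thread with heavy engagement, which is the general shape of any "highlight what's interesting right now" system.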
Focusing on these two issues — creating a safe, enjoyable space, and helping people find content that is relevant to them — has allowed us to manage the growth and scale of a true community. It’s also helped us retain users amazingly well. Over 1,000 of the community members who signed up in the first six months of HuffPost’s existence still comment more than once a week today.
Clearly that kind of effort also takes significant resources to maintain, right? Could you tell me more about the comment moderation team — the number of people, when they work, what they do? And what technological solutions like filters and algorithms help automate some of that work?
Isaf: Now, we’re a bigger team with the equivalent of about 30 full time moderators. They work 24/7/365 in six-hour shifts going through hundreds of comments per hour each. They’re all based in the U.S. or in the country of the edition they are working on so that they get local cultural context and conversations. It’s a very specific skill set and takes a certain mentality to do it well. I am constantly in awe of this team.
They’re backed up by some of the best tech in the industry. It would take a really long time to explain all the different technologies that go into the moderation flow, but at the core is Julia. She’s an artificial intelligence machine that helps the mods out. She’s a very cool bit of tech that we acquired about 2 years ago and have expanded on a lot since then.
I’d love to hear more about “Julia.” What role does she play in moderating comments? Where did that technology come from, and is it available to anyone else to adopt? Does she do semantic analysis? Does she “learn” over time? What inspired the name?
Isaf: She was part of an acquisition of a company called Adaptive Semantics that created her and the original tech behind it all. It was a two-person team, Jeff Revesz and Elena Haliczer. Elena is now HuffPost’s VP of product, and ongoing Julia development is done by some very smart people with Ph.D.s in stuff I will never understand.
Julia is a machine learning algorithm (JuLiA stands for “Just a Linguistic Algorithm”) that we’ve taught to understand several languages and that we continue to teach on an ongoing basis (yes, she learns over time). She reads everything submitted to HuffPost and helps the moderators do their jobs faster and more accurately. We’ve really done a lot with machine-assisted moderation, allowing us to pre-moderate 9.5 million comments a month, and Julia is core to that.
I’m a big fan of having machines help us with the lower level tasks, freeing up time, resources and brain power for more interesting and complex tasks. Julia takes that a few steps further and helps us with a lot of other aspects of HuffPost in addition to helping weed out abusive members, including identifying intelligent conversations for promotion, and content that is a mismatch for our advertisers. She has allowed us to do a lot more with a lot less.
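Isaf doesn’t reveal Julia’s internals, but the workflow he describes (a learning classifier that reads every submitted comment, makes the easy calls itself, and leaves the uncertain ones to human moderators) can be illustrated with a toy naive Bayes filter. Everything below, including the training phrases and the `human_review` margin, is invented for the sketch and is not HuffPost’s actual system:

```python
# Toy sketch of machine-assisted pre-moderation; NOT HuffPost's "Julia".
# A naive Bayes classifier scores each comment as "ok" or "abusive" and
# routes borderline (gray-area) comments to human moderators.
import math
from collections import Counter

class CommentClassifier:
    def __init__(self):
        self.word_counts = {"ok": Counter(), "abusive": Counter()}
        self.doc_counts = {"ok": 0, "abusive": 0}

    def train(self, text, label):
        """Count words from a labeled example (the 'learns over time' part)."""
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def _log_prob(self, text, label):
        # Laplace-smoothed log probability of the comment under `label`.
        total = sum(self.word_counts[label].values())
        vocab = len(set(self.word_counts["ok"]) | set(self.word_counts["abusive"]))
        lp = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        for word in text.lower().split():
            lp += math.log((self.word_counts[label][word] + 1) / (total + vocab))
        return lp

    def moderate(self, text, margin=1.0):
        """Publish, reject, or escalate when the two scores are too close."""
        ok = self._log_prob(text, "ok")
        abusive = self._log_prob(text, "abusive")
        if abs(ok - abusive) < margin:
            return "human_review"  # gray-area comment: a moderator decides
        return "publish" if ok > abusive else "reject"

# Tiny invented training set; a real system would learn from millions of
# past moderator decisions.
clf = CommentClassifier()
for text in ("great article thanks for sharing",
             "i really enjoyed this thoughtful piece",
             "interesting discussion thanks"):
    clf.train(text, "ok")
for text in ("you are an idiot and a troll",
             "shut up you idiot troll",
             "stupid idiot go away"):
    clf.train(text, "abusive")
```

The escalation margin is the design point that matters here: the machine handles the clear-cut volume, and the ambiguous remainder is exactly what the human team describes taking on in the next answer.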
So, given the pretty advanced capabilities of “Julia,” what’s left for the human moderators to take care of? Are they dealing with the gray-area comments that the computer wasn’t sure about?
Isaf: Our human team deals with a lot of issues. Those definitely include gray-area comments that Julia isn’t sure about, but also dealing with higher level and special care moderation, user issues, auditing and quality care, etc.
- Julia has freed up resources to move into our Community Editor program – Editors who focus on creating content and engaging our audience with the explicit goal of building a denser community of people around content that they care about, with like-minded people. It’s very hands-on, fundamentally human, and has created amazing groups of individuals who now have a home where they can share their passion with similar people they otherwise may never have met.
- When controversial figures die or are hospitalized, we try very hard to make sure that threads are appropriate and aren’t taken over by people who are overly vocal about their dislike of the person. It’s the kind of care that machines don’t quite get yet, but these are still people with feelings, family, friends, loved ones, and we take it as part of our responsibility to make sure that conversations on HuffPost take that into account.
- We have several stories and sections of the site that are sadly controversial, and some that are sensitive – our Voices verticals, Teen, articles about people’s appearance, etc. On these verticals we want more human involvement, because the stories touch so many real lives and affect real feelings, and we want people with feelings and lives of their own handling them. Julia is the sensitive type, but she doesn’t always “get it” when it’s emotional.
Lastly, how do you think about trying to surface the very best comments out of the thousands you get? Or more broadly, can you design a comment system that is optimized for the people reading the comments, not for the people leaving the comments?
Isaf: There’s often a trade-off in design choices — people reading and people conversing often have different needs.
You can (and should) definitely optimize for reading, especially considering that’s a big part of a community (see the “1-9-90 rule”). But at the same time, if you optimize for the people who don’t converse, then you risk reducing the amount of stuff there is to read.
We take the approach that we want to build a platform through which our community members can engage with each other incredibly easily, but which also makes it very easy for first-time readers to find the best engagement points and most interesting conversations.
It’s a balancing act to be sure, and we’ve tracked both qualitative and quantitative data over the years to make sure that we’re creating the most engaging and robust platform and community we can. Our approach might not work in every case, but for us, it has worked rather well.
Earlier: HuffPost values conversation and community more than content | HuffPost uses badges to empower users as moderators.