January 27, 2006

In the wake of the blog blowup at WashingtonPost.com, a conversation about anonymity on weblogs has opened up in online news circles and across the wider Web.


At one extreme is a traditionalist point of view that says, “We’ve always required real names on letters to the editor because people should stand behind their words. We should do the same thing on the Internet.”


On the other side is an Internet point of view that says: “That century is long gone — the conversation is unleashed, and if you want to be part of the new world, you have to embrace anonymity.”


But there are more than two sides to this argument. Indeed, anonymity and the use of pseudonyms or “pen names” aren’t the same thing, as consultant Vin Crosbie has highlighted.


And if we’re setting some positive social goals for our Web-based interactions — say, for instance, “advance the cause of participative democracy” or “help people understand one another through conversation” — we need to look carefully at our options and judge them for their practical outcomes.


I’ve experienced five online identity models:




  1. Real, verified, published names. I first encountered this when working with Ziff-Davis Interchange in the pre-Web era. Under the Interchange model, publishers such as Star Tribune Online (where I was founding editor) and the Washington Post Digital Ink operated their own paid-access services. Everyone’s account was charged to a credit card, so the names were genuine. The discussion quality was first-class and behavior was sterling. The downside is that participation is limited and it all seems counter to the economic realities of journalism today. Why would people pay to get into your conversation space when free alternatives abound?

  2. Real names required but not verified. When we moved the Star Tribune's operation to the Web in 1995, we transported the existing discussion culture and the real-name rule — but we had no method for enforcing it. Periodically we'd have incidents, and "Monica Lewinsky" kept showing up, but overall the conversational quality was maintained. (The biggest incident came when Salon booted a long list of troublemakers, who discovered the Star Tribune was running similar software. It was like a plague of locusts, but the local culture eventually prevailed.)

  3. Pseudonyms allowed, tied to unpublished real names. This is the model we selected for BlufftonToday.com, and I think it’s an excellent compromise. Our goal was to get broad participation, and this model helps by offering participants some protection from offline harassment and stalkers. The site has been extraordinarily successful with women in that hard-to-reach 24-to-40 segment. Most participants use pseudonyms (some even blog using the identity of their pets), but some prefer to display their real names. Staffers use real names.

  4. Pseudonyms allowed with complete anonymity. This is the model that was in place at most Morris newspapers when I arrived, and the model that existed at Cox Interactive Media when I was there. There is some “reputation management” effect even with anonymous pseudonyms, but overall those systems tend to be dominated by a small number of users, generally men, who often are aggressive and occasionally abusive in their behavior.

  5. Completely open systems — post under any name you want (or, in Slashdot's case, as "Anonymous Coward"). In my experience, this almost always leads to rampant abuse. There are ways of applying community moderation (Slashdot's system is an example; see the sketch after this list) that push morons and trolls into the background, but I generally wouldn't recommend this model to anyone seeking to build local community. In addition to the interpersonal abuse problem, open systems recently have been overrun by spam.
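For readers unfamiliar with how that kind of community moderation works in practice, here is a minimal sketch in Python of score-threshold filtering in the spirit of Slashdot's approach. It is illustrative only, not Slashdot's actual implementation: the score range, the default threshold, and all the names are assumptions made for the example.

```python
# Sketch of score-threshold community moderation (illustrative; the -1..5
# score range and default threshold of 1 are assumptions, not Slashdot's code).

from dataclasses import dataclass

@dataclass
class Comment:
    author: str          # pseudonym, real name, or "Anonymous Coward"
    text: str
    score: int = 1       # baseline score; community moderators nudge it up or down

def moderate(comment: Comment, delta: int) -> None:
    """Apply a moderation point, clamped to the assumed -1..5 range."""
    comment.score = max(-1, min(5, comment.score + delta))

def visible(comments: list[Comment], threshold: int = 1) -> list[Comment]:
    """Return only comments at or above the reader's chosen threshold."""
    return [c for c in comments if c.score >= threshold]

# Example: marked-down posts drift below the default view; marked-up posts stay prominent.
thread = [Comment("jdoe", "Thoughtful reply"), Comment("Anonymous Coward", "flamebait")]
moderate(thread[0], +2)   # community marks the reply up
moderate(thread[1], -2)   # community marks the flamebait down
for c in visible(thread):
    print(c.author, c.score)
```

The point of the mechanism is that abusive posts aren't deleted, they simply sink below the threshold most readers use, which is why such systems can function even with complete anonymity.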


Too much focus on the identity issue may obscure some other, equally important factors that affect the quality of online conversation. There are many well-led anonymous conversation systems on the Internet that have high participation and good behavior.


It helps to pay attention to these social factors:




  • Set clear goals for your site, and communicate them effectively. What are you trying to accomplish? Do your users understand those goals? If they don’t understand them, they can’t help you reach them. A clear, simple mission statement is a good place to start.

  • Ask your users for help. This doesn’t necessarily mean creating user-moderators; it can be as simple as asking your users to lead others by modeling good behavior and not getting caught up in destructive conflicts. A simple request often works wonders.

  • Set clear rules for your site. Nobody can follow the rules if they’re not easy to understand.

  • Participate in the conversation with the right tone and touch. Visible staff participation and leadership, along with consistently gentle oversight and management, make all the difference in the world.

  • Intervene when necessary. There’s nothing wrong and much that’s right about removing posts or people who are destructive to your goals.


There is one other model that I occasionally encounter: enforced pre-publication review. If you’re using a Web site primarily as an upload system for a print product, it’s a fine approach. But I don’t know of a single site that has been able to build a positive interactive conversational environment that way. The model simply doesn’t work.


What strategies have you used to effectively moderate online discussions?

Steve Yelvington is an internet strategist for Morris DigitalWorks, the Internet division of privately held Morris Communications Co., based in Augusta, Ga.
