
Well-Kept Gardens Die By Pacifism

Previously in series: My Way
Followup to: The Sin of Underconfidence

Good online communities die primarily by refusing to defend themselves.

Somewhere in the vastness of the Internet, it is happening even now.  It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing.  But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting.  (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)

So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood.  Or if there are new members, their quality also has gone down.

Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...

I am old enough to remember the USENET that is forgotten, though I was very young.  Unlike the first Internet that died so long ago in the Eternal September, in these days there is always some way to delete unwanted content.  We can thank spam for that—so egregious that no one defends it, so prolific that no one can just ignore it, there must be a banhammer somewhere.

But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.

After all—anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors' grading, and heaven forbid the janitors should speak up in the middle of a colloquium.

It is easy to be naive about the evils of censorship when you already live in a carefully kept garden.  Just like it is easy to be naive about the universal virtue of unconditional nonviolent pacifism, when your country already has armed soldiers on the borders, and your city already has police.  It costs you nothing to be righteous, so long as the police stay on their jobs.

The thing about online communities, though, is that you can't rely on the police ignoring you and staying on the job; the community actually pays the price of its virtuousness.

In the beginning, while the community is still thriving, censorship seems like a terrible and unnecessary imposition.  Things are still going fine.  It's just one fool, and if we can't tolerate just one fool, well, we must not be very tolerant.  Perhaps the fool will give up and go away, without any need of censorship.  And if the whole community has become just that much less fun to be a part of... mere fun doesn't seem like a good justification for (gasp!) censorship, any more than disliking someone's looks seems like a good reason to punch them in the nose.

(But joining a community is a strictly voluntary process, and if prospective new members don't like your looks, they won't join in the first place.)

And after all—who will be the censor?  Who can possibly be trusted with such power?

Quite a lot of people, probably, in any well-kept garden.  But if the garden is even a little divided within itself—if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer—

(for such internal politics often seem like a matter of far greater import than mere invading barbarians)

—then trying to defend the community is typically depicted as a coup attempt.  Who is this one who dares appoint themselves as judge and executioner?  Do they think their ownership of the server means they own the people?  Own our community?  Do they think that control over the source code makes them a god?

I confess, for a while I didn't even understand why communities had such trouble defending themselves—I thought it was pure naivete.  It didn't occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power.  "None of us are bigger than one another, all of us are men and can fight; I am going to get my arrows", was the saying in one hunter-gatherer tribe whose name I forget.  (Because among humans, unlike chimpanzees, weapons are an equalizer—the tribal chieftain seems to be an invention of agriculture, when people can't just walk away any more.)

Maybe it's because I grew up on the Internet in places where there was always a sysop, and so I take for granted that whoever runs the server has certain responsibilities.  Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam).  Maybe because I grew up in that wide open space where the freedom that mattered was the freedom to choose a well-kept garden that you liked and that liked you, as if you actually could find a country with good laws.  Maybe because I take it for granted that if you don't like the archwizard, the thing to do is walk away (this did happen to me once, and I did indeed just walk away).

And maybe because I, myself, have often been the one running the server.  But I am consistent, usually being first in line to support moderators—even when they're on the other side from me of the internal politics.  I know what happens when an online community starts questioning its moderators.  Any political enemy I have on a mailing list who's popular enough to be dangerous is probably not someone who would abuse that particular power of censorship, and when they put on their moderator's hat, I vocally support them—they need urging on, not restraining.  People who've grown up in academia simply don't realize how strong are the walls of exclusion that keep the trolls out of their lovely garden of "free speech".

Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.  But this is more accused than realized, so far as I can see.

In any case the light didn't go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently.  While reading a comment at Less Wrong, in fact, though I don't recall which one.

But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay.  Being too humble, doubting themselves an order of magnitude more than I would have doubted them.  It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.

This about the Internet:  Anyone can walk in.  And anyone can walk out.  And so an online community must stay fun to stay alive.  Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late.

I have seen rationalist communities die because they trusted their moderators too little.

But that was not a karma system, actually.

Here—you must trust yourselves.

A certain quote seems appropriate here:  "Don't believe in yourself!  Believe that I believe in you!"

Because I really do honestly think that if you want to downvote a comment that seems low-quality... and yet you hesitate, wondering if maybe you're downvoting just because you disagree with the conclusion or dislike the author... feeling nervous that someone watching you might accuse you of groupthink or echo-chamber-ism or (gasp!) censorship... then nine times of ten, I bet, nine times out of ten at least, it is a comment that really is low-quality.

You have the downvote.  Use it or USENET.

 

Part of the sequence The Craft and the Community

Next post: "Practical Advice Backed By Deep Theories"

Previous post: "The Sin of Underconfidence"

Comments


May I suggest that length of comment should factor significantly into the choice to up/downvote?

I once suggested that upvote means "I would take the time to read this again if the insights from it were deleted from my brain" and downvote means "I would like the time it took to read this back."

Time figures into both of these. If you read a few words and don't profit from them, well, neither have you lost much. If you read several paragraphs, reread them to ensure you've understood them (because the writing was obtuse, say), and in the end conclude that you have learned nothing, the comment has, in some sense, made a real imposition on your time, and deserves a downvote.

This being said, one should not hesitate to downvote a short message if it does not add at all to the discussion, simply to keep the flow of useful comments without superfluous interruption that would hamper what could otherwise be a constructive argument.

(Note) This mostly has to do with karma with a minor rant/point at the end. If that doesn't sound interesting, it probably won't be.

Because I really do honestly think that if you want to downvote a comment that seems low-quality... and yet you hesitate, wondering if maybe you're downvoting just because you disagree with the conclusion or dislike the author... feeling nervous that someone watching you might accuse you of groupthink or echo-chamber-ism or (gasp!) censorship... then nine times of ten, I bet, nine times out of ten at least, it is a comment that really is low-quality.

Some of the most interesting things I have registered about LessWrong thus far have to do with the karma game. I am convinced that there are huge swaths of information that can be learned if the karma data was opened for analysis.

If I had to guess at the weaknesses of the karma system I would peg two big problems. The first is that (some? most? many?) people are trying to assign an integer value to a post that is something outside of the range [-1,1] and then adjust their vote to affect a post's score toward their chosen value. This seems to have the effect that everything is drawn toward 0 unless it is an absolutely stellar post. Then it just drifts up. I think the highest comment I have seen was in the high teens. I know there are more than twenty people visiting the site. Do they not read comments? Do they not vote on them?
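That first problem can be made concrete with a toy simulation. This is a hypothetical sketch of the "target value" behavior described above, not the site's actual voting mechanics: each voter is assumed to have a fair score in mind and votes only to nudge the displayed total toward it.

```python
# Toy model of "target-value" voting: each voter votes only to push
# the displayed total toward the score they privately think is right.
# Purely hypothetical; not the site's actual mechanics.

def vote(current_total, target):
    """+1, -1, or 0 depending on where the total sits vs. the target."""
    if current_total < target:
        return 1
    if current_total > target:
        return -1
    return 0

def run(targets):
    """Apply one vote per arriving reader, in order."""
    total = 0
    for t in targets:
        total += vote(total, t)
    return total

# Most readers think an ordinary comment deserves about 0, so its
# score is pinned near 0 however many people vote on it:
print(run([0, 1, 0, -1, 0, 0, 1, 0]))  # 0
# A stellar comment keeps drifting up as long as arriving voters
# think it deserves more than it currently shows:
print(run([5, 8, 10, 7, 9, 6]))        # 6
```

Under this model, scores say less about how many people read a comment than about where the voters' targets sit, which would explain both the pull toward 0 and the slow upward drift of the best posts.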

The second problem spot is that I find it hard to actually use the feedback of karma. I have no way of knowing how well I am doing other than a number. I have noticed that my karma has jumped lately and this leads me to believe I have made a change for the better. Unfortunately, I have no easy way of seeing which comments did well and which did poorly. Was it my tone? Did I get wiser? Are my comments more useful? Since I am new, my comment level is low and I can dig through what is there and learn, but this will simply get harder as time goes on. The karma system seems to work well on a comment-by-comment basis but not so much as a teaching tool. I see this as a problem because this is exactly what I need and I feel like I am squeezing a square peg into a round hole. It makes me think I am not using it correctly.

I find both of the above problems frustrating to me personally. I see a comment get voted down and think, "Okay, that was bad." If I ask for clarification, it goes back up, which just makes it confusing. "Uh, so was it bad or not bad?" The difference between the highest rated comment of mine and the lowest is less than 10. I think the highest is 5 and the lowest was at -2 before I deleted it.

Now, don't get me wrong, I am not complaining that my super-great-excellent posts are not voted to 20 karma in a single weekend. I am complaining that my crappy posts are all sitting at 0 and -1. I just started posting here and already have over 50 karma and the dark secret is that I am a complete poser. I barely even know the terms you guys use. I have not read much of Overcoming Bias and if you gave me a test on key points of rationality I would probably limp through the guessable stuff and start failing once the questions got hard. I can pick apart the logic within a given post, but the only real contributions I have made are exposing flaws in other comments. How in the world am I succeeding? I do not know.

To put this back into the original point, if people are shy about telling me my posts are low quality I can (a) never learn the difference between "mediocre" and "bad" and (b) any fool can limp by with comments that just repeat basic logic and use key terms in the right order. The chances of that being fun are low. One of my great paranoias is that I am the fool and no one pointed it out; I am the elephant in the room but have no mirror. I don't want to trample on your garden and smush the roses. I want to partake in what seems to be a really awesome, fun community. If I don't fit, kick me out.

(To be a little less harsh on myself, I do not consider myself a fool nor am I trying to play the role of a fool. If I am one, please let me know because I apparently have not figured it out yet.)

The karma system is an integral part of the Reddit base code that this site is built on top of. It's designed to do one thing - increase the visibility of good content - and it does that one thing very well.

I agree, though, that there is untapped potential in the karma system. Personally I would love to see - if not by whom - at least when my comments are up/down voted.

I have the same apprehension. I'm somewhere between "complete poser" and "well-established member of the community," I just sort of found out about this movement about 50 days ago, started reading things and lurking, and then started posting. When I read the original post, I felt a little pang of guilt. Am I a fool running through your garden?

I'm doing pretty well for myself in the little Karma system, but I find that often I will post things that no one responds to, or that get up-voted or down-voted once and then left alone. I find that the only things that get down-voted more than once or twice are real attempts at trolling or otherwise hostile comments. Then again, many posts that I find insightful and beneficial to the discussion rarely rise above 2 or 3 karma points. So I'm left to wonder if my 1-point posts are controversial but good, above average but nothing special, or just mediocre and uninteresting.

Something that shows the volume of up- and down-votes as well as the net point score might provide more useful feedback.

I'd like to weigh in with a meta-comment on this meta-discussion: y'all are over-thinking this, seriously.

In the vein of Eliezer's Tsuyoku Naritai!, I'd like to propose a little quasi-anime (borrowed from the Japanese Shinsengumi by way of Rurouni Kenshin) mantra of my own:

Aku soku zan! ("Slay evil instantly!")

Don't obsess over what fractional vote a read-but-not-downvoted comment should earn, don't try to juggle length with quality with low-brow/high-brow distinctions (as Wittgenstein said, a good philosophy book could be written using nothing but jokes), don't ponder whether the poster is a female and a downvote would drive her away, or consider whether you have a duty to explain your downvote - just vote.

Is it a bad comment? (You know deep down that this is an easy question.) Aku soku zan! Downvote evil instantly! Is it a useless comment? Aku soku zan!

(And if anyone replies to this with a comment like 'I was going to upvote/downvote your comment, but then I decided deep down to downvote/upvote' - aku soku zan!)

Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam).

A quick factual note: 4chan unconditionally bans child pornography and blocks (in a Wikipedia sense) the IPs, as I found out myself back when I was browsing through Tor. They'll also moderate off-topic posts or posts in the wrong section. They actually have a surprisingly lengthy set of rules for a place with such an anarchistic reputation.

I can see myself linking to this more than anything else you've ever written. Sing it, brother!

Note that the voting system we have here is trivially open to abuse through mass account creation. We're not in a position to do anything about that, so I hope that you, the site admins, are defending against it.

Wikipedia is an instructive example. People think it's some kind of democracy. It is not a democracy: Jimbo is absolute ruler of Wikipedia. He temporarily delegates some of his authority to the bureaucrats, who delegate to the admins, who moderate the ordinary users. All those with power are interested to get ideas from lots of people before making decisions, but it's very explicit policy that the purpose is to make the best encyclopaedia possible, and democracy doesn't enter into it. It is heavily policed, and of course that's the only way it could possibly work.

Update: new 'feature' - apparently, you can now only downvote if you've done less downvoting than your karma. Example from my screen:

Your total down votes (2538) must be less than your karma (528)
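If the feature works the way that message suggests, the rule itself is a one-line check. The names below are my guesses for illustration, not the site's actual Reddit-derived code:

```python
# Sketch of the downvote-cap rule in the quoted message; variable
# names are assumptions, not the site's actual implementation.

def can_downvote(karma: int, downvotes_cast: int) -> bool:
    """A new downvote is allowed only while total downvotes < karma."""
    return downvotes_cast < karma

# The quoted case: 2538 downvotes against 528 karma is blocked.
print(can_downvote(528, 2538))  # False
print(can_downvote(528, 527))   # True
```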

Current comment: 93t. This implies 11,792 comments, if I count correctly. You've downvoted 21% of all comments? I think it's more likely we're looking at some kind of bug, but if you've actually downvoted 21% of all comments then more power to you. Still, I'd like to verify first that it's not a bug.

That sounds about right - I try to read all comments and downvote over 1/3 of the time, but I've missed some in days of inactivity.

I think I just read the explanation for the strange phenomenon some people have reported: karma disappearing rapidly over a few hours of downvotes on older threads. It's just thomblake catching up.

I've verified the numbers, thomblake has posted 2538 down votes. 93t is 11801 in base 36. Adding 436 articles drops the percentage slightly to 20.7%.
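For anyone who wants to check that arithmetic, comment IDs here appear to be base-36 counters, so the decoding is a one-liner:

```python
# Checking the numbers above: the comment ID '93t' read as base-36.

comment_id = int("93t", 36)   # 9*36**2 + 3*36 + 29
print(comment_id)             # 11801

items = comment_id + 436      # adding the 436 articles, as above
print(round(2538 / items * 100, 1))  # 20.7
```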

An unexpected consequence of this change is that upvoting thomblake now has benefits (he can downvote more) that don't correlate to the quality of his posting. While this will give him a (weak) incentive to produce better comments, it'll also encourage me to upvote him more, reducing the quality-signalling function of his karma.

it'll also encourage me to upvote him more

It's nice to hear that my tendency to downvote heavily is so valued.

It may be true that well-kept gardens die by pacifism, but it's also the case that moderation can kill communities.

Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving. But this is more accused than realized, so far as I can see.

There speaks the voice of limited experience. Or perhaps LiveJournal, Reddit, Google+ and Facebook really are not worth saving?

I've seen enough discussion forums killed by abusive moderators that I look carefully before signing up for anything these days. When I write a lengthy response, like this, I post it on my own site rather than face the possibility that it will be silently deleted for disagreeing with a moderator.

However, I've also been a moderator, and I've seen situations where moderation was desperately needed. In my experience on both sides of the issue, there are some basic criteria for moderation that need to be met to avoid abuse:

  • Moderation needs to be visible. Comments that are removed should be replaced with a placeholder saying so, and not simply deleted. Otherwise there will be accusations of repeated unfair deletion, and any act of moderation will quickly snowball into an argument about how much censorship is occurring, and then an argument about whether that argument is being censored, and so on until everyone leaves the site.
  • Moderation needs to be accountable. Moderators must have individual accounts, and moderation actions need to be associated with individual accounts. Without this, it's pretty much impossible to identify an abusive moderator. I recently got banned from a subreddit for asking which rule I had broken with a previous posting, and there was no way to find out who had banned me.
  • Moderation needs to be consistent. There needs to be a description of what the criteria for moderation actually are. It doesn't need to be legalistic and all-encompassing, and it may be subject to change, but it needs to exist. Some people feel that actually writing down the criteria encourages people to argue about them. The alternative, though, is that person A gets banned or censored for doing something that person B does all the time; that leads to much worse ill-will and ultimately is worse for the community.
  • Moderation rules need to apply to the moderators. A special case of the above, but it deserves highlighting. Few things are more infuriating than being banned by a moderator for doing something that the person doing the banning does all the time. Once this kind of moderation starts happening (e.g. Gizmodo), the atmosphere becomes extremely toxic.
  • Moderation needs an appeals process. There are abusive power-tripping assholes out there, and they love to find their way onto forums and become moderators. You need a mechanism for identifying any who find their way into your forum. Having some sort of appeals process is that mechanism. Ideally appeals should be resolved by someone who isn't part of the moderation team. Failing that, they should be resolved by someone other than the person being complained about, obviously.

It also helps if the moderation activity can be openly discussed in a partitioned area of the site. There will be desire to discuss moderation policy, so plan ahead and have a space where people can do so without derailing other threads. That way, you can also redirect meta-discussion into the moderation discussion area to avoid thread derailment, without making the problem worse.
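To make the first two criteria concrete, here is a minimal sketch of visible, accountable removal. All names here are hypothetical; this is not any particular forum's implementation.

```python
# Hypothetical sketch of the first two criteria above: removals leave
# a visible placeholder, and every action is tied to a named moderator
# account so abuse can be identified on appeal.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    author: str
    text: str
    removed_by: Optional[str] = None  # moderator account, if removed

    def render(self) -> str:
        # Visible moderation: a placeholder naming the moderator,
        # never a silent deletion.
        if self.removed_by is not None:
            return f"[removed by moderator {self.removed_by}]"
        return self.text

def remove(comment: Comment, moderator: str, log: list) -> None:
    """Record who removed what: the audit trail an appeals process needs."""
    comment.removed_by = moderator
    log.append((moderator, comment.author))

log = []
c = Comment("example_user", "off-topic flamebait")
remove(c, "mod_alice", log)
print(c.render())  # [removed by moderator mod_alice]
print(log)         # [('mod_alice', 'example_user')]
```

The point of the design is simply that the placeholder and the log make silent, anonymous deletion impossible by construction, rather than relying on moderator restraint.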

(Also posted at my web site)

I think I fundamentally disagree with your premise. I concede, I have seen communities where this happened . . . but by and large, they have been the exception rather than the rule.

The fundamental standards I have seen in communities that survived such things, versus those that didn't, fall under two broad patterns.

A) Communities that survived were those where politeness was expected - a minimal standard that dropping below simply meant people had no desire to be seen with you.

B) Communities where the cultural context was that of (and I've never quite worded this correctly in my own mind) acknowledging that you were, in effect, not at home but at a friendly party at a friend's house, and had no desire to embarrass yourself or your host by getting drunk and passing out on the porch - {G}.

Either of these attitudes seems to be very nearly sufficient to prevent the entire issue (and seems to hasten recovery even on the occasion when it fails); combined, they (in my experience) act as a near-invulnerable bulwark against party crashers.

Now exactly how these attitudes are nurtured and maintained, I have never quite explained to my own satisfaction - it's definitely an "I know it when I see it" phenomenon, however unsatisfying that may be.

But given an expectation of politeness and a sense of being in a friendly venue, but one where there will be a group memory among people whose opinions have some meaning to you, the rest of this problem seems to be self-limiting.

Again, at least in my experience - {G}. Jonnan

I agree with you, and I also agree with Eliezer, and therefore I don't think you're contradicting him. The catch is here:

they act as a near invulnerable bulwark against party crashers

This implies that the party crashers, upon seeing that everyone else is acting polite and courteous, will begin acting polite and courteous too. In a closer model of an internet community, what happens is that they act rough and rowdy ... and then the host kicks them out. Hence, moderators.

Unless you really mean that the social norms themselves are sufficient to ward off people who made the community less fun, in which case your experience on the internet is very different from mine.

Eliezer,

I used to be not so sure how I felt about this subject, but now I appreciate the wonderful community you and others have gardened, here.

This post makes me think of SL4:

The most active place on the internet for discussing Friendly AI is the SL4 email list. Ironically, it must be one of the most hostile email lists on the internet with frequent flame wars and people being banned from the list. The moderation system consists of so-called “list snipers” whose job it is to ban discussions that they don’t like. If these people are experts in friendliness… lord help us.

There is no strong reason that reasonable, informative discourse should be an attractor for online communities. Measures like karma or censorship are designed to address particular problems that people have observed; they aren't even intended to be a real solution to the general issue. If you happen to end up with a community where most conversation is intelligent, then I think the best you can say is that you were lucky for a while.

The question is, do people think that this is the nature of community? There is a possible universe (possible with respect to my current logical uncertainty) in which communities are necessarily reliant on vigilance to survive. There is also a possible universe where there are fundamentally stable solutions to this problem. In such a universe, a community can survive the introduction of many malicious or misguided users because its dynamics are good rather than because its moderator is vigilant. I strongly, strongly suspect that we live in the second universe. If we do, I think trying to solve this problem is important (fostering intelligent discourse is more important than the sum of all existing online communities). I don't mean saying "let's try and change karma in this way and see what happens;" I mean saying, "let's try and describe some properties that would be desirable for the dynamics of the community to satisfy and then try and implement a system which provably satisfies them."

I think in general that people too often say "look at this bad thing that happened; I wish people were better" instead of "look at this bad thing that happened; I wish the system required less of people." I guess the real question is whether there are many cases where fundamental improvements to the system are possible and tractable. I suspect there are, and that in particular moderating online discussion is such a case.

One problem I have with hesitation to downvote is that some mediocre comments are necessary. Healthy discussion should have the right ratio of good comments to mediocre comments, so that people may feel relaxed and make simple observations, increasing the rate of communication. And the current downvote seems too harsh for this role. On the other hand, people who only make tedious comments shouldn't feel welcome. This is a tricky balance to solve with comment-to-comment voting.

I would downvote more, if we had a separate button, saying "mediocre", that would downvote the comment, say, by 0.3 points (or less, it needs calibration). The semantics of this button is basically that I acknowledge that I have read the comment, but wasn't impressed either way. From the interface standpoint, it should be a very commonly used button, so it should be very easy to use. Bringing this to a more standard setting, this is basically graded voting, --, - and ++ (not soft/hard voting as I suggested before though).

An average mediocre comment should have (a bit of) negative Karma. This way, people may think of good comments they make as currency for buying the right to post some mediocre ones. In this situation, being afraid to post any mediocre comments corresponds to excessive frugality, an error of judgment.
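A rough sketch of the graded voting proposed above. The -0.3 weight for the "mediocre" button is the assumed value from the comment and, as noted, would need calibration:

```python
# Graded voting sketch: ordinary ++/-- votes plus a cheap "mediocre"
# button ("-") worth an assumed -0.3. Weights would need calibration.

WEIGHTS = {"++": 1.0, "--": -1.0, "-": -0.3}

def score(votes):
    """Net karma for a comment under graded voting."""
    return round(sum(WEIGHTS[v] for v in votes), 2)

# Three readers acknowledge a comment as mediocre: it ends slightly
# negative, without the harshness of three full downvotes.
print(score(["-", "-", "-"]))  # -0.9
# An ordinary upvote and downvote still cancel.
print(score(["++", "--"]))     # 0.0
```

Under this scheme, one good comment (+1.0) buys roughly three mediocre ones, which matches the "currency" framing above.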

Also, this kind of economy calls for separation of comment Karma and article Karma, since the nature of contributions and their valuation are too different between these venues.