Followup to: Why Our Kind Can't Cooperate

One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning.  This doesn't strictly follow.  You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size.  But realistically speaking, a lot of us probably have our level of "annoyance at all these flaws we're spotting" set a bit higher than average.

That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.

For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone.  Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots.  (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.)  (Please do not pronounce his True Name correctly or he will be summoned here.)

But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.

And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.

Is that a realistic standard?  Especially if different people are annoyed in different amounts by different things?

But it's hard to remember that when Ben is being nice to so many idiots.

Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection.  So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly.  But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
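The equilibrium logic here can be sketched with a toy payoff model. All the numbers and the `payoff` function below are hypothetical illustrations, not anything from the post: the point is just that once non-punishers are themselves punished, "comply and punish" becomes a best response even when the norm being enforced is costly to everyone, whereas without meta-punishment, quietly declining to punish is strictly better, so enforcement of a harmful norm erodes on its own.

```python
# Toy model of "punishment of non-punishers" (all numbers hypothetical).
# The norm is harmful: complying costs every agent something and benefits no one.

NORM_COST = 2     # cost of complying with the (harmful) norm
PUNISH_HIT = 10   # cost of being punished by the group
PUNISH_COST = 1   # cost of administering punishment yourself

def payoff(comply, punish, others_meta_punish):
    """Payoff for one agent, assuming the rest of the group conforms
    and one deviator appears whom you may or may not punish.
    others_meta_punish: whether the group also punishes non-punishers."""
    p = 0.0
    p -= NORM_COST if comply else PUNISH_HIT   # deviators get punished
    if punish:
        p -= PUNISH_COST
    elif others_meta_punish:
        p -= PUNISH_HIT                        # non-punishers get punished too
    return p

# With meta-punishment, "comply and punish" beats both unilateral deviations,
# so the harmful norm is locked in:
assert payoff(True, True, True) > payoff(False, True, True)
assert payoff(True, True, True) > payoff(True, False, True)

# Without meta-punishment, silently not punishing is strictly better,
# so enforcement (and with it the norm) erodes:
assert payoff(True, False, False) > payoff(True, True, False)
```

Nothing in this sketch depends on the norm being useful; the same second-order enforcement that stabilizes cooperation will just as happily stabilize an equilibrium that is harmful to everyone involved.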

The danger of punishing nonpunishers is something I remind myself of, say, every time Robin Hanson points out a flaw in some academic trope and yet modestly confesses he could be wrong (and he's not wrong).  Or every time I see Michael Vassar still considering the potential of someone who I wrote off as hopeless within 30 seconds of being introduced to them.  I have to remind myself, "Tolerate tolerance!  Don't demand that your allies be equally extreme in their negative judgments of everything you dislike!"

By my nature, I do get annoyed when someone else seems to be giving too much credit.  I don't know if everyone's like that, but I suspect that at least some of my fellow aspiring rationalists are.  I wouldn't be surprised to find it a human universal; it does have an obvious evolutionary rationale—one which would make it a very unpleasant and dangerous adaptation.

I am not generally a fan of "tolerance".  I certainly don't believe in being "intolerant of intolerance", as some inconsistently hold.  But I shall go on trying to tolerate people who are more tolerant than I am, and judge them only for their own un-borrowed mistakes.

Oh, and it goes without saying that if the people of Group X are staring at you demandingly, waiting for you to hate the right enemies with the right intensity, and ready to castigate you if you fail to castigate loudly enough, you may be hanging around the wrong group.

Just don't demand that everyone you work with be equally intolerant of behavior like that.  Forgive your friends if some of them suggest that maybe Group X wasn't so awful after all...

 

Part of the sequence The Craft and the Community

Next post: "You're Calling Who A Cult Leader?"

Previous post: "Why Our Kind Can't Cooperate"

Comments


I'm going to make a controversial suggestion: one useful target of tolerance might be religion.

I think we pretty much all understand that the supernatural is an open and shut case. Because of this, religion is a useful example of people getting things screamingly, disastrously wrong. And so we tend to use that as a pointer to more subtle ways of being wrong, which we can learn to avoid. This is good.

However, when we speak too frequently, and with too much naked disdain, of religion, these habits begin to have unintended negative effects.

It would be useful to have resources on general rationality to which to point our theist friends, in order to raise their overall level of sanity to the point where religion can fall away on its own. This is not going to work if these resources are blasting religion right from the get-go. Our friends are going to feel attacked, quickly close their browsers, and probably not be too well-disposed towards us the next time we speak (this may not be an entirely hypothetical example).

I'm not talking about respect. That would be far too much to ask. If we were to speak of religion as though it could genuinely be true, we would be spectacular liars. Still, not bringing up the topic when it's not necessary, using another example if there happens to be one available, would, I think, significantly increase the potential audience for our writing.

The problem with tolerating religion is that, as Dawkins pointed out, it has received too much tolerance already. One reason religion is so widespread and obnoxious is that it has been so off limits to criticism for so long.

A good solution to this is to have some diversity of rhetoric. Some people can be blunt, others openly contemptuous, and others more friendly and overtly tolerant. There's room enough for all of these.

The less tolerant people destroy the special immunity to criticism that religion has long enjoyed, and get to be seen as the "extremists". Meanwhile they make the sweetness-and-light folks look more moderate by comparison, which is a useful thing. A lot of people reflexively reject extremism, which they define as simply the most extreme views that they're hearing expressed on a contentious issue. Make the extremists more extreme, and more moderate versions of their viewpoint become more socially acceptable.

Someone has to play the villains in this story.

I'm very much in favor of what you wrote there. I've been meaning to start a separate thread about this for some time. Feel free to beat me to it; I won't be ready to do so very soon anyway. But here's a stab at what I'm thinking.

This is from the welcome thread:

A note for theists: you will find LW overtly atheist. We are happy to have you participating, but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false.

This is fair. I could, in principle, sit down and discuss rationality with a group having such a disclaimer, except in favor of religion, assuming they got promoted to my attention for some unrelated good reason (like I've been linked to an article and read that one and two more and I found them all impressive). Not going to happen in practice, probably, but you get my drift.

Except that's not the vibe of what Less Wrong is actually like, IMO, that we're "happy to have" these people. Atheism strikes me as a belief that's necessary for acceptance to the tribe. This is not a Good Thing, for many reasons, the simplest of which is that atheism is not rationality. Reversed stupidity is not intelligence; people can be atheists for stupid reasons, too. So seeing that atheism seems to be necessary here in order to follow our arguments and see our point, people will be suspicious of those arguments and points. If you can't make your case about something that in principle isn't about religion, without using religion in the reasoning, it's probably not a good case.

What I'd advocate would be not using religion as examples of obvious inanity, in support of some other point, like in this, otherwise great, post:

http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/

Now I'm not in favor of censoring religion out and pretending we're not 99% atheists here or whatever the figure is. If the topic of some article is tied to religion, then sure, anything goes - you'll need good arguments anyway or you won't have a post and people will call you on using applause lights instead of argumentation.

But, more subtly: if the topic is some bias or rationality tool, and religion is a good example of how that bias operates or that tool fails to be applied, then go ahead and show that example after the bias/tool has already been convincingly established in more neutral terms. It's one of the reasons why we explain Bayes' theorem in terms of mammography, not religion.

Feedback would be welcome.

I think this is a good analysis.

However, in some areas, it is particularly difficult to keep things separate. The two cultures are simply very different; discussions have a way of finding the largest differences.

To be more specific: a recent conversation about rationalism came to the point of whether we could depend on the universe not to kill us. (To put it as it was in the conversation: there must be justice in the universe.)

I think you point up the problem with your own suggestion: we have to have examples of rationality failure to discuss, and if we choose an example on which we agree less (e.g., something to do with AGW) then we will end up discussing the example instead of what it is intended to illustrate. We keep coming back to religion not just because practically every failure of rationality there is has a religious example, but because it's something we agree on.

we have to have examples of rationality failure to discuss

It should be noted that if all goes according to plan, we won't have religion as a relevant example for too much longer. One day (I hope) we will need to teach rationality without being able to gesture out the window at a group of intelligent adults who think crackers turn into human flesh on the way down their gullets.

Why not plan ahead?

ETA: Now I think of it, crackers do, of course, turn into human flesh, it just happens a bit later.

It's not so much that I'm trying to hide my atheism, or that I worry about offending theists - then I wouldn't speak frankly online. The smart ones are going to notice, if you talk about fake explanations, that this applies to God; and they're going to know that you know it, and that you're an atheist. Admittedly, they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.

And I don't plan far enough ahead for a day when religion is dead, because most of my utility-leverage comes before then.

But rationality is itself, not atheism or a-anything; and therefore, for aesthetic reasons, when I canonicalize (compile books or similar long works), I plan to try much harder to present what rationality is, and not let it be a reaction to or a refutation of anything.

Writing that way takes more effort, though.

they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.

Once you connect the dots and make the application explicit, they feel honor-bound to take offense and to defend their theism, regardless of whether they personally want to take offense or not. In their mind, making the application explicit shifts the discussion from being about ideas to being about their core beliefs and thus about their person.

If all goes according to plan, by then we will be able to bring up more controversial examples without debate descending into nonsense. Let's cross that bridge when we come to it.

I think there are other examples with just as much agreement on their wrongness, many of which have a much lower degree of investment even for their believers. Astrology for instance has many believers, but they tend to be fairly weak beliefs, and don't produce such a defensive reaction when criticized. Lots of other superstitions also exist, so sadly I don't think we'll run out of examples any time soon.

But because people aren't so invested in it, they mostly won't work so hard to rationalise it; mostly people who are really trying to be rational will simply drop it, and you're left with a fairly flabby opposition. Whereas lots of smart people who really wanted to be clear-thinking have fought to hang onto religion, and built huge castles of error to defend it.

I'm going to make a controversial suggestion: one useful target of tolerance might be religion.

I'll try to tolerate your tolerance.

(I blog using any examples that come to hand, but when I canonicalize I try to remove explicit mentions of religion where possible. Bear in mind that intelligent religious people with Escher-minds will see the implications early on, though.)

You canonicalize?

Where can we find your canon, and is it marked as canonical?

I usually have something nice to say about most things, even the ideas of some pretty crazy people. Perhaps less so online, but more in person. In my case the reason is not tolerance, but rather a habit that I have when I analyse things: when I see something I really like I ask myself, "Ok, but what's wrong with this?" I mentally try to take an opposing position. Many self described "rationalists" do this, habitually. The more difficult one is the reverse: when I see something I really don't like, but where the person (or better, a whole group) is clearly serious about it and has spent some time on it, I force myself to again flip sides and try to argue for their ideas. Over the years I suspect I've learnt more from the latter than the former. Externally, I might just sound like I'm being very tolerant.

"of someone who I wrote off as hopeless within 30 seconds of being introduced to them."

Few college professors would do this, because many students are unimpressive when you first talk with them but then do brilliantly on exams and papers.

I've known people to be hopeless for months, then suddenly, for no observable reason, begin acting brilliantly; another reminder that small data sets aren't sufficient to predict a system as complex as human behaviour.

Note that tolerance is part of a general conversion strategy. Nitpicking everyone who disagrees with you in the slightest isn't likely to make friends, but it is likely to make your opponents think you are an arrogant jerk. Sometimes you just have to keep it to yourself.

Punishing for non-punishment is an essential dynamic for preserving some social hierarchies, at least in schoolyards and in Nazi Germany.

Abby was just telling me this afternoon that psychologists today believe that when kids are picked on in school, it's their own fault - either they are too shy, or they are bullies. (There is a belief that bullies are picked on in school, something I never saw evidence of in my school days except when it was me doing the picking-on.)

My theory is that the purpose of picking on kids in school is not to have effects on the kid picked on, but to warn everyone else that they will be picked on if they fail to conform. A kid is thus likely to be picked on if they don't respond to social pressures. Thus the advice that every parent gives their children, "Just ignore them if they pick on you," is the worst possible advice. Fight back, or conform; failing to respond requires them to make an example of you and does not impose any cost on them for doing so.

Wolves have a very strict social hierarchy, but I've never noticed evidence of punishment for a failure to punish. So this behavior isn't necessary.

The theory is that bullies are often in the middle of a bullying hierarchy. For example, when I was in high school, one of my friends was harassed by seniors when he was a freshman. When he became a senior himself, he, in turn, harassed freshmen, saying that he was going to give as good as he got.

From what I've read, in high school at least, bullies tend to be those in the middle of the social hierarchy; those at the top (the most popular) are secure in their position and can afford to be nice, while those who are at risk for backsliding work hard at making sure there is at least one person who is a more tempting victim than they are.

Seem to be assuming there is a higher purpose for bullying, which seems to be making a mistake along the same lines as the parable of group selection.

Possibly bullies bully because they enjoy it and aren't stopped from doing so. What additional explanation is needed?

Well, as a kid I got bullied at school, quite a bit, and I DO remember bullying other a handful of times.

I remember being conscious about it and feeling like shit for it, but at the same time being so relieved because as long as someone else was being bullied, I wasn't.

I certainly did not enjoy it, mainly because it contradicted my vision of myself as a courageous victim.

Eliezer is correct, but this post should be followed up by one about the many places where failing to punish non-punishers, in other words, tolerating free-riders, has negative consequences.

If you transgress, I might have a problem with you. If you actively shield a transgressor, I might have a problem with you. If you just don't punish a transgressor, the circumstances where I might have a problem are pretty rare I think!

We can and should reach whatever conclusions about people we wish. But we should be very slow to fail to observe and accept new evidence about them.

Excluding people from discussion may screen out their nonsense (or at least the things you thought were nonsense), but it also prevents you from discovering that you made a hasty decision. Once you've started ignoring someone, you can no longer observe what they say - and possibly find that they're smarter than you thought they were.

It's worth acquiring new data even from those you've discarded, at least once in a while.

I think there is an important distinction between cheap and expensive tolerance. If I am sitting on a plane and don't have a good book and am talking to my seatmate, and they seem stupid and irrational, being tolerant is likely to lead to an enjoyable conversation. I may even learn something.

But if I am deciding what authors to read, whose arguments to think about more seriously, etc., then it seems irrational to not judge and prioritize with my limited time.

And this relates to indirect tolerance - someone who doesn't judge and prioritize good arguments but instead listens to and talks to everyone is someone whose links and recommendations are less valuable because they have done less filtering. On the other hand, they are more likely to convert people, and when they do find good ideas they are more likely to be good ideas I wouldn't otherwise encounter. So it's tricky. Seems like the ideal is to read people who are intolerant but read tolerant people, so they have the broadest base of ideas, but still select the best.

The advice isn't about your attitude towards your seatmate's stupidity and irrationality. It's directed at your rationalist buddy sitting on your other side -- she's being advised not to be annoyed at you if you choose to be tolerant.

My attitude toward Ben's tolerance depends on the context. When he does it as a person, I appreciate it. When he does it as chair of the AGI conference, I don't. There were some very good presentations this year, but there were also some very bad time-wasters.

But probably I should blame the reviewers instead.

Damn M-nt-f-x! Damn every one that won't damn M-nt-f-x!! Damn every one that won't put lights in his windows and sit up all night damning M-nt-f-x!!!

Since I saw this comment before the post it goes with, I thought it was some sort of rant about people not using Emacs for their comments. ;-)

There's a question of whether there's an important difference in kind between sorts of tolerance. Here's an analogy which might or might not work: assume that, in general, a driver of a vehicle drives as fast as they think it is safe for cars to be driven in general. Only impatience would cause them to not tolerate people who drive slower than they; a safety concern could cause them to be upset by people who drive faster, since they consider that speed unsafe. Say you have two people who each drive at 50 mph. One of them tolerates only slower drivers but wants to ticket faster drivers and the other tolerates all drivers. The first driver could have a legitimate issue with the second one. They don't disagree about how fast it's safe to drive - they disagree about whether it is appropriate to expect that safety standard of others. Some kinds of statements are dangerous - perhaps not to the degree or in the way that cars are, but dangerous, like slanderous statements or ones that incite to riot or ones that are lies or ones that betray confidences or ones that mislead the gullible or ones that involve occupied inflammable theatrical venues. Refusing to castigate people who express those kinds of statements might - I'm not confident of this - itself be worthy of censure. Or perhaps I'm missing the point and those aren't the kinds of statements the tolerators of which should be tolerated?

Great post. I think I'd already sort of started trying to do this, although I couldn't have put it as well. Now what I want to know is how much to tolerate people who are less tolerant than me. I'm not quite sure what to do when I meet someone who is infuriated by patterns of thinking that I consider only trivially erroneous or understandable under certain circumstances.

I'd say, tolerate them! Though I speak as one with a certain conflict of interest, being on the other side of that judgment. But it seems like the logical mirror image and hence still the thing to do. Judge people only on non-borrowed trouble?

The communities that I've been a part of which I liked the best, which seemed to have the most interesting people, were also the nastiest and least tolerant.

If you can't call a retard a retard, you end up with a bunch of retards, and then the other people leave. Whenever someone nice eventually came to power, this was invariably what happened.

Eliezer isn't suggesting that you refrain from calling fools "fools". He's suggesting you tolerate people who are otherwise non-foolish except that they don't call fools "fools".

Tolerating fools might not be a good idea. Tolerating non-fools who themselves tolerate fools is, AFAICT, a glaringly good idea. If you create an atmosphere where everyone has to hate the same people... we run into some of the failure modes of objectivism.

...I think this post might be over the meta threshold where some people lack the reflective gear and simply can't process the intended meaning! Really, I went to some lengths to spell it out here!

I steer clear of such communities, unless I need to extract some specific bit of information out of them (and I leave immediately when I'm done). Perhaps that's because in my upbringing calling someone a fool (let alone a retard) was considered extremely rude.

If you can't call a retard a retard

Do you know the person you're calling a retard well enough, or are you judging by a couple of their posts? Would you say "you are a retard" to their face in real life? When you call someone a retard, what do you imply: "your mental abilities in general are very poor", or "you are incompetent at activity X which we discuss here"?

In my experience, actually ejecting disruptive people from an online community can have a powerful positive effect, but replying to them with insults only encourages them and achieves nothing.