Followup to: Why Our Kind Can't Cooperate
One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning. This doesn't strictly follow. You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size. But realistically speaking, a lot of us probably have our level of "annoyance at all these flaws we're spotting" set a bit higher than average.
That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.
For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone. Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots. (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.) (Please do not pronounce his True Name correctly or he will be summoned here.)
But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.
And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.
Is that a realistic standard? Especially if different people are annoyed in different amounts by different things?
But it's hard to remember that when Ben is being nice to so many idiots.
Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection. So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly. But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
The danger of punishing nonpunishers is something I remind myself of, say, every time Robin Hanson points out a flaw in some academic trope and yet modestly confesses he could be wrong (and he's not wrong). Or every time I see Michael Vassar still considering the potential of someone who I wrote off as hopeless within 30 seconds of being introduced to them. I have to remind myself, "Tolerate tolerance! Don't demand that your allies be equally extreme in their negative judgments of everything you dislike!"
By my nature, I do get annoyed when someone else seems to be giving too much credit. I don't know if everyone's like that, but I suspect that at least some of my fellow aspiring rationalists are. I wouldn't be surprised to find it a human universal; it does have an obvious evolutionary rationale—one which would make it a very unpleasant and dangerous adaptation.
I am not generally a fan of "tolerance". I certainly don't believe in being "intolerant of intolerance", as some inconsistently hold. But I shall go on trying to tolerate people who are more tolerant than I am, and judge them only for their own un-borrowed mistakes.
Oh, and it goes without saying that if the people of Group X are staring at you demandingly, waiting for you to hate the right enemies with the right intensity, and ready to castigate you if you fail to castigate loudly enough, you may be hanging around the wrong group.
Just don't demand that everyone you work with be equally intolerant of behavior like that. Forgive your friends if some of them suggest that maybe Group X wasn't so awful after all...
Part of the sequence The Craft and the Community
Next post: "You're Calling Who A Cult Leader?"
Previous post: "Why Our Kind Can't Cooperate"
I'm very much in favor of what you wrote there. I've been thinking of starting a separate thread about this for some time. Feel free to beat me to it, though; I won't be ready to do so very soon anyway. But here's a stab at what I'm thinking.
This is from the welcome thread:
This is fair. I could, in principle, sit down and discuss rationality with a group having such a disclaimer, except in favor of religion, assuming they got promoted to my attention for some unrelated good reason (like I've been linked to an article and read that one and two more and I found them all impressive). Not going to happen in practice, probably, but you get my drift.
Except that "happy to have these people" isn't the vibe of what Less Wrong is actually like, IMO. Atheism strikes me as a belief that's necessary for acceptance into the tribe. This is not a Good Thing, for many reasons, the simplest of which is that atheism is not rationality. Reversed stupidity is not intelligence; people can be atheists for stupid reasons, too. So if atheism seems to be necessary here in order to follow our arguments and see our point, people will be suspicious of those arguments and points. If you can't make your case about something that in principle isn't about religion, without using religion in the reasoning, it's probably not a good case.
What I'd advocate would be not using religion as an example of obvious inanity in support of some other point, as in this otherwise great post:
http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/
Now I'm not in favor of censoring religion out and pretending we're not 99% atheists here or whatever the figure is. If the topic of some article is tied to religion, then sure, anything goes - you'll need good arguments anyway or you won't have a post and people will call you on using applause lights instead of argumentation.
But, more subtly: if the topic is some bias or rationality tool, and religion is a good example of how that bias operates or that tool fails to be applied, then go ahead and show that example after the bias or tool has already been convincingly established in more neutral terms. It's one of the reasons why we explain Bayes' theorem in terms of mammography, not religion.
Feedback would be welcome.