Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult

A common question here is how the LW community can grow more rapidly. Another is why seemingly rational people choose not to participate.

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go. In this post, I try to clearly explain why I don't participate more and why some of my friends don't participate at all and have warned me not to participate further.

  • Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving (see the toy calculation after this list). In other words, LW tends to conflate rationality and intelligence.

  • In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.). It's not at all clear that general AI is a significant threat. It's also highly doubtful that the best way to address this threat is writing speculative research papers: I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and that it's necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

  • LW has a cult-like social structure. The LW meetups (or, the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts for the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality "training camps" do this to an even greater extent. LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

  • Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on (X), and sell a long plan that never quite achieves (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should"). Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.

  • LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I'm struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I "should" do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.

  • "Art of Rationality" is an oxymoron.  Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don't mind Harry's narcissism) and LW is fun to read, but that's as far as I want to get involved. Unless, that is, there's someone here who has experience programming vision-guided assembly-line robots and is looking for a side project with world-optimization potential.

Comments


I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go.

That's further than I go. Heck, what else is there, and why worry about whether you're going there or not?

I have also translated the Sequences, and organized a couple of meetups. :)

Here are some other things someone could do to go further:

  • organize a large international meetup;
  • rewrite the Sequences in a form more accessible to the general public;
  • give a lecture about LW-style rationality at a local university;
  • sign up your children for cryonics;
  • join a polyamorous community;
  • start a local polyamorous community;
  • move to the Bay Area;
  • join MIRI;
  • join CFAR;
  • support MIRI and/or CFAR financially;
  • study the papers published by MIRI;
  • cooperate with MIRI to create more papers;
  • design a new rationality lesson;
  • build a Friendly AI.

Actually, PJ, I do consider your contributions to motivation and fighting akrasia very valuable. I wish they could someday become part of official rationality training (the hypothetical kind of training that would produce visible awesome results, instead of endless debates about whether LW-style rationality actually changes anything).

  • join a polyamorous community;
  • start a local polyamorous community;

Seriously? What does that have to do with anything?

I feel like the more important question is: How specifically has LW managed to make this kind of impression on you? I mean, are we so bad at communicating our ideas? Because many things you wrote here seem to me like quite the opposite of LW. But there is a chance that we really are communicating things poorly, and somehow this is an impression people can get. So I am not really concerned about the things you wrote, but rather about the fact that someone could get this impression. Because...

Rationality doesn't guarantee correctness.

Which is why this site is called "Less Wrong" in the first place. (Instead of e.g. "Absolutely Correct".) In many places in the Sequences it is written that, unlike the hypothetical perfect Bayesian reasoner, humans are pretty lousy at processing available evidence, even when we try.
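
To illustrate just how lousy, here is the classic base-rate example (toy numbers of my own, just for illustration): intuition hears "99% accurate test" and concludes a positive result means a 99% chance of disease, while Bayes says otherwise:

    # Classic base-rate example; all numbers are made up for illustration.
    p_disease = 0.01         # prior: 1% of the population has the disease
    p_pos_if_sick = 0.99     # test sensitivity
    p_pos_if_healthy = 0.05  # false-positive rate

    p_pos = p_pos_if_sick * p_disease + p_pos_if_healthy * (1 - p_disease)
    posterior = p_pos_if_sick * p_disease / p_pos
    print(round(posterior, 3))  # 0.167 -- nowhere near the intuitive "99%"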

deciding what to do in the real world requires non-rational value judgments

Indeed, this is why a rational paperclip maximizer would create as many paperclips as possible. (The difference between irrational and rational paperclip maximizers is that the latter has a better model of the world, and thus probably succeeds in creating more paperclips on average.)

Many LWers seem to assume that being as rational as possible will solve all their life problems.

Let's rephrase it as "...will provide them a better chance at solving their life problems."

instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done.

Not sure exactly what you suggest here. We should not waste time reflecting, but instead pick a path quickly, because time is important. But we should also find data. Uhm... finding the data and processing the data take some time, so I am not sure whether you recommend doing it or not.

LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

You seem to suggest some sinister strategy is used here, but I am not sure what other approach you would recommend as less sinister. Math, science, philosophy... are topics that mostly nerds care about. How should we run a debate about math, science, and philosophy in a way that is less attractive to nerds, but attracts many extraverted, highly social non-intellectuals, while still producing meaningful results?

Because I think many LWers would actually not oppose trying that, if they believed such a thing was possible and they could organize it.

LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW

This is not strong evidence against the usefulness of LW. If you imagine a parallel universe with an alternative LW that does increase the average success of its readers, then even in that parallel universe, most of the most impressive LW readers became that impressive before reading LW. It is much easier to attract a PhD student at a top university with a smart text than to attract a smart-but-not-so-awesome person and turn them into a PhD student at a top university within the next year or two.

For example, the reader may be the wrong age to become a PhD student during the time they read LW; they may be too young or too old. Or the reader may have made some serious mistakes in the past (e.g. choosing the wrong university) that even LW cannot help overcome in the limited time. Or the reader may be so far below the top level that even making them more impressive is not enough to get them a PhD at a top university.

the LW community may or may not ... encourage them ... to drop out of their PhD program, go to "training camps" for a few months ...

WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW "training camps" (which by the way don't take a few months -- EDIT: I was wrong, actually there was one).

Here is a real LW discussion with a PhD student; you can see what realistic LW advice looks like. Here is some general study advice. Here is a CFAR "training camp" for students, and it absolutely doesn't require anyone to drop out of school... hint: it takes two weeks in August.

In summary: the real LW does not resemble the picture you described, and is sometimes actually closer to the opposite of it.

WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW "training camps" (which by the way don't take a few months).

When I visited MIRI, one of the first conversations I had was someone trying to convince me not to pursue a PhD. Although I don't know anything about the training camp part (well, I've certainly been repeatedly encouraged to go to a CFAR camp, but that is only a weekend, and given that I teach for SPARC it seems like a legitimate request).

Convincing someone not to pursue a PhD is rather different from convincing someone to drop out of a top-10 PhD program to attend LW training camps. The latter does indeed merit the response WTF.

Also, there are lots of people, many of them graduate students and PhDs themselves, who will try to convince you not to do a PhD. It's not an unusual position.

I mean, are we so bad at communicating our ideas?

I find this presumption (that the most likely cause for disagreement is that someone misunderstood you) to be somewhat abrasive, and certainly unproductive (sorry for picking on you in particular, my intent is to criticize a general attitude that I've seen across the rationalist community and this thread seems like an appropriate place). You should consider the possibility that Algernoq has a relatively good understanding of this community and that his criticisms are fundamentally valid or at least partially valid. Surely that is the stance that offers greater opportunity for learning, at the very least.

I certainly considered that possibility and then rejected it. (If there are more than 2 regular commenters here who think that rationality guarantees correctness and will solve all of their life problems, I will buy a hat and then eat it).

I have come across serious criticism of the PhD programs at major universities here on LW (and on OB). This is not quite the same as a recommendation not to enroll for a PhD, and it most certainly is not the same as a recommendation to quit an ongoing PhD track, but I definitely interpreted such criticism as advice against doing such a PhD. Then again, I have also heard similar criticism from other sources, so it might well be a genuine problem with some PhD tracks.

For what it's worth, here are my personal experiences with the list of main points (not sure if this should be a separate post, but I think it is worth mentioning):

Rationality doesn't guarantee correctness.

Indeed, but as Villiam_Bur mentions, this is way too high a standard. I personally notice that, while not always correct, I am certainly correct more often thanks to the ideas and knowledge I found at LW!

In particular, AI risk is overstated

I am not sure, but I was under the impression that your suggestion of "just build some AI, it doesn't have to be perfect right away" is the thought that researchers got stuck on last century, when people were optimistically attempting to make an AI and kept failing (the problem being that even making a dumb prototype was insanely complicated). Why should our attempt be different? As for AI risk itself: I don't know whether or not LW is blowing the risk out of proportion (in particular, I do not disagree with them; I am simply unsure).

LW has a cult-like social structure.

I agree wholeheartedly; you beautifully managed to capture my feelings of unease. By targeting socially awkward nerds (such as me, I confess) it becomes unclear whether the popularity of LW among intellectuals (e.g. university students; I am looking for a better word than "intellectuals" but fail to find anything) is due to genuine content or due to a clever approach to a vulnerable audience. However, from my personal experience I can confidently assert that the material from LW (and OB, by the way) is indeed of high quality. So the question that remains is: if LW has good material, why does it/do we still target only a very susceptible audience? The obvious answer is that nerds are most interested in the material discussed, but as there are many, many more non-nerds than nerds, it would make sense to appeal to a broader audience (at the cost of quality), right? This would probably take a lot of effort (like writing the Sequences for an audience that has trouble grasping fractions), but perhaps it would be worth it?

Many LWers are not very rational.

In my experience non-LWers are even less rational. I fear that again you have set the bar too high - reading the sequences will not make you a perfect Bayesian with Solomonoff priors; at best it will make you a somewhat closer approximation. And let me mention again that personally I have gotten decent mileage out of the sequences (but I am also counting the enjoyment I get reading the material as one of the benefits; I come here not just to learn but also to have fun).

LW membership would make me worse off.

This I mentioned earlier. I notice that you define success in terms of money and status (makes sense), and the easiest ways to try to get these would be using the "Dark Arts". If you want a PhD, just guess the teacher's password. It has worked for me so far (although I was also interested in learning the material, so I read papers and books with understanding as a goal in my spare time). However, these topics are indeed not discussed on LW (and certainly not in the form of "In order to get people to do what you want, use these three easy psychological hacks"). Would it solve your problem if such things were available?

"Art of Rationality" is an oxymoron.

Just because something is true does not mean that it is not beautiful?

LW has a cult-like social structure. ...

Where the evidence for this is:

  • Appealing to people based on shared interests and values.
  • Sharing specialized knowledge and associated jargon.
  • Exhibiting a preference for like-minded people.
  • Being more likely to appeal to people actively looking to expand their social circle.

Seems a rather gigantic net to cast for "cults".

Well, there's this:

However, involvement in LW pulls people away from non-LWers.

But that is similarly gigantic -- on this front, in my experience LW isn't any worse than, say, joining a martial arts club. The hallmark of cultishness is that membership is contingent on actively cutting off contact with non-cult members.

Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.

Art in the other sense of the word. Think more along the lines of skills and practices.

I think "art" here is mainly intended to call attention to the fact that practical rationality's not a collection of facts or techniques but something that has to be drilled in through deliberate long-term practice: otherwise we'd end up with a lot of people that can quote the definitions of every cognitive bias in the literature and some we invented, but can't actually recognize when they show up in their lives. (YMMV on whether or not we've succeeded in that respect.)

Some of the early posts during the Overcoming Bias era talk about rationality using a martial arts metaphor. There's an old saying in that field that the art is 80% conditioning and 20% technique; I think something similar applies here. Or at least should.

(As an aside, I think most people who aren't artists -- martial or otherwise -- greatly overstate the role of talent and aesthetic invention in them, and greatly underestimate the role of practice. Even things like painting aren't anywhere close to pure aesthetics.)

Would it be fair to characterize most of your complaints as roughly "Less Wrong focuses too much on truth seeking and too little on instrumental rationality - actually achieving material success"?

In that case, I'm afraid your goals and the goals of many people here may simply be different. The common definition of rationality here is "systematic winning". However, this definition is very fuzzy, because whether you are "winning" depends on what your goals and values are.

Can't speak for anyone else, but the reason why I am here is that I like polite but vigorous discussion. It's nice to be able to discuss topics with people on the internet in a way that does not drive me crazy. People here are usually open to new ideas, respectful, yet also uncompromising in the force of their arguments. Such an environment is much more helpful to me in learning about the world than the adversarial nature of most forum discussions. My goal in reading LessWrong is mostly finding like-minded people who I can talk to, share ideas with, learn from, and disagree with, all without any bad feelings. That is a rare thing.

If your goal is achieving material success, there are certainly very general tools and skills you can learn, like getting over procrastination, managing your emotional state, or changing your value system to achieve your goals. CFAR is probably a better resource than LessWrong for learning about these tools (but I've never actually been to a workshop). However, there is no general way to achieve success that is specific enough to be useful to one person's goals. No one resource can possibly provide that. There are heuristics like "Find someone who is as successful as you would like to be and is willing to help you on your path; if necessary, harass them enough that they help you," which I would advocate for medicine, or "Find someone who is both very high-status and willing to help lesser beings, and get them to be your mentor," which I would tentatively advocate for graduate school (my sample size is small, though). But I don't know of a general path.

[I]nvolvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. [...] LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

I think you've got the causation going the wrong way here. LW does target a lot of socially awkward intellectuals. And a lot of LWers do harbor some contempt for their "less rational" peers. I submit, however, that this is not because they're LWers but rather because they're socially awkward intellectuals.

American geek culture has a strong exclusionist streak: "where were you when I was getting beaten up in high school?" Your average geek sees himself (using male pronouns here because I'm more familiar with the male side of the culture) as smarter and morally purer than Joe and Jane Sixpack -- who by comparison are cast as lunkish, thoughtless, cruel, but attractive and socially successful -- and as having suffered for that, which in turn justifies treating the mainstream with contempt and suspicion.

That's a pretty fragile worldview, though. It's threatened by any deviations from its binary classification of people, and indeed things don't line up so neatly in real life. LW's style of rationality provides a seemingly more robust line of division: on this side you have the people brave and smart enough to escape their heuristics and biases, and on that one you have the people that aren't, who continue to play bitches and Blunderbores as needed. You may recognize this as a species of outgroup homogeneity.

Where's the flaw in this line of thinking? Well, practical rationality is hard. A lot harder than pointing out biases in other people's thinking. But if all you're looking for from the community is a prop for your ego, you probably aren't strongly motivated to do much of that hard work; it's a lot easier just to fall back on old patterns of exclusion dressed up in new language.

I've debated with myself about writing a detailed reply, since I don't want to come across as some brainwashed LW fanboi. Then I realized this was a stupid reason for not making a post. Just to clarify where I'm coming from:

I'm in more-or-less the same position as you are. The main difference being that I've read pretty much all of the Sequences (and am slowly rereading them) and I haven't signed up for cryonics. Maybe those even out. I think we can say that our positions on the LW - Non-LW scale are pretty similar.

And yet my experience has been almost completely opposite of yours. I don't like the point-by-point response on this sort of thing, but to properly respond and lay out my experiences, I'm going to have to do it.

Rationality doesn't guarantee correctness.

I'm not going to spend much time on this one, seeing as how pretty much everyone else commented on this part of your post.

Some short points, though:

Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements.

This is addressed in a part of the Sequences you probably haven't read. I generally recommend "Three Worlds Collide" to people struggling with this distinction, but I haven't gotten any feedback on how useful that is.

Rationality can help you make "should"-statements, if you know what your preferences are. It helps you optimize towards your preferences.

When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving.

I believe the Sequences give the example that to be good at baseball, one shouldn't calculate the trajectory of the ball; one should just use the intuitive "ball-catching" parts of the brain and train those. While overanalyzing things seems to be a bit of a hobby for the aspiring rationalist community, if you think that its members are the sort of people who will spend 25% of their time planning to shave off 5% of their driving time, you're simply wrong about who's in that particular community.

LW tends to conflate rationality and intelligence.

This is actually a completely different issue. One worth addressing, but not as part of "rationality doesn't guarantee correctness."

In particular, AI risk is overstated

I'm not the best suited to answer this, and it's mostly about your estimate towards that particular risk. As ChristianKl points out, a big chunk of this community doesn't even think Unfriendly AGI is currently the biggest risk for humanity.

What I will say is that if AGI is possible (which I think it is), then UFAI is a risk. And since Friendliness is likely to be as hard as actually solving AGI, it's good that groundwork is being laid before AGI becomes a reality. At least, that's how I see it. I'd rather have some people working on that issue than none at all. Especially if the people working for MIRI are best at working on FAI, rather than on another existential risk.

LW has a cult-like social structure

No more than any other community. Everything you say in that part could be applied to the time I got really into Magic: The Gathering.

I don't think Less Wrong targets "socially awkward intellectuals" inasmuch as it was founded by socially awkward intellectuals and that socially awkward intellectuals are more likely to find the presented material interesting.

However, involvement in LW pulls people away from non-LWers.

This has, in my case, not been true. My relationships with my close friends haven't changed one bit because of Less Wrong or the surrounding community, nor have my other personal relationships. If anything, Less Wrong has made me more likely to meet new people or do things with people I don't have a habit of doing things with. LessWrong showed me that I needed a community to support myself (a need that I hadn't consciously realized I had before), and HPMOR taught me a much-needed lesson about passing up on opportunities.

For the sake of honesty and completeness, I must say that I do very much enjoy the company of aspiring rationalists, both in meatspace at the meetups and in cyberspace (through various channels, mostly reddit, tumblr and skype). Fact of the matter is, you can talk about different things with aspiring rationalists. The inferential distances are smaller on some subjects. Just like how the inferential distances about the intricacies of Planeswalkers and magic are smaller with my Magic: The Gathering friends.

Many LWers are not very rational.

This is only sorta true. Humans in general aren't very rational. Knowing this gets you part of the way. Reading Influence: Science and Practice or Thinking, Fast and Slow won't turn you into a god, but they can help you realize some mistakes you are making. And that still remains hard for all but the most orthodox aspiring rationalists. I keep using "aspiring rationalists" because I think that sums it up: the Less Wrong-sphere just strives to do better than the default in the areas of both epistemic and instrumental rationality. I can't think of anyone I've met (online or off-) who believes that "perfect rationality" is a goal mere humans can attain.

And it's hard to measure degrees of rationality. Ideally, LWers should be more rational than average, but you can't quite measure that, can you? My experience is that aspiring rationalists at least put in greater effort toward reaching their goals.

For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality

Rationality is a tool, not a goal. And the best interventions in my life have been shorter-term: get more exercise, use HabitRPG, be aware of your preferences, Ask/Tell/Guess culture, Tsuyoku Naritai, spaced repetition software... these are the first things that come to mind that I use regularly and that actually improve my life and help me reach my goals.

And as anecdotal evidence: I once put it to the skype-group of rationalists that I converse with that every time I had no money, I felt like I was a bad rationalist, since I wasn't "winning." Not a single one blamed it on a Lack of Rationality.

Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.

If you want to understand that behavior, I encourage you to read the Sequences on morality. I could try to explain it, but I don't think I can do it justice. I generally hate the "just read the Sequences"-advice, but here I think it's applicable.

LW membership would make me worse off.

This is where I disagree the most. (Well, not that it would make you worse off. I won't judge that.) Less Wrong has most definitely improved my life. The suggestion to use HabitRPG or LeechBlock, the stimulating conversations and boardgames I have at the meetup each month, the lessons I learned here that I could apply in my job, discovering my sexual orientation, having new friends, picking up a free concert, being able to comfort my girlfriend more effectively, being able to better figure out which things are true, doing more social things... Those are just the things I can think of off the top of my head at 3:30 AM that Less Wrong allowed me to do.

I don't intend to convince you to become more active on Less Wrong. Hell, I'm not all that active on Less Wrong, but it has changed my life for the better in a way that a different community wouldn't have done.

Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.

It does, at least for me, and I seriously doubt that I'm the only one. I haven't reached a successful career (yet, working on that), but my life is more successful in other areas thanks in part to Less Wrong. (And my limited career-related successes are, in part, attributable to Less Wrong.) I can't quantify how much this success can be attributed to LW, but that's okay, I think. I'm reasonably certain that it played a significant part. If you have a way to measure this, I'll measure it.

"Art of Rationality" is an oxymoron.

I like that phrase because it's a reminder (A) that humans aren't perfectly rational and require practice to become better rationalists, and (B) that rationality is a thing you need to do constantly. I like this SSC post as an explanation.

Thanks for the detailed reply!

Based on this feedback, I think my criticisms reflect mostly on my fit with the LWers I happened to meet, and on my unreasonably high standards for a largely informal group.

Hi Algernoq,

Thanks for writing this. This sentence particularly resonated:

LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills).

I was definitely explicitly discouraged from pursuing a PhD by certain rationalists and I think listening to their advice would have been one of the biggest mistakes of my life. Unfortunately I see this attitude continuing to be propagated so I am glad that you are speaking out against it.

EDIT: Although, it looks like you've changed my favorite part! The text that I quoted above was not the original text (which talked more about dropping out of a PhD and starting a start-up).

involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals.

Alternative hypothesis: Once a certain kind of person realizes that something like the LW community is possible and even available, they will gravitate towards it - not because LW is cultish, but because the people, social norms, and ideas appeal to them, and once that kind of interaction is available, it's a preferred substitute for some previously engaged-in interaction. From the outside, this may look like contempt for Normals. But from personal experience, I can say that from the inside it feels like you've been eating gruel all your life, and that's what you were used to, but then you discovered actual delicious food and don't need to eat gruel anymore.

Yes, it's rather odd to call a group of like-minded people a cult because they enjoy and prefer each other's company.

In grad school I used to be in a couple of email lists that I enjoyed because of the quality of the intellectual interaction and the topics discussed, one being Extropians in the 90s. I'd given that stuff up for a long time.

Got back into it a little a few years ago. I had been spending time at a forum or two, but was getting bored with them primarily because of the low quality of discussion. I don't know how I happened on HPMOR, but I loved it, and so naturally came to the site to take a look. Seeing Jaynes, Pearl, and The Map is not the Territory served as good signaling to me of some intellectual taste around here.

I didn't come here and get indoctrinated - I saw evidence of good intellectual taste and that gave me the motivation to give LW a serious look.

This is one suggestion I'd have for recruiting: play up canonical authors more. Jaynes, Kahneman, and Pearl convey so much more information than "Bayesian analysis", "cognitive biases", and "causal analysis". None of those guys is the be-all and end-all of their respective fields, but identifying them plants a flag where we see value, which can attract similarly minded people.

Thanks for being bold enough to share your dissenting views. I'm voting you up just for that, given the reasoning I outline here.

I think you are doing a good job of detaching the ideas of LW that you think are valuable, adopting them, and ditching the others. Kudos. Overall, I'm not sure about the usefulness of debating the goodness or badness of "LW" as a single construct. It seems more useful to discuss specific ideas and make specific criticisms. For example, I think lukeprog offered a good specific criticism of LW thinking/social norms here. In general, if people take the time to really think clearly and articulate their criticisms, I consider that extremely valuable. On the opposite end of the spectrum, if someone says something like "LW seems weird, and weird things make me feel uncomfortable," that is not as valuable.

I'll offer a specific criticism: I think we should de-emphasize the sequences in the LW introductory material (FAQ, homepage, about page). (Yes, I was the one who wrote most of the LW introductory material, but I was trying to capture the consensus of LW at the time I wrote it, and I don't want to change it without the change being a consensus decision.) In my opinion, the sequences are a lot longer than they need to be, not especially information-dense, and also hard to update (there have been controversies over whether some point or another in the Sequences is correct, but those controversies never get appended to the Sequences).

Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.

I'm having a hard time forming a single coherent argument out of this paragraph. Yep, value judgements are important. I don't think anyone on Less Wrong denies this. Yep, it's hard to extrapolate beyond limited data. Is there a particular LW post that advocates extrapolating based on limited data? I haven't seen one. If so, that sounds like a problem with the post, not with LW in general. Yes, learning from real-world data is great. I think LW does a decent job of this; we are frequently citing studies. Yes, it's possible to overthink things, and maybe LW does this. It might be especially useful to point to a specific instance where you think it happened.

I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it's necessary to build and test prototypes in the real world.

Makes sense. In my work as a software developer, I've found that it's useful to think for a bit about what I'm going to program before I program it. My understanding is that mathematicians frequently prove theorems, etc. without testing them, and this is considered useful. So to the extent that AI is like programming/math, highly theoretical work may be useful.

My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

This seems like it deserves its own Open Thread comment/post if you want to explain it in detail. (I assume you have arguments for this idea, as opposed to having it pop into your head fully formed :])

One way this happens is by encouraging contempt for less-rational Normals.

I agree this is a problem.

I imagine the rationality "training camps" do this to an even greater extent.

I went to a 4-day CFAR workshop. I found the workshop disappointing overall (for reasons that probably don't apply to other people), but I didn't see the "contempt for less-rational Normals" you describe present at the workshop. There were a decent number of LW-naive folks there, and they didn't seem to be treated differently. Based on talking to CFAR employees, they are wise to some of the problems you describe and are actively trying to fight them.

LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.

Well sure, I might as well say that Comic-Con or Magic: The Gathering attracts socially awkward people without many existing social ties. "LW recruiting" is not quite as strategic as you make it out to be (I'm speaking as someone who knows most of the CFAR and MIRI employees, goes to lots of LWer parties in the Bay Area, used to be housemates with lukeprog, etc.). I'm not saying it's not a thing... after the success of HPMOR, there have been efforts to capitalize on its success more fully. To the extent that specific types of people are "targeted", I'd say that intelligence is the #1 attribute. My guess is that if you were to poll people at MIRI & CFAR, and other high-status Bay Area LW people like the South Bay meetup organizers, they would if anything have a strong preference for community members who are socially skilled and well-connected over socially awkward folks.

For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should").

Rationality seems like a pretty vague "solution" prescription. To the extent that there exists a hypothetical "LW consensus" on this topic, I think it would be that going to a CFAR workshop would solve these problems more effectively than reading the sequences, and a CFAR workshop is not much like reading the sequences.

LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW

Well, I think I have become substantially more successful during the time period when I've been a member of the LW community (got in to a prestigious university and am now working at a high-paying job), and I think I can attribute some of this success to LW (my first two internships were at startups I met through the bay area LW network, and I think my mental health improved from making friends who think the same way I do). But that's just an anecdote.

"Art of Rationality" is an oxymoron.

Agreed. One could level similar criticisms at books with titles like The Art of Electronics or The Art of Computer Programming. But I think Eliezer made a mistake in trying to make rationality seem kind of cool and deep and wise in order to get people interested in it. (I think I remember reading him write this somewhere; can't remember where.)

Rationality doesn't guarantee correctness

That's a strawman: I don't think a majority of LW believes that rationality guarantees correctness.

In particular, AI risk is overstated

The LW consensus on the matter of AI risk isn't that it's the biggest X-risk. If you look at the census, you will find that different community members think different X-risks are the biggest, and more people fear bioengineered pandemics than a UFAI event.

LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to drop out of their PhD program, go to "training camps" for a few months

I don't know what you mean by training camps, but the CFAR events are 4-day camps.

If you mean App Academy by "training camp", then yes, some people might do it instead of a PhD program and then go on to work. There is a trend of companies like Google doing evidence-based hiring and caring less about employees' degrees than about their skills. As companies get better at evaluating the skill of potential hires, the signaling value of a degree decreases. Learning practical skills at App Academy might be more useful for some people, but it's of course no straightforward choice.

My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

Having a lot of surplus income that gets thrown in a brute-force way at research problems might increase X-risk instead of reducing it.

I read LW for entertainment, and I've gotten some useful phrases and heuristics from it, but the culture bothers me (more what I've seen from LWers in person than on the site). I avoid "rationalists" in meatspace because there's pressure to justify my preferences in terms of a higher-level explicit utility function before they can be considered valid. People of similar intelligence who don't consider themselves rationalists are much nicer when you tell them "I'm not sure why, but I don't feel like doing xyz right now." (To be fair, my sample is not large. And I hope it stays that way.)

Your criticism of rationality for not guaranteeing correctness is unfair, because nothing can do that. Your criticism that rationality still requires action is equivalent to saying that a driver's license does not replace driving, though many Less Wrongers do overvalue rationality, so I guess I agree with that bit. You do, however, seem to make a big mistake in buying into the whole fact-value dichotomy, which is a fallacy, since at the fundamental level only objective reality exists. Everything is objectively true or false, and the fact that rationality cannot dictate terminal values does not contradict this.

I do agree with the general sense that Less Wrong is subject to a lot of groupthink, however, and agree that this is a big issue.