LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an extended discussion, see "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"). This behavior is particularly insidious because it is self-reinforcing: it attracts more armchair rationalists to LessWrong, who in turn reinforce the trend in an affective death spiral, until LessWrong becomes a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where, instead of discussing practical ways to "overcome bias" (the original intent of the sequences), we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend, or at least make us feel better about it, was a series of discussions on "leveling up": accomplishing a set of practical, well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a Renaissance-man-inspired quote, and stands in stark contrast to articles emphasizing practical altruism such as "efficient charity".
So what's the solution? I don't know. However, I can tell you a few things about the solution, whatever it may be:
- It won't feel like the right thing to do; your moral intuitions (designed to operate in a small community of hunter-gatherers) are unlikely to suggest anything near the optimal task.
- It will be something you can start working on right now, immediately.
- It will disregard arbitrary self-limitations like abstaining from politics or keeping yourself aligned with a community of family and friends.
- Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines, then construct a new one, on the spot, immediately. Just do something: every moment you sit idle, hundreds of thousands are dying and billions are suffering. Under your judgment, your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.
Evidence? Who accuses them of this? One post (on Less Wrong itself!) is not evidence enough for this claim.
Since this barb is directed at me, I should respond. When I come across a superb intellect like Yudkowsky, I first shut up and read the bulk of what he has to say (in Yudkowsky's case, this is helpfully packaged in the sequences). Then I apply my modest intellect to exploring the areas of his thinking that I do not find convincing.
Note that the essay is not about "who gets to be in our CEV"; it is about whether the CEV should include all of humanity or not. The ability to distinguish between these questions should be within the capability of a rationalist - although I expect your distortion is an intentional attempt to trivialise the subject for rhetorical effect.
Otherwise, what you have written boils down to this: “we should shut up and multiply. You people aren't shutting up and multiplying.”
Unfortunately, we are not consistent expected utility maximisers, so "shut up and multiply" can never be more than an ideal for unmodified and unextrapolated human beings. It is actually impossible to implement "shut up and multiply" literally if you aren't accurately described by a utility function.
Furthermore, our introspective, knowledge, and computational limitations give us no particular way of resolving conflicts between our values, even if we were expected utility maximisers. For example, the value of enjoying an argument for its own sake and the value of arguing things in a strictly optimal attempt to minimise existential risk are somewhat opposed to one another. Yet even if I did have a personal utility function such that there existed an optimal way for my unmodified and unextrapolated self to resolve this conflict and maximise utility, I wouldn't know what it was!
It is sometimes fair to recommend that someone shut up and multiply, but I would only do so (I hope!) when the stakes are extreme enough that they outweigh this inconsistency. I might also do so in a specific discussion in which someone was conflicted about whether they should do something, because SUAM seems like the best possible answer if someone is going to ask what they "should" do.
But since neither of these conditions applies, there is really no basis for saying that I or anyone else should not have arguments and discussions for enjoyment's sake alone, unless you have good reason to think that the consequences are really extreme. (For example, I criticised Eliezer for not shutting up and multiplying in his proposals for CEV; those are extreme consequences.)
That said, setting aside the fact that my ability to contribute intellectually is modest, can you really see no benefit in discussing important concepts such as CEV? Why is discussion of overcoming biases worthwhile, but not discussion of important strategies for the future of humanity?
Finally, although it scarcely seems necessary to say this, you cannot expect to be taken seriously with this kind of portentousness ("billions are suffering" - "become an optimization process") unless you have some serious achievements of your own to point to. If you do in fact have something to boast about, please go ahead and tell us about it.