Today's post, Bayesians vs. Barbarians, was originally published on 14 April 2009. A summary (taken from the LW wiki):


Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There's a certain concept of "rationality" which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm's way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians... And then there's the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Collective Apathy and the Internet, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


While it may not be the point of the exercise, examining the situation suggests that the best course of action would be to attempt to convert as many people in Rationalistland as possible to altruism. The reason I find this interesting is that it mirrors the real-world behavior of many rationalists. There is a bevy of resources for effective altruism, discussions on optimizing the world from an altruistic view, and numerous discussions which simply assume altruism in their construction. Very little exists in the way of openly discussing optimal egoism. As an egoist myself, I find this to be a perfectly acceptable situation, since it benefits me if more people who aren't me, or who belong to groups that don't include me, become altruistic or become more effective at being altruistic.

Taboo "altruism" and "egoism". Those words in their original meaning are merely strawmen. Everyone cares about other people (except for psychopaths, but psychopaths are also bad at optimizing for their own long-term utility). Everyone cares about their own utility (even Mother Teresa was happy to get a lot of prestige for herself, and to promote her favorite religion). In real life, speaking about altruists and egoists is probably just speaking about signalling: who spends a lot of time announcing that they care about other people (regardless of what they really do for them), and who neglects this part of signalling (regardless of what they really do). Or sometimes it is merely about whom we like and whom we don't.

I had no intention of implying extreme altruism or egoism. To be clear, by altruism I mean the case in which an agent assigns the values of some other entity or group a smaller discount rate than its own, while egoism is the opposite scenario. I describe myself as an egoist, but this does not mean that I am completely indifferent to others. In the real world, one would not describe a person who engages in altruist signalling as an altruist; rather, that person would choose the label of altruist as a form of signalling.

Either way, returning to the topic at hand with the taboo in effect: those who value the continuation of their society more than personal survival will be willing to accept greater risks to their own lives to improve the chances of victory in war. Likewise, those who value their own survival more highly, even if they expect that losing the war may endanger their lives, will choose actions that are less risky to themselves even when those actions are less advantageous for the group. By attempting to modify the values of others to place greater weight on society over the individual, and by providing evidence that makes pro-social actions seem more appealing, such as by making them appear less risky to the self, one can improve the probability of victory and thereby the chance of personal survival.

Of course, if everyone were engaging in this behavior, and we assume equal skills and resources among all parties, there would either be no net effect on the utility values of agents within the group, or a general trend toward greater pro-social behavior would form, depending on what level of skill and susceptibility we assume. This is a positive outcome, as the very act of researching and distributing the required information would create greater net resources, in terms of knowledge and strategy, for effectively contributing to the war effort.
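
To make the tradeoff concrete, here is a minimal sketch of the kind of expected-utility comparison the comment describes. All probabilities, weights, and names below are invented for illustration, not taken from the original discussion:

```python
# Hypothetical agent deciding between a risky pro-social action ("fight")
# and a safe selfish one ("hide"). All numbers are invented.

def expected_utility(p_survive, p_victory, w_self, w_group):
    """Weighted sum of the agent's own survival odds and the group's
    victory odds; w_self and w_group encode how steeply the agent
    discounts its society's values relative to its own."""
    return w_self * p_survive + w_group * p_victory

# Fighting is riskier for the agent but raises the group's odds.
fight = dict(p_survive=0.70, p_victory=0.80)
hide = dict(p_survive=0.95, p_victory=0.60)

for label, w_self, w_group in [("self-weighted", 0.8, 0.2),
                               ("group-weighted", 0.2, 0.8)]:
    u_fight = expected_utility(**fight, w_self=w_self, w_group=w_group)
    u_hide = expected_utility(**hide, w_self=w_self, w_group=w_group)
    choice = "fight" if u_fight > u_hide else "hide"
    print(f"{label}: U(fight)={u_fight:.2f}, U(hide)={u_hide:.2f} -> {choice}")
```

With these invented numbers the self-weighted agent hides (0.72 vs. 0.88) and the group-weighted agent fights (0.78 vs. 0.67), which is exactly the lever the comment proposes pulling: shift the weights, or the perceived risk, and the chosen action flips.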

[anonymous]

Every one of these complex problems and complex solutions falls away for an egoist. When rationalism is uncoupled from all who are not the rational agent, no paradox of self-preservation vs. rationality endures. An egoist may fight or abstain as they see (rational / fit / entertaining). Many have claimed to bridge the is/ought divide, and with a few leaps of faith rationality appears to do so. But I say rationality is amoral. "IF I attack THEN I get your stuff ELSE run away" is entirely rational.
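
Taken literally, the comment's decision rule really is just an expected-value calculation with no moral term in it. A minimal sketch, with invented probabilities and payoffs:

```python
# The comment's amoral rule as code: attack when the expected gain
# exceeds the expected cost. Nothing in the calculation refers to
# right or wrong. All numbers are invented for illustration.

def decide(p_win, loot_value, injury_cost):
    expected_gain = p_win * loot_value
    expected_cost = (1 - p_win) * injury_cost
    return "attack" if expected_gain > expected_cost else "run away"

print(decide(p_win=0.6, loot_value=100, injury_cost=100))  # attack
print(decide(p_win=0.2, loot_value=100, injury_cost=100))  # run away
```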

A team of rational egoists could invent ways to make their precommitments credible. Then they could, for example, organize a lottery to randomly choose a few of them to sacrifice for the benefit of the others.

Assuming that (a) participating in the lottery is better on average than not participating, (b) the precommitments are credible, meaning that once the lottery has chosen you, it would be worse or impossible not to sacrifice yourself for the benefit of the other lottery participants, and (c) it is impossible to capture the benefits of the lottery as positive externalities without participating... then a rational egoist would choose to participate in the lottery.

Yeah, the three assumptions would be extremely difficult to fulfill. But there is no law of physics saying that it is impossible.

And as DanielLC suggests, if people have other values beyond their personal profit, that only makes the solution easier.
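
Here is a minimal sketch of assumption (a); the group size, sacrifice count, and utilities below are all hypothetical, not from the comment:

```python
# Hypothetical check of assumption (a): n egoists may join a lottery that
# sacrifices k of them; joiners share a benefit that, per assumption (c),
# abstainers cannot capture. All utilities here are invented.

def eu_join(n, k, u_benefit, u_sacrificed):
    p_chosen = k / n  # probability the lottery picks you
    return (1 - p_chosen) * u_benefit + p_chosen * u_sacrificed

n, k = 100, 5
join = eu_join(n, k, u_benefit=10.0, u_sacrificed=0.0)
abstain = 8.0  # utility of surviving without the lottery's benefit

print(f"EU(join)={join:.2f}, EU(abstain)={abstain:.2f}")
# EU(join)=9.50 > EU(abstain)=8.00: the rational egoist joins, provided
# assumption (b) makes the precommitment stick once you are chosen.
```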

[anonymous]

An egoist can do what they can do, including be on teams and make agreements. But an egoist is not always and only guided by personal profit. That would make a spook, a wheel in the head, of personal profit.

It's assuming they're all egoists. If you're not an egoist, you wouldn't need a complex solution to sacrifice the few to save the many.

[anonymous]

I might be misunderstanding what you wrote, and if I have, please correct me. You seem to be saying that more than one egoist is needed for egoism to happen. This is not the case.

I think I might be misunderstanding what you wrote.

I thought you were saying that Eliezer was assuming the people were not egoists, and that if they were, it would all fall apart. I was replying that if they weren't egoists, none of this would be necessary; the scenario is intended to show that even egoists can work together if that's what it takes to win.

It's also possible that just some of them are egoists. If it's enough of them, you'd still have to do that stuff Eliezer mentioned.

What do you mean by "egoist" here?

[anonymous]

The egoism of Max Stirner, Dora Marsden and (of course most important of all) myself.