Followup to: Scope Insensitivity

"Whoever saves a single life, it is as if he had saved the whole world."

-- The Talmud, Sanhedrin 4:5

It's a beautiful thought, isn't it? Feel that warm glow.

I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet - it's a bit complicated, but essentially, I managed to turn someone's whole life around by leaving an anonymous blog comment. I wasn't expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.

Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.

But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.

For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - and thus passing to the entire world changes little.

I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving of the world - it is as if they had saved an intergalactic civilization.

Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save them. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. "Quick!" you scream to me. "Do something!" But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?

Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
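The comparison can be made concrete with a quick expected-value calculation. A minimal sketch, assuming (purely as an illustration) the same success probability for either grant - since the probability is equal on both sides, its exact value doesn't affect which option wins:

```python
# Expected lives saved by each $10 million grant, using the numbers above.
# p_cure is an assumed, illustrative success probability; because it is
# identical for both options, its value cancels out of the comparison.
p_cure = 0.5

rare_disease_deaths = 100            # rare disease: a hundred people planetwide
common_disease_deaths = 100_000 // 10  # less spectacular disease: 10% of 100,000

ev_rare = p_cure * rare_disease_deaths      # 50.0 expected lives saved
ev_common = p_cure * common_disease_deaths  # 5000.0 expected lives saved

print(ev_rare, ev_common)  # 50.0 5000.0
```

A hundredfold difference in expected lives saved, for the same money - which is the whole point of maximizing rather than satisficing.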

Addendum:  It's not cognitively easy to spend money to save lives, since the clichéd methods that instantly leap to mind don't work or are counterproductive.  (I will post later on why this tends to be so.)  Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should be consistent and disdain even more those who could spend money to save lives but don't.

Comments


Also, whoever saves a person to live another fifty years, it is as if they had saved fifty people to live one more year. Whoever saves someone who very much enjoys life, it is as if they saved many people who are not sure they really want to live. And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.

Which is why I'm still puzzled by a simplistic moral dilemma that just won't go away for me: are we morally obligated to have children, and as many as we can? Setting aside using that energy or money to more efficiently "save" lives, of course. It seems to me we should encourage people to have children - a common thing that many more people will actually do than donate philanthropically - in addition to other philanthropy encouragements.

are we morally obligated to have children, and as many as we can?

Cost of a first-world child is - checks random Google result - about $180,000 to get them to age 18. Cost of saving a kid in Africa from dying of malaria is ~$1,000.

Right now, having children is massively selfish, because there are options that are more than two orders of magnitude more effective. It'd be like blowing up the train in order to save the deaf kids from the original post :)

Not necessarily. A full argument would consider the opportunities available to a child you raise -- it's perfectly possible for a single first-world child to be more productive than 180 kids in Africa.

There's also the counter-point (to my previous point) that having children discourages other people from having children, due to the forces of the market (greater demand for stuff available to children => greater costs of stuff available to children). Of course, the effect on demand is spread out to stuff other than just stuff available to children, so overall this does not cause an equal and opposite reaction.

If you successfully teach your child to be a utilitarian, an effective altruist, etc., though, the utility of both previous points is dwarfed by this (the second point is dwarfed because the average first-world child probably wouldn't pick up utilitarianism or EA on their own). I'm not sure what the probability of a child picking up values like that is (and it would make one heck of a difficult experiment), but my guess is that, if taught properly, it would be likely enough to dwarf the utility of the first two points.

A lot of people don't consider failure to exist the same as dying. Of course, we need some level of procreation as long as there is death, and humanity would probably continue to expand even then.

Why? Because dying is painful? Beyond that, I see them as equivalent.

Non-existing is not the same thing as ceasing to exist.

Among other reasons, if you die there will be people mourning you, whereas if you had never existed in the first place there won't.

But the whole point of the post above is that our personal feelings are negligible next to the enormity of the utilitarian consequences behind our feelings.

Caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.

The fact that no one knows the unborn person yet doesn't mean that she doesn't matter.

Robin, we can definitely agree that my notion about relative conditional frequencies is not at all clearly true. This is one of those rare, rare issues that still confuses even me. As such - this is an important general principle, that I'd like to emphasize - when you try to model things that are deeply confusing and mysterious to you, you should not be very confident in your judgments about them.

If infinite people exist, how do our subjective probabilities come out right - why don't we always see every possible die roll with probability 1/6, even when the dice are loaded? How is computation possible, when every if statement always branches both ways? I seriously don't know. Maybe the numbers are finite but just very large. But, if for whatever reason it is possible to flip a biased coin and indeed see mostly heads, then we can try to shape the outcomes of people's lives so that their futures are mostly happy. I don't claim to be sure of this. It is just my attempt to make things add up to normality.

Jeremy, see Nick Bostrom's paper.

In a Big World, which this one appears to be on at least three counts (spatially infinite open universe, inflationary scenario in Standard Model, and Everett branches), everyone who could exist already exists with probability 1. Thus, the issue is not so much creating new people, but ensuring that good things happen to people given that they exist. Creating a new person helps when you can provide them with good outcomes, because what you're really doing is increasing the frequency of good outcomes from that starting point.

Or at least that's one anthropic interpretation of ethics. But it is one reason why I don't endorse running out and creating lots of people if that lowers the average standard of living. In a Big World, it's the average standard of living that you care about.

Tell me if I'm wrong, but doesn't a many worlds reality mean that all possible states of those people also occur with a probability of one? How can you possibly "[increase] the frequency of good outcomes"? All the outcomes occur in some world, irrespective of our actions.

Eliezer, I hope we can agree that your conclusion is intriguing, but far from clearly true. After all, if every possible person exists, then so does every possible history for every possible person. How then could you effect any relative frequencies?

As for the philanthropist, I think the relevant heuristic is that we approve of anyone who saves lives, to socially reinforce the urge for others to do so. If our instincts developed in a tribal environment, then saving a life, or a small group of lives, was the best anyone could realistically do, so we had no need to scale our admiration any further.

But if we are to become less biased, and disdain the philanthropist who spends his life-saving money inefficiently, we should be totally consistent about it, and disdain far, far more those who could spend money to save lives and don't (unfortunately, that probably includes most of us).

Robin: And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.

I have to question that comparison. When you save a life that already exists, you are delivering them from a particular existential danger, even if not from the generic existential danger they face constantly by virtue of being alive. But when you create a life, you are delivering a new "hostage to fortune" and creating an existentially endangered being where none previously existed.

I have a paper on this problem of infinities in ethics: http://www.nickbostrom.com/ethics/infinite.pdf

It is a difficult topic.

Charles, you might want to read some of Peter Singer's writings on this point.

Robin, it's clear that relative frequencies exist and matter somehow, even though it might seem like they shouldn't (e.g. because of the ordering problem described in Dr. Bostrom's paper). We observe random events with nonuniform distributions to occur according to the distribution, as opposed to uniformly. We don't live in an extremely bizarre, acausal world even though there are an infinite number throughout spacetime, because the laws of physics are such as to make bizarre worlds rarer than normal ones (even though there are many more possible bizarre worlds than normal ones). "Difficult topic" is probably an understatement.

Where does this end? If a philanthropist saves one life instead of two, he is damned as any murderer. Surely we in the more prosperous countries could easily save many lives by cutting back on luxuries, but we choose not to (this would no doubt apply to nearly everyone in these countries). Does that make us all murderers?

Yes. We just aren't socially condemned for it.

Robin's comment raises the interesting question of whether creating a new life is as good as saving one. It definitely seems to be easier to create a new one, at least at first (the long term effort is probably greater). Most people manage to create a new life or two, but probably never save any. We don't tend to celebrate new-life creators as much as we do life-savers, perhaps because it is seen as too easy.

No. It's way, way easier to save one. According to the Disease Control Priorities Project (http://tinyurl.com/y9wpk5e) you can save lives for about $3 per year. That's, what, $225 for a whole life? Creating a life requires nine months of pregnancy, during which you can't work as well, and you have to pay for food while you're eating for two, and that's just assuming you give the child up for adoption. You also can only do it once every nine months, and you have to be a girl, whereas you can save a life every time you earn $225.
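For what it's worth, the arithmetic behind that $225 figure checks out. A minimal sketch, where the $3-per-life-year number is from the DCPP link above and the ~75-year lifespan is my own illustrative assumption:

```python
# Back-of-the-envelope cost of saving one whole life, from the
# $3-per-life-year figure quoted above. The 75-year lifespan is an
# assumed, illustrative number, not from the source.
cost_per_life_year = 3   # dollars per year of life saved (DCPP estimate)
assumed_lifespan = 75    # years (assumption)

cost_per_life = cost_per_life_year * assumed_lifespan
print(cost_per_life)  # 225
```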

That means it's cheaper and possible to do in greater volume - not easier. It's probably uncommon indeed to save lives by accident, let alone while actively trying not to, which happens in the creation department all the time. Easier certainly doesn't mean cheaper, or people would behave differently with credit cards.

Easier certainly doesn't mean cheaper

Having established that, would you say that a pregnancy (or several, since the average pregnancy produces fewer than one child) is easier or harder than mailing a check?

I'd distinguish here between "difficult" as in requiring discomfort and "difficult" as in requiring optional effort. By optional effort, I mean effort that one could feasibly take the null action rather than exert. None of the effort expended in carrying a baby to term is really optional at the time. If I were to get pregnant, I could at no time say to myself, "Well, I'd really rather not vomit right now, so I'll take the null action." Even if there were something I could have done earlier to enable the null action at that time, once it gets to that point, it's happening whether I like it or not. Similar with labor. I don't think anyone will perform an abortion when one is literally about to extrude an infant, practically speaking, so although labor is an immense effort, it is not an optional effort once it's gotten to that point. Taking Plan B, going through with an abortion, and yes - mailing a check, are all optional effort.

I think Alicorn's point was that being pregnant might be more unpleasant/expensive/"difficult" than mailing a check, but getting pregnant is much, much easier. So easy, in fact, one can do it accidentally.

I think the crucial point here is the disparity between the sexes. The amount of effort required for a man to induce a pregnancy - the cost of the dating-and-mating game - is certainly not "easy". I expect this is also the case for some (least-generally-attractive) women.

But getting pregnant is not enough to make a child.

Barring spontaneous miscarriage or starvation, it is the default. A woman has to refrain from taking certain actions, but, once she's pregnant, she doesn't actively have to do much but not starve.

We don't tend to celebrate new-life creators as much as we do life-savers

We do celebrate life creators quite a bit. But we celebrate their good fortune rather than their altruism, since the parents are among the people who benefit most from their parenthood.

how a "duty" gets cashed out computationally, if not as a contribution to expected utility. If I'm not using some kind of moral points, how do I calculate what my "duty" is?

We humans don't seem to act as if we're cashing out an expected utility. Instead we act as if we had a patchwork of lexically distinct moral codes for different situations, and problems come when they overlap.

Since current AI is far from being intelligent, we probably shouldn't see it as compelling argument for how humans do or should behave.

Such questions form the basis of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.

Sounds right. The more reliable information I get about the world, the more my moral preferences start resembling a utility function. Out of interest, do you have a link to those theorems?

But the consistency assumption is not present in humans, even morally well-rounded ones. We are always learning, intellectually and morally. The moral decisions we make affect our moral values as well as the other way round (this post touched on similar ideas). Seeing morality as a learning process may bring it closer to Paul's queries: what sort of a person am I? What are my values?

Except here the answers to the questions come as a result of the moral action, rather than before it.

Paul, since my background is in AI, it is natural for me to ask how a "duty" gets cashed out computationally, if not as a contribution to expected utility. If I'm not using some kind of moral points, how do I calculate what my "duty" is?

How should I weigh a 10% chance of saving 20 lives against a 90% chance of saving one life?

If saving life takes lexical priority, should I weigh a 1/googolplex (or 1/Graham's number) chance of saving one life equally with a certainty of making a billion people very unhappy for fifty years?

Such questions form the basis of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.
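To make the first of those weighing questions concrete, here is a minimal expected-value sketch. It assumes utility is linear in lives saved - which is, of course, exactly the assumption under dispute in this thread:

```python
# Weighing a 10% chance of saving 20 lives against a 90% chance of
# saving one life, assuming utility is linear in lives saved.
def expected_lives(p_success, lives_saved):
    """Expected number of lives saved by a gamble."""
    return p_success * lives_saved

option_a = expected_lives(0.10, 20)  # risky option: 2.0 expected lives
option_b = expected_lives(0.90, 1)   # safe option:  0.9 expected lives

# The expected-utility maximizer takes the risky option.
print(option_a > option_b)  # True
```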

I don't see the relevancy of Mr. Burrows' statement (correct, of course) that "Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts."

This is certainly of concern if our goal is to maximize the virtue of rich people. If it is to maximize general welfare, it is of no concern at all. The recipients of charity don't need a percentage's worth of food, but a certain absolute amount.

I'd be curious to know if there is a principled model for optimal human happiness which does not conflict so violently with our moral instincts.

Seems we need to take "creating" and "destroying" humans out of the equation - total and average happiness both work fine in a fixed population (and indeed are the same there). Maybe we can tweak the conditions and count the dead and the unborn as having a certain level of happiness - but it will still lead to conclusions that violate our instincts; there will always be moments where creating a new life while making everyone unhappy, or killing off someone to raise average happiness, will be the right thing for the model to do.

I think we need to deal with "creating" and "destroying" people with other principles than happiness.