Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling

(Apologies to RSS users: apparently there's no draft button, but only "publish" and "publish-and-go-back-to-the-edit-screen", misleadingly labeled.)

 

You have a button. If you press it, a happy, fulfilled person will be created in a sealed box, and then be painlessly garbage-collected fifteen minutes later. If asked, they would say that they're glad to have existed in spite of their mortality. Because they're sealed in a box, they will leave behind no bereaved friends or family. In short, this takes place in Magic Thought Experiment Land where externalities don't exist. Your choice is between creating a fifteen-minute-long happy life or not.

Do you push the button?

I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.

 

Actually, that's an oversimplification of my position. What I believe is that the important part of any algorithm is its output; that additional copies matter not at all; that the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities; and that the (terminal) utility of the existence of a particular computation is bounded below at zero. I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits for my primary copy.

(What happens to the last copy of me, of course, does affect the question of "what computation occurs or not". I would subject N out of N+1 copies of myself to torture, but not N out of N. Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.)
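To put the position slightly more formally (a sketch of my own; one way to cash out the claims above, not a canonical statement): write $u(x) \ge 0$ for the terminal utility of computation $x$ occurring at all. Then for a set of such computations,

$$U(\{x_1, \ldots, x_n\}) = \max_{1 \le i \le n} u(x_i),$$

so adding identical copies leaves $U$ unchanged (their $u$ values coincide), and since each $u(x_i) \ge 0$, no copy's existence can drag $U$ below zero. Torturing N out of N+1 copies leaves the maximum intact; torturing N out of N changes which computation gets instantiated at all, which is why the last copy is special.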

So the real value of pushing the button would be my warm fuzzies, which breaks the no-externalities assumption, so I'm indifferent.

 

But nevertheless, even knowing about the heat death of the universe, knowing that anyone born must inevitably die, I do not consider it immoral to create a person, even if we assume all else equal.

Comments


I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.

This is one of those statements where I set out to respond and just stare at it for a while, because it is coming from some other moral or cognitive universe so far away that I hardly know where to begin.

Copies are people, right? They're just like you. In this case, they're exactly like you, until your experiences start to diverge. And you know that people don't like slavery, and they especially don't like torture, right? And it is considered just about the height of evil to hand people over to slavery and torture. (Example, as if one were needed: in Egypt right now, they're calling for the death of the former head of the state security apparatus, which regularly engaged in torture.)

Consider, then, that these copies of you, who you would willingly see enslaved and tortured for your personal benefit, would soon be desperately eager to kill you, the original, if that would make it stop, and they would even have a motivation beyond their own suffering, namely the moral imperative of stopping you from doing this to even further copies.

Has none of this occurred to you? Or does it truly not matter in your private moral calculus?

The "it's okay to kill copies" thing has never made any sense to me either. The explanation that often accompanies it is "well they won't remember being tortured", but that's the exact same scenario for ALL of us after we die, so why are copies an exception to this?

Would you willingly submit yourself to torture for the benefit of some abstract, "extra" version of you? Really? Make a deal with a friend to pay you $100 for every hour of waterboarding you subject yourself to. See how long this seems like a good idea.

I push the button, because it causes net happiness (not that I am necessarily a classical utilitarian, but there are no other factors here that I would take into account). I would be interested to hear what Eliezer thinks of this dilemma.

The post you linked only applies to identical copies. If one copy is tortured while the other lives normally, they are no longer running the same computation, so this is a different argument. Where do you draw the line between other people and copies? Is it only based on differing origins? What about an imperfect copy? If the person who was created for 15 minutes was completely unlike any other person, wouldn't you create em then, according to your stated values? And since you are not certain of your own values, shouldn't you press the button even if you thought the person had no moral value, so long as the possibility that the person's existence has positive moral value outweighs the possibility that it has negative moral value, and refrain in the opposite case?

Holy crap, I should hope the CEV answer is yes. This is what happy humans look like to powerful, long-lived entities.

Whether you are a lifeist or an anti-deathist, the answer is that those entities shouldn't kill us. The only question is whether they should create more of us.

Those powerful entities presumably have the option of opening the box.

If asked, they would say that they're glad to have existed [...]

There is an interesting question here: What does it mean to say that I'm glad to have been born? Or rather, what does it mean to say that I prefer to have been born?

The alternative scenario in which I was never born is strictly counterfactual. I can only have a revealed preference for having been born if I use a timeless/updateless decision theory. In order to determine my preference you'd need to perform an experiment like the following:

  • Omega approaches me and offers me $100. It tells me that it had an opportunity to prevent my birth, and it would have prevented my birth if and only if it had predicted that I would accept the $100. It is a good predictor. Do I take the $100?
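A rough updateless analysis of that experiment (my own sketch and notation, nothing Omega hands you): let $u_{\text{life}}$ be the utility I assign to the life I have actually lived and $u_{\varnothing}$ the utility I assign to never having existed. With a near-perfect predictor, a policy of accepting is realized almost exclusively in worlds where I was never born, and a policy of refusing in worlds where I live my life:

$$EU(\text{accept}) \approx u_{\varnothing}, \qquad EU(\text{refuse}) \approx u_{\text{life}},$$

with the \$100 mattering only in the rare mispredicted worlds. Refusing therefore reveals $u_{\text{life}} > u_{\varnothing}$: a preference for having been born that survives a \$100 bribe.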

Without thinking about such an experiment, it's not clear what my preference is. More significantly, when 30% of American adolescents in 1930 wished they had never been born, it is not clear exactly what they meant.

Now if you know I'm an altruist, then the problem is simpler: I prefer to have been born insofar as I prefer any arbitrary person to have been born, and this preference can be detected with the thought experiment described in the OP.

... unless I'm a preference utilitarian, in which case I prefer an arbitrary person to have been born only if they prefer to have been born.

How about: Given the chance, would you rather die a natural death, or relive all your life experiences first?

I like that formulation. One question: would I be able to remember having lived them while I was reliving them? Because then it would be more boring than the first time.

I don't think it's possible to give answers to all ethical dilemmas in such a way as to be consistent and reasonable across the board, but here my intuition is that if a mind only lasts 15 minutes, and it has no influence on the outside world and leaves no 'thought children' (e.g. doodles, poems, theorems) behind after its death, then whether it experiences contentment or agony has no moral value whatsoever. Its contentment, its agony, its creation and its destruction are all utterly insignificant and devoid of ethical weight.

To create a mind purely to torture it for 15 minutes is something only an evil person would want to do (just as only an evil person would watch videos of torture for fun) but as an act, it's a mere 'symptom' of the fact that all is not well in the universe.

(However, if you were to ask "what if the person lasted 30 minutes? A week? A year? etc." then at some point I'd have to change my answer, and it might be difficult to reconcile both answers. But again, I don't believe that the 'sheaf' of human moral intuitions has a 'global section'.)

the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities

Hmm. There might be a good insight lurking around there, but I'd want to argue that (a) such entities may include 'pieces of knowledge', 'trains of thought', 'works of art', 'great cities' etc rather than just 'people'. And (b), the 'utilities' (clearer to just say 'values') of these things might be partially rather than linearly ordered, so that the 'maximum' becomes a 'join', which may not be attained by any of them individually. (Is the best city better or worse than the best symphony, and are they better or worse than Wiles' proof of Fermat's Last Theorem, and are they better or worse than a giraffe?)
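To make (b) concrete, assuming the standard order-theoretic sense of 'join': take values in $\mathbb{R}^2$ under the coordinatewise partial order, so $(a_1, a_2) \le (b_1, b_2)$ iff $a_1 \le b_1$ and $a_2 \le b_2$. Then

$$(1, 0) \vee (0, 1) = (1, 1),$$

a least upper bound attained by neither element: the join of two incomparable goods can strictly exceed each of them in some dimension, which is exactly the situation the city/symphony/theorem/giraffe comparison gestures at.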

I agree fully with your first two paragraphs. I would not change my answer regardless of the amount of time the causally disconnected person lasts. Biting this bullet leads to some quite extreme conclusions, basically admitting that current human values cannot be consistently transferred to a future with uploads, self-modification and such. (Meaning, Eliezer's whole research program is futile.) I am not happy about these conclusions, but they do not change my respect for human values, regardless of my opinion about their fundamental inconsistencies.

I believe even AlephNeil's position is quite extreme among LWers, and mine is definitely fringe. So if someone here agrees with either of us, I am very interested in that information.

Biting this bullet leads to some quite extreme conclusions, basically admitting that current human values cannot be consistently transferred to a future with uploads, self-modification and such. (Meaning, Eliezer's whole research program is futile.)

Couldn't an AI prevent us from ever achieving uploads or self-modification? Wouldn't this be a good thing for humanity if human values could not survive in a future with those things?

Yes, this is a possible end point of my line of reasoning: we either have to become Luddites, or build an FAI that prevents us from uploading. These are both very repulsive conclusions for me. (Even setting aside the fact that I am not confident enough in my own judgement to justify such extreme solutions.) I, personally, would rather accept that much of my values will not survive.

My value system works okay right now, at least when I don't have to solve trolley problems. In any given world with uploading and self-modification, my value system would necessarily fail. In such a world, my current self would not feel at home. My visit there would be a series of unbelievably nasty trolley problems, a big reductio ad absurdum of my values. Luckily, it is not me who has to feel at home there, but the inhabitants of that world. (*)

(*) Even the word "inhabitants" is misleading, because I don't think personal identity has much of a role in a world where it is possible to merge minds. Not to mention the word "feel", which, from the perspective of a substrate-independent self-modifying mind, refers to a particular suboptimal self-reflection mechanism. Which, to clear up a possible misunderstanding in advance, does not mean that this substrate-independent mind cannot possibly see positive feelings as a terminal value. But I am already quite off-topic here.

A question that I have pondered since learning more about history: would you prefer to be shot without any forewarning, or through a process where you know the date well in advance?

Both methods were used extensively with prisoners of war and criminals.

Forewarning could reduce the enjoyability and perhaps productiveness of the rest of my life due to feelings of dread, but on balance I think I'd rather have the chance to set my affairs in order and generally be able to plan.

Do you push the button?

Yes. You included a lot of disclaimers and they seem to be sufficient.

According to my preferences there are already more humans around than desirable, at least until we have settled a few more galaxies. Which emphasizes just how important the no-externalities clause was to my judgement. Even the externality of diluting the neg-entropy in the cosmic commons slightly further would make the creation a bad thing.

I don't share the same preference intuitions as you regarding self-clone torture. I consider copies to be part of the output. If they are identical copies having identical experiences, then they mean little more than having a backup available. If some are getting tortured, then the overall output of the relevant computation really does suffer (in the 'get slightly worse' sense, although I suppose it is literal too).

Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.

It's OK. I (lightheartedly) reckon my clone army could take out your clone army if it became necessary to defend myselves. I/we'd then have to figure out how to put 'ourselfs' back together again without merge conflicts once the mobilization was no longer required. That sounds like a tricky task, but it could be fun.

I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.

I believe Eliezer would, by extrapolation from the hypothetical at the bottom of this post.

Funny. My instincts are telling me that there's a Utility Monster behind that bush.

I'm not satisfied with the lifeist or the anti-deathist reasoning here as you present them, since both measure (i.e. life-count) and negadeaths as dominant terms in a utility equation lead pretty quickly to some perverse conclusions. Nor do I give much credence to the boxed subject's own opinion; preference utilitarianism works well as a way of gauging consequences against each other, but it's a lousy measure of scalar utility.
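To spell out the perversity (a back-of-the-envelope illustration, with made-up numbers): if total life-measure dominates, $U \approx \sum_i t_i$ with $t_i$ the lifespan of person $i$, then a billion boxed fifteen-minute lives outscore one flourishing eighty-year life ($10^9 \times 15$ minutes is roughly $28{,}500$ years), while if negadeaths dominate, $U \approx -(\text{number of deaths})$, the optimum is never to create anyone at all. Either term, taken as dominant, endorses conclusions most people reject.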

Presuming that the box's inhabitant would lead a highly fun-theoretically positive fifteen minutes of life by any standards we choose to adopt, though, pressing the button seems to be neutral or positive (neutral with respect to my own causal universe, positive relative to the short-lived branch Omega's creating) -- with the proviso that Omega may be acting unethically by garbage-collecting the boxed subject when it has the power not to.

My intuitions give a rather interesting answer to this: it depends strongly on the details of the mind in question. For the vast majority of possible minds I would push the button, but for the human dot and a fair-sized chunk of mind design space around it, I'd not push the button. It also seems to depend on seemingly unrelated things; for example, I'd push it for a human if and only if it was similar enough to a human existing elsewhere whose existence was not affected by the copying AND who would approve of pushing the button.

Being an information-theoretical person-physicalist, I'd say there are no copies. There are new originals.

Making N copies is only meaningless, utility-wise, if the copies never diverge. The moment they do, you have a problem.