In line with my fine tradition of beating old horses, in this post I'll try to summarize some arguments that people proposed in the ancient puzzle of Torture vs. Dust Specks and add some of my own. Not intended as an endorsement of either side. (I do have a preferred side, but don't know exactly why.)

  • The people saying one dust speck is "zero disutility" or "incommensurable utilities" are being naive. Just pick the smallest amount of suffering that in your opinion is non-zero or commensurable with the torture, and restart the argument from there.
  • Escalation argument: go from dust specks to torture in small steps, slightly increasing the suffering and massively decreasing the number of people at each step. If each individual change increases utility, so does the final result.
  • Fluctuation argument: the probability that the universe randomly subjects you to the torture scenario is considerably higher than 1/3^^^3 anyway, so choose torture without worries even if you're in the affected set. (This doesn't assume the least convenient possible world, so fails.)
  • Proximity argument: don't ask me to value strangers equally to friends and relatives. If each additional person matters 1% less than the previous one, then even an infinite number of people getting dust specks in their eyes adds up to a finite and not especially large amount of suffering; see the worked sum after this list. (This assumption negates the escalation argument once you do the math.)
  • Real-world analogy: we don't decide to pay one penny each to collectively save one starving African child, so choose torture. (This is resolved by the proximity argument.)
  • Observer splitting: if you split into 3^^^3 people tomorrow, would you prefer all of you to get dust specks, or one of you to be tortured for 50 years? (This neutralizes the proximity argument, but the escalation argument also becomes non-obvious.)
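
To make the proximity argument's arithmetic explicit (a worked sum under the 1% assumption from the list above, with d standing for the disutility of a single speck): if the n-th additional sufferer is discounted by a factor of 0.99^n, then even infinitely many dust specks add up to

    d + 0.99 d + 0.99^2 d + ... = d / (1 - 0.99) = 100 d,

the equivalent of only a hundred undiscounted specks, nowhere near 50 years of torture. The same discounting is what breaks the escalation argument: past some step, multiplying the number of heavily discounted sufferers no longer outweighs the extra suffering inflicted on each.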

Oh what a tangle. I guess Eliezer is too altruistic to give up torture no matter what we throw at him; others will adopt excuses to choose specks; still others will stay gut-convinced but logically puzzled, like me. The right answer, or the right theory to guide you to the answer, no longer seems so inevitable and mathematically certain.

Edit: I submitted this post to LW by mistake, then deleted it, which turned out to be the real mistake. Seeing the folks merrily discussing away in the comments long after the deletion, I tried to undelete the post somehow, but nothing worked. All right; let this be a sekrit area. A shame, really, because I just thought of a scenario that might have given even Eliezer cause for self-doubt:

  • Observer splitting with a twist: instead of you, one of your loved ones will be split into 3^^^3 people tomorrow. Torture a single branch for 50 years, or give every branch a dust speck?

 

Comments (66)

I prefer dust specks because I insist on counting people one at a time. I think it's obvious that any single person presented with the opportunity to save someone else from fifty years of torture by experiencing a dust speck in the eye ought to do so. Any of those 3^^^3 people who would not voluntarily do so, I don't have enough sympathy for such individuals to step in on their behalf and spare them the dust speck.

I haven't yet worked out a good way to draw the line in the escalation scenario, since I suspect that "whatever level of discomfort I, personally, wouldn't voluntarily experience to save some random person from 50 years of torture" is unlikely to be the right answer.

I think it's obvious that any single person presented with the opportunity to save someone else from fifty years of torture by experiencing a dust speck in the eye ought to do so.

Woah.

How'd they end up responsible for the choice you make? You can't have it both ways, that's not how it works.

My (unfinished, don't ask for too much detail) ethical theory is based on rights, which can be waived by the would-be victim of an act that would otherwise be a rights violation. So in principle, if I could poll the 3^^^3 people, I would expect them to waive the right not to experience the dust specks. They aren't responsible for what I do, but my expectations of their dispositions about my choice inform that choice.

Then the "real-world analogy" point in the post prompts me to ask a fun question: do you consider yourself entitled to rob everyone else of one penny to save one starving African child? Because if someone refused to pay up, you "don't have enough sympathy for such individuals" and would take the penny anyway.

Changing the example to one that involves money does wacky things to my intuitions, especially since many people live in situations where a penny is not a trivial amount of money (whereas I take it that a dust speck in the eye is pretty much the same for everybody), and since there are probably less expensive ways to save lives (so unlike the purely stipulated tradeoff of the dust speck/torture situation, I do not need a penny from everyone to save the starving child).

Thanks! It seems my question wasn't very relevant to the original dilemma. I vaguely recall arguing with you about your ethical theory some months ago, so let's not go there; but when you eventually finish that stuff, please post it here so we can all take a stab.

You are not placing the question in the least convenient possible world.

In the least convenient possible world: I take it that in this case, that world is the one where wealth is distributed equally enough that one penny means the same amount to everybody, and every cheaper opportunity to save a life has already been taken advantage of.

Why would a world that looked like that have a starving African child? If we all have X dollars, so a penny is worth the same to everyone, then doesn't the starving African child also have X dollars? If he does, and X dollars won't buy him dinner, then there just must not be any food in his region (because it doesn't make any sense for people to sell food at a price that literally no one can afford, and everybody only has X dollars) - so X dollars plus (population x 1¢) probably wouldn't help him either.

Perhaps you had a different inconvenient possible world in mind; can you describe it for me?

One where the African child really does need that cent.

I'm afraid that isn't enough detail for me to understand the question you'd like me to answer.

How's that possible? The question is this: there are, say, a trillion people, each of whom has exactly one cent to give away. If almost every one of them parts with their cent, one life gets saved; otherwise one life is lost. Each of these people can either give up their cent voluntarily, or you, personally, can rob them of that cent (say, you can implement some worldwide policy to do that in bulk). Do you consider it the right choice to rob every one of these people who refuses to pay up?

It sounds like in this possible world, I am a tax collector.

I think it is a suitable use of taxes to save starving people.

So you are enabled to choose dust specks based on your prediction that the 3^^^3 people will waive their rights. However, you "don't have sympathy" for anyone who actually doesn't. Therefore, you are willing to violate the rights of anyone who does not comply with your predicted ethical conclusion. What, then, if all 3^^^3 people refuse to waive their rights? Then you aren't just putting a dust speck into the eyes of 3^^^3 people, you're also violating their rights by your own admission. Doesn't that imply a further compounding of disutility?

I don't see how your ethical theory can possibly function if those who refuse to waive their rights have them stripped away as a consequence.

By the same argument (i.e. refusing to multiply), wouldn't it also be better to torture 100 people for 49 years than to torture one person for 50 years?

Not if each of them considers it a wrong choice. Refusing to multiply goes both ways, and no math can argue with this choice: whatever thought experiment you present, an intuitive response would be stamped on top and given as a reply.

I did say:

I haven't yet worked out a good way to draw the line in the escalation scenario, since I suspect that "whatever level of discomfort I, personally, wouldn't voluntarily experience to save some random person from 50 years of torture" is unlikely to be the right answer.

For this reason, the scenario you present is among those I have no suitable answer to. However, I lean towards preferring the 50 years of torture for 1 person over 49 years for 100.

I prefer dust specks because I insist on counting people one at a time. I think it's obvious that any single person presented with the opportunity to save someone else from fifty years of torture by experiencing a dust speck in the eye ought to do so.

This is defection, a suboptimal strategy. Each person in isolation prefers to defect in the Prisoner's Dilemma.

Any of those 3^^^3 people who would not voluntarily do so, I don't have enough sympathy for such individuals to step in on their behalf and spare them the dust speck.

And this is preference for fuzzies over utility, inability to shut up and multiply.

And this is preference for fuzzies over utility, inability to shut up and multiply.

If this is true, then by reductio, preferring utility over fuzzies is incorrect.

There's another argument I think you might have missed:

Utilitarianism is about being optimal. Instinctive morality is about being fail-safe.

Implicit in all decisions is a nonzero possibility that you are wrong. Once you take that into account, having some "hard" rules like not agreeing to torture here (or in other dilemmas), not pushing the fat guy on the tracks in the trolley problem, etc, can save you from making horrible mistakes at the cost of slightly suboptimal decisions. Which is, incidentally, how I would want a friendly AI to decide as well - losing a bit in the average case to prevent a really horrible worst case.

That rule alone would, of course, make you vulnerable to Pascal's Mugging. I think the way to go here is to have some threshold at which you round very low (or very high) probabilities off to zero (or one) when the difference is small against the probability of you being wrong. Not only will this protect you against getting your decisions hacked, it will also stop you from wasting computing power on improbable outcomes. This seems to be the reason why Pascal's Mugging usually fails on humans.

Both of these are necessary patches because we operate on opaque, faulty and potentially hostile hardware. One without the other is vulnerable to hacks and catastrophic failure modes, but both taken together are a pretty strong base for decisions that, so far, have served us humans pretty well. In two rules:

1) Ignore outcomes to which you assign a lower probability than to your being wrong/mistaken about the situation.
2) Ignore decisions with horrible worst-case scenarios if there are options with a less horrible worst case and a still acceptable average case.

When both of these apply to the same thing, or this process eliminates all options, you have a dilemma. Try to reduce your uncertainty about 1) and start looking for other options in 2). If that is impossible, shut up and do it anyway.
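
A minimal sketch of the two rules as a decision procedure (the probability-utility representation of options, the function name, and both thresholds are invented for illustration, not something proposed above):

    def choose(options, p_mistaken=1e-3, horror_threshold=-1e6):
        """Toy decision procedure for the two rules above.

        options: dict mapping an option name to a list of (probability, utility)
                 pairs; the representation and all numbers are illustrative.
        p_mistaken: estimated probability of having misread the situation (rule 1).
        horror_threshold: utilities below this count as horrible worst cases (rule 2).
        """
        # Rule 1: ignore outcomes less probable than our own error rate.
        pruned = {name: [(p, u) for p, u in outs if p >= p_mistaken]
                  for name, outs in options.items()}

        # Rule 2: if some option avoids horrible worst cases entirely,
        # discard the options that do not.
        safe = {name: outs for name, outs in pruned.items()
                if outs and min(u for _, u in outs) > horror_threshold}
        candidates = safe if safe else pruned  # nothing safe: the dilemma case

        # Ordinary expected utility decides among whatever is left.
        def expected(outs):
            return sum(p * u for p, u in outs)

        return max(candidates, key=lambda name: expected(candidates[name]))

Under a scheme like this, a Pascal's Mugging payoff attached to a 10^-20 probability is simply pruned by rule 1 before it can dominate the calculation, which is the behaviour described above.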

the right answer is |U(3^^^3 + 1 dust specks) - U(3^^^3 dust specks)| < |U(1 dust speck) - U(0 dust specks)|, and U(any number of dust specks) < U(torture)

There is no additivity axiom for utility.

This is called the "proximity argument" in the post.

I've no idea how we're managing to have this discussion under a deleted submission. It shouldn't have even been posted to LW! It was live for about 30 seconds until I realized I clicked the wrong button.

It's in the feed now, and everyone subscribed will see it. You cannot unpublish on the Internet! Can you somehow "undelete" it? I think it's a fine enough post.

Nope, I just tried pushing some buttons (edit, save, submit etc.) and it didn't work. Oh, boy. I created a secret area on LW!

Hmm. That should probably be posted to Known Issues...

What smoofra said (although I would reverse the signs and assign torture and dust specks negative utility). Say there is a singularity in the utility function for torture (goes to negative infinity). The utility of many dust specks (finite negative) cannot add up to the utility for torture.

If the utility function for torture were negative infinity:

  • any choice with a nonzero probability of leading to torture gains infinite disutility,
  • any torture of any duration has the same disutility - infinite,
  • the criteria for torture vs. non-torture become rigid - something which is almost torture is literally infinitely better than something which is barely torture,

et cetera.

In other words, I don't think this is a rational moral stance.

RobinZ, perhaps my understanding of the term utility differs from yours. In finance & economics, utility is a scalar (i.e., a real number) function u of wealth w, subject to:

u(w) is non-decreasing; u(w) is concave downward.

(Negative) singularities to the left are admissible.
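
For example, the standard logarithmic utility already has this shape: u(w) = ln(w) is non-decreasing (u'(w) = 1/w > 0), concave downward (u''(w) = -1/w^2 < 0), and has a negative singularity at the left edge, with u(w) -> -infinity as w -> 0.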

I confess I don't know about the history of how the utility concept has been generalized to encompass pain and pleasure. It seems a multi-valued utility function might work better than a scalar function.

The criteria you mention don't exclude a negative singularity to the left, but when you attempt to optimize for maximum utility, the singularity causes problems. I was describing a few.

Edit: I meant to say: in the utility function used for utilitarianism, which has multiple inputs.

I can envision a vector utility function u(x) = (a, b), where the ordering is on the first term a, unless there is a tie at negative infinity; in that case the ordering is on the second term b. b is -1 for one person-hour of minimal torture, and it's multiplicative in persons, duration and severity >= 1. (Pain infliction of less than 1 times minimal torture severity is not considered torture.) This solves your second objection, and the other two are features of this 'Just say no to torture' utility function.
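
A minimal sketch of that vector utility (the tuple encoding and helper names are invented for illustration; the -1 per person-hour and the severity >= 1 convention come from the comment above):

    import math

    # Utility as a pair (a, b), compared lexicographically: the first term decides,
    # and since -inf equals -inf, any two torture scenarios fall through to the
    # second term b. Python tuples already compare this way.

    def torture_utility(persons, hours, severity):
        """severity >= 1 marks torture; b is -1 per person-hour at minimal
        severity, multiplicative in persons, duration and severity."""
        assert severity >= 1
        return (-math.inf, -1.0 * persons * hours * severity)

    def discomfort_utility(total_disutility):
        """Sub-torture discomfort such as dust specks: finite first term only."""
        return (-float(total_disutility), 0.0)

    # Any finite pile of dust specks beats any amount of torture...
    assert discomfort_utility(3 ** 7) > torture_utility(1, 1, 1)
    # ...and shorter torture still beats longer torture, answering the
    # "any duration is equally bad" objection.
    assert torture_utility(1, 49 * 365 * 24, 1) > torture_utility(1, 50 * 365 * 24, 1)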

Quote:

  • any choice with a nonzero probability of leading to torture gains infinite disutility,
  • any torture of any duration has the same disutility - infinite,
  • the criteria for torture vs. non-torture become rigid - something which is almost torture is literally infinitely better than something which is barely torture,

But every choice has a nonzero probability of leading to torture. Your proposed moral stance amounts to "minimize the probability-times-intensity of torture", to which a reasonable answer might be, "set off a nuclear holocaust annihilating all life on the planet".

(And the distinction between torture and non-torture is - at least in the abstract - fuzzy. How much pain does it have to be to be torture?)

But every choice has a nonzero probability of leading to torture.

In real life or in this example? I don't believe this is true in real life.

There is nothing you can do that makes it impossible that there will be torture. Therefore, every choice has a nonzero probability of being followed by torture. I'm not sure whether "leading to torture" is the best way to phrase this, though.

What he said. Also, if you are evaluating the rectitude of each possible choice by its consequences (i.e. using your utility function), it doesn't matter if you actually (might) cause the torture or if it just (possibly) occurs within your light cone - you have to count it.

What he said.

Are you referring to me? I'm a she.

headdesk

What Alicorn said, yes. Damnit, I thought I was doing pretty good at avoiding the pronoun problems...

Don't worry about it. It was a safe bet, if you don't know who I am and this is the context you have to work with ;)

Hey, don't tell me what I'm not allowed to worry about! :P

(...geez, I feel like I'm about to be deleted as natter...)

I believe you should count choices that can measurably change the probability of torture. If you can't measure a change in the probability of torture, you should count that as no change. I believe this view more closely corresponds to current physical models than the infinite butterflies concept.

But if torture has infinite weight, any change - even one too small to measure - has either infinite utility or infinite disutility. Which makes the situation even worse.

Anyway, I'm not arguing that you should measure it this way, I'm arguing that you don't. Mathematically, the implications of your proposal do not correspond to the value judgements you endorse, and therefore the proposal doesn't correspond to your actual algorithm, and should be abandoned.

Changes that are small enough to be beyond Heisenberg's epistemological barrier cannot in principle be shown to exist. So, they acquire Easter Bunny-like status.

Changes that are within this barrier but beyond my measurement capabilities aren't known to me; and, utility is an epistemological function. I can't measure it, so I can't know about it, so it doesn't enter into my utility.

I think a bigger problem is the question of enduring a split second of torture in exchange for a huge social good. This sort of thing is ruled out by that utility function.

But that's ridiculous. I would gladly exchange being tortured for a few seconds - say, waterboarding, like Christopher Hitchens suffered - for, say, an end to starvation worldwide!

More to the point, deleting infinities from your equations works sometimes - I've heard of it being done in quantum mechanics - but doing so with the noisy filter of your personal ignorance, or even the less-noisy filter of theoretical detectability, leaves wide open the possibility of inconsistencies in your system. It's just not what a consistent moral framework looks like.

I agree about the torture for a few seconds.

A utility function is just a way of describing the ranking of desirability of scenarios. I'm not convinced that singularities on the left can't be a part of that description.

Singularities on the left I can't rule out universally, but setting the utility of torture to negative infinity ... well, I've told you my reasons for objecting. If you want me to spend more time elaborating, let me know; for my own part, I'm done.

There is no "Heisenberg's epistemological barrier". The utility function is defined on everything that could possibly be, whether or not you know specific possibilities to be real. You are supposed to average over the set of possibilities that you can't distinguish because of limited knowledge.

The equation involving Planck's constant in the following link is not in dispute, and that equation does constitute an epistemological barrier:

http://en.wikipedia.org/wiki/Uncertainty_principle

Everyone has their own utility function (whether they're honest about it or not), I suppose. Personally, I would never try to place myself in the shoes of Laplace's Demon. They're probably those felt pointy jester shoes with the bells on the end.

Proof left to the reader?

If I am to choose between getting a glass of water or a cup of coffee, I am quite confident that neither choice will lead to torture. You certainly cannot prove that either choice will lead to torture. Absolute certainty has nothing to do with it, in my opinion.

You either have absolute certainty in the statement that neither choice will lead to torture, or you allow some probability of it being incorrect.

This was confronted in the Escalation Argument. Would you prefer 1000 people being tortured for 49 years to 1 person being tortured for 50 years? (If you would, take 1000 to 1000000 and 49 to 49.99, etc.) Is there any step of the argument where your projected utility function isn't additive enough to prefer that a much smaller number of people suffer a little bit more?
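
Spelled out with illustrative numbers (the exact schedule does not matter), the escalation argument asks you to accept every comparison in a chain like

    (1 person, 50 years) < (1,000 people, 49.99 years) < (1,000,000 people, 49.98 years) < ... < (3^^^3 people, one dust speck each),

where "<" means "is less bad than". If you accept each step, transitivity forces you to rank the single 50-year torture as less bad than the 3^^^3 dust specks, which is exactly what the speck-preferrer denies; so somewhere in the chain a step has to be rejected.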

Actually, I think you're right. The escalation argument has caught me in a contradiction. I wonder why I didn't see it last time around.

I still prefer the specks though. My prior in favor of the specks is strong enough that I have to conclude that there's something wrong with the escalation argument that I'm not presently clever enough to find. It's a bit like reading a proof that 2+2 = 5. You know you've just read a proof, and you checked each step, but you still, justifiably, don't believe it. It's far more likely that the proof fooled you in some subtle way than it is that arithmetic is actually inconsistent.

Well, we have better reasons to believe that arithmetic is consistent than we have to believe that human beings' strong moral impulses are coherent in cases outside of everyday experience. I think much of the point of the SPECKS vs. TORTURE debate was to emphasize that our moral intuitions aren't perceptions of a consistent world of values, but instead a thousand shards of moral desire which originated in a thousand different aspects of primate social life.

For one thing, our moral intuitions don't shut up and multiply. When we start making decisions that affect large numbers of people (3^^^3 isn't necessary; a million is enough to take us far outside of our usual domain), it's important to be aware that the actual best action might sometimes trigger a wave of moral disgust, if the harm to a few seems more salient than the benefit to the many, etc.

Keep in mind that this isn't arguing for implementing Utilitarianism of the "kill a healthy traveler and harvest his organs to save 10 other people" variety; among its faults, that kind of Utilitarianism fails to consider its probable consequences on human behavior if people know it's being implemented. The circularity of "SPECKS" just serves to point out one more domain in which Eliezer's Maxim applies:

You want to scream, "Just give up already! Intuition isn't always right!"

This came to mind: What you intuitively believe about a certain statement may as well be described as an "emotion" of "truthiness", triggered by the focus of attention holding the model just like any other emotion that values situations. Emotion isn't always right, and an estimate of plausibility isn't always right, but these are basically the same thing. I somehow used to separate them, along the lines of the probability-utility distinction, but that distinction is probably more confusing than helpful, with truthiness on its own and the concept of emotions containing everything but it.

Yup. I get all that. I still want to go for the specks.

Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks vs. torture hypothetical doesn't actually matter. I don't know. But I'm just not convinced.

Yup. I get all that. I still want to go for the specks.

Just give up already! Intuition isn't always right!

Hello. I think the Escalation Argument can sometimes be found on the wrong side of Zeno's Paradox. Say there is negative utility to both dust specks and torture, where dust specks have finite negative utility. Both dust specks and torture can be assigned to an 'infliction of discomfort' scale that corresponds to a segment of the real number line. At minimal torture, there is a singularity in the utility function - it goes to negative infinity.

At any point on the number line corresponding to an infliction of discomfort between dust specks and minimal torture, the utility is negative but finite. The Escalation Argument begins in the torture zone, and slowly diminishes the duration of the torture. I believe the argument breaks down when the infliction of discomfort is no longer torture. At that point, non-torture has higher utility than all preceding torture scenarios. If it's always torture, then you never get to dust specks.
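
For concreteness, one function with the shape being described (x is the level of discomfort, t the minimal-torture threshold; the particular formula is just an illustration):

    U(x) = -x / (t - x)  for 0 <= x < t,    U(x) = -infinity  for x >= t.

It is finite for any sub-torture discomfort, plunges to -infinity as x approaches t, and assigns every torture scenario the same infinitely negative value, which is exactly the difficulty raised in the reply below.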

Then your utility function can no longer say 25 years of torture is preferable to 50 years. This difficulty is surmountable - I believe the original post had some discussion on hyperreal utilities and the like - but the scheme looks a little contrived to me.

To me, a utility function is a contrivance. So it's OK if it's contrived. It's a map, not the territory, as illustrated above.

I take someone's answer to this question at their word. When they say that no number of dust specks equals torture, I accept that as a datum for their utility function. The task is then to contrive a function which is consistent with that.

Orthonormal, you're rehashing things I've covered in the post. Yes, many reasonable discounting methods (like exponential discounting in the "proximity argument") do have a specific step where the derivative becomes negative.

What's more, that fact doesn't look especially unintuitive once you zoom in on it; do the math and see. For example, in the proximity argument the step involves the additional people suffering so far away from you that even an infinity of them sums up to less than e.g. one close relative of yours. Not so unrealistic for everyday humans, is it?


What's more, that fact doesn't look especially unintuitive once you zoom in on it; do the math and see. For example, in the proximity argument the step involves the additional people suffering so far away from you that even an infinity of them sums up to less than e.g. one close relative of yours. Not so unrealistic for everyday humans, is it?

It's intuitive to me that everyday humans would do this, but not that it would be right.


It seems to me that the idea of a critical threshold of suffering might be relevant. Most dust-speckers seem to maintain that a dust speck is always a negligible effect - a momentary discomfort that is immediately forgotten - but in a sufficiently large group of people, randomly selected, a low-probability situation in which a dust speck is critical could arise. For example, the dust speck could be a distraction while operating a moving vehicle, leading to a crash. Or the dust speck could be an additional frustration to an individual already deeply frustrated, leading to an outburst. Each conditional in these hypotheticals is improbable, but multiplying them out surely doesn't yield a probability anywhere near as small as 1/3^^^3, which means that among 3^^^3 people many such cases are highly likely to occur. Under this interpretation, the torture is the obvious winner.

If cascading consequences are ruled out, however, I'll have to think some more.

When you, personally, decide between your future containing a dust speck at some unknown moment and some alternative, the value of that dust speck won't be significantly affected by the probability of it causing trouble, if that probability is low enough.

You could replace a dust speck with a 1000/3^^^3 probability of being tortured for 50 years, so that it's a choice between 3^^^3 people each having a 1000/3^^^3 probability of being tortured and one person being tortured with certainty, or, derandomizing, a choice between 1000 people tortured and one person tortured. That one person had better be really special, for the proximity effect to elevate them above all those other people.

The proximity effect, as described in the post, makes your "derandomizing" step invalid.

It can't be invalid: just replace the initial rule with this: of all the 3^^^3 people, a random selection of 1000 will be made who are to be tortured. Given this rule, each individual has about a 1000/3^^^3 probability of getting selected for torture, which is presumably an even better deal than a certain speck. This is compared to choosing one person to torture with certainty. The proximity effect may say that those 1000 people are from far away and so of little importance, which I mentioned in the comment above. I don't think the choice of saving one known person over a thousand ridiculously-far-away people is necessarily incorrect though.

Yes, this way is correct. I thought you implied the 1000 people were close, not far away.

Sure, makes sense. I imagine the probability of the consequences I'm hypothesizing is much less than 3^^^3/1000, though, which makes the dust specks still worse.


For the original formulation of the problem, assume no cascading consequences and replace "dust speck" with "minimal non-negligible amount of suffering" as in the first point of the post.