For those not familiar with the topic, Torture vs. Dustspecks asks the question: "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?"


Most of the discussion I have noted on the topic takes one of two assumptions in deriving an answer to that question: I think of one as the 'linear additive' answer, which says that torture is the proper choice for the utilitarian consequentialist, because a single person can only suffer so much over a fifty-year window, as compared to the incomprehensible number of individuals who suffer only minutely; the other I think of as the 'logarithmically additive' answer, which inverts the answer on the grounds that forms of suffering are not equal, and cannot be added as simple 'units'.
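The two aggregation rules contrasted above can be made concrete with a toy sketch. All of the numbers below are illustrative assumptions (nothing in the post assigns magnitudes), and `N` is a comparatively tiny stand-in, since 3^^^3 itself cannot be represented on any computer:

```python
import math

TORTURE = 1e9        # assumed disutility of fifty years of torture (arbitrary units)
SPECK = 1e-9         # assumed disutility of one dust speck
N = 10 ** 30         # stand-in population receiving specks

linear_total = SPECK * N             # 'linear additive': simple units, summed
log_total = SPECK * math.log(N)      # 'logarithmically additive': compressed

# Under linear addition the specks dominate TORTURE (so: choose torture);
# under this logarithmic reading they stay negligible at this N (so: choose
# specks) -- whether that survives at 3^^^3 scale is debated further below.
```

The point of the sketch is only that the choice of aggregation rule, not the per-person harm, drives the answer.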

What I have never yet seen is something akin to the notion expressed in Ursula K. Le Guin's The Ones Who Walk Away From Omelas. If you haven't read it, I won't spoil it for you.

I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point. There are consequences to such a choice that extend beyond the suffering inflicted: moral responsibility, the standards of behavior that either choice makes acceptable, and so on. Any solution to the question which ignores these elements in making its decision might be useful in revealing one's views about the nature of cumulative suffering, but beyond that it is of no value in making practical decisions -- it cannot be, since 'consequence' extends beyond the mere instantiation of a given choice -- the exact pain inflicted by either scenario -- into the kind of society that such a choice would result in.

While I myself tend more towards the 'logarithmic' than the 'linear' additive view of suffering, even if I stipulate the linear additive view, I still cannot agree with the conclusion of torture over the dust speck, for the same reason that I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist. To arbitrarily select one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices, and this violates the principle of individual self-determination -- a principle I have seen Less Wrong's community spend a great deal of time trying to incorporate into Friendliness solutions for AGI. We as a society already implement something similar to this, economically: we accept taxing everyone, even according to a graduated scheme. What we do not accept is enslaving 20% of the population to provide for the needs of the State.

If there is a flaw in my reasoning here, please enlighten me.

100 comments
[-]prase180
  1. As others have said, the scenario doesn't require linearity.
  2. You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original. Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point. If you want to say that the assumptions of the dust speck dilemma are unrealistic, you are free to do it (although such a statement is rather trivial; nobody believes that there are 3^^^^3 humans in the world). If you, on the other hand, object to the utilitarian principles involved in the answer, then do it. But please don't mix these two types of objections together.
  3. There were already many people who espoused choosing "specks", rationalising it by all sorts of elaborate arguments (not a surprising thing to see, since "specks" is the intuitive answer). This is the easy part. But I haven't seen anybody propose a coherent general decision algorithm which returns "specks" for this dilemma and doesn't return repugnant or even paradoxical answers to different questions. This is the hard part, which if you engaged, it would be much more interesting.
[-][anonymous]110

You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original. Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point.

This seems to be endemic in the discussion section, as of late.

0Logos01
By what means do you justify this assertion? Actually, there are two. Please explain your reasoning for both: 1. The notion that I am rejecting the thought experiment at all. 2. That I do so by means of "issues that are stipulated to be missing in the original". Insofar as I can determine, both of these are simply false. What about my argument makes you believe that my rejections are based on finding things repugnant as opposed to rejections on purely utilitarian grounds? I am confused as to why you would believe that I was objecting to utilitarian principles when my argument depends upon consequential utilitarianism. Examples?
3prase
The original thought experiment presents you with a choice between X: one person will suffer horribly for 50 years, and Y: 3^^^3 people will experience minimal inconvenience for a second. The point clearly was to compare the utilities of X and Y, so it is assumed that all other things are equal. You have said that you choose Y, because you "cannot accept the culture/society that would permit such a torture to exist". But the society would not be changed in the original experiment (assume, for example, that nobody except you would know about the tortured person). You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone. So, to explicitly reply to your questions, (1) you reject the original problem whether u(Y) > u(X), because you answer a different question, namely whether u(Y) > u(X and Z), and (2) the issue missing in the original is Z. Nothing. I have only said that you do the same thing what others do in similar situation. (In order not to be evasive, I admit believing that you reject the "torture" conclusion intuitively and then rationalise it. But this belief is based purely on the fact that this is what most people do; there is nothing in your arguments (apart from them being unconvincing) that further supports this belief. Now, do you admit that the "torture" variant is repugnant to you?) This is partly due to my bad formulation (I should have probably said "calculations" instead of "principles"), and partly due to the fact that it is not so clear from your post what your argument depends upon. Of what?
-1Logos01
This privileges the hypothesis. You're claiming that there will be no secondary consequences and therefore secondary consequences need not be considered. This is directly antithetical to the notion of treating these questions in an "all other things being equal" state: of course if you arbitrarily eliminate the potential results of decision X as compared to decision Y, that's going to affect the outcome of which decision is preferable. But that, then, isn't answering the question asked of us. THAT question is asked agnostic of the conditions in which it would be implemented. So we don't get to impose special conditions on how it would occur. Indeed, rather than me adding things the original hypothesis excludes, it seems to me that you are doing the exact opposite of this: you are excluding things the original hypothesis does not. In other words; to my current understanding of that hypothetical, I am the one closest to answering it without imposed additional conditions. I see. There is an error in your reasoning here, but I can understand why it would be non-obvious. You are assuming that u(n) != n + Z(n) in my formulation. The reason why this would be non-obvious is because I listed no value for Z(Y). The reason why I did not list such a value is because I am not at this time aware that said value is non-zero. So the equation remains a question of whether u(Y) is greater or lesser than u(X). The point we disagree on is not the hypothesis itself -- the comparison of u(Y) to u(X), but rather the terms of the utility function. In other words, exactly what I explicitly stated: I argue that the discussion on this topic thus far uses an insufficient definition of "utility", especially for consequentialistic utilitarianism, and therefore "misses the point". Fair enough. Thank you. I find no reason to accept the notion that my arguments are unconvincing. This, then, is the crux of the matter: What is your argument for supporting the notion that ONLY primary consequences
2prase
What? Which hypothesis do I privilege? How does assuming no secondary consequences of either variant contradict treating the other things as being equal? If n refers to either X or Y, I certainly don't assume that u(n) != n + Z(n), because such a thing has no sensible interpretation ("u(X) = X" would read "utility of torture is equal to torture"). If n refers to the number of people dust-specked or some other quantity, I still have no idea what you mean by Z(n). In my notation, Z was not a function, but a change of state of the world (namely, that society begins tolerating torture). So, maybe there is an error in my reasoning, but certainly you are not understanding my reasoning correctly. As for your demanded examples, I am still not sure what you want me to write. Edit: it seems to me that I made the same reply as paper-machine, even accidentally using the same symbols X, Y and Z, but in his use these are already utilities, while in my use they are situations. So, paper-machine.X = prase.u(X).
-2Logos01
Because, in order to achieve that state, you must impose special conditions on the implementation of the hypothetical. Ones the hypothetical itself is agnostic to. The only way to eliminate secondary consequences from consideration, in other words, is to treat the hypotheticals unequally. I also began by stating, if you'll recall, that if you do so isolate the query to first-order consequences only, all that you practically achieve is a comparison of the net total quantity of suffering directly imposed by the two scenarios. And all that achieves is to suss out whether your view of suffering is linear or logarithmic in nature. To the logarithmic-adherent, the torture scenario is an effectively infinite suffering. I don't know if you've ever tortured or been tortured, but I can assure you that fifty years is far more than is necessary for a single person's psyche to be irrevocably demolished, reconstructed, and demolished repeatedly. Eliezer's original discussion of said torture evinced, quite clearly, that he adheres to the linear-additive perspective. This is perfectly clear when he says that it "isn't the worst thing that could happen to a person". Alright, fine. u(n) = s(n) + Z(n), where u(n) is the total anti-utility of scenario n, s(n) is the suffering directly induced by scenario n, and Z(n) is the anti-utility of all secondary consequences of scenario n. Z is the function for determining the secondary consequences of scenario n. It has a specific value depending on the scenario chosen. Where am I mistaken? What am I mistaking you on? ... Why would you declare a topic that you are unable to even describe interesting? You are the one who brought it up... provide examples of scenarios that fulfill your description. If you want to discuss the topic, if you find it interesting -- discuss it! I opened the floor to it.
0prase
I will not reply to the first paragraph, because we clearly disagree about what "ceteris paribus" means, while this disagreement has little to no relevance to the original problem. If it is finite, the logic behind choosing torture works. If it is infinite, you have other problems. But you can't have it both ways. You have said "[y]ou are assuming that u(n) != s(n) + Z(n) in my formulation"; I had been assuming no such thing. Recall that you are probably reacting to this: No mention of any scenarios. If you want me to describe a consistent decision theory which returns "specks" and has no other obvious downsides, well, I can't, because I have none. Nor do I believe that such a theory exists. You believe that "specks" is the correct solution.
0Logos01
If you are not stipulating the relevance of secondary consequences to the original hypothesis then this conversation is at an end, with this statement. Either they are relevant, as is my entire argument, or they are not. Claiming via fiat that they are not will earn you no esteem on my part, and will cause me to consider your position entirely without merit of any kind; it is the ultimate in dishonest argumentation tactics: "You are wrong because I say you are wrong." Rephrase this. As I currently read it, you are stating that "if torture is infinite suffering, then torture is the better thing to be chosen." That is contradictory. Not at all. As I have stated repeatedly, suffering is not the sole relevant form of utility. Determining how to properly weight the various forms of utility against one another is necessary to untangling this. It is not at all obvious that they even can be so weighted. If that were the case then you really shouldn't have said this: "You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone." Because now we are left with two contradictory statements uttered by you. Either Z(n) is a part of the function of u(n), or it is not. These are mutually exclusive. You cannot have both. So, which statement of yours, then, is the false one? "repugnant or even paradoxical answers to different questions." <-- A rose, sir, by any other name. I do not know why you seem to find it necessary to insist that things you have said aren't in fact things you have said; I do not know why you seem to find it necessary to adhere to such rigid verbiage that synonymous terminology for things you have said is rejected as non-existent statements by yourself. It is, however, a frustrating pattern, and it is causing me to lose interest in this dialogue.
1prase
Ending the dialogue may probably be the best option. I am only going to provide you one example of the paradoxes you have demanded, since it was probably my fault that I haven't understood your request. (Next time I exhibit a similar lack of understanding, please tell me plainly and directly what you are asking for. Beware the illusion of transparency. I really have no dark motives to pretend misunderstanding when there is none.) So, the most basic problem with choosing "specks" over "torture" is that which is already described in the original post: torturing 1 person for 50 years (let's call that scenario X(0)) is clearly better than torturing 10 people for 50 years minus 1 second (X(1)); to deny that means that one is willing to subject 9 people to 50 years of agony just to spare 1 person one second of agony. X(1) is then better than torturing 100 people for 50 years minus 2 seconds (X(2)) and so on. There are about 1.5 billion seconds in 50 years, so let's define X(n) recursively as torturing ten times more people than in scenario X(n-1) for time equal to 1,499,999,999/1,500,000,000 of the time used in scenario X(n-1). Let's also decrease the pain slightly in each step: since pain is difficult to measure, let's precisely define the way torture is done: by simulating the pain one feels when the skin is burned by hot iron on p percent of body surface; at X(0) we start with burning the whole surface and p is decreased in each step by the same factor as the duration of torture. At approximately n = 3.8 * 10^10, X(n) means taking 10^(3.8*10^10) people and touching their skin with a hot needle for 1/100 of a second (the tip of the needle which comes into contact with the skin will have 0.0001 square millimeters). Now this is such negligible pain that a dust speck in the eye is clearly worse. So, we have X(3.8*10^10), which is better than dust specks with just 10^(3.8*10^10) people (a number much lower than 3^^^3), and you say that dust specks are better than X(0). Therefore there must
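The step count in this construction can be sanity-checked directly. A sketch using the comment's own approximations (50 years ≈ 1.5 billion seconds, duration shrinking by 1,499,999,999/1,500,000,000 per step, down to a 1/100-second needle touch):

```python
import math

T0 = 1_500_000_000.0                    # seconds in 50 years (comment's figure)
SHRINK = 1_499_999_999 / 1_500_000_000  # per-step duration (and burn-area) factor
T_FINAL = 0.01                          # 1/100 of a second, the endpoint

# solve T0 * SHRINK**n == T_FINAL for n
n = math.log(T_FINAL / T0) / math.log(SHRINK)
print(f"n ≈ {n:.2e}")  # ≈ 3.86e+10, matching the comment's ~3.8 * 10^10
```

So the quoted figure of roughly 3.8 * 10^10 steps checks out.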
1TimS
That is counter-intuitive, but isn't the anti-torture answer something analogous to sets? That is: R(0) is the set of all real numbers. We know that it is an uncountable infinity, and therefore larger than any countable infinity. Set R(n) is R(0) with n elements removed. As I understand it, so long as n is a countable infinity or smaller, R(n) is equal in size to R(0). [EDITED TO REMOVE INCORRECT MATH.] To cash out the analogy, it might be that certain torture scenarios are preferable to other torture scenarios, but all non-torture scenarios are less bad than all torture scenarios. As you increment down the amount of suffering in your example, you eventually remove so much that the scenario is no longer torture. In notation somewhat like yours, Y(50 yr) is the badness of imposing pain as you describe to one person for 50 years. We all seem to agree that Y(50 yr) is torture. I assert something like Y(50 yr - A) is torture if Y(A) would not be torture. I agree that you can't say that suffering is non-linear (that is, think that dust-specks is preferable to torture) without believing something like what I laid out. Logos, those "secondary" effects you point to are the properties that make Y(A) torture (or not).
5prase
This is consistent. But it induces further difficulties in the standard utilitarian decision process. To express the idea that all non-torture scenarios are less bad than all torture scenarios by a utility function, there must be some (negative) boundary B between the two sets of scenarios, such that u(any torture scenario) < B and u(any non-torture scenario) > B. Now either B is finite or it is infinite; this matters when probabilities come into play. First consider the case of B finite. This is the logistic curve approach: it means that any number of slightly super-boundary inconveniences happening to different people are preferable to a single case of a slightly sub-boundary torture. I know of no natural physiological boundary of such a sort; if severity of pain can change continuously, which seems to be the case, the sub-boundary and super-boundary experiences may be effectively indistinguishable. Are you willing to accept this? Perhaps you are. Now this takes an interesting turn. Consider a couple of scenarios: X, which is slightly sub-boundary (thus "torture") with utility B - ε (ε positive), and Y, which is non-torture with u(Y) = B + ε. Now utilities may behave non-linearly with respect to the scenario-describing parameters, but expected utilities have to be pretty linear with respect to probabilities; anything else means throwing utilitarianism out of the window. A utility maximiser should therefore be indifferent between scenarios X' and Y', where X' = X with probability p and Y' = Y with probability p (B - ε) / (B + ε). Let's say one of the boundary cases is, for the sake of concreteness, giving a person a 7.5 seconds long electric shock of a given strength. So, you may prefer to give a billion people a 7.4999 s shock in order to avoid one person getting a 7.5001 s shock, but at the same time you would prefer, say, a 99.98% chance of one person getting a 7.5001 s shock to a 99.99% chance of one person getting a 7.4999 s shock. Thus, although the torture/non-torture boun
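The indifference claim in the middle of this comment can be checked directly. A sketch with assumed illustrative values for B, ε, and p (none are given in the thread):

```python
B, eps, p = -100.0, 0.1, 0.5     # boundary utility (negative), gap, probability

u_X = B - eps                    # slightly sub-boundary ("torture") scenario
u_Y = B + eps                    # slightly super-boundary non-torture scenario

p_X = p                          # X': X with probability p
p_Y = p * (B - eps) / (B + eps)  # Y': Y with the comment's adjusted probability

eu_X = p_X * u_X                 # expected utility of X'
eu_Y = p_Y * u_Y                 # = p * (B - eps): identical, so a utility
                                 # maximiser is indifferent between X' and Y'
```

The algebra collapses because p_Y * u_Y = p * u_X by construction; the interesting part is that p_Y is still a valid probability for small ε.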
1TimS
Your reference to sacred values reminded me of Spheres of Justice. In brief, Walzer argues that the best way of describing our morality is by noting which values may not be exchanged for which other values. For example, it is illicit to trade material wealth for political power over others (i.e. bribery is bad). Or trade lives for relief from suffering. But it is permissible to trade within a sphere (money for ice cream) or between some spheres (dowries might be a historical example, but I can't think of a modern one just this moment). It seems like your post is a mathematical demonstration that I cannot believe the Spheres of Justice argument and also be a utilitarian. Hadn't thought about it that way before.
0asr
I hear your general point, and I don't dispute it. But I think your set theory analogy isn't quite right. Consider the set R - [0,1] That's all real numbers less than 0 or greater than 1. This is still uncountably infinite, and has equal cardinality to R, even though I removed the set [0,1], which is itself uncountably infinite.
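asr's correction can be made fully explicit; a sketch of the underlying bijection (standard cardinal arithmetic, not anything stated in the thread):

```latex
f\colon \mathbb{R}\setminus[0,1] \to \mathbb{R}\setminus\{0\}, \qquad
f(x) = \begin{cases} x, & x < 0,\\ x - 1, & x > 1, \end{cases}
```

This f is a bijection (it simply closes the gap), and removing a single point — or any countable set — from an uncountable set leaves its cardinality unchanged, so |R \ [0,1]| = |R \ {0}| = |R| = 2^(aleph_0), even though the removed interval [0,1] is itself uncountable.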
0TimS
Edited to remove improper math. Thanks.
-1Logos01
X(0) is a smaller value of anti-utility than X(1), absolutely. I do not, however, know that the decrease of one second is non-negligible for that measurement of anti-utility, under the definitions I have provided. That math gets ugly to try to conceptualize (fractional values of fractional values), but I can appreciate the intention. This is a non-trivial alteration to the argument, but I will stipulate it for the time being. "Clearly"? I suffer from an opacity you apparently lack; I cannot distinguish between the two. The paradox exists only if suffering is quantified linearly. If it is quantified logarithmically, a one-billionth shift at some position on the logarithmic scale is going to overwhelm the signal of the linearly-multiplicative increasing population of individuals. (Please note that this quantification is on a per-individual basis, which once quantified can be simply added.) This is far from being a paradox: it is a natural and expected consequence.
0prase
Then substitute "worse or equal" for "worse", the argument remains. Same thing, doesn't matter whether it is or it isn't. The only things which matters is that X(n) is preferable or equal to X(n+1), and that "specks" is worse or equal to X(3.8 * 10^10). If "specks" is also preferable to X(0), we have circular preferences. So, you are saying that there indeed is n such that X(n) is worse than X(n+1); it means that there are t and p such that burning p percent of one person's skin for t seconds is worse than 0.999999999 t seconds of burning 0.999999999 p percent of skins of ten people. Do I interpret it correctly? Edited: "worse" substituted for "preferable" in the 2nd answer.
0Logos01
Yes.

There's something cruel about ending a post with a request for people to point out errors in your reasoning and then arguing in circles with anyone who tries. Are you trolling, or do you just never admit to being wrong?

and this violates the principle of individual self-determination --

To select 3^^^3 people to get dust specks in their eyes also violates the "principle" of individual self-determination. And if 3^^^3 people are possible, 3^^^^3 people are probably possible too, so the idea of fairness doesn't apply - these people have all been picked out to have their individual self-determination violated.

In general you seem to be trying to wriggle out of the hypothetical as stated by bringing in extra stuff and then deciding based only on that extra stuff.

1TimS
And assuming that those who reach a different conclusion didn't include the "extra stuff" in their analysis.
0FAWS
No, that doesn't follow at all. It's ridiculous to even compare the two numbers that way. I would agree that 3^^^4 people might seem somewhat plausible in that case, and 3^^^4 is already larger than 3^^^3 by a factor incomprehensibly greater than 3^^^3. Even 3^^^4 is probably already far more than you need for your argument.
-3Logos01
True, but it does so significantly less-grossly. The impact on a person's self-determination and ability to do so from a negligible dust-specking is effectively not-measurable, compared to the lasting results of being tortured, even assuming the individual survives; such torture has consequences beyond the immediate suffering and impedes that person's ability to be who or what they wish to be even once the torture has ended. Please explain. I wasn't aware that the hypothetical as stated was anything other than "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?" If by "extra stuff" you mean "other consequences" and "based only on that extra stuff" you mean "determining which set of consequences would be less-optimal" then... you're absolutely right in this part.
8Manfred
Would you say it does so... a factor of 3^^^3 less grossly? There's a phrase: "all other things being equal." You can always give your answer and then point out that it's irrelevant to the real world, or ADBOC. But if you start making your own list of things you want the hypothetical to be about, you haven't given an answer at all.
0Logos01
Please rephrase. I understand every word and phrase you use, but the arrangement of them in the context of this conversation is inscrutable. Ceteris paribus, yes. I'm fully aware of the phrase, and its ordinary implications for a given dialogue. I'm not able to derive intelligible clues as to why you think bringing it up is relevant to the conversation. Do you believe that I have somehow violated this principle? If so, please explain why -- because I disagree with that notion. Ahh... the question was framed in the context of how a consequentialistic utilitarian ought to answer it. In pointing out that first-order consequences are insufficient to properly calculate which is preferable I have not altered the question. Is the nature of your objection to my position the simple fact that I refuse to consider only the immediate suffering of the proposition? If so, then simply put my argument is that you are taking an excessively narrow view of the question. I.e.; you are "making your own list of things you want the hypothetical to be about" -- or, rather, not about. Whatever that thing is, it certainly doesn't properly address, as I argue, the hypothetical as given. Tell me; on what grounds do you choose to exclude secondary consequences from the metric of deciding which of the two choices is preferable to a consequentialistic utilitarian? How, in addition, does this standard of excluding or including consequences from said calculus affect consequentialism? Why are some consequences "fit" for consideration, whereas others are "unfit"?
5Manfred
The upshot is that as soon as you allow things to be immoral (or violate rights, or whatever) to various degrees, not just black and white "immoral" and "not immoral," you have exactly the same problem, so talking about torture being a violation of rights doesn't bring anything new to the table unless you're prepared to bite some pretty bitter bullets. Yeah, pretty much. If it were logically impossible for ceteris to be paribus, there would be every reason to reject the hypothetical. But it's not - those worlds are perfectly possible, you are merely asked to say which you like better. To bring in "secondary factors" (i.e. look at worlds where ceteris isn't paribus) and then decide based on those factors alone isn't a correction to the original question, it's answering a completely different question.
0Logos01
Do I correctly understand you to believe that I was including the right of individual self-determination as an independent valuative norm, rather than for its utility? (Also, please note that I did not originally use the term "right" but rather "principle".) Incorrect. I can only assume that you are thinking that I'm concerned, here, about the violation of rights in general, as opposed to individual self-determination in specific. My point was to demonstrate that the direct impact of torture vs. dustspecks goes beyond mere suffering. Really, this only serves to illustrate my belief in the erroneous nature of associating "utilons" with "happiness". We derive utility from things other than feeling pleasure; we experience disutility from things other than experiencing suffering. However, suffering when present in sufficient quantities in a single person can and does impact those other forms of disutility... such as the loss of capacity for self-determination. Please note: self-determination as I am using it does NOT refer to getting to choose whether or not the event itself (speck vs. torture) happens to you. It refers to the ability, thereafter, to make competent decisions about your own life, or have the capacity to determine for yourself who you wish to be. I see. You are under the misapprehension that I am not applying the principle of ceteris paribus to the argument. Rest assured that this is a misapprehension. I in fact am treating this as an "all other things being equal" scenario. I simply have a more expansive view of the definition of "consequence" than "suffering alone". 1. I never even intimated that "only the secondary consequences should be considered". Please discontinue the use of this strawman view of my argument. 2. Considering secondary consequences of a choice is not "looking at worlds where ceteris isn't paribus". I am quite frankly at a total loss as to understanding why you should be possessed of such a belief in the first place. 3. W
0Manfred
Just that you were applying it to torture while not applying it to dust specks - a qualitative difference. You never said it, and yet in your argument only the secondary factors mattered to your decision. If you don't think you can judge how much you'd like two worlds to exist independent of there being someone in those worlds to make a "choice," then you reject utilitarianism. My guess would be when your argument wasn't in alignment with the ceteris paribus principle. Because the consequences that "actually count" were the ones that make up the original problem. "Secondary consequences" that are not logically equivalent (this does not mean causally related) to the original consequences merely mean that you're answering a different question than the one that was asked.
-4Logos01
Not even remotely. I applied it to both; it simply does not alter the anti-utility of dust specks. Receiving a dust-speck in one's eye does not alter in any measurable way the capacity for self-determination of an arbitrary individual. False. The secondary consequences when added to the primary caused torture, even in the linear-additive condition, to be the worse option. They overwhelmed the primary. ... what? At what point, exactly, did this become a valid thing for you to say to me? The hypothetical asked us which scenario was worse. That is a choice to be made. Furthermore, exactly how does the notion of required agency abrogate utilitarianism? That doesn't even remotely compute. Of the many-fold forms of utilitarianism of which I am aware, not a single one has such a standard. The very notion of a moral system which might require that it be applicable without an agent is self-contradicting. Ahh. I must conclude that either you haven't been reading a single thing I've written, or else you are delusional, or else you are writing to me somehow from a parallel world. Or you are simply lying. These are the only available options, as your statement is not in accordance with the reality I can observe in this comment thread. This is not even remotely interesting as an argument. The consequences of the original problem are those consequences the original problem's choices would result in. Your failure to consider those consequences does not, under any rational circumstances, mean those consequences did not exist. It only means that you made an incomplete analysis. And that, of course, was my original point: that the rejection of secondary consequences was a failure of analysis. They were there. A consequence of an action or history is a consequence of that action or history. When selecting from a given action or history against another, as a consequentialistic utilitarian, one must weigh the consequences of a given action or history against one another. That is e
0Manfred
I'll start winding down my answers now. This looks like it may actually hit reverse returns. And yet not long ago you said this: Also remember that the dust speck causes them to blink, measurably. - Utilitarianism means that the preference ranking of possible worlds is determined only by the properties of those worlds. The hypothetical is asking for that preference ranking. The fact that you have to choose is not one of the properties of those worlds.
0Logos01
That has no bearing on the question of self-determination. The sum total of a finitely-large-but-humanly-incomprehensible number of infinitesimal suffering events, in terms of their impact on self-determination, is arguably non-negligible, but it certainly isn't equivalent to the total, repeated ruination of said function. May or may not be a property of those worlds. The question is agnostic to how the worlds are implemented. This is not a trivial or irrelevant detail. In the absence of justification for removal of this property it must be considered. Even if we do away with that consideration, however, on the balance the argument I've been making holds true.

the other I think of as the 'logarithmically additive' answer, which inverts the answer on the grounds that forms of suffering are not equal, and cannot be added as simple 'units'.

This is a poor choice of terminology. Logarithmic functions grow slowly, but they're still unbounded: even if the badness of the dustspecks is a logarithmic function (say, the natural log) of the number of people specked, ln(3^^^3) is still so incomprehensibly large that the torture-favoring conclusion still follows. Perhaps what you mean is something more like logistic additivity…
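The arithmetic behind this objection can be sketched with small power towers. A minimal illustration (the `tetrate` helper below is defined here for the example; it is not a standard-library function): taking a logarithm of a power tower only strips off roughly one level of the tower, so the log of an already-incomprehensible tower remains incomprehensible.

```python
import math

def tetrate(base, height):
    """Compute base^^height, i.e. a right-associated power tower of the given height."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987
tower = tetrate(3, 3)

# Taking ln() removes only ~one level of the tower:
# ln(3^^3) = 27 * ln(3), i.e. roughly 3^^2 scaled by ln(3).
print(tower)           # 7625597484987
print(math.log(tower)) # ~29.66, which equals 27 * ln(3)
```

The same pattern applied to 3^^^3 (a tower far too tall to compute) is why a merely logarithmic discount cannot rescue the specks side of the argument: ln(3^^^3) is still a tower of essentially the same height.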

-7Logos01

If you truly want to become stronger, note that several people whose intellects you respect have said that you're not processing their objections correctly. You really should consider the possibility that your mind is subconsciously shrinking away from a particular line of thought, which is notoriously difficult to see as it's happening, especially when perceived social status is at stake.

From my perspective, it looks like you're either rejecting consequentialism (which is a respectable philosophical position in most circles, but you don't admit outright t…

I do not believe...only...misses the point.

Am I reading that correctly?

I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist. To arbitrarily select out one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices

There are multiple questions here, and they don't necessarily have similar answers.

Some examples:

A person who campaigns to ban torture and make it illegal…

-1Logos01
Edited. ( s/do not// ) The mere fact that immoral things occur is not a criticism of the moral framework itself but rather of our inability to adhere to it. Noting this flaw in our behaviors, I believe, does not provide grounds for argument against the framework. And I noted that it is useful for sussing out whether one holds to the linear or the logarithmic view of suffering, in that context. I believe being tortured exquisitely for fifty years is (effectively, at least) infinitely worse than having a nearly-unnoticeable dust-speck in my eye.
0lessdazed
The sentences you quoted should be interpreted as fitting within their paragraph - they introduce the possibility that making something illegal might not reduce how often it occurs. In general, for almost no moral framework should one attempt to make the law match it. Can I or can I not take you to mean that increased incidence of torture, likely or assured, is always worth the benefit of torture being illegal according to the law and/or common perception? Usually worth it? Never worth it? Perhaps the biggest consideration is the nature of the causal relationship between one's act and the decision to torture?
0Logos01
Of course not. I was not especially aware that I had extended the discussion beyond the realm of the moral into the legal, however -- so I can't say I find anything relating to the comparison between the two to be especially relevant to the discussion at hand. I am not, here, making any forays into the legal arena. I will say that I, agnostic to any other considerations, strongly prefer scenarios that result in less torture being committed as opposed to more. My argument, here, does rest on the need to consider secondary consequences when making a properly "consequentialist" argument for which choice to make, yes. I'm not entirely sure that actually answers the question you're asking, however.
0lessdazed
I see the following as an argument against legalizing or otherwise endorsing behavior, but not as an argument against an individual's performing the behavior: On the balance, how did the gatherer affect the social taboo against "work" on the Sabbath? He provided an excuse to reinforce the prohibition. Someone knowing the outcome of the story couldn't have said his action made it less taboo, as it wasn't previously established that it was a capital offense. All the more so for the 1,000,001st person to torture someone, after the million who preceded him, so long as the 1,000,001st is punished severely.
0Logos01
As a general trend, if we accept one form of action as opposed to the other, we are reducing the threshold towards its being repeated. This is akin to the Broken Window Theory: what was permitted once may be argued more permissible in the future due to said permission. Individual instances of a behavior then become arguments for or against it. For example, I believe that the US's practice of condoning "enhanced interrogation techniques" was directly contributive to the events in Abu Ghraib. What I mean to say is: as a practical argument, deciding between the two must consider the impact of the decision on the likelihood for the type of behavior to recur, amongst other things. The key is in that "or otherwise endorsing behavior" -- graffiti in a neighborhood results in increased burglaries, littering, and other forms of crime. Increasing the instances of intentional/chosen torture increases the likelihood of acts of equivalent or lesser severity being committed. There is historical inertia to how individual actions accumulate to affect the actions society deems acceptable, yes. This is an element of my argument.
0lessdazed
Part of figuring out the impact of the decision on the likelihood for the type of behavior to recur is other people's responses to it, amongst other things. The act of painting graffiti doesn't cause crime; certain responses to it do. Increasing the instances of graffiti only increases crime all else equal, but does not increase it irrespective of communal response. In this post and several of its threads you seem to be violating the principle of least convenient possible world. One of your original criticisms of torture instead of specks was that it assumed very particular consequences of actions - that torturing wouldn't ever affect future choices to torture. However, you illegitimately assume that it will always affect future choices to torture by making it more likely. It seems almost parallel. If anything, appealing to future cases makes the argument for specking stronger. At a certain number of future cases, at a certain quantity of specks, more people would be tortured for 50 years if one always chose specks than if one always chose torture!
-2Logos01
... My original criticism depended on the idea that torturing would affect future choices to torture. That continues to be my criticism. From where do you derive this idea that I assert it does not? Please explain why you find this to be an "illegitimate assumption", especially in the face of the explanations I have thus far given as to why it would in fact occur. I disagree. Strongly. Very strongly, in fact. For the same reason I've already given: by the time 3^^^3 people are tortured for fifty years as a result of dustspecks, for the equivalent number of choices to be made for torture instead -- even if we assume that the torture scenario has only a quarter the total suffering of the 3^^^3 speckings -- the sheer volume of such tortures would definitely invoke the Broken Window Theory. At a certain point human beings will -- from sheer necessity for psychological stability -- engage in the suspension of moral belief. "One person dying is a tragedy; a thousand is a statistic; a million is a number." Such immunization to the suffering of others as would result from the sheer volume of that suffering would, unless some major alterations are made to human psychology, lead to the institutionalization of such suffering. As a result, there would be -- again, all other things being equal -- far more torture, rape, and sheer absence of compassion and aid. We still have societies of this nature today. If nothing else, the expansion from a single instance to multiple makes this principle far more overtly obvious -- it allowed me to make what I personally feel is an absurd declaration (that 'terrific' torture for fifty years is equivalent to 1/4th of 3^^^3 almost-unnoticeable near-instant nuisance events in terms of direct suffering -- when, as I said before, I feel that dustspecking is infinitesimal in comparison to said torture). And that's only considering the immediate suffering, as opposed to other consequences -- such as the i…
5lessdazed
One of your original criticisms of the choice of torture instead of specks was that that choice assumed very particular consequences of actions - that torturing wouldn't ever affect future choices to torture. However, you assume that it would always affect future choices to torture by making it more likely. Both of these assumptions are too extreme for the real world, though fine for hypotheticals in which other questions - such as aggregation of utility - are the subject. Arguing that something would usually or often happen doesn't undermine the original thought experiment in which that wasn't one of the variables. In practice, I'm happy to say that for some small amount of pain and some number of people, inflicting more pain per person on fewer people is preferable, but those numbers depend on other consequences of the choice. If in practice every choice made to cause more pain to fewer people when it is not the first week of December, GMT, causes a plague somewhere, that affects the calculus. Sometimes it will be the first week of December, and in any case "some number of people" is not fixed and can be different depending on the week, etc. If inflicting x pain on Q people for t1 time directly causes the same amount of suffering as inflicting y pain on R people for t2 time, and inflicting x pain on Q people for t1 time indirectly causes more suffering than inflicting y pain on R people for t2 time, we prefer the first option. That doesn't undermine any utilitarianism or make one question the coherence of aggregating suffering. Teenage Mugger: [Dundee and Sue are approached by a black youth stepping out from the shadows, followed by some others] You got a light, buddy? Michael J. "Crocodile" Dundee: Yeah, sure kid. [reaches for lighter] Teenage Mugger: [flicks open a switchmillion] And your wallet! Sue Charlton: [guardedly] Mick, give him your wallet. Michael J. "Crocodile" Dundee: [amused] What for? Sue Charlton: [cautiously] He's got a large number. Michael J…
-2Logos01
This is the exact opposite of a true statement about my original criticisms. Ceteris paribus, yes. All other things being equal, consciously selecting torture and then carrying it out will, in fact, make future tortures more likely. Under the assertions of the empirical research associated with the Broken Window Theory, this is not merely an assumption, it's a fact. (In other words, my assumption is that the experiments on the topic allow for valid predictions in this question.) I'm sorry, consequentialism doesn't work that way. Consequences of a choice are consequences of a choice. This is a tautology. When comparing the utilitarian consequences of a given choice, all utility-affecting consequences must be considered. Furthermore, I do not understand why you would phrase this in terms of "undermining the original thought experiment". Certainly, I'm undermining Eliezer's conclusion of the experiment -- and those who agree with him. But that's hardly equivalent to undermining the experiment itself. I'm arguing you are wrong to choose "torture". Not that the experiment is invalid. Say the value of direct disutility is d(X). We here stipulate that d(torture) and d(speck) are equal. Say that the indirect disutility is i(X). We here stipulate that i(torture) > i(speck). We have also stipulated that we are using identical units for disutility. d(torture)+i(torture) > d(speck)+i(speck), yet we prefer torture? I am going to choose to believe that by "prefer" you mean to say that you prefer to say that torture is the worse outcome. I believe your skills as a rationalist exceed the possibility of you intentionally saying the opposite. I never even remotely suggested either of these things were notions worthy of consideration. Why bring them up? I'm not quite sure what you were saying here, but I know it was funny as hell. :-)
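The stipulation in this exchange reduces to simple arithmetic, which can be sketched as follows. The numbers below are invented purely for illustration; only the relations between them match what the comment stipulates (equal direct disutility, greater indirect disutility for torture):

```python
# Hypothetical figures, illustrative only. The relations mirror the
# stipulations above: d(torture) == d(speck), i(torture) > i(speck).
d_torture, d_speck = 100.0, 100.0   # direct disutility, stipulated equal
i_torture, i_speck = 25.0, 5.0      # indirect disutility, torture's is greater

total_torture = d_torture + i_torture
total_speck = d_speck + i_speck

# Under these stipulations, the totals necessarily rank torture as worse:
assert total_torture > total_speck
print(total_torture, total_speck)   # 125.0 105.0
```

The point being made: once the direct terms are stipulated equal, any positive gap in the indirect terms decides the comparison on its own, whatever the actual magnitudes are.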

This whole discussion seems to hinge on the possibly misleading choice of the word "torture" in the original thought-experiment. Words can be wrong and one way is to sneak in connotations and misleading vividness — and I think that's what's going on here.

In our world, torture implies a torturer, but dust specks do not imply a sandman. "Torture" refers chiefly to great suffering inflicted on a victim intentionally by some person, as a continuous voluntary act on the torturer's part, and usually to serve some claimed social or moral purpose…

-1Logos01
The point is that a choice between the two is made. How the choice is instantiated is entirely irrelevant, save that it be done in equivalent manners. (I.e.; if torture -> torturer, then speck -> specker && if torture !-> torturer, then speck !-> specker.) That would invalidate equivalency between the two options, however. We needn't go that far. As I originally said: if the question is meant merely to derive whether a person views suffering as operating linearly for quantification purposes, as opposed to logarithmically, then restricting the topic to immediate suffering is sensible. However, the question was not phrased in that manner: it was instead asked to derive which of the two options is preferable to a consequentialistic utilitarian. And my argument, simply put, was that a culture that permits such tortures to occur -- either at the hand of an agent or otherwise -- faces significantly greater secondary consequences than are associated with 3^^^3 dust-speckings. Not the least of which is the ancillary suffering experienced by those cognizant of the suffering who can do nothing to prevent it; and the resulting increases in suffering in general caused by the presence of at least one individual suffering to that extremity -- or, rather, caused by the inurement to human suffering engendered in a non-zero percentage of individuals aware of that suffering. And then there's the question of self-determination: the tortured individual is bereft of all ability to achieve individual utility -- all forms of utility -- whereas the 3^^^3 speckees receive only a barely noticeable disutility of displeasure and are otherwise almost entirely unaffected. (It's possible a non-zero portion of those individuals might have accidents or the like, but given how infrequently getting a dust-speck in your eye causes traffic accidents -- as in, I can find no record of such an incident -- that's negligible.) I hope this clears up any confusion here as to the nature of my argument.

You have 50 years of a horrible torture and then 50*3^^^3 years of a pleasant life with no dust speck.

OR

50*(3^^^3+1) years of a pleasant life with a dust speck every 50 years.

What would you take?

2TheOtherDave
I would almost certainly take the latter. So would everyone I've ever known. What does that demonstrate? I mean, it's also almost certainly true that after a year of horrible torture, if you offered me a choice between another 49 years of horrible torture followed by 3^^^3 years of pleasant life, or death, I would choose death. But again... so what?
0Thomas
So, you'd opt for the worse option, according to this list?
3TheOtherDave
(nods) Likely, if I were somehow placed in a situation where I could make such a choice. I mean, 50 years of horrible torture is scary as hell, and something I can just barely imagine. 3^^^3 years of pleasant life is so completely outside my experience that I can't even begin to imagine it. The odds that I would make any kind of sensible expected-utility calculation in that situation are basically zero... hell, I don't do all that well with real-life situations where I know that something mildly unpleasant now will bring me tangible benefits later. Again: what does that demonstrate?
-2Thomas
In a moment! What about some other guy, where would you put him? What about the case, where the 50 years of torture is in the middle? Or in the end?
0TheOtherDave
I expect I would choose the torture-free option in all these cases, if I were somehow faced with the choice, for basically the same reason: 50 years of torture is scary, and 3^^^3 years is basically inconceivable. I would like you to get to a point some time soon.
1ArisKatsaris
This analogy doesn't work, because if I had to choose between: * 50 years of torture now, followed by 50*3^^^3 years of life * 49 * 3^^^3 years now, followed by 100 years of torture I'd also end up choosing the latter, though there's less life, and more torture -- just because the years of torture are further away.
-4Thomas
So, you say, we are incapable of choosing a better option for ourselves. 50 years of torture plus 50*(3^^^3-1) years of a good life with no dust speck is better than the second one, with a dust speck every 50 years and no torture -- for 50 times 3^^^3 years? We just can't/won't go for a better one?
2ArisKatsaris
Am pretty sure I didn't say we are "incapable" of anything, and I have to warn you I don't appreciate a tactic of putting words in my mouth: it's a berserk button for me. So please be careful about this. But if you want me to say something using the word "incapable" in it, currently we're pretty incapable of understanding the scope of 3^^^3.

I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point.

No, that was the point alright. If you don't believe me, ask Eliezer.

moral responsibility,

If it's not happiness, I don't find it intrinsically important. Also, if you do consider moral responsibility to be intrinsically important, you end up with a self-referential moral system. I don't think that would end well.

standards of behavior that either choice makes acceptable,

A society…

2CronoDAS
That would depend a lot on how different people's utility is weighted. As Mel Brooks put it, "It's good to be the king."
-4Logos01
If that's Eliezer's position then Eliezer is wrong. I have no choice but to treat him as such until such time as I am introduced to a persuasive argument for why some consequences are "fit" for consideration whereas others are "unfit". I cannot, of my own accord, derive an intelligible system for doing so. 1) I do not view "happiness" as intrinsically important, but I'm willing to stipulate that it is for this dialogue. 2) I made no argument of 'intrinsic value'/'significance' to moral responsibility. I said instead that how the choice would affect what we deem morally responsible would have consequences in terms of the utility of the resultant society. Yes, it would. But real utility trumps pseudo-utility. Certainly. Assuming that's what was done. The entire point of my argument was that the net impact of a given choice on utility should be what is considered. Even if we allow for the 3^^^3-dustspeck scenario to be "unimaginably worse" than the single torture, the primary and secondary consequences of the 3^^^3-dustspeck scenario are by no means clearly "unimaginably worse" than the primary and secondary consequences of the torture scenario. Strike "technically". It isn't torture. Imprisonment (with the exception of extreme forms of solitary confinement) in no way compares to the systematic use of pain and extreme conditions to disrupt the underlying psychological wellbeing of another person. Furthermore, the torture-vs-dustspeck question is of a ceteris-paribus ("all other things being equal") nature. Regardless of which choice you wished to consider, if it was phrased in terms of the suffering being inflicted with cause, then the two are indistinguishable -- though I personally am unable to imagine any person being capable of deserving to be "terrifically tortured" for fifty years (or a month, or a week, for that matter. I could see a day for a child rapist. But that's neither here nor there.)

Interesting coincidence. I was just yesterday thinking of terming the torture position "Omelasian."

As an aside, the ones who walk away are also moral failures from the pro-specks standpoint. They should be fomenting revolution, not shrugging and leaving.

2CronoDAS
Absolutely. Unless you're the last person left, your choosing to opt out of the benefits of living in Omelas doesn't actually accomplish anything. The problem lies, therefore, in getting everyone to leave, which is as classic a collective action problem as any, I suppose...
-1Logos01
I agree with this. I didn't mention it because I wasn't prepared to address how that modified the argument.
-20TimS

Adding the issue of choice (i.e. moral responsibility) for the outcome seems to be fighting the hypo. Imagine Evil Omega forces you to choose between torture and dust-specks (by threatening to end humanity or something else totally unacceptable). You could respond that you are not morally competent to make the choice. This is true, but also irrelevant because it won't convince Evil Omega to let you go.

In short, the interesting question of the debate is "Which is worse: torture or dust specks?" At best, I think you've made an interesting case that "Should we switch the status quo from dust specks to torture (or vice versa)?" is a different question.

-1Logos01
That doesn't modify, I believe, the argument/response as I have placed it. I was already stipulating that the choice was binary between the two options. (That is, I was already stipulating that the choice had to be made, and could not be avoided.) The point I was making was that the mere comparison of suffering is insufficient grounds to declare which outcome is preferable; there are other consequences that, I believe, ought to be included in the consequentialistic-utilitarian "weighting algorithm". Questions such as: "What kind of society would result from this choice?" Perhaps I didn't explain my meaning sufficiently? What I meant by "moral responsibility" in this case was that in comparing the two options, the "weighting" of the moral responsibility between the two choices needs to be included. (I'm curious; did you actually read the short story? Perhaps we merely took different things away from it. That is a problem of allegory.)
0TimS
This "weighting of the moral responsibility" thing seems like double counting to me. It isn't something that would make a linear-additive change her mind. And a logarithmic-additive like me doesn't need additional reasoning not to torture. ---------------------------------------- From the Wikipedia summary, it looks a bit like the criticism of act utilitarianism embedded in the sheriff faced with a riotous mob, but that's a different question. Picking a different moral theory doesn't get you out of the torture v. dust speck issue, but it basically decides whether you stay or leave Omelas.
-1Logos01
Not at all. It's acknowledging that the consequences of a given decision extend beyond the immediate result of the decision to the historical inertia of having accepted said decision, and how that transforms any society that results from said decision being made. Example: We disallow the starving to steal bread not because we believe that the starving should starve -- nor even that the bread-sellers "deserve" the bread 'more' than the starving. We disallow it because of the impact that allowing it would have on our society. My argument rests entirely on the fact that we must acknowledge, in making the selection between the two, not just the immediate results of our decisions, but the ways in which our decisions will alter what is considered "morally responsible" thereafter. If and only if we exclusively consider the immediate results, certainly. But my entire argument rests on the notion that solely considering the immediate results is insufficient for properly considering the consequences of the decision. I am confused. Why do you believe the notion of "getting out of the issue" to be relevant to this discussion? I explicitly stated that it was stipulated as incontrovertible that the issue was unavoidable. I see. We did, in fact, take away different things from the story. I was referring to the story's depiction of the suffering of widespread individuals from the knowledge that their happiness -- or "lack of suffering" -- existed at the expense of another person. I.e.; I was attempting to note that there were secondary consequences that bore consideration. I was not making inferences about act/rule utilitarianism, or criticisms therein.
0TimS
You think the torture-choosers aren't including this already? Because I assumed they were, and it didn't change their result. I was only trying to explain why I don't think the story of Omelas is relevant.
-1Logos01
I so far haven't seen evidence that you are, either. All discussion I have seen previously on the topic discussed how the torture compared to the dust-specks directly, and at that solely in terms of which was the greater total suffering amount. I see. As I said; we have taken different things away from the story, because I did not take its reference as bearing on the topic of "getting out of the issue" at all.
-1TimS
As you said, ideas have momentum. I'm not sure if it's an expression of human cognitive bias or human moral plasticity. But it is the case that talking about whether to torture a person makes torturing someone more likely. Because evil is especially intractable when it is banal. But those are reasons not to have the conversation. At all. If that's what you believe, you shouldn't have resurrected the topic, because it involves a secret that man is "not ready to know." None of this is a reason to decide that suffering does not add linearly. And that's the only interesting question in the hypo. Because everyone already agrees that choosing either would be immoral if there were no forced choice. And everything you say about choosing to torture is just as true about choosing to dust-speck, except that it is totally impossible for us to dust-speck, given our current capacities. In short, you seem to want to avoid the question of linear suffering entirely. That's why the criticism of "getting out of the issue" could have any bite at all.
-2Logos01
Citation, please? I have seen evidence that this is true of actual instances of torture. I have also seen evidence that this is true of cases where a person has written "I will torture". I have never seen evidence to support the idea that discussing the notion of torture causes a rise in the rates of torture incidences. (I am giving the benefit of the doubt and assuming you do not mean this in the 'magical thinking' sense.) None of that is relevant to the question of linear vs. logarithmic quantification of suffering, yet they are all questions raised by the hypothetical -- a direct falsification of your claim that that contrast is the "only interesting question" in the hypothetical. How do you figure? I am unable to conceive of a way for this statement to be valid. Enlighten me. What? I seem to want nothing of the sort. I even allowed for the stipulation of linear-additive suffering as a means of demonstrating that it was uninteresting to the topic at hand; my argument was that the refusal to acknowledge the non-immediate consequences of either option was grounds for invalidating the answers thus far given. Please stop using that phrase. It's putting words into my mouth and they are just NOT applicable to me or my argument whatsoever. It is patently dishonest of you to keep doing that. Just stop.
-1TimS
Cites: see, e.g., the conventional wisdom that the show 24 made implementation of torture more politically feasible. ---------------------------------------- Look, people keep telling you that you are trying to fight the hypo. You admit the essential elements of this charge. That's fine with me. Some hypothetical questions are not worth engaging.
-5Logos01