The Least Convenient Possible World

Related to: Is That Your True Rejection?

"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

   -- Black Belt Bayesian, via Rationality Quotes 13

In yesterday's post, John Maxwell wondered how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of trolley problems:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

I don't want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:

It wouldn't be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.

On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don't have to give him very much credit: he completely missed the point and passed up a valuable opportunity to examine the nature of morality.

So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"

He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:

 

1:  Pascal's Wager. Upon being presented with Pascal's Wager, one of the first things most atheists think of is this:

Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will punish anyone who practices a religion he does not truly believe simply for personal gain. Or perhaps, as the Discordians claim, "Hell is reserved for people who believe in it, and the hottest levels of Hell are reserved for people who believe in it on the principle that they'll go there if they don't."

This is a good argument against Pascal's Wager, but it isn't the least convenient possible world. The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?

2: The God-Shaped Hole. Christians claim there is one in every atheist, keeping him from spiritual fulfillment.

Some commenters on Raising the Sanity Waterline don't deny the existence of such a hole, if it is interpreted as a desire for purpose or connection to something greater than one's self. But, some commenters say, science and rationality can fill this hole even better than God can.

What luck! Evolution has by a wild coincidence created us with a big rationality-shaped hole in our brains! Good thing we happen to be rationalists, so we can fill this hole in the best possible way! I don't know - despite my sarcasm this may even be true. But in the least convenient possible world, Omega comes along and tells you that sorry, the hole is exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy life. Do you head down to the nearest church for a baptism? Or do you admit that even if believing something makes you happier, you still don't want to believe it unless it's true?

3: Extreme Altruism. John Maxwell mentions the utilitarian argument for donating almost everything to charity.

Some commenters object that many forms of charity, especially the classic "give to starving African orphans," are counterproductive, either because they enable dictators or because they thwart the free market. This is quite true.

But in the least convenient possible world, here comes Omega again and tells you that Charity X has been proven to do exactly what it claims: help the poor without any counterproductive effects. So is your real objection the corruption, or do you just not believe that you're morally obligated to give everything you own to starving Africans?

 

You may argue that this citing of convenient facts is at worst a venial sin. If you still get to the correct answer, and you do it by a correct method, what does it matter if this method isn't really the one that's convinced you personally?

One easy answer is that it saves you from embarrassment later. If some scientist does a study and finds that people really do have a god-shaped hole that can't be filled by anything else, no one can come up to you and say "Hey, didn't you say the reason you didn't convert to religion was because rationality filled the god-shaped hole better than God did? Well, I have some bad news for you..."

Another easy answer is that your real answer teaches you something about yourself. My friend may have successfully avoided making a distasteful moral judgment, but he didn't learn anything about morality. My refusal to take the easy way out on the transplant question helped me develop the form of precedent-utilitarianism I use today.

But more than either of these, it matters because it seriously influences where you go next.

Say "I accept the argument that I need to donate almost all my money to poor African countries, but my only objection is that corrupt warlords might get it instead", and the obvious next step is to see if there's a poor African country without corrupt warlords (see: Ghana, Botswana, etc.) and donate almost all your money to them. Another acceptable answer would be to donate to another warlord-free charitable cause like the Singularity Institute.

If you just say "Nope, corrupt dictators might get it," you may go off and spend the money on a new TV. Which is fine, if a new TV is what you really want. But if you're the sort of person who would have been convinced by John Maxwell's argument, but you dismissed it by saying "Nope, corrupt dictators," then you've lost an opportunity to change your mind.

So I recommend: limit yourself to responses of the form "I completely reject the entire basis of your argument" or "I accept the basis of your argument, but it doesn't apply to the real world because of contingent fact X." If you just say "Yeah, well, contingent fact X!" and walk away, you've left yourself too much wiggle room.

In other words: always have a plan for what you would do in the least convenient possible world.

Comments


I think a better way to frame this issue would be the following method.

  1. Present your philosophical thought-experiment.
  2. Ask your subject for their response and their justification.
  3. Ask your subject what would need to change for them to change their belief.

For example, suppose I respond to your question about the solitary traveler with "You shouldn't do it because of biological concerns." Accept the answer, and then ask: what would need to change in this situation for you to accept the killing of the traveler as moral?

I remember this method giving me deeper insight into the Happiness Box experiment.

Here is how the process works:

  1. There is a happiness box. Once you enter it, you will be completely happy through living in a virtual world. You will never leave the box. Would you enter it?
  2. Initial response. Yes, I would enter the box. Since my world is only made up of my perceptions of reality, there is no difference between the happiness box and the real world. Since I will be happier in the happiness box, I would enter.
  3. Reframing question. What would need to change so that you would not enter the box?
  4. My response: Well, if I had children or people depending on me, I could not enter.

Surprising conclusion! Aha! Then you do believe that there is a difference between a happiness box and the real world, namely your acceptance of the existence of other minds and the obligations those minds place on you.

That distinction was important to me, not only intellectually but in how I approached my life.

Hope this contributes to the conversation.

David

I find a similar strategy useful when I am trying to argue my point to a stubborn friend. I ask them, "What would I have to prove in order for you to change your mind?" If they answer "nothing" you know they are probably not truth-seekers.

Namely, finding the point at which your moral decision reverses helps to identify what this particular moral position is really about. There are many factors to every decision, so it might help to try varying each of them, and finding other conditions that compensate for the variation.

For example, you wouldn't enter the happiness box if you suspected that the information about it giving true happiness is flawed, that it's some kind of lie or misunderstanding (on anyone's part); the situation of leaving your family on the outside is a special case of this. And here is a new piece of information: would you like your copy to enter the happiness box if you left behind your original self? Would you like a new child to be born within the happiness box? And so on.

I'm not sure if I'm evading the spirit of the post, but it seems to me that the answer to the opening problem is this:

If you were willing to kill this man to save these ten others, then you should long ago have simply had all ten patients agree to a 1/10 game of Russian Roulette, with the proviso that the nine winners get the organs of the one loser.
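To spell out the ex-ante arithmetic behind this proposal (a worked sketch of my own; the comment itself specifies only the 1/10 odds): without the lottery, every patient dies for lack of an organ, while under the lottery each patient runs exactly a one-in-ten risk of being the loser:

    \[
    P(\text{survive} \mid \text{no lottery}) = 0,
    \qquad
    P(\text{survive} \mid \text{lottery}) = 1 - \tfrac{1}{10} = \tfrac{9}{10}.
    \]

So each of the ten patients, deciding in advance, should prefer the lottery, which is what makes it a tempting alternative to sacrificing the unconsenting traveller.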

While emphasizing that I don't want this post to turn into a discussion of trolley problems, I endorse that solution.

In the least convenient possible world, only the random traveler has a blood type compatible with all ten patients.

There are real life examples where reality has turned out to be the "least convenient of possible worlds". I have spent many hours arguing with people who insist that there are no significant gender differences (beyond the obvious), and are convinced that to assert otherwise is morally reprehensible.

They have spent so long arguing that such differences do not exist, and that this is the reason sexism is wrong, that their morality just can't cope with a world in which this turns out not to be true. There are many similar politically charged issues - Pinker discusses quite a few in The Blank Slate - where people aren't willing to listen to arguments about factual issues because they believe they have moral consequences.

The problem, of course - and I realise this is the main point of this post - is that if your morality is contingent on empirical issues where you might turn out to be wrong, you have to accept the consequences. If you believe that sexism is wrong because there are no heritable gender differences, you have to be willing to accept that if these differences do turn out to exist then you'll say sexism is ok.

This is probably a test you should apply to all of your moral beliefs - if it just so happens that I'm wrong about the factual issue on which I'm basing my belief, will I really be willing to change my mind?

One way to train this: in my number theory class, there was a type of problem called a PODASIP. This stood for Prove Or Disprove And Salvage If Possible. The instructor would give us a theorem to prove, without telling us if it was true or false. If it was true, we were to prove it. If it was false, then we had to disprove it and then come up with the "most general" theorem similar to it (e.g. prove it for Zp after coming up with a counterexample in Zm).

This trained us to be on the lookout for problems with the theorem, but also to then seek out the "least convenient possible world" in which it was true.
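To make the PODASIP pattern concrete, here is a minimal sketch (an illustration of my own; the comment names no specific theorem). Take the claim "every nonzero element of Z_n has a multiplicative inverse," disprove it with a counterexample in a composite modulus such as Z_6, then salvage it for Z_p with p prime:

    # Hypothetical PODASIP run (the claim below is my example, not the comment's).
    # Claim: every nonzero element of Z_n has a multiplicative inverse mod n.

    def has_inverse(a, n):
        """True if some b in Z_n satisfies a*b = 1 (mod n)."""
        return any(a * b % n == 1 for b in range(n))

    # Disprove: counterexample in Z_6, where 2, 3, and 4 have no inverses.
    print([a for a in range(1, 6) if not has_inverse(a, 6)])  # -> [2, 3, 4]

    # Salvage: the claim does hold in Z_p for prime p.
    for p in (2, 3, 5, 7, 11, 13):
        assert all(has_inverse(a, p) for a in range(1, p))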

Let's try something different.

  • Puts on the reviewer's hat.

Yvain's post presented a new method for dealing with the stopsign problem in reasoning about questions of morality. The stopsign problem consists in following an invalid excuse to avoid thinking about the issue at hand, instead of doing something constructive about resolving the issue.

The method presented by Yvain consists in putting in place the universal countermeasure against the stopsign excuses: whenever a stopsign comes up, you move the discussed moral issue to a different, hypothetical setting, where the stopsign no longer applies. The only valid excuse in this setting is that you shouldn't do something, which also resolves the moral question.

However, moral questions should be concerned with reality, not with fantasy. Whenever a hypothetical setting is brought into the discussion of morality, it should be understood as a theoretical device for reasoning about the underlying moral judgment applicable to the real world. There is a danger in fallaciously generalizing the moral conclusion from fictional evidence, both because there might be factors in the fictional setting that change your decision and which you haven't properly accounted for in the conclusion, and because a decision extracted from the fictional setting is drawn in far mode, running the risk of being too removed from the real world to properly reflect people's preferences.

I do agree. I think in many ways reality already is "the least convenient possible world" and the clearsightedness of thought experiments doesn't match the muddiness of the world.

I voted up your post, Yvain, as you've presented some really good ideas here. Although it may seem like I'm totally missing your point with my responses to your 3 scenarios, I assure you that I am well aware that my responses are of the "dodging the question" type which you are advocating against. I simply cannot resist exploring these 3 scenarios on their own.

Pascal's Wager

In all 3 scenarios, I would ask Omega further questions. But these being "least convenient world" scenarios, I suspect it'd be all "Sorry, can't answer that" and then fly away. And I'd call it a big jerk.

For the Pascal's Wager scenario specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.

So then I'd be stuck trying to decide whether God doesn't exist, or logic is incorrect (i.e. reality can be logically self-inconsistent). I'm tempted to adopt Catholicism (for the same reason I would one-box on Newcomb: I want the rewards), but I'm not sure how my brain could handle a non-logical reality. So I really don't know what would happen here.

But let's say Omega additionally tells me that Catholicism is actually self-consistent, and I just misunderstood something about it, before flying away. In that case, I guess I'd start to study Catholicism. If my revised view of Catholicism has me believe that it requires some rather cruel stuff (stoning people for minor offenses, etc.) then I'd have to weigh that against my desire to not suffer eternal torture.

I mean, eternal torture is pretty frickin' bad. I think in the end, I'd convert. And I'd also try to convert as many other people as possible, because I suspect I'd need to be cruel to fewer people if fewer people went against Christianity.

The God-Shaped Hole

To clarify your scenario, I'm guessing Omega explicitly tells me that I will be happier if I believe something untrue (i.e. God). I would probably reject God in this case, as Omega is implicitly confirming that God does not exist, and I do care about truth more than happiness. I've already experienced this in other ways, so this is a much easier scenario for me to imagine.

Extreme Altruism

I don't think I can overcome this challenge. No matter how much I think about it, I find myself putting up semantic stop signs. In my "least convenient world", Omega tells me that Africa is so poverty stricken, and that my contribution would be so helpful, that I would be improving the lives of billions of people, in exchange for giving up all my wealth. While I might not donate all my money to save 10, I think I value billions of lives more than my own life. Do I value it more than my own happiness? This is an extremely painful question for me to think about, so I stop thinking about it.

"Okay", I say to Omega, "what if I only donate X percent of my money, and keep the rest for myself?" In one possible "least convenient world", Omega tells me that the charity is run by some nutcase whom, for whatever reason, will only accept an all-or-nothing deal. Well, when I phrase it like that, I feel like not donating anything, and blaming it on the nutcase. So suppose instead Omega tells me "There's some sort of principles of economy of scale which is too complicated for me to explain to you which basically means that your contribution will be wasted unless you contribute at least Y amount of dollars, which coincidentally just happens to be your total net worth." Again, I'm torn and find it difficult to come to a conclusion.

Alternatively, I say to Omega "I'll just donate X percent of my money." Omega tells me "That's good, but it's not optimal." And I reply "Okay, but I don't have to do the optimum." But then Omega convinces me that actually, yes, I really should be doing the optimum somehow. Perhaps something along the lines of how my current "ignore Africa altogether" behaviour is better than the behaviour of going to Africa and killing, torturing, and raping everyone there. That doesn't mean that the "ignore Africa" strategy is moral.

One difficulty with the least convenient possible world arises when the least convenience requires a significant change in the makeup of the human brain. For example, I don't trust myself to make a decision about killing a traveler with sufficient moral abstraction from the day-to-day concerns of being a human. I don't trust what I would become if I did kill a human. Or, if that's insufficient, substitute a general lack of trust in my decision-making for the moment. (Another example would be the ability to trust Omega in his responses.)

Because once that's a significant issue for the subject, the least convenient possible world you're asking me to imagine doesn't include me -- it includes some variant of me whose reactions I can predict, but not really access. Porting them back to me is also nontrivial.

It is an interesting thought experiment, though.

There's another benefit: you remove a motivation to lie to yourself. If you think that a contingent fact will get you out of a hard choice, you might believe it. But you probably won't if it doesn't get you out of the hard choice anyway.

I like the phrase "precedent utilitarianism". It sounds to utilitarians like you're joining their camp, while actually pointing out that you're taking a long-term view of utility, which they usually refuse to do. The important ingredient is paying attention to incentives, which is really the rational response to most questions about morality. Many choices which seem "fairer", "more just", or whose alternatives provoke a disgust response don't take the long-term view into account. If we go around sacrificing every lonely stranger to the highest benefit of others nearby, no one is safe. It's a tragedy that all those people are sick and will die if they don't get help, but we don't make the world less tragic by sacrificing one to save ten every chance we get.

Actually, we would all be safer, because we'd be in less danger from organ failure. We are each more likely to be one of the "others nearby" than the "lonely stranger".

If we go around sacrificing every lonely stranger to the highest benefit of others nearby, no one is safe.

That would make a great movie!

Lonely Stranger

Jason Statham wakes up and realises all his family and friends have been killed by a tornado while he survives through luck and general masculine superiority. Beset on all sides by scalpel- and tranquiliser-wielding doctors, he must constantly slaughter all the nearby sick people just to keep himself alive. Meanwhile, a sexy young biologist has been captured by a militant sect of religious Fundamentalists. Will Statham be able to break the imprisoned costar out in time to reveal her secret human organ cloning technology, or will civilisation as we know it be destroyed by utilitarianism gone wrong?

So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"

Obviously, you wait for one of the sick patients to die, and use that person's organs to save the others, letting the healthy traveler go on his way. ;)

But that isn't the least convenient possible world - the least convenient one is actually the one in which the traveler is compatible with all the sick people, but the sick people are not compatible with each other.

Actually, you don't even need to add that additional complexity to make the world sufficiently inconvenient.

If the rest of the patients are sufficiently sick, their organs may not really be suitable for use as transplants, right?

Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?

I don't think I would be able to bring myself to worship honestly a God who bestowed upon us the ability to reason and then rewarded us for not using it.

Would you want to, if you could? If so, given the stakes, you should try damn hard to make yourself able to.

I don't think I would be able to bring myself to worship honestly a God who bestowed upon us the ability to reason and then rewarded us for not using it.

I don't follow your reasoning. Because God made us able to do a particular thing, we shouldn't be rewarded for choosing not to do that thing? A quick word substitution illustrates my issue:

"I don't think I would be able to bring myself to worship honestly a God who bestowed upon us the ability to murder and then rewarded us for not using it."

The problem with the 'god shaped hole' situation (and questions of happiness in general) is that if something doesn't make you happy NOW, it becomes very difficult to believe that it will make you happy LATER.

For example, say some Soma-drug was invented that, once taken, would make you blissfully happy for the rest of your life. Would you take it? Our immediate reaction is to say 'no', probably because we don't like the idea of 'fake', chemically-induced happiness. In other words, because the idea doesn't make us happy now, we don't really believe it will make us happy later.

Valuing truth seems like just another way of saying truth makes you happy. Because filling the god shaped hole means not valuing truth, the idea doesn't make you happy right now, so you don't really believe it will make you happy later.

For example, say some Soma-drug was invented that, once taken, would make you blissfully happy for the rest of your life. Would you take it?

I try my best to value other peoples' happiness equal to my own. If taking a happiness-inducing pill was likely to make me a kinder, more generous, more productive person, I would choose to take it (with some misgivings related to it seeming like 'cheating' and 'not good for character-building') but if it were to make me less kind/generous/productive, I would have much stronger misgivings.

I would act differently in the least convenient world than I do in the world that I do live in.

Very good point, and it crystallizes some of my thinking on the discussion of the tyrant/charity thing.

As far as the specific problems you posed...

For your souped-up Pascal's Wager, I admit that one gives me pause. Taking into account the fact that Omega singled out one religion out of the space of all possible religions, etc. etc... Well, the answer isn't obvious to me right now. This flavor would seem not to admit any of the usual basic refutations of the wager. I think under these circumstances, assuming Omega wasn't open to answering any further questions and wasn't giving any other info, I'd probably at least spend rather more time investigating Catholicism, studying the religion a bit more and really thinking things through.

For question 2 (the really "god shaped" hole) though, personally, while I value happiness, it's not the only thing I value. I'll take truth, thank you very much. (In the spirit of this, I'm assuming there's no psychological trick that would let me fake-believe enough to fill the hole or other ways of getting around the problem.) But yeah, I think I'd choose truth there.

Question 3? Assuming the most inconvenient world (i.e., there's no way that I could potentially do more good by keeping the money, etc. etc., no way out of the "give it away to do maximal good"), well, I'm not sure what I'd do, but I'm pretty sure I wouldn't be able to in any way justify not giving it away to Charity X. Though, if I actually had a known Omega give me that information, then I think that might just be enough to give me the mental/emotional/willpower strength to do it. That is, assuming that I KNEW that that way was really the path if I wanted to optimize the good I do in the world, not just in an abstract theoretical way, but was actually told so by a known Omega, well, that might be enough to get me to actually do it.

The souped-up Pascal's Wager seems like the thousand-door version of Monty Hall.
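For readers who don't know the analogy: in the thousand-door Monty Hall game, you pick one of 1,000 doors, the host (who knows where the car is) opens 998 goat doors among the rest, and switching to the last unopened door wins with probability 999/1000. A minimal simulation sketch of my own, just to make the intuition pump concrete:

    import random

    def switching_wins(doors=1000):
        """One round of thousand-door Monty Hall where the player always switches."""
        car = random.randrange(doors)
        first_pick = random.randrange(doors)
        # The host opens every other door hiding a goat, so switching
        # wins exactly when the first pick was wrong.
        return first_pick != car

    trials = 100_000
    wins = sum(switching_wins() for _ in range(trials))
    print(f"Switching wins {wins / trials:.3f} of the time")  # ~0.999

The analogy, presumably: Omega collapses the space of all possible religions down to a single live alternative, much as the host's reveals concentrate the prior probability onto one remaining door.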