The Hidden Complexity of Wishes

Fragile Purposes

Followup to: The Tragedy of Group Selectionism, Fake Optimization Criteria, Terminal Values and Instrumental Values, Artificial Addition, Leaky Generalizations

"I wish to live in the locations of my choice, in a physically healthy, uninjured, and apparently normal version of my current body containing my current mental state, a body which will heal from all injuries at a rate three sigmas faster than the average given the medical technology available to me, and which will be protected from any diseases, injuries or illnesses causing disability, pain, or degraded functionality of any sense, organ, or bodily function for more than ten days consecutively or fifteen days in any year..."
            -- The Open-Source Wish Project, Wish For Immortality 1.1

There are three kinds of genies:  Genies to whom you can safely say "I wish for you to do what I should wish for"; genies for which no wish is safe; and genies that aren't very powerful or intelligent.

Suppose your aged mother is trapped in a burning building, and it so happens that you're in a wheelchair; you can't rush in yourself.  You could cry, "Get my mother out of that building!" but there would be no one to hear.

Luckily you have, in your pocket, an Outcome Pump.  This handy device squeezes the flow of time, pouring probability into some outcomes, draining it from others.

The Outcome Pump is not sentient.  It contains a tiny time machine, which resets time unless a specified outcome occurs.  For example, if you hooked up the Outcome Pump's sensors to a coin, and specified that the time machine should keep resetting until it sees the coin come up heads, and then you actually flipped the coin, you would see the coin come up heads.  (The physicists say that any future in which a "reset" occurs is inconsistent, and therefore never happens in the first place - so you aren't actually killing any versions of yourself.)

Whatever proposition you can manage to input into the Outcome Pump, somehow happens, though not in a way that violates the laws of physics.  If you try to input a proposition that's too unlikely, the time machine will suffer a spontaneous mechanical failure before that outcome ever occurs.

You can also redirect probability flow in more quantitative ways using the "future function" to scale the temporal reset probability for different outcomes.  If the temporal reset probability is 99% when the coin comes up heads, and 1% when the coin comes up tails, the odds will go from 1:1 to 99:1 in favor of tails.  If you had a mysterious machine that spit out money, and you wanted to maximize the amount of money spit out, you would use reset probabilities that diminished as the amount of money increased.  For example, spitting out $10 might have a 99.999999% reset probability, and spitting out $100 might have a 99.99999% reset probability.  This way you can get an outcome that tends to be as high as possible in the future function, even when you don't know the best attainable maximum.
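The reset mechanic amounts to rejection sampling over possible histories. Here is a minimal sketch of that idea; the function names are invented for illustration, and the probabilities are just the ones from the coin example above:

```python
import random

def outcome_pump(sample_outcome, reset_prob, seed=0):
    """Keep re-running history until no temporal reset fires,
    and return the surviving outcome."""
    rng = random.Random(seed)
    while True:
        outcome = sample_outcome(rng)
        if rng.random() >= reset_prob(outcome):
            return outcome  # this history is consistent, so it "happens"

# A fair coin with a 99% reset probability on heads and 1% on tails:
flip = lambda rng: rng.choice(["heads", "tails"])
reset = lambda o: 0.99 if o == "heads" else 0.01

runs = [outcome_pump(flip, reset, seed=i) for i in range(10_000)]
print(runs.count("tails") / len(runs))  # close to 0.99, i.e. roughly 99:1 odds for tails
```

Scaling the reset probability down as the payoff rises, as in the money example, turns the same loop into an approximate maximizer of the future function.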

So you desperately yank the Outcome Pump from your pocket - your mother is still trapped in the burning building, remember? - and try to describe your goal: get your mother out of the building!

The user interface doesn't take English inputs.  The Outcome Pump isn't sentient, remember?  But it does have 3D scanners for the near vicinity, and built-in utilities for pattern matching.  So you hold up a photo of your mother's head and shoulders; match on the photo; use object contiguity to select your mother's whole body (not just her head and shoulders); and define the future function using your mother's distance from the building's center.  The further she gets from the building's center, the less the time machine's reset probability.

You cry "Get my mother out of the building!", for luck, and press Enter.

For a moment it seems like nothing happens.  You look around, waiting for the fire truck to pull up, and rescuers to arrive - or even just a strong, fast runner to haul your mother out of the building -

BOOM!  With a thundering roar, the gas main under the building explodes.  As the structure comes apart, in what seems like slow motion, you glimpse your mother's shattered body being hurled high into the air, traveling fast, rapidly increasing its distance from the former center of the building.

On the side of the Outcome Pump is an Emergency Regret Button.  All future functions are automatically defined with a huge negative value for the Regret Button being pressed - a temporal reset probability of nearly 1 - so that the Outcome Pump is extremely unlikely to do anything which upsets the user enough to make them press the Regret Button.  You can't ever remember pressing it.  But you've barely started to reach for the Regret Button (and what good will it do now?) when a flaming wooden beam drops out of the sky and smashes you flat.

Which wasn't really what you wanted, but scores very high in the defined future function...

The Outcome Pump is a genie of the second class.  No wish is safe.

If someone asked you to get their poor aged mother out of a burning building, you might help, or you might pretend not to hear.  But it wouldn't even occur to you to explode the building.  "Get my mother out of the building" sounds like a much safer wish than it really is, because you don't even consider the plans that you assign extreme negative values.

Consider again the Tragedy of Group Selectionism: Some early biologists asserted that group selection for low subpopulation sizes would produce individual restraint in breeding; and yet actually enforcing group selection in the laboratory produced cannibalism, especially of immature females.  It's obvious in hindsight that, given strong selection for small subpopulation sizes, cannibals will outreproduce individuals who voluntarily forego reproductive opportunities.  But eating little girls is such an un-aesthetic solution that Wynne-Edwards, Allee, Brereton, and the other group-selectionists simply didn't think of it.  They only saw the solutions they would have used themselves.

Suppose you try to patch the future function by specifying that the Outcome Pump should not explode the building: outcomes in which the building materials are distributed over too much volume will have ~1 temporal reset probabilities.

So your mother falls out of a second-story window and breaks her neck.  The Outcome Pump took a different path through time that still ended up with your mother outside the building, and it still wasn't what you wanted, and it still wasn't a solution that would occur to a human rescuer.
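The patch-and-fail cycle can be sketched as constrained search over candidate histories. Everything below is invented for illustration; the distances and outcomes are hypothetical, not anything specified in the post:

```python
# Hypothetical candidate histories:
# (description, distance_from_center, building_exploded, mother_survives)
histories = [
    ("fireman carries her out the door",     12, False, True),
    ("she falls from a second-story window", 25, False, False),
    ("gas main explodes",                   200, True,  False),
    ("she stays inside",                      0, False, True),
]

def pump(histories, patches):
    """Pick the history that maximizes the future function (distance),
    after patches reset away any history they forbid."""
    allowed = [h for h in histories if not any(patch(h) for patch in patches)]
    return max(allowed, key=lambda h: h[1])

no_explosion = lambda h: h[2]  # patch: reset if the building exploded

print(pump(histories, [])[0])              # "gas main explodes"
print(pump(histories, [no_explosion])[0])  # "she falls from a second-story window"
```

Each patch removes only one bad path; the maximizer simply moves to the next-highest-distance path, which is still not the one a human rescuer would choose.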

If only the Open-Source Wish Project had developed a Wish To Get Your Mother Out Of A Burning Building:

"I wish to move my mother (defined as the woman who shares half my genes and gave birth to me) to outside the boundaries of the building currently closest to me which is on fire; but not by exploding the building; nor by causing the walls to crumble so that the building no longer has boundaries; nor by waiting until after the building finishes burning down for a rescue worker to take out the body..."

All these special cases, the seemingly unlimited number of required patches, should remind you of the parable of Artificial Addition - programming an Arithmetic Expert System by explicitly adding ever more assertions like "fifteen plus fifteen equals thirty, but fifteen plus sixteen equals thirty-one instead".

How do you exclude the outcome where the building explodes and flings your mother into the sky?  You look ahead, and you foresee that your mother would end up dead, and you don't want that consequence, so you try to forbid the event leading up to it.

Your brain isn't hardwired with a specific, prerecorded statement that "Blowing up a burning building containing my mother is a bad idea."  And yet you're trying to prerecord that exact specific statement in the Outcome Pump's future function.  So the wish is exploding, turning into a giant lookup table that records your judgment of every possible path through time.

You failed to ask for what you really wanted.  You wanted your mother to go on living, but you wished for her to become more distant from the center of the building.

Except that's not all you wanted.  If your mother was rescued from the building but was horribly burned, that outcome would rank lower in your preference ordering than an outcome where she was rescued safe and sound.  So you not only value your mother's life, but also her health.

And you value not just her bodily health, but her state of mind. Being rescued in a fashion that traumatizes her - for example, a giant purple monster roaring up out of nowhere and seizing her - is inferior to a fireman showing up and escorting her out through a non-burning route.  (Yes, we're supposed to stick with physics, but maybe a powerful enough Outcome Pump has aliens coincidentally showing up in the neighborhood at exactly that moment.)  You would certainly prefer her being rescued by the monster to her being roasted alive, however.

How about a wormhole spontaneously opening and swallowing her to a desert island?  Better than her being dead; but worse than her being alive, well, healthy, untraumatized, and in continual contact with you and the other members of her social network.

Would it be okay to save your mother's life at the cost of the family dog's life, if it ran to alert a fireman but then got run over by a car?  Clearly yes, but it would be better ceteris paribus to avoid killing the dog.  You wouldn't want to swap a human life for hers, but what about the life of a convicted murderer?  Does it matter if the murderer dies trying to save her, from the goodness of his heart?  How about two murderers?  If the cost of your mother's life was the destruction of every extant copy, including the memories, of Bach's Little Fugue in G Minor, would that be worth it?  How about if she had a terminal illness and would die anyway in eighteen months?

If your mother's foot is crushed by a burning beam, is it worthwhile to extract the rest of her?  What if her head is crushed, leaving her body?  What if her body is crushed, leaving only her head?  What if there's a cryonics team waiting outside, ready to suspend the head?  Is a frozen head a person?  Is Terri Schiavo a person?  How much is a chimpanzee worth?

Your brain is not infinitely complicated; there is only a finite Kolmogorov complexity / message length which suffices to describe all the judgments you would make.  But just because this complexity is finite does not make it small.  We value many things, and no, they are not reducible to valuing happiness or valuing reproductive fitness.

There is no safe wish smaller than an entire human morality.  There are too many possible paths through Time.  You can't visualize all the roads that lead to the destination you give the genie.  "Maximizing the distance between your mother and the center of the building" can be done even more effectively by detonating a nuclear weapon.  Or, at higher levels of genie power, flinging her body out of the Solar System.  Or, at higher levels of genie intelligence, doing something that neither you nor I would think of, just like a chimpanzee wouldn't think of detonating a nuclear weapon.  You can't visualize all the paths through time, any more than you can program a chess-playing machine by hardcoding a move for every possible board position.

And real life is far more complicated than chess.  You cannot predict, in advance, which of your values will be needed to judge the path through time that the genie takes.  Especially if you wish for something longer-term or wider-range than rescuing your mother from a burning building.

I fear the Open-Source Wish Project is futile, except as an illustration of how not to think about genie problems.  The only safe genie is a genie that shares all your judgment criteria, and at that point, you can just say "I wish for you to do what I should wish for."  Which simply runs the genie's should function.

Indeed, it shouldn't be necessary to say anything.  To be a safe fulfiller of a wish, a genie must share the same values that led you to make the wish. Otherwise the genie may not choose a path through time which leads to the destination you had in mind, or it may fail to exclude horrible side effects that would lead you to not even consider a plan in the first place.  Wishes are leaky generalizations, derived from the huge but finite structure that is your entire morality; only by including this entire structure can you plug all the leaks.

With a safe genie, wishing is superfluous.  Just run the genie.

Comments


Sounds like we need to formalize human morality first, otherwise you aren't guaranteed consistency. Of course formalizing human morality seems like a hopeless project. Maybe we can ask an AI for help!

"I wish for a genie that shares all my judgment criteria" is probably the only safe way.

This might be done by picking an arbitrary genie, and then modifying your judgement criteria to match that genie's.

This generalises. Since you don't know everything, anything you do might wind up being counterproductive.

Like, I once knew a group of young merchants who wanted their shopping district revitalised. They worked at it and got their share of federal money that was assigned to their city, and they got the lighting improved, and the landscaping, and a beautiful fountain, and so on. It took several years and most of the improvements came in the third year. Then their landlords all raised the rents and they had to move out.

That one was predictable in hindsight, but I didn't predict it. There could always be things like that.

When anything you do could backfire, are you better off to stay in bed? No, the advantages of that are obvious but it's also obvious you can't make a living that way.

You have to make your choices and take your chances. If I had an outcome pump and my mother was trapped in a burning building and I had no other way to get her out, I hope I'd use it. The result might be worse than letting her burn to death but at least there would be a chance for a good outcome. If I can just get it to remove some of the bad outcomes the result may be an improvement.

On further reflection, the wish as expressed by Nick Tarleton above sounds dangerous, because all human morality may either be inconsistent in some sense, or 'naive' (failing to account for important aspects of reality we aren't aware of yet). Human morality changes as our technology and understanding changes, sometimes significantly. There is no reason to believe this trend will stop. I am afraid (genuine fear, not figure of speech) that the quest to properly formalize and generalize human morality for use by a 'friendly AI' is akin to properly formalizing and generalizing Ptolemean astronomy.

Is there a safe way to wish for an unsafe genie to behave like a safe genie? That seems like a wish TOSWP should work on.

So positing an effective equivalent of the mythological figure "Genie" in technological form ignores the optimization-for-use that would take place at each stage of developing an Outcome-Pump. The technology-falling-from-heaven which is the Outcome Pump demands that we reverse engineer the optimization of parameters which would have necessarily taken place if it had in fact developed as human technologies do.

Unfortunately, Eric, when you build a powerful enough Outcome Pump, it can wish more powerful Outcome Pumps into existence, which can in turn wish even more powerful Outcome Pumps into existence. So once you cross a certain threshold, you get an explosion of optimization power, which mere trial and error is not sufficient to control, because of the enormous change of context: the genie has gone from being less powerful than you to being more powerful than you, and what appeared to work in the former context won't work in the latter.

Which is precisely what happened to natural selection when it developed humans.

There was a story with an "outcome pump" like this; I don't remember the name. Essentially, a chemical had to get soaked with water due to some time-travel-related handwave. You could do minor things like getting your mom out of the building by pouring water on the chemical if you were satisfied with the outcome, with some risk that a hurricane would form instead and soak the chemical. It would produce the least improbable outcome (in the sense that all probabilities became as if it were given that the chemical got soaked, so the least improbable path was the likeliest to have occurred), so its impact was generally quite limited; to do real damage you had to lock the chemical up in a very strong safe. With a minor plot hole that the least improbable outcome was for the chemical not to get locked up in the safe in the first place.

Isaac Asimov's thiotimoline stories. The last turned it into a space drive.

Damn, it took me a long time to make the connection between the Outcome Pump and quantum suicide reality editing. And the argument that proves the unsafety of the Outcome Pump is perfectly isomorphic to the argument why quantum immortality is scary.

Rather than designing a genie to exactly match your moral criteria, the simple solution would be to cheat and use yourself as the genie. What the Outcome Pump should solve for is your own future satisfaction. To that end, you would omit all functionality other than the "regret button", and make the latter default-on, with activation by anything other than a satisfied-you vanishingly improbable. Say, with a lengthy password.

Of course, you could still end up in a universe where your brain has been spontaneously re-wired to hate your mother. However, I think that such an event is far less likely than a proper rescue.

Such a genie might already exist.
You mean GOD? From the good book? It's more plausible than some stories I could mention.

GOD, I meta-wish for an ((...Emergence-y Re-get) Emergence-y Re-get) Emergency Regret Button.

Eric, I think he was merely attempting to point out the futility of wishes. Or rather, the futility of asking for what you want from something that does not share your judgments. The Outcome Pump is merely, like the Genie, a mechanism by which to explain his intended meaning. The problem of the Outcome Pump is twofold: 1. Any theory that states that time is anything other than a constant now, with motion and probability, may work mathematically, but has yet to actually alter the thing it describes in a measurable way; and 2. The production of something such as a time machine to begin with would be so destructive as to ultimately prevent the creation of the Outcome Pump.

In fact, as rational as we would like to be, if we are so rational that we miss the forest for the trees, or in this case, the moral for the myth, we sort of undo the reason we have rationality. It's like disassembling a clock to find the time.

Anyhow, the problem of wishes is the trick of prayer: To get something that God will grant, we cannot create a God that wants what we want; it is our inherent experience in life that if God really is all-powerful and above all that he must be singular, and since men's wishes oft conflict he can not by any stretch of the imagination mysteriously coincide with your own capricious desires. Thus you must make the 'wish no wish' which is to change your judgment to that of God's, and then in that case you can not possibly wish something that he will NOT grant.

The mystery of it is that it is still not the same as the 'safe genie'; but at the same time not altogether different. But in the sense that some old Christian Mystics have said the best prayer is the one in which you make no petitions at all (and in fact say nothing!) probably attests to the fact that it is indeed the 'safe genie'.

"You cannot predict, in advance, which of your values will be needed to judge the path through time that the genie takes.... The only safe genie is a genie that shares all your judgment criteria."

Is a genie that does share all my judgment criteria necessarily safe?

Maybe my question is ill-formed; I am not sure what "safe" could mean besides "a predictable maximizer of my judgment criteria". But I am concerned that human judgment under ordinary circumstances increases some sort of Beauty/Value/Coolness which would not be increased if that same human judgment was used to search over a less restricted set of possibilities.

The world is full of cases where selecting for A automatically increases B when you are searching over a restricted set of possibilities but does not increase B when those restrictions are lifted. Overfitting is a classic example. In cases of overfitting, if we search only over a restricted set of few-parameter models, models that do well on the training set will automatically do well on the generalization set, but if we allow more parameters the correlation disappears.
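The point can be sketched with toy data (all numbers invented): selecting on a proxy criterion A drags along a correlated property B over a restricted set of options, but not once adversarial options enter the search space:

```python
import random

rng = random.Random(42)

# Each outcome is a pair (proxy score A, true value B).
# In the restricted set, A and B are correlated by construction.
restricted = [(a, a + rng.uniform(-1, 1))
              for a in (rng.uniform(0, 10) for _ in range(100))]

# The unrestricted set adds an adversarial outcome: huge A, terrible B
# (the manufactured "limited time offer" sign).
unrestricted = restricted + [(100.0, -50.0)]

def best(outcomes):
    return max(outcomes, key=lambda o: o[0])  # select purely on A

print(best(restricted)[1])    # high: selecting on A also bought us B
print(best(unrestricted)[1])  # -50.0: the correlation has vanished
```

The selection rule never changed; only the search space did.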

Modern marketing / product development can search over a larger set of alternatives than we used to have access to. In many cases human judgments correlate with less when used on modern manufactured goods than when used on the smaller set of goods that was formerly available. Judgments of tastiness used to correlate with health but now do not. Judgments of "this is a limited resource which I should grab quickly" used to indicate resources which we really should grab quickly but now do not (because of manufactured "limited time offer only" signs and the like).

Genies or AGIs would search over an even larger space of possibilities than contemporary marketing searches over. In this larger space, many of the traditional correlates of human judgment will disappear. That is: in today's restricted search spaces, outcomes which are ranked highly according to human judgment criteria tend also to have various other properties P1, P2, ... Pk. In an AGI's search space, outcomes which are ranked highly according to human judgment criteria will not have properties P1... Pk.

I am worried that properties P1...Pk are somehow valuable. That is, I am worried that in this world human judgments pick out outcomes that are somehow valuable and that human judgments' ability to do this resides, not in our judgment criteria alone (which would be uploaded into our imagined genie) but in the conjunction of our judgment criteria with the restricted set of possibilities that has so far been available to us.

"It seems contradictory to previous experience that humans should develop a technology with "black box" functionality, i.e. whose effects could not be foreseen and accurately controlled by the end-user."

Eric, have you ever been a computer programmer? That technology becomes more and more like a black box is not only in line with previous experience, but I dare say is a trend as technological complexity increases.

I don't know that they are, but it's the conservative assumption, in that it carries less risk of the world being destroyed if you're wrong. Also, see the AI-box experiments.

"In what sense can [properties P1...Pk] be valuable, if they are not valued by human judgment criteria (even if not consciously most of the time)?"

I don't know. It might be that the only sense in which something can be valuable is to look valuable according to human judgment criteria (when thoroughly implemented, and well informed, and all that). If so, my concern is ill-formed or irrelevant.

On the other hand, it seems possible that human judgments of value are an imperfect approximation of what is valuable in some other (external?) sense. Imagine for example if we met multiple alien races and all of them said "I see what you're getting at with this 'value/goodness/beauty/truth' thing, but you are misunderstanding it a bit; in a few thousand years, you will modify your root judgment criteria in such-and-such a way." In that case I would wonder whether my current judgment criteria were not best understood as an approximation of this other set of criteria and whether it was not value according to this other set of criteria that I should be aiming for.

If human judgment criteria are an approximation of some other kind of value, they would probably cease to approximate that other kind of value when used to search over the large space of genie-accessible possibilities.

By way of analogy, scientists' criteria for judging scientific truth/relevance/etc. seem to be changing usefully over time, and it may be that scientists' criteria at different times can be viewed as successive approximations of some other (external?) truth-criteria. Galilean physicists had one way of determining what to believe, Newtonians another, and contemporary physicists yet another. In the restricted set of situations considered by Galilean physicists, Galilean methods yield approximately the same predictions as the methods of contemporary physicists. In the larger space of genie-accessible situations, they do not.

With a safe genie, wishing is superfluous. Just run the genie.

But while most genies are terminally unsafe, there is a domain of "nearly-safe" genies, which must dwarf the space of "safe" genies (examples of a nearly-safe genie: one that picks the moral code of a random living human before deciding on an action or a safe genie + noise). This might sound like semantics, but I think the search for a totally "safe" genie/AI is a pipe-dream, and we should go for "nearly safe" (I've got a short paper on one approach to this here).

"Whatever proposition you can manage to input into the Outcome Pump, somehow happens, though not in a way that violates the laws of physics. If you try to input a proposition that's too unlikely, the time machine will suffer a spontaneous mechanical failure before that outcome ever occurs."

So, a kind of Maxwell's demon? :)

Recovering Irrationalist said:

I wouldn't pick an omnipotent but equally ignorant me to be my best possible genie.

Right. It's silly to wish for a genie with the same beliefs as yourself, because the system consisting of you and an unsafe genie is already such a genie.

Upon some reflection, I remembered that Robin has shown that two Bayesians who share the same priors can't disagree. So perhaps you can get your wish from an unsafe genie by wishing, "... to run a genie that perfectly shares my goals and prior probabilities."