As humans, we find it unpleasant to learn facts that we would rather not be true. For example, I would dislike finding out that my girlfriend was cheating on me, that a parent had died, or that my bank account had been hacked and I had lost all my savings.

 

However, this is a consequence of the dodgily designed human brain. We don't operate with a utility function. Instead, we have separate neural circuitry for wanting things and for liking them, and we behave according to both. If my girlfriend is cheating on me, I may want to know, but I wouldn't like knowing. In some cases, we'd rather not learn things at all: if I'm dying in hospital with only a few hours to live, I might rather stay ignorant of another friend's death for the short remainder of my life.

 

However, a rational being, say an AI, would never prefer not to learn something, except in contrived cases like Omega offering you $100 if you can avoid learning the square of 156 for the next minute.

 

As far as I understand, an AI choosing among a set of options decides using approximately the following algorithm. For simplicity, it's stated in terms of causal decision theory.

 

"For each option, guess what will happen if you do it, and calculate the average utility. Choose the option with the highest utility."

 

So say Clippy is using that algorithm with his utility function of utility = number of paperclips in the world.

 

Now imagine Clippy is on a planet making paperclips. He is considering listening to the Galactic Paperclip News radio broadcast. If he does so, there is a chance he might hear about a disaster leading to the destruction of thousands of paperclips. Would he decide in the following manner?

 

"If I listen to the radio show, there's maybe a 10% chance I will learn that 1000 paperclips were destroyed. My utility in from that decision would be on average reduced by 100. If I don't listen, there is no chance that I will learn about the destruction of paperclips. That is no utility reduction for me. Therefore, I won't listen to the broadcast. In fact, I'd pay up to 100 paperclips not to hear it."

 

Try to figure out the flaw in that reasoning. It took me a while to spot it, but perhaps I'm just slow.

 

 

* thinking space *

 

 

For Clippy to believe "If I listen to the radio show, there's maybe a 10% chance I will learn that 1000 paperclips were destroyed," he must also believe that there is already a 10% chance that 1000 paperclips have been destroyed. So his expected utility is already reduced by 100 whether he listens or not. If he listens to the radio show, there's a 90% chance his utility will turn out 100 higher than that expectation, and a 10% chance it will turn out 900 lower. On average, that's no change at all, so he is indifferent to gaining the knowledge.
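The same point as a quick back-of-the-envelope check, just re-running the numbers from the paragraph above:

```python
# Clippy's numbers: a 10% chance that 1000 paperclips have already been destroyed.
p_disaster, clips_lost = 0.10, 1000

# Expected utility change if he stays ignorant: the destruction is just as likely
# whether or not he hears about it, so his expectation is already down by 100.
ignorant = -p_disaster * clips_lost                            # -100.0

# Expected utility change if he listens: 90% chance of good news (no loss),
# 10% chance of confirming the loss of 1000 paperclips.
informed = (1 - p_disaster) * 0 + p_disaster * (-clips_lost)   # -100.0

print(ignorant == informed)  # True: listening doesn't change the expectation
```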

 

As humans, we don't work that way. We don't constantly feel the pressure of knowledge like “People might have died since I last watched the news,” just because humans don't deal with probability in a rational manner. And also, as humans who feel things, learning about bad things is unpleasant in itself. If I were dying in my bed, I probably wouldn't even think to increase my probability that a friend had died just because no-one would have told me if they had. An AI probably would.

 

Of course, in real life, information has value. Maybe Clippy needs to know about these paperclip-destroying events in order to protect his own paperclips from them, or he needs to stay up to date on current events to socialise effectively with other paperclip enthusiasts. So he would probably gain utility, on average, from choosing to listen to the radio broadcast.

 

In conclusion: an AI may prefer the world to be in one state rather than another, but it almost always prefers more knowledge about the actual state of the world, even if what it learns isn't good.

Comments (24)

Non-contrived instances where ignorance can have utility:

  • For entertainment: A movie watcher wants to remain ignorant of a movie's ending until she actually sees it.
  • To reduce bias: A scientist conducts a double-blind study.
  • To respect privacy: I delete the private emails that have been accidentally forwarded to me.
  • To avoid temptation: If I have difficulty keeping secrets, I should avoid learning them.
  • Ignorant people can avoid scams by being too dumb to fool. In the ants and prisoner's dilemma tournaments, smart people lost by getting too clever.

For entertainment: A movie watcher wants to remain ignorant of a movie's ending until she actually sees it.

Probably not as true as you think it is.

Off the top of my head you'll also want to add:

  • Plausible deniability
  • Games of asymmetric information (Peach or Lemon? [pdf])
  • Protecting knowledge important to your organisation by not possessing it (cf. espionage, terrorist cells, etc.)

There's also a broader category of circumstances where you may be called upon to act as though you don't know something you actually do, and where your motives and knowledge will be scrutinised. I've yet to come across a satisfactory name for this, but I've provisionally referred to it as "protogaming" in the past (as opposed to "metagaming", where you use knowledge from outside a game, which you shouldn't possess, to influence your actions to your own benefit).

Protogaming involves publicly giving someone information they shouldn't have access to, which would influence their decisions, and then asking them to make the decision as if they didn't have it. If they'd been ignorant, their decisions could be made without scrutiny, but being in possession of the information obliges them to choose less favourable options in order to appear as though they're not taking advantage of the illegitimate information.

If anyone has a better term for that, please, please, please tell me what it is.

All of those things except perhaps the privacy one are human-specific, in that they only exist for humans, and an AI wouldn't worry about them.

  • In learning: You likely get more utility by not knowing what the answer to your homework question is, in advance of working it out
  • Entertainment: Puzzles, crosswords. Delayed telecast of sport events.

The flaw I'd point out is that Clippy's utility function is utility = number of paperclips in the world, not utility = number of paperclips Clippy thinks are in the world.

Learning about the creation or destruction of paperclips does not actually increase or decrease the number of paperclips in the world.

I agree. That's the confusion that I was sorting out for myself.

The point I was making is that you attempt to maximise your utility function. Your utility function decreases when you learn of a bad thing happening. My point was that if you don't know, your utility function is the weighted average of the various things you think could happen, so on average learning the state of the universe does not change your utility in that manner.

The point I was making is that you attempt to maximise your utility function. Your utility function decreases when you learn of a bad thing happening

I think you're still confused. Utility functions operate on outcomes and universes as they actually are, not as you believe them to be. Learning of things doesn't decrease or increase your utility function. If it did, you could maximize it by taking a pill that made you think it was maximized. Learning that you were wrong about the utility of the universe is not the same as changing the utility of the universe.

Learning of things doesn't decrease or increase your utility function.

It depends on what is meant by "your utility function". That's true if you mean a mathematical idealization of a calculation performed in your nervous system that ignores your knowledge state, but false if you mean the actual result calculated by your nervous system.

You are confusing utility with happiness. But you're confusing them even afterwards.

To see whether the utility of information can be negative, you shouldn't be thinking of scenarios like "I learned my girlfriend is cheating on me" (which is useful info to have in terms of determining her future trustworthiness and reliability). You should be thinking of scenarios like "My mother just told me she once discovered my father doing it doggy-style with his secretary on the kitchen table I'm currently eating on," which grosses you out but doesn't really provide much more useful information than "My mother just told me she caught my father cheating on her; she let me know so I wouldn't be confused about why she's divorcing him."

There's a reason the phrase "TMI" ("Too Much Information") was invented. There's worth in information you can act on. Information that just makes you unhappy but that you can't significantly act on is a negative, unless you value information for its own sake more than you value your own happiness.

People's happiness isn't the same as people's utility, but people are allowed to have a term for their own happiness in their utility function; most people do.

If I were dying in my bed, I probably wouldn't even think to increase my probability that a friend had died just because no-one would have told me if they had.

There is something intuitively wrong with that statement. If you do not gain any new information, your probability calculation should not change. If the probability of not being told of a friend's death is approximately the same either way, there should be little update, and any minuscule update would intuitively be in the direction of 'if someone were dead, I'd be told'. So I did a rough (very, very rough) calculation using the general idea that people are unlikely to tell Dying You that a friend is dead even when they are, and even more unlikely to tell Dying You that a friend is dead when they aren't. (That's just not funny, dude.)

0.00843437006 = (0.01 × 0.8) / ((0.01 × 0.8) + (0.95 × 0.99))

p(friend is dead | no one tells Dying You) = p(no one tells you | friend is dead) × p(friend is dead) / [p(no one tells you | friend is dead) × p(friend is dead) + p(no one tells you | friend is not dead) × p(friend is not dead)]

Same is true even for higher values of 'likelihood that a friend dies'.

0.173913043 = (0.2 × 0.8) / ((0.2 × 0.8) + (0.95 × 0.8))

0.457142857 = (0.5 × 0.8) / ((0.5 × 0.8) + (0.95 × 0.5))
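In code, the update being described looks roughly like this (a sketch using the same assumed probabilities as above):

```python
# P(friend is dead | no one has told Dying You), via Bayes' theorem.
# The 0.8 and 0.95 are the assumed chances of silence in each case.
def p_dead_given_silence(p_dead, p_silent_if_dead=0.8, p_silent_if_alive=0.95):
    joint_dead  = p_silent_if_dead  * p_dead
    joint_alive = p_silent_if_alive * (1 - p_dead)
    return joint_dead / (joint_dead + joint_alive)

print(p_dead_given_silence(0.01))  # ~0.0084
print(p_dead_given_silence(0.2))   # ~0.174
print(p_dead_given_silence(0.5))   # ~0.457
```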

I'm tempted to adjust the other probabilities for the 50% case, though. If death is that common among my peers anyway, then my friends would probably be more likely to tell me.

EDIT: Haha, I forgot the multiplication sign is used for formatting. Woops.

This seems to be true only if your mental states of knowledge can have no effect on things that you care about... that is to say, it's pretty clearly false!

Sometimes humans care about their own mental states. It might count positively in my utility function that I not be psychologically distressed. Plausibly, learning certain bits of knowledge that I can't use practically ("The Great Old Ones are about to awake and devour the earth!") might have this effect with no compensating benefits, and so learning them would have negative utility.

Alternatively, my state of knowledge may affect something in the world. If I'm very bad at concealing my knowledge, then knowing about the secret conspiracy to take over the world may just get me killed: that's pretty clearly negative utility. Or, say, on being told that the fate of the world rests upon my performance in some task, I'm likely to do worse than if I didn't know (due to nerves).

In short: if you're a disembodied observer who merely picks effects to occur in the world, and whose utility function ranges strictly over things in the world other than yourself, then this might be true. Otherwise, not so much.

[anonymous]

This seems to be true only if your mental states of knowledge can have no effect on things that you care about... that is to say, it's pretty clearly false!

TRIGGER WARNING: If your brain works approximately like mine, reading my link may literally make you physically uncomfortable for several minutes.

When looking into this, I found a meme about pieces of information concerning processes your brain or body normally takes care of automatically, where being reminded of them causes your consciousness to take manual control. That is disconcerting, because you end up distracted by dealing with the irrelevant information for a while, despite the fact that it is normally handled automatically.

http://knowyourmeme.com/memes/you-are-now-breathing-manually

I'll admit I was aware of some of those, but some of them were entirely new to me, and disconcertingly so.

I prove this as a mathematical theorem here:

Theorem. Every act of observation has, before you make it, a non-negative expected utility.
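A sketch of why this holds (my own gloss of the standard value-of-information argument, not necessarily the structure of the linked proof): write U for utility, a for the available acts, and o for the possible observations. Then

\[
\mathbb{E}_{o}\Big[\,\max_{a}\ \mathbb{E}[\,U \mid a, o\,]\,\Big]
\;\ge\;
\max_{a}\ \mathbb{E}_{o}\Big[\,\mathbb{E}[\,U \mid a, o\,]\,\Big]
\;=\;
\max_{a}\ \mathbb{E}[\,U \mid a\,],
\]

because an average of per-observation maxima can never be less than the maximum of the averages. The left-hand side is the expected utility of observing and then acting optimally; the right-hand side is the best you can do by committing to an act without observing.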

Note that the theorem ignores everything that you might expect it to ignore from the formulation: non-rational or non-utility-maximising agents, the cost of acquiring and using the information, the effects of other people knowing you know it, the effects on yourself of knowing you know it (as distinct from merely knowing it and so updating your utility-maximising decisions), etc. etc.

Should I write this up for arXiv? The theorem is not deep, but neither is Aumann's agreement theorem, and that's in a journal.

[anonymous]

Another special case: if learning the information requires great resources in computation or memory, and the expected utility of retaining (not using) these is greater than the expected utility of the info, then you might not want to learn the info. (See Omohundro 2008.)

This special case can apply to AIs, where the metrics of computation and memory are well-defined.

But it can be true even for humans. The time or money spent learning something might be better spent in another way.

[This comment is no longer endorsed by its author]

Clippy ought instead to be compelled to listen to paperclip news, just in case it hears of some way to optimize the paperclip count (e.g. it could hear that 10,000 paperclips were destroyed in some way which is likely to repeat but which Clippy can prevent). As a paperclip optimizer, it will then never decide not to listen to the news. (Unless Clippy doesn't really care about paperclips, and its bad feeling was merely put there as a heuristic to deal with paperclip destruction in the immediate vicinity, without distant news in mind.)

The way the utility of listening to the news can be negative, though, is if the news is being constructed by an agent which hates paperclips and knows enough about Clippy to impede Clippy's effectiveness with false news. (Note that any piece of false news is still a true statement of the form "the news service said X".)

I had this thought too a bit ago, and flailed about a bit trying to make it rigorous. I've thought about it off and on since then. Here's a sketch of what I've got:

Part 1: Suppose we have an entity who acts to maximize her own current credence in various positive outcomes. She will behave identically to a VNM-rational entity trying to maximize the probability of those outcomes iff she is VNM-rational and her utility function is U(P(outcome)=p) = k*p for all p and some constant k.

Part 2: Any credence-maximizer wants its future selves to be probability-maximizers, since this maximizes its current credence in future positive outcomes. Therefore, an entity that pre-commits to act as a probability-maximizer, or equivalently as though it had a linear utility-of-credence function, increases its current utility, at the possible expense of its future selves' utilities. Therefore any credence-maximizer that is capable of pre-commitment or self-modification will act identically to a VNM-rational probability maximizer.

As humans, we don't work that way. We don't constantly feel the pressure of knowledge like “People might have died since I last watched the news,” just because humans don't deal with probability in a rational manner.

Isn't this a thing some people do? Call your mother.

I would dislike finding out that my girlfriend was cheating on me [...] However, this is a consequence of the dodgily designed human brain.

You can argue that this, as well as many other real life examples, is a consequence of finite computational resources in general. The utility lost in trying to figure out what exactly is the proper course of action - now that you have the knowledge - perhaps outweighs the utility of knowing.

[TrE]

Did anyone else try to square 156 within 60 seconds after reading that line?

Actually, I didn't, and then Omega appeared and gave me $100.

Nope, for the same reason that when told to not think about a blue elephant playing the trombone, I immediately start imagining an orange rhinoceros riding a tiny bicycle and generally manage to not think of the forbidden thing.

Refraining from multiplying large numbers is somewhat easier.

No. That part of my brain which used to square even bigger numbers became unemployed around 1980. I guess it has a new job now, and the old skill has been forgotten.