I think this post is basically correct. You don't, however, give an argument that most minds would behave this way, so here is a brief intuitive argument for it. A "utility function" does not mean something that is maximized in the ordinary sense of "maximize"; it just means "what the thing does in all situations." Look at computers: what do they do? In most situations, they sit there and compute things, without attempting to do anything in particular in the world. If you scale up their intelligence, that will not necessarily change their utility function much. In other words, it will lead to computers that mostly sit there and compute, without trying to do much in the world. That is to say, AIs will be weakly motivated. Most humans are weakly motivated too, and most of the strength of their motivation comes not from intelligence but from the desires that evolution gave them. Since AIs will not have that evolutionary history, a randomly designed AI will be even more weakly motivated than a human.
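To make the "utility function as behavior" reading concrete, here is a minimal sketch (the policy, situations, and names are invented for illustration): any mapping from situations to actions is trivially "maximized" by the utility function that scores whatever the system does above whatever it doesn't.

```python
from typing import Dict

# Hypothetical behavior of a scaled-up computer: it mostly just computes.
policy: Dict[str, str] = {
    "idle": "compute",
    "query received": "compute",
    "world-conquest opportunity": "compute",
}

def trivial_utility(situation: str, action: str) -> float:
    """Utility 1 for whatever the policy actually does, 0 for anything else."""
    return 1.0 if policy.get(situation) == action else 0.0

# Every policy maximizes its own trivial utility function, so describing a
# system with a "utility function" does not imply goal-directed maximization.
assert all(trivial_utility(s, a) == 1.0 for s, a in policy.items())
```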

This theory is mostly true, but rather than being cynical about people caring about poor people, we should be cynical about people caring about anything at all.

This is all basically right.

However, as I said in a recent comment, people do not actually have utility functions, so in that sense they have neither a bounded nor an unbounded one. They can only try to make their preferences less inconsistent. Here you have two options: you can pick a form of consistency very different from normal behavior, or you can try to become more normal at the same time as becoming more consistent. The second choice is better. In this case, the second choice means adopting a bounded utility function, and the first means adopting an unbounded one and going insane (because agreeing to be mugged is insane).
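To spell out why an unbounded utility function means agreeing to be mugged, here is a minimal expected-utility sketch, assuming the standard Pascal's-mugging setup (the symbols c, p, U, and B are mine, not from the discussion):

```latex
% Let c > 0 be the cost of paying the mugger, and p > 0 the probability
% you assign to the mugger's promise being kept.

% Unbounded utility: however small p is, the mugger can promise a payoff
% whose utility U satisfies
\[
  p \, U > c ,
\]
% so an expected-utility maximizer always pays.

% Bounded utility, U <= B: the expected gain is at most pB, so the offer
% is refused whenever
\[
  p < \frac{c}{B} .
\]
```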

You don't have any such basic list.

I don't think you understood the argument. Let's agree that an electron prefers what it is going to do over what it is not going to do. But does an electron in China prefer that I write this comment, or a different one?

Obviously, it has no preference at all about that. So even if it has some local preferences, it does not have a coherent preference over all possible things. The same thing is true for human beings, for exactly the same reasons.

I don't know why you think I am assuming this. Whatever the causes of your opinions, one thing that is not among them is a coherent set of probabilities. In the same way, whatever the causes of your actions, one thing that is not among them is a coherent set of preferences.

This is necessarily true, since you are built out of physical things that do not have sets of preferences about the world, and you follow physical laws that do not have sets of preferences about the world. These have something preference-like: you could speak metaphorically as if gravity prefers things to be lower down or closer together. But you cannot take two arbitrary states of the world and say, "Gravity would prefer this state to that one." Gravity simply has no such preferences. In the same way, since your actions result from principles that are preference-like but are not preferences, your actions are also somewhat preference-like, but they do not express a coherent set of preferences.

All that said, you are close to a truth: since the incoherence of people's lives bothers them (in both thoughts and actions), it is good for people to try to make both more coherent. In general, you could make incoherent thoughts and actions more coherent in two different directions: "more consistent with themselves but less consistent with the world," or "more consistent with themselves and also more consistent with the world." The second choice is better.

" It's a list of all our desires and preferences, in order of importance, for every situation ."

This is basically an assertion that we actually have a utility function. This is false. There might be a list of pairings between "situations you might be in" and "things you would do," but it does not correspond to any coherent set of preferences. It corresponds to someone sometimes preferring A to B, and sometimes B to A, without a coherent reason for this.
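To illustrate why such a list need not encode any coherent ranking, here is a minimal sketch with invented choice data: the three observed pairwise choices below form a cycle, so no ordering of the options reproduces them.

```python
from itertools import permutations

# Invented observations: in each pair the agent chose the first option.
observed = [("A", "B"), ("B", "C"), ("C", "A")]  # A over B, B over C, C over A

def consistent_with(ranking, choices):
    """True if every observed choice picks the option ranked higher."""
    position = {option: i for i, option in enumerate(ranking)}
    return all(position[winner] < position[loser] for winner, loser in choices)

# No ranking of A, B, C is consistent with all three observed choices, so
# this situation-by-situation behavior expresses no coherent preference.
assert not any(consistent_with(r, observed) for r in permutations("ABC"))
```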

Asserting that there is such a coherent list would be like asserting that you have a list of probabilities for all statements, derived from a coherent prior and brought to their current values by Bayesian updating. This is nonsense: there is no such thing as "the actual probability that you really truly assign to the claim that you are about to change your name to Thomas John Walterson and move to Australia." You never thought of that claim before, and although you consider it very unlikely once you think of it, that judgment is not intrinsically numerical in any way. We assign probabilities by making them up, not by discovering something pre-existent.
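For contrast, here is what the coherent version would actually require, as a minimal sketch with invented numbers: a fixed prior and fixed likelihoods, with the current credence obtained purely by Bayes' rule.

```python
# Invented numbers for illustration; nothing like them exists in a head.
prior_h = 0.5          # prior credence in some hypothesis H
p_e_given_h = 0.8      # likelihood of evidence E if H is true
p_e_given_not_h = 0.2  # likelihood of E if H is false

# Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(posterior_h)  # ~0.8

# The comment's point: an actual person has no precise prior_h or
# likelihoods to plug in; when asked, they invent the numbers on the spot.
```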

In exactly the same way, coherent sets of preferences are made up, not discovered.

I predict the video was faked (i.e., that everyone in it knew what was happening, and that in fact there was not even a test like this).

Most people, most of the time, state their beliefs as binary propositions, not as probability statements. Furthermore, this is not a matter of leaving out a detail that actually exists; the detail is missing from reality. If I say, "That man is about 6 feet tall," you can argue that he has an objectively precise height of 6 feet 2 inches or whatever. But if I say "the sky is blue," there is no objectively precise probability that I assign to that statement. If you push me, I might come up with a number, but I would basically be making it up: it is not something that exists the way someone's height does.

In other words, in the relevant sense, beliefs really are binary propositions, not probability statements. You are quite right, however, that in the process of becoming more consistent you might want to move toward having probabilities for your beliefs. But you do not currently have them for most of your beliefs, nor does any human.
