Post Your Utility Function

A lot of rationalist thinking about ethics and economics assumes we have very well-defined utility functions: we know our preferences between states and events exactly, and can not only compare them (I prefer X to Y) but assign precise numbers to every combination of them (a p% chance of X equals a q% chance of Y). Because everyone wants more money, you should theoretically even be able to assign exact numerical values to positive outcomes in your life.

I ran a small experiment: I made a list of things I wanted and gave each a point value. I must say this experiment ended in failure: thinking "If I had X, would I take Y instead", and "If I had Y, would I take X instead" very often resulted in a pair of "No"s. Even considering multiple Xs/Ys for one Y/X usually led me to decide they're really incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life usually were not.

I finally decided on some vague numbers and evaluated the results two months later. My success in some areas was really big, in others nonexistent, and the only thing that was clear was that the numbers I had assigned were completely wrong.

This leads me to two possible conclusions:

  • I don't know how to draw utility functions, but they are a good model of my preferences, and I could learn how to do it.
  • Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.

Has anybody else tried assigning numeric values to different outcomes outside a very narrow subject matter? Have you succeeded and want to share some pointers? Or failed and want to share some thoughts on that?

I understand that details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.

Comments


Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.

They may be a bad descriptive match. But in prescriptive terms, how do you "help" someone without a utility function?

To help someone, you don't need him to have a utility function, just preferences. Those preferences do have to have some internal consistency, but the consistency criteria you need in order to help someone seem strictly weaker than the ones needed to establish a utility function. Among the von Neumann-Morgenstern axioms, maybe only completeness and transitivity are needed.

For example, suppose I know someone who currently faces choices A and B, and I know that if I also offer him choice C, his preferences will remain complete and transitive. Then I'd be helping him, or at least not hurting him, if I offered him choice C, without knowing anything else about his beliefs or values.
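For a finite set of options, those two axioms are easy to check mechanically. A minimal sketch in Python (the options and preference pairs are made up for illustration):

```python
from itertools import combinations

def complete(prefs, options):
    """Every pair of options is ranked one way or the other."""
    return all((a, b) in prefs or (b, a) in prefs
               for a, b in combinations(options, 2))

def transitive(prefs):
    """If a is preferred to b and b to c, a must be preferred to c."""
    return all((a, c) in prefs
               for a, b in prefs
               for b2, c in prefs if b2 == b)

# Hypothetical preferences over three choices; (a, b) means "a preferred to b".
options = {"A", "B", "C"}
prefs = {("A", "B"), ("B", "C"), ("A", "C")}

print(complete(prefs, options), transitive(prefs))  # True True

# With a complete, transitive relation, adding an option can't hurt:
# the agent just picks a maximal element of the enlarged choice set.
best = max(options, key=lambda x: sum((x, y) in prefs for y in options))
print(best)  # A
```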

Or did you have some other notion of "help" in mind?

Furthermore, utility functions actually aren't too bad as a descriptive match when you are primarily concerned about aggregate outcomes. They may be almost useless when you try to write one that describes your own choices and preferences perfectly, but they are a good enough approximation that they are useful for understanding how the choices of individuals aggregate: see the discipline of economics. This is a good place for the George Box quote: "All models are wrong, but some are useful."

You want a neuron dump? I don't have a utility function, I embody one, and I don't have read access to my coding.

I'm not sure I embody one! I'm not sure that I don't just do whatever seems like the next thing to do at the time, based on a bunch of old habits and tendencies that I've rarely or never examined carefully.

I get up in the morning. I go to work. I come home. I spend more time reading the internets (both at work and at home) than I probably should -- on occasion I spend most of the day reading the internets, one way or another, and while I'm doing so have a vague but very real thought that I would prefer to be doing something else, and yet I continue reading the internets.

I eat more or less the same breakfast and the same lunch most days, just out of habit. Do I enjoy these meals more than other options? Almost certainly not. It's just habit, it's easy, I do it without thinking. Does this mean that I have a utility function that values what's easy and habitual over what would be enjoyable? Or does it mean that I'm not living in accord with my utility function?

In other words, is the sentence "I embody a utility function" intended to be tautological, in that by definition, any person's way of living reveals/embodies their utility function (a la "revealed preferences" in economics), or is it supposed to be something more than that, something to aspire to that many people fail at embodying?

If "I embody a utility function" is aspirational rather than tautological -- something one can fail at -- how many people reading this believe they have succeeded or are succeeding in embodying their utility function?

I've put a bit of thought into this over the years, and don't have a believable theory yet. I have learned quite a bit from the exercise, though.

1) I have many utility functions. Different parts of my identity or different frames of thought engage different preference orders, and there is no consistent winner. I bite this bullet: personal identity is a lie - I am a collective of many distinct algorithms. I also accept that Arrow’s impossibility theorem applies to my own decisions.

2) There are at least three dimensions (time, intensity, and risk) to my utility curves. None of these are anywhere near linear - the time element seems to be hyperbolic in terms of remembered happiness for past events, and while I try to keep it sane for future events, that's not my natural state, and I can't do it for all my pieces with equal effectiveness.

3) They change over time (which is different from the time element within the preference space). Things I prefer now, I will not necessarily prefer later. The meta-utility of balancing this possibly-anticipated change against the timeframe of the expected reward is very high, and I can sometimes even manage it.

Here's one data point. Some guidelines have been helpful for me when thinking about my utility curve over dollars. This has been helpful to me in business and medical decisions. It would also work, I think, for things that you can treat as equivalent to money (e.g. willingness-to-pay or willingness-to-be-paid).

  1. Over a small range, I am approximately risk neutral. For example, a 50-50 shot at $1 is worth just about $0.50, since the range we are talking about is only between $0 and $1. One way to think about this is that, over a small enough range, there isn't much practical difference between a curve and a straight line approximating that curve. Over the range -$10K to +$20K, I am risk neutral.

  2. Over a larger range, my utility curve is approximately exponential. For me, between -$200K and +$400K, my utility curve is fairly close to u(x) = 1 - exp(-x/400K). The reason is that, for me, changing my wealth by a relatively small amount won't radically change my risk preference, and that implies an exponential curve. Give me $1M and my risk preferences might change, but within the above range, I pretty much would make the same decisions.

  3. Outside that range, it gets more complicated than I think I should go into here. In short, I am close to logarithmic for gains and exponential for losses, with many caveats and concerns (e.g. avoiding the zero illusion: my utility curve should not have any sort of "inflection point" around my current wealth, since there's nothing special about that particular wealth level).

(1) and (2) can be summarized with one number, my risk tolerance of $400K. One way to assess this for yourself is to ask "Would I like a deal with a 50-50 shot at winning $X versus losing $X/2?" The X that makes you indifferent between having the deal and not having the deal is approximately your risk tolerance. I recommend acting risk neutral for deals between $X/20 and minus $X/40, and using an exponential utility function between $X and minus $X/2. If the numbers get too large, thinking about them in dollars per year instead of total dollars sometimes helps. For example, $400K seems large, but $20K per year forever may be easier to think about.
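If it helps, the indifference point described above can be found numerically. A sketch in Python, assuming the exponential form u(x) = 1 - exp(-x/R) with the R = $400K from the parent comment (the bisection bounds are arbitrary):

```python
import math

R = 400_000.0  # risk tolerance from the comment above

def u(x):
    """Exponential utility: u(x) = 1 - exp(-x/R)."""
    return 1.0 - math.exp(-x / R)

def deal_eu(X):
    """Expected utility of a 50-50 shot at winning X vs losing X/2."""
    return 0.5 * u(X) + 0.5 * u(-X / 2)

# Bisect for the X where the deal is exactly neutral (EU = u(0) = 0):
# small X is attractive (the risk-neutral region), huge X is not.
lo, hi = 1.0, 2_000_000.0
for _ in range(100):
    mid = (lo + hi) / 2
    if deal_eu(mid) > 0:
        lo = mid
    else:
        hi = mid

print(round(lo))  # ~385K, i.e. roughly the risk tolerance R
```

The indifference point comes out just under R itself, which is why the 50-50 win-$X / lose-$X/2 question is a reasonable way to estimate your risk tolerance.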

Long, incomplete answer, but I hope it helps.

thinking "If I had X, would I take Y instead", and "If I had Y, would I take X instead" very often resulted in a pair of "No"s

It's a well-known result that losing something produces roughly twice the disutility that gaining the same thing would produce in utility. (I.e., we "irrationally" prefer what we already have.)

I feel some people here are trying to define their utility functions via linear combinations of sub-functions that each depend only on a small part of the world state.

Example: If I own X, that'll give me a utility of 5, if I own Y that'll give me a utility of 3, if I own Z, that'll give me a utility of 1.

Problem: Choose any two of {X, Y, Z}

Apparent Solution: {X, Y} for a total utility of 8.

But human utility functions are not linear combinations of such sub-functions; they are functions from global world states into the real numbers. Think about the above example with X = Car, Y = Bike, Z = Car keys.

It seems obvious now, but the interdependencies are much more complicated. Like money utility being dependent on the market situation, food utility being dependent on the stuff you ate recently, material (as in building) utility being dependent on available tools and vice versa, internet availability utility being dependent on available computer, power, and time.
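The car/keys point can be made concrete in a few lines. A toy sketch (the item values are from the example above; the keyless-car rule is my own illustrative assumption):

```python
from itertools import combinations

# Additive sub-utilities from the example: each item valued on its own.
item_value = {"car": 5, "bike": 3, "car_keys": 1}

def additive_utility(bundle):
    return sum(item_value[i] for i in bundle)

# A utility over whole bundles (world states) can encode interactions;
# the keyless-car rule below is an illustrative assumption.
def bundle_utility(bundle):
    bundle = frozenset(bundle)
    if "car" in bundle and "car_keys" not in bundle:
        # A car you can't start is nearly worthless.
        return additive_utility(bundle - {"car"}) + 0.5
    return additive_utility(bundle)

pairs = list(combinations(item_value, 2))
best_additive = max(pairs, key=additive_utility)
best_bundle = max(pairs, key=bundle_utility)
print(best_additive)  # ('car', 'bike') -- the "apparent solution"
print(best_bundle)    # ('car', 'car_keys')
```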

This leads me to two possible conclusions

A third possibility: Humans aren't in general capable of accurately reflecting on their preferences.

Utility functions are a really bad match for human preferences, and one of the major premises we accept is wrong.

If utility functions are a bad match for human preferences, that would seem to imply that humans simply tend not to have very consistent preferences. What major premise does this invalidate?

A third possibility: Humans aren't in general capable of accurately reflecting on their preferences.

Humans are obviously capable of perceiving their own preferences at some level, otherwise they'd be unable to act on them. I assume what you propose here is that conscious introspection is unable to access those preferences?

In that case, utility functions could potentially be deduced by the individual placing themselves into situations that require real action based on relevant preferences, recording their choices, and attempting to deduce a consistent basis that explains those choices. I'm pretty sure that someone with a bit of math background who spent a few days taking or refusing various bets could deduce the nonlinearity and approximate shape of their utility function for money without any introspection, for instance.
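That deduction can even be automated. A toy sketch in Python: simulate someone whose choices follow an exponential utility with a hidden risk tolerance (both the functional form and the value are made up for illustration), record which 50-50 bets they accept, then recover the parameter from the record alone:

```python
import math

def accepts(win, loss, R):
    """Would an expected-utility maximizer with exponential utility
    u(x) = 1 - exp(-x/R) take a 50-50 bet winning `win` / losing `loss`?"""
    u = lambda x: 1.0 - math.exp(-x / R)
    return 0.5 * u(win) + 0.5 * u(-loss) > 0.0

# Hypothetical subject with a hidden risk tolerance (made-up value):
TRUE_R = 250.0
bets = [(w, l) for w in range(50, 1001, 50) for l in range(25, 501, 25)]
record = [(w, l, accepts(w, l, TRUE_R)) for w, l in bets]

# No introspection needed: pick the R that best explains the recorded choices.
def score(R):
    return sum(accepts(w, l, R) == choice for w, l, choice in record)

best_R = max(range(50, 1001, 10), key=score)
print(best_R)  # lands near TRUE_R
```

With enough varied bets, the recovered parameter pins down the shape of the curve without the subject ever introspecting on it directly.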

This is a good exercise, I'll see what I can do for MY utility function.

First of all, a utility function is a function

f: X --> R

Where X is some set. What should that set be? Certainly it shouldn't be the set of states of the universe, because then you can't say that you enjoy certain processes (such as bringing up a child, as opposed to the child just appearing). Perhaps the set of possible histories of the universe is a better candidate. Even if we identify histories that are microscopically different but macroscopically identical, and apply some crude time horizon, we still have a GARGANTUAN set, probably of the order of 10^(10^10) elements.

We thus need a way of compressing the amount of numerical assignment we have to do. We can do this by saying things like

I assign utility k to having a loving relationship with a partner with qualities a, b, c, ... irrespective of almost any other factors

Moving away from abstract mathematical considerations to more pragmatic ones, the main tradeoff that I can't decide upon is how to trade personal gains such as status, wealth, nice social circle and partner against saving the world.

I realize that my utility function is inscrutable, so I trust the unconscious part of me to make accurate judgments of what I want. Once I've determined what I want, I use the conscious part of me to determine how I'll achieve it.

Don't trust the subconscious too much in determining what you want either. Interrogate it relentlessly, ask related questions, find incoherent claims and force the truth about your preference to the surface.

Interesting exercise. After trying for a while I completely failed; I ended up with terms that are completely vague (e.g. "comfort"), and actually didn't even begin to scratch the surface of a real (hypothesized) utility function. If it exists it is either extremely complicated (too complicated to write down perhaps) or needs "scientific" breakthroughs to uncover its simple form.

The result was also laughably self-serving, more like "here's roughly what I'd like the result to be" than an accurate depiction of what I do.

The real heresy is that this result does not particularly frighten or upset me. I probably can't be a "rationalist" when my utility function doesn't place much weight on understanding my utility function.

Can you write your own utility function, or adopt the one you think you should have? Is that sort of wholesale tampering wise?

What counts as a "successful" utility function?

In general terms there are two, conflicting, ways to come up with utility functions, and these seem to imply different metrics of success.

  1. The first assumes that "utility" corresponds to something real in the world, such as some sort of emotional or cognitive state. On this view, the goal, when specifying your utility function, is to get numbers that reflect this reality as closely as possible. You say "I think x will give me 2 emotilons", and "I think y will give me 3 emotilons"; you test this by giving yourself x, and y; and success is if the results seem to match up.

  2. The second assumes that we already have a set of preferences, and "utility" is just a number we use to represent these, such that xPy <=> u(x)>u(y), where xPy means "x is preferred to y". (More generally, when x and y may be gambles, we want: xPy <=> E[u(x)]>E[u(y)]).

It's less clear what the point of specifying a utility function is supposed to be in the second case. Once you have preferences, specifying the utility function has no additional information content: it's just a way of representing them with a real number. I guess "success" in this case simply consists in coming up with a utility function at all: if your preferences are inconsistent (e.g. incomplete, intransitive, ...) then you won't be able to do it, so being able to do it is a good sign.
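The "no additional information content" point is easy to demonstrate: any order-preserving relabeling of the utilities represents the same preferences. A toy sketch (the ranking is made up):

```python
# A hypothetical preference ranking, best first:
ranking = ["x", "y", "z"]

# Two very different utility functions that induce the same order:
u = {item: -i for i, item in enumerate(ranking)}
v = {item: 10 ** -i for i, item in enumerate(ranking)}

def prefers(a, b, util):
    """xPy <=> u(x) > u(y)."""
    return util[a] > util[b]

pairs = [(a, b) for a in ranking for b in ranking]
same = all(prefers(a, b, u) == prefers(a, b, v) for a, b in pairs)
print(same)  # True: the numbers carry no information beyond the order
```

(Once gambles are involved, expected utility does constrain the numbers more: only positive affine transformations of u preserve E[u]-rankings.)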

Much of the discussion about utility functions on this site seems to me to conflate these two distinct senses of "utility", with the result that it's often difficult to tell what people really mean.

So, we're just listing how much we'd buy things for? I don't see why it's supposed to be hard.

I guess it gets a bit complicated when you consider combinations of things, rather than just their marginal value. For example, once I have a computer with an internet connection, I care for little else. Still, I just have to figure out what would be about neutral, and decide how much I'd pay an hour (or need to be paid an hour) to go from that to something else.

Playing a vaguely interesting game on the computer = 0.

Doing something interesting = 1-3.

Talking to a cute girl = 5.

Talking to a cute girl I know = 8.

Talking to the girl I really like = 50.

Thinking about a girl I really like if I talked to her within the last couple of days, or probably will within a couple of days = 4.

Having hugged the girl I really like within the last two hours = 50.

Hugging a cute girl I know = 50. Note that this one only lasts for about a second, so it's only about a 7000th as good as the last one.

Hugging a cute girl I don't know = 20.

Hugging anyone else except my brother = 10.

Homework = -2, unless it's interesting.

Eating while hungry = 2.

Asleep = ??? I have no idea how to figure that one out.

I didn't use dollar value because I'm too cheap to actually spend money. Knowing how much I can help people for the same cost will do that to you. Check out the Disease Control Priorities Project (http://tinyurl.com/y9wpk5e). There's one for $3 a QALY. Even the hugging the girl I like I only estimate at 0.02 QALYs.

Using that estimate, one unit is about twice my average happiness. More accurate than I'd expect.