Would Your Real Preferences Please Stand Up?

Related to: Cynicism in Ev Psych and Econ

In Finding the Source, a commenter says:

I have begun wondering whether claiming to be victim of 'akrasia' might just be a way of admitting that your real preferences, as revealed in your actions, don't match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective).

I think I've seen Robin put forth something like this argument [EDIT: Something related, but very different], and TGGP points out that Bryan Caplan explicitly believes pretty much the same thing1:

I've previously argued that much - perhaps most - talk about "self-control" problems reflects social desirability bias rather than genuine inner conflict.

Part of the reason why people who spend a lot of time and money on socially disapproved behaviors say they "want to change" is that that's what they're supposed to say.

Think of it this way: A guy loses his wife and kids because he's a drunk. Suppose he sincerely prefers alcohol to his wife and kids. He still probably won't admit it, because people judge a sinner even more harshly if he is unrepentant. The drunk who says "I was such a fool!" gets some pity; the drunk who says "I like Jack Daniels better than my wife and kids" gets horrified looks. And either way, he can keep drinking.

I'll call this the Cynic's Theory of Akrasia, as opposed to the Naive Theory. I used to think it was plausible. Now that I think about it a little more, I find it meaningless. Here's what changed my mind.

What part of the mind, exactly, prefers a socially unacceptable activity (like drinking whiskey or browsing Reddit) to an acceptable activity (like having a wife and kids, or studying)? The conscious mind? As Bill said in his comment, it doesn't seem like it works this way. I've had akrasia myself, and I never consciously think "Wow, I really like browsing Reddit...but I'll trick everyone else into thinking I'd rather be studying so I get more respect. Ha ha! The fools will never see it coming!"

No, my conscious mind fully believes that I would rather be studying2. And this even gets reflected in my actions. I've tried anti-procrastination techniques, both successfully and unsuccessfully, without ever telling them to another living soul. People trying to diet don't take out the cupcakes as soon as no one else is looking (or, if they do, they feel guilty about it).

This is as it should be. It is a classic finding in evolutionary psychology: the person who wants to fool others begins by fooling themselves. Some people even call the conscious mind the "public relations officer" of the brain, and argue that its entire point is to sit around and get fooled by everything we want to signal. As Bill said, "believing the signals, even if untrue, makes the signals more effective."

Now we have enough information to see why the Cynic's Theory is equivalent to the Naive Theory.

The Naive Theory says that you really want to stop drinking, but some force from your unconscious mind is hijacking your actions. The Cynic's Theory says that you really want to keep drinking, but your conscious mind is hijacking your thoughts and making you think otherwise.

In both cases, the conscious mind determines the signal and the unconscious mind determines the action. The only difference is which preference we define as "real" and worthy of sympathy. In the Naive Theory, we sympathize with the conscious mind, and the problem is the unconscious mind keeps committing contradictory actions. In the Cynic's Theory, we sympathize with the unconscious mind, and the problem is the conscious mind keeps sending out contradictory signals. The Naive theorist says: find some way to make the unconscious mind stop hijacking actions! The Cynic says: find some way to make the conscious mind stop sending false signals!

So why prefer one theory over the other? Well, I'm not surprised that it's mostly economists who support the Cynic's Theory. Economists are understandably interested in revealed preferences3, because revealed preferences are revealed by economic transactions and are the ones that determine the economy. It's perfectly reasonable for an economist to care only about those and dismiss any other kind of preference as a red herring that has to be removed before economic calculations can be done. Someone like a philosopher, who is more interested in thought and the mind, might be more susceptible to the identify-with-conscious-thought Naive Theory.

But notice how the theory you choose also has serious political implications4. Consider how each of the two ways of looking at the problem would treat this example:

A wealthy liberal is a member of many environmental organizations, and wants taxes to go up to pay for better conservation programs. However, she can't bring herself to give up her gas-guzzling SUV, and is usually too lazy to sort all her trash for recycling.

I myself throw my support squarely behind the Naive Theory. Conscious minds are potentially rational5, informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think?

 

Footnotes:

1: Caplan says that the lack of interest in Stickk offers support for the Cynic's Theory, but I don't see why it should, unless we believe the mental balance of power should be different when deciding whether to use Stickk than when deciding whether to do anything else.

Caplan also suggests in another article that he has never experienced procrastination as akrasia. Although I find this surprising, I don't find it absolutely impossible to believe. His mind may either be exceptionally well-integrated, or it may send signals differently. It seems within the range of normal human mental variation.

2: Of course, I could be lying here, to signal to you that I have socially acceptable beliefs. I suppose I can only make my point if you often have the same experience, or if you've caught someone else fighting akrasia when they didn't know you were there.

3: Even the term "revealed preferences" imports this value system, as if the act of buying something is a revelation that drives away the mist of the false consciously believed preferences.

4: For a real-world example of a politically-charged conflict surrounding the question of whether we should judge on conscious or unconscious beliefs, see Robin's post Redistribution Isn't About Sympathy and my reply.

5: Differences between the conscious and unconscious mind should usually correspond to differences between the goals of a person and the "goals" of the genome, or else between subgoals important today and subgoals important in the EEA.

Comments


When I procrastinate over a task, it's usually because I'm in a situation like this:

1) I want something to have been done and 2) I don't want to experience doing it.

To use the classic example, I want to have done my homework but I don't want to be doing my homework.

Well, it's not so mysterious when you put it that way :-(

Consider the case of a hungry rat that sees food on the other side of an electrified floor. The rat wants to minimize its discomfort. It wants to not get shocked, and also wants not to be hungry.

A moderately stupid rat will compare the pain of its current hunger to the pain of crossing the floor. When its pain from hunger becomes as strong as the pain of crossing the floor, it'll decide to cross the floor.

A smarter rat will realize that it'll have to cross the floor eventually, and so will minimize its total pain by crossing immediately, so its hunger doesn't have a chance to build to a painful level.

A really stupid rat will notice that, when it steps onto the electrified floor, its current pain equals the sum of its pain from hunger and the pain from the shock. As this total is always greater than the pain from hunger alone, it'll never step on the electrified floor and it will starve to death.

When it comes to homework, my decision-making algorithm seems to act like the first rat...
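The three rats' strategies can be sketched as a toy model. All the numbers here are invented for illustration (hunger pain grows by one unit per time step; crossing the floor costs a fixed ten units of shock pain):

```python
# Toy model of the three rats. Assumptions (not from the original
# comment): hunger pain at time t is t, and the shock is a one-time
# fixed cost.

SHOCK_PAIN = 10

def total_pain(cross_at):
    """Total pain if the rat crosses at time `cross_at`.

    Hunger pain accumulates (0, 1, 2, ...) until the crossing,
    then the rat eats and pays the one-time shock cost.
    """
    hunger = sum(range(cross_at))      # pain endured before crossing
    return hunger + SHOCK_PAIN

def moderately_stupid_rat():
    # Waits until current hunger pain matches the shock pain.
    t = 0
    while t < SHOCK_PAIN:              # hunger at time t is t
        t += 1
    return t

def smart_rat():
    # Knows it must cross eventually, so crosses immediately,
    # before hunger has a chance to build to a painful level.
    return 0

def really_stupid_rat():
    # On the floor, pain = hunger + shock, which always exceeds
    # hunger alone, so it never crosses and starves.
    return None

print(total_pain(moderately_stupid_rat()))  # 55: waited too long
print(total_pain(smart_rat()))              # 10: just the shock
```

The smart rat pays only the unavoidable shock cost; the moderately stupid rat pays that plus all the hunger it accumulated while waiting, which is the shape of the homework problem above.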

In both cases, the conscious mind determines the signal and the unconscious mind determines the action.

Look more closely. All preferences are equal, in the sense of being within the same system -- and this includes signaling preferences. The drunk prefers to drink and prefers to not be thought of as preferring that. But these are not concepts of a different nature; they can both be expressed within the same behavioral preference system.

IOW, both the Cynical and Naive theories are wrong; we only have one set of preferences, it just sometimes works out that the "best" compromise (in the sense of being an approach that your brain can discover through trial and error) is to say one thing and do another. But both the saying and doing are behaviors of the same type; "conscious" vs. "unconscious" is a red herring here.

Now, if you want to say that you don't consciously identify with some subset of your choices or preferences, that's fine, but it's not useful to claim that this is the result of some schism in your being. It's all you, you just aren't being conscious of that part of "you" at the moment.

The "unconscious mind" isn't a real entity; it's not a "mind", in the anthropomorphic sense. It's just whatever you're not paying attention to right now, that keeps on going. If you pay attention to your breathing or your heart rate you can learn to control them. If you pay attention to your feet you'll know where they are right now. And if you pay attention to what you actually get from your "akrasic" behavior, you'll realize it's something you genuinely want.

What you haven't been doing, is negotiating fairly among all your "interests" (to use Ainslie's jargon), or cleanly prioritizing your "controlled variables" (to use Powers's).

The "unconscious mind" isn't a real entity; it's not a "mind", in the anthropomorphic sense. It's just whatever you're not paying attention to right now, that keeps on going. If you pay attention to your breathing or your heart rate you can learn to control them. If you pay attention to your feet you'll know where they are right now. And if you pay attention to what you actually get from your "akrasic" behavior, you'll realize it's something you genuinely want.

Patri Friedman once wrote something about "wanting" versus "wanting to want". I agree that everything you do, you genuinely want to do, in the sense that you're not doing it under duress. But not everything you do is something you want to want to do.

Likewise, if I imagine myself suddenly getting infinite willpower, there are certain things I would do more and other things I would do less.

I'm using the word "conscious" to refer to things I want to want and things I would do more with infinite willpower. I'm using the word "unconscious" to refer to things I don't want to want and things I would do less with infinite willpower. I don't think it's too controversial that those are two different categories.

I'm using the word "conscious" to refer to things I want to want and things I would do more with infinite willpower. I'm using the word "unconscious" to refer to things I don't want to want and things I would do less with infinite willpower. I don't think it's too controversial that those are two different categories.

But they're not natural categories. The problem is that "consciousness" tends to focus on behaviors rather than the goals of those behaviors... as will be obvious to you if you've ever been a programmer trying to get people to give you actual requirements instead of just feature specifications. ;-)

So, it can be quite factually the case that you want not to do certain things, while also wanting (implicitly) some part of the result of those actions.

The problem is that protesting you don't want the action is not helpful. Our preferences are most visible in the breach, because consciousness is effectively an error handler. So your attention is drawn to the errors caused by the behavior, rather than to the goal of the behavior. Your brain wants you to just fix the error, and leave the working part of the system (from its point of view) alone.

But in order to fix the errors intelligently, you need to understand a bigger part of the system than just the location where the error is occurring. Specifically, you need to understand the requirements that are actually being met by the behavior, so that you can find other ways to implement those requirements.

What's more, I can guarantee you that when you find out those requirements, they will ultimately be something that you either do want, or did want at some time in the past, even if on reflection they are no longer relevant. Calling them a product of the unconscious mind is a factual error, as well as misleading: it implies they came out of nowhere and there's nothing you can do about them, when in actual fact they are (part of) your true preferences, and you can choose to pay attention and find out what they are, as well as searching for better ways to get them met.

This is the clearest statement of your philosophy that I have seen yet PJ, and I HEARTILY agree with what I see here.

This is one place where Caplan seems to go off the deep end. I think it illustrates what happens if you take the Cynic's view to its logical conclusion. In his "gun to the head" analogy, Caplan suggests that OCD isn't really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it's better to say that the person just really likes doing those repetitive behaviors.

As one commenter points out, this is equivalent to saying a person with a broken foot isn't really injured because they could walk up a flight of stairs if we put a gun to their head. They just prefer to not walk up the stairs.

It is an incredibly simplistic technique to reduce the brain to a single, unified organ, and determine the "true" desires by revealed preferences. Minds are much more complex and conflicted than that. Whatever people mean by "myself", it is surely not just the combined output of their brain.

I agree with your point here -- strongly. But I also think you're being unfair to Caplan. While his position is (I now realize) ridiculous, the example you gave is not.

In his "gun to the head" analogy, Caplan suggests that OCD isn't really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it's better to just say that the person just really likes doing those repetitive behaviors.

His position would not be that they like doing those behaviors per se, but rather that they have a very strange preference that makes those behaviors seem optimal. Caplan would probably call it "a preference for an unusually high level of certainty about something". For example, someone with OCD needs to perceive 1 million:1 odds that their hands are now clean, while normal people need only 100:1 odds.

So the preference is for cleanliness-certainty, not the act of hand-washing. To get that higher level of certainty requires that they wash their hands much more often.

Likewise, an OCD victim who has to lock their door 10 times before leaving has an unusually high preference for "certainty that the door is locked", not for locking doors.

Again, I don't agree with this position, but its handling of OCD isn't that stupid.

I used to have a mild case of OCD.

Let's say I cracked my first knuckle. Well, of course I'm going to crack the other three to balance things out. But then I mess up - my ring finger is only 70% cracked. I can feel it in the joint, a sort of localized anxiety (sort of like an itch, or a joint that needs stretching, but it's a purely psychological irritation). Obviously I can't 30% crack my knuckle - that's no different than moving the finger. So I have to over crack it, up to 130%, and then follow up with the other three fingers.

But now I've hit a problem - I've cracked each finger twice, that's not a good number. Things feel worse than they did before the crack. I'd better square things out, so that each finger has been cracked four times - that's a good number. But now my right hand is bothering me, so to even it out I crack each finger there four times. And... oh, what the hell. We'll crack each finger sixteen times. That's 2^8 * 2^4 - gorgeous. I mean, just look at that notation! How much prettier could you want it to be?

Everything's fine until next time I need to crack something... shudder

I eventually forced myself to stop doing this during my last years of High School. It was ridiculous, and I knew it, but each time I encountered it I'd have that itch. But I forced myself to ignore it, the same way you can't scratch your nose while you're on stage, and eventually I broke the habit. But even thinking about it now makes me want to do something that's exponentially symmetrical...

NO! NO, I WON'T GIVE INTO YOU OCD!

;)

It is a classic finding in evolutionary psychology: the person who wants to fool others begins by fooling themselves.

I object to two points here: (1) calling it a finding and (2) calling it evolutionary psychology. It certainly is popular in evolutionary psychology, but I don't see any argument (certainly not in that link) that it is selected over generations rather than learned over a lifetime. More importantly, it's a hypothesis, not a "finding." There's very little evidence, largely because it's difficult to test. I also doubt it's specific enough to test.

Differences between the conscious and unconscious mind should usually correspond to differences between the goals of a person and the "goals" of the genome, or else between subgoals important today and subgoals important in the EEA.

That's evolutionary psychology, and it's rather at odds with the previous claim! (The first part might be a way of making the original claim more specific, but it's rather different from what people usually say.)

You misremember my position. I have not argued that the unconscious is right and the conscious wrong. I have argued (e.g., here) for trying to find a compromise to make peace between the conflicting parts of yourself. At the last OB/LW meetup I argued this point in person to several people, all of whom instead favored vigilant internal war.

I generally advocate internal peace, Robin, but oddly we haven't discussed this. Maybe a phone call sometime when we both have a minute? Ideally, text first.

This comment reads like an allegorical prescription for how one ought to resolve conflicts within oneself ;).

Conscious minds are potentially rational, informed by morality, and qualia-laden. Unconscious minds aren't.

Your entire argument for preferring conscious over unconscious minds is this last quick throwaway sentence? That's it? Come on, why can't unconscious minds be rational, informed by morality, or qualia-laden? And why are those the features that matter? Are you really implying this is so completely obvious that this one quick sentence is all that needs to be said? Declaring conscious goals to be the goals of the "person", versus unconscious goals as goals of the genome, just presupposes your answer.

I guess I did consider it that completely obvious. If it's causing so much controversy, maybe I need to think about it more.

I'm defining my "conscious self" as the part of my mind that creates my verbal stream of thought and which controls what I believe I would do if I had infinite willpower. I'm defining "unconscious self" as the source of my inability to always go through with my conscious mind's desires.

By definition, my unconscious mind has no qualia / experiences / awareness, because if it did it would be part of my conscious mind (I suppose it's possible that it is a "different person" who has experiences that are not my experiences, but I have never heard anyone propose this before and don't know of any evidence for it.)

When I use the word "I", I refer to the locus of my qualia and experiences, and thus to my conscious mind. I have no selfish reason to care about my unconscious mind, because its state as happy or unhappy has no relationship to my state as happy or unhappy except insofar as the unconscious mind can influence the conscious mind. And I have no moral reason to care about my unconscious mind, because in my moral system only aware beings deserve moral consideration; the unconscious mind has no more awareness than a rock and deserves no more moral consideration than a rock does.

Along with my qualia, I identify with my rationality. My rationality is what tells me that there's very probably no such thing as ghosts. This satisfies my conscious mind, which then accepts that there's no such thing as ghosts. It does not satisfy my unconscious mind, which continues to make me flee haunted mansions or sleep with the lights on or something. My rationality is what tells me that I should ask that girl out because the worst she could do is say no. My conscious mind accepts that. My unconscious mind continues to use all of its resources to hold me back from asking.

It seems vanishingly unlikely that my unconscious actually has as supergoals "Flee haunted mansions" and "Never ask girls out" and is rationally achieving them. It seems much more likely that the unconscious is enacting genetic directives like "Avoid danger" and "Avoid taking risks that could lower your social status", but is too irrational to realize that although equivalents of these situations might have been problems in the EEA, they are no longer problems today. It thinks that "Flee haunted mansions" and "Never ask girls out" are appropriate subgoals of the supergoals "Avoid danger" and "Avoid taking risks that could lower your social status", but in fact they aren't. Since it's too dumb to realize this, I feel suitably superior to it to ignore its opinions.

The same is true of morality. My unconscious is what tells me to value the life of a photogenic American more than the life of a starving Ethiopian, to value the life of one specific person more than the life of fifty statistical people, to refuse to push the fat man onto the tracks in the trolley problem no matter how many lives it would save, et cetera. If another person had this morality, I wouldn't respect it in them, and if my own unconscious has this morality, I don't respect it in it either.

Let me also admit that I have a bias here. I've got obsessive-compulsive disorder. It means that my unconscious mind frequently tells me things like "Close that door there eighty-two times, or I will throw a fit and not let you feel comfortable for the rest of the day." I know that feeling is caused by miswired circuits in the basal ganglia. Why should I give miswired circuits in the basal ganglia the same respect as I give myself, a full intelligent human being?

All of my other unconscious urges seem closer to that urge to close the door eighty-two times than they do to anything rational or worth respecting.

Since it's too dumb to realize this, I feel suitably superior to it to ignore its opinions.

Which is why you then experience akrasia. Or, if I was going to anthropomorphize(?), I'd say, "which is why it feels entitled to ignore your opinions right back". ;-)

See, "your" opinions don't count for all that much in what you actually do. If you want to change your behavior, it's your "unconscious" opinions that you need to change. But you won't change them without first being aware of them, and if you keep the attitude you have, you'll have no real inclination to pay attention to them or seriously consider them when designing for your requirements... thereby ensuring that your unconscious mind will be stuck with low-quality ways of getting those requirements met!

In other words, the reason your unconscious desires have such poor quality of thought-throughness and execution is precisely because you refuse to consciously participate in the process.

Clearly, arguing for which of the two points of view is the right one wasn't the focus of the post; stating the problem was.

The example with the unrepentant drunk reminds me of this joke:

A hunter goes into the woods to hunt a bear. He carries his trusty .22 rifle with him. After a while, he spots a very large bear, takes aim, and fires. When the smoke clears, the bear is gone.

A moment later the bear taps the hunter on the shoulder and says, "No one shoots at me and gets away with it. You have two choices: I can rip your throat out and eat you, or you can drop your trousers, bend over, and I'll do you in the ass."

The hunter decides that anything is better than death, so he drops his trousers and bends over, and the bear does what he said he would do. After the bear has left, the hunter pulls up his trousers again and staggers back into town. He's pretty mad.

He buys a much larger gun and returns to the forest. He sees the same bear, aims, and fires. When the smoke clears, the bear is gone. A moment later the bear taps the hunter on the shoulder and says,

"You know what to do."

Afterwards, the hunter pulls up his trousers, crawls back into town, and buys a bazooka. Now he's really mad. He returns to the forest, sees the bear, aims, and fires. The force of the bazooka blast knocks him flat on his back. When the smoke clears, the bear is standing over him and says,

"You're not doing this for the hunting, are you?"

My conscious mind is more disconnected from my natural human feelings and my natural human motives and agendas than most people's is. As best as I can tell, that makes it more difficult for me to motivate myself to do things that any sane person would agree I need to do regardless of the details of what my goals or motives are. But also, as best as I can tell, my conscious mind is also significantly less prone to self-deception than most people's is. (And yes, I realize that in this community, that is a boast.) The fact that the conscious mind tended to serve in the EEA as the public-relations officer of the mind does not in any way make it less probable that every human is completely dependent on the conscious mind for rationality -- at least the kind of rationality necessary for science, philanthropy and effective pursuit of long-term self-interest.

I think there's also a short-term/long-term thing going on with your examples. The drunk really wants to drink in the moment; they just don't enjoy living with the consequences later. Similarly, in the moment, you really do want to continue reading Reddit; it's only hours or days later that you wish you had also managed to complete that other project which was your responsibility.

I bet there's something going on here, about maximizing integrated lifetime happiness, vs. in-the-moment decision-making, possibly with great discounts to those future selves who will suffer the negative effects.

A ton of recent work in social psychology and social neuroscience suggests that quite a bit of the 'public relations officer' in the brain is processed unconsciously. I'll probably write a post on this eventually.

I like how you've identified the subtle value judgment in a supposedly value-free scientific belief.

However, you lost me at the end here:

But notice how the theory you choose also has serious political implications. Consider how each of the two ways of looking at the problem would treat this example:

A wealthy liberal is a member of many environmental organizations, and wants taxes to go up to pay for better conservation programs. However, she can't bring herself to give up her gas-guzzling SUV, and is usually too lazy to sort all her trash for recycling.

Either side would say (after becoming sufficiently informed and thinking about the issue long enough):

"Ms. SUV Liberal's consumption makes no noticeable difference to the environment. The only way to achieve her environment goals is through collective action -- i.e., add a 'cooperation enforcement mechanism' to this Prisoner's Dilemma. Otherwise, individual 'cooperation' (reducing fuel consumption, recycling, etc.) simply rewards everyone else who 'defects', and the desired state of 'sustainable global society' is an unstable node.

"Ms. SUV Liberal's decision to drive an SUV and not recycle might have symbolic value, but that dynamic was not the focus of your example, and therefore, her driving of an SUV is not holding back progress toward her professed goal saving the environment."

Is there another political example you can give?

I myself throw my support squarely behind the Naive Theory. Conscious minds are potentially rational, informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think?

Because of your clarification in footnote 5, I agree with the point you're making here, but I think you've spoken too broadly. Unconscious minds do a lot of difficult, useful cognitive labor: pattern recognition, regularity detection, and yes, even value judgments. While we'll often be able to identify where the unconscious mind is not acting optimally, that's a far cry from "who cares what the unconscious thinks".

When I am feeling poorly, there is a part of my mind that seems to be able to veto pretty much any activity I am engaged in except for primitive motor actions. The activities that get the veto seem to be the kinds of activities that would scare or repel a small boy. Even when I don't feel particularly poorly, my trying to do something extremely scary or repellent to the little boy will probably draw a veto.

The part with the veto power, which I sometimes refer to as the Saboteur, seems to be able to flush my working memory. For example, it can cause me to forget where I put something I had in my hand a moment ago. The thing I had in my hand tends to be something I need to continue with the activity the little boy is trying to veto. If the little boy is putting up a particularly strong fight, then after I retrieve the item, I often find to my amazement that (for no good reason I can imagine) I have put it down again in a different place, and again, even though I just put it down, I cannot recall where. I recall going through four cycles of misplacing an item and retrieving it, one cycle right after the other.

I have some brain damage, which probably significantly impairs my working memory, and I am currently very confused about how many of these "sabotage incidents" would have happened if I had not incurred the brain damage. Obviously, if none of them would have happened without the brain damage, "sabotage incident" is a misnomer, and I am assigning an agenda or a motive to cognitive impairments that in reality have none. I frequently forget why I got up out of my chair. I frequently forget whether the pills I just swallowed contain important pill X. Most of these failures of working memory have no rebellious or sabotaging motive behind them; the question is whether some of them do. If the reader has any insights into this, I am all ears.

When I say that the kinds of things that seem to get a veto seem to be the kinds of things that would scare a young boy, the reader will tend to start to suspect that I had a brutalizing childhood, and the reader would be right.

I seem to be in a mood for self-disclosure today. I publish this only because Richard Hollerith is an alias that I do not plan to use for, e.g., job hunting and because I made a note to re-read this comment at a later date to re-consider whether I want it on the public internet. I ask everyone not to quote the personal parts of this comment because of course if I do decide to delete or prune the original, I would be unable to delete the quotes.