Defeating Ugh Fields In Practice

Unsurprisingly related to: Ugh fields.

If I had to choose a single piece of evidence from which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance. In short, offering people small cash incentives vastly improves their adherence to life-saving medical regimens. That's right. For a significant number of people, a small chance at winning $10-100 can be the difference between whether or not they stick to a regimen that has a very good chance of saving their life. This technique has even shown promise in getting drug addicts and psychiatric patients to adhere to their regimens, for as little as a $20 gift certificate. This problem, in the aggregate, is estimated to cost about 5% of total health care spending, roughly $100 billion, and that may not properly account for the utility lost by those who are harmed beyond repair. To claim that people are making a reasoned decision between the payoffs of taking and not taking their medication, and that they can be persuaded to change their behaviour by a payoff of about $900 a year (or less), is to crush reality into a theory that cannot hold it. This is doubly true when you consider that some of these people were fairly affluent.

A likely explanation of this detrimental irrationality is something close to an Ugh field. It must be miserable having a life-threatening illness. Being reminded of it by taking a pill every single day (or more frequently) is not pleasant. Then there's the question of whether you already took the pill. Because if you take it twice in one day, you'll end up in the hospital. And Heaven forfend your treatment involves needles. Thus, people avoid taking their medicine because the process becomes so unpleasant, even though they know they really should be taking it.

As this experiment shows, this serious problem has a simple and elegant solution: make taking their medicine fun. As one person in the article describes it, using a low-reward lottery made taking his meds "like a game"; he couldn't wait to check the dispenser to see if he'd won (and take his meds again). Instead of thinking about how they have some terrible condition, patients get excited thinking about how they could be winning money. The Ugh field has been demolished, with the once-feared procedure now associated with a tried-and-true intermittent reward system. It also wouldn't surprise me in the least if people who are unlikely to adhere to a medical regimen are the kind of people who really enjoy playing the lottery.

This also explains why rewarding success may be more useful than punishing failure in the long run: if a kid does his homework because otherwise he doesn't get dessert, it's labor. If he gets some reward for getting it done, it becomes a positive. The problem is that if he knows what the reward is, he may anchor on already having the reward, turning it back into negative reinforcement - if you promise your kid a trip to Disneyland if he gets above a 3.5, and he gets a 3.3, he feels like he actually lost something. The use of a gambling mechanism may be key for this. If your reward is a chance at a real reward, you don't anchor on already having the reward, but the reward still excites you.
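As a toy illustration of how cheap such a lottery can be, here is a minimal sketch; the win probability and prize size are my own illustrative guesses, not figures from the article:

```python
import random

def daily_draw(win_prob=0.03, prize=50.0, rng=random):
    """One draw per dose taken: a small chance at a cash prize."""
    return prize if rng.random() < win_prob else 0.0

def expected_annual_payout(win_prob=0.03, prize=50.0, doses_per_year=365):
    """Expected cost per patient if every daily dose is taken."""
    return win_prob * prize * doses_per_year

# A 3% daily chance at $50 costs, in expectation, well under $900/year.
print(expected_annual_payout())  # 547.5
```

The point is that the expected value is almost beside the point: the daily roll of the dice, not the $1.50 a day it averages out to, is what carries the excitement.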

I believe that the fact that such a significant problem can be overcome with such a trivial solution has tremendous implications, the enumeration of all of which would make for a very unwieldy post. A particularly noteworthy issue is the difficulty of applying such a technique to one's own actions, a problem which I believe has a fairly large number of workable solutions. That's what comments, and, potentially, follow-up posts are for. 

Comments


A particularly noteworthy issue is the difficulty of applying such a technique to one's own actions, a problem which I believe has a fairly large number of workable solutions.

I have had success working around 'Ugh' reactions to various activities. I took the direct approach. I (intermittently) use nicotine lozenges as a stimulant while exercising. Apart from boosting physical performance and motivation it also happens to be the most potent substance I am aware of for increasing habit formation in the brain.

Perhaps more important than the, you know, chemical sledgehammer, is the fact that the process of training myself in that way brings up "anti-Ugh" associations. I love optimisation in general and self-improvement in particular. I am also fascinated by pharmacology and instinctively 'cheeky'. Having never even considered smoking a cigarette, and yet using the disreputable substance 'nicotine' in a way that can be expected to improve my health and well-being, is exactly the sort of thing I know my brain loves doing.

I have had success working around 'Ugh' reactions to various activities. I took the direct approach. I (intermittently) use nicotine lozenges as a stimulant while exercising. Apart from boosting physical performance and motivation it also happens to be the most potent substance I am aware of for increasing habit formation in the brain.

I like this idea, and might even adopt it myself. But I feel I should emphasize, for anyone who considers adopting this strategy, that it absolutely requires proper bookkeeping, a predetermined rate limit, and predetermined blackout periods. The rate limit protects you if a change in schedule increases the chem-reward frequency by too much. The blackout periods ensure you'll find out if any sort of dependency forms.

Respect for the process is important, and 'proper bookkeeping' sounds good in theory, but I know that this suggestion would 'absolutely' make the process counterproductive for me. Trying it would utterly destroy my exercise programming rather than helping it. Ugh! The opposite of what would work.

Cycling (drugs, especially stimulating ones) is important, both to prevent withdrawal effects and to ensure continued usefulness. But I've learned that it is best to do things in a way that works for me.

How about if it were handled by a button in your phone's UI, which would log the event, roll dice to determine whether you get the reinforcement that time, and enforce rate limits automatically?

That is something I would do. In fact, by preference I would spend a day coding it up instead of two hours, in aggregate, on manual bookkeeping. "Flow" vs "Ugh"!
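A minimal sketch of such an app's core logic, assuming the features described above; the reward probability, daily rate limit, and blackout schedule are placeholder values, not recommendations:

```python
import random
from datetime import date, timedelta

class ReinforcementLogger:
    """Log an activity, roll dice for an intermittent reward, and enforce
    a daily rate limit plus periodic reward-free 'blackout' weeks."""

    def __init__(self, win_prob=0.3, max_rewards_per_day=2,
                 blackout_every_weeks=4, rng=None):
        self.win_prob = win_prob
        self.max_rewards_per_day = max_rewards_per_day
        self.blackout_every_weeks = blackout_every_weeks
        self.rng = rng or random.Random()
        self.start = date.today()
        self.log = []  # list of (date, was_rewarded) entries

    def in_blackout(self, day):
        # Every Nth week is reward-free, so any dependency becomes visible.
        week = (day - self.start).days // 7
        return week % self.blackout_every_weeks == self.blackout_every_weeks - 1

    def record(self, day=None):
        """Log one occurrence of the activity; return True if rewarded."""
        day = day or date.today()
        rewards_today = sum(1 for d, r in self.log if d == day and r)
        allowed = (not self.in_blackout(day)
                   and rewards_today < self.max_rewards_per_day)
        rewarded = allowed and self.rng.random() < self.win_prob
        self.log.append((day, rewarded))
        return rewarded
```

Pushing the button in the UI would just call `record()`; the dependency check falls out of the blackout weeks, during which the habit has to survive without reinforcement.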

I should note that the role nicotine lozenges are playing here is not primarily that of a training reward, like giving the rat electronically stimulated orgasms when it presses the lever. Nicotine isn't particularly strong in that role compared to alternatives (such as abusing ritalin), at least when it is not administered as a massive hit straight into the brain via the lungs. No, the particular potency of nicotine is that it potentiates the formation of habits for activities undertaken while under the influence, by means more fundamental than a 'mere' stimulus-reward mechanism - habits that turn out to be harder to extinguish than an impulse to take a drug. This is what makes smoking so notoriously hard to quit even with patches, and makes fake cigarettes to suck on useful.

In a different thread I've been discussing nootropics that enhance learning via the acetylcholine system. Half of those acetylcholine receptors are called nAChRs (Nicotinic acetylcholine receptors). This is not a coincidence.

The other fascinating (to me) fact regarding nicotine is that it has the opposite effect on the sensitivity of the brain's reward mechanism from that of other stimulatory drugs of abuse. Where abusing meth, cocaine or coffee will make all rewards you experience in life less salient when you stop medicating, the reverse occurs with nicotine. The systems get downregulated, but that mechanism is itself countered by the addition of more receptors, leaving a net boost. This means that if you stop using nicotine, food starts to taste really good (and you may gain weight!).

It would be very cool to read a series of top level posts about this experience. Perhaps...

  1. The first would give the basic idea, plus a set of warnings and provisos as to who might be seriously hurt by trying to replicate your results, and general cautions. Perhaps you could create a sub-area in the comments for other people to suggest reasons for caution, to be voted up and down?

  2. The second post would give some background theory as to why the general approach should be expected to work, possibly with some links to some psycho-pharmacology and so on. Also useful would be to suggest a way to measure success and/or detect negative side effects - possibly with a logging system like this?

  3. Finally, you would provide practical instructions for how to "build a habit": habit design, what to take, and when, with an explanation of the benefits and any side effects or worries that you were harboring on the side.

I think that would be enough for a brave soul or two (who was not likely to boomerang into a bad situation, like falling back into a smoking habit) to try to replicate your success in a documentable and relatively safe way, to see if they got similar benefits.

It would be hilarious (and almost plausible) if, five years from now, one of the primary reasons people gave for not smoking was because it interfered with their use of the "wedrifid method" for nicotine assisted positive habit formation :-)

I like your thoughts! Particularly that part about the 'wedrifid method'. A place where posts somewhat like what you mention are commonplace is imminst.org.

Before I got into anything quite so experimental I would probably want to post on some basics. There is some real low hanging fruit out there!

Please do! I would be very interested in a series on "use of chemicals to increase willpower". I would even contribute...

I don't know if you've written anything in the last ~year since (pretty sure you haven't), so I've started compiling information at http://www.gwern.net/Nootropics#nicotine

I would like to second patrissimo in a way more concrete than merely upvoting you. Have you made any progress on this?

The idea of using something as powerful as nicotine both terrifies and tempts me, and I'm not sure I'd want to try it without considerable documentation.

Since positive reinforcement is generally more effective than punishment, we could apply this idea across society.

Why pay police officers to sit on the side of the road all day, pulling over speeders and writing citations? How about automated cameras that can randomly reward drivers with $10-$20 for driving the speed limit? Shouldn't we expect safer drivers and lower overall expense?

Even if it were proven effective, the reason it won't take off with traffic or medication is that most people want to see wrong-doers punished more than they want to see less wrong-doing. Don't take your meds? You deserve your illness. Speeding (even though I do it too)? You deserve your $250 fine. You did the right thing? Woopty-do. What, you want a cookie?

Sure, this applies to punishments in society, but for self-motivation it is the opposite. I want my self-motivation to be fun, not punitive.

I seem to recall reading about this actually being tried, with the crime in question being not cleaning up after one's dog.

There's nothing stopping us from combining positive and negative reinforcement. I think it would be a pretty easy sell to propose adding the random, small no-speeding rewards without removing the existing laws and fines.

Nothing except for large segments of the population that will revolt at the very idea. Politicians win by promising to be "tough on crime" regardless of the real result. People like to think most others are much, much worse humans; and they like to see them punished for it to reinforce their belief. Paying a drug addict to get clean won't be popular, but paying people for driving "normally" won't fare much better.

I agree, though, we would ideally keep some/most existing laws and fines while cutting back on the number of officer-hours to make the immediate costs balance.

Paying a drug addict to get clean isn't rewarding good behavior so much as rewarding the cessation of bad behavior. This has clear problems. For one thing, it isn't random like the "follow the speed limit for a chance at a small reward" scheme.

A true equivalent would be rewarding random people for not being on drugs, including the population of former addicts that have since gone clean. Being on drugs would be a guarantee of not getting this reward.

I find that a great way to self-motivate is to tie an action to intermittent, stimulating rewards. That's how mice get addicted to pushing levers, right? That's how people get addicted to WoW and similar games, right? But you can harness the power for good instead of evil.

  1. Exercise. I keep an exercise log in a public forum. Every now and then, someone comes by with a comment like "Great workout!" The prospect of getting those intermittent, stimulating responses -- which I only get if I post regularly -- is great motivation to keep exercising.

  2. Studying. I often find that my problem, when reading a technical book, is that I finish a chapter and don't review and summarize it. I'm in too much of a hurry. Solution: now I post summaries on a blog. I get intermittent rewards in the form of blog hits and comments.

The general theme here is that publicizing your goals is an easy, effective way to get intermittent rewards.

It occurs to me that it might be very useful to have some sort of 'hub' for such blogs - something similar to the autism hub (which I don't actually recommend; all the bloggers I've liked have left the hub in the last 6 months or so).

It seems to me that that would have the potential to increase the chance of getting positive feedback, and also the chance of getting feedback if you start to slip - if the blogs are sorted by the date of their most recent post, it's fairly easy for someone to scroll down to the last few entries and post comments along the lines of "hey, are you still doing this?". (Perhaps each participant could commit to making at least one comment of either type per month, or something.)

Sometimes an ugh field exists for good reasons. Sometimes a med has bad side effects which more than counterbalance its good effects. Sometimes a diet is ill-conceived.

Do methods which are just aimed at getting compliance need to be matched with methods of checking on whether the reinforced behavior is actually a good idea?

This also explains why rewarding success may be more useful than punishing failure in the long run: if a kid does his homework because otherwise he doesn't get dessert, it's labor.

The overjustification effect suggests caution may be warranted when giving rewards for desired behaviour.

In my experience, the rational actor model is generally more like a "model" or an "approximation" or sometimes an "emergent behavior" than an "assumption," and people who want us to criticize it as an "assumption" or "dogma" or "faith" or some such thing are seldom being objective.

(If you think this criticism is merely uninformed or based on a deep misunderstanding, then perhaps it would be rational to turn the phrase "the rationality assumption of neoclassical economics" in your opening paragraph into a hyperlink to some neoclassical authority you are engaging.)

There are various individual cases where it is quite justifiable to beat up neoclassical economists for trying to push rationality too far, either against the clear evidence in simple situations or beyond the messy evidence in complicated situations. As an example of the latter, my casual impression is that the running argument at Falkenblog against the Capital Asset Pricing Model could well be a valid and strong empirical critique. But there are also various individual cases where neoclassical economists can justifiably fire back with "[obvious rational reactions to] incentives matter [and are too often underappreciated]!" E.g., simple clean natural experiments like the surprisingly large perverse consequences of a few-thousand-dollar tax incentive for babies born after a sharp cutoff date, or strong suggestive evidence in large messy cases like responses to price controls, high marginal tax rates, or various EU-style labor market policies.

And it seems to me that, w.r.t. our rationality when we hold a discussion here about rationality in the real psychohistorical world, the elephant in the room is how commonly people's lively intellectual interest in pursuing the perverse consequences of some shiny new behavioral phenomenon in the real world turns out to be in fact an enthusiasm for privileging their preference for governmental institutions by judging market institutions (and evidence for them, and theoretical arguments for them, and observed utility of outcome from them) by a qualitatively harsher standard. The real world is dominated by mixed economies, so the implications of individual irrationality for existing governmental institutions (like democracy and hierarchical technocratic regulatory agencies) have at least as much practical importance as the implications for some idealized model of pure free markets. And neoclassical economists have some fairly good reasons (both theoretical and empirical) to expect key actors in market institutions to display more of some relevant kinds of rationality than (e.g.) random undergrads display in psych experiments, while AFAICS political scientists seldom have comparably good reasons to expect it in institutions in general.

I commend this post for picking a telling example of behavioral anomalies which show a strong impact in the real world (as opposed to, e.g., in bored undergraduates working off a psych class requirement by being lab rats). But I see nothing essentially market-specific about this anomaly. Thus, it is obvious why it is interesting regarding self-help w.r.t. ugh fields, and it is not obvious why when considering its application to the broader world, we should focus on its importance for economics-writ-very-small as opposed to its importance for existing mixed economies. And as above, unless you link to someone significant who actually makes your "rationality assumption" so broadly that this experiment would falsify it, I don't think you've actually engaged your enemy, merely a weak caricature.

This reminds me of a thought I had before:

University costs thousands. Imagine that you received, along with your exam marks, $1 per % for your average grade.

It's meaningless, really, compared to the value of the degree, but... it feels like you're getting something real for that work. You're directly receiving money, rather than earning the chance to earn it in the future.

Hell, you could even use this as a replacement for merit-based partial subsidies (though not for fully free education). Everybody pays 1000 at the beginning of the academic year, then over time they 'earn' back a percentage proportional to their grades, eg. 60% or so for a straight-A student.
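A sketch of the proposed scheme; the $1000 deposit and 60% ceiling come from the figures above, while the linear scaling with GPA is my own assumption:

```python
def tuition_rebate(gpa, deposit=1000.0, max_fraction=0.6, max_gpa=4.0):
    """Earn back up to 60% of the deposit, scaled linearly with GPA."""
    return deposit * max_fraction * min(gpa, max_gpa) / max_gpa

print(tuition_rebate(4.0))  # straight-A student: 600.0
print(tuition_rebate(3.0))  # 450.0
```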

The one truly massive drawback to this is that it would strongly encourage students of little means to pursue courses of study populated by easy graders. It's my experience that more practical courses of study, like Accounting, Engineering, and the hard sciences, tend to be much harder to succeed in than, say, Art History or English Literature. So, while a good idea, this may nudge students towards academic tracks with lower expected earnings attached to them.

Reward grades more and students will respond. The fact that we are so worried about small amounts of money causing large distortions in behavior is a sign of how powerful we expect this incentive to be. If maximizing your grades is not a good way to learn then that is a sign we need to be evaluating students on a different metric, presumably one that rewards difficulty.

It's my experience that more practical courses of study, like Accounting, Engineering, and the hard sciences, tend to be much harder to succeed in than, say, Art History or English Literature.

Erk. I don't disbelieve your claims but the very thought seems so bizarre to me. In the hard sciences you get to go do exams that are worth about 90% of the mark, mostly objective and based on some rules from nature that are fairly easy to grasp. The alternative is trying to learn an endless stream of teacher's passwords!

I think it's just a human trait: we find it much easier to punish wrongness than not-very-rightness. On a math test, almost every answer you could give to a question is wrong. On an English Literature test, virtually any interpretation of the text is a right answer, provided you can back it up in some way, so even if your answer is flawed, it's easy to avoid saying anything obviously wrong. Furthermore, I think the culture of grading in the two differs greatly - the type of personality who is drawn to be a professor of creative writing is rather different than that of one who becomes a professor of electrical engineering, and I suspect the first is far less inclined to fail or treat people harshly.

the type of personality who is drawn to be a professor of creative writing is rather different than that of one who becomes a professor of electrical engineering, and I suspect the first is far less inclined to fail or treat people harshly.

Now that is an interesting consideration. You could well be right in general. But my anticipation of personal experience is of getting treated more harshly by a professor of creative writing than by one of engineering. This is because I can far more easily elicit the desired behavior from the engineering professor - that is, the desired behavior of giving me top marks and not interfering too much with my education. If all goes well I may even be able to avoid him learning my name.

With a professor in something less objective I expect harsh marking for not optimally conforming to the (possibly flawed) positions that I was supposed to have guessed in the assessments. I am also more at risk of harsh treatment for political reasons. Given that their way of thinking is less like mine I am less able to predict what sort of things will piss them off and so provoke grudges more easily. I may say something that seems obvious to me but incidentally undermines something they care about. Once that happens I am not all that talented at making bitchy people not be hostile. My instinct is to avoid situations where I am potentially vulnerable to capricious whims.

(Yes, my personal anticipation is different than that of most people!)

That has less to do with professors' personalities than with the nature of their teaching.

An engineering professor may very well be a fanatical Nazi who would gladly fail any students he discovered harbouring pro-democracy views, but he's not going to discover them unless you wear a political t-shirt while handing over your home assignments. If he taught History of Contemporary Literature, however, the issue would be all but guaranteed to emerge.

Not that conflicts over personal views are limited to the humanities, of course. Imagine if Andrew Tanenbaum had been teaching at Helsinki in the early 90s...

An engineering professor may very well be a fanatical Nazi who would gladly fail any students he discovered harbouring pro-democracy views, but he's not going to discover them

That reminds me of the biology teacher who, when asked to write letters of recommendation, demanded that his students swear allegiance to evolution. A student sued in 2003. Some time between February and April, he added a little disclaimer. That form remains today. Of course, this was only for letters, not grades, and it was all put forward in writing ahead of time.

That has less to do with professors' personalities than with the nature of their teaching.

The nature of their teaching matters but I place specific emphasis on the professor's personalities:

I am also more at risk of harsh treatment for political reasons. Given that their way of thinking is less like mine I am less able to predict what sort of things will piss them off and so provoke grudges more easily. I may say something that seems obvious to me but incidentally undermines something they care about. Once that happens I am not all that talented at making bitchy people not be hostile. My instinct is to avoid situations where I am potentially vulnerable to capricious whims.

The effect of personality is real. And I am not merely talking hypothetically here. It can bite me in the arse if I'm not careful. It is all too easy to overestimate how similar people are to ourselves, and doing so comes at a great price.

I like this idea too, but I suspect it would be quickly hijacked - it's easier to bug your instructor until she lets you have a better grade than to study. Ask most "tough graders" how their student reviews compare to "easy graders."

I always hated those assessments that weren't marked anonymously!

Someone I know actually started a business around this idea:

http://www.ultrinsic.com/

In the intro to Dan Ariely's new book he describes dealing with his own medical compliance problem: he had to take some very rough hepatitis meds that made him nauseous. He essentially bribed himself with movies, which he liked a lot, specifically arranging the details to create positive associations (he would start the movie right away after giving himself the shot, before the nausea would set in). He was apparently the only one who finished the course (the treatment was experimental), so +1 for behavioral economists.

One wonders what effect his desire for his own theory to work might have played in this... Still, a good idea.

There's a difference between the psychology of being entered in a lottery each time you take your medication and receiving cash every time you take it.

There is also evidence that bribing people reduces their inherent interest in an activity. There was a study that showed that kids paid to do homework did it enthusiastically for a while, but then quickly lost interest over time as they became habituated to the possibility of reward and began to lose inherent interest in the material.

You have it all wrong. Your "ugh" field should go into their utility function! Whether or not they invest the resources to overcome that "ugh" field and save their life is endogenous to their situation!

You are making the case for rationality, it seems to me. Your suggestion may be that people are emotional, but not that they are irrational! Indeed, this is what the GMU crowd calls "rationally irrational." Which makes perfect sense--think about the perfectly rational decision to get drunk (and therefore be irrational). It has costs and benefits that you evaluate and decide that going with your emotions is preferable.

I see this comment as not understanding the definition of "rational" in economics, which would be simply maximizing utility subject to costs such as incomplete information (and endogenous amounts of information), emotional constraints and costs, etc.

I appreciate the Devil's Advocacy. The simple issue, though, is that if you use a definition of "rational" that encompasses this behaviour, you've watered the word down to oblivion. If the behaviour I described is rational, then, "People who act always act rationally," is essentially indistinguishable from, "People who act always act." It's generally best to avoid having a core concept with a definition so vacuous it can be neatly excised by Occam's Razor.

You are just wrong. These are people whose utility function does not place a higher utility on "dying but not having to take my meds".

If your preferred theory takes a human and forces the self-contradictions into a simple rational agent with a coherent utility function, you must resolve the contradictions the way the agent would prefer them to be resolved if it were capable of resolving them intelligently. If your preferred theory does not do this, then it is a crap theory. A map that does not describe the territory. A map that is better used as toilet paper.

"These are people whose utility function does not place a higher utility on 'dying but not having to take my meds'."

Why are you making claims about their utility functions that the data does not back? Either people prefer less to more, knowingly, or they are making rational decisions about ignorance, and not violating their "ugh" field, which is costly for them.

How is that any different than a smoker being uncomfortable quitting smoking? (Here I recognize that smoking is obviously a rational behavior for people who choose to smoke).

I get it. You define humans as rational agents with utility functions of whatever it is that they happen to do because it was convenient for the purposes of a model they taught you in Economics 101. You are still just wrong.

Your posts under this name have the potential for some hilarious and educational trolling, though you have some stiff competition if you want to be the best.

You should probably refine your approach a little bit. Links to the literature would give you more points for style. Also, the parenthetical aside was a bit much - it made the trolling too obvious.

An alternative to making things fun is to make things unconscious and/or automatic. No healthy individual complains about insulin production because their pancreas does it for them unconsciously, but diabetic patients must actively intervene with unpleasant, routine injections. One option would be to make the injections less unpleasant (make the process fun and/or less painful), but a better option would be to bring them in line with non-diabetic people and make the process unconscious and automatic again.

The problem is that if he knows what the reward is, he may anchor on already having the reward... The use of a gambling mechanism may be key for this.

Brilliant formulation of the problem & solution.

(Very successful) animal trainers using reinforcement techniques make a distinction between a bribe and a reinforcement, which was never completely clear to me, but appears to be addressing the same problem. But one thing they do, "shaping" the expected behavior by always changing it a little bit to get closer to the "target", might be serving the same purpose as the gambling mechanism: preventing anchoring on obtaining a reward in a specific manner.