Knowing About Biases Can Hurt People

Once upon a time I tried to tell my mother about the problem of expert calibration, saying:  "So when an expert says they're 99% confident, it only happens about 70% of the time."  Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added:  "Of course, you've got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—"

And my mother said:  "Are you kidding?  This is great!  I'm going to use it all the time!"

Taber and Lodge's "Motivated Skepticism in the Evaluation of Political Beliefs" describes the confirmation of six predictions:

  1. Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.
  2. Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.
  3. Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.
  4. Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.
  5. Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.
  6. Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.

If you're irrational to start with, having more knowledge can hurt you.  For a true Bayesian, information would never have negative expected utility.  But humans aren't perfect Bayes-wielders; if we're not careful, we can cut ourselves.
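A minimal sketch of the standard decision-theoretic argument behind that claim, with notation that is mine rather than the post's: write $U(a,\theta)$ for the utility of action $a$ when the world is in state $\theta$, and suppose the evidence $X$ is free to observe. Choosing before looking yields the maximum of an expectation; looking first yields the expectation of a maximum, which is never smaller:

$$\max_a \mathbb{E}\big[U(a,\theta)\big] \;=\; \max_a \mathbb{E}_X\Big[\mathbb{E}\big[U(a,\theta)\mid X\big]\Big] \;\le\; \mathbb{E}_X\Big[\max_a \mathbb{E}\big[U(a,\theta)\mid X\big]\Big].$$

Equality holds only when the observation could never change which action is best.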

I've seen people severely messed up by their own knowledge of biases.  They have more ammunition with which to argue against anything they don't like.  And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich's "dysrationalia" sense of stupidity.

You can think of people who fit this description, right?  People with high g-factor who end up being less effective because they are too sophisticated as arguers?  Do you think you'd be helping them—making them more effective rationalists—if you just told them about a list of classic biases?

I recall someone who learned about the calibration / overconfidence problem.  Soon after he said:  "Well, you can't trust experts; they're wrong so often, as experiments have shown.  So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—" and went off into this whole complex, error-prone, highly questionable extrapolation.  Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.

I told the one about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn't like, he accused me of being a sophisticated arguer.  He didn't try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself.  He had acquired yet another Fully General Counterargument.

Even the notion of a "sophisticated arguer" can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don't like.

I endeavor to learn from my mistakes.  The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic.  And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects.  I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.

I wanted to get my audience interested in the subject.  Well, a simple description of conjunction fallacy and representativeness would suffice for that.  But suppose they did get interested.  Then what?  The literature on bias is mostly cognitive psychology for cognitive psychology's sake.  I had to give my audience their dire warnings during that one lecture, or they probably wouldn't hear them at all.

Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile.  First, do no harm!

 

Part of the Against Rationalization subsequence of How To Actually Change Your Mind

Next post: "Update Yourself Incrementally"

Previous post: "The Genetic Fallacy" (end of previous subsequence)

Comments


I've pointed before to this very good review of Philip Tetlock's book, Expert Political Judgment. The review describes the results of Tetlock's experiments evaluating expert predictions in the field of international politics, where they did very poorly. On average the experts did about as well as random predictions and were badly outperformed by simple statistical extrapolations.

Even after going over the many ways the experts failed in detail, and even though the review is titled "Everybody’s An Expert", the reviewer concludes, "But the best lesson of Tetlock’s book may be the one that he seems most reluctant to draw: Think for yourself."

Does that make sense, though? Think for yourself? If you've just read an entire book describing how poorly people did who thought for themselves and had a lot more knowledge than you do, is it really likely that you will do better to think for yourself? This advice looks like the same kind of flaw Eliezer describes here, the failure to generalize from knowledge of others' failures to appreciation of your own.

There's a better counterargument than that in Tetlock - one of the data points he collected was from a group of university undergraduates, and they did worse than the worst experts, worse than blind chance. Thinking for yourself is the worst option Tetlock considered.

Actually, when I was rereading the comments and saw your mention of Tetlock, I thought you would point out the bit where he noted the hedgehog predictors made worse predictions within their area of expertise than outside it.

Humans aren't just not perfect Bayesians. Very, very few of us are even Bayesian wannabes. In essence, everyone who thinks that it is more moral/ethical to hold some proposition than to hold its converse is taking some criterion other than apparent truth as normative with respect to the evaluation of beliefs.

This is something of a nitpick, but I think that it is more moral/ethical to hold a proposition than to hold its converse if there is good reason to think that that proposition is true. Is this un-Bayesian?

It's a meta-level/aliasing sort of problem, I think. You don't believe it's more ethical/moral to believe any specific proposition, you believe it's more ethical/moral to believe 'the proposition most likely to be true', which is a variable which can be filled with whatever proposition the situation suggests, so it's a different class of thing. Effectively it's equivalent to 'taking apparent truth as normative', so I'd call it the only position of that format that is Bayesian.

This website seems to have two definitions of rationality: rationality as truth-finding, and rationality as goal-achieving. Since truth deals with "is", and morality deals with "ought", morality will be of the latter kind. Because they are two different definitions, at some point they can be at odds -- but what if your primary goal is truth-finding (which might be required by your statement if you make no exceptions for beneficial self-deception)? How would you feel about ignoring some truths, because they might lead you to miss other truths?

This article is about how learning some truths can prevent you from learning other truths, with an implication that order of learning will mitigate these effects. In some cases, you might be well served by purging truths from your mind (for example, "there is a minuscule possibility of X" will activate priming and the availability heuristic). Some truths are simply much more useful than others, so what do you do if some lesser truths can get in the way of greater truths?

Neither truth-finding nor goal-achieving quite captures the usual sense of the word around here. I'd say the latter is closer to how we usually use it, in that we're interested in fulfilling human values; but explicit, surface-level goals don't always further deep values, and in fact can be actively counterproductive thanks to bias or partial or asymmetrical information.

Almost everyone who thinks they terminally value truth-finding is wrong; it makes a good applause light, but our minds just aren't built that way. But since there are so many cognitive and informational obstacles in our way, finding the real truth is at some point going to be critically important to fulfilling almost any real-world set of human values.

On the other hand, I don't rule out beneficial self-deception in some situations, either. It shouldn't be necessary for any kind of hypothetical rationalist super-being, but there aren't too many of those running around.

The error here is similar to one I see all the time in beginning philosophy students: when confronted with reasons to be skeptics, they instead become relativists. That is, where the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.

Hmm... thanks for writing this. I just realized that I may resemble your argumentative friend in some ways. I should bookmark this.

Stanovich's "dysrationalia" sense of stupidity is one of my greatest fears.

I have also had repeated encounters with individuals who take the bias literature to provide 'equal and opposite biases' for every situation, and take this as reason to continue to hold their initial beliefs. The situation is reminiscent of many economic discussions, where bright minds question whether the effect of a change on some quantity will be positive, negative or ambiguous. The discussants eagerly search for at least one theoretical effect that could move the quantity in a positive direction, one that could move it in the negative, and then declare the effect ambiguous after demonstrating their cleverness, without evaluating the actual size of the opposed effects.

I would recommend that when we talk about opposed biases, at least those for which there is an experimental literature, we should give rough indications of their magnitudes to discourage our audiences from utilizing the 'it's all a wash' excuse to avoid analysis.

Hal, to be precise, the bias is generalizing from knowledge of others' failures to skepticism about disliked conclusions, but failing to generalize to skepticism about preferred conclusions or one's own conclusions. That is, the error is not absence of generalization, but imbalance of generalization, which is far deadlier. I do agree with you that the reviewer's conclusion is not supported (to put it mildly) by the evidence under review.

As far as I can tell, there have been few other studies which demonstrate the sophistication effect. One new study on this is West et al. (forthcoming), "Cognitive Sophistication Does Not Attenuate the Bias Blind Spot."

Here is the abstract:

The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. Bias turns out to be relatively easy to recognize in the behaviors of others, but often difficult to detect in our own judgments. Most previous research on the bias blind spot has focused on bias in the social domain. In two studies, we found replicable bias blind spots with respect to many of the classic cognitive biases studied in the heuristics and biases literature (e.g., Tversky & Kahneman, 1974). Further, we found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability. Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases. We discuss these findings in terms of a generic dual-process theory of cognition.

Have there been any attempts to measure biases in researchers who study biases?

Unfortunately, the results of all such studies were rejected, due to... well, you know.

So why, then, is this blog not incorporating more statistical and collective de-biasing mechanisms? There are some out-of-the-box web widgets and mildly manual methods to incorporate that would at the very least provide new grist for the discussion mill.

"For a true Bayesian, information would never have negative expected utility."

Is this true in general? It seems to me that if a Bayesian has limited information handling ability, then they need to give some thought (not too much!) to the risks of being swamped with information and of spending too many resources on gathering information.

if a Bayesian has limited information handling ability

I believe that in this situation "true Bayesian" implies unbounded processing power/ logical omniscience.

The cost of gathering or processing the information may exceed the value of the information, but the information itself always has non-negative value: at worst, you do nothing differently, and the rest of the time you make a more informed choice.
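A toy numeric check of that claim, treating the observation itself as free; the disease-and-test scenario and every number in it are hypothetical, chosen only to make the arithmetic concrete:

```python
# Two states of the world, two actions, one binary test result.
# All payoffs and probabilities are made up for illustration.

prior = {"disease": 0.1, "healthy": 0.9}

# Utility of each (action, state) pair.
utility = {
    ("treat", "disease"): 50,   ("treat", "healthy"): -10,
    ("wait",  "disease"): -100, ("wait",  "healthy"): 0,
}

# Probability of a positive test in each state.
p_pos = {"disease": 0.9, "healthy": 0.2}

def best_eu(belief):
    """Expected utility of the best action under a given belief."""
    return max(
        sum(belief[s] * utility[(a, s)] for s in belief)
        for a in ("treat", "wait")
    )

# Option 1: act now on the prior, without looking at the test.
eu_without_info = best_eu(prior)

# Option 2: look at the test, update by Bayes' rule, then act.
p_positive = sum(prior[s] * p_pos[s] for s in prior)
posterior_pos = {s: prior[s] * p_pos[s] / p_positive for s in prior}
posterior_neg = {s: prior[s] * (1 - p_pos[s]) / (1 - p_positive) for s in prior}
eu_with_info = (p_positive * best_eu(posterior_pos)
                + (1 - p_positive) * best_eu(posterior_neg))

# The expected value of the information is never negative: at worst, the
# test result never changes the chosen action and the difference is zero.
print(eu_with_info - eu_without_info)  # here: about 5.7, and never negative
```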

Is this true in general?

Yes, in this technical sense.

It seems to me that if a Bayesian has limited information handling ability

A true Bayesian has unlimited information handling ability.

A true Bayesian has unlimited information handling ability.

I think I see that - because if it didn't, then not all of its probabilities would be properly updated, so its degrees of belief wouldn't have the relations implied by probability theory, so it wouldn't be a true Bayesian. Right?

Yes, one generally ignores the cost of making these computations. One might try to take it into account, but then one is ignoring the cost of doing that computation, etc. Historically, the "Bayesian revolution" needed computers before it could happen.

And, I notice, it has only gone as far as the computers allow. "True Bayesians" also have universal priors, that assign non-zero probability density to every logically possible hypothesis. Real Bayesian statisticians never do this; all those I have read deny that it is possible.

And, I notice, it has only gone as far as the computers allow. "True Bayesians" also have universal priors, that assign non-zero probability density to every logically possible hypothesis. Real Bayesian statisticians never do this; all those I have read deny that it is possible.

It is impossible, even in principle. The only way to have universal priors over all computable universes is if you have access to a source of hypercomputation, but that would mean the universe isn't computable, so the truth still isn't in your prior set.

Yeah, certainly. The search might be expensive. Or, some of its resources might be devoted to distinguishing the most relevant among the information it receives - diluting its input with irrelevant truths makes it work harder to find what's really important.

An interpretation of the original statement that I think is true, though, is that in all these cases, receiving the information and getting a little more knowledgeable offsets some of the negative utility of whatever price was paid for it. Any net negative utility of search+learning comes from the searching part: if you kept the searching but removed the learning at the end, it'd be even worse.

"For a true Bayesian, information would never have negative expected utility". I'm probably being a technicality bitch, attacking an unintended interpretation, but I can see bland examples of this being false if taken literally: A robot scans people to see how much knowledge they have and harms them more if they have more knowledge, leading to a potential for negative utility given more knowledge.

You have no choice but to bet at some odds. Life is about action, action is about expected utility, and expected utility demands that you assign some subjective weighting to outcomes based on how likely they are. Walking down the street, I offer to bet you a million dollars against one dollar that a stranger has string in their pockets. Do you take the bet? Whether you say yes or no, you've just made a statement of probability. The null action is also an action. Refusing to bet is like refusing to allow time to pass.

Nor do I permit probabilities of zero and one. All belief is belief of probability.

Given the unbelievable difficulty in overcoming cognitive bias (mentioned in this article and many others), is it even realistic to expect that it's possible? Maybe there are a lucky few who may have that capacity, but what about a majority of even those with above-average intelligence, even after years of work at it? Would most of them not just sort of drill themselves into a deeper hole of irrationality? Even discussing their thoughts with others would be of no help, given the fact that most others will be afflicted with cognitive biases as well. Since this blog is devoted to precisely that effort (i.e. helping people become more rational), I would think that those who write posts here must have reason to believe that it is indeed quite possible, but do you have any examples of such improvement? Have any scientists done any studies on overcoming cognitive bias? The ones I've seen only show that being aware of cognitive bias barely removes its effects.

It almost seems like the only way to truly overcome cognitive biases is to do something like design a computer program based on something you know for sure you're not biased about (e.g. statistics that people formed correct opinions about in various experiments) and then run it for something you are likely to be biased about.
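A minimal sketch of what such a program might look like; the only figure taken from the post is the 99%-stated / roughly 70%-observed pair, and the rest of the calibration table and the nearest-bin rule are hypothetical:

```python
# Pre-commit to a mechanical correction fitted on data you trust, then apply
# it uniformly, including to conclusions you happen to like.

# Hypothetical calibration data: (stated confidence, observed frequency).
# The (0.99, 0.70) row echoes the figure quoted in the post; the rest are invented.
calibration_table = [(0.60, 0.55), (0.80, 0.68), (0.90, 0.74), (0.99, 0.70)]

def recalibrate(stated_confidence):
    """Return the observed frequency of the nearest calibration bin,
    rather than trusting the stated confidence directly."""
    nearest = min(calibration_table, key=lambda row: abs(row[0] - stated_confidence))
    return nearest[1]

print(recalibrate(0.99))  # -> 0.7, regardless of whose prediction it is
```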

I apologize if there are already a bunch of posts (or even comments!) answering this question; I've been on the site like all day and haven't come across any, so I figured it couldn't hurt to ask.

My main takeaway from this is that "I know about this bias, therefore I'm more immune to it" is wrong. To be less susceptible to a bias, you need to practice habits that help (like the premortem as a counter to the planning fallacy), not just know a lot of cognitive science.

Rafe, name three.

Rooney, I don't disagree that this would be a mistake, but in my experience the balance of evidence is very rarely exactly even - because hypotheses have inherent penalties for complexity. Where there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it, not suspend judgment. The only cases I can think of where I suspend judgment are binary or small discrete hypothesis spaces, like "Was it murder or suicide?", or matters like the anthropic principle, where there is no null hypothesis to take refuge in, and any position is attackable.
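To make the complexity penalty concrete (notation mine, not from the comment): if a proposed belief $H$ entails unsupported details $d_1, \dots, d_n$, none of which is more than $p$ probable even granting the others, then

$$P(H) \;\le\; P(d_1)\,P(d_2 \mid d_1)\cdots P(d_n \mid d_1,\dots,d_{n-1}) \;\le\; p^{n},$$

so with, say, $p = 0.5$ and $n = 10$, the prior starts below $2^{-10} \approx 0.001$, and with no favoring evidence it stays there.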

I would love to hear more about such methods, Rafe. This blog tends to be somewhat abstract and "meta", but I would like to do more case studies on specific issues and look at how we could come to a less biased view of the truth. I did a couple of postings on the "Peak Oil" controversy a few months ago along these lines.

THIS is the proper use of humility. I hope I'm less of a fanatic and more tempered in my beliefs in the future.

I fear that the most common context in which people learn about cognitive biases is also the most detrimental. That is, they're arguing about something on the internet and someone, within the discussion, links them an article or tries to lecture them about how they really need to learn more about cognitive biases/heuristics/logical fallacies etc.. What I believe commonly happens then is that people realise that these things can be weapons; tools to get the satisfaction of "winning". I really wish everyone would just learn this in some neutral context (school maybe?) but most people learn this with an intent, and I think it colours their use of rationality in general, perhaps indefinitely. :/ But maybe I'm just being too pessimistic.

Critical Review recently devoted an issue to discussions of this 2006 study. Taber & Lodge's reply to the symposium on their paper is available here.

And now that we know that, we're going to be more biased. Why'd you have to say that?

Why'd you have to say that?

Because knowing about biases can also help people. A cornerstone premise of Eliezer's entire life strategy.

Eliezer, I think we are misunderstanding each other, possibly merely about terminology.

When you (and pdf) say "reject", I am taking you to mean "regard as false". I may be mistaken about that.

I would hope that you don't mean that, for if so, your claim that "no evidence in favor -> almost always false" seems bound to lead to massive errors. For example, you have no evidence in favor of the claim "Rooney has string in his pockets". But you wouldn't on such grounds aver that such a claim is almost certainly false. The appropriate response would be to suspend judgment, i.e., to neither reject nor accept. Perhaps I am not understanding what counts as a suitably "complicated" belief.

As for Archimedes meeting Bell's theorem, perhaps it was too counter-factual an example. However, I wouldn't say it's comparable to the "high utility" of the winning lottery ticket: in the case of the lottery, the relevant probabilities are known. By contrast, Archimedes (supposing he were able to understand the theorem) would be ignorant of any evidence to confirm or disconfirm it. Thus I would hope that he would refrain from rejecting it, merely regarding it as a puzzling vision from Zeus, perhaps.