From the subreddit: Humans Are Hardwired To Dismiss Facts That Don’t Fit Their Worldview. Once you get through the preliminary Trump supporter and anti-vaxxer denunciations, it turns out to be an attempt at an evo psych explanation of confirmation bias:

Our ancestors evolved in small groups, where cooperation and persuasion had at least as much to do with reproductive success as holding accurate factual beliefs about the world. Assimilation into one’s tribe required assimilation into the group’s ideological belief system. An instinctive bias in favor of one’s “in-group” and its worldview is deeply ingrained in human psychology.

I think the article as a whole makes good points, but I’m increasingly uncertain that confirmation bias can be separated from normal reasoning.

Suppose that one of my friends says she saw a coyote walk by her house in Berkeley. I know there are coyotes in the hills outside Berkeley, so I am not too surprised; I believe her.

Now suppose that same friend says she saw a polar bear walk by her house. I assume she is mistaken, lying, or hallucinating.

Is this confirmation bias? It sure sounds like it. When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it. What am I doing differently from an anti-vaxxer who rejects any information that challenges her preexisting beliefs (eg that vaccines cause autism)?

When new evidence challenges our established priors (eg a friend reports a polar bear, but I have a strong prior that there are no polar bears around), we ought to heavily discount the evidence and slightly shift our prior. So I should end up believing that my friend is probably wrong, but I should also be slightly less confident in my assertion that there are no polar bears loose in Berkeley today. This seems sufficient to explain confirmation bias, ie a tendency to stick to what we already believe and reject evidence against it.
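A minimal sketch of that update in code; the reliability numbers are invented for illustration, and the only thing that differs between the two cases is the prior:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: P(animal was really there | friend reports it)."""
    numerator = prior * p_report_if_true
    return numerator / (numerator + (1 - prior) * p_report_if_false)

# Treat the friend as equally reliable in both cases: she reports the animal
# 90% of the time if it was really there, and 5% of the time if it wasn't
# (misidentification, joke, etc.).
p_if_true, p_if_false = 0.90, 0.05

coyote_prior = 0.10        # coyotes do wander down from the hills
polar_bear_prior = 1e-6    # essentially no loose polar bears in Berkeley

print(posterior(coyote_prior, p_if_true, p_if_false))      # ~0.67: believe her
print(posterior(polar_bear_prior, p_if_true, p_if_false))  # ~1.8e-5: she's probably wrong,
                                                            # though the prior did shift up ~18x
```

The same witness supplies the same 18:1 likelihood ratio in both cases; only the prior differs, which is why the coyote report is believed and the polar bear report is (mostly) rejected.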

The anti-vaxxer is still doing something wrong; she somehow managed to get a very strong prior on a false statement, and isn’t weighing the new evidence heavily enough. But I think it’s important to note that she’s attempting to carry out normal reasoning, and failing, rather than carrying out some special kind of reasoning called “confirmation bias”.

There are some important refinements to make to this model – maybe there’s a special “emotional reasoning” that locks down priors more tightly, and maybe people naturally overweight priors because that was adaptive in the ancestral environment. Maybe after you add these refinements, you end up at exactly the traditional model of confirmation bias (and the one the Fast Company article is using) and my objection becomes kind of pointless.

But not completely pointless. I still think it’s helpful to approach confirmation bias by thinking of it as a normal form of reasoning, and then asking under what conditions it fails.

9 comments

When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it.

Is this confirmation bias?

Not as far as I know. Wikipedia gives three aspects of confirmation bias:

  1. Biased search: seeking out stories about coyotes but not polar bears.

  2. Biased interpretation: hearing an unknown animal rustle in the bushes, and treating that as additional evidence that coyotes outnumber polar bears.

  3. Biased recall: remembering coyote encounters more readily than polar bear encounters.

All of those seem different from your example, and none are valid Bayesian reasoning.

Some forms of biased recall are Bayesian. This is because "recall" is actually a process of reconstruction from noisy data, so naturally priors play a role.

Here's a fun experiment showing how people's priors on fruit size (pineapples > apples > raspberries ...) influenced their recollection of synthetic images where the sizes were manipulated: A Bayesian Account of Reconstructive Memory
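A toy version of that reconstruction story, assuming a Gaussian category prior and a noisy memory trace (the numbers are invented, not taken from the paper):

```python
def reconstruct(prior_mean, prior_var, trace, trace_var):
    """Posterior mean when a Gaussian prior is combined with one noisy observation.

    Standard conjugate update: recall is a precision-weighted average of the
    category prior ("apples are about this big") and the stored memory trace.
    """
    w = prior_var / (prior_var + trace_var)  # weight on the memory trace
    return w * trace + (1 - w) * prior_mean

apple_prior_mean, apple_prior_var = 8.0, 1.0  # prior belief about apple size (arbitrary units)
shown_size = 5.0                              # the image showed an unusually small "apple"
trace_noise_var = 4.0                         # the stored trace of the image is noisy

print(reconstruct(apple_prior_mean, apple_prior_var, shown_size, trace_noise_var))
# 7.4 -- recall is pulled toward the category prior: "biased", but Bayes-optimal
```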

I think this framework captures about half of the examples of biased recall mentioned in the Wikipedia article.

Related: Jess Whittlestone's PhD thesis, titled "The importance of making assumptions: why confirmation is not necessarily a bias."

I realised that most of the findings commonly cited as evidence for confirmation bias were much less convincing than they first seemed. In large part, this was because the complex question of what it really means to say that something is a ‘bias’ or ‘irrational’ is unacknowledged by most studies of confirmation bias. Often these studies don’t even state what standard of rationality they were claiming people were ‘irrational’ with respect to, or what better judgements might look like. I started to come across more and more papers suggesting that findings classically thought of as demonstrating a confirmation bias might actually be interpreted as rational under slightly different assumptions - and often found that these papers had much more convincing arguments, based on more thorough theories of rationality.
...
[I came to] conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios given the various goals we are pursuing, and that it may not always be better to be ‘more open-minded’. It’s easy to say that people should be more willing to consider alternatives and less influenced by what they believe, but much harder to say how one does this. Being a total ‘blank slate’ with no assumptions or preconceptions is not a desirable or realistic starting point, and temporarily ‘setting aside’ one’s beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if possible to do at all. There are tradeoffs we have to make, between the benefits of certainty and assumptions, and the benefits of having an ‘open mind’, that I had not acknowledged before.

As Stuart previously recognized with the anchoring bias, it's probably worth keeping in mind that any bias is likely only a "bias" against some normative backdrop. Without some standard for how reasoning is supposed to turn out, there are no biases, only the way things happened to work.

Thus things look confusing around confirmation bias, because it only becomes a bias when the reasoning it shapes fails to predict reality after the fact. Otherwise it's just correct reasoning from priors.

See also Mercier & Sperber 2011 on confirmation bias:

... an absence of reasoning is to be expected when people already hold some belief on the basis of perception, memory, or intuitive inference, and do not have to argue for it. Say, I believe that my keys are in my trousers because that is where I remember putting them. Time has passed, and they could now be in my jacket, for example. However, unless I have some positive reason to think otherwise, I just assume that they are still in my trousers, and I don’t even make the inference (which, if I am right, would be valid) that they are not in my jacket or any of the other places where, in principle, they might be. In such cases, people typically draw positive rather than negative inferences from their previous beliefs. These positive inferences are generally more relevant to testing these beliefs. For instance, I am more likely to get conclusive evidence that I was right or wrong by looking for my keys in my trousers rather than in my jacket (even if they turn out not to be in my jacket, I might still be wrong in thinking that they are in my trousers). We spontaneously derive positive consequences from our intuitive beliefs. This is just a trusting use of our beliefs, not a confirmation bias (see Klayman & Ha 1987). [...]
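(To make the quoted point about which search is "more relevant to testing" concrete: comparing the expected information gain of the two searches, with invented probabilities, gives something like the sketch below.)

```python
import math

def entropy(p):
    """Binary entropy (bits) of the belief 'my keys are in my trousers'."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

p_trousers, p_jacket = 0.80, 0.05  # the remaining 0.15: somewhere else entirely

h_before = entropy(p_trousers)

# Positive test: look in the trousers. Either outcome settles the belief,
# so the expected posterior entropy is 0.
gain_trousers = h_before - 0.0

# Negative test: look in the jacket. Finding the keys there settles it, but
# with probability 0.95 the jacket is empty and the trousers belief only
# moves from 0.80 to 0.80 / 0.95 ≈ 0.84.
p_jacket_empty = 1 - p_jacket
gain_jacket = h_before - p_jacket_empty * entropy(p_trousers / p_jacket_empty)

print(round(gain_trousers, 3))  # 0.722 bits
print(round(gain_jacket, 3))    # 0.124 bits -- the "positive" search is far more informative
```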

One of the areas in which the confirmation bias has been most thoroughly studied is that of hypothesis testing, often using Wason’s rule discovery task (Wason 1960). In this task, participants are told that the experimenter has in mind a rule for generating number triples and that they have to discover it. The experimenter starts by giving participants a triple that conforms to the rule (2, 4, 6). Participants can then think of a hypothesis about the rule and test it by proposing a triple of their own choice. The experimenter says whether or not this triple conforms to the rule. Participants can repeat the procedure until they feel ready to put forward their hypothesis about the rule. The experimenter tells them whether or not their hypothesis is true. If it is not, they can try again or give up.

Participants overwhelmingly propose triples that fit with the hypothesis they have in mind. For instance, if a participant has formed the hypothesis “three even numbers in ascending order,” she might try 8, 10, 12. As argued by Klayman and Ha (1987), such an answer corresponds to a “positive test strategy” of a type that would be quite effective in most cases. This strategy is not adopted in a reflective manner, but is rather, we suggest, the intuitive way to exploit one’s intuitive hypotheses, as when we check that our keys are where we believe we left them as opposed to checking that they are not where it follows from our belief that they should not be. What we see here, then, is a sound heuristic rather than a bias.
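(A tiny simulation of why this positive test strategy fails on Wason's task; the setup is from the quote above, but the specific hypothesis and triples are made up.)

```python
def hidden_rule(a, b, c):
    """The experimenter's rule: any strictly ascending triple."""
    return a < b < c

def hypothesis(a, b, c):
    """The participant's (too narrow) hypothesis: even numbers rising by 2."""
    return a % 2 == 0 and b == a + 2 and c == b + 2

positive_tests = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]  # all fit the hypothesis
negative_test = (1, 2, 3)                                      # violates the hypothesis
assert all(hypothesis(*t) for t in positive_tests) and not hypothesis(*negative_test)

for triple in positive_tests:
    print(triple, "yes" if hidden_rule(*triple) else "no")
# All "yes": positive tests can never reveal that the hypothesis is too narrow.

print(negative_test, "yes" if hidden_rule(*negative_test) else "no")
# Also "yes" -- only a triple that breaks the participant's own hypothesis can
# show that the experimenter's rule is broader than they think.
```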

This heuristic misleads participants in this case only because of some very peculiar (and expressly designed) features of the task. What is really striking is the failure of attempts to get participants to reason in order to correct their ineffective approach. It has been shown that, even when instructed to try to falsify the hypotheses they generate, fewer than one participant in ten is able to do so (Poletiek 1996; Tweney et al. 1980). Since the hypotheses are generated by the participants themselves, this is what we should expect in the current framework: The situation is not an argumentative one and does not activate reasoning. However, if a hypothesis is presented as coming from someone else, it seems that more participants will try to falsify it and will give it up much more readily in favor of another hypothesis (Cowley & Byrne 2005). The same applies if the hypothesis is generated by a minority member in a group setting (Butera et al. 1992). Thus, falsification is accessible provided that the situation encourages participants to argue against a hypothesis that is not their own. [...]

When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor.

When a group has to solve a problem, it is much more efficient if each individual looks mostly for arguments supporting a given solution. They can then present these arguments to the group, to be tested by the other members. This method will work as long as people can be swayed by good arguments, and the results reviewed in section 2 show that this is generally the case. This joint dialogic approach is much more efficient than one where each individual on his or her own has to examine all possible solutions carefully. The advantages of the confirmation bias are even more obvious given that each participant in a discussion is often in a better position to look for arguments in favor of his or her favored solution (situations of asymmetrical information). So group discussions provide a much more efficient way of holding the confirmation bias in check. By contrast, the teaching of critical thinking skills, which is supposed to help us overcome the bias on a purely individual basis, does not seem to yield very good results (Ritchart & Perkins 2005; Willingham 2008).

I often found myself in situations where I overupdated on the evidence. For example, if the market fell 3 percent, I used to start thinking that economic collapse was imminent.

Overupdating on random evidence is also a source of some conspiracy theories. The plate number of a car on my street matches my birthday? They must be watching me!
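The failure in both examples looks like assigning weak evidence a huge likelihood ratio. A quick sanity check on the market case, with made-up numbers:

```python
# How much should a 3% market drop raise P(collapse is imminent)?
# All numbers are invented for illustration.
p_collapse = 0.01               # base rate of "a collapse is imminent"
p_drop_if_collapse = 0.50       # a 3% drop is likely if a collapse really is coming
p_drop_if_no_collapse = 0.05    # ...but 3% drops also happen in ordinary volatility

p_drop = (p_drop_if_collapse * p_collapse
          + p_drop_if_no_collapse * (1 - p_collapse))
posterior = p_drop_if_collapse * p_collapse / p_drop

print(round(posterior, 3))  # ~0.092: even a 10:1 likelihood ratio only takes 1% to ~9%
```

Jumping straight from a 3% drop to "collapse is soon" amounts to treating the drop as if it almost never happened outside a collapse.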

The protection trick here is "natural scepticism": just not update if you want to update your beliefs. But in this case the prior system becomes too rigid.

The protection trick here is "natural scepticism": just not update if you want to update your beliefs. But in this case the prior system becomes too rigid.

(not update if you want to protect your beliefs?, not update if you don't want to update your beliefs?)

Skepticism isn't just "not updating". And protection from what?


I like the general thought here, that some of our decision tools that are supposed to help make better decisions are themselves potentially subject to the same problems we're trying to avoid.

I wonder if the pathway might not be about the priors around the event (polar bears in Berkeley) but rather an update about the reliability of the evidence presented -- your friend reporting the polar bear gets your prior on her reliability/honesty downgraded, thus helping to confirm the original prior about polar bears in Berkeley.
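That "blame the witness, not the world" pathway falls out of a joint update over the event and the reporter's reliability; here is a small sketch with invented numbers:

```python
from itertools import product

# Joint update over "is there a polar bear?" and "is my friend a reliable
# reporter?" after she reports seeing one. All numbers are invented.
p_bear, p_reliable = 1e-5, 0.9

# P(she reports a polar bear | bear present?, friend reliable?)
likelihood = {
    (True, True):   0.95,   # reliable friend, real bear: she reports it
    (True, False):  0.95,   # an unreliable friend would report a real bear too
    (False, True):  0.001,  # a reliable friend almost never invents a polar bear
    (False, False): 0.2,    # an unreliable friend might report anything
}

joint = {}
for bear, reliable in product([True, False], repeat=2):
    prior = (p_bear if bear else 1 - p_bear) * (p_reliable if reliable else 1 - p_reliable)
    joint[(bear, reliable)] = prior * likelihood[(bear, reliable)]

total = sum(joint.values())
p_bear_post = sum(v for (bear, _), v in joint.items() if bear) / total
p_reliable_post = sum(v for (_, reliable), v in joint.items() if reliable) / total

print(f"{p_bear_post:.6f}")      # ~0.000454: still almost certainly no bear
print(f"{p_reliable_post:.3f}")  # ~0.043: the report mostly downgraded her reliability
```

With these numbers the report raises P(polar bear) by a factor of about 45 yet leaves it negligible, while the friend's estimated reliability collapses from 0.9 to roughly 0.04, which matches the intuition above.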

While "isolated demands for rigor" may be suspect, an outlier could be the result of high measurement error* or model failure. (Though people may be systematically overconfident in their models.)

*Which has implications for the model - the data previously thought correct may itself contain smaller amounts of error.