Mark Eichenlaub posted a great little case study about the difficulty of updating beliefs, even over trivial matters like the slope of a baseball field. The basic story of Bayesian updating assumes the likelihood of evidence under different states is obvious, but feedback between observations and judgments about likelihood quickly complicates the situation:

The story of how belief is supposed to work is that for each bit of evidence, you consider its likelihood under all the various hypotheses, then multiplying these likelihoods, you find your final result, and it tells you exactly how confident you should be. If I can estimate how likely it is for Google Maps and my GPS to corroborate each other given that they are wrong, and how likely it is given that they are right, and then answer the same question for every other bit of evidence available to me, I don’t need to estimate my final beliefs – I calculate them. But even in this simple testbed of the matter of a sloped baseball field, I could feel my biases coming to bear on what evidence I considered, and how strong and relevant that evidence seemed to me. The more I believed the baseball field was sloped, the more relevant (higher likelihood ratio) it seemed that there was that short steep hill on the side, and the less relevant that my intuition claimed the field was flat. The field even began looking more sloped to me as time went on, and I sometimes thought I could feel the slope as I ran, even though I never had before.

That’s what I was interested in here. I wanted to know more about the way my feelings and beliefs interacted with the evidence and with my methods of collecting it. It is common knowledge that people are likely to find what they’re looking for whatever the facts, but what does it feel like when you’re in the middle of doing this, and can recognizing that feeling lead you to stop?

Edit: Title changed from "An Empirical Evaluation into Runner's High," the original title of the article, to match the author's new title.

6 comments
[anonymous]

The title is funny but misleading. I read the article because of my initial confusion, and don't regret it, but please make it more descriptive.

I'm the author - thanks for the feedback. I think you're right that a more-topical title could help. Edit: done.

[anonymous]

I was referring to the thread title here on LessWrong. I actually chuckled at yours; now I feel bad.

Great article by the way. My first thought was to use a tiltmeter app on a smartphone attached to a long ruler.
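
For reference, the trigonometry a tiltmeter app performs is simple: with the phone lying flat along the ruler, the tilt angle falls out of the gravity vector the accelerometer reports. A minimal sketch, assuming readings in m/s^2; the numbers below are invented for illustration:

```python
import math

# Sketch of a tiltmeter's calculation, assuming the phone lies flat on the
# ruler and the accelerometer reports the gravity vector (ax, ay, az) in
# m/s^2. These readings are invented for illustration.
ax, ay, az = 0.17, 0.02, 9.80

# Tilt = angle between the measured gravity vector and the device's z-axis.
tilt = math.degrees(math.atan2(math.hypot(ax, ay), az))
print(f"tilt: {tilt:.2f} degrees")  # ~1.00 degree for these readings
```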

The story of how belief is supposed to work is that for each bit of evidence, you consider its likelihood under all the various hypotheses, then multiplying these likelihoods, you find your final result, and it tells you exactly how confident you should be.

You should add, shouldn't you? Not multiply? Because they're mutually exclusive and exhaustive probabilities? If you multiply, your probability would change depending on how finely you broke down the hypotheses.

I think he means multiply once for each piece of evidence, not each hypothesis.
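
A small worked sketch may pin down where the multiplying and the adding each happen; the numbers below are invented for the sloped-field question:

```python
# A worked sketch of the update rule under discussion, with invented
# numbers. Across independent pieces of evidence you multiply likelihoods;
# across the mutually exclusive, exhaustive hypotheses you add, via the
# normalization step.

prior = {"sloped": 0.5, "flat": 0.5}

# Hypothetical likelihoods P(observation | hypothesis), one dict per
# piece of evidence.
observations = [
    {"sloped": 0.9, "flat": 0.6},  # GPS and Google Maps corroborate
    {"sloped": 0.7, "flat": 0.3},  # short steep hill on the side
    {"sloped": 0.2, "flat": 0.5},  # intuition says the field is flat
]

posterior = dict(prior)
for likelihoods in observations:
    # Multiply: one factor per piece of evidence, per hypothesis.
    for h in posterior:
        posterior[h] *= likelihoods[h]
    # Add: normalize over the hypotheses so probabilities sum to 1.
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

print(posterior)  # {'sloped': 0.583..., 'flat': 0.416...}
```

Both readings are reconciled here: each piece of evidence contributes one multiplication per hypothesis, and the adding over mutually exclusive hypotheses happens only in the normalization step.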