Philosophy professors fail on basic philosophy problems

Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen: to become an academic physicist, one has to (among a million other things) demonstrate proficiency on far harder problems than that.

Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.

Abstract:

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

Some quotes (emphasis mine):

When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.

[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.

I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?

 

Comments


I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question, generally didn't help at all. So this is not a particularly surprising result; it would be more interesting if they had found anything that actually does reduce the effect of the biases.

Overcoming these biases is very easy if you have an explicit theory that you use for moral reasoning, one in which results can be proved or disproved. Then you will always give the same answer, regardless of how the details your moral theory doesn't care about are presented.

Mathematicians aren't biased by being told "I colored 200 of 600 balls black" vs. "I colored all but 400 of 600 balls black", because whatever question is then asked about the balls has a single correct answer in the model being used. This is true even if the model is unique to the mathematician answering the question: the most important thing is consistency.
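A minimal sketch of that "explicit model" point (the framings, names, and decision rule here are made up for illustration, not taken from the study): translate either framing into one explicit model, then answer only from the model, and the framing cannot matter.

```python
# Sketch: reduce two framings of the same situation to one canonical model,
# then answer from the model alone.
from dataclasses import dataclass

@dataclass(frozen=True)
class BallModel:
    total: int
    black: int

def parse_framing_a() -> BallModel:
    # "I colored 200 of 600 balls black"
    return BallModel(total=600, black=200)

def parse_framing_b() -> BallModel:
    # "I colored all but 400 of 600 balls black"
    return BallModel(total=600, black=600 - 400)

def decide(model: BallModel) -> str:
    # Any fixed decision rule over the model; the rule itself is arbitrary here.
    return "mostly black" if model.black > model.total / 2 else "mostly not black"

# Same underlying state, so a model-based answer cannot depend on the framing.
assert parse_framing_a() == parse_framing_b()
assert decide(parse_framing_a()) == decide(parse_framing_b())
```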

If a moral theory can't prove the correctness of an answer to a very simple problem - a choice between just two alternatives, trading off clearly morally significant stakes (lives), without any complications (e.g. the different people who may die don't have any distinguishing features) - then it probably doesn't give clear answers to most other problems either, so what use is it?

I would assume that detecting the danger of a framing bias, such as "200 of 600 people will be saved" vs. "400 of 600 people will die", is elementary enough that an aspiring moral philosopher ought to learn to recognize and avoid it before she can be allowed to practice in the field. Otherwise all her research is very much suspect.

Realize what's occurring here, though. It's not that individual philosophers are being asked the question both ways and are answering differently in each case. That would be an egregious error that one would hope philosophical training would allay. What's actually happening is that when philosophers are presented with the "save" formulation (but not the "die" formulation) they react differently than when they are presented with the "die" formulation (but not the "save" formulation). This is an error, but also an extremely insidious error, and one that is hard to correct for. I mean, I'm perfectly aware of the error, I know I wouldn't give conflicting responses if presented with both options, but I am also reasonably confident that I would in fact make the error if presented with just one option. My responses in that case would quite probably be different than in the counterfactual where I was only provided with the other option. In each case, if you subsequently presented me with the second framing, I would immediately recognize that I ought to give the same answer as I gave for the first framing, but what that answer is would, I anticipate, be impacted by the initial framing.

I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment. I mean, I could, when presented with the "save" formulation, think to myself "What would I say in the 'die' formulation?" before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the "die" formulation in the first place.

Being able to detect a bias and actually being able to circumvent it are two different skills.

I find this amusing and slightly disturbing - but the Trolley Problem seems like a terrible example. A rational person might answer based on political considerations, which "order effects" might change in everyday conversations.

which found that professional moral philosophers are no less subject to the effects of framing and order of presentation

I think some people are missing the issue. It's not that they have a problem with the Trolley Problem, but that their answers vary according to irrelevant framing effects like order of presentation.

Heh

What is going on?

I think you're committing the category error of treating philosophy as science :-D

Yep.

So three people independently posted the same thing to LW: first as a comment in some thread, then as a top-level comment in the open thread, and finally as a post in Discussion :-)

Coming up: the post is promoted to Main; it is re-released as a MIRI whitepaper; Nick Bostrom publishes a book-length analysis; The New Yorker features a meandering article illustrated by a tasteful watercolor showing a trolley attacked by a Terminator.

Yes, that is funny. I'm glad the paper is garnering attention, as I think it's a powerful reminder that we are ALL subject to simple behavioral biases.

I reject the alternative explanation that philosophy and philosophers are crackpots.

So here's an article linking the poor thinking of philosophers with another study showing unscientific thought by scientists....

Teleological thinking, the belief that certain phenomena are better explained by purpose than by cause, is not scientific. An example of teleological thinking in biology would be: “Plants emit oxygen so that animals can breathe.” However, a recent study (Deborah Kelemen and others, Journal of Experimental Psychology, 2013) showed that there is a strong, but suppressed, tendency towards teleological thinking among scientists, which surfaces under pressure.

Eighty scientists plus control groups were presented with 100 one-sentence statements and asked to answer true or false. Some of the statements were teleological, as in the example quoted above. Half had to answer within three seconds, while others had as long as they liked to answer.

The scientists endorsed fewer teleological statements than the controls (22 per cent versus 50 per cent). But when they were rushed, the scientists endorsed 29 per cent of the teleological statements compared with 15 per cent endorsed by unrushed scientists. This study seems to show that a teleological tendency is a resilient and enduring feature of the human mind.

The "under pressure" qualification is really important. It's known that people don't fire on all cylinders under pressure ... it's one of the bases of Derren Brown-style Dark Arts. Scientists and philosophers, unlike ER doctors or soldiers, don't produce their professional results as pressured individuals. The results are psychologically interesting, but have no bearing on how well anyone is doing their job.

So what you're saying is that 60% of the reduction in teleological thinking that scientists show compared to the general population is already there at the three-second level?

That... seems pretty impressive to me, but I'm not sure what I would have expected it to be.
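(Presumably the arithmetic, using the figures quoted above; a back-of-the-envelope reading that takes the 50% control figure as the common baseline:)

```latex
\frac{\text{reduction at the three-second level}}{\text{full reduction}}
  \;=\; \frac{50\% - 29\%}{50\% - 15\%}
  \;=\; \frac{21}{35}
  \;=\; 0.6
```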


Remember that you need to put a > in front of each paragraph to do a blockquote in comments.

I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily

Where did that assumption come from?

Physics professors have no such problem. Philosophy professors, however, are a different story.

If you ask physics professors questions that go counter to human intuition, I wouldn't be too sure that they get them right either.

Where did that assumption come from?

This assumption comes from expecting an expert to know the basics of their field.

If you ask physics professors questions that go counter to human intuition, I wouldn't be too sure that they get them right either.

A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.

You need to do some tweaking of your faith in experts. Experts tend to be effective in fields where they get immediate and tight feedback about whether they're right or wrong. Physics has this, philosophy does not. You should put significantly LESS faith in experts from fields where they don't have this tight feedback loop.

This assumption comes from expecting an expert to know the basics of their field.

I wouldn't characterize the failure in this case as reflecting a lack of knowledge. What you have here is evidence that philosophers are just as prone to bias as non-philosophers at a similar educational level, even when the tests for bias involve examples they're familiar with. In what sense is this a failure to "know the basics of their field"?

A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.

A relevantly similar test might involve checking whether physicists are just as prone as non-physicists to, say, the anchoring effect, when asked to estimate (without explicit calculation) some physical quantity. I'm not so sure that a trained physicist would be any less susceptible to the effect, although they might be better in general at estimating the quantity.

Take, for instance, evidence showing that medical doctors are just as susceptible to framing effects in medical treatment contexts as non-specialists. Does that indicate that doctors lack knowledge about the basics of their fields?

I think what this study suggests is that philosophical training is no more effective at de-biasing humans (at least for these particular biases) than a non-philosophical education. People have made claims to the contrary, and this is a useful corrective to that. The study doesn't show that philosophers are unaware of the basics of their field, or that philosophical training has nothing to offer in terms of expertise or problem-solving.

This assumption comes from expecting an expert to know the basics of their field.

There's quite a difference between knowing the basics on a System 2 level and being able to apply them on a System 1 level.

Framing effect in math:

"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" — Jerry Bona
This is a joke: although the three are all mathematically equivalent, many mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition.

It might be worth saying explicitly what these three (equivalent) axioms say.

  • Axiom of choice: if you have a set A of nonempty sets, then there's a function that maps each element a of A to an element of a. (I.e., a way of choosing one element f(a) from each set a in A.)
  • Well-ordering principle: every set can be well-ordered: that is, you can put a (total) ordering on it with the property that there are no infinite descending sequences. E.g., < is a well-ordering on the positive integers but not on all the integers, but you can replace it with an ordering where 0 < -1 < 1 < -2 < 2 < -3 < 3 < -4 < 4 < ... which is a well-ordering. The well-ordering principle implies, e.g., that there's a well-ordering on the real numbers, or the set of sets of real numbers.
  • Zorn's lemma: if you have a partially ordered set, and every subset of it on which the partial order is actually total has an upper bound, then the whole thing has a maximal element.
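Roughly in symbols (a compact restatement of the same three claims in standard set-theoretic notation; the well-ordering principle is given here in its usual "least element" form):

```latex
% Axiom of choice: every family A of nonempty sets has a choice function.
\forall A\,\bigl(\varnothing \notin A \;\Rightarrow\;
  \exists f\colon A \to \textstyle\bigcup A\;\;\forall a \in A\;\, f(a) \in a\bigr)

% Well-ordering principle: every set X carries a total order under which
% every nonempty subset has a least element.
\forall X\;\exists{\le}\,\Bigl({\le}\text{ totally orders } X \;\wedge\;
  \forall S \subseteq X\,\bigl(S \neq \varnothing \Rightarrow
  \exists m \in S\;\forall s \in S\; m \le s\bigr)\Bigr)

% Zorn's lemma: if every chain (totally ordered subset) C of a partially
% ordered set (P, \le) has an upper bound in P, then P has a maximal element.
\Bigl(\forall C \subseteq P\ \text{chain}\;\exists u \in P\;\forall c \in C\; c \le u\Bigr)
  \;\Rightarrow\; \exists m \in P\;\neg\exists p \in P\,(m < p)
```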

The best way to explain what Zorn's lemma is saying is to give an example, so let me show that Zorn's lemma implies the ("obviously false") well-ordering principle. Let A be any set. We'll try to find a well-ordering of it. Let O be the set of well-orderings of subsets of A. Given two of these -- say, o1 and o2 -- say that o1 <= o2 if o2 is an "end-extension" of o1 -- that is, o2 is a well-ordering of a superset of whatever o1 is a well-ordering of, o1 and o2 agree where both are defined, and every element that o2 orders but o1 doesn't comes after everything o1 orders. Now, this satisfies the condition in Zorn's lemma: if you have a subset of O on which <= is a total order, this means that for any two things in the subset one is an end-extension of the other, and then the union of all of them is an upper bound (the union is still a well-ordering precisely because each step only adds new elements on top of the old ones). So if Zorn's lemma is true then O has a maximal element, i.e. a well-ordering W of some subset of A that cannot be properly end-extended to a well-ordering of any larger subset of A. Now W must actually be defined on the whole of A: if some element a of A were missing from W's domain, we could extend W by placing a above everything W already orders, giving a proper end-extension of W and contradicting W's maximality. So W is a well-ordering of all of A.

(A few bits of terminology that I didn't digress to define above. A total ordering on a set is a relation < for which if a<b and b<c then a<c, and for which exactly one of a<b, b<a, a=b holds for any a,b. OR a relation <= for which if a<=b and b<=c then a<=c, and for which for any a,b either a<=b or b<=a, and for which a<=b and b<=a imply a=b. A partial ordering is similar except that you're allowed to have pairs of distinct elements for which a<b and b<a (OR: a<=b and b<=a) both fail. We can translate between the "<" versions and the "<=" versions: "<" means "<= but not =", or "<=" means "< or =". Given a partial ordering, an upper bound for a set A is an element b for which a<=b for every a in A. A maximal element in a partially ordered set is an element m such that nothing in the set is strictly greater than m; unlike a greatest element, m need not be comparable to everything else.)

This doesn't really bother me. Philosophers' expertise is not in making specific moral judgements, but in making arguments and counterarguments. I think that is a useful skill that collectively gets us closer to the truth.