Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [1]. This is scope insensitivity or scope neglect: the number of birds saved - the scope of the altruistic action - had little effect on willingness to pay.
Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario [2], or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area [3].
People visualize "a single exhausted bird, its feathers soaked in black oil, unable to escape" [4]. This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay - and the image is the same in all cases. As for scope, it gets tossed out the window - no human can visualize 2000 birds at once, let alone 200000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay - perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added to, not multiplied with, the prototype affect. This hypothesis is known as "valuation by prototype".
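The log-linear pattern can be checked against the bird numbers above with a quick back-of-the-envelope fit. (A sketch for illustration only - the variable names are ours, and the original studies used more careful statistics.)

```python
import math

# Willingness-to-pay data from the bird study [1].
birds = [2_000, 20_000, 200_000]
wtp = [80, 78, 88]

# Least-squares fit of WTP = a + b * log10(birds).
xs = [math.log10(n) for n in birds]
mean_x = sum(xs) / len(xs)
mean_y = sum(wtp) / len(wtp)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, wtp)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(f"slope: ${b:.2f} per tenfold increase in birds")  # slope: $4.00 ...
```

Each extra zero on the number of birds buys only about four more dollars of stated value - linear in the zeroes, just as the hypothesis predicts.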
An alternative hypothesis is "purchase of moral satisfaction". People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.
We are insensitive to scope even when human lives are at stake: increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1000 - a factor of 600 - increased willingness-to-pay from $3.78 to $15.23 [5]. Baron and Greene found no effect from varying lives saved by a factor of 10 [6].
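The mismatch is easy to see with the arithmetic written out (a trivial check on the figures just quoted, nothing more):

```python
# Figures from the chlorination study [5].
risk_ratio = 2.43 / 0.004   # stated risk rose roughly 600-fold (~607x)
wtp_ratio = 15.23 / 3.78    # willingness-to-pay rose only ~4-fold
```

Six hundred times the hazard bought about four times the payment.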
A paper entitled "Insensitivity to the value of human life: a study of psychophysical numbing" collected evidence that our perception of human deaths follows Weber's Law - it obeys a logarithmic scale where the "just noticeable difference" is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives to be judged worthy of funding if the disease was originally stated to kill 290,000 rather than 160,000 or 15,000 people per year [7].
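Under a Weber-style logarithmic perception of death tolls, the felt impact of saving k lives out of n depends on the fraction k/n rather than on k itself. A minimal sketch using the Rwanda figures above (the `felt_impact` helper and its log model are our illustration, not the paper's exact formulation):

```python
import math

def felt_impact(camp_size, lives_saved):
    """Perceived benefit on a log scale: log(n) - log(n - k) = log(n / (n - k))."""
    return math.log(camp_size) - math.log(camp_size - lives_saved)

# The same 4,500 lives, judged against two different camp sizes.
small_camp = felt_impact(11_000, 4_500)    # ~0.53
large_camp = felt_impact(250_000, 4_500)   # ~0.018
```

On this model the identical rescue "feels" nearly thirty times larger in the small camp - the direction of the survey results.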
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
[1] Desvousges, W., Johnson, R., Dunford, R., Boyle, K. J., Hudson, S. and Wilson, K. N. 1992. Measuring non-use damages using contingent valuation: an experimental evaluation of accuracy. Research Triangle Institute Monograph 92-1.
[2] Kahneman, D. 1986. Comments on the contingent valuation method. Pp. 185-194 in Valuing environmental goods: a state of the art assessment of the contingent valuation method, eds. R. G. Cummings, D. S. Brookshire and W. D. Schulze. Totowa, NJ: Rowman and Allanheld.
[3] McFadden, D. and Leonard, G. 1995. Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Contingent valuation: a critical assessment, ed. J. A. Hausman. Amsterdam: North Holland.
[4] Kahneman, D., Ritov, I. and Schkade, D. A. 1999. Economic preferences or attitude expressions? An analysis of dollar responses to public issues. Journal of Risk and Uncertainty, 19: 203-235.
[5] Carson, R. T. and Mitchell, R. C. 1995. Sequencing and nesting in contingent valuation surveys. Journal of Environmental Economics and Management, 28(2): 155-173.
[6] Baron, J. and Greene, J. 1996. Determinants of insensitivity to quantity in valuation of public goods: contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2: 107-125.
[7] Fetherstonhaugh, D., Slovic, P., Johnson, S. and Friedrich, J. 1997. Insensitivity to the value of human life: a study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 283-300.
The 4,500-out-of-11,000 ratio might cause people to examine the claims of the charity that "intends to save" the 4,500 less closely than they would examine a claim to help a small minority within a much larger group, for several possibly valid reasons. When the task involves managing very large numbers of people, "saving" the 4,500 might not seem as manageable. This might come down partly to skepticism about the charity's claims.
Maybe when 4,500 of 11,000 lives are "being saved," people imagine the aid will actually reach them, or reach the group as a whole; whereas when the figure is 4,500 of 250,000, they imagine that the sociopaths among the 250,000 will find a way to intercept the aid, that the amount won't be enough, and so on.
In these cases, it isn't that people empathize with the sufferers less; it's that differences in scope start to force them to imagine what might go wrong despite the charity's best intentions, and consequently bring to light the marginal-utility question others have raised. (Such as: "I could spend this money on a charity I already have a commitment to, and a relationship with, rather than throw it at a distant African problem that may or may not be able to use my gift optimally" ...if I'm going to spend it on a charity at all.)
Moreover, it would be interesting to see whether the experiments were designed in a way that excluded most of Hanson's list of cognitive biases - i.e., designed with the human version of Feynman's rat-in-a-maze experimental-design problem in mind: "with the maze the rat must walk through being buried in sand, so it can't hear the echoes of its footfalls," etc.
It would also be interesting to know where the experiments were conducted. In liberal areas of the country where people believe politicians' promises? In conservative areas of the country, where people believe a different set of politicians' promises? Or in areas where the government has clearly betrayed the trust of the people, and people believe a smaller-than-normal subset of the politicians' impossible promises? Just because a charity (or a politician) promises to do X doesn't mean that X will get done.
Perhaps simply by being asked two questions about two different sets of numbers, which form two different ratios, people become more concerned with what the numbers might mean. This is rational concern with the "details of a problem" - and it is also rational to show "less concern with details than with other, more pressing problems." Moreover, thinking is difficult for most people. To deal with (devote scarce brain space to) ANY problem other than the "most important" problems (optimal diet, optimal governance, optimal clothing, furtherance of a rationally chosen top-importance plan, etc.) is irrational.
For this reason, I wonder whether the researchers asked people if they were giving their undivided attention, and down-ranked those who admitted they were not. Moreover, could those who paid full attention have "less on their plate," and therefore be less rational overall? (The guy who lounges around a university library has less on his plate than the professor hurrying to give a lecture. Did these studies take place on university campuses? Did they take place in the city? In the country? All of the above? How were they controlled?)
Most people lack anything resembling a consistent philosophy. Some have more rational biases and heuristics than others. The use of heuristics isn't entirely irrational, given how "unwittingly destructive" most people are, by default.
I don't necessarily doubt that some people feel better about "solving a larger part of a whole problem" without a rational choice of scope or scale. Still others might react the opposite way, depending on the marketing. It's been my experience that most people are generally incapable of evaluating high-level philosophical ideas on their own merits: they tend to count the "number of points in favor" against the "number of arguments against" without properly assessing each argument for its relative weight.
Additionally, the larger number (250,000) might push the person into thinking about the large numbers of people a contribution could help - including in the country they live in. 4,500 out of 11,000 sounds like an anomaly; 4,500 out of 250,000 sounds like a small minority within a large majority. Most social training teaches people to be rather insensitive to minorities, and to accept that they "cannot save everyone." So the person focused on the 250,000 might as well focus on the minorities they could help in their own town, city, or state of 250,000. ...But that's not part of the study. (Or is it addressed by the study? I admit I didn't read up on it.)
Still, very interesting. I'll have to look at it more to see if I believe it's applying scope restrictions to problems that don't merit them.