[Note (6/30/19): I think I made interesting and valid points with this piece, but my framing was terrible. I should have clarified who are the people for whom I think these arguments should be persuasive, had a longer introductory section talking about why skilled hunting is much less problematic than slaughter and consumption of farmed animals, spun the fifth argument off into its own post, and maybe left off an epistemic status. I'd like to revise this at some point, but not sure when I'll get around to it.]

Content Warning

Explicit descriptions of wild animal suffering and serious discussion of killing and consuming animals as a potentially net-positive intervention.

Premise

Two months ago, I believed that skillful hunting was ethical because it prevented animals from suffering painful natural deaths in the wild. However, this was a privately-held belief, and after discussing it in person for the first time, I began to have second thoughts. Granted, most hunting isn’t skilled enough to prevent animals from suffering non-fatal injuries or prolonged deaths, and I doubt the lives of most LessWrong readers are impacted by their beliefs about the ethics of skilled hunting. But even if most readers are not passionate hunters or consumers of hunted meat, I expect that they will nonetheless find the issues discussed in this article—wild animal welfare, movement-building, habit formation, moral uncertainty, how to set epistemic priors—both interesting and relevant to their day-to-day lives. I also hope this article provides a useful example of how to examine an object-level belief and actually change your mind.

Meta-Information

Epistemic status: ~90% confident in conclusion conditioned on moral patienthood of hunted animals.

Epistemic effort: 14 hours of writing and new research. Preceded by substantial background research into wild-animal suffering, two ~15-minute conversations with other rationalists, and ~15 minutes of focused preliminary thinking about the issue.

Motivation for writing: Inspired by a conversation that I had at the NYC Solstice this year. Hoping to gain experience with less academic writing, and explore whether or not I’m personally comfortable with some wild-fish consumption.

Acknowledgements: Thanks to Ikaxas for feedback and edits.

Background Assumptions

I will be taking it for granted in this piece that all targets of hunting have enough sentience that we should worry about them being moral patients, and so we should care about their suffering. If you are confident that animals are not sentient, think of this article as an exercise in stepping out of your epistemic comfort zone—also, write up your reasoning! If we live in a world that is not quite that dark, I’d like to know so! (But I doubt we do.)

Introduction

According to Persis Eskander, who researched methods of lethal population control for elephants, kangaroos, wild hogs, and deer as part of the Wild-Animal Suffering Research project, common hunting practices do not lessen the suffering associated with an animal’s death—on the contrary, they may result in even more suffering than the animal would experience in the wild. In particular: elephants usually require several shots to die, making their deaths painful and prolonged; hunted kangaroos are often grievously injured without being killed; and kangaroo joeys whose mothers are killed either starve to death or are inhumanely killed in accordance with Australian law. Hunted wild hogs sometimes have particularly grisly deaths involving attacks by hunting dogs. Population-control programs which proceed by these specific methods of hunting do appear grossly unethical.

(‘Wild hog’ is a useful term here precisely because it is ambiguous between wild boars and feral pigs.)

But let us consider the least convenient possible hunter. Suppose you are skilled enough at riflery that you can consistently kill wild land animals in a manner that involves less suffering than a natural death in the wild. Further suppose that you have not developed a meat aversion, and would get some positive utility from eating hunted meat and from engaging in the activity of hunting. Is this something you should do? My plan for answering this question is to outline different reasons why skilled hunting could remain an unethical action, and then conclude by reviewing the plausibility of these arguments and addressing possible objections.

Arguments against skilled hunting

1. The animal could have a happy life

If the remaining duration of a wild animal’s life has substantially positive net utility, then killing the animal is immoral. An exchange between Michael Plant and Brian Tomasik is partly concerned with the balance of pain and pleasure in the life of a relatively long-lived animal, such as a zebra. Tomasik concedes to Plant that “many (maybe a majority of) people would, even after learning more, decide that a typical zebra who lives a full 25 years has a net positive life”.

(Tomasik actually thinks that even a large quantity of pleasure experienced throughout an animal’s life would probably not justify the short-lived but very intense pain of a natural death. Also, a typical zebra doesn’t live that long.)

Consider a hypothetical adult zebra with an expected lifespan of 18 years or so, which experiences a small amount of pleasure roughly evenly distributed over its pre-death lifespan. If the suffering associated with the zebra’s death outweighs the pleasure obtained during x years of its life, then once the zebra is 18−x years old a painless death would have positive expected utility.

(The death of a skillfully hunted animal is not actually painless, but I expect that it is sufficiently less painful that the difference in suffering between a skillfully inflicted death and a natural death could still be the equivalent of a few years of pleasant lifespan.)
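(For concreteness, here is a toy version of that calculation in code. The 18-year lifespan is from above; the constant pleasure rate and the size of the death-suffering term are arbitrary numbers I am making up purely to illustrate the 18−x threshold, not claims about real zebras.)

```python
# Toy model of the zebra argument. LIFESPAN comes from the text above;
# the pleasure rate and death-suffering magnitude are arbitrary
# placeholders chosen only to illustrate the threshold-age logic.

LIFESPAN = 18          # expected total lifespan in years
PLEASURE_PER_YEAR = 1  # pleasure accrued per year, in arbitrary units
DEATH_SUFFERING = 5    # suffering of a natural death, in the same units
                       # (so x = DEATH_SUFFERING / PLEASURE_PER_YEAR = 5 years)

def utility_of_letting_live(age):
    """Expected utility of the zebra's remaining natural life at a given age."""
    return PLEASURE_PER_YEAR * (LIFESPAN - age) - DEATH_SUFFERING

# A painless death has positive expected utility once the remaining pleasure
# no longer outweighs the suffering of a natural death, i.e. once
# age > LIFESPAN - x = 13 under this toy parameterization.
for age in (10, 13, 16):
    print(age, utility_of_letting_live(age))
```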

Whether or not hunting zebras is justified therefore depends on the distribution and quantity of pleasant experiences throughout a zebra’s life, the difference in the magnitude of suffering between a zebra’s natural and hunted deaths, and the age distribution of the zebra population being hunted. A field of welfare biology research would make these variables a little clearer, although I expect firm answers will be difficult to obtain.

A counterintuitive result of this model is that hunting very young animals could be more ethical than hunting adult animals, because high juvenile mortality rates among wild animals suggest that young animals are less likely to accumulate enough pleasant experiences to outweigh the suffering associated with a natural death.

A concrete example: according to the National Zoo, 80% of wild lions die before reaching adulthood at 3 years old, but should they survive that long they can be expected to live another 11 years. Assuming a roughly even age distribution among adult lions as well as among lion cubs, killing a lion cub robs it of about 3 years of additional life in expectation, while killing an adult lion robs it of about 5.5 years of additional life in expectation. If the suffering associated with a natural death outweighs the pleasure experienced by a wild lion over a 3-year span but not over a 5.5-year span, then we find it is ethical to give a painless death to the lion cub but not to the adult lion.

(However, I suspect that stimuli are more vividly experienced by young animals than by older animals, which would complicate this line of reasoning.)
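(To make the arithmetic explicit, here is a rough sketch of the expectation calculation under the stated assumptions. The figure of roughly one year of remaining life for a cub that dies before adulthood is my own assumption for illustration, not something from the National Zoo.)

```python
# Rough check of the lion example: 80% of cubs die before adulthood at 3,
# adults live ~11 more years, and ages are roughly uniform in each group.

ADULT_AGE = 3
ADULT_EXTRA_YEARS = 11
P_DIE_AS_CUB = 0.8

# A randomly chosen adult (uniform between age 3 and 14) has half of the
# adult span left on average.
expected_remaining_adult = ADULT_EXTRA_YEARS / 2                  # 5.5 years

# A randomly chosen cub (uniform between age 0 and 3): if it dies as a cub
# it loses on the order of a year (assumed); if it survives, it lives to
# about 14, i.e. roughly 12.5 more years on average.
expected_remaining_cub = (
    P_DIE_AS_CUB * 1.0
    + (1 - P_DIE_AS_CUB) * (ADULT_AGE + ADULT_EXTRA_YEARS - 1.5)
)                                                                  # ~3.3 years

print(expected_remaining_adult, expected_remaining_cub)
```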

Essentially, the greater the extent to which you think suffering dominates pleasure among wild animals targeted by hunters, the more you should be willing to entertain the possibility of skilled hunting as a potentially net-positive intervention from a utilitarian standpoint.

2. Eating hunted meat makes it harder for you to not eat farmed meat

I think some meat might be mildly addictive, possibly owing to the central nervous system stimulant hypoxanthine. Regardless, it shouldn’t be controversial that food consumption can be somewhat habit-forming, and meat in particular is a food people report craving. Google Trends suggests that people might have cravings for meat more often than for fruit or vegetables, but less often than for coffee or cheese.

(Preliminary searching did not turn up any academic research on the addictive qualities of meat; this provides some evidence that meat is not addictive to any significant extent, with the caveat that the animal agriculture industry retains substantial influence over nutritional research.)

If eating meat is in fact habit-forming, then making continued exceptions for hunted meat could make it more difficult for one to eventually quit farmed meat. Moreover, when adopting new habits, keeping things simple might help to conserve willpower. “Don’t eat any dead animals” seems like an easier rule to install than “Don’t eat any dead animals, unless you have a high degree of confidence that they were wild animals who were killed skillfully at a point in their lives such that them continuing to live until their natural death would cause more suffering than the inflicted death, or unless they are bivalves.”

"It seems a shame," the Walrus said,
"To play them such a trick,
After we've brought them out so far,
And made them trot so quick!"
The Carpenter said nothing but
"They’ve all the sentience of a brick!"
Adapted from Lewis Carroll’s Through the Looking-Glass

In other words, if you construct a simple but strong Schelling fence for yourself, you won’t be as tempted to move it later on.

On the other hand, a gradual transition to a non-carnivorous diet might be easier than an abrupt one, and consuming skillfully hunted meat during the transitional period would be more ethical than eating factory-farmed meat. Also, eating only skillfully hunted meat is a variant of the meat-eating habit which involves more friction, so it might be more helpful in scaling up to a fully non-carnivorous diet than eating small amounts of farmed meat.

3. Hunting normalizes animal slaughter and consumption

Likewise, building a social movement around ending cruelty towards animals becomes a much easier task if you keep things simple and don’t complicate matters by endorsing slaughtering animals for human consumption or enjoyment, even if only in limited circumstances.

Actually, I think the term Schelling flag might be a useful piece of new jargon: the ideology may not be the movement, but many movements have particular behaviors they expect of their members. I would expect less associated status-jockeying, and readier adherence from potential members, if the expected behavior exhibits characteristics of a Schelling point (i.e. members could have converged upon it in the absence of coordination).

A concrete example: effective altruists who take the Giving What We Can Pledge promise to donate 10% of their income, with a few situational exceptions. The 10% figure is mostly arbitrary, although religious precedents and its being a nice round number make it a good potential Schelling point. Some effective altruists regularly donate more than 10% of their income, and not all effective altruists take the pledge. Nonetheless, I suspect that keeping a 10% pledge as a key component of the movement helps sell effective altruism to the public and results in more charitable contributions than would otherwise be made.

The animal advocacy movement is no stranger to purity contests and a holier-than-thou public image. Meatless Monday and Vegan-Before-6 counter this tendency by creating more lenient Schelling flags for people to rally behind which are a little easier to adopt and could later be scaled up into stricter lifestyle habits. But I think the stricter rule of “Don’t eat dead animals” might be an especially compelling flag, and we should be wary of abandoning it.

In fact, I wouldn’t be too surprised if more absolutist campaigns like the Liberation Pledge, which requires that pledge-takers not sit where animals are being eaten, were crucial to achieving long-term reductions in animal suffering. Zach Groff characterizes the primary medium-term aim of the animal advocacy movement as achieving a stigmatization of meat similar to the stigmatization of cigarettes: this seems to me to be a very good strategy for expanding humanity’s moral circle. Of course, Meatless Monday and welfarist campaigns might be necessary to create the conditions in which such a stigmatization is possible, and it could still be helpful for the animal advocacy movement to retain “Don’t eat dead animals” and “Don’t kill animals” as Schelling flags in circumstances where absolutist advocacy is less effective.

For example, “Don’t eat dead animals” might produce a more positive public image, owing to the simplicity of the demand and its lower susceptibility to hypocrisy accusations; it might yield a better conversion rate of high-commitment members; and it might prevent more suffering than weaker flags would.

So while more strategy research is crucial to predicting with higher confidence what messaging the animal advocacy movement should adopt, you should nonetheless highly value your ability to signal support for a particularly easy-to-communicate position like “No killing, no eating”, especially if you think stigmatization of meat consumption might be important for reducing animal suffering.

4. Killing animals could be inherently immoral

Just as our factual beliefs carry uncertainties, so do our (meta-)ethical beliefs.

One way to treat moral uncertainty is to take into account weighted predictions of moral theories we have low confidence in when making decisions. For example, a surgeon who declines to kill a healthy patient for their organs in order to save many others via organ transplants might say in their defense, “Well, according to my favorite moral theory I should really just shut up and multiply, but I actually ascribe some non-negligible probability that I’m mistaken about morality in such a way that murder is inherently wrong.” Even if the surgeon puts high confidence in consequentialism, a low confidence in deontology might still impact their ethical decision-making.

Other than the level of guilt experienced by the perpetrator and potentially the degree of sentience of the victim, there is no morally-relevant distinction between killing an animal and killing a person [edit: except other instrumental concerns]. So if you would balk at killing a person against their will even when confident it would result in a net reduction of suffering, you should be concerned about killing an animal against its will even when confident it would result in a net reduction of suffering.

5. You should have a high prior that hunting is unethical

Disney movies like Bambi and The Fox and the Hound contain strong anti-hunting messages, with the former inspiring Paul McCartney to become an animal advocate. Indeed, it is much more difficult for me to conceive of hunting being an ethical activity after watching this scene from Bambi (the emotional impact is lessened if you haven’t seen the movie from the start).

Brienne Yudkowsky has advocated that one should not take the effectiveness of emotional appeals like this as evidence, because the ability to craft an effective emotional appeal is not reliant on the truth of the position being advanced. I think Brienne is right that at least within social circles which highly value truth-seeking, trying to advance a position by emotional manipulation is not a good idea.

But although emotional appeals can be effective without being aligned with the truth, an emotional appeal for a true position will be easier to create than one for a false position. Imagine a children’s film whose plot still revolves around deer hunting, but which portrays hunting in a positive light. I would predict that one could make such a film, but that even with sky-high production values it would probably be substantially less compelling than Bambi.

Switching gears slightly, we know that it is not difficult to bias ourselves away from uncomfortable conclusions: if you take our moral intuitions to represent some real truth, either about the world or about our own fundamental values, then this truth becomes obscured when it conflicts with our own non-moral preferences or our socialization. The question I wish I could pose to you at this point is: if it looks like it's evil and it sounds like it's evil, then maybe it's evil?

But whatever evidential weight your "true" moral intuitions carry as to whether or not something looks evil could be easily overshadowed by the weight imposed by our own biases and preferences, and by framing effects. (Morality might just consist of biases all the way down, but if that’s the case some biases are more important to us than others.) So I want to propose a method by which we can more robustly intuit how evil something would appear to us at first glance if our biases were weaker.

Importantly, in The Sword of Good and elsewhere, Eliezer points out that sometimes villains look like heroes, and vice-versa. But usually (not always), the people wearing black robes and waving around red lightsabers really are the bad guys! It's just that we sometimes decide that red lightsabers must not be that bad after all, especially when we're the ones holding them.

“I’m really just misunderstood.”

My hope is that by applying strong framing effects to each of two positions and seeing which one required stronger framing to be plausible, we can get closer to whatever a true moral intuition might be regarding how evil something looks.

So I seriously think the following questions actually provide a decent heuristic for setting priors:

  • How easy is it to conceive of a compelling children’s movie whose central conceit assumes that X is morally wrong?
  • How easy is it to conceive of a compelling children’s movie whose central conceit assumes that X is morally right?

If it is twice as easy to conceive of the X-is-wrong movie, then adopt a prior closer to 2:1 odds that X is wrong, and update that on whatever additional evidence you have accumulated.
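(As a sketch, here is what that prior-setting step looks like as an ordinary odds-form Bayesian update; the 2:1 figure is from above, and the likelihood ratio below is just a placeholder for whatever additional evidence you actually have.)

```python
# Odds-form Bayes update starting from the movie-heuristic prior.
# The evidence likelihood ratio is a made-up placeholder.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds that X is wrong, given prior odds and a likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds_wrong = 2.0   # twice as easy to conceive of the X-is-wrong movie
likelihood_ratio = 0.25  # hypothetical evidence favoring X-is-right at 4:1

posterior_odds = update_odds(prior_odds_wrong, likelihood_ratio)
print(posterior_odds, posterior_odds / (1 + posterior_odds))  # odds, probability
```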

The hope is that thinking in terms of what would be compelling to a child reduces biases imposed by your need to justify behavior and by elements of your socialization that a child would not have encountered, and that in most cases the ease of creating an emotional appeal tracks truth reasonably well.

Applied to hunting, and judging from the history of children’s cinema, our heuristic points strongly towards hunting being evil. I don’t think the evidence in favor of hunting being ethical would be strong enough to outweigh a decently strong prior that hunting is unethical in most circumstances, so insofar as you think this line of reasoning produces priors about moral judgements which are more accurate than those accumulated through standard life experiences, you should believe that hunting is probably unethical.

Conclusion

I think that at least one of the above reasons is likely to be valid, which would imply that it is unethical to hunt animals even in a highly skilled manner. In particular, I think that spreading the compassion-for-animals-is-important signal sent by living a non-carnivorous lifestyle is really important for preventing massive amounts of suffering in the present and in the future, so the argument I find most persuasive is that hunting or eating hunted meat severely reduces the quality of this signal (#3). I am more uncertain about the other arguments, although I think that each has enough possibility of being true that taken together they provide reasonably strong support for hunting being unethical.

But what about moral offsets? The underlying ethical assumptions behind moral offsets seem reasonable to me, but I think movement-building signaling effects are actually one of the biggest benefits of adopting a plant-based diet, so an accurate offset cost might be higher than expected.

In theory, an act of hunting with severe negative consequences for the animal could still be justified so long as you experience a sufficient amount of pleasure from hunting or from eating hunted meat. This is not a concern if you have a suffering-focused ethics or otherwise believe that your interest in hunting or eating hunted meat is not comparable with an animal’s interest in avoiding pain. But realistically, even if you’re a bit of a utility monster, you’re not that strong a utility monster.

Ozy writes:

“…most people– even most animal-rights advocates– agree that humans matter more than pigs: if you have a choice of giving a delicious meal to a pig or a delicious meal to a human, you should probably not give it to the pig.
(This, of course, does not justify torturing a pig to feed a delicious meal to a human.)”

Even if you think life just wouldn’t be worth living without bacon, you might turn out to be wrong about that. The pleasure associated with eating different foods and participating in different hobbies seems fairly malleable in most people, based on anecdotal evidence and on analogy with other preferences which are consistently reported to be less mutable than they actually are. Unfortunately, preliminary searching did not turn up anything reliable specifically on the malleability of food preferences in healthy adults.

Moreover, meat-substitutes are advanced enough that your current food preferences might remain largely satisfiable by a non-carnivorous diet. If you believe that eating hunted or farmed meat is unethical in a vacuum but that realistically you couldn’t handle a more plant-based diet, repeat the Litany of Tarski to yourself a few times (“If it is possible for me to not eat meat, then I want to believe it is possible for me to not eat meat. If it is not possible for me to not eat meat…”), and check if your beliefs are paying their rent. You might find that switching your eating habits is not quite as hard as you thought.

Don’t update whatever beliefs you’ve developed about the morality of hunting wild animals on this super-adorable picture of a baby deer, unless you also look up cute pictures of deer hunters and don’t find anything equally compelling, adjusting for any confounds that seem obvious. Just sit back and enjoy the cuteness.

[Edit: Originally posted to frontpage, now moved to personal blog.]

Comments

(note: I have not read the piece in detail yet, but skimmed each section)

I think being able to explore this sort of topic is important – I personally think wild animal suffering is most likely very important, and the only reason I'm not more worried about it is that I expect uploads/AI to make the future a really weird place that changes the nature of what sort of "wild animal suffering" is useful to think about. In the meanwhile, concrete discussions of the world-that-is seem useful to explore intuitions and philosophy.

But I have a concrete concern, and a slightly vague concern.

Including Overviews of Considerations

Whenever I'm reading a piece on wild animal suffering, I start to feel antsy if I can't tell from early on what range of considerations an author is applying. (In particular, if they don't at least touch on things like "when predators are removed from a system, a default thing that seems to happen is that death-by-predator is replaced by death-by-starvation", and address that concern in some fashion. This essay touches briefly upon this, but I don't think it delves into "how do you do population control without hunting?" I don't actually know the answer.)

So the object-level thing is "I'd like to see those specific concerns at least touched on in wild-animal-suffering pieces", and the meta-level thing is "I think it's helpful for pieces exploring complicated topics to start with a brief overview of what considerations the author is thinking about." i.e. before diving into any one issue, just have a table of contents of the issues at hand.

This leads to a second concern:

Aiming to Persuade, not Inform

AFAICT, all the major sections of this post are evidence in the "hunting is unethical" direction, and this raises a red flag – it's suspicious whenever a policy debate appears one-sided. If I see a piece that only lists arguments on one side of an argument, more often than not the piece isn't trying to do an evenhanded analysis of the situation; it's just trying to argue for a cause.

LessWrong 2.0 is specifically a place where "arguing to persuade" is frowned upon. Especially for frontpage posts in particular – the heuristic is "aim to explain, not persuade." Give people information, in such a way that they can evenhandedly form their own opinions about it. The line between explaining a concept and persuading is blurry, but this post seemed to cross the line in a few places - both in the one-sided arguments, and in the "Bambi" section towards the end.

I think there's an alternate version of this post that'd make sense for the frontpage, but as-is this seems more suited for your personal blog section.

Thanks for the feedback, Raemon!

Concrete Concerns

I'd like to see ["when predators are removed from a system, a default thing that seems to happen is that death-by-predator is replaced by death-by-starvation" and "how do you do population control without hunting?"] at least touched on in wild-animal-suffering pieces

I'd like to see those talked about too! The reason I didn't is I really don't have any insights on how to do population control without hunting, or on which specific interventions for reducing wild animal suffering are promising. I could certainly add something indicating I think those sorts of questions are important, but that I don't really have any answers beyond "create welfare biology" and "spread anti-speciesism memes so that when we have better capabilities we will actually carry out large interventions".

have a table of contents of the issues at hand

I had a bit of one in the premise ("wild animal welfare, movement-building, habit formation, moral uncertainty, how to set epistemic priors"), but it sounds like you might be looking for something different/more specific? You're not talking about a table of contents consisting of more or less the section headings, right?

Aiming to Persuade vs Inform

My methodology was "outline different reasons why skilled hunting could remain an unethical action", but I did a poor job of writing if the article seemed as though I thought each reason was likely to be true! I did put probabilities on everything to calculate the 90% figure at the top, but since I don't consider myself especially well-calibrated I thought it might be better to leave them off... The only reason that I think is actually more likely to be valid than wrong is #3, but I do assign enough probability mass to the others that I think they're of some concern.

I thought the arguments in favor of skilled hunting (making hunters happy and preventing animals from experiencing lives which might involve lots of suffering) were pretty apparent and compelling, but I might be typical-minding that. I also might be missing something more subtle?

In terms of whether that methodology was front-page appropriate, I do think that if the issue I was writing about was something slightly more political this would be very bad. But as I saw it, the main content of the piece isn't the proposition that skilled hunting is unethical, it's the different issues that come up in the process of discussing it ("wild animal welfare, movement-building, habit formation, moral uncertainty, how to set epistemic priors"). My goal is not to persuade people that I'm right and you must not hunt even if you're really good at it, but to talk about interesting hammers in front of an interesting nail.

[Edit: Moved to personal blog.]

I just came up with this name for the thing I think I am seeing here - it's artificial morality. It is when you feel some things are moral and some are not, then you come up with a theory on why some things are moral and others are not, then you apply that theory to come up with other things that should feel moral/immoral, and then you try to impose these "should" feelings on others even though there might not be a single person on earth who actually feels that way.

I resonate with this sentiment, but I'm also hesitant, since you could say similar things about linear algebra or prime factorizations, or most of mathematics:

You first come up with a theory of how to determine whether something is a prime number, based on the ones you know are primes, then you apply that theory to some numbers you intuitively thought were not prime to show that they are indeed prime, and then you impose that mathematical knowledge on others, even though there might currently not be a single person on earth who actually thinks the number you highlight is prime.

Or maybe a more historically accurate example is non-Euclidean geometry, which, if I remember things correctly, was assumed to be inconsistent since the 16th century, and a lot of the people who developed non-Euclidean geometry actually set out to prove its inconsistency. But based on the methods they applied to other mathematical theorems, they then applied those methods to non-Euclidean geometry and found that it should actually be consistent, and then they imposed that feeling of shouldness onto others, even though at the time the dominant mode of thinking was to believe non-Euclidean geometry was inconsistent.

This is not an accurate comparison, for the simple reason that “prime number” is a formally defined concept. The reason we think that 2 or 5 or 13 are prime isn’t that we have an un-formalized (and perhaps un-formalizable) intuition that they’re prime; it’s that we have a formal definition, and 2 and 5 and 13 fit it!

So when we consider a number like 2,345,387,436,980,981, our “intuitions” about whether it’s prime, or whether anyone “thinks” it’s prime, are just as irrelevant as they are to the question of whether 2 is prime. Either a number fits the formal definition, or it doesn’t fit the formal definition, or we are as yet unable to determine whether it fits the formal definition. Nothing else matters.

With moral intuitions, obviously, things could not be more different…

I think you are overestimating the degree to which we have formal definitions for core mathematical concepts, or at least to what degree it was possible to make progress before we had formalized a large chunk of modern mathematics.

While I agree that morality is generally harder to formalize than mathematics, I do think we are only talking about a difference in degree, instead of a difference in kind. The study of mathematics is the study of our intuitions about certain types of relationships between mental objects we have in our minds (which are probably informed by our real-world experience). We tend to develop mathematics in the areas where people's intuitions about their mental objects agree with one another, or where we can reliably induce similar intuitions with the use of thought experiments or examples (i.e. counting apples, number lines, falling objects, linear transformations, dividing pies between friends, etc.).

The study of morality is similarly the study of a different set of relationships, which might be less universal than our intuitions about mathematical relationships, but not different from them in kind. Good moral philosophy similarly tries to find out which moral intuitions people share, or to induce shared intuitions with the help of examples and thought experiments, and then tries to apply the standards of consistency (which is just another aesthetic intuition), logical argument (also just based on aesthetic intuitions), and conceptual elegance to extend their domain, similarly to mathematicians extending our intuitions about dividing pies to the concepts of the rational and real numbers.

Edit: A related point is that a proof in mathematics is just the application of a set of rules that seem self-evidently true to other mathematicians. If for some reason you do not find the principle of induction, or the concept of proof by contradiction, intuitively compelling, then those proofs will not be compelling to you. Mathematics is just built on our intuitions of how logical reasoning is supposed to work. Good moral philosophy is trying to establish the foundations of our intuitions of how moral reasoning is supposed to work, and then apply those foundations to come to a deeper understanding of morality, similarly to how mathematics applied its foundations to come to a much deeper understanding of what logical truth is.

I disagree with your evaluation of both mathematics and morality, but it seems like we’ve wandered into somewhat of a tangent. I think I prefer to table this discussion until another time, with apologies.

Seems good. It does seem pretty removed from the OP.

Indeed. This, essentially, describes utilitarianism as a whole, which one can summarize thus:

Step 1: Notice a certain moral intuition (roughly—that it’s better when people’s lives are good than when they are bad; and it’s better when good things happen to more people, than to fewer).

Step 2: Taking this moral intuition as an axiom, extrapolate it into an entire, self-consistent moral system, which addresses all possible questions of moral action.

Step 3: Notice that one has other moral intuitions, and that some of them conflict with the dictates of the constructed system.

Step 4: Dismiss these other moral intuitions as invalid, on the grounds of their conflict with the constructed system.

Bonus Step: Conveniently forget that the whole edifice began with a moral intuition in the first place (and how otherwise—what else was there for it to have begun from?).

While I agree that this is a common error mode in ethics, saying that it "describes utilitarianism as a whole" strikes me as a strawman.

How do you mean? I agree that it’s an error mode, but… what I described isn’t (as far as I can tell) “utilitarianism gone wrong”; it’s just what utilitarianism is, period. (That is, I certainly don’t think that what I was doing constitutes anything like “tarring all utilitarians by association with the mistaken ones”! It truly seems to me that utilitarianism, at its core, consists entirely[1] of the exact thing I described.)

[1] No doubt there are exceptions, as all moral theories, especially popular and much-discussed ones like utilitarianism, have esoteric variants. But if we consider the (generously defined) central cluster of utilitarian views, I stand by my comments.

Hmm, we might have different experiences of how the word utilitarianism is used in ethics. While your definition is adjacent to how I see it used, it is missing an important subset of moral views that I see as quite central to the term. As an example of this, see Sam Harris’ Moral Landscape, which argues for utilitarianism, but for a version that seems to not align with your critique/definition.

But arguing over definitions is a lot less exciting, and I think we both agree that this is a common error mode in ethics. So let’s maybe table this for now.

"How easy would it be to make a childrens movie" is not a good heuristic. Think of how easy it would be to make a movie to scare children away from getting their shots compared to a movie that gets them comfortable with doctors stabbing them with needles.

Yes, in general people sticking sharp things in you is bad, and if you don't know anything else you should probably start there. However this is just an uninformed starting point, and it does not call for suspicion when society has come to the consensus that vaccines are great -- they were all kids at some point too, and then they were convinced to overcome their initial aversion to needles. Let's not discard that evidence. You don't get to "back up" to childhood without persuading people why everything they've learned growing up is a lie. In real life things are complicated, and we often must conclude things that viscerally disagree with our first impulses if we are to progress beyond childhood.

I don't think the vaccination example shows that the heuristic is flawed: in the case of vaccinations, we do have strong evidence that vaccinations are net-positive (since we know their impact on disease prevalence, and know how much suffering there can be associated with vaccine-preventable diseases). So if we start with a prior that vaccinations are evil, we quickly update to the belief that vaccinations are good based on the strength of the evidence. This is why I phrased the section in terms of prior-setting instead of evidence, even though I'm a little unsure how a prior-setting heuristic would fit into a Bayesian epistemology. If there's decently strong evidence that skilled hunting is net-positive, I think that should outweigh any prior developed through the children's movie heuristic. But in the absence of such evidence, I think we should default to the naive position of it being unethical. Same with vaccines.

I'd be interested to know if you can think of a clearer counterexample though: right now, I'm basing my opinion of the heuristic on a notion that the duck test is valuable when it comes to extrapolating moral judgements from a mess of intuitions. What I have in mind as a counterexample is a behavior that upon reflection seems immoral but without compelling explicit arguments on either side, for which it is much easier to construct a compelling children's movie whose central conceit is that the behavior is correct than it is to construct a movie with the conceit that the behavior is wrong (or vice-versa).

The way we test our heuristics is by seeing if they point to the correct conclusions or not, and the way that we verify whether or not the conclusion is correct is with evidence. A single example is only a single example, of course, but I don't see how the failure mode can be illustrated any more clearly than in the case of vaccines -- and precisely because of the strong evidence we have that our initial impulses are misdirected here. What kind of example are you looking for, if it's supposed to satisfy the criteria of "justifiably and convincingly show that the heuristic is bad" and "no strong evidence that the heuristic is wrong here"?

I'll try to rephrase to see if it makes my point any clearer:

Yes, of all things that children immediately see as bad, most are genuinely bad. Vaccines may be good, but sharing heroin needles under the bridge is bad, stepping on nails is bad, and getting a bull horn through your leg is bad. It's not a bad place to start. However, if you hear a mentally healthy adult (someone who was once a child and has access to and uses this same starting point) talking about letting someone cut him open and take part of his body out, my first thought is that he was probably convinced to make an exception for surgeons and tumors/infected appendix or something. I do not think it calls for anywhere near enough suspicion to drive one to think "I need to remind this person that getting cut open is bad and that even children know this". It's not that strong a heuristic and we should expect it to be overruled frequently.

Bringing it up, even as a "prior", is suggesting that people are under-weighting this heuristic relative to its actual usefulness. This might be a solid point if there were evidence that things are simple, and that children are morally superior to adults. However, children are little assholes, and "you're behaving like a child" is not a compliment.

It might be a good thing to point out if your audience literally hadn't made it far enough in their moral development to even notice that it fails the "Disney test". However, I do not think that is the case. I think that it is a mistake, both relative to the LW audience and to the meat eating population at large, to assume that they haven't already made it that far. I think it's something that calls for more curiosity about why people would do these things that fail the Disney test.

I think normal priors on moral beliefs come from a combination of:

  • Moral intuitions
  • Reasons for belief that upon reflection, we would accept as valid (e.g. desire for parsimony with other high-level moral intuitions, empirical discoveries like "vaccines reduce disease prevalence")
  • Reasons for belief that upon reflection, we would not accept as valid (e.g. selfish desires, societal norms that upon reflection we would consider arbitrary, shying away from the dark world)

I think the "Disney test" is useful in that it seems like it depends much more on moral intuitions than on reasons for belief. In carrying out this test, the algorithm you would follow is (i) pick a prior based on the movie heuristic, (ii) recall all consciously held reasons for belief that seem valid, (iii) update your belief in the direction of those reasons from the heuristic-derived prior. So in cases where our belief could be biased by (possibly unconscious) reasons for belief that upon reflection we would not accept as valid, where the movie heuristic isn't picking up many of these reasons, I'd expect this algorithm to be useful.

In the case of vaccinations, the algorithm makes the correct prediction: the prior-setting heuristic would give you a strong prior that vaccinations are immoral, but I think the valid reasons for belief are strong enough that the prior is easily overwhelmed.

I can come up with a few cases where the heuristic points me towards other possible moral beliefs I wouldn't have otherwise considered, whose plausibility I've come to think is undervalued upon reflection. Here's a case where I think the algorithm might fail: wealth redistribution. There's a natural bias towards not wanting strong redistributive policies if you're wealthy, and an empirical case in favor of redistribution within a first-world country with some form of social safety net doesn't seem nearly as clear-cut to me as vaccines. My moral intuition is that hoarding wealth is still bad, but I think the heuristic might point the other way (it's easy to make a film about royalty with lots of servants, although there are some examples like Robin Hood in the other direction).

Also, your comments have made me think a lot more about what I was hoping to get out of the heuristic in the first place and about possible improvements; thanks for that! :-)

And, of course, animals would rather not die. This is perhaps the central reason why it's not ethical to go around shooting old people in nursing homes.

Not that I care about animal preferences all that much, either. My point is that the question of what to do with animals is not answered with animal preferences, and almost always answered with the asker's moral and aesthetic feelings about animals, however much they sometimes wish there was an external objective morality to tell them (or, better, other people) what to do. Would we rather some volume of space be full of wild animals being born, living, and dying, or would we rather that volume of space be a barren wasteland? Would we rather some plot of land contain chickens living their entire lives in cages, or would we rather it contain wheat? I have certain preferences about these questions, which may not be the same as your preferences.

Given the status of the questions, the arguments about them fall into predictable patterns of emotional appeal (Wouldn't you feel terrible looking at that barren wasteland where there used to be a forest?), Chesterton-esque reversals (I want hunting because I'm an animal lover), and attempts to claim some kind of high ground (Chickens are basically stimulus-response machines with limited plasticity, therefore it must be fine to eat them). Not all such arguments are bad to make, and they can help people clarify their positions, but I also think they're in some sense dangerous. If someone is looking for that One Right Answer, the danger is that they think they find it. I think strict negative utilitarianism (The One Right Answer is to minimize suffering) is an example of this failure mode.

Focusing on experiences in the last 0.1% (a few hours out of a few years) is likely to ... distort your conclusions. The main argument for non-hunters to allow sport hunting is that the hunters are necessary allies in ecology and population preservation. Any ethical considerations have to be "compared to what", and compared to simply having fewer individuals of game species, hunting is probably justified.