A confusion about deontology and consequentialism

I think there’s a confusion in our discussions of deontology and consequentialism. I’m writing this post to try to clear up that confusion. First let me say that this post is not about any territorial facts. The issue here is how we use the philosophical terms of art ‘consequentialism’ and ‘deontology’.

The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.” There is of course an equivalently confused, though much less common, complaint about consequentialism.

This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘how do we know that it is wrong to kill?’ is not a normative but a meta-ethical question. Similarly, consequentialism contains in itself no explanation for why pleasure or utility are morally good, or why consequences should matter to morality at all. Nor does consequentialism/deontology make any claims about how we know moral facts (if there are any). That is also a meta-ethical question.

Some consequentialists and deontologists are also moral realists. Some are not. Some believe in divine commands, some are hedonists. Consequentialists and deontologists in practice always also subscribe to some meta-ethical theory which purports to explain the value of consequences or the source of injunctions. But consequentialism and deontology as such do not. In order to avoid strawmanning either the consequentialist or the deontologist, it's important to either discuss the comprehensive views of particular ethicists, or to carefully leave aside meta-ethical issues.

This Stanford Encyclopedia of Philosophy article provides a helpful overview of the issues in the consequentialist-deontologist debate, and is careful to distinguish between ethical and meta-ethical concerns.

SEP article on Deontology

Comments


This is right in spirit but wrong in letter:

The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.”

It's not a confusion; it's just something that isn't true. Deontological theories routinely provide explanations for these injunctions, and some of those explanations are interesting (though I guess that's subjective).

This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘why is it wrong to kill?’ is not a normative but a meta-ethical question.

No, it isn't. "Why is it wrong to kill?" is a great example of a normative question! Utilitarianism provides an answer. So does deontology. A meta-ethical question would be "what does it mean to say 'it's wrong to kill'?" An applied ethics question would be "in circumstances x, y, and z, is it wrong to kill?" Normative theories are absolutely supposed to answer this question.

Some consequentialists and deontologists are also moral realists. Some are not.

While I guess this could be logically possible, anyone who is not a moral realist needs to provide some kind of explanation for what exactly a normative theory is supposed to be doing and what it means to assert one if there are no moral facts. I say this as a non-realist who is pretty confused about what everyone thinks they're arguing over.

To be absolutely clear, my post is about the way academic philosophy happens to organize a certain debate, and I cite that SEP article as my major source. It will be very helpful to me if you point out where you disagree with the SEP article (and on what basis), or where you think I've misread it. (Look specifically at this section: http://plato.stanford.edu/entries/ethics-deontological/#DeoTheMet)

Again, there is no fact of the matter about what is a normative and what is a meta-ethical question, just a convention.

While I guess this could be logically possible, anyone who is not a moral realist needs to provide some kind of explanation for what exactly a normative theory is supposed to be doing and what it means to assert one if there are no moral facts.

Being a moral anti-realist is compatible with having, and following, a moral theory: you just think you have reasons to be moral which are not based on mind-independent facts. For example, you might think convention gives you reason to be moral, where conventionalism is traditionally described as a form of non-realism. (See: http://plato.stanford.edu/entries/moral-anti-realism/#ChaMorAntRea)

Being a deontologist (I think, and my post assumes) is even compatible with being a moral nihilist: "Moral principles must come in the form of injunctions, and there are no such injunctions."

Again, there is no fact of the matter about what is a normative and what is a meta-ethical question, just a convention.

Well, there is a fact of the matter; it's just a fact about a convention.

To be absolutely clear, my post is about the way academic philosophy happens to organize a certain debate, and I cite that SEP article as my major source. It will be very helpful to me if you point out where you disagree with the SEP article (and on what basis), or where you think I've misread it. (Look specifically at this section: http://plato.stanford.edu/entries/ethics-deontological/#DeoTheMet)

Yes, I understand what your post was arguing and I'm familiar with the way academic philosophy organizes this debate. And yes, deontology does not presume any particular metaethics. Your error, as far as I can tell, is in not getting what counts as a meta-ethical question and what doesn't. "Why is murder wrong?" is a straightforward question for normative theory. Kantian deontology, for instance, answers by saying "Murder is wrong because it violates the Categorical Imperative." And then there are a lot of details about what the Categorical Imperative is and how murder violates it. Rule utilitarianism says that murder is wrong because a rule that prohibits murder provides for the greatest good for the greatest number. And so on. Normative theories exist precisely to explain why certain actions are moral and other actions are immoral. A normative theory that can't explain why murder is (usually) immoral is a terribly incomplete normative theory.

Meta-ethics isn't about asking why normative claims are true. It is about asking what it means to make a moral claim. Thus the "meta". E.g. questions like "are there moral facts?"

At no point have I mentioned credentials to try and win a philosophical debate on Less Wrong. But if there is anything my philosophy degree makes me a minimal expert in, it's jargon.

Being a moral anti-realist is compatible with having, and following, a moral theory: you just think you have reasons to be moral which are not based on mind-independent facts.

I realize this, but this describes just about no one interested in debating consequentialism vs. deontology.

Being a deontologist (I think, and my post assumes) is even compatible with being a moral nihilist: "Moral principles must come in the form of injunctions, and there are no such injunctions."

Right. Like I said, it isn't logically impossible. It's just silly and sociologically implausible.

It is about asking what it means to make a moral claim.

Um, that's not a very interesting question, is it? Making a moral claim means, more or less: "I am right and you are wrong and you should do what I say". Note that this is not a morally absolutist view in the meta-ethical sense: even moral relativists make such claims all the time; they just admit that one's peculiar customs or opinions might affect the kinds of moral claims one makes.

What's a more interesting question is, "what should happen when folks make incompatible moral claims, or claim incompatible rights". This is what ethics (in the Rushworth Kidder sense of setting "right against right") is all about. When we do ethics, we abandon what might be called (in a perhaps naïve and philosophically incorrect way) "moral absolutism" or the simple practice of just making moral claims, and start debating them in public. Law, politics and civics are a further complication: they arise when societies get more complex and less "tribal", so simple ethical reasoning is no longer enough and we need more of a formal structure.

Making a moral claim means, more or less: "I am right and you are wrong and you should do what I say"

Well, your attempt to explain what a normative claim is actually includes a normative claim, so I don't think you've successfully dissolved the question. You are "right" about what? Facts? The world? What kind of facts? What kind of evidence can you offer to demonstrate that you are right and I am wrong?

"what should happen when folks make incompatible moral claims, or claim incompatible rights"

That "should" is there again.

When we do ethics, we abandon what might be called (in a perhaps naïve and philosophically incorrect sense) "moral absolutism" or the simple practice of just making moral claims, and start debating them in public.

I don't imagine there ever was a "simple practice of just making moral claims". Moral claims are generally claims made on others, and they are speech acts, which means they exist to communicate something. People don't spend a lot of time making moral claims that everyone agrees with and abides by, which means it's pretty much in the nature of a moral claim to be part of a debate or discussion.

I can't see the importance or the force of the distinction you are trying to make.

To be absolutely clear, my post is about the way academic philosophy happens to organize a certain debate

Note that "the way academic philosophy happens to organize" debates about ethics and morality should be taken with a huge grain of salt. Most people who engage in moral/ethical judgment in everyday life pay very little attention to moral philosophy in the academic sense.

In fact, as it happens, most of the public debate about ethics and morals takes place outside academic philosophy, and is hard to disentangle from debate involving politics, law and general worldviews or "cosmologies" (in the anthropological sense).

Very true, though I think it's important to acknowledge two things: a) philosophers like Mill and Kant have had a huge impact on everyday moral thinking in the west, and b) the kinds of moral debates we typically have on this site are not independent of academic philosophy.

I wonder if it would be more useful, instead of talking about consequentialist vs. deontological positions, to talk about consequence-based and responsibility/rights-based inference steps, which can possibly coexist in the same moral system; or possibly consequence-based and responsibility/rights-based descriptions of morally desirable conditions?

_TL;DR: I see lots of debates flinging around "consequentialism" and "utilitarianism" and "moral realism" and "subjectivism" and various other philosophical terms, but each time I look up one of them or ask for an explanation, it inevitably ends up being something I already believe, even when it comes from both sides of a heated argument. So it turns out "I am an X" for nearly all X I've ever seen on LessWrong. Here's what I think about all of this, in honest lay-it-out-there form. For a charitable reading, assume there is no sarcasm or trolling anywhere in this comment._

Hmm. So...

I believe that there is an objective system of verifiable, moral facts which can be true or false. [3]

These facts depend on certain objective features of the universe. [2]

However, if one is to ask a moral question without including a specific group-referent (though usually, "all humans" or "most humans" is implicit) from which one can extract that objective algorithm that makes things moral or not, then there is no "final word" or "ultimate truth" about which answer is right, and in fact the question seems hopelessly self-contradictory to me. [1]

To my understanding, since something inside humans determines moral judgments and also determines our opinions on morality, they are correlated, but by a separate cause that seems all too often ignored. I believe that eventually we may be able to understand how this separate black box inside humans makes decisions about morality, and then formulate equations to calculate how moral something is for a particular agent. [5]

Given that this "morality" thing only depends on the minds of people, it can also be said to be only about what these people think of it, in a very wide sense of the phrase. However, what opinions people generate and what turns out to be objectively moral are correlated, but from a third cause - one that is still a black box which we cannot describe very accurately (otherwise, you'd be able to show me exactly which neurons fire and in which order and exactly why that makes someone think and say that killing is, ceteris paribus, just simply bad and wrong).

Based on the above, if one were to remove humans altogether then I believe there would be no "right" or "wrong" or "moral" left at all, at least not in the way we mean those words. [1]

Since humans can influence the state of reality, and there's an algorithm somewhere that determines what we find moral, and humans "prefer" things that are moral (are programmed to act in a way that brings about higher quantities of this "moral" stuff), then if they do things which probably lead to more of it, they prefer that result, and if otherwise, they would have preferred that first result. It follows from this that humans should do things which (probably) lead to higher values of this moral stuff.

I would even go so far as to claim that anything that does not do the above, therefore breaks the rules of morality, and is not maximizing the algorithm of morality - they are breaking the rules and doing something outright wrong as a simple matter of mathematics. If they did the right thing, they would have more moral results. [4]

...So, what "am" I? What labels do I "get", having hereby cited, to the best of my understanding, the primary points and positions of all the sides of the debates here, with in my mind no contradiction whatsoever in any of the above?

Deontology and consequentialism aren't what's confusing me. What's confusing me is that there is all this confusion about the above points, and why people keep arguing about all of the above while to me they always seem to just be talking past each other and seem to show clear signs of having the exact same model of the world (though sometimes assigning different names to different nodes or even to the model itself), or at least of making the same predictions about morality.


Foundations, background and prior beliefs ("justifications"), to avoid more needless confusion:

0 - There is an objective, shared reality that we all live in that determines our experiences, not the other way around. This is simply the most natural, simplest way for the universe to function, and despite many claiming that there's a dragon in their garage, every single human I've ever met has always acted as if the above were true. With no exceptions.

1 - By studying the anthropic principle, physics, evolution, and some long-term history, I arrive at the conclusion that the universe isn't built for humans, that humans are a random artifact in it, and that if there were never any humans in the universe (or if we all go extinct), the rest of the universe will go on not giving a shit about us (as in, it can't give a shit, it doesn't have a mind, or even if it does, this mind just obviously doesn't do things according to human morality, otherwise we'd live in what humans would consider an ultimate heavenly utopia) and running along on its course of cruel physics and lifeforms suffering horribly before winking out of existence entirely for no reason or justification we might find valid or comforting right now.

2 - The Map is not the Territory, but the map is in the territory. Therefore, any part of the map is also an objective element of the landscape, an objective feature of reality. This includes human minds and human thoughts and human debates about morality.

3 - Since human minds are part of objective reality, they can be analyzed and objective, verifiable propositions can be stated about them.

4 - Numerically, some results will be better than others. However, if we assume that humans have multiple values as part of this "morality" thing and some of them have no relative ratios or bases of comparison, we run into game-theoretic issues of having to choose one of the Pareto optima in a significant number of possible games. It is theoretically possible in the real world that some issues will be this ambiguous, but in my experience, in the vast majority of cases a more careful evaluation of the same morality algorithm will reveal that some of the possible choices which in the immediate seem to fulfill different values ambiguously will ultimately lead to strictly dominant outcomes when weighed over their effects on the world and opportunities for more fulfillment of values that are part of what is being currently valued.

In other words, while some possible choices may have multiple "optimal" terminal-value payoffs in a way that makes the choice ambiguous if calculated naively, the instrumental contribution of each choice to future worldstates will almost always make one of the outcomes strictly better than all the others, because of the additional current value of generating worldstates that give better odds of generating more value in future games. (A toy numeric sketch of this point follows the list.)

5 - I reject all forms of dualism or claims that we can never possibly understand what goes on in human minds, on the basis of the same arguments and evidence cited in the Generalized Solution to P-Zombies. I can elaborate slightly more on this on request, but I personally consider the matter long resolved (as in, dissolved entirely such that I see no questions left to ask).
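As a toy illustration of point 4 (every number here is invented purely for the example, not drawn from anything above): two options whose immediate payoffs across two values are Pareto-incomparable can still end up with one strictly dominating once each option's expected contribution to future value is folded in.

```python
# Toy sketch only: invented numbers illustrating how adding downstream value
# can turn a Pareto-ambiguous choice into one with a strictly dominant option.

def totals(immediate, future):
    """Combine immediate per-value payoffs with expected future contributions."""
    return [i + f for i, f in zip(immediate, future)]

def strictly_dominates(a, b):
    """a strictly dominates b: at least as good on every value, better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Immediate payoffs on (value_1, value_2): neither option dominates the other.
option_a_now, option_b_now = [3, 1], [1, 3]
print(strictly_dominates(option_a_now, option_b_now))  # False
print(strictly_dominates(option_b_now, option_a_now))  # False

# Fold in each option's expected contribution to future fulfillment of both values.
option_a_total = totals(option_a_now, [4, 4])  # [7, 5]
option_b_total = totals(option_b_now, [1, 1])  # [2, 4]
print(strictly_dominates(option_a_total, option_b_total))  # True
```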

Edit: Added TLDR and fixed some of the formatting.

Thanks for the very clear direct account of your view. I do have one question: it seems that on your view it should be impossible to act according to your preferences, but morally wrongly. This is at least a pretty counterintuitive result, and may explain some of the confusion people have experienced with your view.

I believe that there is an objective system of verifiable, moral facts which can be true or false. ... Since human minds are part of objective reality, they can be analyzed and objective, verifiable propositions can be stated about them.

But those "objective" facts would only be about the intuitions of individual minds.

and then formulate equations to calculate how moral something is for a particular agent.

Same problem. A thinks it is moral to kill B; B thinks it is not moral to be killed by A. Where is the objective moral fact there? Objective moral facts (or at least intersubjective ones) need to resolve conflicts between individuals. You have offered nothing that can do that. Morality cannot just be a case of what an individual should do, because individuals interact.

However, if one is to ask a moral question without including a specific group-referent (though usually, "all humans" or "most humans" is implicit) from which one can extract that objective algorithm that makes things moral or not, then there is no "final word" or "ultimate truth" about which answer is right, and in fact the question seems hopelessly self-contradictory to me

Then morality is not so objective that it is graven into the very fabric of the universe. The problem remains that what you have presented is too subjective to do anything useful. By all means present a theory of human morality that is indexed to humans, but let it regulate interactions between humans.

However, what opinions people generate and what turns out to be objectively moral are correlated, but from a third cause - one that is still a black box which we cannot describe very accurately ...

That is hard to interpret. Why should opinions be what is "objectively moral"? You might mean there is nothing more to morality than people's judgements about what is good or bad, but that is not an objective feature of the universe; it is mind projection. That the neural mechanisms involved are objective does not make what is projected by them objective. If objective neural activity makes me dream of unicorns, unicorns are not thereby objective.

And in any case, what is important is co-ordinating the judgements of individuals in the case of conflict.

Since humans can influence the state of reality, and there's an algorithm somewhere that determines what we find moral,

"We" individually, or "we" collectively? That is a very important point to skate over.

and humans "prefer" things that are moral (are programmed to act in a way that brings about higher quantities of this "moral" stuff), then if they do things which probably lead to more of it, they prefer that result, and if otherwise, they would have preferred that first result. It follows from this that humans should do things which (probably) lead to higher values of this moral stuff.

That seems to be saying that it is instrumentally in people's interests to be moral. But if that were always straightforwardly the case, then there would be no issues of sacrifice and self-restraint involved in morality, which is scarcely credible. If I lay down my life for my country, that might lead to the greater good, but how good is it for me? The issue is much more complex than you have stated.

I think we're fairly close, but have one major difference.

I'd say there are moral facts. These moral facts are objective features of the universe. These facts are about the evaluations that could be made by the moral algorithms in our heads. Where I differ with you is in the number of black boxes. "We" don't have "a" black box. "Each" of us has our own black box.

Moral, as evaluated by you, is the result of your algorithm given the relevant information and sufficient processing time. I think this is somewhat in line with EY, though I can never tell if he is a universalist or not. Moral is the result of an idealized calculation of a moral algorithm, where the result of the idealization is often different than the actual because of lack of information and processing time.

A case could be made for this view to fall into many of the usual categories. Moral relativism. Ethical subjectivism. Moral realism. Moral anti-realism. About the only thing ruled out is universalism.

For Deontology vs. Consequentialism, it gets similarly murky.

Do consequentialists really do de novo analysis of the entire state of the universe again and again all day? If I shoot a gun at you, but miss, is it "no harm, no foul"? When a consequentialist actually thinks about it, all of a sudden I expect a lot of rules of behavior to come up. There will be some rule consequentialism. Then "acts" will be seen as part of the consequences too. Very quickly, we're seeing all sorts of aspects of deontology when a consequentialist works out the details.

The same thing with deontologists. Does the rule absolutely always apply? No? Maybe it depends on context? Why? Does it have something to do with the consequences in the different contexts? I bet it often does. Similarly, the "though the heavens fall, I shall do right" attitude is rarely taken in hypotheticals, and would be more rarely taken in actual fact. You won't tell a lie to keep everyone in the world from a fiery death? Really? I doubt it.

I'd expect a social animal to have both consequentialist and deontologist moral algorithms, but that there'd be significant feedback between the two. I'd expect the relative weighting of those algorithms to vary from animal to animal, much in the same way Haidt finds the relative strengths of the moral modalities he has identified vary between people.
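To make that weighting idea concrete, here is a minimal sketch (the scoring functions, weights, and numbers are invented placeholders, not a claim about how any actual moral cognition works):

```python
# Toy sketch of "two weighted moral algorithms"; everything here is a
# placeholder for whatever the real algorithms would compute.

def consequence_score(action):
    # Placeholder: how good the expected outcomes of the action are.
    return action["expected_outcome_value"]

def rule_score(action):
    # Placeholder: how well the action itself conforms to the agent's rules.
    return action["rule_conformity"]

def moral_judgement(action, w_consequence=0.6, w_rule=0.4):
    """Blend the two algorithms; different agents would carry different weights."""
    return w_consequence * consequence_score(action) + w_rule * rule_score(action)

action = {"expected_outcome_value": 0.9, "rule_conformity": 0.2}
print(moral_judgement(action))                                 # outcome-leaning agent
print(moral_judgement(action, w_consequence=0.2, w_rule=0.8))  # rule-leaning agent
```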

Most of the argument over consequentialism and deontology probably comes more from how they are used as rationalizations for your preferences in moral modalities than from the relative weighting of your consequentialist and deontological algorithms anyway. The meta argument over consequentialism vs. deontology is a way to avoid the hard thinking that would drive both algorithms to a settled conclusion.

This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘why is it wrong to kill?’ is not a normative but a meta-ethical question. Similarly, consequentialism contains in itself no explanation for why pleasure or utility are morally good, or why consequences should matter to morality at all. Nor does consequentialism/deontology make any claims about how we know moral facts (if there are any). That is also a meta-ethical question.

Either D-ology or C-ism can be taken meta-ethically or at the object level (i.e., following rules blindly or calculating consequences without knowing why).

Some consequentialists and deontologists are also moral realists.

Surely most are. C-ism is moral realism justified empirically; D-ology is moral realism justified logically. Out of the two uses, the former, the meta-ethical, is more usual.

The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.”

I think if someone said this, what they probably mean (i.e., would say once you cleared up their confusion about terminology and convention) is something like "deontology does not seem compatible with any meta-ethical theories that I find plausible, while consequentialism does, and that is one reason why I'm more confident in consequentialism than in deontology." Is this statement sufficiently unconfused?

The best distinction I've seen between the two consists in whether you honour or promote your values.

Say I value not-murdering.

If I'm a consequentialist, I'll act on this by trying to maximise the amount of non-murdering (or minimising the amount of murdering). This might include murdering someone who I knew was a particularly prolific murderer.

If I'm a deontologist, I'll act on this value by honouring it: I'll withhold from murdering anyone, even if this might increase the total amount of murdering.

Unfortunately I can't remember offhand who came up with this analysis.
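In code, the honour/promote contrast might look something like this minimal sketch (the toy options, murder counts, and function names are invented purely for illustration, not anyone's actual theory):

```python
# Toy sketch of honouring vs. promoting a value.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    i_murder: bool          # does choosing this option involve me murdering?
    expected_murders: int   # total murders expected if I choose this option

options = [
    Option("kill the prolific murderer", i_murder=True, expected_murders=1),
    Option("do nothing", i_murder=False, expected_murders=5),
]

def promote(options):
    """Consequentialist reading: pick whatever minimises total murdering."""
    return min(options, key=lambda o: o.expected_murders)

def honour(options):
    """Deontological reading: never murder yourself, whatever the totals say."""
    permissible = [o for o in options if not o.i_murder]
    return permissible[0] if permissible else None

print(promote(options).name)  # "kill the prolific murderer"
print(honour(options).name)   # "do nothing"
```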

Say I value not-murdering.

If I'm a consequentialist, I'll act on this by trying to maximise the amount of non-murdering (or minimising the amount of murdering). This might include murdering someone who I knew was a particularly prolific murderer.

If I'm a deontologist, I'll act on this value by honouring it: I'll withhold from murdering anyone, even if this might increase the total amount of murdering.

This sounds like they are, in fact, valuing different things altogether. The consequentialist negvalues the amount of murdering there is, while the deontologist negvalues doing the murdering.

If the deontologist and consequentialist both value not-murdering-people, then the consequentialist takes the action which leads to them not having murdered someone (so they don't murder, even if it means more total murdering), and the deontologist is as quoted.

If they both negvalue the total amount of murders, the deontologist will honour not-doing-things-which-are-more-total-murder, which by logical necessity implies ¬(not murdering this one time), which means they also murder for the sake of less murdering.

It seems the distinction is, again, merely one of degree and probability estimates, and a difference in the general conceptspace of where people from both "camps" tend to pinpoint their values. To rephrase: the only real difference between consequentialists and deontologists seems to be the language they use and the general empirical clusters of things they value more, including different probability estimates for the likelihood of some things.