Deontology for Consequentialists

Consequentialists see morality through consequence-colored lenses.  I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.

Consequentialism1 is built around a group of variations on the following basic assumption:

  • The rightness of something depends on what happens subsequently.

It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article.  "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism".  I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints.  All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple".  But the bottom line is that, to get a consequentialist theory, something that happens after the act you judge must be the basis of your judgment.

To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.

Deontology relies on things that do not happen after the act in order to judge it.  This leaves facts about the time before the act and the time of the act itself to determine whether the act is right or wrong.  These may include, but are not limited to:

  • The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
  • The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
  • Historical facts (e.g. having made a promise, sworn a vow)
  • Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
  • Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
  • The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)

Individual deontological theories will have different profiles, just like different consequentialist theories.  And some of the theories you can generate using the criteria above have overlap with some consequentialist theories3.  The ultimate "overlap", of course, is the "consequentialist doppelganger", which applies the following transformation to some non-consequentialist theory X:

  1. What would the world look like if I followed theory X?
  2. You ought to act in such a way as to bring about the result of step 1.

And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you "yes" to the same acts and "no" to the same acts as X.

But extensional definitions are terribly unsatisfactory.  Suppose4 that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys).  You can then extensionally define "renate" as "has a spinal column", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates.  The two terms will tell you "yes" to the same creatures and "no" to the same creatures.

But what "renate" means intensionally has to do with kidneys, not spines.  To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory.  To try to capture a non-consequentialism with a doppelganger commits the same sin.  A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.

If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things.  Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them.  And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie.  But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."5... you, my friend, have missed the point.  The deontologist wasn't thinking any of those things.  The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to"6.

But the deontologist is not thinking anything with the terms "utility function", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib.  And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory.  (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again.  And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.")  The voices' instruction "happened" before the prospective act of lying.  The explosion at the North Pole is a subsequent potential event.  The promise to the reindeer is in the past.  The vengeful haunting comes up later.

A confusion crops up when one considers forms of deontology where the agent's epistemic state - real7 or ideal8 - is a factor.  It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight.  It may even look like that to the agent.  Per footnote 3, I'm ignoring expected utility "consequentialist" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.

The difference is subtle, and how it gets implemented depends on one's epistemological views.  Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y.  The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act.  His assessment stops with "Y will happen if the agent performs X, and Y is axiologically bad."  (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.)  Her assessment, on the other hand, is more complicated, and can branch in a few places.  Does the agent know that X will lead to Y?  If so, the wrongness of X might hinge on the agent's intention to bring about Y, or an obligation from another source on the agent's part to try to avoid Y which is shirked by performing X in knowledge of its consequences.  If not, then another option is that the agent should (for other, also deontic reasons) know that X will bring about Y: the ignorance of this fact itself renders the agent culpable, which makes the agent responsible for ill effects of acts performed under that specter of ill-informedness.
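
(Here is a rough sketch of that branching, for readers who think better in code.  The field names and the particular branch structure are my own hedged reconstruction of the paragraph above, not a canonical deontological algorithm.)

```python
# Hypothetical sketch: how the two assessments of act X (with predicted
# consequence Y) differ in structure. Field names are invented for illustration.

def consequentialist_judges_wrong(act):
    # The assessment stops here: X leads to Y, and Y is axiologically bad.
    return act["leads_to_Y"] and act["Y_is_bad"]

def deontologist_judges_wrong(act, agent):
    # Simplified: other deontic grounds for wrongness are ignored here.
    if not (act["leads_to_Y"] and act["Y_is_bad"]):
        return False
    if agent["knows_X_leads_to_Y"]:
        # Wrongness might hinge on intending Y, or on shirking a separate
        # obligation to avoid Y while acting in knowledge of the consequences.
        return agent["intends_Y"] or agent["shirks_obligation_to_avoid_Y"]
    # If the agent doesn't know, the question becomes whether the agent
    # should have known: culpable ignorance makes the agent responsible.
    return agent["should_have_known_X_leads_to_Y"]
```

Note that the consequentialist function never looks at the agent at all; everything about knowledge, intention, and obligation lives only on the deontological side, which is the structural point of the paragraph above.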

 

1Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general.  I apologize.  In practice, "consequentialism" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism.  "Utilitarianism" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.

2Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms.  Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise to the reader.  I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.

3Most notable in the overlap department is expected utility "consequentialism", which says not only that the best you can in fact do is to maximize expected utility, but also that this is what you absolutely ought to do.  Depending on how one cashes this out and who one asks, this may overlap so far as to not be a real form of consequentialism at all.  I will be ignoring expected utility consequentialisms for this reason.

4I say "suppose", but in fact the supposition may be actually true; Wikipedia is unclear.

5This is not intended to be a real model of anyone's consequentialist caveats.  But basically, if you interpret the deontologist's statement "lying is wrong" to have something to do with what happens after one tells a lie, you've got it wrong.

6As far as I know, no one seriously endorses "schizophrenic deontology".  I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views.  Please do not take it to be representative of deontic theories in general.

7Real epistemic state means the beliefs that the agent actually has and can in fact act on.

8Ideal epistemic state (for my purposes) means the beliefs that the agent would have and act on if (s)he'd demonstrated appropriate epistemic virtues, whether (s)he actually has or not.

Comments


This might be unfair to deontologists, but I keep getting the feeling that deontology is a kind of "beginner's ethics". In other words, deontology is the kind of ethical system you get once you build it entirely around ethical injunctions, which is entirely reasonable if you don't have the computing power to calculate the probable consequences of your actions with a very high degree of confidence. So you resort to what are basically cached rules that seem to work most of the time, and elevate those to axioms instead of treating them as heuristics.

And before I'm accused of missing the difference between consequentialism and deontology: no, I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the (either developmental-psychological or evolutionary) reason why people end up adopting deontology.

I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the (either developmental-psychological or evolutionary) reason why people end up adopting deontology.

Indeed, I get the impression from the article that a deontologist is someone who makes moral choices based on whether they will feel bad about violating a moral injunction, or good for following it... and then either ignorantly or indignantly denies this is the case, treating the feeling as evidence of a moral judgment's truth, rather than as simply a cached response to prior experience.

Frankly, a big part of the work I do to help people is teaching them to shut off the compelling feelings attached to the explicit and implicit injunctions they picked up in childhood, so I'm definitely inclined to view deontology (at least as described by the article) as a hopelessly naive and tragically confused point of view, well below the sanity waterline... like any other belief in non-physical entities, rooted in mystery worship.

I also seem to recall that previous psychology research showed that that sort of thinking was something people naturally tended to grow out of as they got older (stages of moral reasoning), but then I also seem to recall that there was some more recent dispute about that, and accusations of gender bias in the research.

Nonetheless, it's evolutionarily plausible that we'd have a simple, injunction-based emotional trigger system used in early life, until our more sophisticated reasoning abilities come online. And my experience working with my own and other people's brains seems to support this: when broad childhood injunctions are switched off, people's behavior and judgments in the relevant area immediately become more flexible and sophisticated.

Unfortunately, the deontological view sounds like it's abusing higher reasoning simply to retroactively justify whatever (cached-feeling) injunctions are already in place, by finding more-sophisticated ways to spell the injunctions so they don't sound like they have anything to do with one's own past shames, guilts, fears, and other experiences. (What Robert Fritz refers to as an "ideal-belief-reality conflict", or what Shakespeare called, "The lady doth protest too much, methinks." I.e., we create high-sounding ideals and absolute moral injunctions specifically to conceal our personally-experienced failings or conflicts around those issues.)

Of course, I could just be missing the point of deontology entirely. But I can't seem to even guess at what that point would be, because everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously.

Do you think it is likely that the emotional core of your claim was captured by the statement that "everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously"?

And then assuming this question finds some measure of ground.... how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?

I haven't read into your writings super extensively, but from what I read you have quite a lot of practice doing something like "soul dowsing" to find emotional reactions. Then you trace them back to especially vivid "formative memories" which can then be rationally reprocessed using other techniques - the general goal being to allow clearer thinking about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences. (I'm sure there's a huge amount more, but this is my gloss that's relevant to your post.)

I've never taken your specific suggestions along these lines into practice (for various reasons having mostly to do with opportunity costs) but the potential long term upside seems high and your post just seemed like a gorgeous opportunity to explore some of the longer term consequences of your suggested practices.

how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?

That's an interesting question. I don't think an ideal-belief-reality conflict is involved, though, as an IBRC motivates someone to try to convince the "wrong" others of their error, and I didn't feel any particular motivation to convince deontologists that they're wrong! I included the disclaimer because I'm honestly frustrated by my inability to grok the concept of deontological morality except in terms of a feeling-driven injunctions model. (Had I been under the influence of an IBRC, I'd have been motivated to express greater certainty, as has happened occasionally in the past.)

So, if there's any emotional reaction taking place, I'd have to say it was frustration with an inability to understand something... and the intensity level was pretty low.

In contrast, I've had discussions here last year where I definitely felt an inclination to convince people of things, and at a much higher emotional intensity -- so I fixed them. This doesn't feel to me like something in the same category.

It might be interesting to check out the frustration-at-inability-to-understand thing at some point, but at the moment it's a bit like a hard-to-reproduce bug. I don't have a specific trigger thought I can use to call up the feeling of frustration, so I would have no way at the moment to know if I actually changed anything.

from what I read you have quite a lot of practice doing something like "soul dowsing" to find emotional reactions.

I've never heard that phrase before, and Google actually finds your comment as the third-highest ranking result for the phrase. Is it of your invention?

In any event, I don't believe I do anything that could be called dowsing. It would be more appropriate to refer to it as a form of behavior modification via memory alteration.

We know that memories are fluid and their interpretations can be altered by suggestively-worded questions - mindhacking can be thought of as a way of using this brain bug, to fix other brain bugs.

Then you trace them back to especially vivid "formative memories" which can then be rationally reprocessed using other techniques - the general goal being to allow clearer thinking

So far, so good, but this bit:

about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences.

is beside the point. The purpose is that once you've altered the memory structure involved, your behavior -- both in the form of thought patterns and actions -- automatically changes to fall in line with the shift in the emotional relevance of what's stored in your memory. The memory goes from being an unconscious emotional trigger, to an easily forgotten irrelevancy.

Indeed, the only reason I even remember the content of what I changed the other day regarding my mother yelling at me, is because I make a deliberate practice of trying to retain such memories. If I don't write something down about what I change, the specific memories involved fade rapidly. I've had clients who within minutes or hours forgot they'd even had a problem in the first place.

Even trying to retain them in memory, the only record I now have of a change I made about two weeks ago, is the one I wrote down at the time. I remember remembering it, sure, but I don't remember it directly -- it's now more like a story I heard, than something that actually happened to me.

IOW, amnesia for the original issue or where it came from is a normal and expected side-effect of successfully changing an emotionally-charged memory into a merely factual anecdote about something that happened to you, once upon a time.

I've never taken your specific suggestions along these lines into practice (for various reasons having mostly to do with opportunity costs) but the potential long term upside seems high and your post just seemed like a gorgeous opportunity to explore some of the longer term consequences of your suggested practices.

The intended outcome is to provide a means of effective self-modification, one that does not require constant vigilance to monitor an ever-increasing number of biases or enforce an ever-increasing number of required behaviors. There are an enormous number of hardware biases that I cannot modify, but on a day-to-day basis, we are far more affected by our acquired, "software" biases anyway.

To give a concrete example, what I do can't modify the general tendency of humans to identify with ingroups and attack outgroups -- but it can remove entries from the "outgroup description table" in an individual's brain, one at a time!

This isn't much, but it's still something. I call it mindhacking, because that's really what it is: making use of the brain's bugs (e.g. malleable memory) to patch over some of its other bugs.

Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I'll work through it "live" right now.

[edit: the rest was too long to fit, so I've split it off into a separate, child comment as a reply to this one]

[split from parent comment due to length]

Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I'll work through it "live" right now.

I am frustrated at being unable to find common ground with what seems like abstract thoughts taken to the point of magical and circular thinking... and it seems the emotional memory is arguing theism and other subjects with my mother at a relatively young age... she would tie me in knots, not with clever rhetoric, but with sheer insanity -- logical rudeness writ large.

But I couldn't just come out and say that to her... not just because of the power differential, but also because I had no handy list of biases and fallacies to point to, and she had no attention span for any logically-built-up arguments.

Huh. No wonder I feel frustrated trying to understand deontology... I get the same, "I can't even understand this craziness well enough to be able to say it's wrong" feeling.

Okay, so what abilities did I lose to learned helplessness in this context? I learned that there was nothing I could say or do about logical craziness... which would certainly explain why I started and deleted my deontology comment multiple times before finally posting it... and didn't really try to achieve any common ground during it... I just took a victim posture and said deontology was nonsense. I also waited until I could "safely" say it in the context of someone else's comment, rather than directly addressing the post's author -- either to seek the truth or argue a clear position.

So, what do I want to replace that feeling of helplessness with? Would I rather be curious, so that I find out more about someone's apparently circular reasoning before dismissing it or fighting with it? How about compassionate, so I try to help the person find the flaw in their reasoning, if they're actually interested in the first place? What about amusement, so that I'm merely entertained and move on?

Just questioning these possibilities and bringing them into mind is already modifying the emotional response, since I've now had an (imagined) sensory experience of what it would be like to have those different emotions and behaviors in the circumstance. I can also see that I don't need to understand or persuade in such a circumstance, which feels like a relief. I can see that I didn't need to argue with my mother and frustrate myself; I could have just let her be who she was, and gone about my business.

So, this is a good time for a test. How do I feel about arguing theism with my mother? No big deal. How about deontology? Not a big deal either, but then it wasn't earlier, either, which is why I couldn't use it as a test directly. So the real test is the thought of "having to explain practical things to people hopelessly stuck in impractical thinking", which was reliably causing me to wrinkle my brow, hunch slightly, and sigh in frustration.

Now, instead of that, I get a mixed feeling of compassion/patience, felt lightly in the chest area... but there's still a hint of the old feeling, like a component is still there.

Ah... I see, I've dealt with only one need axis: connection/bonding, but not status/significance. A portion of the frustration was not being able to connect, and that portion I've resolved, but the other part was frustration with a status differential: the person making the argument is succeeding in lowering my status if I can't address their (nonsensical) argument.

Ugh. I hate status entanglements. I can't fix the brain's need for status, only remove specific entries from the "status threats" table. So let's see if we can take this one out.

I'm noticing that other memories of kids teasing or insulting me in school are coming up in connection with this -- the same fundamental circumstance of being in a conversation with no good answers, silence included. No matter what I do, I will lose face.

Ouch. This is a tough one. The rookie mistake here would be to think I have to be able to come up with better comebacks or something... that is, that I have to solve the problem in the outside world, in order to change my feelings. But if I instead change my feelings first on the inside, then my behavior will change to match.

So, what do I want to feel? Amused? Confident? As with other forms of learned helplessness, I am best off if I can feel the outcome emotions in advance of the outside world conforming to my preference. (That is, if I already feel the self-esteem I want from the interaction, before the interaction takes place, it is more likely that I will act in a way that results in a favorable interaction.)

So how would I feel if those kids were praising, instead of teasing or insulting? I would feel honored by the attention...

Boom! The memory just changed, popping into a new interpretation: the kids teasing and insulting me were giving me positive attention. This new interpretation drives a different feeling about it... along with a change to my feelings about certain discussions that have taken place on LW. ;-) Neither seems like a threat any more.

Similarly, thinking about being criticized in other contexts doesn't seem like a threat... I strangely feel genuinely honored that somebody took the time to tell me how they feel, even if I don't agree with it. Wow. Weird. ;-) (But then, as I'm constantly telling people, if your change doesn't surprise you in some way, you probably didn't really change anything.)

The change also sent me reeling for a moment, as suddenly the sense of loneliness and "outsider"-ness I had as a child begins to feel downright stupid and unnecessary in retrospect.

Wow. Deep stuff. Did not expect anything of this depth from your suggestion, JenniferRM. I think I will take the rest of my processing offline, as it's been increasingly difficult to type about this while doing it... trying to explain the extra context/purpose stuff has been kind of distracting anyway, while I was in the middle of doing things.

Whew. Anyway, I hope that was helpfully illustrative, nonetheless.

This comment has done more than anything else you've written to convince me that you aren't generally talking nonsense.

Thank you, that's very kind of you to say.

Overnight, I continued working on that thread of thoughts, and dug up several related issues. One of them was that I've also not been nearly as generous with giving positive attention and appreciation as I would've liked others to be. So I made a change to fix that this morning, and I actually felt genuine warmth and gratitude in response to your comment... something that I generally haven't felt, even towards very positive comments here in the past.

So really, thank you, as it was indeed both kind and generous of you to say it.

Thanks for the response.

That was way more than I was hoping to get back and went in really interesting directions - the corrections about the way the "reprocessing" works and the limits of reprocessing were helpful. The detail about the way vivid memories can no longer be accessed through the same "index" and become more like stories was totally unexpected and fascinating.

Also, that was very impressive in terms of just... raw emotional openness, I guess. I don't know about other readers, but it stirred up my emotions just reading about your issues as you worked through them. I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own. I'm a little frightened by how much trust you gave me I think? But I'm very grateful too.

(And yes, "soul dousing" is a term I made up for the post for the sake of trying to summarize things I've read by you in the past in my own words to see if I was hearing what you were trying to say.)

I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own.

Not as much as you might think. Bear in mind that by the time anybody reads anything I've written about something like that, it's no longer the least bit emotional for me -- it has become an interesting anecdote about something "once upon a time".

If it was still emotional for me after I made the changes, I would have more trouble sharing it, here or even with my subscribers. In fact, the reason I cut off the post where I did was because there was some stuff I wasn't yet "done" with and wanted to work on some more.

Likewise, it's a lot easier to admit to your failures and shortcomings if you are acutely aware that 1) "you" aren't really responsible, and 2) you can change. It's easier to face the truth of what you did wrong, if you know that your reaction will be different in the future. It takes out the "feeling of being a bad person" part of the equation.

Deciding whether a rule "works" based on whether it usually brings about good consequences, and following the rules that do and calling that "right", is called rule consequentialism, not deontology.

That's if you do it consciously, which I wasn't suggesting. My suggestion was that this would be a mainly unconscious process, similar to the process of picking up any other deeply-rooted preference during childhood / young age.

Sometimes I believe that

  • consequentialism calls possible worlds good
  • deontology calls acts good
  • virtue ethics calls people good

Of course, everyone uses "good" to label all three, but the difference is what is fundamental. cf Richard Chappell

My issue with deontology-as-fundamental is that, whenever someone feels compelled to defend a deontological principle, they invariably end up making a consequentialist argument.

E.g. "Of course lying is wrong, because if lying were the general habit, communication would be impossible" or variants thereof.

The trouble, it seems to me, is that consequentialist moralities are easier to ground in human preferences (current and extrapolated) than are deontological ones, which seem to beg for a Framework of Objective Value to justify them. This is borne out by the fact that it is extremely difficult to think of a basic deontological rule which the vast majority of people (or the vast majority of educated people, etc.) would uphold unconditionally in every hypothetical.

If someone is going to argue that their deontological system should be adopted on the basis of its probable consequences, fine, that's perfectly valid. But in that case, as in the story of Churchill, we've already established what they are, we're just haggling over the price.

As someone who is on the fence between noncognitivism and deontic/virtue ethics, I seem to be witnessing a kind of incommensurability of ethical theories going on in this thread. It is almost like Alicorn is trying to show us the rabbit, but all we are seeing is the duck and talking about the "rabbit" as if it is some kind of bad metaphor for a duck.

On Less Wrong, consequentialism isn't just another ethical theory that you can swap in and out of our web of belief. It seems to be something much more central and interwoven. This might be due to the fact that some disciplines like economics implicitly assume some kind of vague utilitarianism and so we let certain ethical theories become more central to our web of belief than is warranted.

I predict that Alicorn would have similar problems trying to get people on Less Wrong to understand Aristotelian physics, since it is really closer to common sense biology than Einsteinian physics (which I am guessing is very central to our web of belief).

You're confusing "understand" and "accept as useful or true".

Alicorn's post was a good summary of deontology. I understand it, I just don't agree with it. Richard Garfinkle's SF novel Celestial Matters, in addition to being a great read, also elucidates some consequences of Aristotelian physics, increasing the intuition of the reader. I certainly think that Garfinkle understands Aristotelian physics, and just as assuredly is unwilling to use it for orbital calculations in practice (though quite capable of doing the same for fiction purposes).

EDIT: reading further in the comments, I do indeed see plenty of people who don't understand deontic ethics. But just your comment about "not being able to swap in or out" does not at all demonstrate lack of understanding.

EDIT: I'd also appreciate a comment by the person who downvoted me about their reasoning (or anyone else who disagrees with the substance). I obviously think this is a fairly straight-forward point -- understanding and accepting are two different things. Wanting to swap a framework in or out of our web of belief is not purely about understanding it, but about accepting it. Related, certainly (it really helps to understand something in order to accept it), but not the same.

Deontology relies on things that do not happen after the act in order to judge it. This leaves facts about the time before the act and the time of the act itself to determine whether the act is right or wrong.

I'm not convinced that this 'backward-looking vs. forward-looking' contrast really cuts to the heart of the distinction. Note that consequentialists may accept an 'holistic' axiology according to which whether some future event is good or bad depends on what has previously happened. (For a simple example, retributivists may hold that it's positively good when those who are guilty of heinous crimes suffer. But then in order to tell whether we should relieve Bob's suffering, we need to look backwards in time to see whether he's a mass-murderer.) It strikes me as misleading to characterize this as involving a form of "overlap" with deontological theories. It's purely consequentialist in form; it merely has a more complex axiology than (say) hedonism.

The distinction may be better characterised in terms of the relative priority of 'the right' and 'the good'. Consequentialists take goodness (i.e. desirability, or what you ought to want) as fundamental, and thus have a teleological conception of action: the point of acting is to achieve some prior goal (which, again, needn't be purely forward-looking). Deontologists reverse this. They begin with a conception of how one ought to act (e.g. in ways that would be universalizable, or justifiable to others, or respect everyone's rights), and only subsequently derive the doppelganger's conception of the good (as you put it: "what would the world look like if I follow theory X").

An interesting consequence of this analysis is that so-called "rule consequentialism" turns out to be a borderline case: the good (what to want) is partly, but not entirely, prior to the right (how to act). I explain this in more detail in my post Analyzing Consequentialisms.

Deontology treats morality as terminal. Consequentialism treats morality as instrumental.

Is this a fair understanding of deontology? Or is this looking at deontology through a consequentialism lens?

The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to". But the deontologist is not thinking anything with the terms "utility function" [...]

Right, but what about Dutch book-type arguments? Even if I agree that lying is wrong and not because of its likely consequences, I still have to make decisions under uncertainty. The reason for trying to bludgeon everything into being a utility function is not that "the rightness of something depends on what happens subsequently." It's that, well, we have these theorems that say that all coherent decisionmaking processes have to satisfy these-and-such constraints on pain of being value-pumped. Anything you might say about rights or virtues is fine qua moral justification, but qua decision process, it either has to be eaten by decision theory or it loses.
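
(For anyone unfamiliar with the value-pump argument being invoked here, a toy illustration follows.  The agent, its cyclic preferences, and the penny fee are all my own made-up example, not part of the comment or of any particular theorem's statement.)

```python
# Toy "money pump": an agent with cyclic preferences (A > B, B > C, C > A)
# will pay a small fee for each swap it "prefers", ending up where it started
# but poorer. Coherence constraints of the kind mentioned above rule this out.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def will_trade(current, offered):
    """The agent pays one cent to swap its current item for one it prefers."""
    return (offered, current) in prefers

money, holding = 100, "B"  # start with 100 cents and item B
for offered in ["A", "C", "B", "A", "C", "B"]:  # the pump just cycles its offers
    if will_trade(holding, offered):
        holding = offered
        money -= 1

print(holding, money)  # -> B 94: same item as before, six cents poorer
```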

+10 karma for you!

I have a bit of a negative reaction to deontology, but upon consideration the argument would be equally applicable to consequentialism: the prescriptions and proscriptions of a deontological morality are necessarily arbitrary, and likewise the desideratum and disdesideratum (what is the proper antonym? Edit: komponisto suggests "evitandum", which seems excellent) of a consequentialist morality are necessarily arbitrary.

...which makes me wonder if the all-atheists-are-nihilists meme is founded in deontological intuitions.

desideratum...(what is the proper antonym?)

"Evitandum"?

Sounds even better in the plural: "The evitanda of the theory..."

You can then extensionally define "renate" as "has a spinal column"

But what "renate" means intensionally has to do with kidneys, not spines.

I don't think this has been covered here yet, so for those not familiar with these two terms: inferring something extensionally means you infer something based on the set to which an object belongs. Inferring something intensionally means you infer something based on the actual properties of the object.

Wikipedia formulates these as

An extensional definition of a concept or term formulates its meaning by specifying its extension, that is, every object that falls under the definition of the concept or term in question.

For example, an extensional definition of the term "nation of the world" might be given by listing all of the nations of the world, or by giving some other means of recognizing the members of the corresponding class.

and

an intensional definition gives the meaning of a term by specifying all the properties required to come to that definition, that is, the necessary and sufficient conditions for belonging to the set being defined.

For example, an intensional definition of "bachelor" is "unmarried man." Being an unmarried man is an essential property of something referred to as a bachelor. It is a necessary condition: one cannot be a bachelor without being an unmarried man. It is also a sufficient condition: any unmarried man is a bachelor.

Rule of thumb in case you forget which is which: EXTEnsion refers to "external" properties, like the group you happen to belong to, while INTEnsion refers to internal properties.

I can perfectly understand the idea that lying is fundamentally bad, not just because of its consequences. My problem is with how that doesn't imply that something else can be bad because it leads to other people lying.

The only way I can understand it is that deontology is fundamentally egoist. It's not hedonist; you worry about things besides your well-being. But you only worry about things in terms of yourself. You don't care if the world descends into sin so long as you are the moral victor. You're not willing to murder one Austrian to save him from murdering six million Jews.

Am I missing something?