Pluralistic Moral Reductionism

Part of the sequence: No-Nonsense Metaethics

Disputes over the definition of morality... are disputes over words which raise no really significant issues. [Of course,] lack of clarity about the meaning of words is an important source of error… My complaint is that what should be regarded as something to be got out of the way in the introduction to a work of moral philosophy has become the subject matter of almost the whole of moral philosophy...

Peter Singer

 

If a tree falls in the forest, and no one hears it, does it make a sound? If by 'sound' you mean 'acoustic vibrations in the air', the answer is 'Yes.' But if by 'sound' you mean an auditory experience in the brain, the answer is 'No.'

We might call this straightforward solution pluralistic sound reductionism. If people use the word 'sound' to mean different things, and people have different intuitions about the meaning of the word 'sound', then we needn't endlessly debate which definition is 'correct'.1 We can be pluralists about the meanings of 'sound'. 

To facilitate communication, we can taboo and reduce: we can replace the symbol with the substance and talk about facts and anticipations, not definitions. We can avoid using the word 'sound' and instead talk about 'acoustic vibrations' or 'auditory brain experiences.'
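The move can be made concrete with a toy sketch (Python, purely illustrative — the function and the two reduction labels are mine, standing in for the tabooed word): once 'sound' is replaced by its reductions, each reduction yields its own definite answer to the falling-tree question.

```python
# Toy sketch of 'taboo and reduce': the ambiguous word 'sound' is
# replaced by two distinct reductions, each giving a definite answer
# to "does the unheard falling tree make a sound?"

def tree_makes_sound(reduction: str) -> bool:
    """Answer the falling-tree question under a stipulated reduction."""
    answers = {
        "acoustic vibrations in the air": True,   # vibrations occur with or without listeners
        "auditory experience in a brain": False,  # no listener, so no experience
    }
    if reduction not in answers:
        # e.g. 'acoustic messenger fairies': the definition fails to refer
        raise ValueError(f"no known reduction of 'sound': {reduction!r}")
    return answers[reduction]

print(tree_makes_sound("acoustic vibrations in the air"))  # True
print(tree_makes_sound("auditory experience in a brain"))  # False
```

The disagreement dissolves not because one party wins, but because the tabooed question splits into two questions with uncontroversial answers.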

Still, some definitions can be wrong:

Alex: If a tree falls in the forest, and no one hears it, does it make a sound?

Austere MetaAcousticist: Tell me what you mean by 'sound', and I will tell you the answer.

Alex: By 'sound' I mean 'acoustic messenger fairies flying through the ether'.

Austere MetaAcousticist: There's no such thing. Now, if you had asked me about this other definition of 'sound'...

There are other ways for words to be wrong, too. But once we admit to multiple potentially useful reductions of 'sound', it is not hard to see how we could admit to multiple useful reductions of moral terms.

 

Many Moral Reductionisms

Moral terms are used in a greater variety of ways than sound terms are. There is little hope of arriving at the One True Theory of Morality by analyzing common usage or by triangulating from the platitudes of folk moral discourse. But we can use stipulation, and we can taboo and reduce. We can use pluralistic moral reductionism2 (for austere metaethics, not for empathic metaethics).

Example #1:

Neuroscientist Sam Harris: Which is better? Religious totalitarianism or the Northern European welfare state?

Austere Metaethicist: What do you mean by 'better'?

Harris: By 'better' I mean 'that which tends to maximize the well-being of conscious creatures'.

Austere Metaethicist: Assuming we have similar reductions of 'well-being' and 'conscious creatures' in mind, the evidence I know of suggests that the Northern European welfare state is more likely to maximize the well-being of conscious creatures than religious totalitarianism.

Example #2:

Philosopher Peter Railton: Is capitalism the best economic system?

Austere Metaethicist: What do you mean by 'best'?

Railton: By 'best' I mean 'would be approved of by an ideally instrumentally rational and fully informed agent considering the question ‘How best to maximize the amount of non-moral goodness?' from a social point of view in which the interests of all potentially affected individuals are counted equally'.

Austere Metaethicist: Assuming we agree on the meaning of 'ideally instrumentally rational' and 'fully informed' and 'agent' and 'non-moral goodness' and a few other things, the evidence I know of suggests that capitalism would not be approved of by an ideally instrumentally rational and fully informed agent considering the question ‘How best to maximize the amount of non-moral goodness?' from a social point of view in which the interests of all potentially affected individuals were counted equally.

Example #3:

Theologian Bill Craig: Ought we to give 50% of our income to efficient charities?

Austere Metaethicist: What do you mean by 'ought'?

Craig: By 'ought' I mean 'approved of by an essentially just and loving God'.

Austere Metaethicist: Your definition doesn't connect to reality. It's like talking about atom-for-atom 'indexical identity' even though the world is made of configurations and amplitudes instead of Newtonian billiard balls. Gods don't exist.

But before we get to empathic metaethics, let's examine the standard problems of metaethics using the framework of pluralistic moral reductionism.

 

Cognitivism vs. Noncognitivism

One standard debate in metaethics is cognitivism vs. noncognitivism. Alexander Miller explains:

Consider a particular moral judgement, such as the judgement that murder is wrong. What sort of psychological state does this express? Some philosophers, called cognitivists, think that a moral judgement such as this expresses a belief. Beliefs can be true or false: they are truth-apt, or apt to be assessed in terms of truth and falsity. So cognitivists think that moral judgements are capable of being true or false. On the other hand, non-cognitivists think that moral judgements express non-cognitive states such as emotions or desires. Desires and emotions are not truth-apt. So moral judgements are not capable of being true or false.3

But why should we expect all people to use moral judgments like "Stealing is wrong" to express the same thing?4

Some people who say "Stealing is wrong" are really just trying to express emotions: "Stealing? Yuck!" Others use moral judgments like "Stealing is wrong" to express commands: "Don't steal!" Still others use moral judgments like "Stealing is wrong" to assert factual claims, such as "stealing is against the will of God" or "stealing is a practice that usually adds pain rather than pleasure to the world."

It may be interesting to study all such uses of moral discourse, but this post focuses on addressing cognitivists, who use moral judgments to assert factual claims. We ask: Are those claims true or false? What are their implications?

 

Objective vs. Subjective Morality

Is morality objective or subjective? It depends which moral reductionism you have in mind, and what you mean by 'objective' and 'subjective'.

Here are some common5 uses of the objective/subjective distinction in ethics:

  • Moral facts are objective1 if they are made true or false by mind-independent facts, otherwise they are subjective1.
  • Moral facts are objective2 if they are made true or false by facts independent of the opinions of sentient beings, otherwise they are subjective2.
  • Moral facts are objective3 if they are made true or false by facts independent of the opinions of humans, otherwise they are subjective3.
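Note that the three senses are nested: a fact independent of all minds is independent of anyone's opinions, and a fact independent of anyone's opinions is independent of human opinion. A toy Python sketch (my own illustrative encoding, not standard metaethical notation) makes the classification mechanical:

```python
# Toy encoding of the three objectivity senses. A theory is summarized
# by what its moral facts depend on; each sense is just the negation
# of one kind of dependence. Illustrative only.

def objectivity(depends_on_minds: bool,
                depends_on_opinions: bool,
                depends_on_human_opinions: bool) -> dict:
    return {
        "objective1": not depends_on_minds,           # mind-independent
        "objective2": not depends_on_opinions,        # opinion-independent
        "objective3": not depends_on_human_opinions,  # human-opinion-independent
    }

# Harris: well-being facts are mind-dependent but opinion-independent.
print(objectivity(True, False, False))
# -> {'objective1': False, 'objective2': True, 'objective3': True}

# Craig: moral facts depend on God's opinion, but not on human opinion.
print(objectivity(True, True, False))
# -> {'objective1': False, 'objective2': False, 'objective3': True}
```

The point of the sketch is only that, once the senses are stipulated, classifying a theory is trivial bookkeeping; the interesting work is in choosing and defending the stipulations.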

Now, consider Harris' reduction of morality to facts about the well-being of conscious creatures. His theory of morality is objective3 and objective2, because facts about well-being are independent of anyone's opinion. Even if the Nazis had won WWII and brainwashed everybody to have the opinion that torturing Jews was moral, it would remain true that torturing Jews does not increase the average well-being of conscious creatures. But Harris' theory of morality is not objective1, because facts about the well-being of conscious creatures are mind-dependent facts.

Or, consider Craig's theory of morality in terms of divine approval. His theory doesn't connect to reality, but still: is it objective or subjective? Craig's theory says that moral facts are objective3, because they don't depend on human opinion (God isn't human). But his theory doesn't say that morality is objective2 or objective1, because for him, moral facts depend on the opinion of a sentient being: God.

A warning: ambiguous terms like 'objective' and 'subjective' are attractors for sneaking in connotations. Craig himself provides an example. In his writings and public appearances, Craig insists that only God-based morality can be objective.6 What does he mean by 'objective'? On a single page,7 he uses 'objective' to mean "independent of people's opinions" (objective2) and also to mean "independent of human opinion" (objective3). I'll assume he means that only God-based morality can be objective3, because God-based morality is clearly not objective2 (Craig's God is a person, a sentient being).

And yet, Craig says that we need God in order to have objective3 morality as if this should be a big deal. But hold on. Even a moral code defined in terms of the preferences of Washoe the chimpanzee is objective3. So not only is Bill's claim that only God-based morality can be objective3 false (because Harris' moral theory is also objective3), but it's also trivially easy to come up with a moral theory that is 'objective' in Craig's (apparent) sense of the term (that is, objective3).8

Moreover, Harris' theory of morality is objective in a 'stronger' sense than Craig's theory of morality is. Harris' theory is objective3 and objective2, while Craig's theory is merely objective3. Whether he's doing it consciously or not, I wonder if Craig is using the word 'objective' to sneak in connotations that don't actually apply to his claims once you pay attention to what he actually means by the word 'objective'. If Craig told his audience that we need God for morality to be 'objective' in the same sense that morality defined in terms of the preferences of a chimpanzee is 'objective', would this still have his desired effect on his audience? I doubt it.

Once you've stipulated your use of 'objective' and 'subjective', it is often trivial to determine whether a given moral reductionism is 'objective' or 'subjective'. But what of it? What force should those words carry after you've tabooed them? Be careful not to sneak in connotations that don't belong.

 

Relative vs. Absolute Morality

Is morality relative or absolute? Again, it depends which moral reductionism you have in mind, and what you mean by 'relative' and 'absolute'. Again, we must be careful about sneaking in connotations.

 

Moore's Open Question Argument

"He's an unmarried man, but is he a bachelor?" This is a 'closed' question. The answer is obviously "Yes."

In contrast, said G.E. Moore, all questions of the type "Such and such is X, but is it good?" are open questions. It feels like you can always ask, "Yes, but is it good?" In this way, Moore resists the identification of 'morally good' with any set of natural facts. This is Moore's Open Question Argument. Because some moral reductionisms do identify 'good' or 'right' with a particular X, those reductionisms had better have an answer to Moore.

The Yudkowskian response is to point out that when cognitivists use the term 'good', their intuitive notion of 'good' is captured by a massive logical function that can't be expressed in simple statements like "maximize pleasure" or "act only in accordance with maxims you could wish to be a universal law without contradiction." Even if you think everything you want (or rather, want to want) can be realized by (say) maximizing the well-being of conscious creatures, you're wrong. Your values are more complex than that, and you can't see the structure of your own values. That is why it feels like an open question remains no matter which simplistic identification of "Good = X" you choose.

The problem is not that there is no way to identify 'good' or 'right' (as used intuitively, without tabooing) with a certain X. The problem is that X is huge and complicated and we don't (yet) have access to its structure.

But that's the response to Moore after righting a wrong question - that is, when doing empathic metaethics. When doing mere pluralistic moral reductionism, Moore's argument doesn't apply. If we taboo and reduce, then the question of "...but is it good?" is out of place. The reply is: "Yes it is, because I just told you that's what I mean to communicate when I use the word-tool 'good' for this discussion. I'm not here to debate definitions; I'm here to get something done."9

 

The Is-Ought Gap

(This section rewritten for clarity.)

Many claim that you cannot infer an 'ought' statement from a series of 'is' statements. The objection comes from Hume, who said he was surprised whenever an argument made of is and is not propositions suddenly shifted to an ought or ought not claim, without explanation.10

The solution is to make explicit the bridge from 'ought' statements to 'is' statements.

Perhaps the arguer means something non-natural by 'ought', such as 'commanded by God' or 'in accord with irreducible, non-natural facts about goodness' (see Moore). If so, I would reject that premise of the argument, because I'm a reductionist. At this point, our discussion might need to shift to a debate over the merits of reductionism.

Or perhaps by 'you ought to X' the arguer means something fully natural, such as:

  • "X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z."
  • Or: "X tends to maximize reward signals in agents exhibiting multiple-drafts consciousness" (or, as Sam Harris more broadly puts it, "X tends to maximize well-being in conscious creatures").
  • Or: "X is what a Bayes-rational and Hubble-volume-omniscient agent would do if it was motivated to maximize the amount of non-moral goodness from a view in which the interests of all potentially affected individuals were counted equally, where 'non-moral goodness' refers to what an agent would want if it were to contemplate its present situation from a standpoint fully and vividly informed about itself and its circumstances, and entirely free of cognitive error or lapses of instrumental rationality" (see Railton's metaethics).
  • Or: "X maximizes the complicated function that can be computed by extrapolating (in a particular way) the motivations encoded by my brain" (see CEV).
  • Or: "[insert here whatever statement, if believed, would motivate one to do X]" (see Will Sawin).

Or, the speaker may have in mind a common ought-reductionism known as the hypothetical imperative. This is an ought of the kind: "If you desire to lose weight, then you ought to consume fewer calories than you burn." (But usually, people leave off the implied if-clause, and simply say "You should eat less and exercise more.")

A hypothetical imperative (as some use it) can be translated from 'ought' to 'is' in a straightforward way: "If you desire to lose weight, then you ought to consume fewer calories than you burn" translates to the claim "If you consume fewer calories than you burn, then you will (or are, ceteris paribus, more likely to) fulfill your desire to lose weight."11
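The rewrite is mechanical enough to sketch in a few lines of Python (a toy illustration; the function name and phrasing are mine, following the translation pattern above):

```python
# Toy translation of a hypothetical imperative ('ought') into a
# factual claim ('is'), following the pattern in the text.

def translate_ought(desire: str, action: str) -> str:
    """Rewrite 'If you desire D, then you ought to A' as an 'is' claim."""
    return (f"If you {action}, then you will (or are, ceteris paribus, "
            f"more likely to) fulfill your desire to {desire}.")

print(translate_ought("lose weight", "consume fewer calories than you burn"))
# -> If you consume fewer calories than you burn, then you will (or are,
#    ceteris paribus, more likely to) fulfill your desire to lose weight.
```

Nothing normative survives the translation: the output is an ordinary empirical claim about means and ends, which is exactly why this reading of 'ought' crosses the is-ought gap without incident.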

Or, the speaker may be using 'ought' to communicate something only about other symbols (example: Bayes' Rule), leaving the bridge from 'ought' to 'is' to be built when the logical function represented by his use of 'ought' is plugged into a theory that refers to the world.

But one must not fall into the trap of thinking that a definition you've stipulated (aloud or in your head) for 'ought' must match up to your intended meaning of 'ought' (to which you don't have introspective access). In fact, I suspect it never does, which is why the conceptual analysis of 'ought' language can go in circles for centuries, and why any stipulated meaning of 'ought' is a fake utility function. To see clearly to our intuitive concept of ought, we'll have to try empathic metaethics (see below).

But whatever our intended meaning of 'ought' is, the same reasoning applies. Either our intended meaning of 'ought' refers (eventually) to the world of math and physics (in which case the is-ought gap is bridged), or else it doesn't (in which case it fails to refer).12

 

Moral realism vs. Anti-realism

So, does all this mean that we can embrace moral realism, or does it doom us to moral anti-realism? Again, it depends on what you mean by 'realism' and 'anti-realism'.

In a sense, pluralistic moral reductionism can be considered a robust form of moral 'realism', in the same way that pluralistic sound reductionism is a robust form of sound realism. "Yes, there really is sound, and we can locate it in reality — either as vibrations in the air or as mental auditory experiences, however you are using the term." In the same way: "Yes, there really is morality, and we can locate it in reality — either as a set of facts about the well-being of conscious creatures, or as a set of facts about what an ideally rational and perfectly informed agent would prefer, or as some other set of natural facts."

But in another sense, pluralistic moral reductionism is 'anti-realist'. It suggests that there is no One True Theory of Morality. (We use moral terms in a variety of ways, and some of those ways refer to different sets of natural facts.) And as a reductionist approach to morality, it might also leave no room for moral theories which say there are universally binding moral rules for which the universe (e.g. via a God) will hold us accountable.

What matters are the facts, not whether labels like 'realism' or 'anti-realism' apply to 'morality'.

 

Toward Empathic Metaethics

But pluralistic moral reductionism satisfies only a would-be austere metaethicist, not an empathic metaethicist.

Recall that when Alex asks how she can do what is right, the Austere Metaethicist replies:

Tell me what you mean by 'right', and I will tell you what is the right thing to do. If by 'right' you mean X, then Y is the right thing to do. If by 'right' you mean P, then Z is the right thing to do. But if you can't tell me what you mean by 'right', then you have failed to ask a coherent question, and no one can answer an incoherent question.

Alex may reply to the Austere Metaethicist:

Okay, I'm not sure exactly what I mean by 'right'. So how do I do what is right if I'm not sure what I mean by 'right'?

The Austere Metaethicist refuses to answer this question. The Empathic Metaethicist, however, is willing to go the extra mile. He says to Alex:

You may not know what you mean by 'right.' But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then we can tell you what the right thing to do is.

This may seem like too much work. Would we be motivated to decode the cognitive algorithms producing Albert and Barry's use of the word 'sound'? Would we try to solve 'empathic meta-acoustics'? Probably not. We can simply taboo and reduce 'sound' and then get some work done.

But moral terms and value terms are about what we want. And unfortunately, we often don't know what we want. As such, we're unlikely to get what we really want if the world is re-engineered in accordance with our current best guess as to what we want. That's why we need to decode the cognitive algorithms that generate our questions about value and morality.

So how can the Empathic Metaethicist answer Alex's question? We don't know the details yet. For example, we don't have a completed cognitive neuroscience. But we have some ideas, and we know of some open problems that may admit of progress once more people understand them. In the next few posts, we'll take our first look at empathic metaethics.13

 

Previous post: Conceptual Analysis and Moral Theory

 

 

Notes

1 Some have objected that the conceptual analysis argued against in Conceptual Analysis and Moral Theory is not just a battle over definitions. But a definition is "the formal statement of the meaning or significance of a word, phrase, etc.", and a conceptual analysis is (usually) a "formal statement of the meaning or significance of a word, phrase, etc." in terms of necessary and sufficient conditions. The goal of a conceptual analysis is to arrive at a definition for a term that captures our intuitions about its meaning. The process is to bash our intuitions against others' intuitions until we converge upon a set of necessary and sufficient conditions that captures them all. But consider Barry and Albert's debate over the definition of 'sound'. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning? And, let's say we arrive at a messy set of 6 necessary and sufficient conditions for the intuitive meaning of the term. Is that going to be as useful for communication as a definition we consciously chose because it carved up thingspace well? I doubt it. The IAU's definition of 'planet' is more useful than the folk-intuitions definition of 'planet'. Folk intuitions about 'planet' evolved over thousands of years and different people have different intuitions which may not always converge. 
In 2006, the IAU used modern astronomical knowledge to carve up thingspace in a more useful and informed way than our intuitions do.

A passage from Bertrand Russell (1953) is appropriate. Russell said that many philosophers reminded him of

the shopkeeper of whom I once asked the shortest way to Winchester. He called to a man in the back premises:

"Gentleman wants to know the shortest way to Winchester."

"Winchester?" an unseen voice replied.

"Aye."

"Way to Winchester?"

"Aye."

"Shortest way?"

"Aye."

"Dunno."

He wanted to get the nature of the question clear, but took no interest in answering it. This is exactly what modern philosophy does for the earnest seeker after truth. Is it surprising that young people turn to other studies?

2 Compare also to the biologist's 'species concept pluralism' and the philosopher's 'art concept pluralism.' See Uidhir & Magnus (2011). Also see 'causal pluralism' (Godfrey-Smith, 2009; Cartwright, 2007), 'theory concept pluralism' (Magnus, 2009) and, especially, 'metaethical contextualism' (Bjornsson & Finlay, 2010) or 'metaethical pluralism' or 'metaethical ambivalence' (Joyce, 2011). Joyce quotes Lewis (1989), who wrote that some concepts of value refer to things that really exist, and some concepts don't, and what you make of this situation is largely a matter of temperament:

What to make of the situation is mainly a matter of temperament. You can bang the drum about how philosophy has uncovered a terrible secret: there are no values! ... Or you can think it better for public safety to keep quiet and hope people will go on as before. Or you can declare that there are no values, but that nevertheless it is legitimate—and not just expedient—for us to carry on with value-talk, since we can make it all go smoothly if we just give the name of value to claimants that don't quite deserve it... Or you can think it an empty question whether there are values: say what you please, speak strictly or loosely. When it comes to deserving a name, there's better and worse but who's to say how good is good enough? Or you can think it clear that the imperfect deservers of the name are good enough, but only just, and say that although there are values we are still terribly wrong about them. Or you can calmly say that value (like simultaneity) is not quite as some of us sometimes thought. Myself, I prefer the calm and conservative responses. But as far as the analysis of value goes, they're all much of a muchness.

Joyce concludes that, for example, the moral naturalist and the moral error theorist may agree with each other (when adopting each other's own language):

[Metaethical ambivalence] begins with a kind of metametaethical enlightenment. The moral naturalist espouses moral naturalism, but this espousal reflects a mature decision, by which I mean that the moral naturalist doesn't claim to have latched on to an incontrovertible realm of moral facts of which the skeptic is foolishly ignorant, but rather acknowledges that this moral naturalism has been achieved only via a non-mandatory piece of conceptual precisification. Likewise, the moral skeptic champions moral skepticism, but this too is a sophisticated verdict: not the simple declaration that there are no moral values and that the naturalist is gullibly uncritical, but rather a decision that recognizes that this skepticism has been earned only by making certain non-obligatory but permissible conceptual clarifications.

...The enlightened moral naturalist doesn't merely (grudgingly) admit that the skeptic is warranted in his or her views, but is able to adopt the skeptical position in order to gain the insights that come from recognizing that we live in a world without values. And the enlightened moral skeptic goes beyond (grudgingly) conceding that moral naturalism is reasonable, but is capable of assuming that perspective in order to gain whatever benefits come from enjoying epistemic access to a realm of moral facts.

3 Miller (2003), p. 3.

4 I changed the example moral judgment from "murder is wrong" to "stealing is wrong" because the former invites confusion. 'Murder' often means wrongful killing.

5 Also see Jacobs (2002), starting on p. 2.

6 The first premise of one of his favorite arguments for God's existence is "If God does not exist, objective moral values and duties do not exist."

7 Craig (2010), p. 11.

8 It's also possible that Craig intended a different sense of objective than the ones explicitly given in his article. Perhaps he meant objective4: "morality is objective4 if it is not grounded in the opinion of non-divine persons."

9 Also see Moral Reductionism and Moore's Open Question Argument.

10 Hume (1739), p. 469. The famous paragraph is:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

11 For more on reducing certain kinds of normative statements, see Finlay (2010).

12 Assuming reductionism is true. If reductionism is false, then of course there are problems for pluralistic moral reductionism as a theory of austere (but not empathic) metaethics. The clarifications in the last three paragraphs of this section are due to discussions with Wei Dai and Vladimir Nesov.

13 My thanks to Steve Rayhawk and Will Newsome for their feedback on early drafts of this post.

 

References

Bjornsson & Finlay (2010). Metaethical contextualism defended. Ethics, 121: 7-36.

Craig (2010). Five Arguments for God. The Gospel Coalition.

Cartwright (2007). Hunting Causes and Using Them: Approaches in Philosophy and Economics. Cambridge University Press.

Godfrey-Smith (2009). Causal pluralism. In Beebee, Hitchcock, & Menzies (eds.), The Oxford Handbook of Causation (pp. 326-337). Oxford University Press.

Hume (1739). A Treatise of Human Nature. John Noon.

Finlay (2010). Normativity, Necessity and Tense: A Recipe for Homebaked Normativity. In Shafer-Landau (ed.), Oxford Studies in Metaethics 5 (pp. 57-85). Oxford University Press.

Jacobs (2002). Dimensions of Moral Theory. Wiley-Blackwell.

Joyce (2011). Metaethical pluralism: How both moral naturalism and moral skepticism may be permissible positions. In Nuccetelli & Seay (eds.), Ethical Naturalism: Current Debates. Cambridge University Press.

Lewis (1989). Dispositional theories of value. Part II. Proceedings of the Aristotelian Society, supplementary vol. 63: 113-137.

Magnus (2009). What species can teach us about theory

Miller (2003). An Introduction to Contemporary Metaethics. Polity.

Russell (1953). The cult of common usage. British Journal for the Philosophy of Science, 12: 305-306.

Uidhir & Magnus (2011). Art concept pluralism. Metaphilosophy, 42: 83-97.

Comments


Thanks, this is exactly the feedback I was hoping to receive. :)

Basically, I want this post and the last one to be where Less Wrongers can send people whenever they appear confused about standard philosophical debates in moral theory: "Wait, stop. Go read lukeprog's article on this and then let me know if you still think the same thing."

I feel like your austere meta-ethicist is mostly missing the point. It's utterly routine for different people to have conflicting beliefs about whether a given act is moral*. And often they can have a useful discussion, at the end of which one or both participants change their beliefs. These conversations can happen without the participants changing their definitions of words like 'moral', and often without them having a clear definition at all.

[This is my first LW comment -- if I do something wrong, please bear with me]

This suggests that precise definitions or agreement about definitions isn't all that critical. But it's sometimes useful to be able to reason from stipulated and mutually agreed definitions, in which case meta-ethical speculation and reasoning is doing useful work if it offers a menu of crisp, useful, definitions that can be used in discussion of specific moral claims. Relatedly, it's also doing useful work by offering a set of definitions that help people conceptualize and articulate their personal feelings about morality, even absent a concrete first-order question.

And part of what goes into picking definitions is to understand their consequences. A philosopher is doing useful work for me if he shows me that a tempting-sounding definition of 'morality' doesn't pick out the set of things I want it to pick out, or that some other definition turns out not to refer to any clear set at all.

Many mathematical entities have multiple logically equivalent definitions, that are of different utility in different contexts. (E.g., sometimes I want to think about a circle as a locus of points, and sometimes as the solution set to an equation.) In the real world, something similar happens.

When I discuss, say, abortion, with somebody, probably there are multiple working definitions of 'moral' that could be mutually agreed upon for the purpose of the conversation, and the underlying dispute would still be nontrivial and intelligible. But some definitions might be more directly applicable to the discussion -- and philosophical reasoning might be helpful in figuring out what the consequences of various definitions are. For instance, a non-cognitive definition strikes me intuitively as less likely to be useful -- but I'd be open to an argument showing how it could be useful in a debate.

Probably a great deal of academic writing on meta-ethics is low value. But that's true of most writing on most topics and doesn't show that the topic is pointless. (With academics being major offenders, but not the only offenders.)

*I'm thinking of the individual personal changes in belief that went along with increased opposition to official racism in America over the course of the 20th century. Or opposition to slavery in the 19th.

A philosopher is doing useful work for me if he shows me that a tempting-sounding definition of 'morality' doesn't pick out the set of things I want it to pick out, or that some other definition turns out not to refer to any clear set at all.

That is an important point. People often run on examples as much as or more than they do on definitions, and if their intuitions about examples are strong, that can be used to fix their definitions (i.e., give them revised definitions that serve their intuitions better).

The rest of the post contained good material that needed saying.

When I have serious conversations with thoughtful religious people who have faith but no major theological training, I find it helpful to think of their statements about "God" as being statements about "all worldly optimization processes stronger than me that I don't have time to understand in very much detail like evolution, entropy, economics, democratic politics, organizational dynamics, similar regularities in the structure of the world that science hasn't started analyzing yet, plus many small activist groups throughout history, and a huge number of specific powerful agents silently influencing my life right now like various investors and celebrities, the local chief of police, the local school principal, my employer, my ancestors, and so on".

I can imagine a relatively simple life heuristic, H, that might successfully navigate this vast and bewildering array of optimization pressures in their life, and I can ask "Does God want you to H?". Also, this translation scheme helps me to listen to evangelical radio and learn things from it :-)

I bring this up because it feels to me like you're doing a lot of work to resuscitate ideas from moral philosophy that are significantly helped by "pluralistic reduction" to unpack the ideas into more specific and coherent claims, but you seem to be doing it in a lopsided way by not unpacking the ideas of "the other side" in a similarly generous manner. Also, to a lesser extent, you seem to be leaving some of "our" ideas unpacked that could probably use some pluralistic reduction but might not look as shiny if unpacked this way.

I guess what I'm trying to say is that "acoustic messenger fairies" in the "ether" seem to me like perfectly adequate placeholder terms if I'm in a conversation with someone whose starting vocabulary uses them as atomic concepts, but if I'm tossing those terms out then "an ideally instrumentally rational and fully informed agent" seems roughly as questionable given how much difficulty people seem to have when using mind-shaped conceptually-atomic entities in their theories.

Do you think my impression of lopsided conceptual unpacking is accurate? If yes, I'm wondering if you could try to introspect on your writing process and try to articulate how you decided which things to unpack and which to leave fuzzy.

the Austere Metaethicist replies:

"Tell me what you mean by 'right', and I will tell you what is the right thing to do."

That is, of course, not what is right, but what she thinks is right. So far, so subjective.

You may not know what you mean by 'right.' But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then we can tell you what the right thing to do is.

Again, that is not the right thing; that is just what she thinks. An Objective Metaethicist could answer the question of what is right.

But moral terms and value terms are about what we want.

No: they are value terms about what we should want and be and do.

And the "we" is important here. Your metaethicists are like therapists or life coaches or personal shoppers who advise people on how to make their individual lives spiffier. But moral action is not solipsistic: moral choices affect other people. That's why we can't stop at "whatever you think is right is right". I don't want one of your metaethicists telling my neighbour how to be a better serial killer.

Or, perhaps someone has a moral reductionism in mind during a particular use of 'ought' language. Perhaps by "You ought to be more forgiving" they really mean "If you are more forgiving, this is likely to increase the amount of pleasure in the world."

As you can see, it is not hard to bridge the is-ought gap.

I don't think it is impossible, but it is harder than you are making out. The examples given are not complete syllogisms or other logical forms. It is easy to validly derive an ought from an is: you start with the factual statement and then invoke a bridging principle of the form:

if &lt;factual statement&gt; then &lt;normative statement&gt;

However, the argument is not sound unless the bridging statement is true. But the bridging statement is itself a derivation of an ought from an is, so there is a kind of circularity there: you are assuming that the ought-from-is problem has been solved in order to solve it.

As I said, I don't think the situation is hopeless. The bridging premise is not exactly the same thing as a moral argument: it is usually more of a general statement along the lines of "if X increases well being, it should be done". That provides some scope for an analytical justification of bridging principles.

I don't understand the terms "world of is" and "world of is not". Does "talking about world of is not" mean "deducing from false assumptions", or is there something more to it? Anyway, "talking about world of is" sounds like the worst kind of continental philosophy babble.

Otherwise, the article is clear, comprehensible, and very readable.

While "of is, of is not" didn't hurt my understanding that much, the article would be better off without them.

While "of is, of is not" didn't hurt my understanding that much, the article would be better off without them.

I agree, and also note that the way luke dismisses the "is not" misses much of the point the phrase is trying to express. If it is going to be discussed at all, it deserves the same kind of parameterizing as 'objective' received.

Tangentially:

facts about the well-being of conscious creatures are mind-dependent facts

How so? (Note that a proposition may be in some sense about minds without its truth value being mind-dependent. E.g. "Any experience of red is an experience of colour" is true regardless of what minds exist. I would think the same is true of, e.g., "All else equal, pain is bad for the experiencer.")

That's why we need to decode the cognitive algorithms that generate our questions about value and morality. ... So how can the Empathic Metaethicist answer Alex's question? We don't know the details yet. For example, we don't have a completed cognitive neuroscience.

Assume you have complete knowledge of all the details of the way the human brain works, and a detailed trace of the sequence of neurological events that leads people to ask moral questions. Then what?

My only guess is that you look this trace over using your current moral judgment, and decide that you expect that changing certain things in the algorithm will make the judgments of this brain better. But this is not a FAI-grade tool for defining morality (unless we have to go the uploads-driven way, in which case you just gradually and manually improve humans for a very long time).

The problem is not that there is no way to identify 'good' or 'right' (as used intuitively, without tabooing) with a certain X. The problem is that X is huge and complicated and we don't (yet) have access to its structure.

Strictly speaking, we can exhibit any definition of "good", even one that doesn't make any of the errors you pointed out, and still ask "Is it good?". The criteria for exhibiting a particular definition are ultimately non-rigorous, even if the selected definition is, so we can always examine them further.

Moore's argument might fail in the unintended use case of post-FAI morality not because at some point there might be no more potential for asking the question, but because, as with "Does 2+2 equal 4?", there is a point at which we are certain enough to turn to other projects, even if in principle some uncertainty and lack of clarity in the intended meaning remains. It's not at all clear this will ever happen to morality.

In The Is-Ought Gap, Luke writes:

If someone makes a claim of the 'ought' type, either they are talking about the world of is, or they are talking about the world of is not. If they are talking about the world of is not, then I quickly lose interest because the world of is not isn't my subject of interest.

Ironically, this is where I quickly lost interest in this article, because glib word-play isn't my subject of interest.

Consider this dialog:

Student: "Wise master, what ought I do?"

Wise master: "You ought to help the poor by giving 50% of your income to efficient charity and supporting the European-style welfare state."

Student: "Alright."

*student runs off and gives 50% of his or her income to efficient charity and supports the European-style welfare state

This dialog rings true as a fact about ought statements - once we become convinced of them, they do and should constrain our behavior.

But my dialogs and your dialogs contradict each other! Because if "ought" determines our behavior, and we can define what "ought" means, then we can define proper behavior into existence - a construction as absurd as Descartes defining God into existence or Plato defining man as both a hairless featherless biped and a mortal.

We must give up one, and I say give up yours. "ought" is one of those words that we are not free to define - it has a single meaning. Look to its consequences, not its causes.

If we taboo and reduce, then the question of "...but is it good?" is out of place. The reply is: "Yes it is, because I just told you that's what I mean to communicate when I use the word-tool 'good' for this discussion. I'm not here to debate definitions; I'm here to get something done."

I just wanted to flag that a non-reductionist moral realist (like myself) is also "not here to debate definitions". See my post on The Importance of Implications. This is compatible with thinking well of the Open Question Argument, if we think we have an adequate grasp of some fundamental normative concept (be it 'good', 'reason', or 'ought' -- I lean towards 'reason', myself, such that to speak of a person's welfare is just to talk about what a sympathetic party has reason to desire for the person's sake).

Note that if we're right to consider some normative concepts to be conceptually primitive (not analytically reducible to non-normative concepts) then your practice of "tabooing" all normative vocabulary actually has the effect of depriving us of the conceptual tools necessary to even talk about the normative sphere. Consequent talk of people's (incl. God's) desires or dispositions is simply changing the subject, on this way of looking at things.

Out of interest: Will you be arguing anywhere in this sequence against non-reductionist moral realism? Or are you simply assuming its falsity from the start, and exploring the implications from there? (Even the latter, more modest project is of course worth pursuing, but I personally would be more interested in the former.) Either way, it'd be good to be clear about this. (You could then skip the silly rhetoric about how what is not "is", must be "is not".)

I think you are incorrect with regard to Hume's is-ought gap, although I find its relevance to be somewhat overstated. A hypothetical imperative such as your example relies on an equivocation between 'ought' as (1) a normative injunction and (2) conveying a possible causal pathway from here to there.

-

Here is the incorrect syllogism:

Premise 1: A desires C (is)

Premise 2: B will produce C (is)

Conclusion: A ought to do B (ought)

-

There is a hidden normative premise that is often ignored. It is

Premise 3: A ought to obtain its desires. (ought)

-

The correct syllogism would then be:

Premise 1 (is): A desires C

Premise 2 (is): B will produce C

Premise 3 (ought): A ought to obtain its desires.

Conclusion: A ought to do B (ought)

-

The necessity of Premise 3 is made clear by use of an admittedly extreme example:

P1: Hitler wants to kill a great number of people

P2: Zyklon B will kill a great number of people

C1: Hitler ought to use Zyklon B to kill a great number of people

While the conclusion is derived from the premises using definition (2) of the word 'ought', few would express it as a normative recommendation.

-

Hume's fact/value dichotomy remains valid. A normative conclusion can only be validly deduced from a group of premises including at least one which is itself normative.
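The corrected syllogism above can be machine-checked. Below is a minimal sketch in Lean, where `Desires`, `Produces`, and `OughtDo` are illustrative placeholder predicates (not standard library definitions): the normative conclusion is derivable only once the bridging premise is supplied as an explicit hypothesis.

```lean
-- A minimal sketch of the corrected syllogism. The predicate names
-- `Desires`, `Produces`, and `OughtDo` are illustrative placeholders.
theorem ought_from_is_with_bridge
    {Agent Act Goal : Type}
    (Desires : Agent → Goal → Prop)   -- "A desires C"        (is)
    (Produces : Act → Goal → Prop)    -- "B will produce C"   (is)
    (OughtDo : Agent → Act → Prop)    -- "A ought to do B"    (ought)
    (a : Agent) (b : Act) (c : Goal)
    (p1 : Desires a c)
    (p2 : Produces b c)
    -- Premise 3, the bridging principle: an agent ought to do
    -- whatever produces what it desires.
    (p3 : ∀ (x : Agent) (y : Act) (z : Goal),
          Desires x z → Produces y z → OughtDo x y) :
    OughtDo a b :=
  p3 a b c p1 p2
```

Dropping `p3` leaves `OughtDo a b` underivable: no combination of the purely factual premises has the right type. This mirrors the claim that a normative conclusion can only be validly deduced from premises that include at least one normative premise.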

Although I think this series of posts is interesting and mostly very well reasoned, I find the discussion about objectivity to be strangely crafted. At the risk of arguing about definitions: the hierarchy you lay out about objectivity is only remotely related to what I mean by objective, and my sense is that it doesn't cohere very well with common usage.

First, there seems no better reason to split off objective1 than objectiveA which is "software-independent facts". Okay, so I can't say anything objective about my web browser, just because we've said I can't. Why is this helpful? The only reason to split this out is if you are some sort of dualist; otherwise the mind is a computational phenomenon just like DNA replication or whatnot.

Second, as Emile already pointed out, nowhere in the hierarchy is uniqueness addressed, yet this is the clearest conventional distinction between subjectivity and objectivity. 5+7 = 12 for everyone. "Mint chocolate chip ice cream is better than rocky road ice cream" is not the case for everyone (in the conventional sense, anyway). So these things are all colloquially objective:

  • Rocky road has more chocolate than mint chocolate chip
  • The author of this post enjoys mint chocolate chip more than rocky road
  • My IPv4 address has a higher value than lesswrong.org's does
  • The Bible describes God endorsing the consumption of only certain animals

Referring to God doesn't make things non-objective in the standard sense, presuming God exists. Of course, without a way to measure God's preferences, you may lose your theoretical objectivity, but any other single source or self-consistent group (e.g., the Pope) can fill in as a source for objective answers to what would otherwise be subjective questions.

The issue isn't whether that is subjective or objective; it's whether that method of gaining objectivity is practical and useful.

And since humans are the only sentient beings, I really fail to see what the practical distinction between 2 and 3 is, once you split off God (or any other singularly identifiable entity).

So I strongly suggest that this section ought to be rethought. Objectivity seems central to this sort of moral reductionism, and so it is worth using definitions that are not too misleading. Either the definitions should change, or there should be much more motivation about why we care about the distinctions between any of the definitions you've offered.

In jest, I'm going to accuse you of plagiarizing my work, then tell you two problems that I have with the approach that you've outlined, and then wax e-peen and say that mine is similar, but more instructive on moral discourse among all users of it.

My problem here is that we already have a common language (in terms of wants) which reduces "should" and provides the kind of plurality that you're seeking out of this approach, so there's no need to claim, "I'm using 'is good' to mean P," and then eke out a true statement whose truth is a matter of lexical elaboration, when instead people use moral language to alter others' or their own behavior, perspective, etc. without all of that theorizing on top of it. Most people don't make moral arguments on the basis of grand (meta)ethical stances. But it seems like everyone would have to be a deep ethicist to get any traction out of this theory, and that would mean that it really only explains how experts use moral language, but not how everyday people do. But why would one have to be deep about ethics to prescribe that someone do something? Can't people prescribe an action and justify it without appealing to a definition of 'goodness' at all?

My last issue here is a potential paradox that I spotted when I made two pluralistic moral reductionists confront each other:

PMR1: "Is Harris's defined 'being good' better than Craig's defined 'being good'?"

PMR2: "What do you mean by 'better'?"

PMR1: "I mean whatever you mean when you say 'better' in questions like this."

PMR2: "But by 'better', I mean whatever you mean when you say 'better' in questions like this."

[Re-post with correction]

Hi Luke,

I've questioned your metaethical views before (in your "desirist" days) and I think you're making similar mistakes now as then. But rather than rehash old criticisms I'd like to make a different point.

Since you claim to be taking a scientific or naturalized approach to philosophy I would expect you to offer evidence in support of your position. Yet I see nothing here specifically identified as evidence, and very little that could be construed as evidence. I don't see how your approach here is significantly different from the intuition-based philosophical approaches that you've criticised elsewhere.

Some people who say "Stealing is wrong" are really just trying to express emotions: "Stealing? Yuck!" Others use moral judgments like "Stealing is wrong" to express commands: "Don't steal!" Still others use moral judgments like "Stealing is wrong" to assert factual claims, such as "stealing is against the will of God" or "stealing is a practice that usually adds pain rather than pleasure to the world."

How do you know this? Where's the evidence? I don't doubt that some people say, "Stealing is wrong because it's against the will of God". But where's the evidence that they use "Stealing is wrong" to mean "Stealing is against the will of God"? (Indeed, if they meant that it would be very strange to say "Stealing is wrong because it's against the will of God". That would be equivalent to saying "A is true because A is true." Yet it seems perfectly natural for someone to say this.)

But moral terms and value terms are about what we want.

How do you know? And this seems to contradict your claim above that some people use "Stealing is wrong" to mean "stealing is against the will of God". That's not about what we want. (I say that moral terms are primarily about obligations, not wants.)