Many secular materialists are puzzled by Sam Harris's frequent assertion that science can bridge Hume's is–ought gap. Indeed, bafflement abounds on both sides whenever he debates his "bridge" with other materialists. Both sides are unable to understand how the other can fail to grasp elementary and undeniable points. This podcast conversation[1] with the physicist Sean Carroll provides a vivid yet amicable demonstration.

I believe that this mutual confusion is a consequence of two distinct but unspoken ways of thinking about idealized moral argumentation. I'll call these two ways logical and dialectical.

Roughly, logical argumentation is focused on logical proofs of statements. Dialectical argumentation is geared towards rational persuasion of agents. These two different approaches lead to very different conclusions about what kinds of statements are necessary in rigorous moral arguments. In particular, the is–ought gap is unavoidable when you take the logical point of view. But this gap evaporates when you take the dialectical point of view.[2]

I won't be arguing for one of these views over the other. My goal is rather to dissolve disagreement. I believe that properly understanding these two views will render a lot of arguments unnecessary.

Logical moral argumentation

Logical argumentation, in the sense in which I'm using the term here, is focused on finding rigorous logical proofs of moral statements. The reasoning proceeds by logical inference from premises to conclusion. The ideal model is something like a theory in mathematical logic, with all conclusions proved from a basic set of axioms using just the rules of logic.

People who undertake moral argumentation with this ideal in mind envision a theory that can express "is" statements, but which also contains an "ought" symbol. Under suitable circumstances, the theory proves "is" statements like "You are pulling the switch that diverts the trolley," and "If you pull the switch, the trolley will be diverted." But what makes the theory moral is that it can also prove "ought" statements like "You ought to pull the switch that diverts the trolley."[3]

Now, this "ought" symbol could appear in the ideal formal theory in one of only two ways: Either the "ought" symbol is an undefined symbol appearing among the axioms, or the "ought" symbol is subsequently defined in terms of the more-primitive "is" symbols used to express the axioms.[4]

When Harris claims to be able to bridge the is–ought gap in purely scientific terms, many listeners think that he's claiming to do so from this "logical argumentation" point of view. In that case, such a bridge would be successful only if every possible scientifically competent agent would accept the axioms of the theory used. In particular, "accepting" a statement that includes the "ought" symbol would mean something like "actually being motivated to do what the statement says that one 'ought' to do, at least in the limit of ideal reflection".

But, on these terms, the is–ought gap is unavoidable: No moral theory can be purely scientific in this sense. For, however "ought" is defined by a particular sequence of "is" symbols, there is always a possible scientifically competent agent who is not motivated by "ought" so defined.[5]

Thus, from this point of view, no moral theory can bridge the is–ought gap by scientific means alone. Moral argumentation must always include an "ought" symbol, but the use of this symbol cannot be justified on purely scientific grounds. This doesn't mean that moral arguments can't be successful at all. It doesn't even mean that they can't be objectively right or wrong. But it does mean that their justification must rest on premises that go beyond the purely scientific.

This is Sean Carroll's point of view in the podcast conversation with Harris linked above. But Harris, I claim, could not understand Carroll's argument, and Carroll in turn could not understand Harris's, because Harris is coming from the dialectical point of view.

Dialectical moral argumentation

Dialectical moral argumentation is not modeled on logical proof. Rather, it is modeled on rational persuasion. The ideal context envisioned here is a conversation between rational agents in which one of the agents is persuading the other to do something. The persuader proceeds from assertion to assertion until the listener is persuaded to act.[6]

But here is the point: Such arguments shouldn't include an "ought" symbol at all!—At least, not ideally.

By way of analogy, suppose that you're trying to convince me to eat some ice cream. (This is not a moral argument, which is why this is only an analogy.) Then obviously you can't use "You should eat ice cream" as an axiom, because that would be circular. But, more to the point, you wouldn't even have to use that statement in the course of your argument. Instead, ideally, your argument would just be a bunch of "is" facts about the ice cream (cold, creamy, sweet, and so on). If the ice cream has chocolate chips, and you know that I like chocolate chips, you will tell me facts about the chocolate chips (high in quantity and quality, etc.). But there's no need to add, "And you should eat chocolate chips."

Instead, you will just give me all of those "is" facts about the ice cream, maybe draw some "is" implications, and then rely on my internal motivational drives to find those facts compelling. If the "is" facts alone aren't motivating me, then something has gone wrong with the conversation. Either you as the persuader failed to pick facts that will motivate me, or I as the listener failed to understand properly the facts that you picked.

Now, practically speaking, when you attempt to persuade me to X, you might find it helpful to say things like "You ought to X". But, ideally, this usage of "ought" should serve just as a sort of signpost to help me to follow the argument, not as an essential part of the argument itself. Nonetheless, you might use "ought" as a framing device: "I'm about to convince you that you ought to X." Or: "Remember, I already convinced you that you ought to X. Now I'm going to convince you that doing X requires doing Y."

But an ideal argument wouldn't need any such signposts. You would just convince me of certain facts about the world, and then you'd leave it to my internal motivational drives to do the rest—to induce me to act as you desired on the basis of the facts that you showed me.

Put another way, if you're trying to persuade me to X, then you shouldn't have to tell me explicitly that doing X would be good. If you have to say that, then the "is" facts about X must not actually be motivating to me. But, in that case, just telling me that doing X would be good isn't going to convince me, so your argument has failed.

Likewise, if the statement "Doing X would cause Y" isn't already motivating, then the statement "Doing X would cause Y, and Y would be good" shouldn't be motivating either, at least not ideally. If you're doing your job right, you've already picked an is-statement Y that motivates me directly, or which entails another is-statement that will motivate me directly. So adding "and Y would be good" shouldn't be telling me anything useful. It would be at best a rhetorical flourish, and so not a part of ideal argumentation.

Thus, from this point of view, there really is a sense in which "ought" reduces to "is". The is–ought gap vanishes! Wherever "ought" appears, it will be found, on closer inspection, to be unnecessary. All of the rational work in the argument is done purely by "is". Of course, crucial work is also done by my internal motivational structure. Without that, your "is" statements couldn't have the desired effect. But that structure isn't part of your argument. In the argument itself, it's all just "is... is... is...", all the way down.[7]

This, I take it, is Sam Harris's implicit point of view. Or, at least, it should be.


Footnotes

[1] ETA: The relevant part of the podcast runs from 00:46:38 to 01:09:00.

[2] I am not saying that these views of moral argumentation exhaust all of the possibilities. I'm not even saying that they are mutually exclusive in practice. Normally, people slide among such views as the needs of the situation require. But I think that some people tend to get needlessly locked into one view when they try to think abstractly about what counts as a valid and rigorous moral argument.

[3] From the logical point of view, all moral argumentation is first and foremost about the assertions that we can prove. Argumentation is not directly about action. There is only an indirect connection to action in the sense that arguments can prove assertions about actions, like "X ought to be done". Furthermore, this "ought" must be explicit. Otherwise, you're just proving "is" statements.

[4] Analogously, you can get the "<" symbol in a theory of arithmetic in one of two ways. On the one hand, in first-order Peano arithmetic, the "<" symbol is undefined, but it appears in the axioms, which govern its behavior. On the other hand, in the original second-order Peano axioms, there was no "<" symbol. Instead, there was only a successor-of symbol. But one may subsequently introduce "<" by defining it in terms of the successor-of symbol using second-order logic.
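For instance, one standard way to carry out the second-order definition (a sketch of the usual construction, not necessarily the exact formulation intended above) is:

$$x < y \;:\leftrightarrow\; \forall X\,\Big[\big(X(S(x)) \wedge \forall z\,(X(z) \rightarrow X(S(z)))\big) \rightarrow X(y)\Big]$$

That is, y belongs to every collection that contains the successor of x and is closed under the successor function.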

[5] Some might dispute this, especially regarding the meaning of the phrase "possible scientifically competent agent". But this is not the crux of the disagreement that I'm trying to dissolve. [ETA: See this comment.]

[6] Here I mean "persuaded" in the sense of "choosing to act out of a sense of moral conviction", rather than out of considerations of taste or whatever.

[7] ETA: The phrases "internal motivational drives" and "internal motivational structure" do not refer to statements, for example to statements about what is good that I happen to believe. Those phrases refer instead to how I act upon beliefs, to the ways in which different beliefs have different motivational effects on me.

The point is: This unspoken "internal" work is not being done by still more statements, and certainly not by "ought" statements. Rather, it's being done by the manner in which I am constituted so as to do particular things once I accept certain statements.

Eliezer Yudkowsky discussed this distinction at greater length in Created Already In Motion, where he contrasts "data" and "dynamics". (Thanks to dxu for making this connection.)

Comments

I curated this post because "why do people disagree and how can we come to agreement" feels like one of the most important questions of the whole rationality project. In my experience, when two smart people keep disagreeing while having a hard time understanding how the other could fail to understand something so obvious, it's because they are operating under different frameworks without realizing it. Having analyzed examples of this helps us understand and recognize it, and then hopefully learn to avoid it.

So ... a prisoner's dilemma but on a meta level? Which then results in primary consensus.

What does this have to do with the Prisoners' Dilemma?

I could be off base here. But a lot of classic cooperate-vs-don't-cooperate stories involve two parties who hate each other's ideologies.

Could you then not say: "They have to first agree and/or fight a Prisoner's Dilemma on an ideological field"?

I think you're going to need to be more explicit. My best understanding of what you're saying is this: Each participant has two options -- to attempt to actually understand the other, or to attempt to vilify them for disagreeing, and we can lay these out in a payoff matrix and turn this into a game.

I don't see offhand why this would be a Prisoner's Dilemma, though I guess that seems plausible if you actually do this. It certainly doesn't seem like a Stag Hunt or Chicken which I guess are the other classic cooperate-or-don't games.

My biggest problem here is the question of how you're constructing the payoff matrices. The reward for defecting is greater ingroup acceptance, at the cost of understanding; the reward for both cooperating is increased understanding, but likely at the cost of ingroup acceptance. And the penalty for cooperating and being defected on seems to be in the form of decreased outgroup acceptance. I'm not sure how you make all these commensurable to come up with a single payoff matrix. I guess you have to somehow, but that the result would be a Prisoner's Dilemma isn't obvious. Indeed it's actually not obvious to me here that cooperating and being defected on is worse than what you get if both players defect, depending on one's priorities, which would definitely not make it a Prisoner's Dilemma. I think that part of what's going on here is that different people's weighting of these things may substantially affect the resulting game.

It seems less and less like a Prisoner's Dilemma the more I think about it. Chances are, "oops" I messed up.

I still feel like the thing with famous names like Sam Harris is that there is a "drag" force on his penetration of the culture nowadays, because there is a bunch of history that has been (incorrectly) publicized. His name is associated with controversy, despite his best efforts to avoid it.

I feel like you need to overcome a "barrier to entry" when listening to him. Unlike Eliezer, whose public image (in my limited opinion) is actually new-user friendly.

Somehow this all is meant to tie back to Prisoner's Dilemmas. And in my head, it for some reason does. Perhaps I ought to prune that connection. Let me try my best to fully explain that link:

It's a multi-stage "chess game" in which you engage with the ideas that you hear from someone like Sam Harris; but there is doubt, because there is a (misconception) of him saying "Muslims are bad" (a trivialization of the argument). What makes me think of a Prisoner's Dilemma is this: you have to engage in a "cooperate" or "don't cooperate" game with the message based on nothing more or less than the reputation of the source.

Sam doesn't necessarily broadcast his basic values regularly, that I can see. He's a thoughtful, quite rational person; but I feel like he forgets that his image needs work. He needs to do kumbaya, as it were, once in a while. To reaffirm his basic beliefs in life and its preciousness. (And I bet if I look, I'd find some, but it rarely percolates up on the feed).

Anyway. Chances are I am wrong on using the concept of Prisoner's Dilemma here. Sorry.

This is a great post, and I think does a good job of capturing why the two sides tend to talk past each other. A is baffled by why B claims to be able to reduce free-floating symbols to other symbols; B is baffled by why A claims to be using free-floating symbols.

They're also both probably right when it comes to "defending standard usage", and are just defending/highlighting different aspects of folk moral communication.

People often use "should" language to try to communicate facts; and if they were more self-aware about the truth-conditions of that language, they would be better able to communicate and achieve their goals. Harris thinks this is important.

People also often use "should" language to try to directly modify each others' motivations. (E.g., trying to express themselves in ways they think will apply social pressure or tug at someone's heartstrings.) Harris' critics think this is important, and worry that uncritically accepting Harris' project could conceal this phenomenon without making it go away.

(Well, I think the latter is less mysterian than the typical anti-Harris ethics argument, and Harris would probably be more sympathetic to the above framing than to the typical "ought is just its own thing, end of story" argument.)

I essentially agree with you that science can't bridge the is-ought gap (see caveats) but it's a good deal more complicated than the arguments you give here allow for (they are a good intro but I felt it's worth pointing out the complexities).

  1. When someone claims to have bridged the is-ought gap they aren't usually claiming to have analytically identified (i.e. identified as a matter of definition) ought with some is statements. That's a crazily high bar and modern philosophers (and Sam Harris was trained as a philosopher) tend to feel true analytic identities are rare but are not the only kind of necessary truths. For instance, the fact that "water is H2O" is widely regarded as a necessary truth that isn't analytic (do a search if you want an explanation) and there are any number of other philosophical arguments that are seen as establishing necessary truths which don't amount to the definitional relationship you demand.

I think the standard Harris is using is much weaker even than that.

  2. You insist that to be an ought it must be motivating for the subject. This is a matter of some debate. Some moral realists would endorse this while others would insist that it need only motivate certain kinds of agents who aren't too screwed up in some way. But I tend to agree with your conclusion; I just suggest it be qualified by saying we're presuming the standard sense of moral realism here.

  3. One has to be really careful with what you mean by 'science' here. One way people have snuck around the is-ought gap before is by using terms like 'cruel', which are kinda 'is' facts that bake in an ought (to be cruel requires that you immorally inflict suffering, etc.).

  4. It's not that Harris is purely embedded in some kind of dialectical tradition. He was trained as an analytic philosopher, and analytic philosophers invented the is-ought gap and are no strangers to the former mode of argumentation. It's more that Carroll is a physicist and doesn't know the terminology that would let him pin Harris down in terms he would understand and keep him from squirming off the point.

However, I'm pretty sure (based on my interaction with Harris emailing him over what sounded like a similarly wrongheaded view in the philosophy of mind) that Harris would admit that he hasn't bridged Hume's is-ought gap as philosophers understand it but instead explain that he means to address the general public's sense that science has no moral insight to offer.

In that sense I think he is right. Most people don't realize how much science can inform our moral discussions...he's just being hyperbolic to sell it.

Thanks for making point 2. Moral oughts need not motivate sociopaths, who sometimes admit (when there is no cost of doing so) that they've done wrong and just don't give a damn. The "is-ought" gap is better relabeled the "thought-motivation" gap. "Ought"s are thoughts; motives are something else.

This post not only made me understand the relevant positions better, but the two different perspectives on thinking about motivation have remained with me in general. (I often find the Harris one more useful, which is interesting by itself since he had been sold to me as "the guy who doesn't really understand philosophy".)

dxu:

This post seems relevant. (Indeed, it seems to dissolve the question entirely, and a full decade in advance.)

It certainly shows that Eliezer understands the distinction that I'm highlighting.

Two points:

First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof? Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?

Second, I appreciate this post because what Harris's disagreements with others so often need is exactly dissolution. And you've accurately described Harris's project: He is trying to persuade an ideal listener of moral claims (e.g., it's good to help people live happy and fulfilling lives), rather than trying to prove the truth of these claims from non-moral axioms.

Some elaboration on what Harris is doing, in my view:

  • Construct a hellish state of affairs (e.g., everyone suffering for all eternity to no redeeming purpose).
  • Construct a second state of affairs that is not so hellish (e.g., everyone happy and virtuous).
  • Call on the interlocutor to admit that the first situation is bad, and that the second situation is better.
  • Conclude that the interlocutor has admitted the truth of moral claims, even though Harris himself never explicitly said anything moral.

But by adding notions like "to no redeeming purpose" and "virtuous," Harris is smuggling oughts into the universes he describes. (He has to do this in order to block the interlocutor from saying "I don't admit the first situation is bad because the suffering could be for a good reason, and the second situation might not be good because maybe everyone is happy in a trivial sense because they've just wireheaded.")

In other words, Harris has not bridged the gap because he has begun on the "ought" side.

Rhetorically, Harris might omit the bits about purpose or virtue, and the interlocutor might still admit that the first state is bad and the second better, because the interlocutor has cooperatively embedded these additional moral premises.

In this case, to bridge the gap Harris counts on the listener supplying the first "ought."

First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof?

I don't mean to distinguish it from logical proof in the everyday sense of that term. Rational persuasion can be as logically rigorous as the circumstances require. What I'm distinguishing "rational persuasion" from is a whole model of moral argumentation that I'm calling "logical argumentation" for the purposes of this post.

If you take the model of logical argumentation as your ideal, then you act as if a "perfect" moral argument could be embedded, from beginning to end, from axiomatic assumptions to "ought"-laden conclusions, as a formal proof in a formal logical system.

On the other hand, if you're working from a model of dialectical argumentation, then you act as if the natural endpoint is to persuade a rational agent to act. This doesn't mean that any one argument has to work for all agents. Harris, for example, is interested in making arguments only to agents who, in the limit of ideal reflection, acknowledge that a universe consisting exclusively of extreme suffering would be bad. However, you may think that you could still find arguments that would be persuasive (in the limit of ideal reflection) to nearly all humans.

Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?

For the purposes of this post, I'm leaving much of this open. I'm just trying to describe how people are guided by various vague ideals about what ideal moral argumentation "should be".

But you're right that the word "rational" is doing some work here. Roughly, let's say that you're a rational agent if you act effectively to bring the world into states that you prefer. On this ideal, to decide how to act, you just need information about the world. Your own preferences do the work of using that information to evaluate plans of action. However, you aren't omniscient, so you benefit from hearing information from other people and even from having them draw out some of its implications for you. So you find value in participating in conversations about what to do. Nonetheless, you aren't affected by rhetorical fireworks, and you don't get overwhelmed by appeals to unreflective emotion (emotional impulses that you would come to regret on reflection). You're unaffected by the superficial features of who is telling you the information and how. You're just interested in how the world actually is and what you can do about it.

Do you need to have "deductive certainty" in the information that you use? Sometimes you do, but often you don't. You like it when you can get it, but you don't make a fetish of it. If you can see that it would be wasteful to spend more time on eking out a bit more certainty, then you won't do it.

"Rational persuasion" is the kind of persuasion that works on an agent like that. This is the rough idea.

Now, this "ought" symbol could appear in the ideal formal theory in one of only two ways: Either the "ought" symbol is an undefined symbol appearing among the axioms, or the "ought" symbol is subsequently defined in terms of the more-primitive "is" symbols used to express the axioms.

Ok, I am also a moral naturalist and I hold the same view as Harris does (at least I think I do). And I have to say that the easiest way to resolve your dichotomy is to say that the "ought" is embedded in an axiom. But even then I feel it is a very strange thing to say. Let me explain.

Imagine I have two apples and eat one apple. How many apples do I have now? Now I can use mathematical logic to resolve the question. But mathematical logic is insufficient to establish the truth value of my claim of having two apples initially, nor is it able to establish whether I indeed ate the apple, or whether the laws of the universe were broken and a new apple thermodynamagically appeared in my hand out of thin air. So what I want to say is that you cannot logic something into existence. Logic can only tell you the truth values under certain assumptions. But you need to find out the truth values of those assumptions out there in the universe.

Imagine having a choice-making agent capable of learning. Let's say the AlphaZero chess program. This is a computer program that is fed the rules of chess and is then capable of playing chess to refine its understanding of the game and become a very strong chess player.

The program awaits inputs in the form of chess moves from an opponent. It then processes the moves, evaluates multiple options, and then decides on an action in the form of a chess move that is most likely to satisfy its values - which is to win or at least draw the game of chess.

Now one can ask the question of why the program plays chess and why it does not do something else, such as working toward world peace or some other goal people deem worthy. I think the answer is obvious - the program was created so that it plays chess (and go and shogi). It does not even value the aesthetics of the chess board or many other things superficially related to chess. To ask why a chess program plays chess and not something else is meaningless in this sense. It was created to play chess. It cannot do differently, given its programming. This is your moral axiom. AlphaZero values winning in chess.

But you cannot find the answer to what AlphaZero values from some logical structure with some theoretical axioms. The crucial premise in naturalistic morality is that all thinking agents, including moral agents, have to have physical substance that you can examine. You cannot change a moral agent's values without a corresponding change in the brain, and vice versa. So you can make moral statements from IS sentences all the way down. For example:

IS statement 1: AlphaZero is a chess program that was programmed so that it values winning in chess.

IS statement 2: Therefore it ought to make move X and not move Y when playing an opponent because move Y is objectively a worse chess move compared to X.

Again you may object that this is circular reasoning and that I am assuming the ought right in statement 1. But that would be like saying that I am assuming I have two apples. Sure, I am assuming that. And what is the problem exactly? Is it not how we apply logic in our daily experience? Having two apples is a fact about the universe - a perfectly correct IS statement. AlphaZero wanting to win a game of chess is a perfectly correct IS statement about AlphaZero - the program running in computer memory somewhere in a Google building. And wanting to eat is a correct IS statement about me, now typing this text.
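To put this picture in code form, here is a minimal sketch (my own illustration under stated assumptions; `choose_move` and `evaluate` are hypothetical names, not AlphaZero's real interface):

```python
# Illustrative only: an agent whose "ought" is nothing over and above its
# built-in evaluation function applied to "is" facts about the position.

from typing import Callable, List

Move = str
Position = List[str]  # stand-in for a board state; purely illustrative

def choose_move(position: Position,
                legal_moves: Callable[[Position], List[Move]],
                evaluate: Callable[[Position, Move], float]) -> Move:
    """Return the legal move with the highest evaluation.

    `evaluate` encodes what the agent was built to value (e.g., winning at
    chess). "The agent ought to play X" is just a restatement of the "is"
    fact that X maximizes this built-in evaluation.
    """
    return max(legal_moves(position), key=lambda move: evaluate(position, move))
```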

Again you may object that this is circular reasoning and that I am assuming the ought right in statement 1. But that would be like saying that I am assuming I have two apples. Sure, I am assuming that. And what is the problem exactly?

The difference from Sean Carroll's point of view (logical argumentation) is that not every scientifically competent agent will find this notion of "ought" compelling. (Really, only chess-playing programs would, if "ought" is taken in a terminal-value sense.) Whereas, such an agent's scientific competence would lead it to find compelling the axiom that you have two apples.

And I think that Sam Harris would agree with that, so far as it goes. But he would deny that this keeps him from reducing "ought" statements to purely scientific "is" statements, because he's taking the dialectical-argumentation point of view, not the logical-argumentation one. At any rate, Harris understands that a superintelligent AI might not be bothered by a universe consisting purely of extreme suffering. This was clear from his conversation with Eliezer Yudkowsky.

The whole thing hangs on footnote #4, and you don't seem to understand what realists actually believe. Of course they would dispute it, and not just "some" but most philosophers.

Right, the whole thing seems like a rather strange confusion to me, since the is-ought gap is a problem certain kinds of anti-realists face but is not a problem for most realists, since for them moral facts are still facts. So it seems to me, not being familiar with Harris, that an alternative interpretation is that Harris is a moral realist and so believes there is no is-ought gap, and thus this business with dialectical v. logical explanations is superfluous.

It's true that a moral realist could always bridge the is–ought gap by the simple expedient of converting every statement of the form "I ought to X" to "Objectively and factually, X is what I ought to do".

But that is not enough for Sam's purposes. It's not enough for him that every moral claim is or is not the case. It's not enough that moral claims are matters of fact. He wants them to be matters of scientific fact.

On my reading, what he means by that is the following: When you are pursuing a moral inquiry, you are already a moral agent who finds certain objective and scientifically determinable facts to be motivating (inducing of pursuit or avoidance). You are, as Eliezer puts it, "created already in motion". Your inquiry, therefore, is properly restricted just to determining which scientific "is" statements are true and which are false. In that sense, moral inquiry reduces entirely to matters of scientific fact. This is the dialectical-argumentation point of view.

But his interlocutors misread him to be saying that every scientifically competent agent should find the same objective facts to be motivating. In other words, all such agents should [edit: I should have said "would"] feel compelled to act according to the same moral axioms. This is what "bridging the is–ought gap" would mean if you confined yourself to the logical-argumentation framework. But it's not what Sam is claiming to have shown.

When you are pursuing a moral inquiry, you are already a moral agent who finds certain objective and scientifically determinable facts to be motivating (inducing of pursuit or avoidance). You are, as Eliezer puts it, “created already in motion”. Your inquiry, therefore, is properly restricted just to determining which scientific “is” statements are true and which are false. In that sense, moral inquiry reduces entirely to matters of scientific fact. This is the dialectical-argumentation point of view.

Note that while this account is internally consistent (at least, to a first approximation), it lacks a critical component of Sam Harris’s (apparent) view—namely, that all humans (minus a handful of sociopaths, perhaps) find the same “objective and scientifically determinable facts” to be “motivating”.

Without that assumption, the possibility is left open that while each individual human does, indeed, already find certain objective facts to be morally motivating, those facts differ between groups, between types of people, between individuals, etc. It would then be impossible to make any meaningful claims about what “we ought to do”, for any interesting value of “we”.


So, we might ask, what is the problem with that? Suppose we add this additional claim to the quoted account. Isn’t it still coherent? Well, sure, as far as it goes, but: suppose that I agree with Sam Harris that ~all humans find the same set of objective facts to be morally motivating. But then it turns out that we disagree on just which facts those are! How do we resolve this disagreement? We can hardly appeal to objective facts, to do so…

And we’re right back at square one.

suppose that I agree with Sam Harris that ~all humans find the same set of objective facts to be morally motivating. But then it turns out that we disagree on just which facts those are! How do we resolve this disagreement? We can hardly appeal to objective facts, to do so…

I don't follow. Sam would say (and I would agree) that which facts which humans find motivating (in the limit of ideal reflection, etc.) is an empirical question. With regard to each human, it is a scientific question about that human's motivational architecture.

Indeed—but that “in the limit of ideal reflection” clause is the crux of the matter!

Yes, in the limit of ideal reflection, which facts I find motivating is an empirical question. But how long does it take to reach the limit of ideal reflection? What does it take, to get there? (Is it even a well-defined concept?! Well, let’s assume it is… though that’s one heck of an assumption!)

In fact, isn’t one way to reach that “limit of ideal reflection” simply (hah!) to… debate morality? Endless arguments about moral concepts—what is that? Steps on the path to the limit of ideal reflection, mightn’t we say? (And god forbid you and I disagree on just what constitutes “the limit of ideal reflection”, and how to define it, and how to approach it, and how to recognize it! How do we resolve that? What if I say that I’ve reflected quite a bit, now, and I don’t see what else there is to reflect, and I’ve come to my conclusions; what have you to say to me? Can you respond “no, you have more reflecting to do”? Is that an empirical claim?)

What is clear enough is that the answer to these questions—“empirical” though they may be, in a certain technical sense—is a very different sort of fact, than the “scientific” facts that Sam Harris wants to claim are all that we need, to know the answers to moral questions. We can’t really go out and just look. We can’t use any sort of agreed-upon measurement procedure. We don’t really even agree on how to recognize such facts, if and when we come into possession of them!

So labeling this just another “scientific question” seems unwarranted.

Sam Harris grants the claim that you find objectionable (see his podcast conversation with Yudkowsky). So it’s not the crux of the disagreement that this post is about.

Could you point out where he does that exactly? Here's the transcript: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/

Thank you for the link to the transcript. Here are the parts that I read in that way (emphasis added):

[Sam:] So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like making paperclips—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips.
This moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no natural goals that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals.

[...]

[Sam:] One thing this [paperclip-maximizer] thought experiment does: it also cuts against the assumption that [...] we’re not going to build something that is superhuman in competence that could be moving along some path that’s as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip.

A bit later, Sam does deny that facts and values are "orthogonal" to each other, but he does so in the context of human minds ("we" ... "us") in particular:

Sam: So generally speaking, when we say that some set of concerns is orthogonal to another, it’s just that there’s no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn’t tell us what is good. What is good has to be pursued in some other domain. I don’t happen to agree with that, as you know, but that’s an example.
Eliezer: I don’t technically agree with it either. What I would say is that the facts are not motivating. “You can know all there is to know about what is good, and still make paperclips,” is the way I would phrase that.
Sam: I wasn’t connecting that example to the present conversation, but yeah.

So, Sam and Eliezer agree that humans and paperclip maximizers both learn what "good" means (to humans) from facts alone. They agree that humans are motivated by this category of "good" to pursue those things (world states or experiences or whatever) that are "good" in this sense. Furthermore, that a thing X is in this "good" category is an "is" statement. That is, there's a particular bundle of exclusively "is" statements that captures just the qualities of a thing that are necessary and sufficient for it to be "good" in the human sense of the word.

More to my point, Sam goes on to agree, furthermore, that a superintelligent paperclip maximizer will not be motivated by this notion of "good". It will be able to classify things correctly as "good" in the human sense. But no amount of additional scientific knowledge will induce it to be motivated by this knowledge to pursue good things.

Sam does later say that "There are places where intelligence does converge with other kinds of value-laden qualities of a mind":

[Sam:] I do think there’s certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you’re just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. And it just seems like there’s no law of nature that would prevent an intelligent system from doing that.

Here I read him again to be saying that, in some contexts, such as in the case of humans and human-descendant minds, intelligence should converge on morality. However, no law of nature guarantees any such convergence for an arbitrary intelligent system, such as a paperclip maximizer.

This quote might make my point in the most direct way:

[Sam:] For instance, I think the is-ought distinction is ultimately specious, and this is something that I’ve argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn’t get you values that we would recognize as good. It certainly doesn’t guarantee values that are compatible with our wellbeing. Whether “paperclip maximizer” is too specialized a case to motivate this conversation, there’s certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us.

A bit further on, Sam again describes how, in his view, "ought" evaporates into "is" statements under a consequentialist analysis. His argument is consistent with my "dialectical" reading. He also reiterates his agreement that sufficient intelligence alone isn't enough to guarantee convergence on morality:

[Sam:] This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just “is” questions, just what actually happens to all the relevant minds, without remainder, and I’ve yet to find an example of somebody giving me a real moral concern that wasn’t at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone.
Eliezer: But that’s the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these “is” questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to “is” questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on.
Sam: Exactly. I can well imagine that such minds could exist ...

Good post. I think Carroll's PoV is correct and Sam's is probably correct. Thinking about it, I would have phrased that one very differently, but I think there'd be zero difference on substance.

Edit: Having caught myself referencing this to explain Harris's position, I amend my post to say that the way you put it is actually exactly right, and the way I would have put it would at best have been a mildly confused version of the same thing.

To me "ought" is unpacked as "I have assigned a certain moral weight or utility to various possible worlds that appear to depend on an agent's actions, and I prefer the world with the higher utility". Not sure if this matches any specific moral philosophy or whatever.

This post isn't arguing for any particular moral point of view over another, so you'll get no debate from me :).

Just to elaborate on the point of the post, though:

From the logical-argumentation point of view, something like the unpacking that you describe is necessary, because a moral argument has to conclude with an "ought" statement, in which "ought" appears explicitly, so the "ought" has to get introduced somewhere along the way, either in the original axioms or as a subsequent definition.

From the dialectical-argumentation point of view, this unpacking of "ought" is unnecessary, at least within the moral argument itself.

Granted, the persuader will need to know what kinds of "is" facts actually persuade you. So the persuader will have to know that "ought" means whatever it means to you. But the persuader won't use the word "ought" in the argument, except in some non-essential and eliminable way.

It's not like the persuader should have to say, "Do X, because doing X will bring about world W, and you assign high moral weight or utility to W."

Instead, the persuader will just say, "Doing X will bring about world W". That's purely an "is" statement. Your internal process of moral evaluation does the rest. But that process has to happen inside of you. It shouldn't—indeed, it can't—be carried out somehow within the statements of the argument itself.

This makes sense, but what you call "dialectical moral argumentation" seems to me like it can just be considered as what you call "logical moral argumentation" but with the "ought" premises left implicit, you know? From this point of view, you could say that they're two different ways of framing the same argument. Basically, dialectical moral argumentation is the hypothetical syllogism to logical moral argumentation's repeated modus ponens. Because if you want to prove C, where C is "You should take action X", starting from A, where A is "You want to accomplish Y", then logical moral argumentation makes the premise A explicit, and so, supplied with the facts A => B and B => C, can first derive B and then C (although obviously that's not the only way to do it, but let's just go with this); whereas dialectical moral argumentation doesn't actually have the premise A to hand, and so can only apply hypothetical syllogism to get A => C, and then has to hand this to the other party, who has A and can derive C with it.
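Schematically (my own rendering of the contrast just described):

$$\textbf{Logical mode:}\qquad \frac{A \qquad A \to B}{B}\,,\qquad \frac{B \qquad B \to C}{C}$$

$$\textbf{Dialectical mode:}\qquad \frac{A \to B \qquad B \to C}{A \to C}\,,\quad \text{after which the listener, who already has } A,\ \text{concludes } C.$$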

So, like, this is a good way of making sense of Sam Harris, as you say, but I'm not sure this new point of view actually adds anything new. It sounds like a fairly trivial rephrasing, and to me at least seems like a less clear one, hiding some of the premises.

(Btw thank you for the comment below with the interview quotes; that really seems to demonstrate that yes, your explanation really is what Harris means, not the ridiculous thing it sounds like he's saying!)

and then rely on my internal motivational drives to find those facts compelling.

Doesn't this assume that our internal motivational drives, our core values, are sufficiently aligned that our "oughts" also align? This strikes me as an unreasonable assumption.

If you're trying to convince me to do some thing X, then you must want me to do X, too. So we must be at least that aligned.

We don't have to be aligned in every regard. And you needn't yourself value every consequence of X that you hold up to me to entice me to X. But you do have to understand me well enough to know that I find that consequence enticing.

But that seems to me to be both plausible and enough to support the kind of dialectical moral argumentation that I'm talking about.

I haven't finished reading the comments here, so it's possible my mind will be changed.

I actually see the difference between these two arguments (represented by Sam Harris and Hume) as being a buckets issue. Sam Harris puts "ought" in the same bucket as "is." In most cases, the thing that causes harm or joy is relatively obvious, so there is no problem with having "ought" and "is" in the same bucket. The problem with having things in the same bucket in general is that we tend to forget that the bucket exists, and think the two concepts are always inherently linked. I think Sam Harris' view is a valid one, especially on an intuitive and emotional level. However, I think there is still an important distinction to be made here and that arguing between these two points of view is still valid.

What principles guide the "oughts?" Let's look at abortion. People on one side say that ending life is obviously causing suffering by depriving a living thing of the rest of its life. People on the other side don't disagree with that principle, but they disagree that a fetus counts as a living thing, so when they look at a "suffering equation," the mother's feelings are taken into account, but the fetus' aren't. Both sides think their own "ought" logically follows from the observable "is." If we don't recognize that there is a bucket issue happening, then we continue to argue past each other without understanding the actual sticking point. In the context of objective morality or development of AI, the question of what principles guide the "oughts" is of utmost importance. We can't answer the question by ignoring that it is a question.

This seems rather disappointing on Sam Harris's part, given that he indeed had training in philosophy (he has a B.A. in philosophy from Stanford, according to Wikipedia). If this post describes Harris's position correctly (I haven't read the source material), it seems to boil down to Harris saying that science can tell you what your instrumental goals/values should be, given your terminal goals/values, but it shouldn't be hard to see (or steelman) that when someone says "science can't bridge Hume’s is–ought gap" they're saying that science can't tell you what your terminal goals/values should be. It seems like either Harris couldn't figure out the relatively simple nature of the disagreement/misunderstanding, or that he could figure it out but deliberately chooses not to clarify/acknowledge it in order to keep marketing that he knows how "science can bridge Hume’s is–ought gap".

I'm not sure this post has actually captured Harris' frame (I'd weakly bet against it, actually, both because I think capturing people's frames is hard, and because 'it's all marketing or weird politics' is pretty high on my list of possible causes).

But it's not that surprising to me that people could spend years not able to understand that they are coming at a situation from very different frames that "should" be blatantly obvious.

In epistemic structural realism the bridge is all we have. One end of the bridge feels more 'is' like and one end feels more 'ought' like. Both are subject to extensionalism in trying to figure out what's 'really there'. I find this stance much much less confusing than the more standard indirect realism that typically underlies the is-ought distinction.

There's also the general pattern, see if by inverting the nature of the representation (turn the edges into vertices and vice versa) a false dichotomy disappears.

I think I can express Sam's point using logical argumentation.

1) Your internal motivational structure is aimed at X.

2) Y is a pre-requisite for X.

3) You ought to do Y in order to achieve X. (This is the only sense in which "ought" means anything).

First time post! I signed up just to say this. If there's a problem with the formulation I just described, I'd love to know what that is. I've been confused for years about why this seems so difficult for some, or why Sam can't put it in these terms.

I also feel like this conundrum is pretty easily solved, but I have a different take on it; one which analyses both situations you've presented identically, although it ultimately reduces to 'there is an is-ought problem'.

The primary thrust of my view on this is: All 'ought' statements are convolutions of objective and subjective components. The objective components can be dealt with purely scientifically and the subjective components can be variously criticised. There is no need to deal with them together.

The minimal subjective component of an ought statement is simply a goal, or utility metric, upon which to measure your ought statement. The syllogism thus becomes: if the policy scores highest on the utility metric, and if one subscribes to that utility function, implement that policy. Clause the first is completely objective and addressable by science to the fullest extent. Whether or not the utility function is subscribed to is also completely objective. But it is completely subjective as to which utility function one chooses. The conclusion then follows directly.
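A minimal sketch of that decomposition (my own illustration; the option names and utility functions are made up): the objective step is the ranking computation, while the subjective step is the choice of which utility function to feed into it.

```python
# Illustrative only: separating the objective computation (which option scores
# highest on a given utility function) from the subjective choice of that function.

from typing import Callable, Dict

Facts = Dict[str, float]    # measurable "is" facts about an option
Options = Dict[str, Facts]  # option name -> its "is" facts

def recommend(options: Options, utility: Callable[[Facts], float]) -> str:
    """Objective step: rank the options by the supplied utility function."""
    return max(options, key=lambda name: utility(options[name]))

options = {
    "policy_A": {"wellbeing": 0.9, "liberty": 0.2},
    "policy_B": {"wellbeing": 0.4, "liberty": 0.9},
}

# Two subjectively chosen utility functions yield two different "oughts";
# nothing in the objective step above adjudicates between them.
print(recommend(options, lambda facts: facts["wellbeing"]))  # -> policy_A
print(recommend(options, lambda facts: facts["liberty"]))    # -> policy_B
```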

The objective components can be addressed objectively, through science and evidence. We can only hope that the subjective component (the choice of utility function) is well constrained by human biology (and there is objective evidence that it is), but we cannot justify any particular choice.

If we apply this to the logical approach described above the chosen utility metric/function is just an axiom, and the rest follows objectively and logically. If we apply this to the dialectical approach then we have not removed the axiom, rather only moved it.

When you argue with me about how creamy the ice cream is, and how great the chocolate chips are, you are appealing to *my* axiomatic utility metric. So even from the dialectical point of view you've still not solved the is-ought problem, you've just pushed the responsibility of connecting the is to the ought onto the victim of your rhetoric.

Essentially this dialectical approach is performing the two easy bits of the computation: objectively determine that your victim maximises X, objectively determine how to maximise X, then prescribe the action to your victim. But at no point has the ought been bridged, just an existing, arbitrarily chosen, and non-scientifically justified ought exploited.

Subsequent to your prescription, having done so rightly, the victim, rather than yourself, says *"Oh yes, I ought do that"*, and while you might never need implement anything unscientific to get to this resolution, there is no doubt that your victim didn't bridge that gap with science or logic.

TAG:

It's not surprising that "ought" statements concerning arbitrary preferences have a subjective component, but the topic is specifically about moral "Oughts", and it is quite possible that being about morality constrains things in a way that brings in additional objectivity.

Moral oughts are not different to any other kind of ought statement. Almost all of my post is formulated in terms of a generic policy and utility function anyway, so you can replace it with a moral or amoral ought as you wish. If you dislike the ice cream example, the same point is trivially made with any other moral ought statement.

TAG:

Oh, I think you'll find that moral oughts are different. For one thing, you can be jailed for breaking them.

Even if that were true (it isn't, since laws do not map to morality) it wouldn't really have anything to do with the is-ought problem unless you presume that the entity implements a utility function which values not being jailed (which is exactly the subjective axiom that allows the bridging of is-ought in my analysis above).

TAG:

Even if that were true (it isn't, since laws do not map to morality)

That is an extraordinary claim.

You need a moral justification to put someone in jail. Legal systems approximate morality, and inasmuch as they depart from it, they are flawed, like a bridge that doesn't stay up.

implements a utility function which values not being jailed (which is exactly the subjective axiom

If everyone is subject to the same punishments, then they have to be ones that are negatively valued by everyone... who likes being in jail? So it is not subjective in an interesting way.

In any case, that is approaching the problem from the wrong end. Morality is not a matter of using decision theory to avoid being punished for breaking arbitrary, incomprehensible rules. It is the non-arbitrary basis of the rules.

There are many cases where people are jailed arbitrarily, or unfairly. At no point in a legal case is the jury asked to consider whether it is moral to jail the defendant, only whether the law says they should be in jail. At best, the only moral leeway the legal system has is in the judge's ability to change the magnitude of a sentence (which in many countries is severely hampered by mandatory minimums).

An individual's morality may occasionally line up with the law (especially if one's subjective axiom is 'Don't break the law'), but this alignment is rarely if ever on purpose; rather, it is a coincidence.

As to who likes being in jail? Many a person has purposely committed crimes and handed themselves into the police because they prefer being in jail to being homeless, or prefer the lifestyle of living and surviving in jail to having to engage in the rat race, and so on.

Their utility function rates the reduction in utility from the lack of freedom in jail less than the gain in utility from avoiding the rat race, or living on the street, etc.

Their morality can be objectively analyzed through science by simulating their utility function and objectively determining which actions will likely lead to high expectation values. But their particular choice of a function which values being in jail over harming others via crime is completely subjective.

It is beneficial to be able to separate these two components, because there may be (and likely are) many cases in which someone is objectively poor at maximizing their utility function, and it would be beneficial to steer them in the right direction without getting bogged down in their choice of axiom.

None of this is about 'avoiding being punished for breaking arbitrary rules', it is about maximizing expected utility, where the definition of utility is subjective.

There are many evolutionary and biological reasons why people might have similar subjective axioms but there is no justifying a particular subjective axiom. If you claim there is, then take the example of a psychopath who claims that "I do not value life, humanity, community, empathy, or any other typically recognizable human 'goods'. I enjoy torturing and murdering people, and do not enjoy jail or capital punishment, therefore it is good if I get to torture and murder with no repercussions" and demonstrate how you can objectively prove that most extreme of example statements false. And if you find that task easy, feel free to try on any more realistic and nuanced set of justifications for various actions.

TAG:

Bad law can be immoral, just as bad bridges can fall down. As I have already pointed out, the connection between morality and law is normative.

coincidence

What do you think guides the creation of law? Why is murder illegal?

As to who likes being in jail? Many a person has purposely committed crimes and handed themselves into the police because they prefer being in jail to being homeless, or prefer the lifestyle of living and surviving in jail to having to engage in the rat race, and so on.

You are trying to argue general rules from exceptional cases.

The law is the product of many individuals, each with different subjective axioms, separately trying to maximize their particular utility functions during the legislation process. As a result, the law as written and implemented has at best an extremely tenuous link to any individual's morality, let alone the morality of society at large. Murder is illegal because, for biological reasons, the vast majority of people assign a large negative value to murder, so the result of the legislators' minimax procedure is that murder is illegal in most cases.

But if an individual did not assign negative value (for whatever reason) to murder how would you convince them they're wrong to do such a thing? It should be easy if it is objective. If you can't do the extreme cases then how can you hope to address real moral issues which are significantly more nuanced? This is the real question that you need to answer here since your original claim is that it is objective. I hope you'll not quote snipe around it again.

I'm not arguing general rules from exceptional ones, I'm not proposing any rules at all. I am proposing an analytic system that is productive rather than arbitrarily exclusionary.

TAG:

If morality is (are) seven billion utility functions, then a legal system will be a poor match for it (them).

But there are good reasons for thinking that can't be the case. For one thing, people can have preferences that are intuitively immoral. If a psychopath wants to murder, that does not make murder moral.

For another, it is hard to see what purpose morality serves when there is no interaction between people. Someone who is alone on a desert island has no need of rules against murder, because there is no one to murder, and no need of rules against theft, because there is no one to steal from, and so on.

If morality is a series of negotiations and trade offs about preferences, then the law can match it closely. We can answer the question "why is murder illegal" with "because murder is wrong".