Less Wrong views on morality?

Do you believe in an objective morality capable of being scientifically investigated (a la Sam Harris *or others*), or are you a moral nihilist/relativist? There seems to be some division on this point. I would have thought Less Wrong to be well in the former camp.

 

Edit: There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris *or others*)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically. It is a definite "is" from which we can make true "ought" statements relative to that "is". See drethelin's comment and my analysis of Clippy.

Comments


There seems to be some division on this point.

I might be mistaken, but I got the feeling that there's not much of a division. The picture I've got of LW on meta-ethics is something along the lines of: values exist in people's heads, and those are real, but if there were no people there wouldn't be any values. Values are to some extent universal, since most people care about similar things, and this makes some values behave as if they were objective. If you want to categorize it (though I don't know what you would get out of that), it's a form of nihilism.

An appropriate question when discussing objective and subjective morality is:

  • What would an objective morality look like, as opposed to a subjective one?

People here seem to share anti-realist sensibilities but then balk at the label and do weird things for anti-realists like treat moral judgments as beliefs, make is-ought mistakes, argue against non-consequentialism as if there were a fact of the matter, and expect morality to be describable in terms of a coherent and consistent set of rules instead of an ugly mess of evolved heuristics.

I'm not saying it can never be reasonable for an anti-realist to do any of those things, but it certainly seems like belief in subjective or non-cognitive morality hasn't filtered all the way through people's beliefs.

I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI. I don't think a moral anti-realist is likely to think an AGI can be friendly to me and to Aristotle. It might not even be possible to be friendly to me and any other person.

I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI

Well that seems like the most dangerous instance of motivated cognition ever.

Agreed.

I just posted a more detailed description of these beliefs (which are mine) here.

If anyone here believes in an objectively existing morality I am interested in dialogue. Right now it seems like a "not even wrong", muddled idea to me, but I could be wrong or thinking of a strawman.

After reading lots of debates on these topics, I'm no longer sure what the terms mean. Is a paperclip maximizer a "moral nihilist"? If yes, then so am I. Same for no.

Morality is a human behavior. It is in some ways analogous to trade or language: a structured social behavior that has developed in a way that often approximates particular mathematical patterns.

All of these can be investigated both empirically and intellectually: you can go out and record what people actually do, and draw conclusions from it; or you can reason from first principles about what sorts of patterns are mathematically possible; or both. For instance, you could investigate trade either beginning from the histories of actual markets, or from principles of microeconomics. You could investigate language beginning from linguistic corpora and historical linguistics ("What sorts of language do people actually use? How do they use it?"); or from formal language theory, parsing, generative grammar, etc. ("What sorts of language are possible?")

Some of the intellectual investigation of possible moralities we call "game theory"; others, somewhat less mathematical but more checked against moral intuition, "metaethics".

Asking whether there are universal, objective moral principles is a little like asking whether there are universal, objective principles of economics. Sure, in one sense there are: but they're not the sort of applied advice that people making moral or economic claims are usually looking for! There are no theorems of economics that give applied advice such as "real estate is always a good investment," and there are no theorems of morality that say things like "it's never okay to sleep with your neighbor's wife".

In summary of my own current position (and which I keep wanting to make a fuller post thereof):

If factual reality can be represented as a function F: M -> M from moral instructions to moral instructions (e.g. given the fact that burning people hurts them, F("it's wrong to hurt people") -> "it's wrong to burn people"), then there may exist universal moral attractors for our given reality -- these would represent objective moralities that are true for a vast set of different moral starting positions, much as you reach the Sierpinski Triangle no matter what shape you start from.

This would however still not be able to motivate an agent that starts with an empty set of moral instructions.
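The attractor idea above can be illustrated with an ordinary fixed-point iteration. The update rule below is a toy contraction I made up for illustration (nothing from the thread itself): many very different starting points all converge to the same limit under repeated application of the same F, just as the chaos game reaches the Sierpinski Triangle from any starting shape.

```python
# Toy illustration of an attractor: repeatedly applying the same
# update F pulls very different starting points to one fixed point.
# F is an arbitrary contraction mapping, chosen only for illustration.

def F(x):
    return 0.5 * x + 1.0   # contraction with fixed point x* = 2

def iterate(x, n=50):
    for _ in range(n):
        x = F(x)
    return x

for start in (-100.0, 0.0, 3.0, 1e6):
    print(start, "->", round(iterate(start), 6))
# every starting value converges to 2.0
```

The analogy is loose, of course: here the "starting positions" are numbers, while in the comment above they are sets of moral instructions, and nothing guarantees that a realistic F is a contraction.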

There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris or others)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically. It is a definite "is" from which we can make true "ought" statements relative to that "is". See drethelin's comment and my analysis of Clippy.

So first of all, that's not what Sam Harris means so stop invoking him. Second of all, give an example of what kind of facts you would refer to in order to decide whether or not murder is immoral.

If you are referring to facts about your brain/mind then your account is subjectivist. Nothing about subjectivism says we can't investigate people's moral beliefs scientifically.

Now it is the case that if you define morality as "whatever that thing in my brain that tells me what is right and wrong says" there is in some sense an "is from which you can get an ought". But this is not at all what Hume is talking about. Hume is talking about argument and justification. His point is that an argument with only descriptive premises can't take you to a normative conclusion. But note that your "is" potentially differs from individual to individual. I suppose you could use it to justify your own moral beliefs to yourself, but that does not moral realism make. What you can't do is use it to convince anyone else.

This discussion is getting rather frustrating because I don't think your beliefs are actually wrong. You're just a) refusing to use or learn standard terminology that can be quickly picked up by glancing at the Stanford Encyclopedia of Philosophy and b) thinking that whether or not we can learn about evolved or programmed utility-function-like things is a question related to whether or not moral realism is true. I'm a very typical moral anti-realist, but I still think humans have lots of values in common, that there are scientific ways to learn about those values, and that this is a worthy pursuit.

If you still disagree I'd like to hear what you think people in my camp are supposed to believe.

I suspect that there exists an objective morality capable of being investigated, but not using the methods commonly known as science.

What we currently think of as objective knowledge comes from one of two methods:

1) Start with self-evident axioms and apply logical rules of inference. The knowledge obtained from this method is called "mathematics".

2) The method commonly called the "scientific method". Note that thanks to the problem of induction the knowledge obtained using this method can never satisfy method 1's criterion for knowledge.

I suspect investigating morality will require a third method, and that the is-ought problem is analogous to the problem of induction: it will stop moral statements from being scientific (just as scientific statements aren't mathematical), but it ultimately won't prevent a reasonably objective investigation of morality.

Reading your edit... I believe that there exists some X such that X developed through natural selection, X does not depend on any particular knowledge, X can be investigated scientifically, and for any moral intuition M possessed by a human in the real world, there's a high probability that M depends on X such that if X did not exist, M would not exist either. (Which is not to say that X is the sole cause of M, or that two intuitions M1 and M2 can't both derive from X such that M1 and M2 motivate mutually exclusive judgments in certain real-world situations.)

The proper relationship of X to the labels "objective morality," "moral nihilism", "moral relativism," "Platonic Form of Good", "is statement" and "ought statement" is decidedly unclear to me.

There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris or others)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically

I think you are bringing up two separate questions:

  • Can science tell us what we value? This question does not depend on whether morality is universal, any more than a scientific investigation of hippos' food preferences depends on elephants having the same ones.

  • Can science tell us what to value? If I have not misunderstood Harris, his central claim in The Moral Landscape is that science can. Harris has been criticized for not actually showing that, but rather showing that if one presupposes that maximum "well-being" (suitably defined) is morally good and suffering bad, then science can tell us which actions are morally good or bad. But this is no different from claiming that if we define moral goodness as the amount of paperclips there are, then science can tell us which actions are good or bad.

Well, an awful lot of what we think of as morality is dictated, ultimately, by game theory. Which is pretty universal, as far as I can tell. Rational-as-in-winning agents will tend to favor tit-for-tat strategies, from which much of morality can be systematically derived.
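As a concrete sketch of the game-theory claim above: in the iterated prisoner's dilemma, tit-for-tat (cooperate first, then mirror the opponent's last move) does well against cooperators and limits its losses against defectors. The payoff matrix and strategies below are the standard textbook ones, chosen only for illustration.

```python
# Minimal iterated prisoner's dilemma, illustrating why tit-for-tat
# is hard to exploit. Standard payoffs: mutual cooperation 3,
# mutual defection 1, defecting against a cooperator 5 vs 0.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # moves the *other* player has made so far
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a)
        b = strat_b(hist_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```

Whether this counts as "deriving morality" is exactly what the replies below dispute: the simulation only tells you which strategies win, given a payoff matrix you already chose.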

from which much of morality can be systematically derived

Not all of it, though, because you still need some "core" or "terminal" values that you use to decide what counts as a win. In fact, all the stuff that's derived from game theory seems to be what we call instrumental values, and they're in some sense the less important ones; the larger portion of arguments about morality end up being about those "terminal" values, if they even exist.

You are talking about different things. Dolores is talking about the "why should I cooperate instead of cheating?" kind of morality. You, on the other hand, are talking about meta-ethics: what is the meaning of right and wrong, what is value, etc.

If Euthyphro's dilemma proves religious morality to be false, it also does the same to evolutionary morality: http://atheistethicist.blogspot.com/2009/02/euthyphro-and-evolutionary-ethics.html