Followup to: Is Morality Preference?

In the dialogue "Is Morality Preference?", Obert argues for the existence of moral progress by pointing to free speech, democracy, mass street protests against wars, the end of slavery... and we could also cite female suffrage, or the fact that burning a cat alive was once a popular entertainment... and many other things that our ancestors believed were right, but which we have come to see as wrong, or vice versa.

But Subhan points out that if your only measure of progress is to take a difference against your current state, then you can follow a random walk, and still see the appearance of inevitable progress.
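Subhan's point can be made concrete with a toy simulation (purely illustrative; the one-dimensional walk and the distance measure are my own simplifying assumptions, not anything from the dialogue). A moral "state" that drifts at random, with no direction at all, will still look like steady convergence toward the present when each past moment is measured by its distance from the walk's endpoint:

```python
import random

def random_walk(steps, seed):
    """A 1-D random walk: each step moves the 'moral state' +1 or -1 at random."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(steps):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

# Average, over many independent walks, how far each past moment sits
# from that walk's *final* state. No walk has any built-in direction,
# yet measured against the endpoint, history looks like steady progress.
STEPS, TRIALS = 1000, 500
checkpoints = [0, 250, 500, 750, 1000]
avg_distance = {t: 0.0 for t in checkpoints}
for s in range(TRIALS):
    path = random_walk(STEPS, seed=s)
    present = path[-1]
    for t in checkpoints:
        avg_distance[t] += abs(path[t] - present) / TRIALS

for t in checkpoints:
    print(f"t={t:4d}  mean |state(t) - present| = {avg_distance[t]:.1f}")
```

The mean distance shrinks as t approaches the present (roughly as the square root of the remaining steps), so an observer at the endpoint sees what looks like inevitable progress toward current values, even though the walk had no direction whatsoever.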

One way of refuting the simplest version of this argument, would be to say that we don't automatically think ourselves the very apex of possible morality; that we can imagine our descendants being more moral than us.

But can you concretely imagine a being morally wiser than yourself—one who knows that some particular thing is wrong, when you believe it to be right?

Certainly:  I am not sure of the moral status of chimpanzees, and hence I find it easy to imagine that a future civilization will label them definitely people, and castigate us for failing to cryopreserve the chimpanzees who died in human custody.

Yet this still doesn't prove the existence of moral progress.  Maybe I am simply mistaken about the nature of changes in morality that have previously occurred—like looking at a time chart of "differences between past and present", noting that the difference has been steadily decreasing, and saying, without being able to visualize it, "Extrapolating this chart into the future, we find that the future will be even less different from the present than the present."

So let me throw the question open to my readers:  Whither moral progress?

You might say, perhaps, "Over time, people have become more willing to help one another—that is the very substance and definition of moral progress."

But as John McCarthy put it:

"If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle."

Once you make "people helping each other more" the definition of moral progress, then people helping each other all the time is, by definition, the apex of moral progress.

At the very least we have Moore's Open Question:  It is not clear that helping others all the time is automatically moral progress, whether or not you argue that it is; and so we apparently have some notion of what constitutes "moral progress" that goes beyond the direct identification with "helping others more often".

Or if you identify moral progress with "Democracy!", then at some point there was a first democratic civilization—at some point, people went from having no notion of democracy as a good thing, to inventing the idea of democracy as a good thing.  If increasing democracy is the very substance of moral progress, then how did this moral progress come about to exist in the world?  How did people invent, without knowing it, this very substance of moral progress?

It's easy to come up with concrete examples of moral progress.  Just point to a moral disagreement between past and present civilizations; or point to a disagreement between yourself and present civilization, and claim that future civilizations might agree with you.

It's harder to answer Subhan's challenge—to show directionality, rather than a random walk, on the meta-level.  And explain how this directionality is implemented, on the meta-level: how people go from not having a moral ideal, to having it.

(I have my own ideas about this, as some of you know.  And I'll thank you not to link to them in the comments, or quote them and attribute them to me, until at least 24 hours have passed from this post.)

 

Part of The Metaethics Sequence

Next post: "The Gift We Give To Tomorrow"

Previous post: "Probability is Subjectively Objective"

Comments


Part of the answer could lie in "what would someone teleported to another culture think?" I don't think it totally solves the question, but it's a hint, or part of the answer.

If you take someone from now and teleport him to the Dark Ages, with absolute monarchy, serfdom, capital punishment by the most horrible methods of killing, torture, ... he will be horrified.

If you take someone from the Dark Ages and teleport him to now, he'll probably be very lost at first, but I don't think he would be horrified by the fact that we manage to make more-or-less reasonable decisions using democracy (at least as reasonable as what the kings used to make), or that society doesn't collapse into crime and chaos when we abolish the death penalty, serfdom, torture, ...

Many people who, in the past, advocated what we now consider barbaric (torture, the death penalty, dictatorship, ...) did so saying "there is no alternative", "if we don't maintain order, it'll be chaos and everyone will murder each other", "if you don't have a king, no decision will be made", ...

The same applies to points which are debated right now in Western societies, like the "painless" death penalty or corporal punishment in education. People who are against them are horrified by them; people who are in favor argue instead that "we need them".

And it also applies to things like prison, which enjoys near-consensus support right now. I find very few people around me who justify prison for its own sake, only because we need it to prevent and deter crime. So when we find a way to do without prison (or to use it much less), by finding alternatives (technology like electronic monitoring, societal evolution, better understanding of psychology and sociology, ...), people in the future will be outraged at how long we locked people away, while we, if teleported into that future, would be confused about how they keep society from chaos without prison, but not outraged by it.

That gives a general direction of "ethical progress" which is (up to a point) universal to all humans. But it's just a hint toward a real theory of moral progress (I haven't yet read the following posts, nor spent months formalizing it).

I like the criterion above. If people on one side argue that x is "necessary" and people on the other argue that x is "horrible", then it should be clear that x is horrible and something should be done about it (make x less horrible, find an alternative to x, or remove whatever makes x necessary).

This applies well to things like medical testing on animals, prisons, and death.

...the fact we manage to take more-or-less reasonable decisions using democracy (at least as reasonable at what the kings used to do), that the society doesn't collapse into crime and chaos when we suppress death penalty, serfdom, torture, ...

Recently, it has been quite fashionable on LW to profoundly disagree with all of those points. At the very least, someone's going to say that, when an attempt to suppress slavery was made, American society did for a while collapse into chaos unheard of before or since.

Speaking quite frankly (and in purple prose), though, there are few other things in the realm of the mind I'd desire right now than to be able to trust securely in all those points, and rest well, knowing that the job of SIAI and partly LW is simply to fight our way upwards before the sky comes crashing down - not also to run as fast as possible from the eldritch monster born of our own shadow!

Temporary chaos frequently happens when changes are made - but that's not what I was referring to. The issue of "will chaos occur when moving from slavery to no slavery" is different from the issue of "would a society without slavery be more chaotic". The former can justify inertia (keeping things as they are), but is not in itself an argument for or against slavery (or anything else).

And the fact that, despite that inertia, we still see things like torture and slavery mostly disappearing is a good indicator of moral progress.

Eh, I'm just not the go-to guy here. You should try talking to people like:

  • sam0345 (low-level combat tutorial)

  • TGGP (online co-op mode)

  • Aurini (MEDIUM) - and he might end up just opening the gate and letting you pass if you look like enough of a bro - has recently been witnessed in a brawl against a pick-up raid. Pick-up, get it? Get it? Eh heh!

  • Konkivistador (HARD)

  • steven0461 (BONUS CONTENT; need the Meta^2-Contrarian Edition DLC to unlock - BUY NOW for only LW$ 5499)

  • Vladimir_M (VERY HARD)

  • ??? (IMPOSSIBLE)

MORAL KOMBAT!

Edit: Lyrics need to be included obviously:

Test your mind, Test your mind,
Test your mind, Test your mind. 
MORAL KOMBAT!
FIGHT!
MORAL KOMBAT!
EXCELLENT!
Konkvistador, TGGP, Roko, Will_Newsome,
steven, cousin_it, Vladimir.
MORAL KOMBAT!
FIGHT!
MORAL KOMBAT!
Konkvistador, TGGP, Roko, Will_Newsome,
steven, cousin_it, Vladimir.
MORAL KOMBAT!
(Modus ponens!)
(Ceteris paribus)
(Aumann's agreement)
(Excellent!)
FIGHT!
Test your mind, Test your mind.
Konkvistador, TGGP, Roko, Will_Newsome,
steven, cousin_it, Vladimir.
MORAL KOMBAT!
FIGHT!
MORAL KOMBAT! [4x]

Since I'm apparently a stepping stone on the path to the Final Boss of the contrarian Internet, I wonder what my fatality is.

So, we have an agreement that outright flattering each other in the future shall be reciprocated with positive karma loops, as long as it's done in a sufficiently nerdy manner? C'mon, bro, just say yeah!

Past behaviour is an excellent predictor of future behaviour. Nerdy flattery and humour seem to be consistently rewarded on LessWrong.

:reads the edit:

Now you're just adding insult to injury, except that "injury" is "awesomeness" and "insult" is "nostalgia".

We are glad to announce an upcoming full-fledged expansion pack: 'The Twisting Way'

Engage the enigmatic genius Will_Newsome and rescue Lady AspiringKnitter from his unspeakable experiments; survive the shamanistic Rites of Hanson (not for the sake of survival!); endure stigma and uproar as you optimize your threads for the gaze of the feared Outsiders; boldly embark upon the Doomed Quest for Mencius' Magnificent Monocle, and more!

I find very few people around me who justify prison for the sake of it, but only because we need it to prevent/deter crime.

Seems like there are more such people than we'd expect. (Are you in Europe too?)

A possibility that I have mentioned here before has to do with positive feedback loops in an isolated society between economic growth and luxury spending on moral coherence. On this account, people always had qualms about slavery but considered it too impractical to seriously contemplate abandoning it. When feeling rich they abandoned it anyway, either as conspicuous consumption or as luxury spending on simplicity. Having done so, it turned out, made them richer, affirming this sort of apparent luxury spending or conspicuous consumption as actually being moral progress. Viewing them as an ecosystem of godshatter, increased power destabilized the balance of power between dissonant utility functions, allowing certain elements to largely erase others while still further increasing in power. One problem with this story is that it passes some of the buck to economic growth, though only some, as access to resources and population are surely part of the answer there. Another problem is that it doesn't add up to normality, but proposed moralities should only add up to normality when approximated crudely, not when approximated precisely.

positive feedback loops in an isolated society between economic growth and luxury spending on moral coherence

Or as Saul Alinsky put it “[C]oncern with ethics increases with the number of means available and vice versa.” It is easy to be ethical when you have little at stake.

Why on earth not? Aristotle thought some people were naturally suited for slavery. We now know that's not true.

No, we don't. We know no such thing.

Morally, we do know such a thing.

This sounds like an is-ought confusion. "Some people would be happier as slaves." is an is-statement -- it's either right or wrong (true or false) as a matter of fact, regardless of morality. "Slavery oughtn't exist" is a moral statement -- it only has a truth value according to a particular ethical/moral set.

I don't know whether "naturally suited for slavery" is supposed to be a "is" or an "ought" statement (descriptive or prescriptive). If it's an is-statement then our moral sense is irrelevant to whether the statement is true or false as a matter of fact.

"Some people would be happier as slaves." is an is-statement -- it's either right or wrong (true or false) as a matter of fact, regardless of morality.

I agree generally with your point, but this sentence assumes "happier" is an objective quality - which may not be true. If we were to taboo "happier" in that sentence, the new phrasing might include a moral claim. Consider:

"Everyone is happier if jocks can haze nerds without complaint" --> "Jocks show virtue by hazing nerds, and nerds show virtue by accepting hazing without complaint."

The second sentence contains a number of explicit and implicit moral claims. Those moral claims are also present in the first sentence, just concealed by the applause light word "happy."

Wiseman, if everyone were blissed-out by direct stimulation of their pleasure center all the time, would that by definition be moral progress?

Marshall, how is your "usefulness" not isomorphic to the word "good"? Useful for what?

Lowly Undergrad, early societies didn't have this idea of reducing violent death to zero - through what mechanism did they acquire this belief, given that they didn't start out with the idea that it was "moral progress"?

Robin Brandt, is whatever increasing technology does to a society, moral progress by definition, or does increasing technology only tend to cause moral progress?

Tim, if we all cooperated with each other all the time, would that by definition be moral progress?

Paul, do you think that your own morality is optimum or can you conceive of someone more moral than yourself - not just a being who better adheres to your current ideals, but a being with better ideals than you?

Not everyone has the same intuition about the wrongness of slavery, though, and "they're not us and they're more use to us this way" is justification enough for some. People have divergent intuitions about empirical and logical propositions, too, but in those cases there's an obvious (if not always practical) way to settle things: go and look, or find a (dis)proof. You can trivially demonstrate that 1+1≠3, but it's hard to see how you could reject with nearly as much rigor even something as ridiculous as "it's good to enslave people born on a Tuesday." You could put the latter into a calculator if you defined "good" in minute detail, and if you copied the definition out of your own skull you could even get an output you're justified in caring about, but try to claim fully mind-independent truth and you fall right into the Open Question. It still adds up to normality, though, to the extent that definitions of "good" converge under increasing knowledge, reflection, and discussion - probably a large extent, excluding a few sociopaths and other oddballs.

But maybe, as Eliezer's been pointing out, I'm wrong to think 1+1=2 is mind-independently demonstrable either - really, in both cases, a mind needs to be running certain dynamics to appreciate the argument; it's just harder to imagine a mind (that deserves the term) with different arithmetic dynamics than different moral dynamics.

(BTW, like Ben, I think novel interpretations of the Bible re: slavery were mostly rationalizations of already-changing fundamental values.)

There is a tendency for older generation to feel nostalgic for the time of their youth and for the younger generation to strive for changing the status quo. So I wonder whether the modern perception of moral progress (as opposed to perennial complaints of moral degradation popular among our ancestors) comes from the youth being more economically and politically empowered than ever before, which allows it to dominate public discourse.

Yvain: I think you're equivocating between two definitions of utility, "happiness" and "the quantity that's maximized". This dual meaning is really unfortunate.

Sebastian: moral progress might be random except that people (very plausibly) try not to return to a rejected past state. This would be directionless (or move in an arbitrary direction) but produce very few reversals.

poke: pursuing knowledge could be painful and depressing but still intuitively moral.

I see a bit of what looks like terminal/instrumental confusion in this thread. I don't think discovering better instrumental values toward the same terminal values you always had counts as moral progress, at least if those terminal values are consciously, explicitly held.

I think a lot of people are confusing a) improved ability to act morally, and b) improved moral wisdom.

Remember, things like "having fewer deaths and conflicts" do not in themselves mean moral progress. It's only moral progress if people in general change their evaluation of the merit of, e.g., fewer deaths and conflicts.

So it really is a difficult question Eliezer is asking: can you imagine how you would have/achieve greater moral wisdom in the future, as evaluated with your present mental faculties?

My best answer is yes, in that I can imagine being better able to discern inherent conflict between certain moral principles. Haphazard example: today, I might believe that a) assaulting people out-of-the-blue is bad, and b) credibly demonstrating the ability to fend off assaulters is good. In the future, I might notice that these come into conflict, that if people value both of these, some people will inevitably have a utility function that encourages them to do a), and this is unavoidable. So then I find out more precisely how much of one comes at how much cost of the other, and that pursuing certain combinations of them is impossible.

I call that moral progress. Am I right, assuming the premises?

My view is similar to Robin Brandt's, but I would say that technological progress has caused the appearance of moral progress, because we responded to past technological progress by changing our moral perceptions in roughly the same direction. But different kinds of future technological progress may cause further changes in orthogonal or even opposite directions. It's easy to imagine for example that slavery may make a comeback if a perfect mind control technology was invented.

One possibility: we can see a connection between morality and certain empirical facts -- for example, if we believe that more moral societies will be more stable, we might think that we can see moral progress in the form of changes that are brought about by previous morally related instability. That's not very clear -- but a much clearer and more sophisticated variant on that idea can perhaps be seen in an old paper by Joshua Cohen, "The Arc of the Moral Universe" (google scholar will get it, and definitely read it, because a) it's brilliant, and b) I'm not representing it very well).

Or we might think that some of our morally relevant behaviors are consistently dependent on empirical facts, in which we might progress in finding out. For example, we might have always thought that beings who are as intelligent as we are and have as complex social and emotional lives as do we deserve to be treated as equals. Suppose we think the above at year 1 and year 500, but at year 500, we discover that some group of entities X (which could include fellow humans, as with the slaves, or other species) is as intelligent, etc., and act accordingly. Then it seems like we've made clearly directional moral progress -- we've learned to more accurately make the empirical judgments about which our unchanged moral judgment depends.

The discussion in the comments has been interesting, but I believe I have a simple answer to Eliezer's question (please tell me if I am mistaken). Consider a society that has a moral ideal, say valuing bodily autonomy, but doesn't extend that right to women. They often kill women for their organs to give to men and children, due to an old tribal culture now mainly forgotten; unfortunately, certain rituals and dogmas still continue. One day, a leading public intellectual points this out on TV, and they change their actions to fit their true moral beliefs, and stop acting on non-moral ones. Wouldn't this be an example of moral progress?

Consider a different society that has a moral idea like valuing the bodily autonomy of non-women, but for various historical reasons this has historically been expressed as "valuing bodily autonomy" without specifying gender. Their behavior has been identical to the example you give, until one day someone points this out, and they start expressing it as "valuing bodily autonomy for non-women" instead, while continuing to do everything else the way they used to.

Is this also an example of moral progress?

If not, why not?

I see. I've said that if people become more aligned with their meta-morals in practice, then it is progress... And you've offered that their meta-morals might seem or be bad anyway, so it wouldn't seem to be progress to us. I suppose, to be able to show my progress to be directional and not arbitrary, I'd have to present a perfect, objective basis for morality. I won't be doing that in this post (sorry) so my point is redundant. Thanks for clearing that up with me.

Inspired by this article http://www.thecherrycreeknews.com/news-mainmenu-2/1-latest/5517-higher-intelligence-associated-with-liberalism-atheism.html, I think one way of doing it might be to show directionality in terms of evolutionary novelty. That is, look at what parts of our evolutionary psychology we have rationally worked against as a culture, and why we came to those more intellectual conclusions. That way, the measure of our progress could be how we learn to fix the mistakes of stupid natural selection.

However, that sounds a lot to me like reversed stupidity, which I now know to be a false means of winning, but I do think it at least explains our perception of moral progress, if not progress as an absolute. If we somehow discover that when cultures step away from their evolutionary psychology it is always for the sake of positive rational morality, then the concept might hold more weight in terms of a holistic moral progress.

I see moral progress as 1) increased empathy, defined as increasingly satisfying, increasingly accurate mental models of sentient beings, including oneself, and 2) increased ability to predict the future, to map out the potential chains of causality for one's actions.

As I said previously, I think "moral progress" is the heroic story we tell of social change, and I find it unlikely that these changes are really caused by moral deliberation. I'm not a cultural relativist but I think we need to be more attuned to the fact that people inside a culture are less harmed by its practices than outsiders feel they would be in that culture. You can't simply imagine how you would feel as, say, a woman in Islam. Baselines change, expectations change, and we need to keep track of these things.

As for democracy, I think there are many cases where democracy is an impediment to economic progress, and so causes standards of living to be lower. I doubt Singapore would have been better off had it been more democratic and I suspect it would have been much worse off (nowadays it probably wouldn't make a lot of difference either way). Likewise, I think Japan, Taiwan and South Korea probably benefited from relative authoritarianism during their respective periods of industrialization.

My own perspective on electoral democracy is that it's essentially symbolic and the only real benefit for developing countries is legitimacy in the eyes of the West; it's rather like a modern form of Christianization. Westerners tend to use "democracy" as a catch-all term for every good they perceive in their society and imagine having an election will somehow solve a country's problems. I think we'd be better off talking about openness, responsiveness, lawfulness and how to achieve institutional benevolence rather than elections and representation.

Now, you could argue that because I value things like economic progress, I have a moral system. I don't think it's that clear cut though. One of the distinctive features of moral philosophy is that it's tested against people's supposed moral intuitions. I value technological progress and growth in knowledge but, importantly, I would still value them if they were intuitively anti-moral. If technological progress and growth in knowledge were net harms for us as human beings I would still want to maximize them. I think many people here would agree (although perhaps they've never thought about it): if pursuing knowledge was somehow painful and depressing, I'd still want to do it, and I'd still encourage the whole of society to be ordered towards that goal.

This has been mentioned many times, by Peter Singer, for instance, but one way towards moral progress is by expanding the domain over which we feel morally obligated. While we may have evolved to feel morally responsible in our dealings with close relatives and tribesmen, it is harder to hold ourselves to the same standards when dealing with whomever we consider to be outside this group. Maybe we can attribute some of our moral progress to a widening of who we consider to be part of our tribe, which would be driven by technology forcing us to live, interact, and identify with larger and more diverse groups of people. Clearly this doesn't solve all the problems of moral progress, but I think this idea could chip away at parts of the problem.

Re: if we all cooperated with each other all the time, would that by definition be moral progress?

If we all cooperated with each other all the time, that would be moral progress.

Moral progress simply means a systematic improvement of morals over time - so widespread cooperation would indeed represent an improvement over today's fighting and deceit.

If you take the list of things that were moral yesterday and the list that are moral today, and look for pairs between the lists that are kind of the same idea, but just in different quantity (e.g. like and love) then you could step back and see if there is an overall direction.

The key idea is to recognize when two things with different names are really different amounts of some higher more abstract idea.

I'm by no means sure that the idea of moral progress can be salvaged. But it might be interesting to try and make a case that we have fewer circular preferences now than we used to.

" the future will be even less different from the present than the present."

instead of

" the future will be even less different from the present than the present from the past."

?

I don't think anyone can really argue that a large-scale decrease in global violence and violent death is not a sign of moral progress. So I must point to this Steven Pinker talk where he lays out some statistics showing the gradual decline of violence and violent death throughout our history: http://www.ted.com/index.php/talks/steven_pinker_on_the_myth_of_violence.html

This has actually been trenchantly criticized on statistical grounds. https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature

The basic idea is that if the Cuban Missile Crisis (or numerous other similar events) had ended badly, the conclusion would have been reversed. And according to people who were there, such as President John F. Kennedy, it very well could have ended badly.

I was going to say something about moral progress being changes in society that result in a global increase in happiness, but I ran into some problems pretty fast following that thought. Hell, if we could poll every single living being from the 11th century and the 21st century and ask them to rate their happiness from 1-10, why do I have a feeling we'd end up with the same average in both cases?

If you gave me an extensional definition of moral progress by listing free speech, the end of slavery, and democracy, and then asked me for an intensional definition, I'd say moral progress is a global and local increase in cooperation between humans. That does not necessarily mean an increase in global happiness.