Happiness and Goodness as Universal Terminal Virtues

Hi, I'm new to LessWrong. I stumbled onto this site a month ago, and ever since, I've been devouring Rationality: AI to Zombies faster than I used to go through my favorite fantasy novels. I've spent some time on the website too, and I'm pretty intimidated about posting, since you guys all seem so smart and knowledgeable, but here goes... This is probably the first intellectual idea I've had in my life, so if you want to tear it to shreds, you are more than welcome to, but please be gentle with my feelings. :)
Edit: Thanks to many helpful comments, I've cleaned up the original post quite a bit and changed the title to reflect this. 

Ends-in-themselves

As humans, we seem to share the same terminal values, or terminal virtues. We want to do things that make ourselves happy, and we want to do things that make others happy. We want to 'become happy' and 'become good.' 

Because various determinants--including, for instance, personal fulfillment--can affect an individual's happiness, there is significant overlap between these ultimate motivators. Doing good for others usually brings us happiness. For example, donating to charity makes people feel warm and fuzzy. Some might recognize this overlap and conclude that all humans are entirely selfish, that even those who appear altruistic are subconsciously acting purely out of self-interest. Yet many of us choose to donate to charities that we believe do the most good per dollar, rather than handing out money through personal-happiness-optimizing random acts of kindness. Seemingly rational human beings sometimes make the conscious decision to pursue their own happiness inefficiently for the sake of others. Consider Eliezer's example in Terminal Values and Instrumental Values of a mother who sacrifices her life for her son.

Why would people do stuff that they know won't efficiently increase their happiness? Before I de-converted from Christianity and started to learn what evolution and natural selection actually were, before I realized that altruistic tendencies are partially genetic, it used to utterly mystify me that atheists would sometimes act so virtuously. I did believe that God gave them a conscience, but I kinda thought that surely someone rational enough to become an atheist would be rational enough to realize that his conscience didn't always lead him to his optimal mind-state, and work to overcome it. Personally, I used to joke with my friends that Christianity was the only thing stopping me from pursuing my true dream job of becoming a thief (strategy + challenge + adrenaline + variety = what more could I ask for?) Then, when I de-converted, it hit me: Hey, you know, Ellen, you really *could* become a thief now! What fun you could have! But I flinched from the thought. Why didn't I want to overcome my conscience, become a thief, and live a fun-filled life? Well, this isn't as baffling to me now, simply because I've changed where I draw the boundary. I've come to classify goodness as an end-in-itself, just like I'd always done with happiness.

Becoming good

I first read about virtue ethics in On Terminal Goals and Virtue Ethics. As I read, I couldn't help but want to be a virtue ethicist and a consequentialist. Most virtues just seemed like instrumental values.

The post's author mentioned Divergent protagonist Tris as an example of virtue ethics:

Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

I suspect that goodness is, perhaps subconsciously, a terminal virtue for the vast majority of virtue ethicists. I appreciate Oscar Wilde's writing in De Profundis:

Now I find hidden somewhere away in my nature something that tells me that nothing in the whole world is meaningless, and suffering least of all.

It is the last thing left in me, and the best: the ultimate discovery at which I have arrived, the starting-point for a fresh development. It has come to me right out of myself, so I know that it has come at the proper time. It could not have come before, nor later. Had anyone told me of it, I would have rejected it. Had it been brought to me, I would have refused it. As I found it, I want to keep it. I must do so...

Of all things it is the strangest.

Wilde's thoughts on humility translate quite nicely to an innate desire for goodness.

When presented with a conflict between an elected virtue, such as loyalty or truth, and the underlying desire to be good, most virtue ethicists would likely abandon the elected virtue. With truth, consider the classic example of lying to Nazis to save Jews. Generally speaking, it is wrong to conceal the truth, but in special cases, most people would agree that lying is actually less wrong than truth-telling. I'm not certain, but my hunch is that most professing virtue ethicists would find that in extreme thought experiments, their terminal virtue of goodness would eventually trump their other virtues, too.

Becoming happy

However, there's one exception. One desire can sometimes trump even the desire for goodness, and that's the desire for personal happiness. 

We usually want what makes us happy. I want what makes me happy. Spending time with family makes me happy. Playing board games makes me happy. Going hiking makes me happy. Winning races makes me happy. Being open-minded makes me happy. Hearing praise makes me happy. Learning new things makes me happy. Thinking strategically makes me happy. Playing touch football with friends makes me happy. Sharing ideas makes me happy. Independence makes me happy. Adventure makes me happy. Even divulging personal information makes me happy.

Fun, accomplishment, positive self-image, sense of security, and others' approval: all of these are examples of happiness contributors, or things that lead me to my own, personal optimal mind-state. Every time I engage in one of the happiness increasers above, I'm fulfilling an instrumental value. I'm doing the same thing when I reject activities I dislike or work to reverse personality traits that I think decrease my overall happiness.

Tris didn’t join the Dauntless cast because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be.

Tris was, in other words, pursuing happiness by trying to change an aspect of her personality she disliked.

Guessing at subconscious motivation

By now, you might be wondering, "But what about the virtue ethicist who is religious? Wouldn't she be ultimately motivated by something other than happiness and goodness?" 

Well, in the case of Christianity, most people probably just want to 'become Christ-like' which, for them, overlaps quite conveniently with personal satisfaction and helping others. Happiness and goodness might be intuitively driving them to choose this instrumental goal, and for them, conflict between the two never seems to arise. 

Let's consider 'become obedient to God's will' from a modern-day Christian perspective. 1 Timothy 2:4 says, "[God our Savior] wants all men to be saved and to come to a knowledge of the truth." Mark 12:31 says, "Love your neighbor as yourself." Well, I love myself enough that I want to do everything in my power to avoid eternal punishment; therefore, I should love my neighbor enough to do everything in my power to stop him from going to hell, too.

So anytime a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn't exist. As a Christian, I totally realized this, and often tried to convince myself and others that we were acting wrongly by not being more devout. I couldn't shake the notion that spending time having fun instead of praying or sharing the gospel was somehow wrong because it went against God's will of wanting all men to be saved, and I believed God's will, by definition, was right. (Oops.) But I still acted in accordance with my personal happiness on many occasions. I said God's will was the only end-in-itself, but I didn't act like it. I didn't feel like it. The innate desire to pursue personal happiness is an extremely strong motivating force, so strong that Christians really don't like to label it as sin. Imagine how many deconversions we would see if it were suddenly sinful to play football, watch movies with your family, or splurge on tasty restaurant meals. Yet the Bible often mentions giving up material wealth entirely, and in Luke 9:23 Jesus says, "Whoever wants to be my disciple must deny themselves and take up their cross daily and follow me."

Let's further consider those who believe God's will is good, by definition. Such Christians tend to believe "God wants what's best for us, even when we don't understand it." Unless they have exceptionally strong tendencies to analyze opportunity costs, their understanding of God's will and their intuitive idea of what's best for humanity rarely conflict. But let's imagine it does. Let's say someone strongly believes in God, and is led to believe that God wants him to sacrifice his child. This action would certainly go against his terminal value of goodness and may cause cognitive dissonance. But he could still do it, subconsciously satisfying his (latent) terminal value of personal happiness. What on earth does personal happiness have to do with sacrificing a child? Well, the believer takes comfort in his belief in God and his hope of heaven (the child gets a shortcut there). He takes comfort in his religious community. To not sacrifice the child would be to deny God and lose that immense source of comfort.

These thoughts obviously don't happen on a conscious level, but maybe people have personal-happiness-optimizing intuitions. Of course, I have near-zero scientific knowledge, no clue what really goes on in the subconscious, and I'm just guessing at all this.

Individual variance

Again, happiness has a huge overlap with goodness. Goodness often, but not always, leads to personal happiness. A lot of seemingly random stuff leads to personal happiness, actually. Whatever that stuff is, it largely accounts for the individual variance in which virtues are pursued. It's probably closely tied to the four Keirsey Temperaments of security-seeking, sensation-seeking, knowledge-seeking, and identity-seeking types. (Unsurprisingly, most people here at LW reported knowledge-seeking personality types.) I'm a sensation-seeker. An identity-seeker could find his identity in the religious community and in being a 'child of God'. A security-seeker could find security in his belief in heaven. An identity-seeking rationalist might be the type most likely to aspire to 'become completely truthful' even if she somehow knew with complete certainty that telling the truth, in a certain situation, would lead to a bad outcome for humanity.

Perhaps the general tendency among professing virtue ethicists is to pursue happiness and goodness relatively intuitively, while professing consequentialists pursue the same values more analytically.

Also worth noting is the individual variance in someone's "preference ratio" of happiness relative to goodness. Among professing consequentialists, we might find sociopaths and extreme altruists at opposite ends of a happiness-goodness continuum, with most of us falling somewhere in between. To position virtue ethicists on such a continuum would be significantly more difficult, requiring further speculation about subconscious motivation.

Real-life convergence of moral views

I immediately identified with consequentialism when I first read about it. Then I read about virtue ethics, and I immediately identified with that, too. I naturally analyze my actions with my goals in mind. But I also often find myself idolizing a certain trait in others, such as environmental consciousness, and then pursuing that trait on my own. For example:

I've had friends who care a lot about the environment. I think it's cool that they do. So even before hearing about virtue ethics, I wanted to 'become someone who cares about the environment'. Subconsciously, I must have suspected that this would help me achieve my terminal goals of happiness and goodness.

If caring about the environment is my instrumental goal, I can feel good about myself when I instinctively pick up trash, conserve energy, use a reusable water bottle; i.e. do things environmentally conscious people do. It's quick, it's efficient, and having labeled 'caring about the environment' as a personal virtue, I'm spared from analyzing every last decision. Being environmentally conscious is a valuable habit.

Yet I can still do opportunity cost analyses with my chosen virtue. For example, I could stop showering to help conserve California's water. Or, I could apparently have the same effect by eating six fewer hamburgers in a year. More goodness would result if I stopped eating meat and limited my showering, but doing so would interfere with my personal happiness. I naturally seek to balance my terminal goals of goodness and happiness. Personally, I prefer showering to eating hamburgers, so I cut significantly back on my meat consumption without worrying too much about my showering habits. This practical convergence of virtue ethics and consequentialism satisfies my desires for happiness and goodness harmoniously.
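For what it's worth, here is a back-of-envelope version of that shower-vs-hamburger comparison. Every per-unit figure below is a rough assumption I'm supplying (a ~17-gallon daily shower, ~660 gallons of water per beef patty), not a number from any cited source; the point is only that the two options land within the same order of magnitude.

```python
# Rough order-of-magnitude check of the shower-vs-hamburger trade-off.
# All per-unit figures are illustrative assumptions, not measured data.
GALLONS_PER_SHOWER = 17        # one ~8-minute shower, standard shower head
SHOWERS_PER_YEAR = 365         # one shower per day
GALLONS_PER_BURGER = 660       # rough water footprint of one beef patty

yearly_shower_water = GALLONS_PER_SHOWER * SHOWERS_PER_YEAR
six_burgers_water = 6 * GALLONS_PER_BURGER

print(yearly_shower_water)  # 6205
print(six_burgers_water)    # 3960
```

Under these assumptions, skipping six burgers saves water on the same scale as a year of showers, which is the spirit of the comparison even if the exact figures differ.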


To summarize:

Personal happiness refers to an individual's optimal mind-state. Pleasure, pain, and personal satisfaction are examples of happiness level determinants. Goodness refers to promoting happiness in others.

Terminal values are ends-in-themselves. The only true terminal values, or virtues, seem to be happiness and goodness. Think of them as psychological motivators, consciously or subconsciously driving us to make the decisions we do. (Physical motivators, like addiction or inertia, can also affect decisions.)

Preferences are what we tend to choose. These can be based on psychological or physical motivators.

Instrumental values are the sub-goals or sub-virtues that we (consciously or subconsciously) believe will best fulfill our terminal values of happiness and goodness. We seem to choose them arbitrarily.

Of course, we're not always aware of what actually leads to optimal mind-states in ourselves and others. Yet as we rationally pursue our goals, we may sometimes intuit like virtue ethicists and other times analyze like consequentialists. Both moral views are useful.

Practical value

So does this idea have any potential practical value? 

It took some friendly prodding, but I was finally brought to realize that my purpose in writing this article was not to argue for the existence of goodness, or for the theoretical equality of consequentialism and virtue ethics, or anything of that sort. The real point I'm making here is that however we categorize personal happiness, goodness belongs in the same category, because in practice, all other goals seem to stem from one or both of these concepts. Clarity of expression is an instrumental value, so I'm just saying that perhaps we should consider redrawing our boundaries a bit:

Figuring where to cut reality in order to carve along the joints—this is the problem worthy of a rationalist.  It is what people should be trying to do, when they set out in search of the floating essence of a word.

P.S. If anyone is interested in reading a really, really long conversation I had with adamzerner, you can trace the development of this idea. Language issues were overcome, biases were admitted, new facts were learned, minds were changed, and discussion bounced from ambition, to serial killers, to arrogance, to religion, to the subconscious, to agenthood, to skepticism about the happiness set-point theory, all interconnected somehow. In short, it was the first time I've had a conversation with a fellow "rationalist" and it was one of the coolest experiences I've ever had.

Comments

Welcome to LessWrong, and thanks for posting!

Regarding the evolution of emotions, consider this:

Imagine a group of life forms of the same species who compete for resources. Let's say that either they are fairly even in power, in which case it is superior for them to cooperate with each other and divide resources fairly to avoid wasting energy fighting; or some (alphas) are superior in power, in which case the game-theoretically optimal outcome is for the more dominant to take a larger share of resources, but still allow the others to have some. (This is superior for them to fighting to the death to try to get everything.)

If life forms cannot communicate with each other, then they face a prisoner's dilemma. They would like to cooperate, but without the ability to signal to each other, they cannot be sure the other will not defect. Thus they end up defecting against each other.
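The defection trap can be made concrete with a tiny payoff computation; the payoff numbers below are illustrative assumptions, not anything from the comment itself.

```python
# One-shot prisoner's dilemma with illustrative payoffs:
# PAYOFFS[(mine, yours)] = (my_payoff, your_payoff).
PAYOFFS = {
    ('cooperate', 'cooperate'): (3, 3),
    ('cooperate', 'defect'):    (0, 5),
    ('defect',    'cooperate'): (5, 0),
    ('defect',    'defect'):    (1, 1),
}

def best_response(your_action):
    """My payoff-maximizing action against a fixed action of yours."""
    return max(['cooperate', 'defect'],
               key=lambda a: PAYOFFS[(a, your_action)][0])

# Defection is dominant: it beats cooperation no matter what you do,
# so without a way to signal or commit, both players end up at (1, 1)
# instead of the mutually better (3, 3).
print(best_response('cooperate'), best_response('defect'))  # defect defect
```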

We can thus see that if the life forms evolved methods of signalling to each other, it would improve their chances of survival. Thus, complex life forms develop ways of signalling to each other. We can see many, many examples of this throughout a broad range of life.

The life forms also need to be able to mentally model each other now to predict each other's actions. Thus they develop empathy (the ability to model each other), and emotions, which are related to signalling. If each of the members of the species has similar emotions (that is, they react in similar ways to a given situation), and they have developed ways of expressing these emotions to each other, then it greatly improves the life forms' ability to model each other correctly. For example, if one life form tries to gain an unfair distribution of resources (defecting in the prisoner's dilemma), the other will display the emotional response of anger. They are signalling 'you cannot defect against me in such an unfair way. Because you are attempting to do so, I will fight you'.

Because the emotional responses occur automatically, they act similar to a precommitment. Essentially, the life form having the emotional response has been preprogrammed to have this response to the situation. This is a precommitment to a course of action, which can help the life form to achieve a better game-theoretic result.

(For example, if we are playing 'chicken', driving our cars toward each other on a road, the best strategy for me to win the game is to visibly precommit to not swerving no matter what. Thus, the optimal strategy would be to remove my steering wheel and throw it out of the car in a way that you can see. Since you now know that I CANNOT turn, you must turn to avoid crashing, and I win the game. This shows how a strong precommitment can be an advantage).
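The chicken example can be sketched the same way. The payoff numbers here are illustrative assumptions; the point is that a visible precommitment deletes one of my options, which changes your best response.

```python
# Payoffs for the game of chicken: PAYOFFS[(mine, yours)] = (my, your).
# The numbers are illustrative assumptions; the crash is just very bad.
PAYOFFS = {
    ('swerve',   'swerve'):   (0, 0),
    ('swerve',   'straight'): (-1, 1),
    ('straight', 'swerve'):   (1, -1),
    ('straight', 'straight'): (-10, -10),  # head-on crash
}

def best_response(my_options, your_action):
    """My payoff-maximizing action, given your action is fixed."""
    return max(my_options, key=lambda a: PAYOFFS[(a, your_action)][0])

# Throwing away my steering wheel removes 'swerve' from my options, so
# I am visibly locked into 'straight'. The payoffs are symmetric, so we
# can reuse best_response with roles swapped to get your reply to that:
your_action = best_response(['swerve', 'straight'], 'straight')
print(your_action)  # swerve -- the precommitted player wins
```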

If we think of emotions as precommitments in this way, we can see that they can give us an advantage in our prisoner's dilemma problem. The opponent then knows that they cannot defect against us too much, or else we will become angry and fight; even though this gives a worse outcome for us as well, it is an emotional response, and thus we will do it automatically.

We can see that emotions are thus an aid to fitness, and life forms that evolve them will have a genetic advantage.

Now imagine that the problems that the life forms need to signal about become much more complex. Rather than just signalling about food or mates, for example, they need to signal about group dynamics, concepts like loyalty to a tribe, their commitment to care for young, etc.

Under these circumstances, we can see the need to develop a much greater range of emotions to deal with various situations. We need to display more than just anger over an unequal distribution, or fear, or so on. We also need to have emotional responses of love, loyalty, and so on, and be able to demonstrate/signal these to each other.

This is my understanding of how emotions should evolve among complex life forms that are at least remotely close to humans. Perhaps for extremely different life forms, emotions would not evolve. Or perhaps in pretty much all complex forms of life which are communicating with each other, some form of emotions would evolve, I don't know. I don't see the need for a magical sky-father to have desired emotions to exist, who then guided reality to this point.

Also, I am not sure how the universe-creator would accomplish this. If the degrees of freedom they have are to create the underlying fundamental laws of physics, and the initial conditions of the universe, how would they be able to compute out what sets of laws/initial conditions would lead to the physics which would lead to the chemistry which would lead to the biology which would lead to the life forms which would have these emotions? And why would that be the goal of the universe-creator? In the absence of evidence which would distinguish this hypothesis from others, I don't see why we should privilege this hypothesis to such an extent, when it is pretty clear that the real reason for "believing" in it is actually "I want it to be true". (And also "A strong meme which many people believe in is threatening that I will be harmed if I do not 'believe' that this is true, and promising to reward me if I do 'believe' it is true".)

If anyone can correct my thoughts regarding evolution and emotions, or can point to some studies or scientific theories which either support or refute this post, I would love to read them!

Thanks for the welcome, and thanks for sharing your thoughts! I love game theory, and all your connections look good to me.

The life forms also need to be able to mentally model each other now to predict each other's actions. Thus they develop empathy (the ability to model each other), and emotions

This is something I still don't understand very well about evolution. They need it, and therefore they develop it? Is there anything that leads them to develop it, or is this related to the "evolving to extinction" chapter? I should go back and re-read the chapters on evolution. Is this something you can somewhat-briefly summarize, or would understanding require a lot more reading?

They need it, and therefore they develop it?

They need it, therefore if it randomly happens, they will keep the outcome.

Imagine a game where you are given random cards, and you choose which of them to keep and which of them to discard. If you need e.g. cards with high numbers, you can "develop" a high-numbered hand by keeping cards with high numbers and discarding cards with low numbers. Yet you have no control over which cards you receive. For example, if you have bad luck and always get only low numbers, you cannot "develop" a high-numbered hand. But only a few high numbers are enough to complete your goal.

Analogously, species receive random mutations. If the mutation makes the survival and reproduction of the animal more likely, the species will keep this gene. If the mutation makes the survival and reproduction of the animal less likely, the species will discard this gene. -- This is a huge simplification, of course. Also the whole process is probabilistic; you may receive a very lucky mutation and yet die for some completely unrelated reason, which means your species cannot keep that gene. Also, which genes provide advantage depends on the environment, and the environment is changing. Etc.

But the idea at the core is that evolution = random mutation + natural selection. Random mutation gives you new cards; natural selection decides which cards to keep and which ones to discard.

Without mutations, there would be no new cards in the game; each species would evolve to some final form and remain such forever. Without natural selection, all changes would be random, and since most mutations are harmful, the species would go extinct (although this is a contradiction in terms, because if you can die as a result of your genes, then you already have some form of natural selection: selecting for survival of those who do not have the lethal genes).

Sometimes there are many possible solutions for one problem. For example, if you need to pick fruit that is very high on the trees (or more precisely speaking: if there is fruit very high on the trees that no one is picking yet, so anyone who could do so would get a big advantage), here are things that could help: a longer neck, longer legs, ability to jump, ability to climb trees, ability to fly, maybe even ability to topple the trees. When you randomly get any card in this set (and it doesn't come with big disadvantages which would make it a net loss), you keep it. Some species went one way, other species went another way. -- A huge simplification again, since you cannot get an ability to fly in one step. You probably only get an ability to climb a little bit, or to jump a little bit. And in the next step, you can get the ability to climb a little bit more, or to jump a little bit more, or to somehow stay in the air a little bit longer after you have jumped. Every single step must provide an additional advantage.
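The "new cards plus keeping the good ones" loop can be written out as a tiny simulation. Everything here is a toy assumption: the genome is a bit string, and "fitness" is just the count of 1-bits standing in for reproductive success.

```python
import random

# A minimal sketch of evolution as random mutation + natural selection:
# each generation deals one "new card" (a random bit flip), and selection
# keeps the variant only if it is at least as fit as the current genome.
random.seed(0)

def mutate(genome):
    """Flip one random bit -- the 'new card' dealt to the player."""
    i = random.randrange(len(genome))
    return genome[:i] + [1 - genome[i]] + genome[i + 1:]

def evolve(generations=200, genome_len=20):
    genome = [0] * genome_len
    for _ in range(generations):
        candidate = mutate(genome)             # random mutation
        if sum(candidate) >= sum(genome):      # natural selection
            genome = candidate                 # keep the better hand
    return genome

final = evolve()
print(sum(final))  # fitness climbs far above the all-zeros start
```

Note that no step "aims" at the high-fitness genome; improvement falls out of keeping lucky flips and discarding unlucky ones, which is the whole point of the cards analogy.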

They need it, therefore if it randomly happens, they will keep the outcome.

Yes this. Of course it is not a given that something that would be a useful adaptation will develop randomly.

Great analogies with the hand of cards.

Ditto to Ander's comment - very nice summary and analogy, many thanks :)

This is something I still don't understand very well about evolution. They need it, and therefore they develop it? Is there anything that leads them to develop it, or is this related to the "evolving to extinction" chapter?

I'm not a biologist or anything, but I think I'm competent enough to answer this question.

You'll often see biology framed in teleological terms. That is, you'll often see it framed in terms that seem to indicate that natural selection is purposeful, like a person or God (agent) would be. I'll try to reframe this explanation in non-teleological terms. Animal husbandry/selective breeding/artificial selection is a good way to get an idea of how traits become more frequent in populations, and it just seems less mysterious because it happens on shorter timescales and humans set the selection criteria.

Imagine you have some wolves. You want to breed them such that eventually you'll have a generation of wolves that will do your bidding. Some of the wolves are nicer than others, and, rightly, wolves like these are the ones that you believe will be fit to do your bidding. You notice that nice wolves are more likely to have nice parents, and that nice wolves are more likely to give birth to nice pups. So, you prevent the mean wolves from reproducing by some means, and allow the nicest wolves to mate. You re-apply your selection criterion of Niceness to the next generation, allowing only the nicest wolves to mate. Before long, the only wolves you have around are nice wolves that you have since decided to call dogs.

In artificial selection, the selection criterion is Whatever Humans Want. In natural selection, the selection criterion is reproductive fitness; the environment 'decides' (see how easy it is to fall into teleology?) what reproduces. Non-teleologically, organisms with more adaptive traits are more likely to reproduce than organisms with less adaptive traits, and therefore the frequency of those traits in the population increases with time. Rather than thinking of natural selection as 'a thing that magically develops stuff,' imagine it as a process that selects the most adaptive traits among all possible traits. So, we're not so much making traits out of thin air as we are picking the most adaptive traits out of a great many possibilities. You didn't magically imbue the wolves with niceness; niceness was a possible trait among all possible traits, and you narrowed down the possibilities by only letting the nice wolves mate, until at one point you only had nice wolves left.
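The wolf-breeding story can be sketched as a toy simulation too. The trait scale, noise level, and top-half selection rule are illustrative assumptions, not biology; the sketch only shows trait frequency shifting when one end of the distribution reproduces more.

```python
import random

# Toy artificial selection: a heritable 'niceness' score in [0, 1], where
# only the nicest half of each generation reproduces. Offspring inherit a
# parent's score plus small random variation (a crude stand-in for genetics).
random.seed(1)

def next_generation(population):
    """Select the nicest half, then give each parent two offspring."""
    parents = sorted(population, reverse=True)[: len(population) // 2]
    return [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
            for p in parents for _ in range(2)]

population = [random.random() for _ in range(100)]   # initial wolves
start_avg = sum(population) / len(population)
for _ in range(30):
    population = next_generation(population)
end_avg = sum(population) / len(population)

print(round(start_avg, 2), round(end_avg, 2))  # average niceness rises
```

As in the comment above, nothing is magically imbued: the breeder only narrows down which of the already-possible trait values get passed on.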

Like the things that we discussed earlier in the welcome thread, teleological explanations of biology are artifacts of natural language and human psychology. Well before humans spoke of biology, we spoke of agents, and this is often reflected in our language use. As a result, it's also often much more concise to speak in teleological terms. Compare, "Life forms need to be able to mentally model each other, and thus develop modeling software," with my explanations above. Teleological explanations are also often a product of the aforementioned mental-modeling software itself, just as we have historically anthropomorphized natural phenomena as deities. Very importantly, accurate biological explanations that are framed teleologically can be reframed in non-teleological terms, as opposed to explanations that are fundamentally teleological.

Please feel free to ask questions if I didn't explain myself well. And to others, please correct me if I've made an error.

No, you explained that really well!! Everything is a lot less fuzzy now! Thank you :) I think with science, the first time I read it, it makes sense to me, but I had such a bad habit of filing scientific facts into the "things you learn in school but don't really need to remember for real life" category of my brain that now, even when I actually care about learning new information, it still takes multiple explanations, and sometimes one really good one like yours, before it really starts to sink in for me.

I'm pretty intimidated about posting, since you guys all seem so smart and knowledgeable, but here goes.

So I take the quality of this post along with this statement to indicate LW is not being friendly enough. I think we're currently losing more than we're gaining by being discouraging to newbies and lurkers. One suggestion I have that is probably unrealistic would be integrating the LW chatroom into the main site somehow so that people could get to it via a link from the front page. Chat feels like much less of a commitment than posting to even an open thread.

OP: good post. Don't worry about not being "up to snuff."

Thanks :)

I'll just say the intimidation factor for me stems more from my own utter lack of scientific knowledge than any unfriendliness on your guys' part. A visible chat room would definitely be a nice feature though!

ABSOLUTELY!

In general, I think the bar for posts in Discussion is way too high. After all, if you want to have a discussion, the only thing you really need is a question.

Edit: Obviously there's value in starting off a discussion with something more than just a question, and that all else equal, you'd prefer to start the discussion with more rather than less. But I still get the impression that the general atmosphere is to hold Discussion posts to way too high a standard.

One suggestion I have that is probably unrealistic would be integrating the LW chatroom

Agreed. I hope to do some work on the site in the next year or so (I'm not a good enough developer yet).

I think you are over-optimistic about human goodness. If you had to deconvert at all it is possible you are from a culture where Christian morals still go strong amongst atheists. (Comparison: I do not remember any family members with a personal belief, I do remember some great-grandmas who went to church because it was a social expectation, but I think they never really felt entitled to form a personal belief or deny it, it was more obedience in observation than faith.) These kinds of habits don't die too quickly, in fact they can take centuries - there is the hypothesis that the individualism of capitalism came from the Protestant focus on individual salvation.

My point is that this "goodness" is probably more culturally focused than genetic. While it may be possible that if people are really careful they can keep it on and on within an atheistic culture forever, it can break down pretty easily. Christianity tends to push a certain universalism - without that, if no effort is made to stop it, things probably regress to the tribal level. We cannot really maintain universalism without effort - it can be an atheist effort, but it must be a very conscious one.

As my life experience is the opposite - to me religious faith is really exotic - it seems to me that the difference is that religious folks, and perhaps post-religious atheists for a generation or two, keep moral things real - talk about right and wrong as if they were something as tangible as money. But in my experience it washes out after a few generations, and then only money, power, and status stay as real things, because they receive social feedback but right and wrong don't.

To put it differently - religions form communities. Atheists often just hang out. So there is a tendency to form looser, not so tightly knit social interactions. In a tight community, right and wrong get feedback; people judge each other. When interaction becomes looser, it is more like: why give a damn what some random guy thinks about whether what I did was right or wrong? But things like power, status, and money still work even in loose interactions, so people become more Machiavellian. At least that is my experience. I am not claiming this universalistic goodness cannot be maintained in atheistic cultures; I am just claiming it requires a special, conscious effort - it does not flow from human nature.

I also think you typically get this "goodness" culture if you participate in a culture who thinks of themselves as high-status winners, on an international comparison. Sharing is a way to show this status, this surplus. It requires a certain lack of bitterness. If you feel like your group was maltreated by greater powers, or invaders etc. you will probably stick to the group. Thus sharing still happens in less winner cultures but more on a personal level, family, friends, not with strangers.

This over-optimism about goodness is a typical feature of LW and the Rationality book, so I guess you will feel more at home here than I do. To me it comes across as mistaking the culture of the US for human nature.

I have not formulated this exactly, but I think there is such a thing as a "winner bias". It is very easy for someone from Silicon Valley to think the behaviors there are universal, precisely because being powerful and successful gives one the "privilege" to ignore everything you don't like to see. The most extreme form is a dictator thinking everybody agrees because nobody dares not to, but it also exists in a moderate form: the voices of more successful people and cultures are louder, hence come across as more popular, more universal, unless you know the alternatives first-hand. However, they are pretty surely not universal - if they were, the whole world would be as successful as SV. Well, or at least closer.

For example, a typical "winner bias" may be reading interviews with successful CEOs and thinking this is how all CEOs think. No - but the mediocre ones don't get interviewed. So it is more of an availability heuristic. The availability heuristic forms a winner bias, making first-worlders think everybody thinks like first-worlders, because voices that are not propped up by success are not heard across oceans. The other way around is not true, of course.

However I think this "winner bias" is more than just an availability heuristic. Probably it has also something to do with not having egos hurt from other groups having higher status.

I agree that "goodness" is a luxury; only people who do not have serious problems (at least for the given moment) can afford it, or those who have cultivated it in the past and now keep it by the power of habit. On the other hand, I believe that it is universal in the sense that if a culture can afford it, sooner or later some forms of "goodness" will appear in that culture. There will be a lot of inertia, so if a culture gains a lot of resources today, they will not change their behavior immediately. The culture may even have some destructive mechanisms that will cause it to waste the resources before the "goodness" has a chance to develop.

Sorry for not being more specific here, but I have a feeling that we are talking about something that exists only in a few lucky places, but keeps reappearing in different places at different times. It is not universal as in "everyone has it", but as in "everyone has a potential to have it under the right circumstances".

Not just surplus - there are empirical records of poor people in rich societies donating more to charity than rich people in rich societies. I think there is also something going on with the whole of society as such, not just people's personal feelings of surplus or not.

Good post. I think you are thinking about morality correctly, and I share your feelings about the sentiments behind virtue ethics and consequentialism not being particularly dissimilar or totally incompatible.

Um... so my high school Calculus teacher, who is lots, lots smarter than I am, thinks "emotion" is evidence of intelligent design. I thought this was just a "god of the gaps" thing, but maybe it is really the simplest explanation. I think most people here have ruled out an omnipotent, omni-benevolent God... but maybe life on earth as we know it is really just an alien-child's abandoned science fair project or something.

So: I'm noting your de-conversion story cites emotional reasons and thoughts about morality, rather than epistemic parsimony (the idea that simpler explanations are more likely). Personally, I see this as a "wrong reason" to de-convert.

You can't start out thinking about what is good, and arrive at what is factual. The universe doesn't care about what we think is good. Reality can be messed up and weird, there's no rule against it, so discovering that some of the logical implications of a belief are messed up and weird morally does not mean that belief is false.

And you do allude to this, in this quoted paragraph. You've basically said, "okay, maybe we don't have an omni-benevolent God, maybe reality is messed up and weird, an abandoned science project".

But while a realization that religious explanations are morally and emotionally unsatisfying, followed by a realization that reality is allowed to be morally and emotionally unsatisfying, might lead to superficially rational-ish beliefs by removing religious impediments to clear thinking, this is not the same thing as matter-of-factly concluding "religious explanations are false because they're simply too complicated to be true" without even touching upon emotions and morals.

maybe it is really the simplest explanation

The shortcut to refute that is "but why did the alien god want to do such a science project? That just brings us back to the same question. In positing another conscious being, we haven't added anything to our explanation of the preferences of conscious beings."

Here's the thing: Gods (and conscious beings in general) are very complicated. People seem simpler because we have mental shortcuts to model people; we evolved to deal with people, and all our lives people have been doing stuff. But when you step back outside of that, you see that "a conscious being did it" is actually a much more complicated explanation than a five-thousand-page essay detailing how a complex phenomenon arose from chemical soup. Conscious beings are really complicated, and you can't invoke them without introducing a whole bunch of complexity into your hypothesis concerning how they are structured. And that's the right reason to deconvert, or to change one's beliefs about reality. The simplicity of a logically consistent explanation should increase your estimate of the likelihood of it being correct. The moral niceness should have no bearing.

TL;DR - parsimony + the realization that humans suffer from a sort of "illusion" that consciousness is fundamentally simple = deconversion for epistemically correct reasons. Not ethical or hedonistic considerations.

And I think that coming more fully into that realization might help with the whole "I'm still not very satisfied with the idea of something being an end-in-itself" problem. It shouldn't help, because preferences are preferred regardless of how they are created, whether by god or biochemical soup, but I think it does anyway, because when you intuitively know you've hit the logical bottom as far as justifications go, the "dissatisfied" feeling goes away.

(I'm guessing you know all of this at some level and are mostly kidding with the alien science project hypothesis - perhaps I'm explaining something that doesn't really need to be explained for you at this point. I just thought, given that morality might still be implicitly tangled up in god within your psyche, maybe thinking about parsimony explicitly when thinking about this question will help.)

Thanks, and thanks for your thoughtful reply! I had to look up the definition of parsimony, but I think that idea helps a lot.

So: I'm noting your de-conversion story cites emotional reasons and thoughts about morality, rather than epistemic parsimony (the idea that simpler explanations are more likely). Personally, I see this as a "wrong reason" to de-convert.

My story was just a story, really. Not an argument. I probably did de-convert for emotional reasons, but also because I recognized that I only believed what I believed because I was raised believing it. Obviously, there was a chance that I just happened to be born into the one true religion, but I figured if that were the case, I would find my way back there as I examined the evidence. I wanted to start from a clean slate.

The shortcut to refute that is "but why did the alien god want to do such a science project?

Yeah, you're right. Although I didn't even consider "moral niceness" or the lack thereof, since it really wouldn't affect our lives in any way. But okay, I'm already convinced it's not the "simplest" answer... I will edit that part out :)

I'm already convinced it's not the "simplest" answer

I love how people on lesswrong change minds so readily

And I'm still not very satisfied with the idea of something being an end-in-itself:

So, this feeling of dis-satisfaction you are reporting is commonly termed "Existential Angst". "Existentialism" is the idea that morality has no basis in anything deeper than the individual. It's common after deconversions and is related to the whole "God is Dead" Nietzsche thing, and the question of how we can start rebuilding a framework for morality beyond mere hedonism from that point.

The reason I thought explicitly introducing parsimony into your thinking toolkit would help is that maybe once one internalizes that consciousness is complicated and not something which just happens, perhaps the "alien god" will get a little less alien. At some point, I think you'll stop feeling like your preferences and values were arbitrarily chosen by cold random unfeeling processes, and start feeling like the physics driving the "alien god" is really just a natural part of you, and that your values and preferences are a really integral part of you, and you start treating those things with an almost religious reverence. I think once you really understand all that goes into making you conscious and where "good" comes from, the whole thing stops being cold and unfeeling and starts being warm and satisfying.

I was never a Christian or theist in the first place so I didn't go through precisely the same experience (I was loosely Hindu and I suspect transitioning from pantheism to reductionism is much easier, especially given the focus on destroying the illusion of a coherent "I" in vedic religions)...But, sometime around entering high school my views on topics such as stem cells and abortion and animal treatment began to shift due to acquiring a reductionist view of consciousness. So I think understanding, at least in principle, how moral stuff and consciousness can be implemented by ordinary non-conscious matter and getting comfy with the idea that souls are constructed out of solid brain tissue that we can see and touch helps a lot when one grapples with moral questions and what they are rooted in.

I love how people on lesswrong change minds so readily

Hahahaha I completely interpreted this as sarcasm at first. I'm obviously still getting used to lesswrongers myself :)

So, this feeling of dis-satisfaction you are reporting is commonly termed "Existential Angst". "Existentialism" is the idea that morality has no basis in anything deeper than the individual. It's common after deconversions and is related to the whole "God is Dead" Nietzsche thing, and the question of how we can start rebuilding a framework for morality beyond mere hedonism from that point.

Yeah. Do you know what got me started on this whole idea? I linked to it at the bottom of the article, but I was asking if there was any good reason to pursue ambition over total hedonism, and I now think that the answer is "goodness is an end-in-itself too" and I'm pretty okay with it.

At some point, I think you'll stop feeling like your preferences and values were arbitrarily chosen by cold random unfeeling processes, and start feeling like the physics driving the "alien god" is really just a natural part of you, and that your values and preferences are a really integral part of you, and you start treating those things with an almost religious reverence. I think once you really understand all that goes into making you conscious and where "good" comes from, the whole thing stops being cold and unfeeling and starts being warm and satisfying.

Wow, I really like how you put that. Other people have tried to share a similar concept with me, but it always seemed cheesy and superficial. It never really started to sink in until now. I think it was the words "natural" and "warm" that did it for me. So thanks!

I linked to it at the bottom of the article, but I was asking if there was any good reason to pursue ambition over total hedonism, and I now think that the answer is "goodness is an end-in-itself too" and I'm pretty okay with it.

The way I look at it is, I'm good because that is what I prefer. There are many possible futures. I prefer some of those futures more than the others. I try my best to choose my favorite future with my actions. "Goodness" is part of what I prefer to happen, which is why I choose it. (And a version of me which didn't prefer goodness wouldn't be me, preferring goodness is a pretty big part of what goes into the definition of "me".)

Wow, I really like how you put that. Other people have tried to share a similar concept with me, but it always seemed cheesy and superficial. It never really started to sink in until now. I think it was the words "natural" and "warm" that did it for me. So thanks!

Very glad I could be helpful! I find Neil deGrasse Tyson / Sagan-esque talk kinda cheesy too. But I remember when I was a kid dabbling in philosophy, thinking hard about free will and monitoring my own thoughts for any trace of randomness, and suddenly it just became really clear that my thoughts and feelings followed predictable processes and there wasn't any sharp boundary between the laws governing objects and the laws governing minds. It was kind of a magical moment; I felt pretty connected to the universe and all that jazz. It is cheesy, but it's pretty hard to talk about these sorts of spiritual-ish experiences without sounding cheesy.

We want to do things that make ourselves happy, and we want to do things that make others happy.

One way to test whether we all want to do things that make others happy is to read a book or two. Try "Human Smoke" by Nicholson Baker, for instance. Another test would be to spend part of a day in prison or a mental hospital. But the most direct means I found to disabuse myself of the idea we all want to do things that make others happy is to meet more people. Having met more people, I am now more appreciative of the not-all people who not-all of the time want to be happy and see happiness. And I get made less not-happy because I no longer think everyone is terminally trying to make me happy.

It could not be less wrong that all hearts are as your heart.

I don't have anything to add to the discussion, but in the interest of being phatic I just want to say that this is a great introductory post -- welcome to LessWrong!

Part 2

One ultimate psychological motivation can trump even goodness, and that's the second terminal virtue: personal happiness.

If goodness were a terminal virtue, then how could it ever be trumped by anything? Actually, I think there's an answer to this. To me, being a terminal virtue seems to mean that you value it regardless of whether it leads to anything else. Contrast this with "I value X only to the extent that it leads to Y". But if you have more than one terminal virtue, it seems to follow that you'd have to choose which one you value more, and thus one can trump another. I'd suggest addressing these points.

Anyway, so are you saying that the drive for happiness trumps that of goodness? In most people? If so, to be clear, is it your opinion that happiness and goodness really are terminal goals/virtues of people, or are you just saying that "They are terminal virtues, but in cases where you have to choose, I think happiness trumps goodness"?

We usually want what makes us happy. I want what makes me happy. Spending time with family makes me happy. Playing board games makes me happy. Going hiking makes me happy. Winning races makes me happy. Being open-minded makes me happy. Hearing praise makes me happy. Learning new things makes me happy. Thinking strategically makes me happy. Playing touch football with friends makes me happy. Sharing ideas makes me happy. Feeling free makes me happy. Adventure makes me happy. Even divulging personal information makes me happy.

1) You are too cool!

2) From a literary perspective, that's a great job of illustrating with example.

Happiness and goodness might be subconsciously motivating them to choose this instrumental goal. Few people can introspect well enough to determine what's truly motivating them.

Indeed. I think belief in belief would be a great thing to bring up here. Furthermore, I think that explaining it, not just bringing it up, would be a good idea. Ie. a religious person might claim that he wants to become Christ-like even if it meant certain drops in happiness and goodness over the long term. But he may actually act differently, and if he does, then his actual drives oppose what he claims his drives are.

So anytime a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn't exist.

Or perhaps willfully disobeying Him? Which actually seems rather likely to me, because most religious people seem not to follow the instructions with 100% comprehensiveness. As someone raised as a Reform Jew, I'm all too aware of this, and I always wondered how you could pick and choose which instructions you follow. Perhaps more religious people are different, but my impression was that they follow more like 80-90% of the instructions.

Imagine how many deconversions we would see if it were suddenly sinful to play football, watch TV with your family, or splurge on tasty restaurant meals.

Or maybe we'd just see some interesting new rationalizations! I get your point though.

Basically everyone who says this believes "God wants what's best for us, even when we don't understand it."

I'm not sure what you mean by "best for us" here. Ie. do people believe that God wants happiness for them, goodness for society, or both? (And a new question just came to me - what does God think of animal rights?)

Becoming happy [section]

Your claim in this section seems to be that the terminal virtue of happiness trumps that of goodness (usually?). To really argue this, I think you'd need a lot more evidence.

But given that this is just a section of a larger article, you have limited space. Perhaps a solid intuitive argument could be made in that space, but I didn't find your examples to be intuitively general enough. Ie. if you gave examples that made me think, "Oh yeah, we do things like that in sooooo many different situations", then I would have been more convinced by your claim.

Whatever that stuff is, is what accounts for individual variance in which virtues are pursued.

My strong consequentialist instincts may be giving me a particularly hard time here... but I would specify that you're referring to instrumental virtues. When I think "virtue", I just instinctively think "terminal", and thus I had to reread this a few times before understanding it.

Also worth noting is the individual variance in the extent to which an individual is consciously motivated by happiness vs. goodness. If you look at the preference ratios between the two values, sociopaths are found at one end of the spectrum; extreme altruists, the other. Most of us fall somewhere in the middle.

We talked for a while about preference ratios and altruism ratios, so I know what you mean, but I don't think you explained it thoroughly enough.

Preference ratio := how much I care about me : how much I care about person X

Altruism ratio := "I act altruistically because it will lead to goodness" : "I act altruistically because it will lead to my happiness"

I think that these are two fantastic terms, and that they should be introduced into the "vocabulary of morality".

For most people, the only true terminal values are happiness and goodness.

I think what you meant is that for most people, their only terminal values are happiness and goodness. Terminal values belong to a person. Using the word "the" makes it sound like it's some sort of inherent property of the universe (to me at least).

And I'm still not very satisfied with the idea of something being an end-in-itself: [section]

Nicely done!

Why should we be controlled by emotions that originated through random chance?

Wrong question. It's not a matter of whether they should control us. It's a fact that they do.

Exactly! Not many people seem to understand this.

Um... so my high school Calculus teacher, who is lots, lots smarter than I am, thinks "emotion" is evidence of intelligent design. I thought this was just a "god of the gaps" thing, but maybe it is really the simplest explanation. I think most people here have ruled out an omnipotent, omni-benevolent God... but maybe life on earth as we know it is really just an alien-child's abandoned science fair project or something.

The former two lines felt like such a great place to end :(

Why bring up the possibility of intelligent design here? You already mention the alien-god of evolution which implies that there is no intelligent design (I think; I just read the wiki article for the first time quickly). Regardless, the origin of the universe/emotions doesn't seem too relevant and felt like an awkward ending to me.

In short, it was the first time I've had a conversation with a fellow "rationalist" and it was one of the coolest experiences I've ever had in my life.

Likewise!! On both counts.


For the record, I went really hard on you here. I would say "don't take it personally", but I know that you won't ;)

Anyway, so are you saying that the drive for happiness trumps that of goodness? In most people? If so, to be clear, is it your opinion that happiness and goodness really are terminal goals/virtues of people, or are you just saying that "They are terminal virtues, but in cases where you have to choose, I think happiness trumps goodness"?

Nah, either one can trump the other, depending on the situation and the individual.

[flattery]

Thanks :)

Indeed. I think belief in belief would be a great thing to bring up here. Furthermore, I think that explaining it, not just bringing it up, would be a good idea. Ie. a religious person might claim that he wants to become Christ-like even if it meant certain drops in happiness and goodness over the long term

But I bring that up right in the next paragraph! It fits with both, but do you really think it belongs with 'become Christ-like' over 'become obedient to God's will'? Or are you saying that I should mention it twice?

Or perhaps willfully disobeying Him?

Yeah, that too! But that one's so obvious, isn't it? Here, we're talking about people who would actually claim that their terminal goal is to "become obedient," and I don't think you as a Reform Jew would ever have claimed that...

I'm not sure what you mean by "best for us" here. Ie. do people believe that God wants happiness for them, goodness for society, or both? (And a new question just came to me - what does God think of animal rights?)

That's the point, haha - they don't know for sure, because only God knows God's will! As for animal rights, I know only a few Christians who are into it; out of all the many Christians I know, only two are vegetarian... most believe God gave man dominion over animals, which means we take care of them and eat them. Some will also misinterpret Peter's vision in Acts 10 and cite this as God giving us permission to eat meat, but most will cite Genesis and man's "dominion".

Your claim in this section seems to be that the terminal virtue of happiness trumps that of goodness (usually?). To really argue this, I think you'd need a lot more evidence.

(sigh) If you really think I'm making that argument, or any argument (see my comment to your Part 1), then I really need to practice my writing. :(

When I think "virtue", I just instinctively think "terminal", and thus I had to reread this a few times before understanding it.

(nods) Good, because this was more of my goal, to get people to rethink where to draw the boundary.

I think what you meant is that for most people, their only terminal values are happiness and goodness. Terminal values belong to a person. Using the word "the" makes it sound like it's some sort of inherent property of the universe (to me at least).

Oops, let me rephrase that to be more clear. "The only true terminal values are happiness and goodness." Thanks. I do think it's like some sort of inherent property of the universe or something.

The former two lines felt like such a great place to end :(

You're right!!!! That was silly of me. Ending on "emotion" just reminded me of that conversation and I wanted to get some feedback, but I shouldn't have been so lazy and should have asked about it on an open thread or something.

Likewise!! On both counts.

:-)

Part 1

This is probably the first "philosophical" thought I've had in my life

Haha, good one. Humor is often a good way to open :)

happy

I assume you mean "desirability of mind-state". People associate the word "happy" with a lot of different things, so I think it's worth giving some sort of operational definition (could probably be informal though).

So I suspect a certain commonality among human beings in that we all actually share the same terminal values, or terminal virtues.

I think a quick primer on consequentialism vs. virtue ethics would be appropriate. a) Some people might not know the difference. b) It's a key part of what you're writing about and so a refresher feels like it'd be useful.

You use the phrase "terminal virtues" without first defining it. I don't think it's an "official" term, and I don't think it "has enough behind it" where people could infer what it means.

I think you should more clearly distinguish between what's a question for the social sciences, and what's a question for philosophy.

Social sciences:

1) Do people claim to be consequentialists, or virtue ethicists?

2) Do people act like consequentialists, or virtue ethicists? Ie. what would the decisions they make imply about their beliefs?

3) What are the fundamental things that drive/motivate people? Can it always be traced back to happiness or goodness (as you define them)? Or are there things that drive people independent of happiness and goodness? Example: say that someone claims to value truth. Would they tell the truth if they knew for a fact that it would lead to less happiness and goodness in the long-run?

One of the key points you seem to be making is that as far as 3) goes, for the overwhelming majority of people, their drives/motives can be traced to happiness or goodness. But what does it mean for a drive to be traced to something? Well, my thought is that drives depend on what we truly care about. We may have a drive for X, but if we only care about X to the extent that it leads to Y, then Y is what we truly care about, and I predict that the drive for X will only be as strong as the expectation that X -> Y (although I'm sure the relationship isn't perfectly linear; humans are weird).

However, this is a question for the social sciences. The way to figure it out would be to study it scientifically. Ie. by observing how people act and feel in different situations. In particular, since it involves people, the domain would be one of the social sciences.

Philosophy:

1) Does anything have "intrinsic value"?

2) What does having "intrinsic value" even mean exactly? How would the world look if things had intrinsic value? How would it look if things didn't have intrinsic value?

3) What about morality? What does it mean for something to be moral/good? How do these rules get determined?

My stance is that a) the words I mention above are hard to use because they don't have precise and commonly accepted definitions, and b) terminal goals are completely arbitrary. Ie. you can't say that killing people is a bad terminal goal. You can only say that "killing people is bad if... you want to promote a sane and happy world." (Instrumental) rationality is about being good at achieving our ends. But it doesn't help us pick our ends.

I don't want to believe this though. I've been conditioned to feel like ends are good/bad, despite my understanding. And I've been conditioned to seek purpose, ie. to find and seek "good" ends. Because of the way I've been conditioned, I don't like believing that goals are completely arbitrary, but unfortunately it's the view that makes the most sense to me by very large margins.

Often, but not always, these two desires go hand-in-hand.

I don't think it's completely clear what this means. I think you mean "doing good tends to also make us happy". You do end up saying this, but I think you say it two sentences too late. Ie. I'd say "doing good tends to also make us happy" before using the hand-in-hand phrase, and before talking about the "components" of happiness (I'd use the word determinants, which is a bit of a nitpick).

psychological motivators

I have a feeling that this isn't the right term. Regardless, I'd explain what you mean by it.

handing out money through personal-happiness-optimizing random acts of kindness

Aka warm fuzzies.

As rational human beings, we occasionally will consciously choose to inefficiently optimize our personal happiness for the sake of others.

Very important point: If you're claiming that doing so is rational, then one of two things must be the case:

1) You alter your claim to say that it's rational... presuming a terminal value of goodness.

2) You argue that a terminal value of goodness is rational.

As I read, I couldn't help but think that virtue ethics and consequentialism are not really so different at heart.

Another very important point: distinguish theory from practice.

As I understand it:

  • In theory, they're complete opposites. A virtue ethicist would say, "X is just inherently virtuous. It doesn't matter what the consequences are." A consequentialist would say that it does depend on the consequences. Someone might say, "But consequentialists have to choose terminal values, don't they?" My response: "Yes, but they admit that this is an arbitrary decision. They don't claim that these terminal values are virtuous (as I understand it)."
  • In practice, virtue ethicists often pursue things to achieve the end of being virtuous, and their virtues are often very very similar to the terminal values of consequentialists. At the end of the day, their virtues are pretty much just happiness and goodness. And at the end of the day, these are often the terminal values that consequentialists choose. I think that this is the point that you were making. And I thank you for making it, because I didn't really pay much attention to that fact. My overly literal and reductionist approach failed to lead me to notice how important the practical outcome is. Furthermore, I'm not sure how true this is, but it seems that in practice, a lot of consequentialists believe that their terminal goals do possess inherent virtue, in which case the lines do get really fuzzy between consequentialism and virtue ethics.

Thanks for the tips! Adding a brief primer on virtue ethics and consequentialism is a good idea, and I think you're right that this whole idea is more relevant to the social sciences than to philosophy. Did you actually want answers to those questions, or were they just to help show me the kind of questions that belong in each category? Great distinction at any rate; I'll go change that word "philosophical" to "intellectual" now.

I think you noticed, or at least you've now led me to notice, that I'm not really interested in the "in theory" at all, or in struggling over definitions. I'm just trying to show what is actually happening "in practice" and to suggest that whether someone calls himself a virtue ethicist or a consequentialist doesn't change the fact that he is psychologically motivated (for lack of a better term) to pursue happiness and goodness. I think what I'm trying to do with this article is help figure out where we should draw a boundary.

b) terminal goals are completely arbitrary. I.e., you can't say that killing people is a bad terminal goal. You can only say that "killing people is bad if... you want to promote a sane and happy world." (Instrumental) rationality is about being good at achieving our ends. But it doesn't help us pick our ends.

I think this might have been my whole point: that our real ends aren't as arbitrary as we think. It seems to me that in practice there are really just two ends that humans pursue; nothing else seems like an end-in-itself. Killing people can be an instrumental goal that someone consciously or subconsciously thinks will make him happy, that will lead him to his optimal mind-state. He might be wrong about this; it might not actually lead him to his optimal mind-state. Or maybe it does. Either way, it doesn't matter in the context of this discussion whether we classify killing as "wrong"; what matters is what we do about it. In the real world, we're motivated, by our own desires for personal happiness and goodness, to lock up killers.

Very important point: If you're claiming that doing so is rational, then one of two things must be the case:

But I'm not claiming it's rational... I'm not claiming anything, and I'm not arguing or proving any point. I'm just describing my observation that people who seem very rational can still maximize their personal happiness inefficiently. The resulting idea is that goodness seems like an end-in-itself, and a relatively universal one, so we should recognize it as such.

The main takeaway I'm getting from your advice is that I should try to make it clear in this article that I'm not attempting to prove a point, but rather just to "carve along the joints" and offer a clearer way of looking at things by lumping happiness and goodness into the same category.

Perhaps one other way we could describe what is actually happening in practice would be to say that virtue ethicists pursue their terminal values more subconsciously while consequentialists pursue the same terminal values more consciously.

Why did the alien-god give us emotions?

The alien-god does not act rationally... The origin of emotion ultimately seems like the result of random chance.

Emotions are likely as useless as the other things the alien-god gave us: things like eyes and livers and kidneys and sex drives and fight-or-flight responses.

Emotions appear to drive social cooperation, at least among mammals. Human partnership with dogs is mediated and cemented by emotions; I think only someone who has spent no time with dogs could disagree with this observation. Emotions and their expressions are common enough between humans and dogs that they probably exist across at least a broad swath of mammals.

Just two examples of what emotions get us: 1) pair-bonding, leading to effective partnership in raising our extremely needy young, and 2) a nearly irresistibly powerful impetus to get the hell away from scary animals, especially if they surprise us at night. Pretty clearly, both of these are quite useful to our survival, so these emotions would have been developed by natural selection for fitness, just as the kidney's ability to clean blood and the eye's ability to focus were.