(A shorter gloss of Fun Theory is "31 Laws of Fun", which summarizes the advice of Fun Theory to would-be Eutopian authors and futurists.)

Fun Theory is the field of knowledge that deals in questions such as "How much fun is there in the universe?", "Will we ever run out of fun?", "Are we having fun yet?" and "Could we be having more fun?"

Many critics (including George Orwell) have commented on the inability of authors to imagine Utopias where anyone would actually want to live.  If no one can imagine a Future where anyone would want to live, that may drain off motivation to work on the project.  The prospect of endless boredom is routinely fielded by conservatives as a knockdown argument against research on lifespan extension, against cryonics, against all transhumanism, and occasionally against the entire Enlightenment ideal of a better future.

Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil).  Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance.  Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization.  Fun Theory also highlights the flaws of any particular religion's perfect afterlife - you wouldn't want to go to their Heaven.

Finally, going into the details of Fun Theory helps you see that eudaimonia is complicated - that there are many properties which contribute to a life worth living.  Which helps you appreciate just how worthless a galaxy would end up looking (with very high probability) if the galaxy was optimized by something with a utility function rolled up at random.  This is part of the Complexity of Value Thesis and supplies motivation to create AIs with precisely chosen goal systems (Friendly AI).

Fun Theory is built on top of the naturalistic metaethics summarized in Joy in the Merely Good; as such, its arguments ground in "On reflection, don't you think this is what you would actually want for yourself and others?"

Posts in the Fun Theory sequence (reorganized by topic, not necessarily in the original chronological order):

  • Prolegomena to a Theory of Fun:  Fun Theory is an attempt to actually answer questions about eternal boredom that are more often posed and left hanging.  Attempts to visualize Utopia are often defeated by standard biases, such as the attempt to imagine a single moment of good news ("You don't have to work anymore!") rather than a typical moment of daily life ten years later.  People also believe they should enjoy various activities that they actually don't.  But since human values have no supernatural source, it is quite reasonable for us to try to understand what we want.  There is no external authority telling us that the future of humanity should not be fun.
  • High Challenge:  Life should not always be made easier for the same reason that video games should not always be made easier.  Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge.  One needs games that are fun to play and not just fun to win.  Life's utility function is over 4D trajectories, not just 3D outcomes.  Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved - the traveller, the destination and the journey.
  • Complex Novelty:  Are we likely to run out of new challenges, and be reduced to playing the same video game over and over?  How large is Fun Space?  This depends on how fast you learn; the faster you generalize, the more challenges you see as similar to each other.  Learning is fun, but uses up fun; you can't have the same stroke of genius twice.  But the more intelligent you are, the more potential insights you can understand; human Fun Space is larger than chimpanzee Fun Space, and not just by a linear factor of our brain size.  In a well-lived life, you may need to increase in intelligence fast enough to integrate your accumulating experiences.  If so, the rate at which new Fun becomes available to intelligence, is likely to overwhelmingly swamp the amount of time you could spend at that fixed level of intelligence.  The Busy Beaver sequence is an infinite series of deep insights not reducible to each other or to any more general insight.
  • Continuous Improvement:  Humans seem to be on a hedonic treadmill; over time, we adjust to any improvements in our environment - after a month, the new sports car no longer seems quite as wonderful.  This aspect of our evolved psychology is not surprising: it is a rare organism in a rare environment whose optimal reproductive strategy is to rest with a smile on its face, feeling happy with what it already has.  To entirely delete the hedonic treadmill seems perilously close to tampering with Boredom itself.  Is there enough fun in the universe for a transhuman to jog off the treadmill - improve their life continuously, leaping to ever-higher hedonic levels before adjusting to the previous one?  Can ever-higher levels of pleasure be created by the simple increase of ever-larger floating-point numbers in a digital pleasure center, or would that fail to have the full subjective quality of happiness?  If we continue to bind our pleasures to novel challenges, can we find higher levels of pleasure fast enough, without cheating?  The rate at which value can increase as more bits are added, and the rate at which value must increase for eudaimonia, together determine the lifespan of a mind.  If minds must use exponentially more resources over time in order to lead a eudaimonic existence, their subjective lifespan is measured in mere millennia even if they can draw on galaxy-sized resources.  (A back-of-envelope version of this last claim appears just after this list.)
  • Sensual Experience:  Much of the anomie and disconnect in modern society can be attributed to our spending all day on tasks (like office work) that we didn't evolve to perform (unlike hunting and gathering on the savanna).  Thus, many of the tasks we perform all day do not engage our senses - even the most realistic modern video game is not the same level of sensual experience as outrunning a real tiger on the real savanna.  Even the best modern video game is low-bandwidth fun - a low-bandwidth connection to a relatively simple challenge, which doesn't fill our brains well as a result.  But future entities could have different senses and higher-bandwidth connections to more complicated challenges, even if those challenges didn't exist on the savanna.
  • Living By Your Own Strength:  Our hunter-gatherer ancestors strung their own bows, wove their own baskets and whittled their own flutes.  Part of our alienation from our design environment is the number of tools we use that we don't understand and couldn't make for ourselves.  It's much less fun to read something in a book than to discover it for yourself.  Specialization is critical to our current civilization.  But the future does not have to be a continuation of this trend in which we rely more and more on things outside ourselves which become less and less comprehensible.  With a surplus of power, you could begin to rethink the life experience as a road to internalizing new strengths, not just staying alive efficiently through extreme specialization.
  • Free to Optimize: Stare decisis is the legal principle which binds courts to follow precedent.  The rationale is not that past courts were wiser, but jurisprudence constante:  The legal system must be predictable so that people can implement contracts and behaviors knowing their implications.  The purpose of law is not to make the world perfect, but to provide a predictable environment in which people can optimize their own futures.  If an extremely powerful entity is choosing good futures on your behalf, that may leave little slack for you to navigate through your own strength.  Describing how an AI can avoid stomping your self-determination is a structurally complicated problem.  A simple (possibly not best) solution would be the gift of a world that works by improved rules, stable enough that the inhabitants could understand them and optimize their own futures together, but otherwise hands-off.  Modern legal systems fail along this dimension; no one can possibly know all the laws, let alone obey them.
  • Harmful Options:  Offering people more choices that differ along many dimensions, may diminish their satisfaction with their final choice.  Losses are more painful than the corresponding gains are pleasurable, so people think of the dimensions along which their final choice was inferior, and of all the other opportunities passed up.  If you can only choose one dessert, you're likely to be happier choosing from a menu of two than from a menu of fourteen.  Refusing tempting choices consumes mental energy and decreases performance on other cognitive tasks.  A video game that contained an always-visible easier route through, would probably be less fun to play even if that easier route were deliberately foregone.  You can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken.  And what if a worse option is taken due to a predictable mistake?  There are many ways to harm people by offering them more choices.
  • Devil's Offers:  It is dangerous to live in an environment in which a single failure of resolve, throughout your entire life, can result in a permanent addiction or in a poor edit of your own brain - for example, a civilization which constantly offers people tempting ways to shoot off their own feet, such as a cheap escape into eternal virtual reality, or customized drugs.  Resisting such offers requires a constant stern will that may not be much fun.  And it's questionable whether a superintelligence that descends from above to offer people huge dangerous temptations that they wouldn't encounter on their own, is helping.
  • Nonperson Predicates, Nonsentient Optimizers, Can't Unbirth a Child:  Discusses some of the problems of, and justification for, creating AIs that are knowably not conscious / sentient / people / citizens / subjective experiencers.  We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it.  So we need to know how to tell that something is definitely not a person, and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off.  Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living.
  • Amputation of Destiny:  C. S. Lewis's Narnia has a problem, and that problem is the super-lion Aslan - who demotes the four human children from the status of main characters, to mere hangers-on while Aslan does all the work.  Iain Banks's Culture novels have a similar problem; the humans are mere hangers-on of the superintelligent Minds.  We already have strong ethical reasons to prefer to create nonsentient AIs rather than sentient AIs, at least at first.  But we may also prefer in just a fun-theoretic sense that we not be overshadowed by hugely more powerful entities occupying a level playing field with us.  Entities with human emotional makeups should not be competing on a level playing field with superintelligences - either keep the superintelligences off the playing field, or design the smaller (human-level) minds with a different emotional makeup that doesn't mind being overshadowed.
  • Dunbar's Function:  Robin Dunbar's original calculation showed that the maximum human group size was around 150.  But a typical size for a hunter-gatherer band would be 30-50, cohesive online groups peak at 50-60, and small task forces may peak in internal cohesiveness around 7.  Our attempt to live in a world of six billion people has many emotional costs:  We aren't likely to know our President or Prime Minister, or to have any significant influence over our country's politics, although we go on behaving as if we did.  We are constantly bombarded with news about improbably pretty and wealthy individuals.  We aren't likely to find a significant profession where we can be the best in our field.  But if intelligence keeps increasing, the number of personal relationships we can track will also increase, along with the natural degree of specialization.  Eventually there might be a single community of sentients that really was a single community.
  • In Praise of Boredom:  "Boredom" is an immensely subtle and important aspect of human values, nowhere near as straightforward as it sounds to a human.  We don't want to get bored with breathing or with thinking.  We do want to get bored with playing the same level of the same video game over and over.  We don't want changing the shade of the pixels in the game to make it stop counting as "the same game".  We want a steady stream of novelty, rather than spending most of our time playing the best video game level so far discovered (over and over) and occasionally trying out a different video game level as a new candidate for "best".  These considerations would not arise in most utility functions in expected utility maximizers.
  • Sympathetic Minds:  Mirror neurons are neurons that fire both when performing an action oneself, and watching someone else perform the same action - for example, a neuron that fires when you raise your hand or watch someone else raise theirs.  We predictively model other minds by putting ourselves in their shoes, which is empathy.  But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel.  Like "boredom", the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI.  Most such agents would regard any other agents in their environment as a special case of complex systems to be modeled or optimized; they would not feel what those agents feel.
  • Interpersonal Entanglement:  Our sympathy with other minds makes our interpersonal relationships one of the most complex aspects of human existence.  Romance, in particular, is more complicated than being nice to friends and kin, negotiating with allies, or outsmarting enemies - it contains aspects of all three.  Replacing human romance with anything simpler or easier would decrease the peak complexity of the human species - a major step in the wrong direction, it seems to me.  This is my problem with proposals to give people perfect, nonsentient sexual/romantic partners, which I usually refer to as "catgirls" ("catboys").  The human species does have a statistical sex problem: evolution has not optimized the average man to make the average woman happy or vice versa.  But there are less sad ways to solve this problem than both genders giving up on each other and retreating to catgirls/catboys.
  • Failed Utopia #4-2:  A fictional short story illustrating some of the ideas in Interpersonal Entanglement above.  (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)
  • Growing Up is Hard:  Each piece of the human brain is optimized on the assumption that all the other pieces are working the same way they did in the ancestral environment.  Simple neurotransmitter imbalances can result in psychosis, and some aspects of Williams Syndrome are probably due to having a frontal cortex that is too large relative to the rest of the brain.  Evolution creates limited robustness, but often stepping outside the ancestral parameter box just breaks things.  Even if the first change works, the second and third changes are less likely to work as the total parameters get less ancestral and the brain's tolerance is used up.  A cleanly designed AI might improve itself to the point where it was smart enough to unravel and augment the human brain.  Or uploads might be able to make themselves smart enough to solve the increasingly difficult problem of not going slowly, subtly insane.  Neither path is easy.  There seems to be an irreducible residue of danger and difficulty associated with an adult version of humankind ever coming into being.  Being a transhumanist means wanting certain things; it doesn't mean you think those things are easy.
  • Changing Emotions:  Creating new emotions seems like a desirable aspect of many parts of Fun Theory, but this is not to be trivially postulated.  It's the sort of thing best done with superintelligent help, and slowly and conservatively even then.  We can illustrate these difficulties by trying to translate the short English phrase "change sex" into a cognitive transformation of extraordinary complexity and many hidden subproblems.
  • Emotional Involvement:  Since the events in video games have no actual long-term consequences, playing a video game is not likely to be nearly as emotionally involving as much less dramatic events in real life.  The supposed Utopia of playing lots of cool video games forever, is life as a series of disconnected episodes with no lasting consequences.  Our current emotions are bound to activities that were subgoals of reproduction in the ancestral environment - but we now pursue these activities as independent goals regardless of whether they lead to reproduction.  (Sex with birth control is the classic example.)  A transhuman existence would need new emotions suited to the important short-term and long-term events of that existence.
  • Serious Stories:  Stories and lives are optimized according to rather different criteria.  Advice on how to write fiction will tell you that "stories are about people's pain" and "every scene must end in disaster".  I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming.  Stories in which nothing ever goes wrong, are painful to read; would a life of endless success have the same painful quality?  If so, should we simply eliminate that revulsion via neural rewiring?  Pleasure probably does retain its meaning in the absence of pain to contrast it; they are different neural systems.  The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure.  One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain.  Another approach would be to eliminate pain entirely.  I feel like I prefer the former approach, but I don't know if it can last in the long run.
  • Eutopia is Scary:  If a citizen of the Past were dropped into the Present world, they would be pleasantly surprised along at least some dimensions; they would also be horrified, disgusted, and frightened.  This is not because our world has gone wrong, but because it has gone right.  A true Future gone right would, realistically, be shocking to us along at least some dimensions.  This may help explain why most literary Utopias fail; as George Orwell observed, "they are chiefly concerned with avoiding fuss".  Heavens are meant to sound like good news; political utopias are meant to show how neatly their underlying ideas work.  Utopia is reassuring, unsurprising, and dull.  Eutopia would be scary.  (Of course the vast majority of scary things are not Eutopian, just entropic.)  Try to imagine a genuinely better world in which you would be out of place - not a world that would make you smugly satisfied at how well all your current ideas had worked.  This proved to be a very important exercise when I tried it; it made me realize that all my old proposals had been optimized to sound safe and reassuring.
  • Building Weirdtopia:  Utopia and Dystopia both confirm the moral sensibilities you started with; whether the world is a libertarian utopia of government non-interference, or a hellish dystopia of government intrusion and regulation, either way you get to say "Guess I was right all along."  To break out of this mold, write down the Utopia, and the Dystopia, and then try to write down the Weirdtopia - an arguably-better world that zogs instead of zigging or zagging.  (Judging from the comments, this exercise seems to have mostly failed.)
  • Justified Expectation of Pleasant Surprises:  A pleasant surprise probably has a greater hedonic impact than being told about the same positive event long in advance - hearing about the positive event is good news in the moment of first hearing, but you don't have the gift actually in hand.  Then you have to wait, perhaps for a long time, possibly comparing the expected pleasure of the future to the lesser pleasure of the present.  This argues that if you have a choice between a world in which the same pleasant events occur, but in the first world you are told about them long in advance, and in the second world they are kept secret until they occur, you would prefer to live in the second world.  The importance of hope is widely appreciated - people who do not expect their lives to improve in the future are less likely to be happy in the present - but the importance of vague hope may be understated.
  • Seduced by Imagination:  Vagueness usually has a poor name in rationality, but the Future is something about which, in fact, we do not possess strong reliable specific information.  Vague (but justified!) hopes may also be hedonically better.  But a more important caution for today's world is that highly specific pleasant scenarios can exert a dangerous power over human minds - suck out our emotional energy, make us forget what we don't know, and cause our mere actual lives to pale by comparison.  (This post is not about Fun Theory proper, but it contains an important warning about how not to use Fun Theory.)
  • The Uses of Fun (Theory):  Fun Theory is important for replying to critics of human progress; for inspiring people to keep working on human progress; for refuting religious arguments that the world could possibly have been benevolently designed; for showing that religious Heavens show the signature of the same human biases that torpedo other attempts at Utopia; and for appreciating the great complexity of our values and of a life worth living, which requires a correspondingly strong effort of AI design to create AIs that can play good roles in a good future.
  • Higher Purpose:  Having a Purpose in Life consistently shows up as something that increases stated well-being.  Of course, the problem with trying to pick out "a Purpose in Life" in order to make yourself happier, is that this doesn't take you outside yourself; it's still all about you.  To find purpose, you need to turn your eyes outward to look at the world and find things there that you care about - rather than obsessing about the wonderful spiritual benefits you're getting from helping others.  In today's world, most of the highest-priority legitimate Causes consist of large groups of people in extreme jeopardy:  Aging threatens the old, starvation threatens the poor, extinction risks threaten humanity as a whole.  If the future goes right, many and perhaps all such problems will be solved - depleting the stream of victims to be helped.  Will the future therefore consist of self-obsessed individuals, with nothing to take them outside themselves?  I suggest, though, that even if there were no large groups of people in extreme jeopardy, we would still, looking around, find things outside ourselves that we cared about - friends, family; truth, freedom...  Nonetheless, if the Future goes sufficiently well, there will come a time when you could search the whole of civilization, and never find a single person so much in need of help, as dozens you now pass on the street.  If you do want to save someone from death, or help a great many people, then act now; your opportunity may not last, one way or another.
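
A back-of-envelope version of the "mere millennia" claim in Continuous Improvement above, under two illustrative assumptions not taken from the sequence - that the resources required for a eudaimonic existence double every $\tau$ subjective years, and that a galaxy contains on the order of $10^{69}$ atoms:

$$R(t) = R_0 \, 2^{t/\tau}, \qquad \frac{R_{\text{galaxy}}}{R_0} \approx 10^{69} \approx 2^{229} \;\Longrightarrow\; t_{\max} \approx 229\,\tau.$$

If the requirement doubles every subjective year, a galaxy lasts a couple of subjective centuries; doubling every subjective decade stretches that to a couple of millennia - which is the sense in which galaxy-sized resources buy only "mere millennia".
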
30 comments:

It occurred to me at some point that Fun Theory isn't just the correct reply to Theodicy; it's also a critical component of any religious theodicy program. And one of the few ways I could conceive of someone providing major evidence of God's existence.

That is, I'm fairly confident that there is no god. But if I worked out a fairly complete version of Fun Theory, and it turned out that this really was the best of all possible worlds, I might have to change my mind.

Roko:

Unfortunately, it seems to me that moral anti-realism and axiological anti-realism place limits on our ability to "optimize" the universe.

To put the argument in simple terms:

  1. Axiological/Moral anti-realism states that there are no categorically good states of the universe. On this we agree. The goodness of states of the universe is contingent upon the desires and values of those who ask the question; in this case us.

  2. Human minds can only store a finite amount of information in our preferences. Humans who have spent more time developing their character beyond the evolutionarily programmed desires [food, sex, friendship, etc] will fare slightly better than those who haven't, i.e. their preferences will be more complicated. But probably not by very much, information theoretically speaking. The amount of information your preferences can absorb by reading books, by having life experiences, etc is probably small compared to the information implicit in just being human.

  3. The size of the mutually agreed preferences of any group of humans will typically be smaller than the preferences of any one human. Hence it is not surprising that in the recent article on "Failed Utopia 4-2" there was a lot of disagreement regarding the goodness of this world.

  4. The world that we currently live in here in the US/UK/EU fails to fulfill a lot of the base preferences that are common to all humans, with notable examples being the dissatisfaction with the opposite sex, boring jobs, depression, aging, etc, etc...

  5. If one optimized over these unfulfilled preferences, one would get something that resembled - for most people - a low grade utopia that looked approximately like Banks' Culture. This low grade utopia would probably only be a small amount of information away from the world we see today. Not that it isn't worth doing, of course!

This explains a lot of things. For example, the change of name of the WTA from "transhumanist" to "humanity plus". Humanity plus is code for "low grade utopia for all". "Transhumanist" is code for futures that various oddball individuals envisage in which they (somehow) optimize themselves way beyond the usual human preference set. These two futures are eminently compatible - we can have them both, but most people show no interest in the second set of possibilities. It will be interesting to think about the continuum between these two goals. It's also interesting to wonder whether the goals of "radical" transhumanists might be a little self-contradictory. With a limited human brain, you can (as a matter of physical fact) only entertain thoughts that constrain the future to a limited degree. Even with all technological obstacles out of the way, our imaginations might place a hard limit on how good a future we can try to build for ourselves. Anyone who tries to exceed this limit will end up (somehow) absorbing noise from their environment and incorporating it into their preferences. Not that I have anything against this - it is how we got our preferences in the first place - though it is not a strong motivator for me to fantasize about spending eternity fulfilling preferences that I don't have yet and which I will generate at random at some point in the future when I realize that my extant preferences have "run out of juice".

This, I fear, is a serious torpedo in the side of the transhumanist ideal. I eagerly await somebody proving me wrong here...

Roko, preferences are not flat; they depend and act on the state of the world in general and on themselves in particular. They can grow very detailed, and can include states quite remote from the current world as desirable. The problem with the derailed aspects of transhumanism is not remoteness from the currently human, but mistaken preferences arrived at mostly by blind leaps of imagination. We define preferences over the remote future implicitly, without being able to imagine it, only gradually becoming able to actually implement them, preserving or refining the preferences through growth.

Roko:

In response to my own question: I think that the information difference between the innate biological prefs that we have and explicitly stated preferences is a lot bigger than I thought.

For example, I can state the following:

(1) I wish to be smart enough to understand all human science and mathematics published to this date, and to solve all outstanding scientific and philosophical questions including intelligence, free will and ethics. I want to know the contents and meaning of every major literary work in print and every major film, to understand the history of every major civilization, to fall in love with the person who is most compatible with me in the world.

Now if I make all these wishes, how much have I cut down future states of the universe? How much optimizing power in bits have I wished for?

I expressed the wish in about 330 characters, which according to Shannon means I have expressed 330 bits of information, roughly equivalent to specifying the state of a 20X20 grid of pixels each one of which can be either on or off. I feel that this is something of an underestimate in terms of how much I have cut down future states of the universe. Another way of calculating the complexity of the above wish is to bound it by the log of the number of psychologically distinguishable states of my mind. Given the FHI brain emulation roadmap, this upper bound could be a very large number indeed. Here is another ~300-char wish:

(2) I want to be as rich as Bill Gates. I want to have ten mansions, each with ten swimming pools and a hundred young, willing female virgins to cater to my every whim. I want my own private army and an opposing force who I will trounce in real combat every weekend. I want an 18-inch penis and muscles the size of Arnie in his younger days, and I want to be 6'7''. I want to be able to eat galaxy chocolate flavored ice cream all day without getting fat or getting bored with it. I want a car that goes at 5000 miles an hour without any handling problems or danger of accident, and I want to be able to drive it around the streets of my city and leave everyone in the dust.

Now it appears to me that this wish probably did only cut down the future by 300 bits... that it is a far less complex wish than the first one I gave.  Presumably the difference between those who end up in a low-grade heaven and those who end up as superintelligent posthumans inhabiting a Dyson sphere, or having completely escaped from our physics, lies in the difference between the former wish and the latter.  Again, it is fruitful and IMO very important to explore the continuum between the two.
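
(A minimal sketch, not part of the original thread, of the two bit estimates being juggled above; the wish string is an abbreviated stand-in. Shannon's classic estimate of the entropy of English prose is roughly one bit per character, which is where "330 characters, therefore about 330 bits" comes from; raw 8-bit ASCII gives an upper bound about eight times larger.)

```python
# Rough information estimates for a wish expressed in ~300 characters of English.
wish = ("I wish to be smart enough to understand all human science and mathematics "
        "published to this date, to solve all outstanding scientific and philosophical "
        "questions, and to fall in love with the most compatible person in the world.")

n_chars = len(wish)

# Crude upper bound: encode every character literally at 8 bits apiece.
ascii_bits = 8 * n_chars

# Shannon-style estimate: English prose carries roughly 1 bit per character
# once the redundancy of the language is taken into account.
BITS_PER_CHAR_ENGLISH = 1.0   # rough empirical figure, not a precise constant
english_bits = BITS_PER_CHAR_ENGLISH * n_chars

print(f"{n_chars} characters: at most {ascii_bits} bits raw, ~{english_bits:.0f} bits as English")
```

Neither figure touches the Minimum Message Length point made in the reply below: the wish leans on concepts like "science" and "compatible" whose definitions are not paid for in those characters, so spelled out from scratch it would cost vastly more bits.
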

Roko, the Minimum Message Length of that wish would be MUCH greater if you weren't using information already built into English and our concepts.

Jon2:

I can certainly understand your dissatisfaction with medieval depictions of heaven. However, your description of fun theory reminds me of the Garden of Eden. i.e. in Genesis 1-2, God basically says:

"I've created the two of you, perfectly suited for one another physically and emotionally, although the differences will be a world to explore in itself. You're immortal and I've placed you in a beautiful garden, but now I'm going to tell you to go out and be fruitful and multiply and fill the earth and subdue it and have dominion over all living things; meaning build, create, procreate, invent, explore, and enjoy what I've created, which by the way is really really big and awesome. I'll always be here beside you, and you'll learn to live in perfect communion with me, for I have made you in my own image to love the process of creation as I do. But if you ever decide that you don't want that, and that you want to go it alone, rejecting my presence and very existence, then there's this fruit you can take and eat. But don't do it, because if you do, you will surely die."

It seems that the point of disagreement is that your utopia doesn't have an apple. The basic argument of theodicy is that Eden with the apple is better than Eden sans apple. To the extent that free will is good, a utopia must have an escape option.

Or, to put it another way, obedience to the good is a virtue. Obedience to the good without the physical possibility of evil is a farce.

It's easy to look around and say, "How could a good God create THIS." But the real question is, "How could a good God create a world in which there is a non-zero probability of THIS."

This logic assumes that a beyond human intelligence in a redesigned world would still find inherent value in free will. Isn't it possible that such an intelligence would move beyond the need to experience pain in order to comprehend the value of pleasure?

According to the bible, god created different aspects of the world across six days and after each creation he "saw that it was good". Yet nothing ELSE existed. If there had never been a "world" before, and evil had not yet been unleashed, by what method was this god able to measure that his creation was good? One must assume that god's superior intelligence simply KNEW it to be good and had no need to measure it against something "bad" in order to know it. Couldn't the eventual result of AI be the attainment of the same ability... the ability to KNOW pleasure without the existence of its opposite?

Isn't the hope (or should I say fun?) of considering the potential of AI that such a vast intelligence would move life BEYOND the anchors to which we now find ourselves locked? If AI is simply going to be filled with the same needs and methods of measuring "happiness" as we currently deal with, what is the point of hoping for it at all?

This is a bit of an afterthought, but even at our current level of intelligence, humans have no way of knowing if we would value pleasure if pain did not exist. Pain does now and has always existed. "Evil" (or what we perceive as evil) has existed since the dawn of recorded human existence. How can we assume that we are not already capable of recognizing pleasure as pleasure and good as good without their opposites to compare them to? We have never had the opportunity to try.

I beg to differ on the aspect of there being non-existence predating the creation. A subtle nuance in the first verse of Genesis offers an insight into this. Gen 1:1 "In the beginning God created the heavens and the earth. And the earth was without form, and void; and darkness was upon the face of the deep." Original manuscripts offer a translation that is closer to "and the earth 'became' without form (sic), and void". It may so very well be that in the assumption that God looked on his creation and saw that it was good, there was a pre-existential basis for this. Also to point out another simple example, there would be no record of wrong without a sort of legal system that says that an act is actually defined as wrong. I agree with the idea that there had to be an apple in the garden to bring to the front the difference between good and bad. Utopia can therefore only exist where there is an understanding or mere knowledge of dystopia.

I knew there would come a day when almost a decade of mandatory bible classes in private school would pay off. (That's not true, I've generally written it off as a really depressing waste of my mental resources... still) You've got the order of events in the Garden of Eden backwards. After God finished up and took off for Miller Time, Adam and Eve had nothing to do. They didn't need clothes or shelter, all animals were obedient and gentle, they had to live off fruit for eternity which would get old, the weather and season (singular) were always the same, and they were the only two people in existence with no concept of there ever being any more. Sure, they would have lived forever, but there was no challenge, inspiration, reason or stimulation. Only AFTER the forbidden fruit and the knowledge of good and evil does God start up Eve's biological clock and issue the 'be fruitful and multiply' command, society starts to develop, there's a ton of implicit incest (er... bonus?) and they can cook up a nice lamb shank to break up the monotony. Once again, the literal interpretation of the bible leaves a lot to be desired in a literary sense, because the Garden of Eden is one of the most depressing 'paradises' ever devised. Also, here I go again responding to many-years-cold comments.

[anonymous]:

"and they can cook up a nice lamb shank to break up the monotony."

Well, no. That's not until Noah is issued permission to eat meat after the Flood.

"because the Garden of Eden is one of the most depressing 'paradises' ever devised"

It's not that depressing. It's just a park. The depressing part is that God gets angry and says, "Oh, you don't want to spend 100% of all your existence in this park for all eternity with literally nothing else? FUCK YOU AND LITERALLY DIE." A good God would have allowed much larger portions of possible life-space to be explored with fewer or even no penalties.

Eden is indeed more interesting for having the Apple, but damnation is so totally uninteresting that religious people had to go and invent Redemption, which is the simpering and undignified version of having your cake and eating it too.

Apparently having 72 virgins at your disposal is a utopia for many. EY should look into this...

But an Eden with a reversible escape option is surely better than an Eden with a non-reversible escape option, yes?

Most religions believe that the escape option is reversible - otherwise there wouldn't be much point.

Roko:

@ Carl Shulman

Yes, I am aware that human "concepts" are acting as a big multiplier on how much you can wish for in a small number of words. But I want to know whether certain wishes make better or worse use of this, and I want to get some idea of exactly how much more a human can feasibly wish for.

I think that by using established human concepts to make a wish ("I want to understand and solve all current scientific problems"), you are able to constrain the future more, but you have less understanding of what you'll actually get. You trade in some safety and get more mileage.

Roko:

@ Nesov: "Roko, preferences are not flat..."

I don't quite understand what you're saying. Perhaps it would help if I attempt to make my own post a bit clearer.

@Roko: As I understood, one of the points you made was about how preferences of both individual people and humanity as a whole are quite coarse-grained, and so strong optimization of environment is pointless. Beyond certain precision, the choices become arbitrary, and so continuing systematic optimization, forcing choices to be non-arbitrary from the updated perspective, basically consists in incorporating noise into preferences.

I reply that a formula for pi can be written down in far fewer bytes than it would take to write out its decimal expansion to the 10,000th digit. A human embodying a description of morality, just like a note containing the formula for pi, can lack the capacity to imagine (compute) some deeper property of that description, and still precisely determine that property. What we need in both cases is a way around the limitations of the medium presenting the description, without compromising its content.
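
(A minimal sketch of the pi analogy, not from the thread: a program a handful of lines long completely determines the 10,000th digit of pi, even though nothing resembling that digit appears in its text. Machin's formula is used only because it is short; any correct formula makes the same point.)

```python
# pi via Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239),
# in plain integer arithmetic so arbitrarily many digits come out exactly.

def arctan_recip(x, one):
    """Approximately one * arctan(1/x), from the alternating Taylor series."""
    power = one // x            # one / x**(2k+1), starting at k = 0
    total = power
    x_squared = x * x
    n, sign = 3, -1
    while power != 0:
        power //= x_squared
        total += sign * (power // n)
        n, sign = n + 2, -sign
    return total

def pi_digits(ndigits, guard=10):
    """floor(pi * 10**ndigits): the digit 3 followed by ndigits decimals of pi."""
    one = 10 ** (ndigits + guard)        # guard digits absorb the truncation error
    pi_scaled = 16 * arctan_recip(5, one) - 4 * arctan_recip(239, one)
    return pi_scaled // 10 ** guard

print(str(pi_digits(10000))[-1])   # the 10,000th decimal digit of pi
```

A note containing this program fully determines that digit, in the sense above, without the note having any capacity of its own to compute it.
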

Roko:

@Vladimir:

Yes, you understood my message correctly, and condensed it rather well.

Now, what would it mean for human axiology to be like pi? A simple formula that unfolds into an "infinitely complex looking" pattern? Hmmm. This is an interesting intuition.

If we treat our current values as a program that will get run to infinity in the future, we may find that almost all of the future output of that program is determined by things that we don't really think of as being significant; for example, very small differences in the hormone levels in our brains when we first ask our wish granting machine for wishes.

I would only count those features of the future that are robust to very small perturbations in our psychological state to be truly the result of our prefs. On the other hand, features of the future that are entirely robust to our minds are also not the result of our prefs.

And still there is the question of what exactly this continued optimization would consist of. The 100th digit of pi makes almost no difference to its value as a number. Perhaps the hundredth day after the singularity will make almost no difference to what our lives are like in some suitable metric. Maybe it really will look like calculating the digits of pi: pointless after about digit number 10.

To satisfy the robustness criterion and this nonconvergence criterion seems hard.

If a computer program computes pi to 1,000,000 instead of 100 places, it doesn't make the result more dependent on thermal noise. You can run arbitrarily detailed abstract computations, without having the outcome depend on irrelevant noise. When you read a formula for pi from a note, differences in writing style don't change the result. AI should be only more robust.

Think of digits playing out in time, so that it's important to get each of them right at the right moment. Each later digit could be as important in the future as earlier digits now.

Roko:

@Vladimir:

It is an open question whether our values and our lives will behave more like you have described or not.

For a lot of people, the desire to conform and not to be too weird by current human standards might make them converge over time. These people will live in highly customized utopias that suit their political and moral views, e.g. Christians in a mini-world where everyone has had their mind altered so that they can't doubt God, can't commit any sin, etc. E.g. ordinary modern semi-hedonists who live in something like the Culture. (Like pi as a number we use for engineering: the digits after the 100th convey almost no new information)

For others, boredom and curiosity will push them out into new territory. But the nature of both of these emotions is to incorporate environmental noise into one's prefs. They'll explore new forms of existence, new bodies, new emotions, etc, which will make them recursively weirder. These people will behave like the co-ordinates of a chaotic system in phase space: they will display very very high sensitivity to initial conditions, chance events "oh look, a flock of birds. I wonder what it would be like to exist as a swarm intelligence. I know, I'll try it".

The only group of people who I can see behaving the way you want are scientists. We have an abstract desire to understand how the world works. We will alter ourselves to become more intelligent in order to do so, and we have no idea what we will discover along the way. We are surely in for surprises as big as the discovery of evolution and quantum mechanics. Each new level of intelligence and discovery will be truly new, but hopefully the nature of truth is an abstract universal invariant that doesn't depend upon the details of the path you take to get to it.

In essence, scientists are the only ones for whom long term optimization of our world has the kind of unbounded value that singularitarians want. Ordinary people will only get a limited amount of value out of a positive singularity. Thus their apathy about it is understandable.

Roko:

"Thus their apathy about it is understandable."

... given that they don't think it is very likely, and they discount the future.

Note that I used "scientist" in a very general sense: anyone who really wants to understand reality for the sake of understanding it, anyone who has that natural curiosity.

I want to be able easily to come back to this. Would you create a category of "Sequences", and post it to that, so that there is a link on the side bar? I think there is at least another such sequence.

Uni:

Eliezer_Yudkowsky wrote: "We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need to know how to tell that something is definitely not a person, and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living."

I do want AI models of people to be people. If an AI model of a human lacks the important part of the person that we call emotions, there is the risk that it will be a dangerously faulty model of that person. If it can't suffer, it can't understand why humans so eagerly want to get rid of suffering, and so it may in many situations think it knows better than us what is most important to achieve. It may regard the diminishing of suffering in humans just as irrational as we would regard turning the planet into paper clips.

From a utilitarian perspective, even from a negative utilitarian perspective, building AI models of humans and equipping them with human emotions and the ability to suffer would not necessarily be unethical. It might be what would in the long run minimize the suffering. Trying to minimize suffering too fast can be dangerous; it might take away something we really value, along with the suffering, even if we don't value the suffering in and of itself. Future superhuman AI will know how to get rid of suffering in a completely safe way, once they have become in every relevant way immensely smarter than humans - but perhaps only if the first transhuman AI we build are equipped with the ability to suffer just like humans. You often have to know a problem well before you can safely solve it, and you can't really know suffering unless you actually feel it. So let's not hesitate to create AI that can suffer (once we can create such AI).

Happiness will grow immensely and suffering will and should be abolished later on, but let's not rush it during the process of creating AI models of humans.

We're talking about giving the models subjective experience, not just "emotions". You want the AI to create conscious minds inside itself and torture them to find out whether torture is bad? And then again every time it makes a decision where torture is a conceivable outcome? I'd hope we can give the AI a model that accurately predicts how humans react to stimuli without creating a conscious observer. Humans seem to be able to do that, at least...

Beware of anthropomorphizing AIs. A Really Powerful Optimization Process shouldn't need to "suffer" for us to tell it what suffering is, and that we would like less of it.

Uni:

When we have gained total control of all the matter down to every single particle within, say, our galaxy, and found out exactly what kinds of combinations we need to put particles together in to maximize the amount of happiness produced per particle used (and per spacetime unit), then what if we find ourselves faced with the choice between 1) maximizing happiness short term but not getting control over more of the matter in the universe at the highest possible rate (in other words, not expanding maximally fast in the universe), and 2) maximizing said expansion rate at the cost of short term happiness maximization? What if this trade-off problem persists forever?

We might find ourselves in the situation where we, time after time, can either use all of our matter to maximize the pace at which we take control over more and more matter, creating no short term happiness at all, or create some non-zero amount of happiness short term at the expense of our ability to expand, and thereby to get much more happiness in the future. We might find that, hey, if we postpone being happy for one year, we can be ten times as happy next year as we would otherwise be able to be, and that's clearly better. And next year, we are again in the same situation: postponing being happy one more year again seems rational. Next year, same thing. And so on.

Suppose that kind of development would never end, unless we ended it by "cashing in" (choosing short term happiness before maximum development). Then when should we "cash in"? After how many years? Any finite number of years seems too small, since you could always add one extra year to further improve the expected long term happiness gain. On the other hand, the answer "in infinitely many years from now" is not appealing either, as an infinity of years never passes, by definition, meaning we would never choose to be happy. So, when would you "cash in" and choose to be happy? After how many years?

This is an interesting problem. The correct solution probably lies somewhere in the middle: allocate X of our resources to expansion, and 1-X of our resources to taking advantage of our current scope.

The maximum happy area for a happy rectangle is when both its happy sides are of equal happy length, forming a happy square.
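
(The arithmetic behind the "happy square" quip, spelled out under the simplest assumption - not stated above - that the payoff is proportional to the product of the expansion share $x$ and the enjoyment share $1-x$:)

$$\max_{0 \le x \le 1} x(1-x): \qquad \frac{d}{dx}\,x(1-x) = 1 - 2x = 0 \;\Longrightarrow\; x = \tfrac{1}{2},$$

just as a rectangle of fixed perimeter encloses the most area when it is a square. With compounding returns to expansion the real optimum need not sit at one half, but the product structure is why an all-or-nothing allocation does worst.
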

[anonymous]:

This is an awesome sequence.

[anonymous]:

There's a discount rate to money... How many years of your life would you have to get back for giving back everything you earned... the older you get, the smaller that number gets... when you're on your deathbed you will give up every dollar in the bank for a few more days... You realise as you get older that it matters less and less and less

  • 56 minutes into Tim Ferriss's second interview with Naval

This really drove home for me the face validity of the eudaimonia theory of hedonic value. Even if life were just like it is for me today, that bare supply of stimulation would feel better than the imagined counterfactual of non-experience.