What happens when your beliefs fully propagate

This is a very personal account of thoughts and events that have led me to a very interesting point in my life. Please read it as such. I present a lot of points, arguments, conclusions, etc., but that's not what this is about.

I started reading LW around spring of 2010. I was at the rationality minicamp last summer (2011). The night of February 10, 2012, all the rationality learning and practice finally caught up with me. Like water that had been building up behind a dam, it finally broke through and flooded my poor brain.

"What if the Bayesian Conspiracy is real?" (By Bayesian Conspiracy I just mean a secret group that operates within and around LW and SIAI.) That is the question that set it all in motion. "Perhaps they left clues for those that are smart enough to see them. And to see those clues, you would actually have to understand and apply everything that they are trying to teach." The chain of thoughts that followed (conspiracies within conspiracies, shadow governments and Illuminati) is too ridiculous to repeat, but it all ended up with one simple question: How do I find out for sure? And that's when I realized that almost all the information I have has been accepted without so much as an ounce of verification. So little of my knowledge has been tested in the real world. In that moment I achieved a sort of enlightenment: I realized I don't know anything. I felt a dire urge to regress to the very basic questions: "What is real? What is true?" And then I laughed, because that's exactly where The Sequences start.

Through the turmoil of jumbled and confused thoughts came a shock: my most valuable belief propagating through my mind, breaking down final barriers, reaching its logical conclusion. FAI is the most important thing we should be doing right now! I already knew that. In fact, I had known it for a long time, but I didn't... what? Feel it? Accept it? Visualize it? Understand the consequences? I think I didn't let that belief propagate to its natural conclusion: I should be doing something to help this cause.

I can't say: "It's the most important thing, but..." Yet, I've said it so many times inside my head. It's like hearing other people say: "Yes, X is the rational thing to do, but..." What follows is a defense that allows them to keep the path to their goal that they are comfortable with, that they are already invested in.

Interestingly enough, I've already thought about this. Right after rationality minicamp, I asked myself the question: Should I switch to working on FAI, or should I continue to make games? I thought about it heavily for some time, but I felt like I lacked the necessary math skills to be of much use on the FAI front. Making games was the convenient answer. It's something I've been doing for a long time; it's something I am good at. I decided to make games that explain various ideas that LW presents in text. This way I could help raise the sanity waterline. It seemed like a very nice, neat solution that allowed me to do what I wanted and feel a bit helpful to the FAI cause.

Looking back, I was dishonest with myself. In my mind, I had already written the answer I wanted. I convinced myself that I hadn't, but part of me certainly sabotaged the whole process. But that's okay, because I was still somewhat helpful, even if maybe not in the most optimal way. Right? Right?? The correct answer is "no". So, now I have to ask myself again: What is the best path for me? And to answer that, I have to understand what my goal is.

Rationality doesn't just help you get what you want better/faster. Increased rationality starts to change what you want. Maybe you wanted the air to be clean, so you bought a hybrid. Sweet. But then you realized that what you actually want is for people to be healthy. So you became a nurse. That's nice. Then you realized that if you did research, you could be making an order of magnitude more people healthier. So you went into research. Cool. Then you realized that you could pay for multiple researchers if you had enough money. So you went out, became a billionaire, and created your own research institute. Great. There was always you, and there was your goal, but everything in between was (and should be) up for grabs.

And if you follow that kind of chain long enough, at some point you realize that FAI is actually the thing right before your goal. Why wouldn't it be? It solves everything in the best possible way!

People joke that LW is a cult. Everyone kind of laughs it off. It's funny because cultists are weird and crazy, but they are so sure they are right. LWers are kind of like that. Unlike other cults, though, we are really, truly right. Right? But, honestly, I like the term, and I think it has a ring of truth to it. Cultists have a goal that's beyond them. We do too. My life isn't about my preferences (I can change those), it's about my goals. I can change those too, of course, but if I'm rational (and nice) about it, I feel that it's hard not to end up wanting to help other people.

Okay, so I need a goal. Let's start from the beginning:

What is truth?

Reality is truth. It's what happens. It's the rules that dictate what happens. It's the invisible territory. It's the thing that makes you feel surprised.

(Okay, great, I won't have to go back to reading Greek philosophy.)

How do we discover truth?

So far, the best method has been the scientific method. It has also proved itself over and over again by providing actual, tangible results.

(Fantastic, I won't have to reinvent the thousands of years of progress.)

Soon enough humans will commit a fatal mistake.

This isn't a question, it's an observation. Technology is advancing on all fronts to the point where it can be used on a planetary (and wider) scale. Humans make mistakes. Making a mistake with something that affects the whole world could result in injury or death... for the planet (and potentially beyond).

That's bad.

To be honest, I don't have a strong visceral negative feeling associated with all humans becoming extinct. It doesn't feel that bad, but then again I know better than to trust my feelings on such a scale. However, if I had to simply push a button to make one person's life significantly better, I would do it. And I would keep pushing that button for each new person. For something like 222 years, by my rough calculations. Okay, then. Humanity injuring or killing itself would be bad, and I can probably spend a century or so trying to prevent that, while also doing something that's a lot more fun than mashing a button.
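(The "222 years" figure checks out as back-of-the-envelope arithmetic. A quick sketch; the inputs are my assumptions rather than anything stated above: roughly 7 billion people, one button press per second, no breaks.)

```python
# Back-of-the-envelope check of the "222 years" figure.
# Assumed inputs (mine, not stated in the post): ~7 billion people,
# one button press per second, pressing nonstop.
population = 7_000_000_000
seconds_per_year = 60 * 60 * 24 * 365.25  # one press per second

years_of_pressing = population / seconds_per_year
print(round(years_of_pressing))  # -> 222
```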

We need a smart safety net.

Not only smart enough to know that triggering an atomic bomb inside a city is bad, or that you get the grandma out of a burning building by teleporting her in one piece to a safe spot, but also smart enough to know that if I keep snoozing every day for an hour or two, I'd rather someone stepped in and stopped me, no matter how much I want to sleep JUST FIVE MORE MINUTES. It's something I might actively fight, but it's something that I'll be grateful for later.

FAI

There it is: the ultimate safety net. Let's get to it?

Having FAI will be very, very good; that's clear enough. Getting FAI wrong will be very, very bad. But there are different levels of bad, and, frankly, a universe tiled with paperclips is actually not that high on the list. Having an AI that treats humans as special objects is very dangerous. An AI that doesn't care about humans will not do anything to humans specifically. It might borrow a molecule, or an arm or two from our bodies, but that's okay. An AI that treats humans as special, yet is not Friendly, could be very bad. Imagine 3^^^3 different people being created and forced to live really horrible lives. It's hell on a whole other level. So, if FAI goes wrong, pure destruction of all humans is a pretty good scenario.

Should we even be working on FAI? What are the chances we'll get it right? (I remember Anna Salamon's comparison: "getting FAI right" is like "trying to make the first atomic bomb explode in the shape of an elephant" would have been a century ago.) What are the chances we'll get it horribly wrong and end up in hell? By working on FAI, how are we changing the probability distribution over the various outcomes? Perhaps a better alternative is to seek a decisive advantage like brain uploading, where a few key people can take a century or so to think the problem through?

I keep thinking about FAI going horribly wrong, and I want to scream at the people who are involved with it: "Do you even know what you are doing?!" Everything is at stake! And suddenly I care. Really care. There is curiosity, yes, but it's so much more than that. At LW minicamp we compared curiosity to a cat chasing a mouse. It's a kind of fun, playful feeling. I think we got it wrong. The real curiosity feels like hunger. The cat isn't chasing the mouse to play with it; it's chasing it to eat it because it needs to survive. Me? I need to know the right answer.

I finally understand why SIAI isn't focusing very hard on the actual AI part right now, but is instead pouring most of its efforts into recruiting talent. The next 50-100 years are going to be a marathon for our lives. Many participants might not make it to the finish line. It's important that we establish a community that can continue to carry the research forward until we succeed.

I finally understand why, when I was talking with Carl Shulman about making games that help people be more rational, his value metric was how many academics the games could impact/recruit. That didn't make sense to me. I just wanted to raise the sanity waterline for people in general. I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. Another is to make certain key people a bit more sane: hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.

I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.

I've realized a lot of things lately. A lot of things have been shaken up. It has been a very stressful couple of days. I'll have to re-answer the question I asked myself not too long ago: What should I be doing? And this time, instead of hoping for an answer, I'm afraid of the answer. I'm truly and honestly afraid. Thankfully, I can fight pushing a lot better than pulling: fear is easier to fight than passion. I can plunge into the unknown, but it breaks my heart to put aside a very interesting and dear life path.

I've never felt more afraid, more ready to fall into a deep depression, more ready to scream and run away, retreat, abandon logic, go back to the safe comfortable beliefs and goals. I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it. Armed with my rationality toolkit, I could probably do wonders in that field.

Yet, I've also never felt more ready to make a step of this magnitude. Maximizing utility, all the fallacies, biases, defense mechanisms, etc, etc, etc. One by one they come to mind and help me move forward. Patterns of thoughts and reasoning that I can't even remember the name of. All these tools and skills are right here with me, and using them I feel like I can do anything. I feel that I can dodge bullets. But I also know full well that I am at the starting line of a long and difficult marathon. A marathon that has no path and no guides, but that has to be run nonetheless.

May the human race win.

Comments


Does everyone else here think that putting aside your little quirky interests to do big important things is a good idea? It seems to me that people who choose that way typically don't end up doing much, even when they're strongly motivated, while people who follow their interests tend to become more awesome over time. Though I know Anna is going to frown on me for advocating this path...

Though I know Anna is going to frown on me for advocating this path...

Argh, no, I'm not going to advocate ignoring one's quirky interests to follow one's alleged duty. My impression is more like fiddlemath's, below. You don't want to follow shiny interests at random (though even that path is much better than drifting randomly or choosing a career to appease one's parents, and cousin_it is right that even this tends to make people more awesome over time). Instead, ideally, you want to figure out what it would be useful to be interested in; cultivate real, immediate curiosity and urges to be interested in those things; and work to update your anticipations and urges so that they know more of what your abstract/verbal reasoning knows, and can see why certain subjects are pivotal…

Not "far-mode reasoning over actual felt interests" but "far-mode reasoning in dialog with actual felt interests", with both goals and urges relating strongly to what you end up actually trying to do, so that you develop new quirky interests in the questions you need to answer, the way one develops quirky interests in almost any question if one is willing to dwell on it patiently for a long time, staring with intrinsic interest while the details of the question come out to inhabit your mind...

I find this comment vague and abstract, do you have examples in mind?

I think the flowchart for thinking about this question should look something like:

  1. In a least convenient possible world where following your interests did not maximize utility, are you pretty sure you really would forgo your personal interests to maximize utility? If no, go to 2; if yes, go to 3.

  2. Why are you even thinking about this question? Are you just trying to come up with a clever argument for something you're going to do anyway?

  3. Okay, now you can think about this question.

I can't answer your question because I've never gotten past 2.

I mostly-agree, except that question 1 shouldn't say:

"In a least convenient world, would you utterly forgo all interest in return for making some small difference to global utility".

It should say: "… is there any extent to which impact on strangers' well-being would influence your choices? For example, if you were faced with a choice between reading a chapter of a kind-of-interesting book with no external impact, or doing chores for an hour and thereby saving a child's life, would you sometimes choose the latter?"

If the answer to that latter question is yes -- if expected impact on others' well-being can potentially sway your actions at some margin -- then it is worth looking into the empirical details, and seeing what bundles of global well-being and personal well-being can actually be bought, and how attractive those bundles are.

impact on strangers' well-being

I object to this being framed as primarily about others versus self. I pursue FAI for the perfectly selfish reason that it maximizes my expected life span and quality. I think the conflict being discussed is about near interest conflicting with far interest, and how near interest creates more motivation.

Why are you even thinking about this question?

Because even if we don't have the strength or desire to willingly renounce all selfishness, we recognize that better versions of ourselves would do so, and that perhaps there's a good way to make some lifestyle changes that look like personal sacrifices but are actually net positive (and even more so when we nurture our sense of altruism)?

That's why it's a very important skill to become interested in what you should be interested in. I made a conscious decision to become interested in what I'm working on now because it seemed like an area full of big low-hanging fruit, and now it genuinely fascinates me.

How to become really interested in something?

I would suggest spending time with people interested in X, because this would give one's brain the signal "X is socially rewarded", which would motivate them to do X. Any other good ideas?

What worked for me was to spend time thinking about the types of things I could do if it worked right, and feeling those emotions while trying to figure out rough paths to get there.

I also chose to strengthen the degree to which I identify as someone who can do this kind of thing, so it felt natural.

I'm spitballing different ideas I've used:

  • Like you said, talk to people who know the topic and find it interesting.

  • Read non-technical introductory books on the topic. I found the algorithms part of CS interesting, but the EE dimensions of computing were utterly boring until I read Code by Charles Petzold.

  • Research the history of a topic in order to see the lives of the humans who worked on it. Humans, being social creatures, may find a topic more interesting after they have learned of some of the more interesting people who have worked in that field.

I expect that completely ignoring your quirky interests leads to completely destroying your motivation for doing useful work. On the other hand, I find myself demotivated, even from my quirky interests, when I haven't done "useful" things recently. I constantly question "why am I doing what I'm doing?" and feel pretty awful, and completely destroy my motivation for doing anything at all.

But! Picking from "fiddle with shiny things" and "increase global utility" is not a binary decision. The trick is to find a workable compromise between the ethics you endorse and the interests you're drawn to, so that you don't exhaust yourself on either, and gain "energy" from both. Without some sort of deep personal modification, very few people can usefully work 12 hours a day, 7 days a week, at any one task. You can, though, spend about four hours a day on each of three different projects, so long as they're sufficiently varied, and they provide an appropriate mixture of short-term and long-term goals, near and far goals, personal time and social time, and seriousness and fun.

First, making games isn't a little quirky interest. Second, I don't necessarily have to put it aside. My goal is to contribute to FAI. I will have to figure out the best way to do that. If I notice that whatever I try, I fail at because I can't summon enough motivation, then maybe making games is the best option I've got. But the point is that I have to maximize my contribution.

A couple of years ago, I'd side with Anna. Today, I'm more inclined to agree with you. As I learned the hard way, intrinsic motivation is extremely important for me.

(Long story short: I have a more than decent disposable income, which I earned by following my "little quirky interests". I could use this income for direct regular donations, but instead I decided to invest it, along with my time, in a potentially money-making project I had little intrinsic motivation for. I'm still evaluating the results, but so far it's likely that I'll make intrinsic motivation mandatory for all my future endeavors.)

Doing what's right is hard and takes time. For a long time I've been of the opinion that I should do what's most important and let my little quirky interests wither on the vine, and that's what I've done. But it took many years to get it right, not because of issues of intrinsic motivation, but because I'm tackling hard problems and it was difficult to even know what I was supposed to be doing. But once I figured out what I'm doing, I was really glad I'd taken the risk, because I can't imagine ever returning to my little quirky interests.

I think it involves a genuine leap into the unknown. For example, even if you decide that you should dedicate your life to FAI, there's still the problem of figuring out what you should be doing. It might take years to find the right path, and you'll probably have doubts about whether you made the right decision until you've found it. It's a vocation fraught with uncertainty; you might have several false starts, and you might even discover that FAI is not the most important thing after all. Then you've got to start over.

Should everyone be doing it? Probably not. Is there a good way to decide whether you should be doing it or not? I doubt it. I think what really happens is you start going down that road and there's a point where you can't turn back.

My trouble is that my "little quirky interests" are all I really want to do. It can be a bit hard to focus on all the things that must get done when I'd much rather be working on something totally unrelated to my "work".

I'm not sure how to solve that.

Replace FAI with Rapture and LW with Born Again, and you can publish this "very personal account" full of non-sequiturs on a more mainstream site.

Replace FAI with Rapture and LW with Born Again

And "rational" with "faithful", and "evidence" with "feeling", and "thought about" with "prayed for", etc. With that many substitutions, you could turn it into just about anything.

Thanks, I'm actually glad to see your kind of comment here. The point you make is something I am very wary of, since I've had dramatic swings like that in the past. From Christianity to Buddhism, back to Christianity, then to Agnosticism. Each one felt final, each one felt like the most right and definite step. I've learned not to trust that feeling, to be a bit more skeptical and cautious.

You are correct that my post was full of non-sequiturs. That's because I wrote it in a stream-of-thought kind of way. (I've also omitted a lot of thoughts.) It wasn't meant to be an argument for anything other than "think really hard about your goals, and then do the absolute best to fulfill them."

tl;dr: If you can spot non-sequiturs in your writing, and you put a lot of weight on the conclusion it's pointing at, it's a really good idea to take the time to fill in all the sequiturs.

Writing an argument in detail is a good way to improve the likelihood that your argument isn't somewhere flawed. Consider:

  • Writing allows reduction. By pinning the argument to paper, you can separate each logical step, and make sure that each step makes sense in isolation.
  • Writing gives the argument stability. For example, the argument won't secretly change when you think about it while you're in a different mood. This can help to prevent you from implicitly proving different points of your argument from contradictory claims.
  • Writing makes your argument vastly easier to share. As in open source software, enough eyeballs make all bugs shallow.

Further, notice that we probably underestimate the value of improving our arguments, and are overconfident in apparently-solid logical arguments. If an argument contains 20 inferences in sequence, and you're wrong about such inferences 5% of the time without noticing the misstep, then you have about a 64% chance of being wrong somewhere in the argument. If you can reduce your chance of a misstep in logic to 1% per inference, then you only have an 18% chance of being wrong somewhere. Improving the reliability of the steps in your arguments, then, has a high value of information -- even when 1% and 5% both feel like similar amounts of uncertainty. (This is the conjunction fallacy at work.) It is probable, then, that we underestimate the value of information attained by subjecting ourselves to processes that improve our arguments.
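The arithmetic here is just repeated multiplication of per-step reliabilities. A minimal sketch (the 20-step, 5%, and 1% figures come from the paragraph above; the function name is mine):

```python
# Chance that at least one of n sequential inference steps is wrong,
# assuming each step is independently correct with probability p.
def chance_wrong_somewhere(p_step_correct, n_steps=20):
    return 1 - p_step_correct ** n_steps

print(f"{chance_wrong_somewhere(0.95):.0%}")  # -> 64%
print(f"{chance_wrong_somewhere(0.99):.0%}")  # -> 18%
```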

If being wrong about an argument is highly costly -- if you would stand to lose much by believing incorrectly -- then it is well worth writing these sorts of arguments formally, and ensuring that you're getting them right.

All that said... I suspect I know exactly what you're talking about. I haven't performed a similar, convulsive update myself, but I can practically feel the pressure for it in my own head, growing. I fight that update longer than parts of me think I should, because I'm afraid of strong mental attractors. If you can write the sound, solid argument publicly, I will be happy to double-check your steps.

Yes. Even if this one is right, you're still running on corrupt hardware and need to know when to consciously lower your enthusiasm.

The problem with this argument is that you've spent so much emotional effort arguing why the world is screwed without FAI that you've neglected to hold the claim "The FAI effort currently being conducted by SIAI is likely to succeed in saving the world" to the standards of evidence you would otherwise demand.

Consider the following exercise in leaving a line of retreat: suppose Omega told you that SIAI's FAI project was going to fail. What would you do?

I wasn't making any arguments to the effect that SIAI is likely to succeed in saving the world, or even that they are the best option for FAI. (In fact, I have a lot of doubts about it.) That's a really complicated argument, and I really don't have enough information to make a statement like that. As I've said, my goal is to make FAI happen. If SIAI isn't the best option, I'll find another best option. If it turns out that FAI is not really what we need, then I'll work on whatever it is we do need.

There's a lot to process here, but: I hear you. As you investigate your path, just remember that a) paths that involve doing what you love should be favored as you decide what to do with yourself, because depression and boredom do not a productive person make, and b) if you can make a powerful impact in gaming, you can still translate that impact into an impact on FAI by converting your success and labor into dollars. I expect these are clear to you, but bear mentioning explicitly.

These decisions are hard but important. Those who take their goals seriously must choose their paths carefully. Remember that the community is here for you, so you aren't alone.

Thanks! :) I'm fully aware of both points, but I definitely appreciate you bringing them up. You're right, depression and boredom are not good. I sincerely doubt boredom will be a problem, and as for depression, it's something I'll have to be careful about. Thankfully, there are things in life that I like doing aside from making games.

Yes, I could convert that success into dollars, but as I've mentioned in my article, that's probably not the optimal way of making money. (It still might be, I'd have to really think about it, but I'd definitely have to change my approach if that's what I decided to do.)

I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of may be tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.

Eliezer has said that he doesn't know how to usefully spend more than 10 million dollars...

I wish you well, but be wary. I would guess that many of us on this site had dreams of saving the world when younger, and there is no doubt that FAI appeals to that emotion. If the claims of the SI are true, then donating to them will mean you contributed to saving the world. Be wary of the emotions associated with that impulse. It's very easy for the brain to pick out one train of thought and ignore all others; those doubts you admit to may not be entirely unreasonable. Before making drastic changes to your lifestyle, give it a while. Listen to skeptical voices. Read the best arguments as to why donating to SI may not be a good idea (there are some on this very site).

If you are convinced after some time to think that helping the SI is all you want to do with life, then, as Villiam suggests, do something you love to promote it. Donate what you can spare to SI, and keep on doing what makes you happy, because I doubt you will be more productive doing something that makes you miserable. So make those rational board games, but make some populist ones too, because while the former may convert, the latter might generate more income to allow you to pay someone else to convert people.

Yes, I probably need a healthy dose of counter-arguments. Can you link any? (I'll do my own search too.)

I have to admit that no particular examples come to mind, but they usually appear in the comment threads on topics such as optimal giving, and in occasional posts arguing against the probability of the singularity. I certainly have seen some, but can't remember where exactly, so any search you do will probably be as effective as my own. To present you with a few possible arguments (which I believe to varying degrees of certainty):

  • A lot of the arguments for becoming committed to donating to FAI are based on "even if there's a low probability of it happening, the expected gains are incredibly huge". I'm wary of this argument because I think it can be applied anywhere. For instance, even now, and certainly 40 years ago, one could make a credible argument that there's a not-insignificant chance of a nuclear war eradicating human life from the planet. So we should contribute all our money to organisations devoted to stopping nuclear war.

  • This leads directly to another argument: how effective do we expect the SI to be? Is friendly AI possible? Are SI going to be the ones to find it? If SI creates friendliness, will it be implemented? If I had devoted all my money to the CND, I would not have had a significant impact on the proliferation of nuclear weaponry.

  • A lot of the claims based on a singularity assume that intelligence can solve all problems. But there may be hard limits to the universe. If the speed of light is the limit, then we are trapped with finite resources, and maybe there is no way for us to use them much more efficiently than we can now. Maybe cold fusion isn't possible; maybe nanotechnology can't get much more sophisticated.

  • Futurism is often inaccurate. The jokes about "where's my hover car" are relevant: progress over the last 200 years has rocketed in some spheres but slowed in others. For instance, medical advances have been slowing recently. They might jump forwards again, but maybe not. Predicting which bits of science will advance on a certain time scale is unreliable.

  • Intelligence might have a hard limit, or exponentially diminishing returns. It could be argued that we might be able to wire up millions of humanlike intelligences in a computer array, but that might hit physical limits.

Oh, wow. I was reading your description of your experiences in this, and I was like, "Oh, wow, this is like a step-by-step example of brainwashing. Yup, there's the defreezing, the change while unfrozen, and the resolidification."

It's certainly what it feels like from the inside as well. I'm familiar with that feeling, having gone through several indoctrinations in my life. This time I am very wary of rushing into anything, or claiming that this belief is absolutely right, or anything like that. I have plenty of skepticism; however, not acting on what I believe to be correct would be very silly.

I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it.

Good enough to make billions and/or impact/recruit many (future) academics? Then do it! Use your superpowers to do it better than before.

And if you are not good enough, then what else will you do? Will you be good enough at that other thing? You should not replace one thing with another just for the sake of replacing it, but because doing so increases your utility. You should be able to do more in the new area, or the new area should be so significant that even if you do less, the overall result is better.

I have an idea, though I am not sure if it is good or if you will like it. From the reviews it seems to me that you are a great storyteller (except for writing dialogue), but your weak point is game mechanics. And since you have made games, you are obviously good at programming. So I would suggest focusing on the mechanical part and, for a moment, forgetting about stories. People at SIAI are preparing a rationality curriculum; they are trying to make exercises that will help people improve some of their skills. I don't know how far along they are, but if they already have something... is there a chance you could make a program (not a game, yet) that students could use? Focus on function, not form; make useful software, not a game. So, step 1, you do something immediately useful for raising the sanity baseline. Step 2, after the rationality curriculum is ready and you have made a dozen programs, then think about a game where you could reuse these exercises as part of the game mechanics. And invite other people to help you (you already know you need someone to help with dialogue, and you probably also need game testers). My point is: first do something useful for teaching rationality, and only then make it a game. Thus the game will not only be about rationality, but will actually teach some parts of rationality -- and that will also be a great recruiting tool, because if someone liked doing it in the game, then LW will be like an improved version of the game, except that it is also real.

Here's what I was thinking as I read this: Maybe you need to reassess cost/benefits. Apply the Dark Arts to games and out-Zynga Zynga. Highly addictive games with in-game purchases designed using everything we know about the psychology of addiction, reward, etc. Create negative utility for a small group of people, yes, but syphon off their money to fund FAI.

I think if I really, truly believed FAI was the only and right option I'd probably do a lot of bad stuff.

You might want to read through some decision theory stuff and ponder it for a while. Also, even before that, please consider the possibility that your political instincts are optimized to get a group of primates to change a group policy in a way you prefer while surviving the likely factional fight. If you really want to be effective here or in any other context requiring significant coordination with numerous people, it seems likely to me that you'll need to adjust your goal directed tactics so that you don't leap into counter-productive actions the moment you decide they are actually worth doing.

Baby steps. Caution. Lines of retreat. Compare and contrast your prediction with: the valley of bad rationality.

I have approached numerous intelligent and moral people who are perfectly capable of understanding the basic pitch for singularity activism but who will not touch it with a ten-foot pole because they are afraid to be associated with anything that has so much potential to appeal to the worst sorts of human craziness. Please do something other than confirm these bleak but plausible predictions.

In re FAI vs. snoozing: What I'd hope from an FAI is that it would know how much rest I needed. Assuming that you don't need that snoozing time at all strikes me as a cultural assumption that theories (in this case, possibly about willpower, productivity, and virtue) should always trump instincts.

A little about hunter-gatherer sleep. What I've read elsewhere is that with an average of 12 hours of darkness and an average need for 8 hours of sleep, hunter-gatherers would not only have different circadian rhythms (teenagers tend to run late, old people tend to run early), but a common pattern was to spend some hours in the middle of the night on talk, sex, and/or contemplation. To put it mildly, this pattern is not available to the vast majority of modern people, and we don't know what, if anything, this is costing us.

I think of FAI as being like gorillas trying to invent a human-- a human which will be safe for gorillas, but I may be unduly pessimistic.

I'm inclined to think that raising the sanity waterline is more valuable than you do for such a long-range project: FAI is so dependent on a small number of people, and I think it will continue to be. Improved general conditions raise the odds that someone who would be really valuable doesn't have their life screwed up early.

On the other hand, this is a "by feel" argument, and I'm not sure what I might be missing.

Leave out "artificial" - what would constitute a "human-friendly intelligence"? Humans don't. Even at our present intelligence we're a danger to ourselves.

I'm not sure "human-friendly intelligence" is a coherent concept, in terms of being sufficiently well-defined (as yet) to say things about. The same way "God" isn't really a coherent concept.

A question that I'm really curious about: Has anyone (SIAI?) created a roadmap to FAI? Luke talks about granularizing all the time. Has it been done for FAI? Something like: build a self-sustaining community of intelligent rational people, have them work on problems X, Y, Z. Put those solutions together with black magic. FAI.

Lukeprog's So You Want to Save the World is sort of like a roadmap, although it's more of a list of important problems with a "strategies" section at the end, including things like raising the sanity waterline.

Sometimes, it feels like part of me would take over the world just to get people to pay attention to the danger of UFAI and the importance of Friendliness. Figuratively speaking. And part of me wants to just give up and get the most fun out of my life until death, accepting our inevitable destruction because I can't do anything about it.

So far, seems like the latter part is winning.

Both a little extreme. :) There are little things you can do. I've been donating to SIAI for a while, so that's a good start. Take on as much as you can bear responsibly.

I recently had a very similar realization and accompanying shift of efforts. It's good to know others like you have as well.

A couple of principles I'm making sure to follow (which may be obvious, but I think are worth pointing out):

  1. Happier people are more productive, so it is important to apply a rational effort toward being happy (e.g. by reading and applying the principles in "The How of Happiness"). This is entirely aside from valuing happiness in itself. The point is that I am more likely to make FAI happen if I make myself happier, as a matter of human psychology. If the reverse were true, and happiness made me less effective, I would apply a rational effort to make myself less happy instead.

  2. In line with #1, be aware of the risks of stress, boredom, and burnout. If I hate a certain task, then, even though in most cases it may be the best choice for working toward FAI, it may not be in my case. At the same time, be aware of when this is just an excuse, and when it's possible to change so as to actually enjoy things I otherwise wouldn't have.

I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. The other is to make certain key people a bit more sane: hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.

There's another possible scenario: The AI Singularity isn't far, but it is not very near, either. AGI is a generation or more beyond our current understanding of minds, and FAI is a generation or more beyond our current understanding of values. We're making progress; and current efforts are on the critical path to success — but that success may not come during our lifetimes.

Since this is a possible scenario, it's worth having insurance against it. And that means making sure that the next generation are competent to carry on the effort, and themselves survive to make it.

Cultivating a culture of rationality, awareness of existential risks, etc. is surely valuable for that purpose, too.

As a relatively new member of this site, I'm having trouble grasping this particular reasoning and motivation for participating in FAI. I've browsed Eliezer's various writings on the subject of FAI itself, so I have a vague understanding of why FAI is important, and such a vague understanding is enough for me to conclude that FAI is one of the most important topics, if not the most important, that currently needs to be discussed. This belief may not be entirely my own; it is perhaps largely influenced by the number of comments and posts in support of FAI, in conjunction with my lack of knowledge in the area.

With this lack of understanding, I think it is clear /why/ I haven't given up my life to support FAI. But it seems to me that many others on this site know much, much more about the subject, and they still have not given up their lives for FAI.

So my brain has made an equivalence between supporting FAI and other acts of extreme charity. I think highly of those who work for years in impoverished countries battling local calamities, but I don't find myself very motivated to participate. From my observations, I think this is because I have never heard of anyone with the goal of saving the world actually making significant progress in that direction. However, I have heard of many people who have made the world a better place while never exhibiting such lofty motivations.

I guess this is similar to cousin_it's response in that it seems strange to me to pursue something because it is a "big important problem". But I am also worried about the following line of reasoning:

Motivation to participate in FAI => motivation to do charitable work => I should be motivated to do all sorts of charitable work.

This seems like it would become reality only if my interests were aligned with the charitable work. In the OP's reasoning, is the motivation to save the world enough to align interest with work? To me, it seems analogous to the effect of a sugar high on your energy level.

If I did find myself working with FAI, it would probably be because I found that these were interesting problems to solve, and not because I wanted to save the world.

Even when you understand that FAI is the most important thing to be doing, there are many ways in which you can fail to translate that into action.

It seems most people are assuming that I'll suddenly start doing really boring work that I hate. That's not the case. I have to maximize my benefit, which means considering all the factors. I can't be productive at something I'm just bad at, or something I really hate, so I won't do that. But there are plenty of things that I'm somewhat interested in and somewhat familiar with that would probably do a lot more to help with FAI than making games. Again, though, it's something that has to be carefully determined. That's all I was trying to say in this post: I have an important goal -> I need to really consider what the best way to achieve that goal is.