Followup to: Affective Death Spirals, My Wild and Reckless Youth

My parents always used to downplay the value of intelligence.  And play up the value of—effort, as recommended by the latest research?  No, not effort.  Experience.  A nicely unattainable hammer with which to smack down a bright young child, to be sure.  That was what my parents told me when I questioned the Jewish religion, for example.  I tried laying out an argument, and I was told something along the lines of:  "Logic has limits, you'll understand when you're older that experience is the important thing, and then you'll see the truth of Judaism."  I made one attempt to question Judaism in school, got slapped down, and didn't try again.  I've never been a slow learner.

Whenever my parents were doing something ill-advised, it was always, "We know better because we have more experience.  You'll understand when you're older: maturity and wisdom are more important than intelligence."

If this was an attempt to focus the young Eliezer on intelligence uber alles, it was the most wildly successful example of reverse psychology I've ever heard of.

But my parents aren't that cunning, and the results weren't exactly positive.

For a long time, I thought that the moral of this story was that experience was no match for sheer raw native intelligence.  It wasn't until a lot later, in my twenties, that I looked back and realized that I couldn't possibly have been more intelligent than my parents before puberty, with my brain not even fully developed.  At age eleven, when I was already nearly a full-blown atheist, I could not have defeated my parents in any fair contest of mind.  My SAT scores were high for an 11-year-old, but they wouldn't have beaten my parents' SAT scores in full adulthood.  In a fair fight, my parents' intelligence and experience could have stomped any prepubescent child flat.  It was dysrationalia that did them in; they used their intelligence only to defeat itself.

But that understanding came much later, when my intelligence had processed and distilled many more years of experience. 

The moral I derived when I was young was that anyone who downplayed the value of intelligence didn't understand intelligence at all.  My own intelligence had affected every aspect of my life and mind and personality; that was massively obvious, seen at a backward glance.  "Intelligence has nothing to do with wisdom or being a good person"—oh, and does self-awareness have nothing to do with wisdom, or being a good person?  Modeling yourself takes intelligence.  For one thing, it takes enough intelligence to learn evolutionary psychology.

We are the cards we are dealt, and intelligence is the unfairest of all those cards.  More unfair than wealth or health or home country, unfairer than your happiness set-point.  People have difficulty accepting that life can be that unfair; it's not a happy thought.  "Intelligence isn't as important as X" is one way of turning away from the unfairness, refusing to deal with it, thinking a happier thought instead.  It's a temptation, both to those dealt poor cards, and to those dealt good ones.  Just as downplaying the importance of money is a temptation both to the poor and to the rich.

But the young Eliezer was a transhumanist.  Giving away IQ points was going to take more work than if I'd just been born with extra money.  But it was a fixable problem, to be faced up to squarely, and fixed.  Even if it took my whole life.  "The strong exist to serve the weak," wrote the young Eliezer, "and can only discharge that duty by making others equally strong."  I was annoyed with the Randian and Nietzschean trends in SF, and as you may have grasped, the young Eliezer had a tendency to take things too far in the other direction.  No one exists only to serve.  But I tried, and I don't regret that.  If you call that teenage folly, it's rare to see adult wisdom doing better.

Everyone needed more intelligence.  Including me, I was careful to pronounce.  Far be it from me to declare a new world order with myself on top—that was what a stereotyped science fiction villain would do, or worse, a typical teenager, and I would never have allowed myself to be so clichéd.  No, everyone needed to be smarter.  We were all in the same boat:  A fine, uplifting thought.

Eliezer1995 had read his science fiction.  He had morals, and ethics, and could see the more obvious traps.  No screeds on Homo novis for him.  No line drawn between himself and others.  No elaborate philosophy to put himself at the top of the heap.  It was too obvious a failure mode.  Yes, he was very careful to call himself stupid too, and never claim moral superiority.  Well, and I don't see it so differently now, though I no longer make such a dramatic production out of my ethics.  (Or maybe it would be more accurate to say that I'm tougher about when I allow myself a moment of self-congratulation.)

I say all this to emphasize that Eliezer1995 wasn't so undignified as to fail in any obvious way.

And then Eliezer1996 encountered the concept of the Singularity.  Was it a thunderbolt of revelation?  Did I jump out of my chair and shout "Eurisko!"?  Nah.  I wasn't that much of a drama queen.  It was just massively obvious in retrospect that smarter-than-human intelligence was going to change the future more fundamentally than any mere material science.  And I knew at once that this was what I would be doing with the rest of my life, creating the Singularity.  Not nanotechnology like I'd thought when I was eleven years old; nanotech would only be a tool brought forth of intelligence.  Why, intelligence was even more powerful, an even greater blessing, than I'd realized before.

Was this a happy death spiral?  As it turned out later, yes: that is, it led to the adoption even of false happy beliefs about intelligence.  Perhaps you could draw the line at the point where I started believing that surely the lightspeed limit would be no barrier to superintelligence.  (It's not unthinkable, but I wouldn't bet on it.)

But the real wrong turn came later, at the point where someone said, "Hey, how do you know that superintelligence will be moral?  Intelligence has nothing to do with being a good person, you know—that's what we call wisdom, young prodigy."

And lo, it seemed obvious to the young Eliezer, that this was mere denial.  Certainly, his own painstakingly constructed code of ethics had been put together using his intelligence and resting on his intelligence as a base.  Any fool could see that intelligence had a great deal to do with ethics, morality, and wisdom; just try explaining the Prisoner's Dilemma to a chimpanzee, right?

Surely, then, superintelligence would necessarily imply supermorality.

Thus is it said:  "Parents do all the things they tell their children not to do, which is how they know not to do them."  To be continued, hopefully tomorrow.

Post Scriptum:  How my views on intelligence have changed since then... let's see:  When I think of poor hands dealt to humans, these days, I think first of death and old age.  Everyone's got to have some intelligence level or other, and the important thing from a fun-theoretical perspective is that it should ought to increase over time, not decrease like now.  Isn't that a clever way of feeling better?  But I don't work so hard now at downplaying my own intelligence, because that's just another way of calling attention to it.  I'm smart for a human, if the topic should arise, and how I feel about that is my own business.  The part about intelligence being the lever that lifts worlds is the same.  Except that intelligence has become less mysterious unto me, so that I now more clearly see intelligence as something embedded within physics.  Superintelligences may go FTL if it happens to be permitted by the true physical laws, and if not, then not.

Part of the sequence Yudkowsky's Coming of Age

Next post: "My Best and Worst Mistake"

(start of sequence)

Comments

"Surely, then, superintelligence would necessarily imply supermorality," thought the cow, as a bolt plunged into its brain.

What will the cloned cow muscle cells think about the issue?

edit: I think you guys here over-focus on rational decision making and forget that what sets us apart from animals, in terms of intelligence, is our ability to invent solutions and solve problems.  Including problems like 'how to have a steak without killing a cow'.  It's just that we as a species are barely capable of invention, and so it takes us a great time to get there.  There's no doubt that killing cows like we do now will be outlawed after we find another way to have the steak.

There's no doubt that killing cows like we do now will be outlawed after we find another way to have the steak.

No doubt at all? I'd put money on this being wrong. Why would it be outlawed?

Including the problems like 'how to have a steak without killing a cow'.

I'm not sure that's the relevant problem. The more important problem is "how can we get more and better steaks cheaper?"

No doubt at all? I'd put money on this being wrong. Why would it be outlawed?

There are various laws on treatment of animals already. Ineffective and poorly adhered to, but there are.

I'm not sure that's the relevant problem. The more important problem is "how can we get more and better steaks cheaper?"

A yet more important problem is how to make the most profit.  Once there's a notable grown-in-a-vat steak industry, you can be sure that the ethics of killing cows will be explained to you via fairly effective advertising.  Especially if it costs somewhat more and consequently brings better income for the same % markup.

There's no doubt that killing cows like we do now will be outlawed after we find another way to have the steak.

Perhaps. Which will likely lead to fewer cows in existence than we have now, since killing cows the way we kill other animals (e.g., by building on their habitats) probably won't be.

It's not due to our great intelligence that we are stuck on this mud ball, having to wipe out the things we like for the things we like more.

Lara: I regularly partake in a drug that lowers my IQ in exchange for other utility... It's called alcohol.

You've just summarized my complete refusal of all alcoholic beverages better than I ever could. I try not to be too annoying about it, but I really do find the stuff quite horrifying.

Scott: Here's the funny thing: given who I am now, I would not pay to have my IQ lowered, and indeed would pay good money to avoid having it lowered, or even to have it raised. But I would also pay to have been, since early childhood, the sort of person who didn't have such an intelligence-centric set of priorities. I'm not transitive in my preferences; I don't want to want what I want.

How much would you pay to retain your present intelligence, but be born into a world where that intelligence was average? I've never regretted being smart, but I sometimes wish I wasn't smarter. I think that's at least 50% of what people who complain about being smart are really complaining about.

But I try not to complain about that either - it seems like whining, considering all the people who would commit murder to swap places with me. We all have our own troubles and they aren't any less troubling just because other people have troubles too. But I wouldn't want to swap places with those people who want to swap places with me. The grass is greener on this side of the fence.

I don't have a problem, my environment has a problem.

Now see, that's exactly the sort of comment that led the young Eliezer to associate criticism of the intelligence-morality link with bad surface analogies. An easy enough monster-argument to slay, but I didn't do quite as well on reconstructing the corpse into something scarier.

Then why don't you go ahead and slay it? I share your dislike for surface analogies, but it seems like this one runs deeper.

Although the cow doesn't have the intelligence to form that thought, the point is that the hypothetical cow thinks "It takes intelligence to increase my utility function, therefore intelligence much greater than mine must lead to greater increases in my utility". It turns out that the cow is wrong, and a counterexample is us. There are supercow intelligences running around, but they kill and eat cows which is presumably not something the cow wants.

If you get the exact same argument out of a human brain, it's just as invalid, though (thankfully) there isn't any real life example to point to.

The deep connection is the same; there is more than one possible utility function.

Don't think "silver spoons", think "clean drinking water".

When I think of poor hands dealt to humans, these days, I think first of death and old age. Everyone's got to have some intelligence level or other, and the important thing from a fun-theoretical perspective is that it should ought to increase over time, not decrease like now.

This is a really important point, and I want to make certain that I get it right - especially to you personally, Mr. Yudkowsky, since you seem like someone with a higher-than-epsilon chance of actually doing something about all of this.

Solve people's lack of motivation and expertise for self-improvement before you handle our old age and death, please. Please.

Because, speaking as someone caught deep in the throes of a flawed optimization loop, the prospect of being caught in such a loop for centuries is terrifying.

Just as initial conditions are hideously unfair, life-paths are also hideously unfair, and the universe does not owe anyone the capacity, let alone the opportunity, to achieve meaning and purpose and happiness in their life.

And I don't know about others, but being condemned to an eternity as myself, damned to struggle futilely to achieve some understanding or purpose that will always be one level higher than I can reach, seems far, far worse than simply recycling my constituent hardware, freeing up my clock cycles, and letting something else take my place. Given the utterly unfair, stochastic nature of the universe, maybe it will be something better. But if it isn't, at least it won't have to suffer with its inadequacy forever, either.

Don't get my doom-and-gloom wrong, though - I would love to be immortal, and free, and capable of pursuing happiness. But I am terrified that, in my current mental configuration, immortality would simply mean an eternity of self-inflicted suffering. And I am most certainly not alone in this fear.

Keep working on your Series here - they're insightful and important, and they're the first thing I've heard in almost 20 years that doesn't sound like utter bullshit - and keep your fire to save the world, because that fire is the one thing in the universe worth protecting - but remember that some parts of the world may not be better off saved, if you can't heal them first.

I disagree with the fundamental premise here. I would much rather be immortal and stuck in an akratic loop for a few centuries - because a few centuries is very finite and I'll still be alive at the end.

While even if I become absurdly productive and self-controlled, I will still die like a dog of disease & decay in the likely event there is no Singularity and SENS fails.

Remember Steve Jobs: he used all the cutting-edge treatments and even used his billions to buy his way to the head of the transplant line - and died anyway.

Akrasia doesn't begin to describe the problem. I'm going to quote a line from HPMoR that resonated strongly with me:

"You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what."

I get heroic responsibility. I've felt it in my gut since I was five. When I was 13, and it finally dawned on me that everyone around me was miserable and terrified and angry because the God they were praying to wasn't listening, my immediate resolution was to abandon worshipping him, and attempt to become a better God myself.

But, some of us aren't as smart as others, or as charismatic, or as willful, or as physically or mentally strong or resilient. We hear the call, but we don't have what it takes to answer it properly.

And that's our fault, too.

And we can't just stop listening. Not knowing that people need saving isn't any more of an excuse than not being strong enough to save them. Re-wiring your mind to not feel the crushing need to save them is ALSO a cop-out.

So... yeah. And lest anyone think I'm trying to be self-congratulatory here about my "superior morality", please understand that I am most assuredly not doing it right - this is a bug, not a feature.

Remember Steve Jobs: he used all the cutting-edge treatments and even used his billions to buy his way to the head of the transplant line

Well...

Lara: As far as I can tell, there are four basic problems.

First, if adults constantly praise and reward you for solving math problems, writing stories, and so on, then you aren't forced to develop interpersonal skills to the same extent most kids are.  You have a separate source of self-worth, and by the time you realize that source isn't enough, it may be too late.  (Incidentally, the sort of interpersonal skills I'm talking about often get conflated with caring for others' welfare, which then leads to moral condemnation of nerds as egotistical and aloof.  But the two qualities seem completely unrelated to me.  As often as not, those who are most skilled at convincing others to go along with them also care about others the least.)  Of course, the same might in principle be true for any unusual talent, including musical or athletic talent---except that the latter are understood and rewarded by one's peer group in a way that intellectual skills aren't.

Second, math, physics, and so on can simply be fun, independently of whatever self-worth one derives from them. In this they're no different from tennis or basket weaving or any other activity that some people enjoy. The trouble, again, is that while math and physics are reasonably well-rewarded economically, they're not rewarded socially. And therefore, deriving pleasure from them can have the same sorts of social implications as deriving pleasure from heroin.

Third, even if you manage to overcome these handicaps, other people won't know you have, and will be guided by the reigning stereotypes. They might decide before talking to you that you couldn't possibly have anything in common with them. Naturally, this sort of thing can be overcome given enough social skill, but it's another obstacle.

The fourth problem is specific to technical fields (rather than literary ones), and is just the well-known gender imbalance in those fields.

Given all of this, what's surprising is not that so many "intelligence-centric types" are unhappy, but rather that in spite of it many manage to live reasonably happy lives. That's the interesting part! :-)

My own intelligence had affected every aspect of my life and mind and personality; that was massively obvious, seen at a backward glance.

Obviously you are a great deal smarter than I was as a child...and maybe more of a contrarian. I have a tendency to smooth conflicts over rather than attack them head-on, which probably makes it harder for me to be a rationalist, and most of what I experience is bumping up against the limits of my native intelligence: concepts I kind of understand, but which are too complex to hold in my working memory all at once so I can really look at them, or music that sounds amazing, but which is too complex for me to break it down and figure out how to replicate the effect, or my struggles with learning computer programming. (I didn't struggle relative to other people, but subjectively I felt like it was difficult and frustrating.) If I could take a pill to increase my working memory (which is below average), ability to consolidate to long-term (probably above average but always room for improvement) or spatial skills and ability to grasp concepts at a glance, I would likely choose that over any kind of physical enhancement. It's really annoying to be more curious than intelligent...I'm acutely aware that I don't understand most of quantum mechanics (or a dozen other fascinating fields) and probably never will, because understanding would require years of dedicated studying.

It's really annoying to be more curious than intelligent...I'm acutely aware that I don't understand most of quantum mechanics (or a dozen other fascinating fields) and probably never will, because understanding would require years of dedicated studying.

Puh, I've always felt I was alone in thinking that quantum physics is one of the most interesting topics in the world while being too dumb to grok it.  All the people I know who are too stupid to understand quantum mechanics say things like "Oh, physics is not important, I'd rather read Hegel" or "The mind is irreducible and feelings, poetry and art are far more essential!"  IMO many people hate or reject science because they don't want to acknowledge their own intellectual inferiority, which is particularly apparent in fields like math or physics.  Whereas every mildly intelligent person can discuss Nietzsche, Schopenhauer, etc. (I admit that these guys indeed have some important things to say), and it feels so good to utter seemingly deep, vague, unfalsifiable gibberish, because nobody can say that you've made a mistake.  Hm, I guess this rant is already off-topic, and ceteris paribus it is good to read some philosophy, but it really frustrates me that so many people find science boring.  I simply do not understand this attitude...

Scott, all your problems are problems of being smarter, not problems of being smart.

I like "we are the cards we are dealt", which expresses nicely a problem with common ideas of blame and credit. I disagree that intelligence is the unfairest card of all - I think that a relatively dim person born into affluence in the USA has a much better time of it than a smart person born into poverty in the Congo.

Notice the number of cards you had to change to balance the intelligence card.

how much money (or other utility-bearing fruit) would you demand (or pay Scott) to take a drug which lowered your IQ by x pts?

Here's the funny thing: given who I am now, I would not pay to have my IQ lowered, and indeed would pay good money to avoid having it lowered, or even to have it raised. But I would also pay to have been, since early childhood, the sort of person who didn't have such an intelligence-centric set of priorities. I'm not transitive in my preferences; I don't want to want what I want.

epwripi: This might sound cockier than I mean it to, but really, I tire of such assertions. I know what intelligence is, and I suspect many here do as well. Plenty of good definitions have been put forth, but somebody is always going to have a nitpick because it doesn't include their favorite metaphor, and there are always going to be people who don't want it to be defined. It can certainly be quantified roughly and relatively, at the least (though "points on a linear scale" may be tending towards a strawman extreme), and when people speak of an individual's intelligence, it's implied that they're talking about a certain time, or an average over time. It's trivial to point out that individuals can be consistent and inconsistent. It's the same with athletic ability.

I don't have a problem, my environment has a problem.

Eliezer, I'm in complete sympathy with that attitude. I've had only limited success so far at nerdifying the rest of the world, but I'll keep at it!

In his comment to Scott Aaronson, Eliezer seems skeptical of extreme intelligence being detrimental to happiness.  It is, however, my understanding that the statistics favor Scott's view.

Such statistical data are discussed in this article:

http://www.prometheussociety.org/articles/Outsiders.html

"Without at least heterozygosity for the health disorders associated with the mutations, Ashkenazim are no more intelligent, in the aggregate, than non-Ashkenazy Jews."

Retired, you're wrong.  The largest hypothesized effects of the disease alleles would be only a small fraction of the Ashkenazim advantage: they just aren't frequent enough.  If you had a selective pressure for IQ, then it would affect all IQ-influencing alleles, reducing the frequency of rare variants with negative effects (Ashkenazi have lower rates of IQ-reducing PKU alleles), changing the distribution of alleles responsible for normal variation, etc.  The Ashkenazi genetic diseases just happen to be visible because of their medical consequences.  For instance, there may also be alleles that cause miscarriage (reducing parental fertility) or disrupt implantation in homozygotes but boost cognition, and we wouldn't know.

It would be interesting to read a post describing what a future society would look like if everyone were given the ability of today's top 2% regarding IQ.  What would happen: implications, economic output, happiness and so on.

Excellent analogy TGGP. (and I say that as a meat eater)