I recently attended a discussion group whose topic, at that session, was Death.  It brought out deep emotions.  I think that of all the Silicon Valley lunches I've ever attended, this one was the most honest; people talked about the death of family, the death of friends, what they thought about their own deaths.  People really listened to each other.  I wish I knew how to reproduce those conditions reliably.

I was the only transhumanist present, and I was extremely careful not to be obnoxious about it.  ("A fanatic is someone who can't change his mind and won't change the subject."  I endeavor to at least be capable of changing the subject.)  Unsurprisingly, people talked about the meaning that death gives to life, or how death is truly a blessing in disguise.  But I did, very cautiously, explain that transhumanists are generally positive on life but thumbs down on death.

Afterward, several people came up to me and told me I was very "deep".  Well, yes, I am, but this got me thinking about what makes people seem deep. 

At one point in the discussion, a woman said that thinking about death led her to be nice to people because, who knows, she might not see them again.  "When I have a nice thing to say about someone," she said, "now I say it to them right away, instead of waiting."

"That is a beautiful thought," I said, "and even if someday the threat of death is lifted from you, I hope you will keep on doing it—"

Afterward, this woman was one of the people who told me I was deep.

At another point in the discussion, a man spoke of some benefit X of death, I don't recall exactly what.  And I said:  "You know, given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.  But if you took someone who wasn't being hit on the head with a baseball bat, and you asked them if they wanted it, they would say no.  I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no."

Afterward, this man told me I was deep.

Correlation is not causality.  Maybe I was just speaking in a deep voice that day, and so sounded wise.

But my suspicion is that I came across as "deep" because I coherently violated the cached pattern for "deep wisdom" in a way that made immediate sense.

There's a stereotype of Deep Wisdom.  Death: complete the pattern: "Death gives meaning to life."  Everyone knows this standard Deeply Wise response.  And so it takes on some of the characteristics of an applause light.  If you say it, people may nod along, because the brain completes the pattern and they know they're supposed to nod.  They may even say "What deep wisdom!", perhaps in the hope of being thought deep themselves.   But they will not be surprised; they will not have heard anything outside the box; they will not have heard anything they could not have thought of for themselves.  One might call it belief in wisdom—the thought is labeled "deeply wise", and it's the completed standard pattern for "deep wisdom", but it carries no experience of insight.

People who try to seem Deeply Wise often end up seeming hollow, echoing as it were, because they're trying to seem Deeply Wise instead of optimizing.

How much thinking did I need to do, in the course of seeming deep?  Human brains only run at 100Hz and I responded in realtime, so most of the work must have been precomputed.  The part I experienced as effortful was picking a response understandable in one inferential step and then phrasing it for maximum impact.

Philosophically, nearly all of my work was already done.  Complete the pattern: Existing condition X is really justified because it has benefit Y:  "Naturalistic fallacy?" / "Status quo bias?" / "Could we get Y without X?" / "If we had never even heard of X before, would we voluntarily take it on to get Y?"  I think it's fair to say that I execute these thought-patterns at around the same level of automaticity as I breathe.  After all, most of human thought has to be cache lookups if the brain is to work at all.

And I already held to the developed philosophy of transhumanism.  Transhumanism also has cached thoughts about death.  Death: complete the pattern: "Death is a pointless tragedy which people rationalize."  This was a nonstandard cache, one with which my listeners were unfamiliar.  I had several opportunities to use nonstandard cache, and because they were all part of the developed philosophy of transhumanism, they all visibly belonged to the same theme.  This made me seem coherent, as well as original.

I suspect this is one reason Eastern philosophy seems deep to Westerners—it has nonstandard but coherent cache for Deep Wisdom.  Symmetrically, in works of Japanese fiction, one sometimes finds Christians depicted as repositories of deep wisdom and/or mystical secrets.  (And sometimes not.)

If I recall correctly an economist once remarked that popular audiences are so unfamiliar with standard economics that, when he was called upon to make a television appearance, he just needed to repeat back Econ 101 in order to sound like a brilliantly original thinker.

Also crucial was that my listeners could see immediately that my reply made sense.  They might or might not have agreed with the thought, but it was not a complete non-sequitur unto them.  I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know.  If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener's current mental state.  That's just the way it is.

To seem deep, study nonstandard philosophies.  Seek out discussions on topics that will give you a chance to appear deep.  Do your philosophical thinking in advance, so you can concentrate on explaining well.  Above all, practice staying within the one-inferential-step bound.

To be deep, think for yourself about "wise" or important or emotionally fraught topics.  Thinking for yourself isn't the same as coming up with an unusual answer.  It does mean seeing for yourself, rather than letting your brain complete the pattern.  If you don't stop at the first answer, and cast out replies that seem vaguely unsatisfactory, in time your thoughts will form a coherent whole, flowing from the single source of yourself, rather than being fragmentary repetitions of other people's conclusions.

 

Part of the Seeing With Fresh Eyes subsequence of How To Actually Change Your Mind

Next post: "We Change Our Minds Less Often Than We Think"

Previous post: "The Logical Fallacy of Generalization from Fictional Evidence"

Comments


I have played with the idea of writing a "wisdom generator" program for a long time. A lot of "wise" statements seem to follow a small set of formulaic rules, and it would not be too hard to make a program that randomly generated wise sayings. A typical rule is to create a paradox ("Seek freedom and become captive of your desires. Seek discipline and find your liberty") or just use a nice chiasm or reversal ("The heart of a fool is in his mouth, but the mouth of the wise man is in his heart"). This seems to fit in with your theory: the structure given by the form is enough to trigger recognition that a wise saying will now arrive. If the conclusion is weird or unfamiliar, so much the better.
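As a rough illustration of the generator described above, here is a minimal sketch. The word pairs and templates are hypothetical placeholders, chosen only to show that a couple of formulaic rules (paradox, chiasmus/reversal) go a long way:

```python
import random

# Hypothetical word pairs; a real generator would want a larger corpus.
PAIRS = [
    ("freedom", "discipline"),
    ("silence", "speech"),
    ("strength", "gentleness"),
]

TEMPLATES = [
    # Paradox: seeking one term yields the other.
    "Seek {a} and become captive of your desires; seek {b} and find your {a}.",
    # Chiasmus: reverse the roles of the two terms.
    "The {a} of a fool lies in his {b}, but the {b} of the wise lies in his {a}.",
]

def wise_saying(rng=random):
    """Return one randomly generated 'wise' saying."""
    a, b = rng.choice(PAIRS)
    return rng.choice(TEMPLATES).format(a=a, b=b)

print(wise_saying())
```

The structural cue (the template) does all the work of signaling "wisdom is arriving"; the content words are nearly interchangeable.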

Currently reading Raymond Smullyan's The Tao is Silent, and I'm struck by how much less wise taoism seems when it is clearly explained.

I suspect that this sort of algorithm was unconsciously internalized by many scriptwriters of Kung Fu films. I did the same thing, unconsciously, during the period I was reading Smullyan's books. That's what I did to come up with, "There's neither heaven nor hell save what we grant ourselves, neither fairness nor justice save what we grant each other."

I suspect that this sort of algorithm was used as a sort of filter by the more savvy Taoist masters -- just sit back and see who gets trapped in this particular local maximum.

You should work at a fortune cookie company, I'm sure you'd learn some tricks of the trade.

You may wish to study the "terribly mysterious" sayings of The Sphinx (from the movie "Mystery Men") for inspiration :)

"When you can balance a tack hammer on your head, you will head off your foes with a balanced attack."

"I had one opportunity to kiss the girl I loved the most and I blew it. " And that makes your life better than if you had more opportunities?

Huh?

It seems noteworthy that the first known story (Gilgamesh), and the second well known one (Eden), and the dominant global religions (Christianity, Islam), are all about yearning for immortality.

I can only speak for myself, but I think most of us are defining "immortality" as "living for at least a million years" rather than Greg Egan's "Not dying after a very long time; just not dying, ever."

Now I certainly have no moral objection to the latter state of affairs. As I sometimes like to tell people, "I want to live one more day. Tomorrow I will still want to live one more day. Therefore I want to live forever, proof by induction on the positive integers."

But flippant remarks aside, I'm not sure how I feel about real immortality, if such a thing should be physically permissible. Do I want to live longer than a billion years, live longer than a trillion years, live longer than a googolplex years, live longer than Graham's Number, live so long it has to be expressed in Conway chained arrow notation, live longer than Busy_Beaver(100)?

Note that I say "live longer than Graham's Number", not "live longer than Graham's Number years/seconds/millennia", because these are all essentially the same number. Living for this amount of time does not just require the ability to circumvent thermodynamics, it requires the ability to build custom universes with custom laws of physics. And the vast majority of integers are very much larger than that, or even Busy_Beaver(100). Perhaps this is possible. Perhaps not.

The emotional connection that I feel to my future self who's lived for Graham's Number is pretty much nil, on its own. But my self of tomorrow, with whom I identify very strongly, will be just a tiny bit closer. As I fulfill or abandon old goals, I will adopt new ones. The connection may be vicarious, but it is there.

And I certainly see a very great difference between humanity continuing forever, versus humanity continuing to Graham's Number and then halting; a difference very much worth dying for. (It follows that my discount rate is 1.)

So, as I usually tell people:

"Do I want to live forever? I don't know. Ask me again in a million years. Maybe then I'll have decided how I feel about immortality. I am a short-term thinker; I take my life one eon at a time."

I think that this works for two reasons: firstly, people tend to assume that everyone else is working from the same cache as themselves, so when we encounter someone working from a non-standard cache, we often assume that the speaker must have thought up everything on his own; secondly, cached wisdom tends to be polished, self-contained and carefully worded for maximum rhetorical effect, whereas original thinking tends to be... not those things. Consequently, when we encounter an unfamiliar bit of cached wisdom, it seems as though the idea must have burst fully formed Athena-style from the speaker's brow, when really he's just repeating something he read in a book somewhere, something gradually refined over time by others.

Well, please feel free to explain with absolute clarity and necessity; perhaps you'll do so in that great philosophy book. I regret that, at least for me, you haven't at all managed to do so yet. I can see that, e.g., writing a good philosophy book might seem more valuable to you if you only have one shot at it (though, er, it seems to me that it's not unheard of for philosophers to write more than one good book in their lives), but I can't imagine how you can think that's not outweighed by being able to write more and better books. And if your expected productive lifespan were a thousand years, there would still be challenges big enough that you'd only get one shot at them. They'd just be bigger, harder challenges.

In other words, you'd get more done, you'd get better things done, you'd have better just-one-shot challenges to meet (perhaps: not "kiss the girl I loved the most" but "find someone I can live with happily for a thousand years"; not "write a really good philosophy book" but "definitively solve such-and-such a very deep philosophical problem" -- though I bet these aren't imaginative enough); what's the downside, here?

Perhaps you think actual immortality would be worse somehow; I think that's a more defensible proposition. But you actually claimed not merely "infinitely extended lives might turn out to be worse" but "even as they are, our lives are quite likely too long". Stockholm syndrome, sorry.

My condescension was directed toward your biophysics professor. Seriously, what the hell gives senior scientists the idea that they can stop using science and still form accurate beliefs?

Evidently, you know, talking to people of average intelligence we are always going to sound deep, especially on social occasions when we tailor our conversation to the listener. But that has nothing to do with the particular view you defended. Someone defending that death gives meaning to life with better arguments than those people had would elicit the same response.

In 30,000 years you get 6000 rems worth of cosmic rays. This would be fatal (in a day or an hour) if a living human received it all at once.

But it's not nearly as much damage as is done by vitrifying someone to the temperature of liquid nitrogen, which would kill you instantly if it happened all at once.

There's a difference between functional damage to living systems (on which basis the cryonics folk are calculating that it will take at least 3000 years); versus the informational damage required to disrupt the relative structure of neurons frozen at liquid nitrogen temperature sufficiently to permanently erase the information stored therein (a timescale probably measured in megayears).

Sorta like the difference between doing enough damage to a hard drive to prevent it from working normally when you plug it in (which is how medical cryobiologists think), versus doing enough damage to a hard drive that not even the NSA can figure out what was once stored in it (you would be strongly advised to vaporize it).

In any case, cosmic rays are simply not significant over the timeframe of realistic cryonics (&lt;300 years).

bw, I think I concur with Eliezer's diagnosis in another thread of Stockholm syndrome, or something like it. If you find it too easy to achieve all your goals because you have so many opportunities, then find harder goals.

(Perhaps I'm just, like, a seething cauldron of negativity or something, but that particular problem seems to me rather remote from my own experience, or from that of anyone else I know enough about.)

Cosmic ray damage isn't going to matter except over timescales of 10,000+ years, and even a million years' worth would almost certainly be repairable by mature nanotechnology. Cryonics is supposed to get you to 2050 or whenever. I can't even find this question in standard cryonics FAQs, it's so bizarre.

How does a transhumanist respond to a person that wants to die? Like not in the future in a "death has X benefit" way, but an actual concrete "I'm going to finish up these things here and then put on my nice shoes and die" way?

Supposedly there exist transhumanists who don't subscribe to immortalism, as the other two commenters seem to be trying to say, but less helpfully. Probably a more precise formulation of your question would thus be "how does a transhumanist immortalist respond to a person that wants to die?"

That out of the way, my direct response would probably be "here's the number for the suicide hotline". If they don't actually seem to be in any real danger of killing themselves any time soon, I might ask them what they hope to gain by dying today.

See, I feel like suicide hotlines are for people who don't want to live, which isn't quite the same thing? What if they do give you a concrete answer. Is there any answer they could give that would pop them out of the "death is bad" bubble? Like, what if they say they feel like their death is part of some weird, creative, performance art thing?

Thank you for helpfulness! I understand the distinction now. =)

People have told me I was 'deep' because, in discussions, it's a habit for me to point out opposing points of view to everything that comes up, even if I agree with the original point of view, and to come up with the best arguments I can for the point of view even if I disagree with it, all while being very polite and pleasant about it. Apparently that's a good way to come across as really open to new ideas, which a lot of people seem to equate with being 'deep'.

"I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know. If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener's current mental state."

This is extremely interesting to me because I am such a person; I have had significant difficulty throughout my life with understanding the existing state of other people. I've luckily found a mate who is much better at it than I am, and can therefore pull me aside if necessary to tip me off that I'm talking at cross purposes with my interlocutor. However, this is my own problem to solve.

What I want to know, though, is this: Is "a single step" of a particular reliable size, or do people take differently sized steps?

"""For your expected lifespan value to diverge to +infinity, it is necessary to place only

.000000000000000000000000000000000000000000000000000000000001

probability on your chance of living forever, and I don't think you can realistically defend assigning a probability lower than that."""

I can assign a nonzero probability to any number less than infinity, but in infinite time, the probability that even a godlike being will earn a Darwin is 1, no matter how unlikely it is in the next year.

"I for one do not see what the point would be in acquiring knowledge if I never died"

? Why do you acquire knowledge now? I do it because it's fun/interesting/useful toward accomplishing some goal.

I can understand the reasoning behind the saying that death gives meaning to life. But I've never been able to fully agree with that sentiment. If I could I would live forever. Death certainly gives me reason to want to do as much as I can while I am still able. But that desire doesn't give my life any more meaning than if it was not there. I can agree that death makes life precious, for without death life would be abundant.

I often imagine what it'd be like to live 200 years or 1000 years. I know like Eliezer I would do so if able (assuming my mind was still intact the entire time). I can't even begin to imagine the things I would be able to understand with a lifespan like that. I'm only 22 years old and I know and understand quite a bit, but what I don't know and don't understand is far greater. To me living a longer than what is currently natural life would be an opportunity to soak up even more knowledge and experience. That's what I'm doing now with my life and I hope by some advancement in technology I'm able to do so for far longer than 78 years (or whatever my life expectancy is).

I don't think I've ever actually heard anyone say exactly why they think death gives meaning to life. Anyone got a link to something that explains this?

I don't think anyone is qualified to judge, based on theory alone, whether true immortality is meaningful or worth achieving, since no one has lived much longer than 120 years. Maybe the human consciousness would throw up its hands and scream 'to hell with it all!' after 300 years, maybe not. Maybe our children will be lackadaisical losers because they have no impetus to get off their asses and on with their lives (lord knows how many people get a move on because they fear getting too old for girlfriends/marriage/children). But we don't know that, and it's all a moot point, since nobody's done it before. What is clear is that almost everyone wishes they didn't age, that our bodies and minds did not decay, that our memories did not fade, that we could keep the vigor, curiosity, openness and excitement of our most productive years. Why not try for that and see what living so long is really like? What would we have to lose?

See also: the chapter entitled "A different box of tools" in "Surely you're joking, Mr Feynman".

Calling someone "deep" is like calling someone "articulate"...It's a statement of unwillingness or inability to discuss what was said. I'm not offended by being called "deep" because I'll outlive the deathists who tend to call me that.

I just read Tom Stoppard's "Rosencrantz &amp; Guildenstern are Dead", which is praised as a deep and intellectual play. It appears to operate primarily by stringing us along with a few lines of boring dialogue, then throwing in something random or meaningless. The unexpected line intrigues us; we feel the thrill of curiosity, undiluted by any interest in dramatic tension, plot, or character. But the dialogue's breakneck speed forces us to leave it behind before we can inspect the line and discover it says nothing we didn't already know. Repeat until curtain.

The play is allegedly "about" destiny vs. free will, significance vs. insignificance, and death. But it merely rambles on about these things, presenting trite, overused metaphors and angstful reactions in pretentious language, without ever making an argument. An argument must begin with facts, and Stoppard's play carefully and deliberately excises all facts from the start.

To establish some grounds for comparison, can you list three or four plays which do say things we didn't already know, and which make an argument beginning with facts?

have you succeeded in chaining these "one-inference-steps"?

that is, have you found you can take people with different beliefs / less domain knowledge, in casual conversation, and quickly explain things one inference at a time? i've found that i can only pull a few of those, even if they follow and are delightfully surprised by each one, else i start sounding too weird.

Next time around, I'd be more careful to link to tvtropes - that site is even more addictive than lesswrong! Ah, Eliezer, you continue to find new ways to steal time from me.

Is there any deepness, though, that you can just figure out without previously contemplating it, or is nearly all philosophy something that needs to just be explained later? And isn't then anything deep just regurgitating what we've already thought?

This deep-seeming by violating expectations reminds me of the great quote from Niels Bohr, that there "two sorts of truth: trivialities, where opposites are obviously absurd, and profound truths, recognised by the fact that the opposite is also a profound truth."

Thank you for this post Eliezer, it was deep :). (I will learn to pronounce your name correctly before i meet you, just you wait.)

Thanks for informing me of another bias you are triggering. You're one of the first people (maybe the first person?) I've found who explains in a convincing way how not to be fooled by people speaking in a convincing way.

(Sorry if I got that from you. I know it's a cached thought, but I can't seem to trace it.)

People have apparently argued for a 300 to 30,000 year storage limit due to free radicals produced by cosmic rays, but the uncertainty is pretty big. Cosmic rays and background radiation are likely not as much a problem as carbon-14 and potassium-40 atoms anyway, not to mention the freezing damage. http://www.cryonics.org/1chapter2.html has a bit of discussion of this. The quick way of estimating the damage is to assume it is time compressed, so that the accumulated yearly dose is given as an acute dose.

Tiiba:

The hypothesis is actual immortality, to which nonzero probability is being assigned. For example, suppose that under some scenario your probability of dying at each step decreases by a factor of 1/2. Then your total probability of ever dying is at most 2 times the probability of dying at the very first step, which we can assume is far less than 1/2.

Uhm, cosmic rays a threat to cryonics? Where the heck did /that/ come from?