
Incremental Progress and the Valley

Followup to: Rationality is Systematized Winning

Yesterday I said:  "Rationality is systematized winning."

"But," you protest, "the reasonable person doesn't always win!"

What do you mean by this?  Do you mean that every week or two, someone who bought a lottery ticket with negative expected value, wins the lottery and becomes much richer than you?  That is not a systematic loss; it is selective reporting by the media.  From a statistical standpoint, lottery winners don't exist—you would never encounter one in your lifetime, if it weren't for the selective reporting.
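The arithmetic behind that claim is easy to check.  Here is a minimal sketch in Python; the odds, jackpot, ticket price, and size of your social circle are all assumed, Powerball-style figures, not numbers from the post:

```python
# Assumed, illustrative numbers throughout.
p_win = 1 / 292_000_000        # assumed odds of hitting the jackpot
jackpot = 100_000_000          # assumed jackpot, in dollars
ticket = 2                     # assumed ticket price, in dollars

# Expected value of one ticket: negative, as the post says.
ev = p_win * jackpot - ticket
print(f"EV per ticket: ${ev:.2f}")  # about -$1.66

# Chance that anyone among 1,000 acquaintances, each buying one
# ticket a week for 50 years, ever hits the jackpot.
n_tickets = 1_000 * 52 * 50
p_know_winner = 1 - (1 - p_win) ** n_tickets
print(f"P(ever knowing a winner): {p_know_winner:.3f}")  # under 0.01
```

The media, by contrast, samples winners from every ticket sold nationwide, which is why one surfaces every week or two.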

Even perfectly rational agents can lose.  They just can't know in advance that they'll lose.  They can't expect to underperform any other performable strategy, or they would simply perform it.

"No," you say, "I'm talking about how startup founders strike it rich by believing in themselves and their ideas more strongly than any reasonable person would.  I'm talking about how religious people are happier—"

Ah.  Well, here's the thing:  An incremental step in the direction of rationality, if the result is still irrational in other ways, does not have to yield incrementally more winning.

The optimality theorems that we have for probability theory and decision theory are for perfect probability theory and decision theory.  There is no companion theorem which says that, starting from some flawed initial form, every incremental modification of the algorithm that takes the structure closer to the ideal must yield an incremental improvement in performance.  This has not yet been proven, because it is not, in fact, true.

"So," you say, "what point is there then in striving to be more rational?  We won't reach the perfect ideal.  So we have no guarantee that our steps forward are helping."

You have no guarantee that a step backward will help you win, either.  Guarantees don't exist in the world of flesh; but contrary to popular misconceptions, judgment under uncertainty is what rationality is all about.

"But we have several cases where, based on either vaguely plausible-sounding reasoning, or survey data, it looks like an incremental step forward in rationality is going to make us worse off.  If it's really all about winning—if you have something to protect more important than any ritual of cognition—then why take that step?"

Ah, and now we come to the meat of it.

I can't necessarily answer for everyone, but...

My first reason is that, on a professional basis, I deal with deeply confused problems that make huge demands on precision of thought.  One small mistake can lead you astray for years, and there are worse penalties waiting in the wings.  An unimproved level of performance isn't enough; my choice is to try to do better, or give up and go home.

"But that's just you.  Not all of us lead that kind of life.  What if you're just trying some ordinary human task like an Internet startup?"

My second reason is that I am trying to push some aspects of my art further than I have seen done.  I don't know where these improvements lead.  The loss of failing to take a step forward is not that one step, it is all the other steps forward you could have taken, beyond that point.  Robin Hanson has a saying:  The problem with slipping on the stairs is not falling the height of the first step, it is that falling one step leads to falling another step.  In the same way, refusing to climb one step up forfeits not the height of that step but the height of the staircase.

"But again—that's just you.  Not all of us are trying to push the art into uncharted territory."

My third reason is that once I realize I have been deceived, I can't just shut my eyes and pretend I haven't seen it.  I have already taken that step forward; what use to deny it to myself?  I couldn't believe in God if I tried, any more than I could believe the sky above me was green while looking straight at it.  If you know everything you need to know in order to know that you are better off deceiving yourself, it's much too late to deceive yourself.

"But that realization is unusual; other people have an easier time of doublethink because they don't realize it's impossibleYou go around trying to actively sponsor the collapse of doublethink.  You, from a higher vantage point, may know enough to expect that this will make them unhappier.  So is this out of a sadistic desire to hurt your readers, or what?"

Then I finally reply that my experience so far—even in this realm of merely human possibility—does seem to indicate that, once you sort yourself out a bit and you aren't doing quite so many other things wrong, striving for more rationality actually will make you better off.  The long road leads out of the valley and higher than before, even in the human lands.

The more I know about some particular facet of the Art, the more I can see this is so.  As I've previously remarked, my essays may be unreflective of what a true martial art of rationality would be like, because I have only focused on answering confusing questions—not fighting akrasia, coordinating groups, or being happy.  In the field of answering confusing questions—the area where I have most intensely practiced the Art—it now seems massively obvious that anyone who thought they were better off "staying optimistic about solving the problem" would get stomped into the ground.  By a casual student.

When it comes to keeping motivated, or being happy, I can't guarantee that someone who loses their illusions will be better off—because my knowledge of these facets of rationality is still crude.  If these parts of the Art have been developed systematically, I do not know of it.  But even here I have gone to some considerable pains to dispel half-rational half-mistaken ideas that could get in a beginner's way, like the idea that rationality opposes feeling, or the idea that rationality opposes value, or the idea that sophisticated thinkers should be angsty and cynical.

And if, as I hope, someone goes on to develop the art of fighting akrasia or achieving mental well-being as thoroughly as I have developed the art of answering impossible questions, I do fully expect that those who wrap themselves in their illusions will not begin to compete.  Meanwhile—others may do better than I, if happiness is their dearest desire, for I myself have invested little effort here.

I find it hard to believe that the optimally motivated individual, the strongest entrepreneur a human being can become, is still wrapped up in a blanket of comforting overconfidence.  I think they've probably thrown that blanket out the window and organized their mind a little differently.  I find it hard to believe that the happiest we can possibly live, even in the realms of human possibility, involves a tiny awareness lurking in the corner of your mind that it's all a lie.  I'd rather stake my hopes on neurofeedback or Zen meditation, though I've tried neither.

But it cannot be denied that this is a very real issue in very real life.  Consider this pair of comments from Less Wrong:

I'll be honest —my life has taken a sharp downturn since I deconverted. My theist girlfriend, with whom I was very much in love, couldn't deal with this change in me, and after six months of painful vacillation, she left me for a co-worker. That was another six months ago, and I have been heartbroken, miserable, unfocused, and extremely ineffective since.

Perhaps this is an example of the valley of bad rationality of which PhilGoetz spoke, but I still hold my current situation higher in my preference ranking than happiness with false beliefs.

And:

My empathies: that happened to me about 6 years ago (though thankfully without as much visible vacillation).

My sister, who had some Cognitive Behaviour Therapy training, reminded me that relationships are forming and breaking all the time, and given I wasn't unattractive and hadn't retreated into monastic seclusion, it wasn't rational to think I'd be alone for the rest of my life (she turned out to be right). That was helpful at the times when my feelings hadn't completely got the better of me.

So—in practice, in real life, in sober fact—those first steps can, in fact, be painful.  And then things can, in fact, get better.  And there is, in fact, no guarantee that you'll end up higher than before.  Even if in principle the path must go further, there is no guarantee that any given person will get that far.

If you don't prefer truth to happiness with false beliefs...

Well... and if you are not doing anything especially precarious or confusing... and if you are not buying lottery tickets... and if you're already signed up for cryonics, a sudden ultra-high-stakes confusing acid test of rationality that illustrates the Black Swan quality of trying to bet on ignorance in ignorance...

Then it's not guaranteed that taking all the incremental steps toward rationality that you can find, will leave you better off.  But the vaguely plausible-sounding arguments against losing your illusions, generally do consider just one single step, without postulating any further steps, without suggesting any attempt to regain everything that was lost and go it one better.  Even the surveys are comparing the average religious person to the average atheist, not the most advanced theologians to the most advanced rationalists.

But if you don't care about the truth—and you have nothing to protect—and you're not attracted to the thought of pushing your art as far as it can go—and your current life seems to be going fine—and you have a sense that your mental well-being depends on illusions you'd rather not think about—

Then you're probably not reading this.  But if you are, then, I guess... well... (a) sign up for cryonics, and then (b) stop reading Less Wrong before your illusions collapse!  RUN AWAY!

 

Part of the sequence The Craft and the Community

Next post: "Whining-Based Communities"

Previous post: "Selecting Rationalist Groups"

Comments


Some historical context:

16th through 19th-century rationalists advocated views something like those Eliezer is advocating. These views were eventually reflected in the art of the day, as exemplified by Bach and, later, by the strict formalisms of classical music.

In the 19th century, romanticism was an artistic reaction against rationalism. We're talking Goethe, Beethoven, Byron, and Blake. In painting, it was also a reaction against photography, searching for a justification for continuing to paint.

During the romantic period, Nietzsche used romantic artistic ideas to criticize rationality, by saying that life is worth living when we commit to values, and rationality undermines our commitment to our values. He offered as an alternative the culture/value creator, who leads his culture to greatness. This greatness, he says, can only be attained if we reject rationalism. There is some happiness theory in there as well, including the idea that war isn't justified by values, war justifies values. This seems to be a riff on the idea that the striving and drama are themselves what we value.

In the 20th century, Max Weber rephrased it this way: Societies are legitimized by tradition, reason, or charisma. Religious societies are legitimized by tradition. The Enlightenment introduced legitimization by reason. Nietzsche argued for legitimization by charisma.

By then, most intellectuals the world over sided with Nietzsche. (I use "intellectuals" in the standard way, which marginalizes the physicists, mathematicians, and other hard scientists whom many of us consider to be the world's true intellectuals.)

Then Hitler and Lenin-Stalin played out legitimization by charisma. Intellectuals the world over were repulsed. It didn't seem so noble in real life. They rejected Nietzsche's conclusions, but without finding any problems with Nietzsche's arguments.

Philosophy since then has been boring, probably because philosophers can't get worked up about any position anymore. Today, most intellectuals reject tradition, reason, and charisma for legitimizing society; and no one has come up with anything better.

To push the true rationalist agenda, someone needs to find the errors in Nietzsche.

This is not what I see happening. When I hear people defending the preference of truth over happiness or utility, it sounds like they're trying to create a monstrous hybrid of the Enlightenment and Nietzsche by rooting the entire structure of rationalism in an act of charismatic Nietzschean value-creation. That's not true utilitarian rationality. It looks like rationality above the surface; but the roots are Nietzschean.

This should, perhaps, have been its own post, because I see no relation whatsoever to the original post.

The initial point of contact is when you said

If you don't prefer truth to happiness with false beliefs...

followed by a number of people in the comments disagreeing with me when I said this didn't make sense to me.

That, as well as a lot of things said on LW by various people including you, sounds to me like elevating truth above values.

But I'll delete it and make it its own post if you like. I also thought that maybe I should've made it a separate post. It is a side issue.

I'd like to see it as its own post, illustrated with quotes from Nietzsche or quotes from those interpreting Nietzsche.

"No," you say, "I'm talking about how startup founders strike it rich by believing in themselves and their ideas more strongly than any reasonable person would. ..."

It's important to realize that this is another myth perpetuated by the media and our ignorance of the statistics. Most startups fail; I think the statistics are that 80% die in the first 5 years. But the ones that get written up in glowing articles are the ones that succeeded. Of course all those founders who struck it rich believed strongly in their ideas, but so did many of those that failed. That irrational belief may be a crucial ingredient for success, but it doesn't supply a guarantee. Most of the people who held that irrational belief worked for businesses that failed--but they didn't get their name in the paper, so they're relatively invisible.
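A toy simulation makes the selection effect concrete. Every number in it is an assumption chosen for illustration: suppose 90% of founders believe strongly in their ideas, and 10% of startups succeed regardless of belief. Conditioning on success then makes belief look like a near-prerequisite even though, by construction, it does nothing:

```python
import random

random.seed(0)

# Assumed: 90% of founders are strong believers; success is 10% and,
# by construction, independent of belief.
founders = [(random.random() < 0.9, random.random() < 0.1)
            for _ in range(100_000)]

winners = [believes for believes, succeeds in founders if succeeds]
print(f"believers among successes: {sum(winners) / len(winners):.2f}")
# ~0.90 -- the glowing write-ups of believing founders tell you the
# base rate of belief, not whether belief caused the success.
```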

Still, if everyone who does succeed has an irrational belief in their own success, then it's not wrong to conclude that such a belief is probably a prerequisite (though certainly not a "guarantee") for success.

Maybe the reason why so many startups fail is that people are prone to have irrational beliefs about business ideas. This causes many entrepreneurs to pursue bad investments or irrational business practices.

More relevant to the discussion topic, consider these questions:

Some beliefs have the tendency to be self-fulfilling prophecies, but is it irrational to have these beliefs? Is self-deception necessary for the "self-fulfilling" property to work? Can we, say, have a positive outlook on life while having rational expectations at the same time?

Surely having a positive outlook on life doesn't require any specific belief.

Regarding self-fulfilling beliefs:

Yes, having a belief can have the side effect of changing your behavior independently of how you would consciously change your behavior in light of your beliefs.

When you have an accurate belief, and the side effects of believing it affect your behavior in a way you consciously believe is positive, then take advantage of it! If you can get a boost toward your goals without making a conscious effort, then by all means cut out conscious effort as the middleman in the causal chain between your beliefs and your goal state.

But if you spy a shortcut between an inaccurate belief state and your current goal, don't follow the causal chain from the beginning, but meet it in the middle. Strive to shape your behavior according to your prediction of its effect, but leave your innermost beliefs to entangle with reality. They are shaped too much by non-entanglement processes as it is.

I think the best approach is a slightly more sophisticated one: commit to the belief that there is a way to succeed and you will find it - but not necessarily that you have already found it.

But is it also true that 80% of entrepreneurs fail? I was under the impression that yes, 80% of startup companies fail, but the average entrepreneur might start five or more companies over his career, so the average success rate per person is much higher than 20%.
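Under the comment's numbers this is simple arithmetic, if one assumes (strongly, and probably wrongly, since attempts by the same person are surely correlated) that each attempt fails independently at the same rate:

```python
# Assumed: each startup fails independently with probability 0.8,
# and an entrepreneur makes 5 attempts over a career.
p_fail, attempts = 0.8, 5
p_some_success = 1 - p_fail ** attempts
print(f"{p_some_success:.2f}")  # 0.67: roughly two founders in three succeed at least once
```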

Oh no, more grandeur.

A rationalist can take a small concrete problem, reduce it to essentials, figure out a good strategy and follow it. No need to brainf*ck yourself and reevaluate your whole life - people have built bridges and discovered physical laws without it. For examples of what I want, see Thomas Schelling's "Strategy of Conflict": no mystique, just clear mathematical analysis of many real-life problems. It starts out from toys, e.g. bargaining games and PD, and culminates in lots of useful tactics for nuclear deterrence that were actually adopted by the US military after the book's publication. How's that for "something to protect"?

I for one would be happy if you just wrote up, mathematically, your solution concept for Newcomb's and PD. Is it an extension of superrationality for asymmetric games, or something else entirely? If we slowly modify one player's payoffs in PD, at what precise moment do you stop cooperating?
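As a sketch of the parametrized family that question describes (not an answer to it): the payoff numbers and the direction of modification below are assumptions, and the code only reports where the modified game formally stops satisfying the Prisoner's Dilemma ordering T > R > P > S, outside of which the question is no longer about a PD at all:

```python
def is_pd(T, R, P, S):
    """The standard Prisoner's Dilemma ordering for one player."""
    return T > R > P > S

# Assumed symmetric starting payoffs: Temptation, Reward, Punishment, Sucker.
T0, R0, P0, S0 = 5.0, 3.0, 1.0, 0.0

for step in range(11):
    t = step / 10
    # Slide player 2's temptation payoff linearly down toward the reward.
    T2 = T0 - t * (T0 - R0)
    if not is_pd(T2, R0, P0, S0):
        print(f"at t = {t:.1f}, player 2 no longer faces a PD "
              f"(T = {T2:.1f} has fallen to R = {R0:.1f})")
        break
```

Inside that region, where the dilemma remains genuine, the sketch says nothing about when to stop cooperating; that is exactly the question being posed.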

If you know what you want so clearly, why not write it and post it? Less Wrong is what you make it.

When there is a conventional wisdom it usually pays for most people to become more rational just so they can better anticipate, assimilate, remember and use that conventional wisdom. But once your rationality becomes so strong that it leads you to often reject conventional wisdom, then you face a tougher tradeoff; there can be serious social costs from rejecting conventional wisdom.

Things are actually a bit worse than this, because there is also no theorem that says there is only one valley, so there's no guarantee that even after you climb out of this valley, your next step won't cause you to go off a precipice.

BTW, there's a very similar issue in economics, which goes under the name of the Theory of the Second Best. Markets will allocate resources efficiently if they are perfectly competitive and complete, but there is no guarantee that any incremental progress towards that state, such as creating some markets that were previously missing, or making some markets more competitive, will improve social welfare.

I agree there's no guarantee in principle, but I can't recall ever running into a second valley in practice.

An incremental step can be a loss where you have two errors reversing each other. You have error A that causes suffering a and error B that causes anti-a. You cure B, and suddenly you experience a. The anti-rationalist says "quick, reinstate B". I say "no, work back from a to A and cure A".

Example: pessimists make better calibrated estimates but are worse off for happiness and health. IMO the pessimists are probably not accepting the reality they predict; they are railing against it, which is a variety of magical thinking.

Another example: getting rid of risk aversion without getting rid of overconfidence bias, or vice versa.
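Here is a toy model of that mechanism (Python; every number is an assumption chosen for illustration). The agent's overconfidence inflates probabilities by 1.5x while its risk aversion discounts risky payoffs by the same factor, so the two errors roughly cancel. Curing either bias alone moves the algorithm closer to the ideal and lowers its winnings; curing both climbs out of the valley:

```python
import random

def avg_payoff(overconf, discount, trials=100_000):
    """Average winnings of an agent repeatedly choosing between a
    safe $1 and a gamble paying G with true probability p."""
    rng = random.Random(0)  # same gamble stream for every agent
    total = 0.0
    for _ in range(trials):
        p = rng.random()             # true chance the gamble pays
        G = rng.uniform(0.5, 3.0)    # payout if it pays
        pays = rng.random() < p      # outcome, drawn either way
        # Biased estimate: overconfidence inflates p, risk aversion
        # discounts the gamble's value.
        perceived = min(1.0, p * overconf) * G * discount
        total += (G if pays else 0.0) if perceived > 1.0 else 1.0
    return total / trials

print(f"both biases:          {avg_payoff(1.5, 1/1.5):.3f}")  # errors ~cancel
print(f"cured risk aversion:  {avg_payoff(1.5, 1.0):.3f}")    # worse: takes bad bets
print(f"cured overconfidence: {avg_payoff(1.0, 1/1.5):.3f}")  # worse: refuses good bets
print(f"cured both:           {avg_payoff(1.0, 1.0):.3f}")    # best of the four
```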

Even perfectly rational agents can lose. They just can't know in advance that they'll lose. They can't expect to underperform any other performable strategy, or they would simply perform it.

I think your formulation in this post is the clearest, and I agree with it. In previous posts, you may have said things which confused your point, such as this:

Said I: "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."

The strong interpretation of this quote is that if you lose, you weren't being rational. This may explain why so many people felt the urge to point out that rational people can lose. The weak interpretation is that if you lose, rather than protesting that you were rational, you should more closely scrutinize your thinking and whether it is really rational. Now it seems that the weak interpretation is what you intend.

Or – if you lose, you should learn why, if it's important to not lose again.

But if you don't care about the truth - and you have nothing to protect - and you're not attracted to the thought of pushing your art as far as it can go - and your current life seems to be going fine - and you have a sense that your mental well-being depends on illusions you'd rather not think about -

...then it may already be too late, since the seed of doubt is already planted.

I wish. People seem capable of sustaining themselves in this state for indefinite periods.