
There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.  Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.

But he's actually quite right to worry, and I worry too, and any adept rationalist will probably spend a fair amount of time worrying about it.  When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result.  That's what makes a lot of cognitive subtasks so troublesome—you know you're biased but you're not sure how much, and you don't know if you're correcting enough—and so perhaps you ought to correct a little more, and then a little more, but is that enough?  Or have you, perhaps, far overshot?  Are you now perhaps worse off than if you hadn't tried any correction?

You contemplate the matter, feeling more and more lost, and the very task of estimation begins to feel increasingly futile...
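(The overshoot worry can be made concrete with a toy simulation.  This is a minimal sketch under made-up assumptions—a known-direction bias of +10 units, and a noisy self-estimate of that bias; none of these numbers come from anything measured:

    import random

    # Toy model: your raw judgment runs 10 units too high, but you can
    # only estimate your own bias noisily before correcting for it.
    TRUE_BIAS = 10.0
    NOISE_SD = 8.0     # how badly you misjudge your own bias
    TRIALS = 100_000

    def mean_residual_error(correction_scale):
        """Average error left over after subtracting a scaled, noisy
        estimate of your own bias."""
        total = 0.0
        for _ in range(TRIALS):
            estimated_bias = random.gauss(TRUE_BIAS, NOISE_SD)
            total += abs(TRUE_BIAS - correction_scale * estimated_bias)
        return total / TRIALS

    for scale in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(f"correct by {scale:.1f}x of estimate: "
              f"mean error {mean_residual_error(scale):.2f}")

On these illustrative numbers, correcting helps; correcting by too much leaves you worse off than not correcting at all; and because the self-estimate is noisy, a partial correction actually beats a full one.  That is exactly the trap: a right amount of correction exists, but nothing in your introspection tells you when you have passed it.)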

And when it comes to the particular questions of confidence, overconfidence, and underconfidence—being interpreted now in the broader sense, not just calibrated confidence intervals—then there is a natural tendency to cast overconfidence as the sin of pride, out of that other list which never warned against the improper use of humility or the abuse of doubt.  To place yourself too high—to overreach your proper place—to think too much of yourself—to put yourself forward—to put down your fellows by implicit comparison—and the consequences of humiliation and being cast down, perhaps publicly—are these not loathsome and fearsome things?

To be too modest—seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly, indeed, finding out that you're better than you imagined might come as a warm surprise; and to put yourself down, and others implicitly above, has a positive tinge of niceness about it, it's the sort of thing that Gandalf would do.

So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are disabused of their overconfidence—heck, even if you've just read a couple of dozen—and you don't know exactly how overconfident you are—then yes, you might genuinely be in danger of nudging yourself a step too far down.

I have no perfect formula to give you that will counteract this.  But I have an item or two of advice.

What is the danger of underconfidence?

Passing up opportunities.  Not doing things you could have done, but didn't try (hard enough).

So here's a first item of advice:  If there's a way to find out how good you are, the thing to do is test it.  A hypothesis affords testing; hypotheses about your own abilities likewise.  Once upon a time it seemed to me that I ought to be able to win at the AI-Box Experiment; and it seemed like a very doubtful and hubristic thought; so I tested it.  Then later it seemed to me that I might be able to win even with large sums of money at stake, and I tested that, but I only won 1 time out of 3.  So that was the limit of my ability at that time, and it was not necessary to argue myself upward or downward, because I could just test it.
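(A hypothesis about your own abilities can also be quantified like any other.  As a minimal sketch—my own illustrative calculation, assuming a uniform prior over the win probability; nothing like this appears in the original—here is how much one win in three trials actually pins down:

    from math import comb

    # Assuming a uniform Beta(1,1) prior over the win probability:
    # after w wins in n trials, the posterior is Beta(w+1, n-w+1).
    wins, trials = 1, 3
    a, b = wins + 1, trials - wins + 1

    print(f"posterior mean win rate: {a / (a + b):.2f}")    # 0.40

    def prob_above(x, a, b):
        """P(p > x) under Beta(a, b) for integer a, b, computed via
        the standard binomial-tail identity."""
        n = a + b - 1
        return sum(comb(n, k) * x**k * (1 - x)**(n - k) for k in range(a))

    print(f"P(true win rate > 0.5): {prob_above(0.5, a, b):.2f}")  # 0.31

Three trials genuinely narrow things down—the best estimate falls to 0.4—but they still leave nearly a one-in-three chance that the true win rate is better than even.  Testing beats arguing yourself up or down, and the test also tells you how much it told you.)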

One of the chief ways that smart people end up stupid, is by getting so used to winning that they stick to places where they know they can win—meaning that they never stretch their abilities, they never try anything difficult.

It is said that this is linked to defining yourself in terms of your "intelligence" rather than your "effort", because then winning easily is a sign of your "intelligence", whereas failing on a hard problem could at least have been interpreted as a good "effort".

Now, I am not quite sure this is how an adept rationalist should think about these things: rationality is systematized winning and trying to try seems like a path to failure.  I would put it this way:  A hypothesis affords testing!  If you don't know whether you'll win on a hard problem—then challenge your rationality to discover your current level.  I don't usually hold with congratulating yourself on having tried—it seems like a bad mental habit to me—but surely not trying is even worse.  If you have cultivated a general habit of confronting challenges, and won on at least some of them, then you may, perhaps, think to yourself "I did keep up my habit of confronting challenges, and will do so next time as well".  You may also think to yourself "I have gained valuable information about my current level and where I need improvement", so long as you properly complete the thought, "I shall try not to gain this same valuable information again next time".

If you win every time, it means you aren't stretching yourself enough.  But you should seriously try to win every time.  And if you console yourself too much for failure, you lose your winning spirit and become a scrub.

When I try to imagine what a fictional master of the Competitive Conspiracy would say about this, it comes out something like:  "It's not okay to lose.  But the hurt of losing is not something so scary that you should flee the challenge for fear of it.  It's not so scary that you have to carefully avoid feeling it, or refuse to admit that you lost and lost hard.  Losing is supposed to hurt.  If it didn't hurt you wouldn't be a Competitor.  And there's no Competitor who never knows the pain of losing.  Now get out there and win."

Cultivate a habit of confronting challenges—not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you.  I recently read of a certain theist that he had defeated Christopher Hitchens in a debate (severely so; this was said by atheists).  And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate.  This seemed like someone I wanted to test myself against.  Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out.  Note that this is not self-handicapping in the classic sense—if the debate is indeed arranged (I haven't yet heard back), and I do not prepare, and I fail, then I do lose those stakes of myself that I have put up; I gain information about my limits; I have not given myself anything I consider an excuse for losing.

Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost.  In that case you make everything as easy for yourself as possible.  To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.

A subtler form of underconfidence is losing your forward momentum—amid all the things you realize that humans are doing wrong, that you used to be doing wrong, of which you are probably still doing some wrong.  You become timid; you question yourself but don't answer the self-questions and move on; when you hypothesize your own inability you do not put that hypothesis to the test.

Perhaps without there ever being a watershed moment when you deliberately, self-visibly decide not to try at some particular test... you just.... slow..... down......

It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...

There's not enough hope of triumph to inspire you to try hard...

When you consider doing any new thing, a dozen questions about your ability at once leap into your mind, and it does not occur to you that you could answer the questions by testing yourself...

And having read so much wisdom of human flaws, it seems that the course of wisdom is ever doubting (never resolving doubts), ever the humility of refusal (never the humility of preparation), and just generally, that it is wise to say worse and worse things about human abilities, to pass into feel-good feel-bad cynicism.

And so my last piece of advice is another perspective from which to view the problem—by which to judge any potential habit of thought you might adopt—and that is to ask:

Does this way of thinking make me stronger, or weaker?  Really truly?

I have previously spoken of the danger of reasonableness—the reasonable-sounding argument that we should two-box on Newcomb's problem, the reasonable-sounding argument that we can't know anything due to the problem of induction, the reasonable-sounding argument that we will be better off on average if we always adopt the majority belief, and other such impediments to the Way.  "Does it win?" is one question you could ask to get an alternate perspective.  Another, slightly different perspective is to ask, "Does this way of thinking make me stronger, or weaker?"  Does constantly reminding yourself to doubt everything make you stronger, or weaker?  Does never resolving or decreasing those doubts make you stronger, or weaker?  Does undergoing a deliberate crisis of faith in the face of uncertainty make you stronger, or weaker?  Does answering every objection with a humble confession of your fallibility make you stronger, or weaker?

Are your current attempts to compensate for possible overconfidence making you stronger, or weaker?  Hint:  If you are taking more precautions, more scrupulously trying to test yourself, asking friends for advice, working your way up to big things incrementally, or still failing sometimes but less often than you used to, you are probably getting stronger.  If you are never failing, avoiding challenges, and feeling generally hopeless and dispirited, you are probably getting weaker.

I learned the first form of this rule at a very early age, when I was practicing for a certain math test, and found that my score was going down with each practice test I took, and noticed going over the answer sheet that I had been pencilling in the correct answers and erasing them.  So I said to myself, "All right, this time I'm going to use the Force and act on instinct", and my score shot up to above what it had been in the beginning, and on the real test it was higher still.  So that was how I learned that doubting yourself does not always make you stronger—especially if it interferes with your ability to be moved by good information, such as your math intuitions.  (But I did need the test to tell me this!)

Underconfidence is not a unique sin of rationalists alone.  But it is a particular danger into which the attempt to be rational can lead you.  And it is a stopping mistake—an error which prevents you from gaining that further experience which would correct the error.

Because underconfidence actually does seem quite common among aspiring rationalists whom I meet—though rather less common among rationalists who have become famous role models—I would indeed name it third among the three besetting sins of rationalists.

 

Part of the sequence The Craft and the Community

Next post: "Well-Kept Gardens Die By Pacifism"

Previous post: "My Way"

Comments


I wonder if the decline of apprenticeships has made overconfidence and underconfidence more common and more severe.

I'm not a history expert, but it seems to me that a blacksmith's apprentice 700 years ago wouldn't have had to worry about over/underconfidence in his skill. (Gender-neutral pronouns intentionally not used here!) He would have known exactly how skilled he was by comparing himself to his master every day, and his master's skill would have been a known quantity, since his master had been accepted by a guild of mutually recognized masters.

Nowadays, because of several factors, calibrating your judgement of your skill seems to be a lot harder. Our education system is completely different, and regardless of whatever else it does, it doesn't seem to be very good at providing reliable feedback to its students, or at getting them to understand the importance of that feedback and respond accordingly. Our blacksmith's apprentice (let's call him John) knows when he's screwed up - the sword or whatever that he's made breaks, or his master points out how it's flawed. And John knows why this is important - if he doesn't fix the problem, he's not going to be able to earn a living.

Whereas a modern schoolkid (let's call him Jaden) may be absolutely unprepared to deal with math, but he doesn't know exactly how many years he's behind (it's hard enough to get this information in aggregate, and it seems to be rarely provided to the students themselves on an individual basis - no one is told "you are 3 years behind where you ought to be"). And Jaden has absolutely no clue why that matters, since the link between math and his future employment isn't obvious to him, and no one's explaining it to him. (School isn't for learning; as Paul Graham has explained, "Officially the purpose of schools is to teach kids. In fact their primary purpose is to keep kids locked up in one place for a big chunk of the day so adults can get things done. And I have no problem with this: in a specialized industrial society, it would be a disaster to have kids running around loose.")

Another modern schoolkid (let's call her Jaina) may be really skilled at math, but testing won't indicate this strongly enough (it works both ways; tests saturate at the high end - especially if they're targeting a low level of achievement for the rest of the class - and "you are 3 years ahead of everyone else in this room" is not feedback that is commonly given). And there's a good chance it won't be obvious to her how important this is, and how important becoming even more skilled is. And if she ends up being underconfident in her ability, and the feedback loop ("I know how skilled I am, I know why becoming stronger is important, and I know what I need to do") isn't established, then instead of learning plasma physics and working on ITER or DEMO, she goes into marketing or something. Maybe doing worthy things, but not being as awesome as she could have been.

My point, after this wondering, is that I agree with this post, and want to elaborate: structuring what you do so that you test yourself in the process of doing it is a good way to establish a feedback loop that increases your skill and the accuracy of your confidence in it. I find nothing wrong with the debating example in this post, but I worry that it makes self-testing sound like something that you should go out and do, separate from your everyday work. (Part of this, I think, is due to Eliezer's very unusual occupation.) My usual self-testing example is something like "can I write this program correctly on the very first try?". That's a hard challenge, integrated into my everyday work. Successfully completing it, or coming close, has allowed me to build up my skill ("the compiler in my head") and avoid the danger of underconfidence.

A friend of mine, normal in most ways, has exceptionally good mental imagery. One time she visited my house and saw a somewhat complex 3-piece metalwork puzzle in my living room; she thought about it later that evening after she had left, and was able to solve it within moments of picking it up when she visited a second time. At first I was amazed at this, but I soon became more amazed that she didn't find this odd, and that no one had ever realized she had any particular affinity for this kind of thing in all the time she'd been in school. I'm curious as to how many cognitive skills like this there are to excel at and if many people are actually particularly good at one or many of them without realizing it due to a lack of good tests for various kinds of cognition.

My usual self-testing example is something like "can I write this program correctly on the very first try?". That's a hard challenge, integrated into my everyday work.

I should try to remember to try this the next time I have a short piece of code to write. Furthermore, it's the sort of thing that makes me slightly uncomfortable and is therefore easy to forget, so I should try harder to remember it.

In general, this sort of thing seems like a very useful technique if you can do it without endangering your work. Modded parent up.

Without risk, there is no growth.

If your practice isn't making you feel scared and uncomfortable, it's not helping. Imagine training for a running race without any workouts that raise your heart rate and make you breathe hard.

Feeling out of your comfort zone and at risk of failure is something everybody should seek out on a regular basis.

it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that

I urge you to prepare properly. Not only Hitchens but Richard Carrier and several other atheists have been humiliated in debate with him, by their own admission. Winning at all is challenge enough, and would be a great service to the world. Given how much of a blow you would find it to lose having fully prepared, I urge you to reconsider whether you're self-handicapping.

Scientists are frequently advised to never participate in a live debate with a creationist. This is because being right has absolutely nothing to do with winning.

"Debating creationists on the topic of evolution is rather like playing chess with a pigeon - it knocks the pieces over, craps on the board, and flies back to its flock to claim victory." -- Scott D. Weitzenhoffer

Debates are not a rationality competition. They're a Dark Arts competition, in which the goal is to use whatever underhanded trick you can come up with in order to convince somebody to side with you. Evidence doesn't matter, because it's trivial to simply lie your ass off and get away with it.

The only kind of debates worth having are written debates, in which, when someone tells a blatant lie, you can look up the truth somewhere and take all the space you need to explain why it's a lie - and "cite your sources, or you forfeit" is a reasonable rule.

Indeed. Association fallacy. Eliezer might not think much of his loss, but it would still be seen by people as a loss for "the atheists" and a victory for "the theists". Debate to win!

Who is this theist? I'm interested in watching these debates. (though obviously without knowledge of the specific case, I agree with ciphergoth. It's not just about you, it's about whoever's watching.)

I agree with ciphergoth's guess.

Eliezer: I agree with ciphergoth and Yvain. Debating, at least as far as the Theist Who (Apparently) Must Not Be Named is concerned, is a performance art more than it is a form of intellectual inquiry, and unless you've done a lot of it you run the severe risk of getting eaten by someone who has, especially if you decide to handicap yourself. If you engage in such a debate, the chances are that at least some people will watch or hear it, or merely learn of the result, and change their opinions as a result. (Probably few will change so far as to convert or deconvert: maybe none. Many will find that their views become more or less entrenched.)

What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?

(For what it's worth, I think it's rather unlikely that TTWMNBN will agree to a Bloggingheads-style debate. He would want it to be public. And he might decide that Eliezer isn't high-enough-profile to be worth debating. Remember: for him, among other things, this is propaganda.)

[EDITED a few minutes after posting to remove the explicit mention of the theist's name]

I urge you to prepare properly. Not only Hitchens but Richard Carrier and several other atheists have been humiliated in debate with him, by their own admission. Winning at all is challenge enough, and would be a great service to the world. Given how much of a blow you would find it to lose having fully prepared, I urge you to reconsider whether you're self-handicapping.

Eliezer will be humiliated. Even if Eliezer prepares for the debate he will still lose. Eliezer spends too much time thinking rationally for him to be a match for a master debater. I've seen him on Bloggingheads. He doesn't spend nearly enough energy producing the kind of bullshit you are supposed to throw together if you want to be considered victorious in a debate.

Unfair debate proposal

You want a debate in which the tables are tilted against you? I see a way to do that which doesn't carry the risks of your current proposal.

A bunch of us get together on an IRC channel and agree to debate you. We thrash out our initial serve; we then spring the topic and our initial serve on you. You must counter immediately, with no time to prepare. We then go away and mull over your counter, and agree a response, which you must again immediately respond to.

We can give ourselves more speaking time than you in each exchange, too, if you want to tilt the tables further (I'm imagining the actual serves and responses being delivered as video).

Since Eliezer won't prepare by watching earlier debates, one solution could be to just use arguments from the theist's past debates in a simulated debate. As Eliezer prefers, he wouldn't prepare and would have to answer questions immediately.

There are two drawbacks: first, it would just be "us" evaluating whether Eliezer performed well (but then, debate performance is always somewhat subjective); second, we would lose the interaction of question, response, and follow-up question.

Nevertheless, Eliezer's off-the-cuff responses to the theist's past questions could be informative.

It sounds as though you're viewing the debate as a chance to test your own abilities at improvisational performance. That's the wrong goal. Your goal should be to win.

"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him."

By increasing the challenge the way you suggest, you may very well be acting rationally toward the goal of testing yourself, but you're not doing all you can to cut the opponent. To rationally pursue winning the debate, there's no excuse for not doing your research.

In choosing not to try for that, you'll end up sending the message that rationalists don't play to win. You and I know this isn't quite accurate -- what you're doing is more like a rationalist choosing to lose a board game, because that served some other, real purpose of his -- but that is still how it will come across. Do you consider this to be acceptable?

I've found some of the characterizations of Craig's arguments and debate style baffling.

When he debates the existence of god, he always delivers the same five arguments (technically, it's four: his fifth claim is that god can be known directly, independently of any argument). He develops these arguments as carefully as time allows, and defends each of his premises. He uses the kalam cosmological argument, the fine tuning argument, the moral argument, and the argument from the resurrection of Jesus. This can hardly be characterized as dumping.

Also, his arguments are logically valid; you won't see any 'brain teaser, therefore god!' moves from him. He's not only a 'theologian'; he's a trained philosopher (he actually has two earned PhDs, one in philosophy and one in theology).

Finally, Craig is at his best when it comes to his responses. He is extremely quick, and is very adept at both responding to criticisms of his arguments, and at taking his opponent's arguments apart.

Debating William Lane Craig on the topic of god's existence without preparation would be as ill advised as taking on a well trained UFC fighter in the octagon without preparation. To extend the analogy further, it would be like thinking it's a good idea because you've won a couple of street fights and want to test yourself.

Cultivate a habit of confronting challenges - not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you.

You may be interested to learn that high-end mountaineers apply exactly the strategy you describe to challenges that might kill them outright. Mick Fowler even states it explicitly in his autobiography - "success every time implies that one's objectives are not challenging enough".

A large part of mountaineering appears to be about identifying the precise point where your situation will become unrecoverable, and then backing off just before you reach it. On the other hand, sometimes you just get unlucky.

And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate. This seemed like someone I wanted to test myself against. Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out.

This really bothers me, because you weren't just risking your own public humiliation; you were risking our public humiliation. You were endangering an important cause for your personal benefit.

The cause of rationalism does not rise and fall with Eliezer Yudkowsky.

If you fear the consequences of being his partisan, don't align yourself with his party. If you are willing to associate yourself and your reputation with him, accept the necessary consequences of having done so.

Phil might be wrong to phrase his objection in terms of "our public humiliation". But it's still the case that there are things at stake beyond Eliezer Yudkowsky's testing himself. And those are things we all care about.

A slogan I like is that failure is OK, so long as you don't stop trying to avoid it.

While reading this post, a connection with Beware of Other-Optimizing clicked in my mind. Different aspiring rationalists are (more) susceptible to different failure modes. From Eliezer's previous writings it had generally seemed like he was more worried about the problem of standards (for oneself) that are too low -- that is, not being afraid enough of failure -- than about the opposite error, standards that are too high. But I suspect that's largely specific to him; others may need to worry more about being too afraid of failure. Hence I'm happy to see this post.

There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.

Were we ever told the other two?

Yes, by Jeffreyssai:

"Three flaws above all are common among the beisutsukai. The first flaw is to look just the slightest bit harder for flaws in arguments whose conclusions you would rather not accept. If you cannot contain this aspect of yourself then every flaw you know how to detect will make you that much stupider. This is the challenge which determines whether you possess the art or its opposite: Intelligence, to be useful, must be used for something other than defeating itself."

"The second flaw is cleverness. To invent great complicated plans and great complicated theories and great complicated arguments - or even, perhaps, plans and theories and arguments which are commended too much by their elegance and too little by their realism. There is a widespread saying which runs: 'The vulnerability of the beisutsukai is well-known; they are prone to be too clever.' Your enemies will know this saying, if they know you for a beisutsukai, so you had best remember it also. And you may think to yourself: 'But if I could never try anything clever or elegant, would my life even be worth living?' This is why cleverness is still our chief vulnerability even after its being well-known, like offering a Competitor a challenge that seems fair, or tempting a Bard with drama."

"The third flaw is underconfidence, modesty, humility. You have learned so much of flaws, some of them impossible to fix, that you may think that the rule of wisdom is to confess your own inability. You may question yourself so much, without resolution or testing, that you lose your will to carry on in the Art. You may refuse to decide, pending further evidence, when a decision is necessary; you may take advice you should not take. Jaded cynicism and sage despair are less fashionable than once they were, but you may still be tempted by them. Or you may simply - lose momentum."

We have lots of experimental data showing overconfidence; what experimental data show a consistent underconfidence, in a way that a person could use that data to correct their error? This would be a lot more persuasive to me than the mere hypothetical possibility of underconfidence.

Underconfidence is surely very common in the general population. It's usually referred to as "shyness", "tentativeness", "depression" - or by other names besides "underconfidence". These people form part of the audience for the self-help books that encourage people to be more confident.

E.g., see "The trouble with overconfidence" on PubMed.

I skimmed several debates with WLC yesterday, referenced here. His arguments are largely based on one and the same scheme:

  1. Everything must have a cause
  2. Here's a philosophical paradox for you, that can't be resolved within the world
  3. Since despite the paradox, some fact still holds, it must be caused by God, from outside the world

(Or something like this; step 3 is a bit more subtle than I made it out to be.) What's remarkable is that even though he uses a nontrivial number of paradoxes for step 2, almost all of them were explicitly explained in the material on Overcoming Bias. At least, I was never confused while listening to his arguments, whereas some of his opponents were, on some of the arguments. I don't see WLC as possessing magical oratorical skills, but he bends the facts on occasion, and is very careful in what he says. Also, his presentations are too debugged to be alive, so they come across as unnatural.

The general meta-counterargument would be to break this scheme, since he could present some paradox (e.g. anthropics) without a clear known resolution, and push his line through it. I'm sure he knows lots of paradoxes, so there is a real danger of encountering an unknown one.

He knows Bayesian math. On one occasion, he replied to the claim that there is no evidence for God by saying that this is only relevant if you expect more evidence given that God exists than given that he doesn't; if you expect no evidence in either case, the absence of evidence can't lower the prior probability. This, of course, contradicts the rest of his arguments, but I guess he'd say that those arguments are some different kind of evidence.
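(That reply is just the likelihood form of Bayes' theorem: an observation only moves you if the competing hypotheses assign it different probabilities. A minimal sketch with illustrative numbers, not anything from the actual debate:

    def posterior(prior, p_obs_given_h, p_obs_given_not_h):
        """Bayes' theorem: P(H | observation) from a prior and the
        probability of the observation under each hypothesis."""
        joint_h = prior * p_obs_given_h
        joint_not_h = (1 - prior) * p_obs_given_not_h
        return joint_h / (joint_h + joint_not_h)

    # If "no evidence found" is equally likely whether or not H holds,
    # the posterior equals the prior -- no update:
    print(posterior(0.5, 0.2, 0.2))   # 0.5
    # But if evidence were expected under H, failing to find it counts
    # against H -- absence of expected evidence is evidence of absence:
    print(posterior(0.5, 0.2, 0.8))   # 0.2

So the move only works for him when the two likelihoods really are equal, which is precisely what his other arguments deny.)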

Many of WLC's arguments have this rough structure:

  • Here's a philosophical brain teaser. Doesn't it make your head spin?
  • Look, with God we can shove the problem under the carpet
  • Therefore, God.

That's why I think that in order to debate him you have to explicitly challenge the idea that God could ever be a good answer to anything; otherwise, you disappear down the rabbit hole of trying to straighten out the philosophical confusions of your audience.

gjm asks wisely:

What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?

The central thrust of Eliezer's post is a true and important elaboration of his concept of improper humility, but doesn't it overlook a clear and simple political reality? There are reputational effects to public failure. It seems clear that those reputational effects often outweigh whatever utility is gained from an empirical "test" of one's own abilities: this is why international relations theory isn't a rigorous empirical science. We live in an irrational kaleidoscope of power, driven by instinct and emotion, ordered only fleetingly by rhetoric and guile. In this situation, we need to keep our cards close to our chest if we want to win.

Mulciber adds something along the same lines:

By increasing the challenge the way you suggest, you may very well be acting rationally toward the goal of testing yourself, but you're not doing all you can to cut the opponent. To rationally pursue winning the debate, there's no excuse for not doing your research.

And Eliezer does seem to approve of this mode of thinking in some cases:

Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost. In that case you make everything as easy for yourself as possible. To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.

So, to sum up my concern, how is this principle of pragmatism reconciled to your choice not to prepare? Isn't it best to test yourself in the peace and safety of your dojo, or in circumstances where the stakes are not high, and use every means available to resist on the actual field of battle?

What is the danger of overconfidence?

Passing up opportunities. Not doing thing you could have done, but didn't try (hard enough).

Did you mean "danger of underconfidence"?

This post reminds me of Aristotle's heuristics for approaching the mean when one tends towards the extremes:

"That moral virtue is a mean, then, and in what sense it is so, and that it is a mean between two vices, the one involving excess, the other deficiency, and that it is such because its character is to aim at what is intermediate in passions and in actions, has been sufficiently stated. Hence also it is no easy task to be good. For in everything it is no easy task to find the middle, e.g. to find the middle of a circle is not for every one but for him who knows; so, too, any one can get angry- that is easy- or give or spend money; but to do this to the right person, to the right extent, at the right time, with the right motive, and in the right way, that is not for every one, nor is it easy; wherefore goodness is both rare and laudable and noble.

Hence he who aims at the intermediate must first depart from what is the more contrary to it, as Calypso advises-

Hold the ship out beyond that surf and spray.

For of the extremes one is more erroneous, one less so; therefore, since to hit the mean is hard in the extreme, we must as a second best, as people say, take the least of the evils; and this will be done best in the way we describe. But we must consider the things towards which we ourselves also are easily carried away; for some of us tend to one thing, some to another; and this will be recognizable from the pleasure and the pain we feel. We must drag ourselves away to the contrary extreme; for we shall get into the intermediate state by drawing well away from error, as people do in straightening sticks that are bent." (NE, II.9)

Did you write a cost function down for the various debate outcomes? The skew will inform whether overconfidence or underconfidence should be weighted differently.
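A minimal sketch of what such a cost function might look like (the utilities are hypothetical placeholders, not anyone's actual numbers):

    # Suppose a win is worth +10 and a public loss costs 50.
    def expected_value(p_win, win_value=10.0, loss_cost=50.0):
        return p_win * win_value - (1 - p_win) * loss_cost

    for p in (0.5, 0.7, 0.9):
        print(f"p(win) = {p:.1f}: EV = {expected_value(p):+.1f}")

    # Break-even is p(win) = 50/60, about 0.83: with a skewed cost
    # function, a merely-probable victory still has negative expected
    # value, so the bar for debating unprepared rises accordingly.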

Eliezer, does your respect for Aumann's theorem incline you to reconsider, given how many commenters think you should thoroughly prepare for this debate?

And conversely, as Ari observes:

If you’ve never hit the ground while skydiving, you’re opening your parachute too early.