One of the main Eliezer Sequences, consisting of dozens of posts, is How To Actually Change Your Mind. Looking at all those posts, one gets the feeling that changing one’s mind must be Really Hard. But maybe it doesn't have to be that hard. I think it would be much easier to change your mind if you instinctively thought that your best ideas are almost certainly still far from the truth. Most of us are probably aware of the overconfidence bias, but there hasn't been much discussion of how to practically reduce overconfidence in our own ideas.

I offer two suggestions in that vein for your consideration.

1. Take the outside view. Recall famous scientists and philosophers of the past, and how far off from the truth their ideas were, and yet how confident they were in their ideas. Realize that they are famous because, in retrospect, they were more right than everyone else of their time, and there are countless books filled with even worse ideas. How likely is it that your ideas are the best of our time? How likely is it that the best ideas of our time are fully correct (as opposed to just a bit closer to the truth)?

2. Take a few days to learn some cryptology and then design your own cipher. Use whatever tricks you can find and make it as complicated as you want. Feel your confidence in how unbreakable it must be (at least before the Singularity occurs), and then watch it get taken apart by an expert in minutes. Now feel the sense of betrayal against your “self-confidence module” and vow “never again”.
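To make suggestion 2 concrete, here is roughly what such a first attempt often boils down to once the decorations are stripped away: repeating-key XOR, a Vigenère variant. This is only an illustrative Python sketch (the message and key are invented for the example); an analyst breaks this class of cipher in minutes via key-length detection and frequency analysis.

```python
# A naive cipher of the sort beginners design: XOR the plaintext with a
# short key, repeating the key as needed (a Vigenere variant).
# It feels unbreakable; it falls to classical frequency analysis.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypt or decrypt: XOR with the repeated key (XOR is its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ct = xor_cipher(b"meet me at the usual place", b"SECRET")
assert xor_cipher(ct, b"SECRET") == b"meet me at the usual place"
```

The repeating key means every sixth byte is XORed with the same value, and that regularity is exactly what frequency analysis exploits.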

25 comments

At the same time, the ideas you invent or encounter, which often won't be correct, clear, or useful, need to be taken seriously in order to see how they fare in your actual understanding of the world and not just in a compartment for ridiculous things. So it's important to keep "taking seriously" and "believing strongly" or "believing useful" from contaminating each other, or maybe just to take everything seriously (while retaining the ability to tell what's plausible/useful to what extent).

So it's important to keep "taking seriously" and "believing strongly" or "believing useful" from contaminating each other, or maybe just to take everything seriously (while retaining the ability to tell what's plausible/useful to what extent).

I tried to convince my roommate that my comparative advantage was in being wrong about a lot of different things, since everybody else spends time being wrong about stupid boring things but would still never change their mind, and anyway, in order to be wrong about something you have to have had anticipations and counterable intuitions in the first place, which is an important virtue. I think my arguments were rather persuasive. Anyway, my point is that once you get to the point where you're not afraid of having been wrong or of changing your mind, then "taking seriously" becomes more useful, more fun, and less dangerous. I'm reasonably sure it's not a problem, since if you can't change your mind anyway you have bigger things to worry about.

I think the largest costs of sticking your neck out and taking things seriously are social ones, which is unfortunate since talking to smart people is the best way to check yourself.

ETA: Hmuh... huge typical mind fallacy alert on what I just said, I do not actually know my audience well enough to have written the above so confidently. ;)

  1. Take the outside view. Recall famous scientists and philosophers of the past, and how far off from the truth their ideas were, and yet how confident they were in their ideas.

I think part of the reason people on LW tend to avoid using the outside view is that towards the end of the Sequences, Eliezer developed a hostility towards the outside view, probably because people were using outside-view-based arguments against the Singularity. This is best illustrated in this post, which I consider borderline anti-epistemology, since it can serve as a universal counterargument against anyone invoking the outside view.

Eliezer developed a hostility towards the outside view, probably because people were using outside-view-based arguments against the Singularity

Eliezer developed a hostility towards the outside view because people were misusing the outside view, entirely missing the point and making absolutely ridiculous claims based on superficial similarities.

This is best illustrated in this post, which I consider borderline anti-epistemology, since it can serve as a universal counterargument against anyone invoking the outside view.

The charge of anti-epistemology is not valid. People could apply the reasoning from that post incorrectly, in the same way they could apply his outside view post incorrectly, yet you cannot thereby (correctly) label the warning anti-epistemic. Using "Outside View!" as a conversation halter is a bad thing, for the reasons specified. Most relevant is the unpacking of the reasoning underlying outside view considerations - see the bottom half of the post.

Eliezer developed a hostility towards the outside view because people were misusing the outside view, entirely missing the point and making absolutely ridiculous claims based on superficial similarities.

Of the arguments he mentions, Robin Hanson is trying to fit a line through too few data points, so while his argument is flawed, it's not his use of the outside view that's the real problem. The argument made by taw is mostly correct, even if he somewhat overstates his case; in particular, the success rate for the reference class of beliefs in the coming of a new world, be it good or evil (depending on exactly what you mean by "new world"), is slightly above 0%.

Most relevant is the unpacking of the reasoning underlying outside view considerations - see the bottom half of the post.

He appears to be using the narrowest possible argument for the outside view he can get away with, thus ruling out a lot of valid applications of the outside view. A strict reading would even rule out Wei Dai's application in the OP.

The argument made by taw is mostly correct, even if he somewhat overstates his case

If my memory serves me, the constant misuse of (and borderline ranting about) 'outside view' by taw in particular did far more to discourage the appeal of 'outside view' references than anything Eliezer may have said. A preface of 'outside view' does not transform an analogy into a bulletproof argument.

It's sad and true. For instance, automatically thinking of reference classes for beliefs and strategies can be useful, but I don't see it applied often enough. When it comes to something like (strategies about / popularizing interest in) the predictability of the Singularity, for example, people bring up objections like "you'll never be able to convince anyone that something big and potentially dangerous might happen based on extrapolations of current trends", but the outside view response "then explain global warming" actually narrows the discussion and points out features of the problem that might not have been obvious.

You can use outside view arguments, just not connotations of "outside view".

Take a few days to learn some cryptology and then design your own cipher. [...] and then watch it get taken apart by an expert in minutes.

That needs an expert cryptanalyst. Those guys tend to be busy.

Cryptography has mailing lists, and hobbyists, and open source influences, and IRC channels. You, as a rank beginner, are not going to get to a level where you can design a crypto scheme that a dilettante hobbyist with the equivalent of a few university courses and a lot of time can't beat. Not in a few days, probably not even if you're Terry Tao.

I disagree. It's actually remarkably easy to create cryptography schemes that are, for all intents and purposes (barring the resolution of several outstanding problems in cryptography and computer science), impossible to break, if you know anything about RSA, lattice methods, etc.

Unless "a lot of time" means the age of the universe (precluding functional quantum computers before then).

I see a lot of broken systems designed by people who've read Applied Cryptography.

Take a few days to learn some cryptology and then design your own cipher.

I didn't take a few days, but what about XORing your plaintext with the binary digits of pi, starting at some digit specified by the key? I think this is in P because of the BBP formula.
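For the curious, a minimal Python sketch of this proposal (assuming the mpmath package for pi's bits; a real BBP implementation would jump straight to the key-th binary digit, which is what would make keystream generation efficient):

```python
# Toy "pi pad": XOR the plaintext with the binary digits of pi,
# starting at the digit index given by the key.  Insecure - see below.
from mpmath import mp

def pi_bits(start, n):
    """n binary digits of pi's fractional part, beginning at position start."""
    mp.prec = start + n + 64      # enough working precision, plus guard bits
    frac = mp.pi - 3              # fractional part of pi
    bits = []
    for _ in range(start + n):
        frac *= 2
        b = int(frac)             # peel off the next binary digit
        bits.append(b)
        frac -= b
    return bits[start:]

def pi_xor(data: bytes, key: int) -> bytes:
    """Encrypt/decrypt by XORing with pi's bits from index key onward."""
    out = bytearray(data)
    for i, b in enumerate(pi_bits(key, 8 * len(data))):
        out[i // 8] ^= b << (7 - i % 8)
    return bytes(out)

ct = pi_xor(b"attack at dawn", key=12345)
assert pi_xor(ct, key=12345) == b"attack at dawn"   # XOR is self-inverse
```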

Off the top of my head: given standard assumptions about what is considered a valid attack, there's an attack that takes time on the order of the square root of the size of the keyspace.

NB: if I don't try to break your proposal, don't think it's secure - cryptanalysis is generally time-consuming work.

Wei Dai was right; I do feel surprised. Can you give me some more details on how this would be done?

Clue: this attack has nothing to do with the fact that you used pi; it would work on any cipher that says "the key is the index into this infinite stream". The attack is here, though I encourage you to try to work it out for yourself first.
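For readers who give up: one way to see the square-root claim above is a time-memory tradeoff that works against any "key = starting index into a public stream" design. The sketch below is my own illustration, not necessarily the linked attack; `toy_stream` stands in for random access to the stream's digits (which BBP provides for pi).

```python
# Generic sqrt(N) attack on any cipher whose key is a starting index
# into a fixed public stream.  Given ~sqrt(N) + w known keystream digits
# (from known plaintext), it recovers the key with ~sqrt(N) table entries
# and ~sqrt(N) lookups, instead of trying all N indices.
import math

def recover_key(keystream, stream, keyspace_size, w=8):
    s = math.isqrt(keyspace_size) + 1
    # Offline: store a w-digit window at every s-th stream position.
    table = {stream(i, w): i for i in range(0, keyspace_size + s, s)}
    # Online: slide a w-digit window along the known keystream.  Stream
    # positions key..key+s must cover a multiple of s, so some window hits.
    for j in range(len(keystream) - w + 1):
        i = table.get(keystream[j:j + w])
        if i is not None:
            return i - j          # keystream[j] sits at stream position key+j
    return None

# Demo with a tiny "public stream" (pi's first digits) and keyspace of 50:
digits = ("31415926535897932384626433832795028841971693993751"
          "05820974944592307816406286208998628034825342117067")
toy_stream = lambda i, w: digits[i:i + w]
key = 23
known = digits[key:key + 20]      # keystream recovered via known plaintext
assert recover_key(known, toy_stream, keyspace_size=50) == key
```

With a keyspace of size N, both the table and the required known keystream are on the order of sqrt(N), which is where the square-root figure comes from.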

NB 2: do NOT fix your design and present it again. That would be COMPLETELY THE WRONG LESSON to draw. See Memo to the Amateur Cipher Designer.

do NOT fix your design and present it again

I didn't really expect it to work; I just wanted to try this because Wei Dai said that it might teach me something. I fully realized that my surprise at its failure was not a rational emotion; I just felt that it was important to acknowledge this surprise in order to help my emotions better reflect my rational thought in the future. That article was interesting, though (the memo, not the attack; I haven't read that yet).

That's fine :-) it's just that having spotted an attack and dashed off a comment, I get nervous that someone will draw the wrong inference if I don't cryptanalyze a proposal. Thanks for setting my mind at rest!

Nice topic.

Is there a straightforward way to do 2 that you'd recommend others attempt, or are you relating something you've experienced and found reduced your confidence in general?

Perhaps 2 is a special case of "find something about which you can generate a feeling of high confidence but where you will subsequently be wrong with high probability". You might interpret this as "experience being a crank".

I was an intern in the cryptography research group of a large technology company. My boss got an email from someone in product development wanting us to review a cipher they had designed and intended to use in a product. Of course he gave them the standard reply that any cipher not designed by an expert is almost certainly weak (and that they should just use a standard cipher that has gone through years of review by the entire cryptography community), but they refused to believe that their cipher was weak, so we had to actually break it. My boss didn't want to bother one of the researchers, so I got the job, and broke it after a couple of days. (An expert would literally have taken minutes, but none of the researchers specialized in symmetric-cipher cryptanalysis.)

If you study cryptography for a while you'll hear and see plenty of cautionary tales like this, so you can pretty much absorb the lesson without actually being on the "receiving end" of it. If you're less patient but happen to work in a large technology company with a cryptography group that you can email, try that. :) Otherwise, yeah, 2 generalizes to "find something about which you can generate a feeling of high confidence but where you will subsequently be wrong with high probability".
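For comparison, "just use a standard cipher" in practice means reaching for a vetted high-level construction rather than inventing one. A sketch using Python's third-party cryptography package (one possible choice, not the only one):

```python
# Using a reviewed construction instead of a home-made cipher.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # fresh random key
f = Fernet(key)
token = f.encrypt(b"attack at dawn")     # authenticated encryption (AES-CBC + HMAC)
assert f.decrypt(token) == b"attack at dawn"
```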

design your own cipher

Tell me about it!

Wait, where are the experts I can go to in order to test a cipher I've designed? How do you arrange that? And what are you expected to provide for the cryptanalyst?

One of the following must be true: a) I misunderstood the rules of game 2, b) game 2 is trivially broken, or c) an expert can somehow extract, in minutes, a message that you've XORed with a random pad before destroying the pad.

I'm guessing (a). First, One-Time Pads have already been invented. Second, you're unlikely to find someone willing to spend the time to extract plaintext from an unspecified ciphertext; the idea was to give your full cipher design up for scrutiny. After all, if it's truly secure, it will remain secure even if the attacker knows how it works.
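On point c) specifically: a true one-time pad is the exception that proves the rule. If the pad is uniformly random, used once, and destroyed, the ciphertext is consistent with every plaintext of the same length, so no expert can extract the message. A quick Python illustration (messages invented for the example):

```python
# Perfect secrecy of the one-time pad: any same-length plaintext is
# consistent with a given ciphertext under *some* pad, so the ciphertext
# alone reveals nothing about the message.
import os

msg = b"attack at dawn"
pad = os.urandom(len(msg))                         # truly random, used once
ct = bytes(m ^ p for m, p in zip(msg, pad))

guess = b"retreat at ten"                          # any 14-byte message
fake_pad = bytes(c ^ g for c, g in zip(ct, guess)) # a pad that "decrypts" to it
assert bytes(c ^ p for c, p in zip(ct, fake_pad)) == guess
```

The catch, and why this doesn't contradict the rest of the thread: the pad must be as long as the message, truly random, and never reused, which is exactly what amateur designs fail to achieve.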

But I thought the whole point of public review of ciphers was that amateur-designed ciphers have a high risk of being broken by experts, even if the experts aren't told what the cipher is?