I want to know what good rationality exercises are.

I was just on a call with Liron and PhilH, hanging out after the weekly LessWrong weekend event, and we discussed exercises that could happen on LessWrong.

Here is the list we generated:

  • Thinking Physics
  • Fermi Estimates
  • Project Euler
  • Calibration Training
  • Basic probabilistic reasoning (see the worked example just after this list)
  • Basic have-you-read-the-sequences knowledge test (e.g. "Which of the following is an example of 'belief as attire'?")
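
To make the "basic probabilistic reasoning" item concrete, here is a minimal sketch of the classic mammography problem (the one used in the Sequences' introduction to Bayes' theorem). The numbers are the stock illustrative ones, not real medical statistics:

```python
# The classic Bayes' theorem exercise: given a positive mammogram,
# what is the probability of cancer? Stock illustrative numbers only.

prevalence = 0.01       # P(cancer)
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
posterior = prevalence * sensitivity / p_positive

print(f"P(cancer | positive test) = {posterior:.1%}")  # ~7.8%
```

Most people intuitively guess a far higher number; that gap is exactly what this kind of exercise trains away.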

Another user on the call (whose name I forget) suggested it could be fun to have a daily Fermi Estimate on LessWrong, where everyone submits their number and the model they used to reach the number. I think this would be quite exciting.

Please write answers with other exercises that you think are or might be great for rationality training, some explanation of why you think it could be good, and a suggestion of how it could be incorporated into LessWrong. I'll probably add some of the above myself.


10 Answers

DirectedEvolution

180

Things that interest me:

  • Let's go exploring. Eliezer took a pretty low-bar activity (fan fic) and created something original (HPMOR). Why don't we pick some notorious areas of the internet where we think a little LW-style overthinking could go a long way?
  • A rational approach to cultivating imagination, creativity, and meditation. We have so many tools here for modeling questions of fact. Can't rationality help us develop the right side of the brain as well as the left?
  • Business ideas we could collaborate on, that hinge primarily on rational thinking, learning how to learn, and conscientiousness.

I would not participate in activities that boil down to arbitrary left-brain problem solving.

Matt Goldenberg

130

"Doing impossible things"

  • Get 100 strangers to show up at a specific place at a specific time.
  • Make 5,000 counterfactual dollars in a weekend.
  • Be featured in a major print publication in less than a month.
  • etc.

Adam Zerner

120

Answer: Writing Your Hypothetical Apostasy

See Write Your Hypothetical Apostasy on Overcoming Bias.

Imagine, if you will, that the world's destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete. The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved. The catch is that the jury consists of earlier stages of yourself (such as yourself as you were one year ago). Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form "trust me, I am older and know better" will be ineffective. Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed/partial/shallow/juvenile/crude/irresponsible/incomplete and generally inadequate your old cherished view is.

I'm not sure exactly how this fits into group rationality practice. I personally am always more motivated to write when it's something that I will publish, so having a place where we publish hypothetical apostasies could be useful for motivational reasons. It would also be useful because you'd get feedback on your thought process, although that point could be made for many other exercises.

Oh yeah, this one's great. Thanks for reminding me.

Adam Zerner

110

Answer: Check My Understanding

Here's how it'd work. Suppose I want to improve my understanding of Aumann's Agreement Theorem. I would write up my thoughts, doing my best to explain what I know about it. Then other people would comment on what I'm missing and where I went wrong.

This seems useful for a few different reasons:

  • As an author, the comments provide you with personalized feedback and allow you to "fill in the gaps".
  • As an author, the act of doing the initial write-up seems like it'd be very beneficial. Ditto for readers writing out their comments. (I have the Feynman Technique in mind.)
  • As a reader, you may have a decent understanding of Aumann's Agreement Theorem, but seeing it explained by a different author might help some things "click" for you (I have Non-Expert Explanation in mind).

Liron

90

I was thinking that if the sequences and other LW classics were a high school class, we could make something like an SAT subject test to check understanding of and fluency in the subject. That could then be a badge on the site and potentially a good credential to have in your career.

The kinds of questions could be like:

1.

If a US citizen has a legal way to save $500/year on their taxes, but it requires spending 1 hour a day filling out boring paperwork, 5 days a week, should they do it?

a. Virtually everyone should do it

b. A significant fraction (10-90%) of the population should do it

c. Virtually no one should do it

2.

With sufficient evidence and a rational deliberation process, is it possible to become sure that the Loch Ness Monster does/doesn't exist?

a. We CAN potentially become sure either way

b. We CAN'T potentially become sure either way

c. We can only potentially become sure that it DOES exist

d. We can only potentially become sure that it DOESN'T exist

I recall reading educational psych stuff about how the act of both 1) creating and 2) answering questions like this is a great way to deepen your understanding.
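
Neither sample question comes with an official answer key, but here is a minimal sketch of how the two might be worked; the prior and likelihood ratio in the second part are arbitrary numbers chosen for illustration:

```python
# Question 1: $500/year saved vs. 1 hour/day of paperwork, 5 days/week.
savings_per_year = 500       # dollars
hours_per_year = 1 * 5 * 52  # 260 hours/year
implied_wage = savings_per_year / hours_per_year
print(f"Implied wage: ${implied_wage:.2f}/hour")  # ~$1.92/hour, far below
# almost anyone's value of time, which points to answer (c).

# Question 2: each piece of evidence multiplies the odds on Nessie by a
# likelihood ratio, so the posterior approaches 0 or 1 without reaching it.
odds = 1.0               # illustrative 50/50 prior
likelihood_ratio = 10.0  # one strong piece of evidence
for n in (1, 5, 10):
    p = odds * likelihood_ratio**n / (1 + odds * likelihood_ratio**n)
    print(f"after {n:2d} strong observations: P = {p:.10f}")
# With finite evidence you get arbitrarily confident but never certain,
# which points to answer (b).
```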

Adam Zerner

80

Answer: Betting With Real Money

From the end of Inadequate Equilibria:

I don’t have good, repeatable exercises for training your skill in this field, and that’s one reason I worry about the results. But I can tell you this much: bet on everything. Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning. Or so I hope.

Eliezer seems to be referring to real money here. And I recall him talking elsewhere about how it is useful to put real money on the line.

This meshes with my experiences playing poker. It's one thing to study and learn that X is a mistake. It's another thing to make the mistake of X and lose a big pot because of it. There's something about losing real money that cements it in your head. And I'm not just referring to my own experiences. From talking to other poker players, it seems that this is the norm.

However, real money is a touchy subject and I'm not sure how we would actually pull this off. But I figure that there is still value in bringing it up.
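
Even without solving the real-money problem, the calibration half of Eliezer's advice can be practiced with play stakes. A minimal sketch, using a made-up record of bets, of scoring yourself with a Brier score:

```python
# Each entry is (stated probability, outcome), with outcome 1 if the
# claim came true. This record is invented for illustration.
bets = [
    (0.9, 1), (0.8, 1), (0.7, 0), (0.9, 1), (0.6, 0),
    (0.8, 0), (0.95, 1), (0.7, 1), (0.6, 1), (0.85, 1),
]

# Brier score: mean squared error of your stated probabilities.
# 0.0 is perfect; always answering 50% scores 0.25.
brier = sum((p - o) ** 2 for p, o in bets) / len(bets)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: of the claims rated 80%+, how many came true?
confident = [o for p, o in bets if p >= 0.8]
print(f"Claims at >=80%: {sum(confident)}/{len(confident)} came true")
```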

Betting with real money is definitely a useful way of probing at your own confidence (I don't do it much at all due to general underconfidence, but it's sure helped me nail down the feeling of being really sure of something), and a lot of my rationalist friends do it on a handshake-agreement basis. However, any way of formalizing this would turn LW (or whatever institution) into a gambling site, which is illegal :/

jacobjacob
There could be ways of making it legal given that we're a non-profit with somewhat academic interests. (By "making" I mean actually changing the law or getting a No-Action Letter.) Most people who do gambling online do it for profit, which is where things get tricky. 
Adam Zerner
There may be some creative non-formal solutions though.

  • On one end of the spectrum, you could have a token system and leave it up to the users to figure out actually exchanging money themselves (a lot of poker apps do this).
  • Getting less hands-on, you could do away with the tokens and just act as a matchmaker, getting two parties who want to make a bet in touch with each other; they could handle it from there.
  • Getting even less hands-on, you could just function as a place to discuss bets you may want to make in the real world, e.g. sports betting or stock picking (I guess there aren't too many examples of this).

Adam Zerner

80

Answer: Discussing Updates

See the Updates Thread. Basically, taking note of the belief updates you perform and discussing why you performed them. What did you previously believe, what do you currently believe, and why did the data you observed move you from there to here?
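
One way to make such an update writeup precise is to put numbers on the before, the after, and the strength of the evidence (in the Sequences' decibel framing). A minimal sketch with hypothetical numbers:

```python
import math

prior = 0.30            # what I previously believed
likelihood_ratio = 4.0  # P(data | hypothesis) / P(data | ~hypothesis)

# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

# Evidence strength in decibels: 10 * log10(likelihood ratio).
decibels = 10 * math.log10(likelihood_ratio)

print(f"Prior {prior:.0%} -> posterior {posterior:.0%} "
      f"({decibels:.1f} dB of evidence)")  # 30% -> 63%, 6.0 dB
```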

Gunnar_Zarncke

60

Making bets is a good exercise too. If you can't find other people to bet with, you can also make public predictions.

Daniel Kokotajlo

60

When I first read the sequences, I thought "What do I know and how do I think I know it?" was pretty banal and useless -- didn't everyone know that? Philosophy 101, question your beliefs, look for hidden assumptions, etc.

The older I get the more I come to think that no, not everyone knows this, and even the people who know it don't practice it enough. I'm not sure though.

I think of "What do I know and how do I think I know it?" as the "root cause" of essentially all other epistemic rationality - i.e. if you're sufficiently good at that one skill, all the others will follow naturally from it. Conversely, that suggests it's really difficult to get really good at it: if I'm missing any other epistemic rationality skill, it means I'm not good enough at "What do I know and how do I think I know it?".

I'd say the "obvious" version of the skill involves activities which look like questioning beliefs, looking for hidden assumptions...

Ben Pace

50

Answer: Fermi Estimates

Fermi estimates are attempts to answer a quantitative question using order-of-magnitude style reasoning. These are questions like "How many people fly on airplanes each day?" or "How many atoms are in my arm?". In contrast to things like calibration practice, these are much more generative, attempting to tie together parts of your world model to come up with a model that answers a question.
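
As an illustration of the format, here is a minimal sketch of the sort of model a submission might record for the airplane question; both inputs are order-of-magnitude guesses, not looked-up figures:

```python
# Fermi estimate: how many people fly on airplanes each day?
flights_per_day = 100_000    # guess: ~1e5 commercial flights worldwide
passengers_per_flight = 100  # guess: averaged over small and large aircraft

daily_passengers = flights_per_day * passengers_per_flight
print(f"~{daily_passengers:.0e} passenger-trips per day")  # ~1e7
```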

On LessWrong, this could be practically implemented by having a set of 100-1000 questions that users can do either in a weekend blitz, or spaced out over time. A user who got 100 correct (within a factor of 2x) could have a sign on their profile indicating that they completed this task. It could also be implemented as a daily/weekly question for users to answer and then compare notes on.

9 Comments

The CFAR Handbook has a lot of good ones.

According to a vague feeling of a couple of people I know, the CFAR handbook is tricky enough that reading it without doing CFAR could be dangerous.

It seems very plausible that you'd get more value out of them after having gone through CFAR. But it seems implausible that you'd get zero or negative value out of them without having gone through CFAR. At least in terms of expected value.

Nah, I don't think that's a real concern. Or at least I really don't see much danger in the things in there, and have worked a lot with it in the past.

I think this excerpt from Rationality: From AI to Zombies' preface says it all.

It was a mistake that I didn't write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples.

In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing which was that I didn't realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn't realize that part was the priority; and regarding this I can only say "Oops" and "Duh."

Yes, sometimes those big issues really are big and really are important; but that doesn't change the basic truth that to master skills you need to practice them and it's harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)

This "incorporated into LW" condition is a tight leash; and it reminds me of why I don't usually... recommend LW to my friends.

Some matters are too personal to talk about on the Internet. Like marital infidelity, which 1) is something outside of many people's experiences, 2) definitely seems to require tons of instrumental rationality even on the best of days, 3) has (ethical) implications which real people often don't take into account despite other real people often expecting them to (but knowing they won't), and 4) unlike acceptable LW material with which it shares the above characteristics, hurts. And so it is with some other things that actual adults have to deal with.

Unless you speak about something already in the past. Maybe we should have a Cemetery of Failed Things in our City. (Our current Cemetery of Failed Things holds just a few startups and personal habits; wow, how lucky we are.)

Basic have-you-read-the-sequences knowledge test (e.g. "Which of the following is an example of 'belief as attire'?")

This might be combined with calibration training.