So you think you want to be rational, to believe what is true even when sirens tempt you? Great, get to work; there's lots you can do. Do you want to justifiably believe that you are more rational than others, smugly knowing your beliefs are more accurate? Hold on; this is hard.
Humans nearly universally find excuses to believe that they are more correct than others, at least on the important things. They point to others' incredible beliefs, to biases afflicting others, and to estimation tasks where they are especially skilled. But they forget that almost everyone can point to such things.
But shouldn't you get more rationality credit if you spend more time studying common biases, statistical techniques, and the like? Well this would be good evidence of your rationality if you were in fact pretty rational about your rationality, i.e., if you knew that when you read or discussed such issues your mind would then systematically, broadly, and reasonably incorporate those insights into your reasoning processes.
But what if your mind is far from rational? What if your mind is likely to just go through the motions of studying rationality to allow itself to smugly believe it is more accurate, or to bond you more closely to your social allies?
It seems to me that if you are serious about actually being rational, rather than just believing in your rationality or joining a group that thinks itself rational, you should try hard and often to test your rationality. But how can you do that?
To test the rationality of your beliefs, you could sometimes declare beliefs, and later score those beliefs via tests where high scoring beliefs tend to be more rational. Better tests are those where scores are more tightly and reliably correlated with rationality. So, what are good rationality tests?
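One standard way to score declared beliefs is the Brier score, the mean squared error between stated probabilities and what actually happened. The beliefs and probabilities below are invented purely for illustration:

```python
# Brier score: mean squared error between stated probabilities and outcomes.
# Lower is better; always guessing 50% earns a score of 0.25.

def brier_score(predictions):
    """predictions: list of (stated_probability, actual_outcome) pairs,
    where actual_outcome is 1 if the event happened, else 0."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical declared beliefs, scored after the fact:
declared = [(0.9, 1), (0.7, 0), (0.8, 1), (0.6, 1), (0.3, 0)]
print(round(brier_score(declared), 3))  # -> 0.158
```

A score is "tightly and reliably correlated with rationality" in the sense above only over many predictions; a handful of lucky guesses can beat a well-calibrated forecaster.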
Play poker for significant amounts of money. While it only tests limited and specific areas of rationality, and of course requires some significant domain-specific knowledge, poker is an excellent rationality test. The main difficulty of playing the game well, once one understands the basic strategy, is in how amazingly well it evokes and then punishes our irrational natures. It tests many aspects: difficulty updating (believing the improbable when new information comes in), loss aversion, takeover by the limbic system (anger, jealousy, revenge, and so on).
Agreed, but I think it is easier to see yourself confront your irrational impulses with blackjack. For instance, you're faced with a 16 versus a dealer's 10; you know you have to hit, but your emotions (mine, at least) say not to. Anyone else experience this same accidental rationality test?
For amateur players, sure. But there is an easily memorizable table by which to play BJ perfectly, whether you use basic strategy or count cards. So you always clearly know what you should do. If you are playing BJ to win, it stops being a test of rationality.
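The "always clearly know what you should do" point is literal: each situation is a table lookup. The fragment below is a tiny, simplified slice I've written out for illustration; the real basic-strategy chart also covers soft hands, pairs, doubling, and surrender:

```python
# A tiny, simplified slice of the blackjack basic-strategy table:
# (player hard total, dealer upcard) -> recommended action.
BASIC_STRATEGY = {
    (16, 10): "hit",    # the emotionally hard case from the comment above
    (16, 6):  "stand",
    (12, 2):  "hit",
    (12, 4):  "stand",
    (11, 10): "hit",    # "double" where allowed, in the full chart
}

def recommended_action(player_total, dealer_upcard):
    return BASIC_STRATEGY.get((player_total, dealer_upcard), "unknown")

print(recommended_action(16, 10))  # -> hit
```

Once the whole table is memorized, no judgment remains to test; only the discipline to follow it.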
Whereas even when you become skilled at poker, it is still a constant test of rationality, both because optimal strategy is complex (uncertainty about the correct strategy means lots of opportunity to lie to yourself) and because you want to play exploitively anyway (uncertainty about whether an opponent is making a mistake gives you even more chances to lie to yourself). Kinda like life...
Whether a person memorizes and uses the table is still a viable test. No rational person playing to win would take an action incompatible with the table, and acting only in ways compatible with the table is unlikely to be accidental for an irrational person.
A way of determining whether people act rationally when it is relatively easy to do so can be quite valuable, since most people don't.
The thought occurs to me that the converse of "How do you know you're rational?" is "Why do you care whether you have the property 'rationality'?" The concern isn't unbound - we hope - so for every occasion when you might be tempted to wonder how rational you are, there should be some performable task that your 'rational'-ness bears on. Considering that task should suggest what kind of test could reflect this 'rationality'; or, conversely, we can ask directly what associates with the task.
Prediction markets would be suggested by the task of trying to predict future variables; and then conversely we can ask, "If someone makes money on a prediction market, what else are they likely to be good at?"
I think there is likely a distinction between being rational at games and rational at life. In my experience those who are rational in one way are very often not rational in the other. I think it highly unlikely that there is a strong correlation between "good at prediction markets" or "good at poker" and "good at life". Do we think the best poker players are good models for rational existence? I don't think I do and I don't even think THEY do.
A suggestion:
List your goals. Then give the goals deadlines, along with probabilities of success and estimated utility (by some kind of metric, not necessarily numerical). At each deadline, tally whether or not the goal is completed and give an estimate of the actual utility.
From this information you can take at least three things.
- Whether or not you can accurately predict your ability.
- Whether or not you are picking the right goals (lower than expected utility would be bad, I think)
- With enough data points you could determine your ratio of success to utility. Too much success and not enough utility means you need to aim higher. Too little success for goals with high predicted utility means you should either aim lower or figure out what you're doing wrong in pursuing the goals. If both are high you're living rationally; if both are low YOU'RE DOING IT WRONG.
The process could probably be improved if it was done transparently and cooperatively. Others looking on would help prevent you from cheating yourself.
Not terribly rigorous, but that's the idea.
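A minimal sketch of the bookkeeping this suggests, with goal names, probabilities, and utilities invented for illustration (treating "utility" as a simple 0-10 score, per the "some kind of metric" idea):

```python
# Tally predicted vs. actual success rates and utilities across goals.
goals = [
    # (goal, predicted p(success), predicted utility, succeeded?, actual utility)
    ("finish draft",  0.8, 7, True,  5),
    ("run marathon",  0.5, 9, False, 0),
    ("learn Spanish", 0.9, 6, True,  6),
]

n = len(goals)
predicted_rate = sum(g[1] for g in goals) / n
actual_rate = sum(1 for g in goals if g[3]) / n
predicted_utility = sum(g[2] for g in goals) / n
actual_utility = sum(g[4] for g in goals) / n

print(f"success rate: predicted {predicted_rate:.2f}, actual {actual_rate:.2f}")
print(f"avg utility:  predicted {predicted_utility:.1f}, actual {actual_utility:.1f}")
```

Here the success rate roughly matches prediction while realized utility falls short, which by the scheme above would suggest the goals themselves are being chosen poorly.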
This almost seems too obvious to mention in one of Robin's threads, but I'll go ahead anyway: success on prediction markets would seem to be an indicator of rationality and/or luck. Your degree of success in a game like HubDub may give some indication as to the accuracy of your beliefs, and so (one would hope) the effectiveness of your belief-formation process.
I would expect success in a prediction market to be more correlated with amount of time spent researching than with rationality. At best, rationality would be a multiplier to the benefit gained per hour of research; alternatively, it could be an upper bound to the total amount of benefit gained from researching.
This is the fundamental question that determines whether we can do a lot of things - if we can't come up with evidence-based metrics that are good measures of the effect of rationality-improving interventions, then everything becomes much harder. If the metric is easily gamed once people know about it, everything becomes much harder. If it can be defeated by memorization like school, everything becomes much harder. I will post about this myself at some point.
This problem is one we should approach with the attitude of solving as much as possible, not feeling delightfully cynical about how it can't be solved, but at least you know it. It's too important for that. It sets up the incentives in the whole system. If the field of hedonics can try to measure happiness, we can at least try to measure rationality.
...but not to derail the discussion, Robin's individual how-do-you-know? stance is a valid perspective, and I'll post about the scientific measurement / institutional measurement problems later.
Prediction markets seem like the obvious answer, but the range of issues currently available as contracts is too narrow to be of much use. Most probability calibration exercises focus on trivial issues. I think they are still useful, but the real test is how you deal with emotional issues, not just neutral ones.
This might not be amenable to a market, but I would like to see a database collected of the questions being addressed by research in-progress. Perhaps when a research grant is issued, if a definite conclusion is anticipated, the question can be entered in the database. The question would have to be constructed so that users could enter in definite predictions. At first glance, I think the predictions would have to remain private until after a result is published, but I'm unsure. In contrast to existing prediction sites, this would have the benefits of a broad range of questions formulated by experts who are concerned about precisely defining the issue at hand. How would a standard procedure of formulating a question for a prediction database influence the type of research done?
Another broad test I've considered is whether your judgment of the quality of an individual's claims is correlated with their social club affiliations. To me, political party stands out as the most relevant example of a social club for this purpose. If you find yourself disagreeing with Republicans more frequently than with Democrats over factual issues, that appears to be a sign of confirmation bias. Because association with social clubs tends to be caused by how you were raised, social class, or the sheer desire to be part of a group, there is no reason to think that affiliation should be a strong predictor of quality. Any thoughts?
"If you find yourself disagreeing with Republicans more frequently than with Democrats over factual issues, that appears to be a sign of confirmation bias."
Only to the extent that you think Republicans and Democrats are equally wrong. I don't see any rule demanding this.
Since all accurate maps are consistent with each other, everyone with accurate political beliefs is going to be consistent with everyone else, and you might as well use a new label for this regularity. It's fine to be a Y if the causality runs from "X is true" -> "you believe X is true" -> "you're labeled a member of group Y".
Tests for "Group Y believes this-> I believe this" that can rule out the first causal path would be harder to come up with, especially since irrational group beliefs are chosen to be hard to prove (to the satisfaction of the group members).
The situation gets worse when you realize that "Group Y believes this-> I believe this" can be valid to the extent that you have evidence that Group Y gets other things right.
ISO quality certification doesn't look primarily at the results, but primarily at the process. If the process has a good argument or justification that it consistently produces high quality, then it is deemed to be compliant. For example "we measure performance in [this] way, the records are kept in [this] way, quality problems are addressed like [this], compliance is addressed like [such-and-so]".
I can imagine a simple checklist for rationality, analogous to the software carpentry checklist.
- Do you have a procedure for making decisions?
- Is the procedure available at the times and locations that you make decisions?
- How do you prevent yourself from making decisions without following this procedure?
- If your procedure depends on calibration data, how do you guarantee the quality of your calibration data?
- How does your procedure address (common rationality failure #1)?
- et cetera
Sorry, it's just a sketch of a checklist, not a real proposal, but I think you get the idea. Test the process, not the results. Of course, the process should describe how it tests the results.
Set up a website where people can submit artistic works - poetry, drawings, short stories, maybe even pictures of themselves - along with the expected rating on a 1-10 scale.
The works would be publicly displayed, but anonymously, and visitors could rate them (the anonymity is to make sure the ratings are "global" and not "compared to other work by the same guy" - so maybe the author could be revealed once you rated it).
You could then compare the expected rating of a work to the actual ratings it received, and see how much the author under- or over-estimates himself.
(for an extra measurement of calibration, you could also ask the author to give a confidence factor, though I'm not sure exactly how it should be presented and calculated)
Your own art has the advantage of being something about which you might be systematically biased, and which can still be evaluated pretty easily (as opposed to predictions about how to get out of the financial crisis).
Anyone up for some Rational Debating?
Another test.
Find out the general ideological biases of the test subject
Find two studies, one (Study A) that supports the ideological biases of the test subject, but is methodologically flawed. The other (Study B) refutes the ideological biases of the subject, but is methodologically sound.
Have the subject read/research information about the studies, and then ask them which study is more correct.
If you randomize this a bit (sometimes a study is both methodologically sound and in line with one's bias) and run it multiple times on a person, you should get a pretty good read on how rational they are.
Some people might decide "Because I want to show off how rational I am, I'll accept that study X is more methodologically sound, but I'll still believe in my secret heart that Y is correct"
I'm not sure any amount of testing can handle that much self-deception, although I'm willing to be convinced otherwise :)
- How do you know your determination of "ideological bias" isn't biased itself?
- All experiments are flawed in one way or another to some degree. Are you saying one study is more methodologically flawed than another? How do you measure the degree of the flaws? How do you know your determination of flaws isn't biased?
- Again, you've already decided which study is "correct" based on your own possibly biased interpretation. How do you prove the other person is wrong and it's not you that is biased?
I agree with the randomize and repeat bit though.
However, I would like to propose that this test methodology for rationality is deeply flawed.
Keep track of when you change your mind about important facts based on new evidence.
a) If you rarely change your mind, you're probably not rational.
b) If you always change your mind, you're probably not very smart.
c) If you sometimes change your mind, and sometimes not, I think that's a pretty good indication that you're rational.
Of course, I feel that I fall into category (c), which is my own bias. I could test this, if there was a database of how often other people had changed their mind, cross-referenced with IQ.
Here's some examples from my own past:
I used to completely discount AGW. Now I think it is occurring, but I also think that the negative feedbacks are being ignored/downplayed.
I used to think that the logical economic policy was always the right one. Now, I (begrudgingly) accept that if enough people believe an economic policy is good, it will work, even though it's not logical. And, concomitantly, a logical economic policy will fail if enough people hate it.
Logic is our fishtank, and we are the fish swimming in it. It is all we know. But there is a possibility that there's something outside the fishtank, that we are unable to see because of our ideological blinders.
The two great stresses in ancient tribes were A) "having enough to eat" and B) "being large enough to defend the tribe from others". Those are more or less contradictory goals. But both are incredibly important. People who want to punish rulebreakers and free-riders are generally more inclined to weigh A) over B). People who want to grow the tribe, by being more inclusive and accepting of others, are more inclined to weigh B) over A).
None of the modern economic theories seem to be any good at handling crises. I used to think that Chicago and Austrian schools had better answers than Keynesians.
I used to think that banks should have just been allowed to die, now I'm not so sure - I see a fair amount of evidence that the logical process there would have caused a significant panic. Not sure either way.
How about testing the rationality of your life (and not just your beliefs)?
Are you satisfied with your job, marriage, and health/exercise habits? Are you deeply in debt? Spent too much money on status symbols? Cheating on your life partner? Spending too much time on the net? Drinking too much?
I am sure there are many other life-tests.
If we can't demand perfect metrics then surely we should at least demand metrics that aren't easily gamed. If people with the quality named "rationality" don't on average win more often on life-problems like those named, what quality do they even have, and why is it worthwhile?
I predict that winners are on average less rational than rationalists. Risk level has an optimal point determined by expected payoff. But the maximal payoff keeps increasing as you increase risk. The winners we see are selected for high payoff. Thus they're likely to be people who took more risks than were rational. We just don't see all the losers who made the same decisions as the winners.
Those who take rational actions win more often than those who do not.
If we take a sample of those who have achieved the greatest utility then we can expect that sample to be biased towards those who have taken the most risks.
Even in idealised situations where success is determined solely by decisions made from available information, and in which rationality is measured by how well those decisions maximise expected utility, we can expect the biggest winners not to be the most rational.
When it comes to actual humans the above remains in place, yet may well be dwarfed by other factors. Some lyrics from Ben Folds spring to mind:
Fate doesn't hang on a wrong or right choice, fortune depends on the tone of your voice
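The selection effect described above is easy to demonstrate in a toy simulation. The payoff model here is invented purely for illustration: success probability falls with risk, payoff grows with it, and expected value peaks at moderate risk (r = 2/3), yet the biggest winners are almost all high-risk gamblers:

```python
import random

random.seed(0)

# Toy model: each agent picks a risk level r in [0, 1]. They succeed with
# probability (1 - r) and then collect 100 * r**2, else get nothing, so
# expected value peaks at r = 2/3 while the largest possible payoffs
# require near-maximal risk.
def payoff(r):
    return 100 * r ** 2 if random.random() < (1 - r) else 0

agents = [(r, payoff(r)) for r in (random.random() for _ in range(100_000))]

top = sorted(agents, key=lambda a: a[1], reverse=True)[:100]
avg_risk = sum(r for r, _ in agents) / len(agents)
top_risk = sum(r for r, _ in top) / len(top)
print(f"average risk: {avg_risk:.2f}; average among top 100 winners: {top_risk:.2f}")
```

The top finishers cluster near maximal risk even though their ex ante decisions were worse than the moderate-risk strategy; we just never see the many agents who made the same bet and busted.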
I am 95% confident that calibration tests are good tests for a very important aspect of rationality, and would encourage everyone to try a few.
I suspect I should also be writing down calibrated probability estimates for my project completion dates. This calibration test is easy to do oneself, without infrastructure, but I'd still be interested in a website tabulating my and others' early predictions and then our actual performance - perhaps a page within LW? It might be especially good to know about people within a group of coworkers, who could then know how much to adjust each other's estimated timelines when planning or dividing complex projects.
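One simple way to score such a log, even without a website: bucket your predictions by stated confidence and compare each bucket's stated probability to its actual hit rate. The log below is invented for illustration:

```python
# Hypothetical log: (stated probability of on-time completion, finished on time?)
log = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),
    (0.7, True), (0.7, False), (0.7, False),
    (0.5, True), (0.5, False),
]

# Group predictions by stated confidence, then compare against reality.
buckets = {}
for p, hit in log:
    buckets.setdefault(p, []).append(hit)

for p in sorted(buckets, reverse=True):
    hits = buckets[p]
    rate = sum(hits) / len(hits)
    print(f"stated {p:.0%}: actual {rate:.0%} over {len(hits)} predictions")
```

In this made-up log the "90% confident" deadlines are hit only half the time, the kind of overconfidence on project dates the comment below describes.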
This is a good point. Still, it would provide evidence of rationality, especially in the likely majority of cases where people didn't try to game the system by e.g. deliberately picking dates far in advance of their actual completions, and then doing the last steps right at that date. My calibration scores on trivia have been fine for a while now, but my calibration at predicting my own project completions is terrible.
Perhaps we could make a procedure for asking your friends, coworkers, and other acquaintance (all mixed together) to rate you on various traits, and anonymizing who submitted which rating to encourage honesty? You could then submit calibrated probability estimates as to what ratings were given.
I'd find this a harder context in which to be rational than I'd find trivia.
An ideal rationality test would be perfectly specific: there would be no way to pass it other than being rational. We can't conveniently create such a test, but we can at least make it difficult to pass our tests by utilizing simple procedures that don't require rationality to implement.
Any 'game' in which the best strategies can be known and preset would then be ruled out. It's relatively easy to write a computer program to play poker (minus the social interaction). Same goes for blackjack. It takes rationality to create such a program, but the program doesn't need rationality to function.