Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment.  If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense.  But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts.  Similarly, while "survivalists" plan and train to stay warm, dry, and fed given worst case assumptions about the world around them, most people achieve these goals by participating in a modern economy.

The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info and assuming the worst about other folks.  In this context, a good rationality test is a publicly-visible personal test, applied to your personal beliefs when you are isolated from others' assistance and info.  

I'm much more interested in how we can join together to believe truth, and it actually seems easier to design institutions which achieve this end than to design institutions to test individual isolated general tendencies to discern truth.  For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.  We don't each need to train to identify and fix each possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it. 
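The standard mechanism behind subsidized prediction markets is Hanson's logarithmic market scoring rule (LMSR), in which the subsidizer's maximum loss is bounded and every trader who moves the price toward the truth profits. A minimal sketch (the liquidity parameter `b`, the two-outcome setup, and the trade sizes are illustrative choices, not prescriptions):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b).

    Prices sum to 1, so they read directly as consensus probabilities."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def trade_cost(q, i, shares, b=100.0):
    """Cost to buy `shares` of outcome i at state q: C(q') - C(q)."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# Two-outcome market starting at even odds.
q = [0.0, 0.0]
assert abs(price(q, 0) - 0.5) < 1e-9

# A specialist who believes outcome 0 is underpriced buys 50 shares,
# moving the consensus probability upward for everyone else to rely on.
cost = trade_cost(q, 0, 50.0)
q[0] += 50.0
print(round(price(q, 0), 3))  # → 0.622
```

The subsidy shows up as the market maker's bounded worst-case loss (at most `b * ln(n)` for `n` outcomes), which is the payment that makes it worthwhile for specialists to contribute their information.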

Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage.  But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design and field institutions which give each person better incentives to update a common consensus.

Comments


Robin Hanson has identified a breakdown in the metaphor of rationality as martial art: skillful violence can be more or less entirely deferred to specialists, but rationality is one of the things that everyone should know how to do, even if specialists do it better. Even though paramedics are better trained and equipped than civilians at the scene of a heart attack, a CPR-trained bystander can do more to save the victim's life, simply because the paramedics take time to arrive. Prediction markets are great for governments, corporations, or communities, but if an individual's personal life has gotten bad enough to need the help of a professional rationalist, a little training in "cartography" could have nipped the problem in the bud.

To put it another way, thinking rationally is something I want to do, not have done for me. I would bet that Robin Hanson, and indeed most people, respect the opinions of others in proportion to the extent that they are rational. So the individual impulse toward learning to be less wrong is not only a path to winning, but a basic value of a rationalist community.

One can think that individuals can profit from being more rational, while also thinking that improving our social epistemic systems or participating in them actively will do more to increase our welfare than focusing on increasing individual rationality.

Yes, it would be silly to think of ourselves as isolated survivalists in a society where so many people are signed up for cryonics, where Many-Worlds was seen as retrospectively obvious as soon as it was proposed, and where no one can be elected to public office after openly admitting to believing in God. But let us be realistic about which Earth we actually live in.

I too am greatly interested in group mechanisms of rationality - though I admit I put more emphasis on individuals; I suspect you can build more interesting systems out of smarter bricks. The obstacles are in many ways the same: testing the group, incentivizing the people in it. In most cases if you can test a group you can test an individual and vice versa.

But any group mechanism of that sort will have the character of a band of survivalists getting together to grow carrots. Prediction markets are lonely outposts of light in a world that isn't so much "gone dark" as having never been illuminated to begin with; and the Policy Analysis Markets were burned by a horde of outraged barbarians.

We have always been in the Post-Apocalyptic Rationalist Environment, where even scientists and academics are doing it wrong and Dark Side Epistemology howls through the street; I don't even angst about this, I just take it for granted. Any proposals for getting a civilization started need to take into account that it doesn't already exist.

Sounds like you do think of yourself as an isolated survivalist in a world of aliens with which you cannot profitably coordinate. Let us know if you find those more interesting systems you suspect can be built from smarter bricks.

It's pretty hard to be isolated in a world of six billion people. The key question is rather the probability of coordinating with any randomly selected person on a rationalist topic of fixed difficulty, and the total size of the community available to support some number of institutions.

To put it bluntly, if you built the ideal rationalist institution that requires one million supporters, you'd be in trouble because the 99.98th percentile of rationality is not adequate to support it (and also such rationalists may have other demands on their time).

But if you can build institutions that grow starting from small groups even in a not-previously-friendly environment, or upgrade rationalists starting from the 98th percentile to what we would currently regard as much higher levels, then odds look better for such institutions.

We both want to live in a friendly world with lots of high-grade rationalists and excellent institutions with good tests and good incentives, but I don't think I already live there.

Even in the most civilized civilizations, barbarity takes place on a regular basis. There are some homicides in dark alleys in the safest countries on earth, and there are bankruptcies, poverty, and layoffs even in the richest countries.

In the same way, we live in a flawed society of reason, which has been growing and improving in fits and starts since the scientific revolution. We may be civilized in the arena of reason in the same way you could call Northern Europe in the 900s civilized in the arena of personal security: there are rules that nearly everyone knows and that most obey to some extent, but they are routinely disrespected, and the only thing that makes people really take heed is the theater of enforcement, whether that's legally-sanctioned violence against notorious bandits or a dressing-down of notorious sophists.

Right now, we are only barely scraping together a culture of rationality; it may have a shaky foundation and many dumber bricks, but it seems a bit much to say we don't have one.

Let us distinguish "truth-seekers", people who respect and want truth, from "rationalists", people who personally know how to believe truth. We can build better institutions that produce truth if only we have enough support from truth-seekers; we don't actually need many rationalists. And having rationalists without good institutions may not produce much more shared accessible truth.

I'm not sure I can let you make that distinction without some more justification.

Most people think they're truth-seekers and honestly claim to be truth-seekers. But the very existence of biases shows that thinking you're a truth-seeker doesn't make it so. Ask a hundred doctors, and they'll all (without consciously lying!) say they're looking for the truth about what really will help or hurt their patients. But give them your spiel about the flaws in the health system, and in the course of what they consider seeking the truth, they'll dismiss your objections in a way you consider unfair. Build an institution that confirms your results, and they'll dismiss the institution as biased or flawed or "silly". These doctors are not liars or enemies of truth or anything. They're normal people whose search for the truth is being hijacked in ways they can't control.

The solution: turn them into rationalists. They don't have to be black belt rationalists who can derive Bayes' Theorem in their sleep, but they have to be rationalist enough that their natural good intentions towards truth-seeking correspond to actual truth-seeking and allow you to build your institutions without interference.

"The solution: turn them into rationalists."

You don't say how to accomplish this. Would it require (or at least benefit greatly from) institutional change?

This sounds like you're postulating people who have good taste in rationalist institutions without having good taste in rationality. Or you're postulating that it's easy to push on the former quantity without pushing on the latter. How likely is this really? Why wouldn't any such effort be easily hijacked by institutions that look good to non-rationalists?

Putting so much work into talking about these things isn't the act of an isolated survivalist, though.

For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.

If no one person has a good grasp of all the material, then there will be significant insights that are missed. Science in our era is already dominated by dumb specialists who know everything about nothing. EY's work has been so good precisely because he took the effort to understand so many different subjects. I'll bet at long odds that a prediction market containing an expert on evo-psych, an expert on each of five narrow AI specialisms, an expert on quantum mechanics, an expert on human biases, an expert on ethics and an expert on mathematical logic would not even have produced FAI as an idea to be bet upon.

We don't each need to train to identify and fix each possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it.

If people could see inside each other's heads and bet on (combinations of) people's thoughts, this would work.

In reality, what will happen is that a singly debiased single subject specialist will simply not produce any ideas for the prediction market that (a) involve more than his specialism and (b) would require him to debias in more than one way.

For example, a logic expert who suffers from overconfidence in the effectiveness of logic in AI will not hypothesize that maybe something other than a logical KR is appropriate for the semantic web. [people in my research group were shocked when I produced this hypothesis] A bayesian stats researcher will not produce this hypothesis because he doesn't know the semantic web exists; it isn't part of his world.

What I am driving at with this comment is that the strength of connection between thoughts held in one mind is much greater than the strength of connection between thoughts in a market. In a market, two distinct predictions interact in a very simple way: their price. In a mind, two or more insights can be combined. If no individual mind is bias-free, then we lose this "single mind" advantage. [Apologies for comment deletion. It would be nice to have a preview button...]

One problem with trusting the experts rather than trying to think things through for yourself is that you need a certain amount of expertise just to understand what the experts are saying. The experts might be able to tell you that "all symmetric matrices are orthonormally diagonalizable," and you might have perfect trust in them, but without a lot of personal study and inquiry, the mere words don't help you very much.
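As a concrete illustration of how the mere words remain inert without study: a reader with a little tooling can verify the experts' statement for a particular instance, even without being able to prove it in general. A minimal sketch using NumPy (the example matrix is an arbitrary symmetric one chosen for illustration):

```python
import numpy as np

# An arbitrary symmetric matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is NumPy's eigensolver for symmetric/Hermitian matrices; it
# returns real eigenvalues and an orthonormal matrix of eigenvectors.
eigvals, Q = np.linalg.eigh(A)

# "Orthonormally diagonalizable" means: Q^T Q = I ...
assert np.allclose(Q.T @ Q, np.eye(3))
# ... and Q^T A Q is diagonal, with the eigenvalues on the diagonal.
assert np.allclose(Q.T @ A @ Q, np.diag(eigvals))
```

Checking one instance this way is of course not the same as understanding the spectral theorem, which is exactly the commenter's point.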

That doesn't matter if the expert can say "hire this guy", "invest in this company", "vote for this guy", or "donate to this charity". If you're doing some sort of complicated action with careful integration of expert advice, then it's probably worthwhile becoming at least a semi-expert yourself.

Experts don't just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don't understand the underlying analysis, so long as we have picked good experts to rely on.

Following the martial arts analogy, I guess that makes Robin a supporter of "Rationalist Gangs".

One of the ways that I think that OB could have been better, and that I think LW could be more helpful, is to put a greater emphasis on practice and practical techniques for improving rationality in the writings here and to give many more real-life examples than we do.

When making a post that hints at any kind of a practical technique, posters could really make an effort to clearly identify the practical implications and techniques, to put all the practical parts together in the essay rather than mixing them throughout 15 paragraphs of justification and reasoning, and to highlight that practical part of the post.

The practical parts could be extracted and placed together somewhere in order to have one single place that people can go to easily find them. Perhaps the LW software could provide some kind of support for distinguishing the practice sections of a post, and the extraction and aggregation of the practical howto sections could be automated.

Robin was kind enough not to say what overemphasizing the heroic individual rationalist implies about our true motivations.

I love your thesis and metaphor, that the goal is for us all jointly to become rational, seek, and find truth. But I do not "respect the opinions of enough others." I have political/scientific disagreements so deep and frequent that I usually just hide them and worry. I resonated best with your penultimate sentence: "humanity's vast stunning cluelessness" does seem to be the problem. Has someone written on the consequences of taking over the world? The human genome, presumptively adapted to forward its best interests in a competitive world, may have only limited rationality, inadequate to the tasks of altruism, global thinking, and numerical analysis. By this last phrase I refer to our overreaction to a burning skyscraper, when an equal number of deaths monthly on freeways, by being less spectacular or poignant, motivates a disproportionately low response. Surely the difference there is a "gut" reaction, not a cogent one. We need to change what we care about, but we're hardwired to worry about spectacle, perhaps?

For whatever reason, the community here (so-called "rationalists") is heavily influenced by overly-individualistic ideologies (libertarianism, or in its more extreme forms, objectivism). This leads to ignoring entire realms of human phenomena (social cognition) and the people who have studied them (Vygotsky, sociologists of science, ethnomethodology). It's not that social approaches to cognition provide a magic bullet -- they just provide a very different perspective on how minds work. Imagine if you stop believing that beliefs are in the head and instead locate them in a community or institution. If interested, you could start with How Institutions Think by Mary Douglas.

I am guilty as charged in being much more familiar with individualistic than socially oriented ideologies.

Why don't you write some posts about techniques or discoveries from socially-oriented science that could help rationalists?

I would say Robin Hanson's views on status fit quite well into the gap you perceive. I do find it interesting that status isn't talked about more on Less Wrong.

Maybe I can tie this into what I think about the article. LW's articles do currently take an individualist stance on rationality (although I doubt objectivism has any role in this). The "refinements" they propose are mostly alterations of cognitive habits, not suggested ways of changing group dynamics. But LW as a whole is not simply a bunch of iconoclasts. Rather, there appears to be a clear attempt to collectively change patterns of thought. People write stuff, get +/- karma, feel good/bad, update their beliefs and try again. So even though the content of LW is individually applicable, posters will naturally develop preferred topics of expertise, subjects on which they know enough to benefit the community by what they write. And developing expertise does benefit from the martial arts analogy.

I would say Robin Hanson's views on status fit quite well into the gap you perceive. I do find it interesting that status isn't talked about more on Less Wrong.

Was there a time when we neglected status as a topic? wow. I don't remember that.

Maybe personal finance is a better analogy than martial arts. It's useful for nearly anybody to know about personal finance, yet many people are lacking even in the basics. Some high-falutin stock market concepts may not be useful to the average Joe, the same way advanced rationality ("better than Einstein") may not be needed, but still, education about the basics is useful.

Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment. If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense. But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts. Similarly, while "survivalists" plan and train to stay warm, dry, and fed given worst case assumptions about the world around them, most people achieve these goals by participating in a modern economy.

As a martial arts enthusiast I have to concur that the practical survivability impact of my training is somewhat limited. In fact, I would go as far as to say that my martial art training is far less likely to save my life than is my previous sporting hobby, running.

The martial arts metaphor for rationality training applies as much to my motives for participation as it does to the training itself. I don't expect to beat many armed assailants to a pulp in a dark alley, and nor do I expect eliminating biases from my cognition to make a dramatic impact on my success or life satisfaction. However, I relish every opportunity to push both my body and mind to their limits in elegance and performance. I am also attracted to subcultures that combine non-exclusivity with skill-based elitism.

I unashamedly confess that I'd be a rationalist even if it had absolutely no direct benefit (over participation in the activities of any other arbitrary non-rationalist subculture to a similar degree). But at the same time I have to concur with Robin on the best way to go about finding truth.

But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design institutions which give each person better incentives to update a common consensus.

Absolutely. There is just something comforting in knowing that if the information I am relying upon is flawed, someone is losing money because of it. It's even better to know that if you do find flaws you'll be rewarded for doing so, not hunted down and persecuted as a 'whistleblower' or a heretic.

Unfortunately 'designing institutions' doesn't sound like the hard part. The hard part is taking these institutions and making them an active reality. Diluting the influence of authority tends to go against the interests of those in authority, at least as they perceive it. Of course, that particular robotic rampage of human stupidity is not something I personally need to overcome with my own rationalist-fu. I can respect the opinions of Robin et al. and eagerly keep abreast of their insights and practical solutions.

Yes, you are right that designing need not be the hard part. So I just changed "design" to "design and field."

On this point, we should also be talking about effective evangelism for rationality.

We should learn how to identify trustworthy experts. Is there some general way, or do you have to rely on specific rules for each category of knowledge?

Two examples of rules are: never trust someone's advice about which specific stocks to buy unless the advisor has material non-public information, and be extremely skeptical of statistical evidence presented in Women's Studies journals. Although both rules are probably true, you obviously couldn't trust financial advisers or Women's Studies professors to give them to you.

Have you evaluated statistical evidence in Women's Studies journals?

Prediction markets can forecast the accuracy or fame of purported experts. But preferably you'd accept the market estimate on your question and so not need to know who is an expert.

Another good example is the legal system. Individually it serves many participants poorly on a truth-seeking level; it encourages them to commit strongly to an initial position and make only those arguments that advance their cases, while doing everything they can to conceal their cases' flaws short of explicit misrepresentation. They are rewarded for winning, whether or not their position is correct. On the other hand, this set-up (combined with modern liberalized disclosure rules) works fairly well as a way of aggregating all the relevant evidence and arguments before a decisionmaker. And that decisionmaker is subject to strong social pressures not to seek to affiliate with the biased parties. Finally, in many instances the decisionmaker must provide specific reasons for rejecting the parties' evidence and arguments, and make this reasoning available for public scrutiny.

The system, in short, works by encouraging individual bias in service of greater systemic rationality.

The legal system does supposedly encourage individual bias to aggregate evidence; I'm more of a skeptic about how well it actually does this in practice though.