Rationality is Systematized Winning

Followup to: Newcomb's Problem and Regret of Rationality

"Rationalists should win," I said, and I may have to stop saying it, for it seems to convey something other than what I meant by it.

Where did the phrase come from originally?  From considering such cases as Newcomb's Problem:  The superbeing Omega sets forth before you two boxes, a transparent box A containing $1000 (or the equivalent in material wealth), and an opaque box B that contains either $1,000,000 or nothing.  Omega tells you that It has already put $1M in box B if and only if It predicts that you will take only box B, leaving box A behind.  Omega has played this game many times before, and has been right 99 times out of 100.  Do you take both boxes, or only box B?
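Treating Omega's 99/100 track record as the probability that its prediction matches your actual choice, the expected payoffs can be checked with a quick sketch (variable names invented for illustration):

```python
# Expected payoffs in Newcomb's Problem, treating Omega's 99/100
# track record as P(prediction matches your actual choice).
ACCURACY = 0.99
BOX_A = 1_000        # transparent box, always yours if you take it
BOX_B = 1_000_000    # opaque box, filled iff Omega predicted one-boxing

# One-boxer: with probability 0.99 Omega predicted this and filled box B.
ev_one_box = ACCURACY * BOX_B

# Two-boxer: with probability 0.99 Omega predicted this and left box B empty.
ev_two_box = ACCURACY * BOX_A + (1 - ACCURACY) * (BOX_A + BOX_B)

print(f"one-box: ${ev_one_box:,.0f}, two-box: ${ev_two_box:,.0f}")
```

On these numbers, one-boxing expects about $990,000 against roughly $11,000 for two-boxing, which is the asymmetry driving the whole dispute below.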

A common position - in fact, the mainstream/dominant position in modern philosophy and decision theory - is that the only reasonable course is to take both boxes; Omega has already made Its decision and gone, and so your action cannot affect the contents of the box in any way (they argue).  Now, it so happens that certain types of unreasonable individuals are rewarded by Omega - who moves even before they make their decisions - but this in no way changes the conclusion that the only reasonable course is to take both boxes, since taking both boxes makes you $1000 richer regardless of the unchanging and unchangeable contents of box B.

And this is the sort of thinking that I intended to reject by saying, "Rationalists should win!"

Said Miyamoto Musashi:  "The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means.  Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement.  It is essential to attain this.  If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him."

Said I:  "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."

This is the distinction I had hoped to convey by saying, "Rationalists should win!"

There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do.  But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn't always reasonable.  Reason is just a way of doing things, not necessarily the most formidable; it is how professors talk to each other in debate halls, which sometimes works, and sometimes doesn't.  If a horde of barbarians attacks the debate hall, the truly prudent and flexible agent will abandon reasonableness.

No.  If the "irrational" agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is "rational".

For I do fear that a "rationalist" will clutch to themselves the ritual of cognition they have been taught, as loss after loss piles up, consoling themselves:  "I have behaved virtuously, I have been so reasonable, it's just this awful unfair universe that doesn't give me what I deserve.  The others are cheating by not doing it the rational way, that's how they got ahead of me."

It is this that I intended to guard against by saying:  "Rationalists should win!"  Not whine, win.  If you keep on losing, perhaps you are doing something wrong.  Do not console yourself about how you were so wonderfully rational in the course of losing.  That is not how things are supposed to go.  It is not the Art that fails, but you who fails to grasp the Art.

Likewise in the realm of epistemic rationality: if you find yourself thinking that the reasonable belief is X (because a majority of modern humans seem to believe X, or something that sounds similarly appealing) and yet the world itself is obviously Y, the same warning applies.

But people do seem to be taking this in some other sense than I meant it - as though any person who declared themselves a rationalist would in that moment be invested with an invincible spirit that enabled them to obtain all things without effort and without overcoming disadvantages, or something, I don't know.

Maybe there is an alternative phrase to be found again in Musashi, who said:  "The Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size."

"Rationality is the spirit of winning"?  "Rationality is the Way of winning"?  "Rationality is systematized winning"?  If you have a better suggestion, post it in the comments.

Comments


Rationality is about winning.

The "about" captures the expected, systematic winning part: you are considering the model of winning, not necessarily the accidental win itself. It limits the scope to winning alone, leaving only secondary roles for parrying, hitting, springing, striking or touching. Being a study of the real thing, rationality employs a set of tricks that allow you to work with it in special cases and at coarse levels of detail. Being about the real thing, rationality aims to provide the means for actually winning.

Personally, I think the word "win" might be the problem. Winning is very binary, which isn't how rationality is defined. Perhaps "Rationalists maximize"?

Wikipedia has this right:

"a rational agent is specifically defined as an agent which always chooses the action which maximises its expected performance, given all of the knowledge it currently possesses."

Expected performance. Not actual performance. Whether its actual performance is good or not depends on other factors - such as how malicious the environment is, whether the agent's priors are good - and so on.
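That definition can be sketched in a few lines (a toy illustration; the umbrella scenario and all numbers are invented):

```python
def best_action(actions, outcomes, prob, utility):
    """Choose the action with the highest expected utility.

    prob(o, a) is the agent's *believed* probability of outcome o given
    action a; utility(o, a) is the payoff of o after doing a.  Note the
    choice depends only on the agent's beliefs, not on how the world
    actually turns out.
    """
    return max(actions,
               key=lambda a: sum(prob(o, a) * utility(o, a) for o in outcomes))

# Toy example: take an umbrella under a believed 30% chance of rain.
prob = lambda o, a: {"rain": 0.3, "dry": 0.7}[o]
payoffs = {("umbrella", "rain"): -1, ("umbrella", "dry"): -1,
           ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 0}
utility = lambda o, a: payoffs[(a, o)]

choice = best_action(["umbrella", "no_umbrella"], ["rain", "dry"], prob, utility)
```

If the believed 30% is badly wrong, the chosen action can still lose; that is exactly the gap between expected and actual performance under discussion here.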

The problem with that in human practice is that it leads to people defending their ruined plans, saying, "But my expected performance was great!" Vide the failed trading companies saying it wasn't their fault - the market had just done something that it shouldn't have done once in the lifetime of the universe. Achieving a win is much harder than achieving an expectation of winning (i.e. something that it seems you could defend as a good try).

Expected performance is what rational agents are actually maximising.

Whether that corresponds to actual performance depends on what their expectations are. What their expectations are typically depends on their history - and the past is not necessarily a good guide to the future.

Highly rational agents can still lose. Rational actions (that follow the laws of induction and deduction applied to their sense data) are not necessarily the actions that win.

Rational agents try to win - and base their efforts on their expectations. Whether they actually win depends on whether their expectations are correct. In my view, attempts to link rationality directly to "winning" miss the distinction between actual and expected utility.

There are reasons for associations between expected performance and actual performance. Indeed, those associations are why agents have the expectations they do. However, the association is statistical in nature.

Dissect the brain of a rational agent, and it is its expected utility that is being maximised. Its actual utility is usually not something that is completely under its control.

It's important not to define the "rational action" as "the action that wins". Whether an action is rational or not should be a function of an agent's sense data up to that point - and should not vary depending on environmental factors which the agent knows nothing about. Otherwise, the rationality of an action is not properly defined from an agent's point of view.

I don't think that the excuses humans use for failures are an issue here.

Behaving rationally is not the only virtue needed for success. For example, you also need to enter situations with appropriate priors.

Only if you want rationality to be the sole virtue, should "but I was behaving rationally" be the ultimate defense against an inquisition.

Rationality is good, but to win, you also need effort, persistence, good priors, etc - and it would be very, very bad form to attempt to bundle all those into the notion of being "rational".

Expected performance is what rational agents are actually maximising.

Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout? As Nesov says, rationality is about utility, which is why a rational agent in fact maximizes their expectation of utility, while trying to maximize utility (not their expectation of utility!).

It may help to understand this and some of the conversations below if you realize that the word "try" behaves a lot like "quotation marks" and that having an extra "pair" of quotation "marks" can really make "your" sentences seem a bit odd.

Problem with that in human practice is that it leads to people defending their ruined plans, saying, "But my expected performance was great!"

It's true that people make this kind of response, but that doesn't make it valid, or mean that we have to throw away the notion of rationality as maximizing expected performance, rather than actual performance.

In the case of failed trading companies, can't we just say that despite their fantasies, their expected performance shouldn't have been so great as they thought? And the fact that their actual results differed from their expected results should cast suspicion on their expectations.

Perhaps we can say that expectations about performance must be epistemically rational, and only then can an agent who maximizes their expected performance be instrumentally rational.

Achieving a win is much harder than achieving an expectation of winning (i.e. something that it seems you could defend as a good try).

Some expectations win. Some expectations lose. Yet not all expectations are created equal. Non-accidental winning starts with something that seems good to try (can accidental winning be rational?). At least, there is some link between expectations and rationality, such that we can call some expectations more rational than others, regardless of whether they actually win or lose.

An example SoullessAutomaton made was that we shouldn't consider lottery winners rational, even though they won, because they should not have expected to. Conversely, all sorts of inductive expectations can be rational, even though sometimes they will fail due to the problem of induction. For instance, it's rational to expect that the sun will rise tomorrow. If Omega decides to blow up the sun, my expectation will still have been rational, even though I turned out to be wrong.
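The lottery point can be made numeric. With invented figures (a $1 ticket, a $10M jackpot, 1-in-100M odds):

```python
# Invented lottery figures for illustration.
ticket_price = 1
jackpot = 10_000_000
p_win = 1 / 100_000_000

# Expected profit per ticket is negative, so the purchase is irrational
# before the draw, even for the one ticket that happens to win.
ev = p_win * jackpot - ticket_price
```

The winner's actual payoff was enormous, but their expected profit at purchase time was about -$0.90; it is the expectation, not the outcome, by which the decision should be judged.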

It sounds like the objection you're giving here is that "some people will misinterpret expected performance in the technical sense as expected performance in the colloquial sense (i.e., my guess as to how things will turn out)." That doesn't seem like much of a criticism though, and it doesn't sound severe enough to throw out what is a pretty standard definition. People will also misinterpret your alternate definition, as we have seen.

Do you have other objections?

What you say is important: the vast majority of whining "rationalists" weren't done dirty by a universe doing something that "nobody could have foreseen" (the sub-prime mortgage crisis, piloting jets into buildings). If you sample a random loser claiming such ("my reasoning was flawless, my priors incorporated all feasibly available human knowledge"), an impartial judge would in nearly all cases correctly call them to task.

But clearly it's not always the case that my reasoning (and/or priors) is at fault when I lose. My updates shouldn't overshoot based on empirical noise and false humility. I think what you want to say is that most likely even (especially?) the most proud rationalists probably shield themselves from attributing their loss to their own error ("eat less salt").

I'd like some quantifiable demonstration of an externalizing bias, some calibration of my own personal tendency to deny evidence of my own irrationality (or of my wrong priors).

Suggestion: "Rationalists seek to Win, not to be rational".

Suggestion: "If what you think is rational appears less likely to Win than what you think is irrational, then you need to reassess probabilities and your understanding of what is rational and what is irrational".

Suggestion: "It is not rational to do anything other than the thing which has the best chance of winning".

If I have a choice between what I define as the "Rational" course of action, and a course of action which I describe as "irrational" but which I predict has a better chance of winning, I am either predicting badly or wrongly defining what is Rational.

I am not sure my suggestions are Better, but I am groping towards understanding and hope my gropings help.

EDIT: and the warning is that we may deceive ourselves into thinking that we are being rational, when we are missing something, using the wrong map, arguing fallaciously. So what about:

Suggestion: "If you are not Winning, consider whether you are really being rational".

"If you are not Winning more than people you believe to be irrational, this may be evidence that you are not really being rational".

On a different tack, "Rationalists win wherever rationality is an aid to winning". I am not going to win millions on the Lottery, because I do not play it.

Rationalists are the ones who win when things are fair, or when things are unfair randomly over an extended period. Rationality is an advantage, but it is not the only advantage, not the supreme advantage, not an advantage at all in some conceivable situations, and cannot reasonably be expected to produce consistent winning when things are unfair non-randomly. However, it is a cultivable advantage, which is among the things that makes it interesting to talk about.

A rationalist might be unfortunate enough that (s)he does not do well, but ceteris paribus, (s)he will do better. Maybe that could be the slogan - "rationalists do better"? With the implied parenthetical "(than they would do if they were not rationalists, with the caveat that you can concoct unlikely situations in which rationality is an impediment to some values of "doing well")".

"You can't reliably do better than rationality in a non-pathological universe" is probably closer to the math.

It's impossible to add substance to "non-pathological universe." I suspect circularity: a non-pathological universe is one that rewards rationality; rationality is the disposition that lets you win in a nonpathological universe.

You need to attempt to define terms to avoid these traps.

Pathological universes are ones like: where there is no order and the right answer is randomly placed. Or where the facts are maliciously arranged to entrap you in a recursive red herring, where the simplest well-supported answer is always wrong, even after trying to out-think the malice. Or where the whole universe is one flawless red herring ("God put the fossils there to test your faith").

"No free lunch" demands they be mathematically conceivable. But to assert that the real universe behaves like this is to go mad.

Rationality seems like a good name for the obvious ideal that you should believe things that are true and use this true knowledge to achieve your goals. Because social organisms are weird in ways whose details are beyond the scope of this comment, striving to be more rational might not pay off for a human seeking to move up in a human world - but aside from this minor detail relating to an extremely pathological case, it's still probably a good idea.

Hmmm. Unless you are suggesting a different definition for rationality, I think I disagree. If an atheist has the goal of gaining business contacts (or something) and he can further this goal by joining a church, and impersonating the irrational behaviors he sees, he isn't being irrational. While behaviors that tend to have their origins in irrational thought are sometimes rewarded by human society, the irrationality itself never is. I think becoming more rational will help a person move up in a human status hierarchy, if that is the rationalist's goal. I think we have this stereotyped idea of rationalists as Asperger's-afflicted know-it-alls who are unable to deal with irrational humans. It simply doesn't have to be that way.

I always thought that the majority of exposition in your Newcomb example went towards, not "Rationalists should WIN", but a weaker claim at a smaller inferential distance from most would-be rationalists:

Rationalists should not systematically lose; whatever systematically loses is not rationality.

(Of course, one needs the logical caveat that we're not dealing with a pure irrationalist-rewarder; but such things don't seem to exist in this universe at the moment.)

Re: "First, foremost, fundamentally, above all else: Rational agents should WIN."

In an attempt to summarise the objections, there seem to be two fairly-fundamental problems:

  1. Rational agents try. They cannot necessarily win: winning is an outcome, not an action;

  2. "Winning" is a poor synonym for "increasing utility": sometimes agents should minimise their losses.

"Rationalists maximise expected utility" would be a less controversial formulation.

It seems to me that the disagreement isn't so much about winning as the expectation.

In fact I don't really agree with this winning-vs.-belief framing of rationality.

Both approaches are trying to maximize their expected payout. Eliezer's approach has a wider horizon of what it considers when figuring out what the universe is like.

The standard approach holds that since the contents of the boxes are already determined at the time of the choice, taking both will always put you $1000 ahead.

Eliezer looks (I think) out to the most likely final outcomes (or looks back at how the chain of causality of one's decision is commingled with the chain of causality of Omega's decision).

I think the flaw in the standard approach is not 'not winning' but a false belief about the relationship between the boxes and your choices (the belief that there isn't any). Once you have the right answer, making the choice that wins is obvious.

The way we would know that the standard approach is the wrong one is by looking at results. That a certain set of choices consistently wins isn't evidence that it is rational; it is evidence that it wins. Believing that it wins is rational.

So maybe: "Rationality is learning how to win"

I guess when I look over the comments, the problem with the phraseology is that people seem to inevitably begin debating over whether rationalists win and asking how much they win - the properties of a fixed sort of creature, the "rationalist" - rather than saying, "What wins systematically? Let us define rationality accordingly."

Not sure what sort of catchphrase would solve this.

I don't think I buy this for Newcomb-like problems. Consider Omega who says, "There will be $1M in Box B IFF you are irrational."

Rationality as winning is probably subject to a whole family of Russell's-Paradox-type problems like that. I suppose I'm not sure there's a better notion of rationality.

What you give is far harder than a Newcomb-like problem. In Newcomb-like problems, Omega rewards your decisions, he isn't looking at how you reach them. This leaves you free to optimize those decisions.

The rationality that doesn't secure your wish isn't the true rationality.

Winning has no fixed form. You'll do whatever is needed to succeed, however original or far-fetched it would sound. How it sounds is irrelevant; how it works is the crux.

And if at first what you tried didn't work, you'll learn, adapt, and try again, making no pause for excuses. If you merely want to succeed, you'll be firm as a rock, relentless in your attempts to find the path to success.

And if your winning didn't go as smoothly or as well as you wanted or thought it should, then learn, adapt, and try again. Think outside the box; self-recurse on winning itself. Eventually, you should refine your methods into a tree, from general to specialized.

That tree will have a trunk of general cases and the methods used to solve them; the higher you go, the more specialized the method and the rarer the case it solves. The tree isn't fixed either: it can and will grow and change.

What about cases where any rational course of action still leaves you on the losing side?

Although this may seem to be impossible according to your definition of rationality, I believe it's possible to construct such a scenario because of the fundamental limitations of a human brain's ability to simulate.

In previous posts you've said that, at worst, the rationalist can simply simulate the 'irrational' behaviour that is currently the winning strategy. I would contend that humans can't simulate effectively enough for this to be an option. After all, we know that several biases stem from our inability to effectively simulate our own future emotions, so to effectively simulate an entire other being's response to a complex situation would seem to be a task beyond the current human brain.

As a concrete example I might suggest the ability to lie. I believe it's fairly well established that humans are not hugely effective liars and therefore the most effective way to lie is to truly believe the lie. Does this not strongly suggest that limitations of simulation mean that a rational course of action can still be beaten by an irrational one?

I'm not sure that even if this is true it should affect a universal definition of rationality - but it would place bounds on the effectiveness of rationality in beings of limited simulation capacity.

If humans are imperfect actors, then in situations (such as a game of chicken) in which it is better to (1) be irrational and seen as irrational than it is to (2) be rational and seen as rational, the rational actor will lose.

Of course holding constant everyone else's beliefs about you, you always gain by being more rational.

"Rationalists should win." is what sold me on this site. It's a good phrase.