
Making Beliefs Pay Rent (in Anticipated Experiences)

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences.  The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don't see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don't experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like Earth's gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock's second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
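The inference from the two propositional beliefs to a concrete sensory anticipation can be sketched as a toy calculation (only the stated height and gravity come from the text; the constant-acceleration formula is an assumed physics step):

```python
import math

# Propositional beliefs (not themselves sensory anticipations):
g = 9.8         # "Earth's gravity is 9.8 meters per second per second"
height = 120.0  # "This building is around 120 meters tall"

# Their inferential consequence IS a sensory anticipation:
# falling from rest, height = (1/2) * g * t^2, so t = sqrt(2 * height / g).
fall_time = math.sqrt(2 * height / g)
print(round(fall_time, 2))  # just under five seconds

# If the second hand was on the 12 numeral at the drop, anticipate
# seeing it near the 1 numeral (five seconds later) at the crash.
tick = round(fall_time) % 60 // 5  # which numeral the hand points to
print(tick)
```

The point survives the arithmetic: neither belief is itself an experience, but together they pin down exactly which clock reading to expect.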

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience, can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could oversimplify their minds by drawing a little node labeled "Phlogiston", and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance. Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn't connect to sensory experience at all. But you had better remember the propositional assertion that "Wulky Wilkinsen" has the "post-utopian" attribute, so you can regurgitate it on the upcoming quiz. Likewise if "post-utopians" show "colonial alienation"; if the quiz asks whether Wulky Wilkinsen shows colonial alienation, you'd better answer yes. The beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit.  Do you believe that phlogiston is the cause of fire?  Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a post-utopian? Then what do you expect to see because of that? No, not "colonial alienation"; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you?  Do you believe that elan vital explains the mysterious aliveness of living beings?  Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.  It floats.

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can't find the difference of anticipation, you're probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don't know what experiences are implied by Wulky Wilkinsen being a post-utopian, you can go on arguing forever. (You can also publish papers forever.)

Above all, don't ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

 

Part of the sequence Mysterious Answers to Mysterious Questions

Next post: "Belief in Belief"

Comments


You write, “suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a ‘post-utopian’. What does this mean you should expect from his books? Nothing.”

I’m sympathetic to your general argument in this article, but this particular jibe is overstating your case.

There may be nothing particularly profound in the idea of ‘post-utopianism’, but it’s not meaningless. Let me see if I can persuade you.

Utopianism is the belief that an ideal society (or at least one that's much better than ours) can be constructed, for example by the application of a particular political ideology. It’s an idea that has been considered and criticized here on LessWrong. Utopian fiction explores this belief, often by portraying such an ideal society, or the process that leads to one. In utopian fiction one expects to see characters who are perfectible, conflicts resolved successfully or peacefully, and some kind of argument in favour of utopianism. Post-utopian fiction is written in reaction to this, from a skeptical or critical viewpoint about the perfectibility of people and the possibility of improving society. One expects to see irretrievably flawed characters, idealistic projects turn to failure, conflicts that are destructive and unresolved, portrayals of dystopian societies and argument against utopianism (not necessarily all of these at once, of course, but much more often than chance).

Literary categories are vague, of course, and one can argue about their boundaries, but they do make sense. H. G. Wells’ “A Modern Utopia” is a utopian novel, and Aldous Huxley’s “Brave New World” is post-utopian.

Indeed. Some rationalists have a fondness for using straw postmodernists to illustrate irrationality. (Note that Alan Sokal deliberately chose a very poor journal, not even peer-reviewed, to send his fake paper to.) It's really not all incomprehensible Frenchmen. While there may be a small number of postmodernists who literally do not believe objective reality exists, and some more who try to deconstruct actual science and not just the scientists doing it, it remains the case that the human cultural realm is inherently squishy and much more relative than people commonly assume, and postmodernism is a useful critical technique to get through the layers of obfuscation motivating many human cultural activities. Any writer of fiction who is any good, for instance, needs to know postmodernist techniques, whether they call them that or not.

Yes.

That said, it's not too surprising that postmodernists are often the straw opponent of choice.

The idea that the categories we experience as "in the world" are actually in our heads is something postmodernists share with cognitive scientists; many of the topics discussed here (especially those explicitly concerned with cognitive bias) are part of that same enterprise.

I suspect this leads to a kind of uncanny valley effect, where something similar-but-different creates more revulsion than something genuinely opposed would.

Of course, knowing that does not make me any less frustrated with the sort of soi-disant postmodernist for whom category deconstruction is just a verbal formula, rather than the end result of actual thought.

I also weakly suspect that postmodernists get a particularly bad rap simply because of the oxymoronic name.

Would you consider Le Guin's The Dispossessed to be post-utopian? I think she intends her Anarres to be a good place on the whole, and a decent partial attempt at achieving a utopia, but still to have plausible problems.

What good is math if people don't know what to connect it to?

All math pays rent.

For all mathematical theorems can be restated in the form:

If the axioms A, B, and C and the conditions X, Y and Z are satisfied, then the statement Q is also true.

Therefore, in any situation where the statements A, B, C and X, Y, Z are true, you will expect Q to also be verified.

In other words, mathematical statements automatically pay rent in terms of changing what you expect, which is the very thing it was required to show. ■


In practice:

If you demonstrate Pythagoras's Theorem, and you calculate that 3^2+4^2=5^2, you will expect a certain method of getting right angles to work.

If you exhibit the aperiodic Penrose Tiling, you will expect Quasicrystals to exist.

If you demonstrate the impossibility of solving the Halting Problem, you will not expect even a hypothetical hyperintelligence to be able to solve it.

If you understand why you can't trisect an angle with an unmarked ruler and a compass (not both used at the same time), you will know immediately that certain proofs are going to be wrong.

and so on and so forth.

Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them.
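As a concrete check of the first example above, here is a small sketch of the rent the Pythagorean theorem pays: a 3-4-5 triangle should contain a right angle (the law-of-cosines step is my own addition, not from the comment):

```python
import math

# The theorem's condition holds: 3^2 + 4^2 = 5^2
a, b, c = 3.0, 4.0, 5.0
assert a**2 + b**2 == c**2

# The anticipated experience: the angle opposite the side of length 5
# is a right angle. Law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(C).
cos_C = (a**2 + b**2 - c**2) / (2 * a * b)
angle_C = math.degrees(math.acos(cos_C))
print(angle_C)  # a right angle
```

This is exactly the ancient rope-stretcher's method: knot a loop into 3-4-5 segments, pull it taut, and anticipate a square corner.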

Is it not the purpose of math to tell us "how" to connect things? At the bottom, there are some axioms that we accept as the basis of the model, and using another formal model we can infer what to expect from anything whose behavior matches our axioms.

Math makes it very hard to reason about models incorrectly. That's why it's good. Even parts of math that seem particularly outlandish and disconnected just build a higher-level framework on top of more basic concepts that have been successfully utilized over and over again.

That gives us a solid framework on which we can base our reasoning about abstract ideas. Just a few decades ago most people believed the theory of probability was just a useless mathematical game, disconnected from any empirical reality. Now people like you and me use it every day to quantify uncertainty and make better decisions. The connections are not always obvious.

I loved this post, but I have to be a worthless pedant.

If you drop a ball off a 120 m tall building, you expect impact in t = sqrt(2H/g) ≈ 5 s. But that would be when the second hand is on the 1 numeral.

Heh. I got this right originally, then reread it just recently while working on the book, saw what I thought was an error (1 numeral? just one second? why?) and "fixed" it.

Eliezer, your post above strikes me, at least, as a restatement of verificationism: roughly, the view that the truth of a claim is the set of observations that it predicts. While this view enjoyed considerable popularity in the first part of the last century (and has notable antecedents going back into the early 18th century), it faces considerable conceptual hurdles, all of which have been extensively discussed in philosophical circles. One of the most prominent (and noteworthy in light of some of your other views) is the conflict between verificationism and scientific realism: that is, the presumption that science is more than mere data-predictive modeling, but the discovery of how the world really is. See also here and here.

It's amazing how many forms of irrationality are caused by failure to see the map-territory distinction and the resulting reification of categories (like 'sound') that exist in the mind: stupid arguments, phlogiston, the Mind Projection Fallacy, correspondence bias, and probably also monotheism, substance dualism, the illusion of the self, the use of the correspondence theory of truth in moral questions... how many more?

I think you're being too hard on the English professor, though. I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them. But I've never experienced a college English class; perhaps my innocent fantasies will be shaken then.

Michael V, you could say that mathematical propositions are really predictions about the behavior of physical systems like adding machines and mathematicians. I don't find that view very satisfying, because math seems to so fundamentally underlie everything else - mathematical truths can't be changed by changing anything physical, for instance - but it's one way to make math compatible with anticipation.

I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them

I think Eliezer's point was about the student. "Wulky Wilkinsen is a 'post-utopian'" could be meaningful, if you know what a post-utopian is and is not (I don't, and don't care). The student who learns just the statement, however, has formed a floating belief.

We might even initially use propositional beliefs as indicators of meaningful beliefs about the world. But if we then discuss these highly compressed beliefs without referencing their meaning, we often feel like we are reasoning when really we have ceased to speak about the world. That is, grounded beliefs can become "floaty" and spawn further "floaty" beliefs.

In my sociology class, we talk about how "Man in his natural state has liberty because everyone is equal". "Natural state", "liberty", and "equal" could conceivably be linked to descriptions of social interaction or something. However, class after class we refrain from talking about specific behaviors. Concepts float away from their referents without much resistance - it's all the same to the student, who only needs to make a few unremarkable remarks to get his B+ for class participation. Compare:

"Man in his natural state has liberty because everyone is equal"

"Man in his natural state is equal because everyone has liberty"

"When everyone has liberty and is equal, man is in his natural state"

These statements should express very different beliefs about the world, but to the student they sound equally clever coming out of the professor's mouth.

(Edit for minor grammar and formatting)

Rooney, as discussed in The Simple Truth I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.
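The claim that a belief with no empirical consequences "could receive no Bayesian confirmation" can be illustrated with a toy calculation (the probabilities here are invented for illustration, not taken from the post): if the evidence is equally likely whether or not the hypothesis holds, the posterior never moves off the prior.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A belief that constrains experience: evidence can shift it.
print(posterior(0.5, 0.9, 0.1))  # rises well above the 0.5 prior

# A "floating" belief: E is equally likely either way, so no
# observation can ever confirm or disconfirm it.
print(posterior(0.5, 0.5, 0.5))  # stays at the 0.5 prior
```

A belief that forbids nothing has P(E|H) = P(E|not-H) for every E, so updating leaves it exactly where it started.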

I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable. We can't do quantum mechanics with kets, but no bras. We can't do Gentzen natural deduction with rules of elimination, but no rules of introduction. We can't do Bayesian updating with observations, but no priors. And I claim that you can't have a theory of meaning which deals only with consequences of statements being true but not with what actions put the universe into a state in which the statement becomes true.

This position of mine comes from my interpretation of the dissertation of Noam Zeilberger of CMU (2005, I think). Zeilberger's main concern lies in Logic and Computer Science, but along the way he discusses theories of truth implicit in the work of Martin-Löf and Dummett.

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’ve broken the ice—”

She thinks: “What a butt-ugly idiot!” and gets the hell away from him.

Joe goes on happily believing that he’s smart and beautiful.

For myself, the answer is obvious: my beliefs are means to an end, not ends in themselves. They’re utility producers only insofar as they help me accomplish utility-producing operations. If I were to buy stock believing that its price would go up, I better hope my belief paid its rent in correct anticipation, or else it goes out the door.

But for Joe? If he has utility-pumping beliefs, then why not? It’s not like he would get any smarter or prettier by figuring out he’s been a butt-ugly idiot this whole time.

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

They can. They just do so very rarely, and since accepting some inaccurate beliefs makes it harder to determine which beliefs are and aren't beneficial, in practice we get the highest utility from favoring accuracy. It's very hard to keep the negative effects of a false belief contained; they tend to have subtle downsides. In the example you gave, Joe's belief that he's already smart and beautiful might be stopping him from pursuing self-improvements. But there are certainly cases where accurate beliefs are detrimental; Nick Bostrom's Information Hazards has a partial taxonomy of them.

It's sort of taken for granted here that it is in general better to have correct beliefs (though there have been some discussions as to why this is the case). It may be that there are specific (perhaps contrived) situations where this is not the case, but in general, so far as we can tell, having the map that matches the territory is a big win in the utility department.

In Joe's case, it may be that he is happier thinking he's beautiful than he is thinking he is ugly. And it may be that, for you, correct beliefs are not themselves terminal values (ends in themselves). But in both cases, having correct beliefs can still produce utility. Joe for example might make a better effort to improve his appearance, might be more likely to approach girls who are in his league and at his intellectual level, thereby actually finding some sort of romantic fulfillment instead of just scaring away disinterested ladies. He might also not put all his eggs in the "underwear model" and "astrophysicist" baskets career-wise. You can further twist the example to remove these advantages, but then we're just getting further and further from reality.

Overall, the consensus seems to be that wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook will pay off in the long run.

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

I think you've hit on one of the conceptual weaknesses of many Rationalists. Beliefs can pay rent in many ways, but Rationalists tend to value only the predictive utility of beliefs, and pooh-pooh the other utilities of belief. Comfort utility - it makes me feel good to believe it. Social utility - people will like me for believing it. Efficacy utility - I can be more effective if I believe it.

Predictive truth is a means to value, and even if it is a value in itself, it's surely not the only value. Instead of pooh-poohing other types of utility, to convince people you need to use that predictive utility to analyze how the other utilities can best be fulfilled.

The trouble is that this rationale leads directly to wireheading at the first chance you get - choosing to become a brain in a vat with your reward centers constantly stimulated. Many people don't want that, so those people should make their beliefs only a means to an end.

However, there are some people who would be fine with wireheading themselves, and those people will be totally unswayed by this sort of argument. If Joe is one of them... yeah, sure, a sufficiently pleasant belief is better than facing reality. In this particular case, I might still recommend that Joe face the facts, since admitting that you have a problem is the first step. If he shapes up enough, he might even get married and live happily ever after.

I agree with those who say it's okay to figure things out later. If my music professor says a certain composer favors the Aeolian mode, I may not be able to visualize that on the spot but who cares? I can remember that statement and think about it later. Likewise with phlogiston, I have a vague concept of what it is and someday the alchemists will discover more precisely what's going on there.

Too much cognitive effort would be spent if, every time I thought about linear algebra, I had to visualize the myriad concrete instances in which it will be applied. I bet thinking in abstractions results in way more economical use of thinking time and thinking-matter.

Interesting post. However, I do not completely agree with the conclusions at the end.

I am a student of mathematics, which puts me in an environment of researchers in this area. I can see that these people's work is based on beliefs that 'do not exist' - I mean, they work on abstract ideas that generally exist only in their minds. And now I wonder, do their efforts 'not pay rent'? They live among structures and objects that, in most cases, cannot be found in 'real life', and so, according to the article's conclusion, these would not be worth thinking about, since they do not flow from a question of anticipation (what were we anticipating if it does not exist?).

Maybe I'm misunderstanding the post, or maybe it is just focused on other life experiences.

You're definitely right that there are some areas where it's easier to make beliefs pay rent than others! I think there are two replies to your concern:

1) First, many theories from math DO pay rent (the ones I'm most aware of are statistics and computer-science related ones). For example, better algorithms in theory (say Strassen's algorithm for multiplying matrices) often correspond to better results in practice. Even more abstract stuff like number theory or recursion theory do yield testable predictions.

2) Even things that can't pay rent directly can be logical implications of other things that pay rent. Eliezer wrote about this kind of reasoning here.

I don't understand how the examples given illustrate free-floating beliefs: they seem to have at least some predictive power, and thus shape anticipation (some comments by others below illustrate this better).

  • The phlogiston theory had predictive power (e.g. what kind of "air" could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was replaced by another theory which represented a better understanding. (I base this reading on Jim Loy's page on Phlogiston Theory.)

  • Literary genres don't have much predictive power if you don't know anything about them - if you do, then they do. Classifying a writer as producing "science fiction" or "fantasy" creates anticipations that are statistically meaningful. For another comparison, saying some band plays "Death Metal" will shape our anticipation; somewhat differently for those who can distinguish Death Metal from Speed Metal as compared to those who merely know that "Metal" means "noise".

I can imagine beliefs leading to false anticipations, and they're obviously inferior to beliefs leading to more correct ones. That doesn't mean they're free-floating.

One example for the free-floating belief is actually about the tree falling in the forest: to believe that it makes a sound does not anticipate any sensory experience, since the tree falls explicitly where nobody is around to hear it, and whether there is sound or no sound will not change how the forest looks when we enter it later. However, to let go of the belief that the tree makes a sound does not seem to me to be very useful. What am I missing?

I understand that many beliefs are held not because they have predictive power, but because they generalize experiences (or thoughts) we have had into a condensed form: a sort of "packing algorithm" for the mind when we detect something common. When we understand this commonality well enough, we reach the point where we can make predictions; if we don't understand it yet, we can't, but we may be able to later. There is no belief or thought we can hold that we couldn't trace back to experiences; beliefs are not anticipatory, but formed from hindsight. They organize past experience. Can you predict which of these beliefs is not going to be helpful in organizing future experiences? How?

"Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his book? Nothing."

When I first read this I thought, "Huh? Surely it tells you something, because I already have beliefs about what 'utopian' probably means, and what the 'post' part of it probably means, and what context these types of terms are usually used in... That sounds like a whole bag of reasons to expect certain things/themes/ideas in his book!"

But I think this missed the point Eliezer is making; a point I suggest would be more clear if he said:

"Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a "barnbeanbaggle". What does this mean you should expect from his book? Nothing."

Darn right. I have no idea what a "barnbeanbaggle" is. It creates no anticipations about what I'll find in his book; it's free-floating.

Free-floating beliefs have to at least feel like beliefs. You can't even think you have a belief about whether Wulky Wilkinsen is a barnbeanbaggle unless you think you have some idea of what "barnbeanbaggle" is being used to mean. The thing about using a made-up word is that it's too easy to notice that you don't know what to anticipate from it. The thing about "post-utopian" is that, even if you have some idea of what "post-utopian" is supposed to mean, being told (by someone you perceive as sufficiently authoritative) that a certain author is "post-utopian" is quite likely to just make you selectively interpret that author's works to fit that schema. Similar to how you can make professional wine tasters describe a white wine the way they usually describe red wines by dyeing it red.

What about knowledge for the sake of knowledge? For instance I don't anticipate that my belief that The Crusades took place will ever directly affect my sensory experiences in any way. Does that then mean that this belief is completely worthless and on the same level as the belief in ghosts, psychics, phlogiston, etc.?

Wouldn't taking your chain of reasoning to its logical conclusion require one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch? After all, how much personal sensory experience do you have that confirms the existence of atoms, for example?

DP

I think Eliezer's point is less strong than you think: for one thing, reading a history book is a sensory experience, and fewer history books would proclaim that The Crusades occurred in worlds where they had not than in worlds where they had.