This is cross-posted from Putanumonit.com; you can jump into the discussion in either place.


LessWrong has a reputation for being a place where dry and earnest people write dry and earnest essays with titles like “Don’t Believe Wrong Things”. A casual visitor wouldn’t expect it to host lively discussions of prophets, of wizards, and of achieving enlightenment. And yet, each of the above links does lead to LessWrong, and each post (including mine) has more than a hundred comments.

The discussion often turns to a debate that rages eternal in the rationalist community: correctness vs. usefulness. Rationality is about having true beliefs, we are told, but rationalists should also win. Winning, aka instrumental rationality, sure sounds a lot more fun than just believing true things (epistemic rationality). People are tempted to consider it the primary goal of rationality, with the pursuit of truth being secondary.

Mentions of the “useful but incorrect”, which is how I see Jordan Peterson, invite comments like this:

A correct epistemological process is likely to assign very low likelihood to the proposition of Christianity being true at some point. Even if Christianity is true, most Christians don’t have good epistemics behind their Christianity; so if there exists an epistemically justifiable argument for ‘being a Christian’, our hypothetical cradle-Christian rationalist is likely to reach the necessary epistemic skill level to see through the Christian apologetics he’s inherited before he discovers it.

At which point he starts sleeping in on Sundays; loses the social capital he’s accumulated through church; has a much harder time fitting in with Christian social groups; and cascades updates in ways that are, given the social realities of the United States and similar countries, likely to draw him toward other movements and behavior patterns, some of which are even more harmful than most denominations of Christianity, and away from the anthropological accumulations that correlate with Christianity, some of which may be harmful but some of which may be protecting against harms that aren’t obvious even to those with good epistemics. Oops! Is our rationalist winning?
[…]
epistemic rationality is important because it’s important for instrumental rationality. But the thing we’re interested in is instrumental rationality, not epistemic rationality. If the instrumental benefits of being a Christian outweigh the instrumental harms of being a Christian, it’s instrumentally rational to be a Christian. If Christianity is false and it’s instrumentally rational to be a Christian, epistemic rationality conflicts with instrumental rationality.

Well, it’s time for a dry and earnest essay (probably overdue after last week’s grapefruits) on the question of instrumental vs. epistemic rationality. I am not breaking any ground that wasn’t previously covered in the Sequences etc., but I believe that this exercise is justified in the spirit of non-expert explanation.

I will attempt to:

  1. Dissolve a lot of the dichotomy between “useful” and “correct”, via some examples that use “wrong” wrong.
  2. Of the dichotomy that remains, position myself firmly on the correct side of the debate.
  3. Suggest that convincing yourself of something wrong is, in fact, possible and should be vigilantly guarded against.
  4. Say some more in praise of fake frameworks, and what they mean if they don’t mean “believing in false things”.

Wrong and Less Wrong

What does “truth” mean, for example in the definition of epistemic rationality as “the pursuit of true beliefs about the world”? I think that a lot of the apparent conflict between the “useful” and “true” stems from confusion about the latter word that isn’t merely semantic. As exemplars of this confusion, I will use Brian Lui’s posts: wrong models are good, correct models are bad, and useful models are better than correct models.

I have chosen Brian as a foil because:

  1. We actually disagree, but both do so in good faith.
  2. I asked him if I could, and he said OK.

Here are some examples that Brian uses:

Correct models | Useful models
Schrödinger’s model | Bohr’s atomic model
Calorie-in-calorie-out | Focus on satiety
Big 5 personality | MBTI
Spherical Earth | Flat Earth

You may be familiar with Asimov’s quote:

“When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

People often overlook the broader context of the quote. Asimov makes the point that Flat Earth is actually a very good model. Other models could posit an Earth with infinitely tall mountains or bottomless trenches, or perhaps an Earth tilted in such a way that walking north-west would always be uphill. A flat Earth model, built on empiricism and logic, is quite an achievement:

Perhaps it was the appearance of the plain that persuaded the clever Sumerians to accept the generalization that the earth was flat; that if you somehow evened out all the elevations and depressions, you would be left with flatness. Contributing to the notion may have been the fact that stretches of water (ponds and lakes) looked pretty flat on quiet days.

A model is correct or not in the context of a specific question asked of it, such as “Will I arrive back home from the east if I keep sailing west?” The flat Earth model was perfectly fine until that question was asked, and the first transoceanic voyages took place more than 1,000 years after Eratosthenes calculated the spherical Earth’s radius with precision.

But it’s not just that the “wrong” models contain truth; the “correct” models are also wrong, as George Box famously noticed. The Earth’s shape isn’t a sphere. It’s not even a geoid; it changes moment by moment with the tides, plate tectonics, and ants building anthills. Brian’s division of models into the correct and the incorrect starts to seem somewhat arbitrary, so what is it based on?

Brian considers the Big 5 personality model to be more “correct” and “scientific” because it was created using factor analysis, while Myers-Briggs is based on Jung’s conceptual theory. But the trappings of science don’t make a theory true, particularly when the science in question has a fraught relationship with the truth. How “scientific” a process was used to generate a model can correlate with its truthfulness, but as a definition it seems to miss the mark entirely.

Rationalists usually measure the truth of a model by the rent it pays when it collides with reality. Neither MBTI nor Big 5 does a whole lot of useful prediction, and they’re not even as fun as the MTG color system. On the other hand, Bohr’s atomic model works for most questions of basic chemistry and even the photoelectric effect.

A model is wrong not because it is not precisely quantified (like satiety), or because it wasn’t published in a science journal (like MBTI), or because it has been superseded by a more reductionist model (like Bohr’s atom). It is wrong when it predicts things that don’t happen or prohibits things that do.

When a model’s predictions and prohibitions line up with observable reality, the model is true. When those predictions are easy to make and check, it is useful.  Calorie-in-calorie-out isn’t very useful on the question of successful dieting because it is so difficult for people to just change their caloric balance as an immediate action. This difficulty doesn’t make this model any more or less correct, it just means that it’s hard to establish its correctness from seeing whether people who try to count calories lose weight or not. In this view truth and usefulness are almost orthogonal: truth is a precondition for usefulness, while some models are so wrong that they are worse than useless.
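To make that concrete, here is a minimal sketch of calorie-in-calorie-out as a prediction machine. The function name and the roughly 3,500 kcal-per-pound figure are just the usual rule of thumb, used purely for illustration, not anything from Brian’s post or mine:

```python
def predicted_weight_change_lbs(daily_deficit_kcal: float, days: int) -> float:
    """Weight change predicted by calorie-in-calorie-out (negative = loss)."""
    KCAL_PER_LB = 3500.0  # rule-of-thumb energy content of a pound of body fat
    return -(daily_deficit_kcal * days) / KCAL_PER_LB

# The prediction itself is trivial to compute...
print(predicted_weight_change_lbs(daily_deficit_kcal=500, days=30))  # about -4.3 lbs

# ...but the model's usefulness hinges on whether "run a 500 kcal/day deficit" is an
# action a person can actually execute, which is exactly the hard part.
```

The arithmetic is correct either way; the question of usefulness lives entirely in that last comment.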

Jesus and Gandhi

Usefulness, in the sense of beliefs paying rent, is a narrower concept than winning, e.g., making money to pay your actual rent. The comment about the lapsed Christian I quoted talks about instrumental rationality as the pursuit of actually winning in life. So, is the rejection of Christ epistemically rational but instrumentally irrational?

First of all, I think that the main mistake the hypothetical apostate is making is a bucket error. In his mind, there is a single variable labeled “Christianity” which contains a boolean value: True or False. This single variable serves as an answer to many distinct questions, such as:

  1. Did Jesus die for my sins?
  2. Should I go to church on Sunday?
  3. Should I be nice to my Christian friends?

There is no reason why all three questions must have the same answer, as demonstrated by my closet-atheist friend who lives in an Orthodox Jewish community. The rent in the Jewish part of Brooklyn is pretty cheap (winning!) and doesn’t depend on one’s beliefs about revelation. Living a double life is not ideal, and it is somewhat harder to fit in a religious community if you’re a non-believer. But carelessly propagating new beliefs before sorting out the buckets in one’s head is much more dangerous than zoning out during prayer times. Keeping behaviors that correlate with a false belief is very different from installing new beliefs to change one’s behavior.

Information hazards are also a thing. There are many real things that we wish other people wouldn’t know, and some things that we wouldn’t want to learn ourselves. But avoiding true but dangerous knowledge is very different from hunting false beliefs.

With that said, what if hunting and installing false beliefs is actually justified? A friend of mine who’s a big fan of Jordan Peterson is joking-not-joking about converting to Christianity.  If Christianity provides one with friends, meaning, and protection from harmful ideologies, isn’t it instrumentally rational to convert?

There’s a word for this sort of bargain: Faustian. One should always imagine this spoken by someone with reddish skin, twisty horns, and an expensive suit. I offer you all this, and all I want in return is a tiny bit of epistemic rationality. What’s it even worth to you?

Epistemic rationality is worth a lot.

It takes a lot of epistemic rationality to tease apart causation from the mere correlation of religion with its benefits. Perhaps a Christian’s community likes him because consistent beliefs make a person predictable; this benefit wouldn’t extend to a fresh convert. As for meaning and protection from adverse memes, are those provided by Jesus or by the community itself? Or by some confounder like age or geography?

A person discerning enough on matters of friendship to judge whether it is the cause or the effect of Christian belief probably understands friendship well enough to make friends with or without converting. I help run a weekly meetup of rationalists in New York. We think a lot about building an active community, and we implement this in practice. We may not provide the full spiritual package of a church, but we also don’t demand a steep price from our members: neither in money, nor in effort, nor in dogma.

Perhaps converting is the instrumentally optimal thing to do for a young rationalist, but it would require heroic epistemic rationality to know that it is so. And once you have converted, that epistemic rationality is gone forever, along with the ability to reason well about such trade-offs in the future. If you discover a new religion tomorrow that offers ten times the benefits of Christianity, it would be too late: your new belief in the truth of Christianity will prevent you from even considering the option of reconverting to the new religion.

This argument is colloquially known as The Legend of Murder-Gandhi. Should Gandhi, who abhors violence, accept a million dollars in exchange for taking a pill that leaves him only 99% as reluctant to commit murder? No, because the 99%-pacifist Gandhi will not hesitate to take another pill and go down to 98%, then to 97%, and to 90%,

and so on until he’s rampaging through the streets of Delhi, killing everything in sight.

An exception could be made if Gandhi had a way to commit himself to stopping at 95% pacifism; that’s still pacifist enough that he doesn’t really need to worry about acting violently, and it leaves him $5 million richer.

But epistemic rationality is a higher-level skill than mere pacifism. It’s the skill that’s necessary not only to assess a single trade-off, but also to understand the dangers of slippery slopes, the benefits of pre-commitments, and the need for Functional Decision Theory in a world full of Newcomblike problems. A Gandhi who’s perfectly pacifist but doesn’t understand Schelling fences will take the first pill, and all his pacifism will be for naught.
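For concreteness, here is a toy simulation of the pill game. The cost curve is my own made-up assumption, not anything from Scott’s original post: every myopic Gandhi along the way happily accepts the next pill, and only a pre-committed Schelling fence stops the slide.

```python
def marginal_cost_of_violence(pacifism: int) -> float:
    """How much (in dollars) a Gandhi at this pacifism level minds losing one more
    point of it. The quadratic shape is a made-up assumption: the less pacifist
    Gandhi already is, the less he minds eroding further."""
    return 80 * pacifism ** 2  # the 100% Gandhi values a point at $800k, just under $1M

def pill_game(price_per_pill=1_000_000, schelling_fence=None):
    pacifism, earnings = 100, 0
    while pacifism > 0:
        if schelling_fence is not None and pacifism <= schelling_fence:
            break  # the pre-commitment kicks in, no matter how tempting the next offer
        if marginal_cost_of_violence(pacifism) >= price_per_pill:
            break  # this Gandhi genuinely prefers his remaining pacifism to the money
        pacifism -= 1
        earnings += price_per_pill
    return pacifism, earnings

print(pill_game())                    # (0, 100000000): rampaging through Delhi
print(pill_game(schelling_fence=95))  # (95, 5000000): still Gandhi, $5 million richer
```

Note that the fence does all the work: without it, no individual Gandhi along the way ever faces an offer he would refuse.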

Do you think you have enough epistemic rationality to determine when it’s really worth sacrificing epistemic rationality for something else? Better to keep increasing your epistemic rationality, just to be sure.

Flat Moon Society

Is this a moot point, though? It’s not like you can make yourself go to sleep an atheist and wake up a devout Christian tomorrow. Eliezer wrote a whole sequence on the inability to self-deceive:

We do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.
[…]
You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.
The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

He gives an example of a very peculiar Orthodox Jew:

When this woman was in high school, she thought she was an atheist.  But she decided, at that time, that she should act as if she believed in God.  And then—she told me earnestly—over time, she came to really believe in God.
So far as I can tell, she is completely wrong about that.  Always throughout our conversation, she said, over and over, “I believe in God”, never once, “There is a God.”  When I asked her why she was religious, she never once talked about the consequences of God existing, only about the consequences of believing in God.  Never, “God will help me”, always, “my belief in God helps me”.  When I put to her, “Someone who just wanted the truth and looked at our universe would not even invent God as a hypothesis,” she agreed outright.

She hasn’t actually deceived herself into believing that God exists or that the Jewish religion is true.  Not even close, so far as I can tell.

On the other hand, I think she really does believe she has deceived herself.

But eventually, he admits that believing you won’t self-deceive is also somewhat of a self-fulfilling prophecy:

It may be wise to go around deliberately repeating “I can’t get away with double-thinking!  Deep down, I’ll know it’s not true!  If I know my map has no reason to be correlated with the territory, that means I don’t believe it!”

Because that way—if you’re ever tempted to try—the thoughts “But I know this isn’t really true!” and “I can’t fool myself!” will always rise readily to mind; and that way, you will indeed be less likely to fool yourself successfully.  You’re more likely to get, on a gut level, that telling yourself X doesn’t make X true: and therefore, really truly not-X.

To me the sequence’s message is “don’t do it!” rather than “it’s impossible!”. If self-deception were impossible, there would be no need for injunctions against it.

Self-deception definitely isn’t easy. A good friend of mine told me about two guys he knows who are aspiring flat-Earthers. Out of the pure joy of contrarianism, the two have spent countless hours watching flat-Earth apologia on YouTube. So far their yearning for globeless epiphany hasn’t been answered, although they aren’t giving up.

A coworker of mine feels that every person should believe in at least one crazy conspiracy theory, and so he says that he convinced himself that the moon landing was faked. It’s hard to tell if he fully believes it, but he probably believes it somewhat. His actual beliefs about NASA have changed, not just his beliefs-in-self-deception. Perhaps earlier in life, he would have bet that the moon landing was staged in a movie studio at million-to-one odds, and now he’ll take that bet at 100:1.
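To put a rough number on “believes it somewhat”: the odds here are the hypothetical from the paragraph above, and the snippet below just converts them into probabilities and into bits of log-odds movement.

```python
import math

def implied_probability(odds_against: float) -> float:
    """Probability implied by accepting a bet at N-to-1 odds against a claim."""
    return 1 / (odds_against + 1)

before = implied_probability(1_000_000)  # ~0.0001% that the landing was staged
after = implied_probability(100)         # ~1%

# How far the belief moved, measured in bits of log-odds toward "staged":
shift_in_bits = math.log2((after / (1 - after)) / (before / (1 - before)))
print(f"{before:.7f} -> {after:.4f}, a shift of roughly {shift_in_bits:.1f} bits")
```

Thirteen-odd bits of movement toward a claim he encountered no new evidence for is a real change in his map, not just belief-in-belief.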

He is certainly less likely to discount the other opinions of moon-landing skeptics, which leaves him a lot more vulnerable to being convinced of bullshit in the future. And the mere belief-in-belief is still a wrong belief that was created in his mind ex nihilo. My colleague clearly sacrificed some amount of epistemic rationality, although it’s unclear what he got in return.

Self-deception works like deception. False beliefs sneak into your brain the same way a grapefruit does.

  1. First, we hear something stated as fact: the moon landing was staged. Our brain’s immediate reaction on a neurological level to a new piece of information is to believe it. Only when propagating the information shows it to be in conflict with prior beliefs is it discarded. But nothing can ever be discarded entirely by our brains, and a small trace remains.
  2. We come across the same information a few more times. Now, the brain recognizes it as familiar, which means that it anchors itself deeper into the brain even if it is disbelieved every time. The traces accumulate. Was the footage of the moon landing really all it seemed?
  3. Perhaps we associate a positive feeling with the belief. Wouldn’t it be cool if the Apollo missions never happened? This means that I can still be the first human on the moon!
  4. Even if we still don’t believe the original lie when questioning it directly, it still occupies some territory in our head. Adjacent beliefs get reinforced through confirmation bias, which in turn reinforces the original lie. If the “landing” was really shot on the moon, why was the flag rippling in the wind? Wait, is the flag actually rippling? We don’t remember, it’s not like we watch moon landing footage every day. But now we believe that the flag was rippling, which reinforces the belief that the moon landing was fake.
  5. We forget where we initially learned the information from. Even if the original claim about the moon fakery was presented as untrue and immediately debunked, we will just remember that we heard somewhere that it was all an elaborate production to fool the Russians. We recall that we used to be really skeptical of the claim once, but it sure feels like a lot of evidence has been pointing that way recently…

It is easiest to break this chain at step 1 – avoid putting trash into your brain. As an example, I will never read the Trump exposé Fire and Fury under any circumstances, and I implore my friends to do the same. Practically everyone agrees that the book has ten rumors and made-up stories for every single verifiable fact, but if you read the book, you don’t know which is which. If you’re the kind of person who’s already inclined to believe anything and everything about Donald Trump, reading the book will inevitably make you stupider and less informed about the president. And this “kind of person” apparently includes most of the country, because no parody of Fire and Fury has been too outlandish to be believed.

Take the Glasses Off

So, what are “fake frameworks” and what do they have to do with all of this?

I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.
[…]
Assume the intuition is wrong. It’s fake. And then use it anyway.

It almost sounds as if Val is saying that we should believe in wrong things, but I don’t think that’s the case. Here’s the case.

First of all, you should use a safety mechanism when dealing with fake frameworks: sandboxing. This means holding the belief in a separate place where it doesn’t propagate.

This is why I talk about wearing a “Peterson mask”, or having Peterson as a voice on your shoulder. The goal is to generate answers to questions like “What would Peterson tell me to do here? And how would Scott Alexander respond?” rather than literally replacing your own beliefs with someone else’s.  Answering those questions does require thinking as Peterson for a while, but you can build scaffolding that prevents that mode of thinking from taking over.

But sandboxing is secondary to the main point of fake frameworks: they’re not about believing new things, they’re about un-believing things.

A lot of fake frameworks deal with the behavior of large numbers of people: coordination problems are an ancient hungry demon, the social web forces people into playing roles, Facebook is out to get you. In what sense is Facebook out to get you? Facebook is thousands of employees and millions of shareholders pursuing their own interest, not a unified agent with desires.

But neither is a person.

People’s minds are made up of a multitude of independent processes, conscious and unconscious, each influencing our interactions with the world. Our single-minded pursuit of genetic fitness has shattered into a thousand shards of desire. Insofar as we have strategic goals such as being out to get someone, we are constantly distracted from them and constantly changing them.

The insight of fake frameworks is that every framework you use is fake, especially when talking about complicated things like people and societies. “Society” and “person” themselves aren’t ontologically basic entities, just useful abstractions. Useful, but not 100% true.

And yet, you have to interact with people and societies every day.  You can’t do it without some framework of thinking about people; a cocktail party isn’t navigable on the level of quarks or molecules or cells. You have to see human interaction through one pair of glasses or another. The glasses you look through impose some meaning on the raw data of moving shapes and mouth sounds, but that meaning is “fake”: it’s part of the map, not the territory.

Once you realize that you’re wearing glasses, it’s hard to forget that fact. You can now safely take the glasses off and replace them with another pair, without confusing what you see through the lenses with what exists on a fundamental level. The process is gradual, peeling away layer after layer of immutable facts that turned out to be interpretations. Every time a layer is peeled away, you have more freedom to play with new frameworks of interpretation to replace it.

If you can stand one more visual metaphor, the skill of removing glasses is also called Looking. This art is hard and long and I’m only a novice in it, but I have a general sense of the direction of progress. There seems to be a generalizable skill of Looking and playing with frameworks, as well as domain-specific understanding that is required for Looking in different contexts. Deep curiosity is needed, and also relinquishment. It often takes an oblique approach rather than brute force. For example, people report the illusion of a coherent “self” being dispelled by such varied methods as meditation, falling in love, taking LSD, and studying philosophy.

Finally, while I can’t claim the benefits that others can, I think that Looking offers real protection against being infected with wrong beliefs. Looking is internalizing that some of your beliefs about the world are actually interpretations you impose on it. False interpretations are much easier to critically examine and detach from than false beliefs. You end up believing fewer wrong things about the world simply because you believe fewer things about the world.

And if Looking seems beyond reach, believing fewer wrong things is always a good place to start.

36 comments

No specific comment beyond saying this was one of my favorite posts of yours, and on LW in terms of rationality content, in months.

Once you realize that you’re wearing glasses, it’s hard to forget that fact. You can now safely take the glasses off and replace them with another pair, without confusing what you see through the lenses with what exists on a fundamental level. The process is gradual, peeling away layer after layer of immutable facts that turned out to be interpretations. Every time a layer is peeled away, you have more freedom to play with new frameworks of interpretation to replace it.

Can you give an example of this?

Good question! Also hard to give a clear-cut example of, but I think this is somewhat true of how I understand people's behavior.

  • When I was little, I saw people as having an unchanging character: good person, angry person, mean person.
  • When I grew up I realized that "character" isn't really an immutable part of a person, just the way I see them. I started understanding behavior in terms of following incentives and executing strategies: this person wants X, so he does Y.
  • Now, I have a sense that "a person wants something" is really just an abstraction. People look like they're following goals, but at any given moment we are executing a bunch of routines that are very context-dependent. We do some things driven by system 2, and other things that reenact previous actions or roles, and some things in response to arbitrary stimuli etc. I don't see behavior in the moment, let alone over time, as necessarily being unified or coherent.

This final stage allows me to be more flexible about describing character and behavior, because I see that those aren't ontologically basic. Instead of "this person is tribal" or "this person is signaling group loyalty", I may see someone as executing group signalling routines in a certain social context, and doing that by taking cues from a specific person. If I meet someone new I may form an initial impression of them at the level of character or goals, but it's much easier to add nuance to those or at least to moderate the strength of my predictions about what they may do.

You’re just talking about correspondence bias / fundamental attribution error, right?

Not quite, I think that either of those talks about only a small piece of misunderstanding people's behaviors.

Learning about FAE tells me that the other person kicked the vending machine not because he's an "angry person" but because he had a bad day. But really, "bad day" isn't any more of a basic entity than "angry person" is. A zen master has no "bad days" and also isn't an "angry person", which one is the reason why a zen master doesn't kick vending machines?

Also, the reason I kicked a vending machine isn't just because I had a bad day, but also because 5 minutes ago I was thinking about soccer, and 5 weeks ago I kicked a machine and it gave me a can, and 5 years ago I read a book about the benefits of not suppressing emotions. The causes of a simple act like that are enormously complicated, and FAE is just a step in that direction.

Separately from my other comment, I have a question about this bit:

A zen master has no “bad days”

Could you elaborate on this? I’m not sure what this could mean.

The one hypothesis I can think of, for how to interpret this line, is something like: “A zen master, even if he were to experience a series of misfortunes and frustrations over the course of a day, would nonetheless remain perfectly calm and in control of his emotions, and would not be driven to acts of violence or outward expression of frustration, such as kicking a vending machine which ate his dollar.”

Now, let us say this is true of some zen master. It seems to me, however, that it would then be precisely accurate to say that this zen master “isn’t an angry person”, and that this is the reason why he doesn’t kick vending machines. (I mean, I have bad days aplenty, and yet—despite not being a zen master—I don’t kick vending machines; this is, of course, because I am not an angry person.)

If you had a different meaning in mind, could you explain?

I am not Jacob, but here is what I think it means:

The pipeline from misfortunes to vending-machine-kicking has the following stages. Misfortunes lead to frustration. Frustration leads to anger. Anger leads to kicking vending machines. Our hypothetical Zen master has cultivated habits of mind that break this pipeline in multiple ways. When misfortunes strike, he doesn't become frustrated. Even if frustrated, he doesn't become angry. Even if angry, he doesn't kick vending machines. The early parts of this pipeline we call "having a bad day". The later parts we call "being an angry person", or at least acting like an angry person on a particular occasion. The Zen master's non-kicking is overdetermined: it isn't just because he has learned not to become so angry as to kick vending machines, but also because he has learned not to get into the mental states that would be precursors to this sort of anger.

It seems to me that the “misfortunes” part is what we call “having a bad day”. On that basis, my analysis (and my question) stands.

It seems to me that that interpretation makes the statement that 'a zen master has no "bad days"' mere nonsense and I don't think it's likely that Jacob wrote mere nonsense. Hence my opinion that he meant "bad days" to be understood as something like "days that he finds unpleasant and frustrating on account of misfortunes".

I guess there's another possibility, which is that Jacob is thinking that the Zen master achieves a state of indifference-to-worldly-things deep enough that scarcely anything that the rest of us would consider misfortune plays any role in his preferences. If you literally don't care what you eat, what sort of surroundings you live in, whether you are sick, whether anyone else likes you, how you spend your time, etc., then you're immune to a lot of things usually regarded as bad. (But also to a lot of things usually regarded as good.)

Jacob, if you're reading this, would you care to clarify?

I don't have much to add to gjm's description, but I'll add a little bit of flavor to get at Said's situational vs. dispositional dichotomy.

"Having a bad day" means something like experiencing a day in such a way that it causes mental suffering, and being an "angry person" is someone who reacts to mental suffering with violence. My claim is that those things aren't clean categories: they are hard to separate from each other, and they are both situation and dispositional.

If you experience a lot of suffering from some external misfortune, you are more likely to react in a way that makes it worse, and also to build up a subconscious habit of reacting in this way, which in turn creates more chances for you to suffer and get angry and react and reinforce the pattern... eventually you will end up kicking a lot of vending machines.

It doesn't make a lot of sense to draw a circle around something called "bad day" or "angry person" and blame your machine kicking on that. These two things are causes and effects of each other, and of a million other situational and dispositional things. That's what I mean by "bad day" and "angry person" being fake, and the definition of FAE that I googled doesn't quite address this.

I see, thanks.

I disagree with your view for several reasons, but I think we’ve diverged from the topic enough that it’s probably not useful to continue down this tangent. (In any case, you’ve more or less explained what you mean to my satisfaction.)

It seems to me that that interpretation makes the statement that ‘a zen master has no “bad days”’ mere nonsense and I don’t think it’s likely that Jacob wrote mere nonsense. Hence my opinion that he meant “bad days” to be understood as something like “days that he finds unpleasant and frustrating on account of misfortunes”.

Indeed, I agree that it’s unlikely that Jacob wrote something that was, to him, obvious nonsense. What might yet be the case, however, is that he wrote something that seemed like not-nonsense, but is actually nonsense upon inspection.

Before we investigate this possibility, though, I do want to ask—you say that what you describe in your second paragraph is “another possibility”, which implies that what you had in mind in your first paragraph is something different. But this puzzles me. How would the Zen master manage to not have any “days that he finds unpleasant and frustrating on account of misfortunes”, if it’s not either “never having any days full of misfortunes”, or “achiev[ing] a state of indifference-to-worldly-things deep enough that scarcely anything that the rest of us would consider misfortune plays any role in his preferences”?

Is the idea that the Zen master has preferences, but does not experience the frustration[1] of those preferences as (emotionally) frustrating? If so, then I struggle to imagine what it might mean to describe such a person as “having preferences”. (“I like ice cream and hate cake, but if I never eat ice cream again and have to eat cake all the time, that’s ok and I don’t feel bad about this”—like that? Or what?)

Or is it something else?

[1] In the impersonal descriptive sense, i.e. “lack of satisfaction”.

Is the idea that the Zen master has preferences, but does not experience the frustration[1] of those preferences as (emotionally) frustrating? If so, then I struggle to imagine what it might mean to describe such a person as “having preferences”.

A person who feels no frustration can be said to have preferences if his choices exhibit those preferences. I.e., you prefer ice cream to cake if, given the choice, you always choose ice cream. Emotions have nothing to do with this. It doesn't matter whether eating cake makes you feel bad. Hell, I can even prefer water to beer, even if drinking beer makes me happier (and even if it has no direct negative consequences for me).

There is also an issue where you conflate "getting frustrated" with "feeling bad". Frustration is a very specific emotion. It is not at all clear that in order to never feel frustration I must also get rid of all other emotions. Maybe the zen master is able to feel pleasure, and, on the day his house burns down, he simply feels less pleasure.

To summarize: "having preferences that aren't tied to negative emotions" and "not having preferences" are indeed two different options.

Here are some possible ways -- in principle, and I make no comment on how achievable they are -- to have misfortunes without unpleasant frustration. The first is the "another possibility", the others are different. (1) Indifference to what happens (i.e., no preference over outcomes). (2) Preferences largely detached from emotions (i.e., you consistently act to achieve X rather than Y, say X is better than Y, etc., but don't feel anything much different depending on whether X or Y happens). (3) Preferences detached from negative but not from positive emotions (i.e., you feel satisfaction/joy/excitement/contentment/... when X happens, but when Y happens you just say "huh, too bad" and move on). (4) Other varieties of partial detachment of preferences from emotions; e.g., emotions associated with dispreferred outcomes are transient only and lack power to move you much.

Two things.

First, I don't think "does not experience the frustration" is quite accurate. It would be more accurate to say he experiences it for 100 or 200 milliseconds when it comes up.

The emotion comes up but instead of letting it rest in the body it passes through.

Secondly, there are also Zen masters who have indifference-to-worldly-things and spend all their time meditating.

Is the idea that the Zen master has preferences, but does not experience the frustration[1] of those preferences as (emotionally) frustrating?

I believe this is in fact the deal here. I think this is the sort of thing in eastern philosophy (and I guess Stoicism too?) that is famously hard to grok.

(I do think we can delve into the details here and find something that legibly makes sense. In the past I've found it difficult to have this sort of conversation with you, because you seem to start with such a high prior that "it's just nonsense that we're confused about" as opposed to an outlook you don't yet understand, that getting to the useful bits of the conversation doesn't feel very rewarding.)

I’m willing to have this conversation, of course, but in this particular case I think it might actually be a moot point.

I’m entirely willing to accept, for the sake of argument, that such “zen masters” as described in this subthread (which, as we’re tentatively assuming, is the sort of thing Jacob had in mind) do indeed exist. Well, then wouldn’t describing such an individual as “not an angry person” be correct? (Incomplete, perhaps; imprecise, maybe; but basically on-target—yes?)

I mean, the original point was that the idea of the FAE / correspondence bias was missing something, that it didn’t capture the point Jacob made in this comment. This objection only makes sense, however, if “whether one does, or does not, have bad days” is not—unlike “is an angry person or not”—a dispositional property of the individual, but is instead something else (like what?—well, that was my question…).

But the "zen masters" whom we've described in this subthread "don't have bad days" precisely because they're a certain sort of person. So Jacob's objection doesn't make sense after all.

In other words, by bringing up the “zen master” example, Jacob seems to be arguing (contra this comment) for an (at least partly) dispositional view of behavior after all—something like: “Alice kicked the vending machine because she experienced a series of misfortunes today and isn’t a zen master [and thus we may describe her as having ‘had a bad day’] and is an angry person; Bob didn’t kick the vending machine, despite having experienced a series of misfortunes today and also not being a zen master [and thus also having had what may be described as ‘a bad day’] because he isn’t an angry person; Carol didn’t kick the vending machine despite experiencing a series of misfortunes today because she is a zen master [and thus has not had anything which we may describe as ‘a bad day’]; Dave didn’t kick the vending machine because he has no reason to do so.”

What this doesn’t seem like, however, is any sort of move beyond a situational account of behavior, toward something which is neither situational nor dispositional. That is what was implied, but no such thing is in evidence.

(Of course, it’s entirely possible I’ve misunderstood or mischaracterized something at some point in my account.)

What this doesn’t seem like, however, is any sort of move beyond a situational account of behavior, toward something which is neither situational nor dispositional.

Do accounts that are neither situational nor dispositional exist? What would that even look like?

I don't see where Jacob promised such an account. I do see where he explains to you that FAE is not a wrong explanation, but that he finds his own more accurate.

It seems to me, from the third-person view, that the disagreements in this comment thread are quite simple, and you might be overreacting to them.

Learning about FAE tells me that the other person kicked the vending machine not because he’s an “angry person” but because he had a bad day.

This seems to me to be a misinterpretation of the concept of the FAE.

The idea is that you think that you have some reason for doing things, whereas other people do things because that’s just the kind of person they are. But in fact, everyone has (or thinks they have) reasons for doing things. Whether those causes are proximal (“I’m having a bad day”, “this darn machine ate my last quarter”, etc.) or complex (“5 years ago I read a book …”) is of peripheral importance to the key insight that other people have lives and mental goings-on of their own.

Now, this is all quite well-understood and not new (although, of course, it may be new to each of us, as we each get older and (hopefully) wiser and discover for ourselves all the things that people before us have discovered and even helpfully written down in books which we, sadly, don’t have time to read). However, my more salient question is this:

In what way can the process of discovering or realizing these truths about how people work, be reasonably described as “tak[ing] the glasses off and replace them with another pair, without confusing what you see through the lenses with what exists on a fundamental level”? This seems to me to be a misleading characterization. Wouldn’t a more accurate description be something like “getting closer to the truth” or “improving your model of the world” or something along those lines?

...can the process of discovering or realizing these truths about how people work, be reasonably described as...

I mean - yes, I think so, otherwise I would not have written this post.

I'm not sure where this conversation is going. We're not talking about whether X is true, but whether Y is the optimal metaphor that can be conceived of to describe X. While I always want to learn how to make my writing more clear and lucid, I don't find this sort of discussion particularly productive.

I mean—yes, I think so, otherwise I would not have written this post.

Wait… what? That… wasn’t a yes-or-no question.

Oops, just realized that. Let me try again:

In what way can the process of discovering or realizing these truths about how people work, be reasonably described...

In the way that I just did.

You asked me if this is just FAE, I answer "Kinda, but I like my description better. FAE doesn't capture all of it".

You ask if this is just getting closer to the truth, I answer "Kinda, but I like my description better. Getting closer to the truth doesn't tell you what mental movement is actually taking place."

If you think you know what I mean but I'm explaining it poorly, you probably won't be able to squeeze a better explanation out of me. This isn't a factual claim, it's a metaphor for a complex mental process. If 4,000 words weren't enough to make this make sense in your head, then go read someone else - the point of non-expert explanation is that everyone can find the one explanation that makes sense for them.

Déformation professionnelle.

Religion impressed from childhood and deconverted from.

Lucid Dreaming.

Ritual.

Drugs.

It's not so much that these are the examples as that these are examples where you have the opportunity to notice the contrast. Everything is glasses. Your skepticism is glasses.

How are any of these things example of “take the glasses off and replace them with another pair, without confusing what you see through the lenses with what exists on a fundamental level”?

(If anything, they seem like counterexamples!)

You are always already Looking no matter which glasses you have on. The point of swapping glasses and noticing that you are swapping glasses is to notice what remains the same no matter which pair you are wearing, in the same way that you see the flaws in the lens by moving it against the background and noticing that the scratch is invariant.

First of all, I think that the main mistake the hypothetical apostate is making is a bucket error. In his mind, there is a single variable labeled “Christianity” which contains a boolean value: True or False. This single variable serves as an answer to many distinct questions, such as:
Did Jesus die for my sins?
Should I go to church on Sunday?
Should I be nice to my Christian friends?

This seems like the crux of the issue to me.

Relevant, from Conviction Without Self-deception:

I think it's important to tease apart feelings from beliefs. If you're standing on that diving platform, I think it's important to simultaneously know you have a 17% chance of victory, and fill yourself with the excitement, focus, and confidence of the second swimmer. Become able to tap into conviction, without any need for the self-deception.

Here, Nate is specifically talking about the epistemic/instrumental trade-off presented by "believe in yourself!" and noting that it doesn't have to be a trade-off. You can have the feeling of "confidence" and have an accurate model.

Likewise with "Should I convert to Christianity?", there is a difference between using Christianity as a "genie" to make decisions for you, and taking the premises of Christianity to be true. When we look at the problem this way, the next obvious question is, "Why should I use a particular framework, fake or otherwise?".

The biggest worry seems to be that you won't be able to "take off" a certain framework if you use it too much. It doesn't seem like that is a problem with frameworks in general. I haven't heard any stories of software/hardware engineers getting "trapped" at their level of abstraction, and insisting that their framework is "actually super duper true".

Though there does seem to be some hazard in situations like converting to a religion. I think that a fruitful area of investigation would be to study what qualities of a framework lend it to being "sticky". Here are some conjectures.

  • Frameworks that come with a social context are more sticky
  • Frameworks which insist on you professing belief in the framework
  • Frameworks that are basically disguised fill-in-the-blanks, where your intuitions do all the work
  • Frameworks that try to answer any possible question (and thus you are encouraged to use it more and more often)

I have heard plenty of stories (and seen examples) of software engineers who only know how to make software using the particular frameworks and tools they are familiar with, and flounder if e.g. given just a text editor containing an empty document and asked to write code to do some simple task. (Or if asked to do some more complicated task for which the tools they know are ill suited.)

That seems not a million miles from what being unable to "take off" a framework looks like when translated from "frameworks for thinking" to "frameworks for developing software".

Your example helped me draw out a useful distinction.

I can imagine the programmers you're alluding to. In the put-in-front-of-a-blank-doc scenario, I can guess a few thoughts they could be thinking:

1. "I don't actually have the skills to do task ABC without my pet framework"

2. "I can't even imagine how one would go about task ABC without my pet framework"

3. "I declare it to be a general impossibility for one to do task ABC without my pet framework"

#1 and #2 seem to be failures of skill/training. #3 is the sneaky one that is bad epistemic hygiene.

Christians rarely say, "I'm not clever enough to see how morality could work without God", but instead say, "You can't have morality without God."

I'd be very surprised to find examples of software engineers who claimed #3.

I'd guess that the fact that most people know or at least have heard of someone who is way more competent than they are makes it harder for them to claim #3.

I agree that this is a useful distinction.

Very much seconded.

In fact, it seems like the Christian lady in EY's example "got it" by accident:

She doesn't really believe in god, but says her belief is useful to her.

To me, to be effective and useful, self-deception should occur in System 1 (fast, intuitive), but not in System 2 (slow, analytical). It seems applied rationality helps a lot with questions of motivation, or having useful intuitions to make progress towards a goal. And since System 2 is not affected, "fake beliefs" installed in System 1 are open for re-evaluation.

I think I'm more on your side of the argument, but I don't find the arguments you bring convincing. You use an example like the moon landing, where there's no value in believing in it, and given that you don't have the skills to simply change your beliefs by choice, you take it as an assumption that nobody has those skills.

I'm planning to write a post about this sometime in the future, but the gist of why I believe that adopting useful but wrong beliefs is a mistake is that it makes you shy away from situations that might disprove those wrong beliefs.

One interesting thing about dealing with spirituality is that when you tell people without any spiritual experience about advanced spiritual concepts that are based on actual experiences, the lay person necessarily forms a wrong belief about the actual spiritual topic, because having a correct belief about the topic needs some basic experience as reference points.

That's one of the reasons why most traditional forms of spirituality don't tell beginners or lay people about advanced concepts. If one isn't careful when talking in that domain it's quite easy for the beginner to adopt wrong beliefs that are in the way of progress.

From my perspective this is one of the reasons why a lot of New Age spirituality has relatively little to show in spiritual experiences for the people in those communities.

it makes you shy away from situations that might disprove those wrong beliefs

This is another good reason. I was gesturing roughly in that direction when talking about the Christian convert being blocked from learning about new religions.

I think that there's a general concept of being "truth aligned", and being truth aligned is the right choice. Truth-seeking things reinforce each other, and things like lying, bullshitting, learning wrong things, avoiding disconfirming evidence etc. also reinforce each other. Being able to convince yourself of arbitrary belief is an anti-truth skill, and Eliezer suggests you should dis-cultivate it by telling yourself you can't do it.

Your point about spirituality is a major source of conflict about those topics, with non-believers saying "tell us what it is" and the enlightened saying "if I did, you'd misunderstand". I do think that it's at least fair to expect that the spiritual teachers understand the minds of beginners, if not vice versa. This is why I'm much more interested in Val's enlightenment than in Vinay Gupta's.

Being able to convince yourself of arbitrary belief is an anti-truth skill, and Eliezer suggests you should dis-cultivate it by telling yourself you can't do it.

That's an interesting example in this context. You seem to say you want to believe that "you can't do it" because it's useful to hold that belief and not necessarily because it's true.

Practically, I don't think convincing yourself of a belief because the belief is useful is the same thing as convincing yourself of an arbitrary belief. I don't think that the people I know who I consider particularly skilled at adopting beliefs because they consider them useful practiced on arbitrary beliefs.

To use an NLP term (given that's the community where I know most people with the relevant skill set), behavior change is much easier when the belief change is ecological than if it's random.

You use an example like the moon-landing where there's no value in believing in it

There's some value in believing in it. If you don't believe in it and it comes up, people might look at you funny.

One interesting difference between what we may as well call "epistemicists" and "ultra-instrumentalists" is that ultra-instrumentalists generally weight social capital as more important, and individual intellectual ability as less important, than epistemicists do. See here: most of the reputed benefits of belief in Mormonism-the-religion are facets of access to Mormonism-the-social-network.

Another interesting feature of ultra-instrumentalists is that their political beliefs are often outside their local Overton windows. Presumably they have some idea of how much social capital these beliefs have cost them.

It's true that the correctness and usefulness of models are both measured by the accuracy of their predictions; however, the weights are different. Usefulness strongly weighs the accuracy of questions that come up often, while correctness weighs all possible questions more uniformly.
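To cash that out concretely (every number below is invented purely for illustration): score each model's accuracy over a set of questions, weighting either by how often each question comes up ("usefulness") or uniformly ("correctness").

```python
# question: (how often it comes up, flat-Earth accuracy, spherical-Earth accuracy)
# model index 0 = flat Earth, 1 = spherical Earth; all numbers are made up
QUESTIONS = {
    "which way to the next town":          (0.950, 1.0, 1.0),
    "how to survey a farm field":          (0.049, 1.0, 1.0),
    "will I return sailing ever westward": (0.001, 0.0, 1.0),
}

def score(model_index: int, weight_by_frequency: bool) -> float:
    """Average accuracy of one model, weighted by question frequency or uniformly."""
    weights = [freq if weight_by_frequency else 1.0 for freq, *_ in QUESTIONS.values()]
    accuracies = [accs[model_index] for _, *accs in QUESTIONS.values()]
    return sum(w * a for w, a in zip(weights, accuracies)) / sum(weights)

for name, index in [("flat Earth", 0), ("spherical Earth", 1)]:
    print(name,
          "usefulness:", round(score(index, weight_by_frequency=True), 3),
          "correctness:", round(score(index, weight_by_frequency=False), 3))
```

Under frequency weighting the two models are nearly tied, which is the sense in which flat Earth was "useful"; under uniform weighting the gap is obvious, which is the sense in which it is "incorrect".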

Do you think you have enough epistemic rationality to determine when it’s really worth sacrificing epistemic rationality for something else? Better to keep increasing your epistemic rationality, just to be sure.

This is ridiculous. "Sacrificing epistemic rationality" is a risk with uncertain rewards (let us assume that the rewards do exist). It's not necessarily stupid to take risks. It is stupid to wait until you have perfect information about the reward you would receive, because that will never happen.

Also, there is another issue - converting to a religion doesn't immediately make you retarded, as you seem to imagine. Religious people are perfectly capable of high instrumental rationality, even if we agree that their epistemic rationality is diminished. Even further, it can be useful, for empathy, to model religious people, and crackpots in general, as perfect Bayesians with really weird priors.