Reply to: The Mystery of the Haunted Rationalist
Followup to: Don't Believe You'll Self-Deceive

Should a rationalist ever find themselves trying hard to believe something?

You may be tempted to answer "No", because "trying to believe" sounds so stereotypical of Dark Side Epistemology.  You may be tempted to reply, "Surely, if you have to try hard to believe something, it isn't worth believing."

But Yvain tells us that - even though he knows damn well, on one level, that spirits and other supernatural things are not to be found in the causal closure we name "reality" - and even though he'd bet $100 against $10,000 that an examination would find no spirits in a haunted house - he's pretty sure he's still scared of haunted houses.

Maybe it's okay for Yvain to try a little harder to accept that there are no ghosts, since he already knows that there are no ghosts?

In my very early childhood I was lucky enough to read a book from the children's section of a branch library, called "The Mystery of Something Hill" or something, in which one of the characters says, roughly:  "There are two ways to believe in ghosts.  One way is to fully believe in ghosts, to look for them and talk about them.  But the other way is to half-believe - to make fun of the idea of ghosts, and talk scornfully of ghosts; but to break into a cold sweat when you hear a bump in the night, or be afraid to enter a graveyard."

I wish I remembered the book's name, or the exact quote, because this was one of those statements that sinks in during childhood and remains a part of you for the rest of your life.  But all I remember is that the solution to the mystery had to do with hoofbeats echoing from a nearby road.

So whenever I found something that I knew I shouldn't believe, I also tried to avoid half-believing; and I soon noticed that this was the harder part of the problem.  In my childhood, I cured myself of the fear of the dark by thinking:  If I'm going to think magically anyway, then I'll pretend that all the black and shapeless, dark and shadowy things are my friends.  Not quite the way I would do it nowadays, but it worked to counteract the half-belief, and in not much time I wasn't thinking about it at all.

Considerably later in my life, I realized that I was having a problem with half-believing in magical thinking - that I would sometimes try to avoid visualizing unpleasant things, from half-fear that they would happen.  If, before walking through a door, I visualized that a maniac had chosen that exact moment to sneak into the room on the other side, and was armed and waiting with a knife - then I would be that little bit more scared, and look around more nervously, when entering the room.

So - being, at this point, a bit more sophisticated - I visualized a spread of probable worlds, in which - in some tiny fraction - a knife-wielding maniac had indeed chosen that moment to lurk behind the door; and I visualized the fact that my visualizing the knife-wielding maniac did not make him the tiniest bit more likely to be there - did not increase the total number of maniacs across the worlds.  And that did cure me, and it was done; along with a good many other half-superstitions of the same pattern, like not thinking too loudly about other people in case they heard me.
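
A quick way to see the same point numerically is to simulate the visualizing and the maniac as independent events, and confirm that conditioning on the first leaves the probability of the second unchanged.  This is only a minimal sketch; the base rate below is an assumed, purely illustrative number.

```python
# Minimal sketch of the independence point above: "imagining the maniac"
# and "the maniac actually being there" are simulated as independent events,
# so conditioning on the first leaves the probability of the second unchanged.
# The 1-in-1000 base rate is an assumed, purely illustrative figure.
import random

random.seed(0)
BASE_RATE = 0.001        # assumed P(maniac behind any given door)
TRIALS = 1_000_000

visualized_count = 0
maniacs_given_visualized = 0
for _ in range(TRIALS):
    visualized = random.random() < 0.5      # whether you happen to imagine him
    maniac = random.random() < BASE_RATE    # drawn independently of the imagining
    if visualized:
        visualized_count += 1
        maniacs_given_visualized += maniac

print("P(maniac | visualized) ~", maniacs_given_visualized / visualized_count)
print("P(maniac)              =", BASE_RATE)
# The two numbers agree up to sampling noise: visualizing the maniac
# does not put him behind the door.
```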

Enforcing reflectivity - making ourselves accept what we already know - is, in general, an ongoing challenge for rationalists.  I cite the example above because it's a very direct illustration of the genre:  I actually went so far as to visualize the (non-)correlation of map to territory across possible worlds, in order to get my object-level map to realize that the maniac really really wasn't there.

It wouldn't be unusual for a rationalist to find themselves struggling to rid themselves of attachment to an unwanted belief.  If we can get out of sync in that direction, why not the other direction?  If it's okay to make ourselves try to disbelieve, why not make ourselves try to believe?

Well, because it really is one of the classic warning signs of Dark Side Epistemology that you have to struggle to make yourself believe something.

So let us then delimit, and draw sharp boundaries around the particular and rationalist version of striving for acceptance, as follows:

First, you should only find yourself doing this when you find yourself thinking, "Wait a minute, that really is actually true - why can't I get my mind to accept that?"  Not Gloriously and Everlastingly True, mind you, but plain old mundanely true.  This will be gameable in verbal arguments between people - "Wait, but I do believe it's really actually true!" - but if you're honestly trying, you should be able to tell the difference internally.  If you can't find that feeling of frustration at your own inability to accept the obvious, then you should back up and ask whether or not it really is obvious, before trying to make your mind do anything.  Can the fool say, "But I do think it's completely true and obvious" about random silly beliefs?  Yes, they can.  But as for you, just don't do that.  This is to be understood as a technique for not shooting off your own foot, not as a way of proving anything to anyone.

Second, I call it "striving to accept", not "striving to believe", following billswift's suggestion.  Why?  Consider the difference between "I believe people are nicer than they are" and "I accept people are nicer than they are".  You shouldn't be trying to raise desperate enthusiasm for a belief - if it doesn't seem like a plain old reality that you need to accept, then you're using the wrong technique.

Third and I think most importantly - you should always be striving to accept some particular argument that you feel isn't sinking in.  Strive to accept "X implies Y", not just "Y".  Strive to accept that there are no ghosts because spirits are only made of material neurons, or because the supernatural is incoherent.  Strive to accept that there's no maniac behind the door because your thoughts don't change reality.  Strive to accept that you won't win the lottery because you could make one distinct statement every second for a year with every one of them wrong, and not be so wrong as you would be by saying "I will win the lottery."
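
For the lottery case, the arithmetic is easy to check directly.  Here is a minimal sketch, assuming jackpot odds of about 1 in 175 million (an illustrative figure for a large lottery, not something asserted above):

```python
# Back-of-the-envelope check of the lottery arithmetic above.
# The jackpot odds are an assumed figure for a typical large lottery,
# used purely for illustration; they are not taken from the post.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365        # 31,536,000 distinct statements
ASSUMED_JACKPOT_ODDS = 175_000_000           # hypothetical "1 in N" jackpot

print(f"Statements, one per second, in a year: {SECONDS_PER_YEAR:,}")
print(f"Assumed jackpot odds:                  1 in {ASSUMED_JACKPOT_ODDS:,}")

# Under these assumptions, "I will win the lottery" asserts an event several
# times less likely than 1 in 31.5 million, i.e. a longer-odds claim than any
# single member of that year-long list of one-per-second statements.
print("Jackpot odds / statements per year:",
      round(ASSUMED_JACKPOT_ODDS / SECONDS_PER_YEAR, 1))     # ~5.5
```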

So there is my attempt to draw a line between the Dark Side and the Light Side versions of "trying to believe".  Of course the Light Side also tends to be aimed a bit more heavily at accepting negative beliefs than positive beliefs, as is seen from the three examples.  Trying to think of a positive belief-to-accept was difficult; the best I could come up with offhand was, "Strive to accept that your personal identity is preserved by disassembly and reassembly, because deep down, there just aren't any billiard balls down there."  And even that, I suspect, is more a negative belief, that identity is not disrupted.

But to summarize - there should always be some particular argument, that has the feeling of being plain old actually true, and that you are only trying to accept, and are frustrated at your own trouble in accepting.  Not a belief that you feel obligated to believe in more strongly and enthusiastically, apart from any particular argument.

38 comments
pjeby

Actually, as you noticed -- but didn't notice that you noticed -- no "striving" is actually required. What happened was simply that you had to translate abstract knowledge into concrete knowledge.

In each of the examples you gave, you created a metaphor or context reframe, based on imagining some specific sensory reality.

Because the emotional, "action", or "near" brain doesn't directly "get" conceptual abstractions. They have to be translated back into some kind of sensory representation first, and linked to the specific context where you want to change your (emotional/automatic) expectations.

A great example of one of the methods for doing this, is Byron Katie's book, "Loving What Is" -- which is all about getting people to emotionally accept the facts of a situation, and giving up their "shoulds". Her approach uses four questions that postulate alternative realities, combined with a method of generating counterexamples ("turnarounds", she calls them), which, if done in the same sort of "what if?" way that you imagined your friendly ghosts and probabilistic knife killers, produce emotional acceptance of a given reality -- i.e., "loving what is".

Hers is far from the only such method, though. There's another approach, described by Morty Lefkoe in "Recreate Your Life", which uses a different set of questions designed to elicit and re-interpret your evidence for an existing emotional belief. Robert Fritz's books on the creative process demonstrate yet another set of questioning patterns, although not a formalized one.

And rational-emotive therapy, "learned optimism", and cognitive-behavior therapy all have various questions and challenges of their own. (And I freely borrow from all of them in my client work.)

Of course, it's easy to confuse these questioning patterns with logical arguments, and trying to convince yourself (or others) of something. But that not only misses the point, it doesn't work. The purpose of questions like these is to get you to imagine other possibilities -- other evidential interpretations and predictions -- in a sensory way, in a specific sensory context, to update your emotional brain's sensory prediction database.

In other words, to take abstractions from the "far" brain, and apply them to generate new sensory data for the "near" brain to process.

Viewed in this way, there's no need to "struggle" -- you simply need to know what the hell you're doing. That is, have an "inside view" of the relationship between the "near" and "far" brains.

In other words, something that every rationalist should have. A rationalism that can't fix (or at least efficiently work around) the bugs in its own platform isn't very useful for Winning.

Struggle and striving is a sign of confusion, not virtue. We need to understand the human platform, and program it effectively, instead of using up our extremely limited concentration and willpower by constantly fighting with it.

[anonymous]

Struggle and striving is a sign of confusion, not virtue. We need to understand the human platform, and program it effectively, instead of using up our extremely limited concentration and willpower by constantly fighting with it.

Viewed in this way, there's no need to "struggle" -- you simply need to know what the hell you're doing. That is, have an "inside view" of the relationship between the "near" and "far" brains.

Once you know what the hell you're doing, you still have to go ahead and actually do it. That's hardly trivial.

With all due respect to the effective methods you describe, I suggest that 'struggle' remains a relevant description. It is certainly true that struggle and striving are no virtues in themselves. Yet programming our brains and using them effectively are hard work.

With practice, we can take weaknesses in the human platform and train them such that we don't need to exert our limited concentration and willpower to constantly prop up our thinking. This is the same as with any expert skill.

Struggle and strive till you need struggle no more. Then pick a slightly more difficult thinking skill that relies upon the first and struggle and strive some more.

pjeby

If you insist on believing it's hard work, you can certainly make it such. But notice that Eliezer's account indicates that once he chose suitable representations, the changes were immediate, or at least very quick. And that's my experience with these methods also.

The difficult part of change isn't changing your beliefs -- it's determining which beliefs you have that aren't useful to you... and that therefore need changing.

That's the bit that's incredibly difficult, unless you have the advantage of a successful model in a given area. (Not unlike the difference between applying a programming pattern, and inventing a programming pattern.)

For example, I'd say that the utility of a belief in struggle being a requirement for rationality is very low. Such a belief only seemed attractive to me in the past, because it was associated with an idea of being noble. Dropping it enabled me to make more useful changes, a lot faster.

On a more general level, when someone is successfully doing something that I consider a struggle, and that person says that doing the thing is easy, the rational response is for me to want to learn more about their mental models and belief structure, in order to update my own.

Not to argue (however indirectly) that struggle -- like death -- is a good thing because it's part of the natural order!

(This is also ignoring the part where "struggle" itself is a confusion: in reality, there is never anything to "strive for" OR "struggle against"; these are only emotional labels we attach to the map, that don't actually exist in the territory. In reality, there are no problems or enemies, only facts. Time-consuming tasks exist, but this does not make them a struggle.)

Thank you for writing this. I think I've just realised what I've been doing wrong for the last year and a half, and how to start believing positive things about myself that I know rationally to be true.

Do let us know how that turns out - perhaps you can write a post about it.

This "trying to believe" tactic is much more explicitly used in areas where there is randomness or unpredictability.

My business is finance. As a financial advisor, I am constantly "trying to believe" in things like regression to the mean, the long-term performance of the market vs. short-term volatility, and the efficacy of asset allocation.

But each day I am faced with evidence that causes me to doubt my rationally held beliefs about investing.

I think baseball players may have similar issues with batting. They may rationally know that it's only practice and talent that improve their performance, but they still notice that when they wear the red underwear they hit better. So they may be "trying to believe" that the red underwear doesn't really affect their batting behavior.

As with many of the issues we raise, this all boils down to having a brain that's made of multiple systems each trying to do something a little different. We have a pattern matching part of our brain and we have our prefrontal cortex, theorizing about the world.

Sometimes these systems can be in conflict.

Hope this contributes to the discussion.

David

This is one of my favorite topics, more so than the usual topics of rationalism perhaps, and I've thought about it a lot. How can we best believe things we accept? The other day I was out running and the moon was large and visible in the daylight. I was looking up at it and thinking to myself, "If people really understood what the moon was, what the stars are, what Earth is, could they go on living the way they do? If I really, genuinely knew these were other places, could I go on living the way I do?" This is, perhaps, too romantic a view of things. But it illustrates my point: we really do accept very profound things without ever truly making them part of our person. It's not just the absence of ghosts and other supernatural entities we have difficulty with but the presence of many phenomena outside our usual experience.

Paul Churchland's early work, Scientific Realism and the Plasticity of Mind, has a great illustration of this. Churchland's mentor was the American philosopher Wilfrid Sellars who developed a distinction between the "scientific image" of the world and its common sense "manifest image." Churchland's approach is to give the scientific image preeminence. He wants science to replace common sense. This larger project was the background to his more familiar eliminative materialism (which seeks to replace folk psychology with a peculiar connectionist account of the brain). While most of the work is quite technical there are some excellent passages on how we could achieve this replacement. He discusses the way we still talk of the sun rising and setting, for example, and uses a particular diagram to show how one can reorient oneself to really appreciate the fact that we're a planet orbiting a star.

I've tried to post the diagram here:

http://s5.tinypic.com/zinh38.jpg (before we reorient ourselves)

http://s5.tinypic.com/2akfoll.jpg (after we reorient ourselves)

I don't have the descriptive text at hand but for me what the diagrams illustrate is a particular approach to science that I think Eliezer shares, more or less, which is that we should try to incorporate science deeply into the way we inhabit the world. I suppose that, to me, is a major part of what rationality is or should be: How can we best live that which we have until now merely accepted as fact?

Nitpick: We talk of the sun rising and setting because we're a planet rotating on its own axis, not because we're orbiting the sun. The orbiting, together with the axial tilt, causes the seasons.

However, you make an interesting point. Whenever I can remember, I try to do what someone taught me years ago: Sit down to watch the sunset (ideally on a beach) and think about the fact that it is the Earth 'rising' and not the Sun 'setting'. It is a really fun exercise.

Even better: due to relativity, realize that if you're only considering the earth/sun system and not paying attention to other planets, you can go ahead and choose either one as your reference frame (traditionally, you pick the one you're in). So the sun really is quickly orbiting a stationary Earth.

"the sun goes around the Earth quickly while rotating slowly" and "the Earth is rotating quickly while orbiting the sun slowly" express the same sentiment.

EDIT: Okay, you got me - you can't rotate an inertial frame, and 'fictitious forces' would be detectable differently in each of those examples. But I stand by my first point.

IANAP, but this sounds wrong to me. It would feel different to be on a slowly rotating Earth than to be on a quickly rotating Earth.

[anonymous]

IANAPE, but it does seem hard to 'relativise' being torn asunder by centrifugal forces vs rotating slowly while the sun laughs in the face of the speed of light.

I'm thinking back to those Feynman lectures. I have an inkling that he said rotation could be detected if you were stuck in one of those hypothetical transport containers. Failing that, just thinking of the relevant experiment is making my right hand turn blue for some reason.

I'll throw those two together and surmise that "who is orbiting whom" is just a matter of "who cares? Just plug in the weights and give me relative positions and a direction", but that with rotation you've got somewhat less flexibility.

Don't they both just move in straight lines through curved space-time, or is that just another way of perceiving the math?

Like Camelot, "it's only a model".

Yes, that's another way of looking at it. More or less intuitive depending on the application.

pwno

It's not just the absence of ghosts and other supernatural entities we have difficulty with but the presence of many phenomena outside our usual experience.

Everything is equally a phenomenon; it's just that there are some phenomena we have or haven't evolved to be un-astounded by. Conversely, there are some phenomena we are more inclined to be astounded by, namely, waterfalls.

haig

I think this post and the related ones are really hitting home why it is hard for our minds to function fully rationally at all times. Like Jon Haidt's metaphor that our conscious awareness is akin to a person riding on top of an elephant, our conscious attempt at rational behavior is trying to tame this bundle of evolved mechanisms lying below 'us'. Just think of the preposterous notion of 'telling yourself' to believe or not believe in something. Who are you telling this to? How is cognitive dissonance even possible?

I remember the point when I finally abandoned my religious beliefs as a kid. I had 'known' that belief in a personal god and the religious teachings were incompatible with rational thinking yet I still maintained my irrational behavior. What did the trick was to actually practice and live strictly for a set period of time only appropriately to what my rational beliefs allowed. After some number of days, I was finally completely changed and could not revert back to my state of contradiction.

In relation to this, think about why you can't just read a math book and suddenly just get it (at least for us non math geniuses). You may read an equation and superficially understand that it is true, but you can still convince yourself otherwise or hold conflicted beliefs about it. But then, after doing examples yourself and practicing, you come to 'know' the material deeply and you can hardly imagine what it is like not to know it.

For people like the girl Eliezer was talking to, I wonder what would happen if you told her, as an experiment, to force herself to totally abandon her belief in god for a week, only adhering to reason, and see how she feels.

I hadn't thought about a normative aspect to the skeptics' fear of ghosts, but after reading this post I acknowledge that there is one. I commit to fighting irrational fears from now on.

I read with interest all of the specific techniques proposed on this thread and welcome more. I'd be particularly interested if anyone had theories of or techniques against akrasia, the general inability of the mind to follow through on what it knows to be right.

I think there are a lot of techniques for aligning emotions with (deliberative) knowledge.

  1. Self-talk - saying, repeatedly, either internally or out loud, the true state of affairs that you would like your emotions to align with.
  2. Visualization - actively imagining "how the truth looks".
  3. Pretending - pretend to be the kind of person that you want to be.

My personal experience is that self-talk is only useful insofar as you're using it to lead yourself to a sensory experience of some kind. For example, asking "What if [desired state of affairs] were true?" is far more useful than simply asserting it so. The former at least invites one to imagine something specific.

Repetition also isn't as useful as most people seem to think. Your brain has little problem updating information immediately, if there's sufficient emotion involved... and the "aha" of insight (i.e. reducing the modeling complexity required to explain your observations) counts as an emotion. If you have to repeat it over and over again -- and it's not a skill you're practicing -- you're doing something wrong.

All of these terms -- self-talk, visualization, and pretending -- are also examples of Unteachable Excellence and Guessing The Teacher's Password. You can equally use the term to describe something useful (like asking good questions) or something ridiculous (like affirmations). The specific way in which you talk, visualize, or pretend is of critical importance.

For example, if you simply visualize some scripted scenario, rather than engaging in inquiry with yourself, you are wasting your time. The "near" brain needs to generate the details, not the "far" brain, or else you don't get the right memories in context.

I'll admit to a bit of hand-waving on that last part -- I know that when my clients visualize, self-talk, or pretend in "scripted" ways (driven by conscious, logical, and "far" thinking), my tests show no change in belief or behavior, and that when they simply ask what-if questions and observe their mind's response, the tests show changes. My guess is that this has something to do with the "reconsolidation" theory of memory: that activating a memory is required in order to change it. But I'm more of a pragmatist than a theorist in this area.

My experience with self-talk is via Cognitive behavioral therapy, as described in "Feeling Good". There are a lot of concrete and specific ways of adjusting one's emotions to match one's deliberative beliefs in that book.

"You can equally use the term to describe something useful or something ridiculous." I agree completely. I think the success of religious memes has a lot to do with their systematic advocacy of self-talk, visualization and imitation of guru figures.

You say "When my clients visualize, self-talk in scripted ways, my tests show no change in belief or behavior." I can easily imagine someone doing a brief experiment of self-talk without believing it can change them, and then demonstrating no change. However, I can also imagine Jesuits systematically visualizing the life of Jesus over a period of 30 days (with the goal of changing their emotions) succeeding at changing their emotions.

And when you imagine this, what concrete test are you imagining performing on the Jesuits before and after the 30 days' visualization, in order to confirm that there was in fact behavioral change between the two points? ;-)

To be clear, I use tests of a person's non-voluntary responses to imagined or recalled stimuli. I also prefer to get changes in test results within the order of 30 minutes (or 3 minutes for certain types of changes), rather than 30 days!

What's more, I don't instruct clients to use directed or scripted imagery or self-talk. In fact, I usually have to teach them NOT to do so.

Basically, when they apply a technique and get no change in test response, I go back over it with them, to find out what they did and how they did it. And one of the most common ways (by far) in which they've deviated from my instructions is by making statements, directing their visualization, indulging in analytical and abstract thinking, or otherwise failing to engage the "near" system with sensory detail.

And as soon as I get them to correct this, we start getting results immediately.

Now, does that prove that you CAN'T get results through directed, argumentative, or repetitive thinking? No, because you can't prove a negative.

However, please note that these are not people who disbelieve in self-talk, nor are they attempting to prove or disprove anything. Rather, they are simply not familiar with -- or skilled in -- a particular way of engaging their minds, and are just doing the same things they always do.

Which is, of course, why they get the same results they always do.

And it's also why I have such a pet peeve about self-help and psych books that try to teach the Unteachable Excellence, without understanding that by default, people try to Guess The Teacher's Password -- that is, to somehow change things without ever actually doing anything different.

Practical psychology is still far too much alchemy, not enough chemistry. Rationalists must -- and CAN -- do much better than this.

haig

practice, practice, practice

[anonymous]

Who you gonna call?

Just earlier today I used the same quantum viewpoint. I was thinking "I really wish I had a lot of money, I ought to play the lottery, there is some chance." I factually know this is nonsense. I was able to make that intuitive by transforming "a chance of winning" into "a huge spread of worlds with a few rare winners". I will both win and not-win, but most of me doesn't win - that's just not a good result.

The desire to play the lottery went away.

One drawback of using the word 'accept' in this context is that it brings to mind (at least for me) phrases like "accept Jesus into your heart."

Less so than 'believe', don't you think? All of the other synonyms I can think of are clunky.

[anonymous]

The hurdle here (and it seems like a common hurdle of rationality in general) is that our rational, conscious brain doesn't control our behavior, but seems to act as more of a 'guide' for the other, deeper structures of our brain, and makes sure they're running smoothly. It can take over in a pinch and temporarily override the output of another structure, but it can't run things constantly; the costs are just too high.

The best it can do is try to 'push' behavior down into the lower structures. But this is hard to do; you can't tell yourself 'don't be afraid of ghosts' any more than you can tell yourself 'send more dopamine to my pleasure center'. We don't control these parts of our brain; we can only offer 'suggestions' to them.

[anonymous]

"Strive to accept" - I like the phrase. I prefer to reserve 'believe' for 'that which I would draw on a map were I getting paid for every right answer', while acceptance tends to take a whole lot more time.

How related is this to the skill of "biting bullets", of accepting the unintuitive consequences of one's theories? (Robin Hanson is good at this; most memorably when he came out in support of a height tax as a consequence of optimal taxation theory) I had thought that a good rationalist should eat bullets for breakfast, but it seems to me now that this phenomenon is closer to what we should seek out. It occurs to me that sometimes your intuition is telling you important things.

We really need to distinguish between a belief, and an idea that creates a strong emotional reaction in us.

Plenty of people know and believe that smoking is incredibly harmful, but that doesn't motivate them to stop smoking. Plenty of people know and believe that there are no ghosts, but that doesn't stop them from being scared when they walk past a graveyard.

What people here are discussing seems overwhelmingly to be methods for retraining your emotions, not altering beliefs.

That's actually a distinction made by the Chinese philosopher Ch'eng I - between mere knowledge and a deeper sort - knowing that leaving town at night is dangerous versus having been mauled by a tiger. Of course, he regarded the latter as more properly called 'knowledge'. I think I'm persuaded that belief and emotion are more closely linked than you suggest.

What I don't see mentioned is the fact that the roots of the fear of the 'bump-in-the-dark' lie in an instinctive reaction programmed deep in our genes, from a time when it was a survival trait. Ghosts, spirits and other supernatural explanations are only a cultural veneer.

While one can use reason to dismiss those explanations, it's not necessarily going to convince one's limbic system that there is nothing to worry about.

In my childhood, I cured myself of the fear of the dark by thinking:

There are a lot of people who consider fear of the dark to be irrational. IMHO it's simply an adaptation to the primitive environment where nocturnal predators abounded. The fact is, our vision is not that good in the dark, so personally I think having that fear is still a useful adaptation that I want to keep in place.

Btw, I once heard one police officer commenting that at night, robbers and muggers consistently choose dark, lonely areas to prey on their victims, so there you have it.

"There are a lot of people who consider fear of the dark to be irrational. IMHO it's simply an adaptation to the primivitve environment where nightly predators abounded."

The possibility that it was adaptive in our ancestral environment doesn't mean it's not irrational.

It doesn't even mean that it was rational in our ancestral environment, at least not in the sense that it was explicitly justified. At most, it was potentially justifiable if anyone had tried to understand it.

In order for fear of darkness to become ingrained, it had to offer a fitness advantage to us; otherwise the genes for it wouldn't be common. So, it increased your likelihood of survival, and if you value your life and health it is in fact rational. Keep in mind that what fear really does is heighten your alert level and make you focus much more attention on the environment (when the right conditions are met) than you would otherwise have done.

Btw. my definition of rational here is: whatever makes you win.

"Btw. my definition of rational here is: whatever makes you win."

That's not a very useful definition. Aside from being purely Consequentialist, it means that we can't say whether a strategy is 'rational' unless we know whether it will work out, and that sort of knowledge is very difficult to acquire.

I have difficulty accepting that certain theorems of calculus are true; when I recall them, I'm always uncertain and check to make sure they're right, wasting valuable time.

Eliezer, have you ever seen a hypnosis show, or been hypnotized yourself? Hypnosis is very much related to self-deception and "doublethink"; it seems to me as though it is, at least in part, a way of using one's higher level functions to tell one's perceptual systems to shut up and do what they're told, to shut off various levels of fact-checking.

Why would a rationalist want to overrule one's senses and critical faculties? Well, for one, turning off pain is frequently desirable, and it could be something fun to do in the bedroom... in general, it might be useful in any situation in which your brain isn't doing what you want it to do.

Being afraid to enter a graveyard is rational from the ancestral perspective. Dead bodies attract disease. Before our ancestors understood what disease was, they probably attributed it to the ghosts of the dead haunting the living.