How to offend a rationalist (who hasn't thought about it yet): a life lesson

Usually, I don't get offended by things people say to me, because I can see at what points in their argument we differ, and what sort of counterargument I could make. I can't get mad at people for having beliefs I think are wrong, since I myself regularly hold beliefs that I later realize were wrong. I can't get mad at the idea, either, since an idea is either right or wrong, and if it's wrong, I have the power to say why. And if it turns out I'm wrong, so be it; I'll adopt new, right beliefs. And so I never got offended by anything.

Until one day.

One day, I encountered a belief that should have been easy to refute. Or, rather, easy to dissect, and see whether there was anything wrong with it, and if there was, formulate a counterargument. But for seemingly no reason at all, it frustrated me to great, great lengths. My experience was as follows:

I was asking the opinion of a socially progressive friend on what they feel are the founding axioms of social justice, because I was having trouble thinking of them on my own. (They can be derived from any set of fundamental axioms that govern morality, but I wanted something that you could specifically use to describe who is being oppressed, and why.) They seemed to be having trouble understanding what I was saying, and it was hard to get an opinion out of them. They also got angry at me for dismissing Tumblr as a legitimate source of social justice. But eventually we got to the heart of the matter, and I discovered a basic disconnect between us: they asked, "Wait, you're seriously applying a math thing to social justice?" And I pondered that for a moment and explained that it isn't restricted to math at all, and that an axiom in this context can be any belief on which you base your other beliefs. However, then the true problem came to light (after a comparison of me to misguided 18th-century philosophes): "Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide. I wasn't angry at my friend in particular for having said that. For the first time, I was angry at an idea: that belief systems about certain things should not be internally consistent, should not follow logical rules. It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.

I'm glad that I encountered that belief, though, as I am with all beliefs, since I was able to resolve it in the end and make peace with it. I came to the following conclusions:

  1. In order to make a rationalist extremely aggravated, you can tell them that you don't think that belief structures should be internally logically consistent. (After 12-24 hours, they acquire lifetime immunity to this trick.)
  2. Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take. Many actions that are oriented towards the same goal will make progress in accomplishing that goal, and making progress in accomplishing goals is desirable. An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress. (A toy simulation after this list illustrates the point.) Therefore, granting the first few statements, having an internally consistent belief system is desirable! Having reduced it to an epistemological problem (do people really desire progress? can actions actually accomplish things?), I now only have epistemological anarchism to deal with, which seems to work less well in practice than the scientific method, so I can ignore it.
  3. No matter how offended you are about something, thinking about it will still resolve the issue.
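
To make point 2 concrete, here is the toy simulation mentioned above (a minimal sketch in Python; the goals, step sizes, and turn count are invented purely for illustration):

    # Toy model: each turn, an agent takes one unit step toward whichever
    # goal its belief system currently endorses. A consistent system keeps
    # one goal; an inconsistent system alternates between opposed goals.

    def progress(goals, turns=100):
        position = 0.0
        for t in range(turns):
            goal = goals[t % len(goals)]  # the goal driving this action
            position += 1.0 if goal > position else -1.0
        return position

    print(progress([50.0]))         # 50.0 -- steady steps reach the goal
    print(progress([50.0, -50.0]))  # 0.0  -- opposed actions cancel out
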
Does anyone have anything to add to this? Did I miss any sort of deeper reasons I could be using for this? Granted, my solution only works if you want to accomplish goals, and use your belief system to generate actions to accomplish goals, but I think that's fairly universal.

Comments


"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

I don't understand what "this stuff" refers to in this sentence and it is far from clear to me that your interpretation of what your friend said is correct.

I also don't think it's a good idea to take an axiomatic approach to something like social justice. This approach:

Edit: Also, a general comment. Suppose you think that the optimal algorithm for solving a problem is X. It does not follow that making your algorithm look more like X will make it a better algorithm. X may have many essential parts, and making your algorithm look more like X by imitating some but not all of its essential parts may make it much worse than it was initially. In fact, a reasonably efficient algorithm which is reasonably good at solving the problem may look nothing like X.
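
A toy illustration of that point (binary search standing in for X; the data and function names are invented for the example): binary search is optimal on sorted input, but an algorithm that imitates its halving strategy while dropping the essential precondition of sortedness is worse than a plain linear scan, which looks nothing like binary search.

    # "Imitation binary search": the probing strategy is copied from the
    # optimal algorithm, but an essential part (sorted input) is missing,
    # so it can miss elements that are actually present.

    def imitation_binary_search(xs, target):
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo < len(xs) and xs[lo] == target

    data = [7, 1, 5, 3, 9]                   # not sorted
    print(imitation_binary_search(data, 1))  # False -- misses a present element
    print(1 in data)                         # True  -- the humble linear scan works
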

This is to say that at the end of the day, the main way you should be criticizing your friend's approach to social justice is based on its results, not based on aesthetic opinions you have about its structure.

This comment causes me unpleasant cognitive dissonance. I read the second party's statement as meaning something like "no, logical rigor is out of place in this subject, and that's how it should be." And I find that attitude, if not offensive, at least incredibly irritating and wrongheaded.

And yet I recognize that your argument has merit and I may need to update. I state this not so much because I have something useful to say for or against it, but to force it out of my head so I can't pretend the conflict isn't there.

Reminds me of a comment by pjeby (holy cow, 100 upvotes!) in an old thread:

One of the things that I've noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think "guessing the teacher's password", but not just in school or knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science.

...

When you combine that with a mistrust for logically-consistent thinking that's burned them in the past, you get a MESS.

I had the opposite problem: for a while I divided the world (or at least mathematics) into two categories, stuff I understand and stuff I will understand later. It was a big shock when I realized that for most things this wasn't going to happen.

Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take.

I have a working hypothesis that most evil (from otherwise well-intentioned people) comes from forcing a very complex, context-dependent moral system into one that is "consistent" (i.e., defined by necessarily oversimplified rules that are global rather than context-dependent) and then committing to that system even in doubtful cases, since it seems better that it be consistent.

(There's no problem with looking for consistent rules or wanting consistent rules, the problem is settling on a system too early and applying or acting on insufficient, inadequate rules.)

Eliezer has written that religion can be an 'off-switch' for intuitively knowing what is moral ... religion is the common example of any ideology that a person can allow to trump their intuition in deciding how to act. My pet example: while I generally approve of the values of the religion I was brought up with, you can always find specific contexts (it's not too difficult, actually) where their decided rules of implementation are entirely contrary to the values they are supposed to espouse.

By the way, this comment has had nothing to say about your friend's comment. To relate to that, since I understand you were upset, my positive spin would be that (a) your friend's belief about the relationship between 'math' and social justice is not strong evidence on the actual relationship (though regardless your emotional reaction is an indication that this is an area where you need to start gathering evidence, as you are doing with this post) and (b) if your friend thought about it more, or thought about it more in the way you do (Aumann's theorem), I think they would agree that a consistent system would be "nicest".

A comment from another perspective. To be blunt, I don't think you understand why you got upset. (I'm not trying to single you out here; I also frequently don't understand why I am upset.) Your analysis of the situation focuses too much on the semantic content of the conversation and ignores a whole host of other potentially relevant factors, e.g. your blood sugar, your friend's body language, your friend's tone of voice, what other things happened that day that might have upset you, etc.

My current understanding of the way emotions work is something like this: first you feel an emotion, then your brain guesses a reason why you feel that emotion. Your brain is not necessarily right when it does this. This is why people watch horror movies on dates (first your date feels an intense feeling caused by the horror movie, then hopefully your date misinterprets it as nervousness caused by attraction instead of fear). Introspection is unreliable.

When you introspected for a reason why you were upset, you settled on "I was upset because my friend was being so irrational" too quickly. This is an explanation that indicates you weren't trying very hard to explicitly model what was going on in your friend's head. Remember, your friend is not an evil mutant. The things they say make sense to them.

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

Let me translate: "You should do what I say because I said so." This is an attempt to overpower you and is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and point out that it is extremely offensive. If they cannot be reasoned with then you just have to play the political game humans have been playing for ages.

A more charitable translation would be "I strongly disagree with you and have not yet been able to formulate a coherent explanation for my objection, so I'll start off simply stating my disagreement." Helping them state their argument would be a much more constructive response than confronting them for not giving an argument initially.

Let me offer a different translation: "You are proposing something that is profoundly inhuman to my sensibilities and is likely to have bad outcomes."

Rukifellth below has, I think, a much more likely reason for the reaction presented.

Oh. Well, that was a while ago, and I get over that stuff quickly. Very few people have that power over me, anyway; they were one of the only friends I had, and it was extremely unusual behavior coming from them. It was kind of devastating that there was a negative thought directed at me by a trusted source that I couldn't explain... but I could, so now I'm all the more confident. This is a success story! I've never actually attempted suicide, and it was a combination of other stress factors as well that produced that response. I doubt that I actually would, in part because I have no painless means of doing so: when I actually contemplate the action, it's just logistically impossible to do in a way I'd accept. I've also gotten real good at talking myself out of it. Usually it's out of a "that'll show 'em" attitude, which I recognize immediately, and I also recognize that that would be both cruel and a detriment to society. So, I appreciate your concern for me a lot, but I don't think I'm in any danger of dying at all. Thanks a lot for caring, though!

And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide. I wasn't angry at my friend in particular for having said that. For the first time, I was angry at an idea: that belief systems about certain things should not be internally consistent, should not follow logical rules.

This emotional reaction seems abnormal. Seriously, somebody says something confusing and you contemplate suicide? What are you, a Straw Vulcan computer that can be disabled with a Logic Bomb?

Unless you are making this up, I suggest you consider seeking professional help.

It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.

Actually, it's rather easy: just tell them that ex falso quodlibet.
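
(For reference, ex falso quodlibet is the principle of explosion: from a contradiction, anything follows. The standard natural-deduction derivation is four lines, with P and ¬P standing for any contradictory pair of beliefs and Q for an arbitrary claim:)

    \begin{align*}
    1.\quad & P        && \text{premise} \\
    2.\quad & \neg P   && \text{premise} \\
    3.\quad & P \lor Q && \text{from 1, disjunction introduction} \\
    4.\quad & Q        && \text{from 2 and 3, disjunctive syllogism}
    \end{align*}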

Tetlock's foxes vs. hedgehogs (people without strong ideologies are somewhat better predictors than those who have strong ideologies, though still not very good predictors) suggests that a hunt for consistency in something as complex as politics leads to an excessively high risk of ignoring evidence.

Hedgehogs might have premises about how to learn more than about specific outcomes.

I suspect that what frustrated you is not noticing your own confusion. You clearly had a case of lost purposes: "applying a math thing to social justice" is instrumental, not terminal. You discovered a belief "applying math is always a good thing" which is not obviously connected to your terminal goal "social justice is a good thing".

You are rationalizing your belief about applying math in your point 2:

An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.

How do you know that? Seems like an argument you have invented on the spot to justify your entrenched position. Your point 3 confirms it:

No matter how offended you are about something, thinking about it will still resolve the issue.

In other words, you resolved your cognitive dissonance by believing the argument you invented, without any updating.

If you feel like thinking about the issue some more, consider connecting your floating belief "math is good" to something grounded, like The Useful Idea of Truth:

True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

This is reasonably uncontroversial, so the next step would be to ponder whether in order to be better at this social justice thing one has to be better at modeling reality. If so, you can proceed to the argument that a consistent model is better than an inconsistent one at this task. This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it? What if s/he comes up with examples where someone following an inconsistent model (like, say, Mother Teresa) contributes more to social justice than those who study the issue for a living? Would you accept their evidence as a falsification of your meta-model "logical consistency is essential"? If not, why not?
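
(The quoted idea is easy to make quantitative. A minimal sketch of the underlying Bayesian update, with made-up numbers for the prior and the two likelihoods:)

    # Bayes' rule: a hypothesis that better predicts the observed outcome
    # gains credence with each confirming observation.

    def posterior(prior, p_obs_if_true, p_obs_if_false):
        joint_true = prior * p_obs_if_true
        return joint_true / (joint_true + (1 - prior) * p_obs_if_false)

    p = 0.5                     # start undecided
    for _ in range(3):          # three correct experimental predictions
        p = posterior(p, 0.9, 0.3)
        print(round(p, 3))      # 0.75, 0.9, 0.964 -- incrementally more true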

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

I'm making a wild guess, but possibly it's the "Like... no." part that offended you... Because this is usually what irritates me (Yay for lovely generalization from one example... but see also Emile's comment quoting pjeby's comment).

Similar offenders are:

  • "Come on, it's obvious!"
  • "You can't seriously mean that /Are you playing dumb?"
  • "Because everybody knows that!"

In general, what irritates me is the refusal to really discuss the subject, and the quick dismissal. If arguments are soldiers, this is like building a foxhole and declaring you won't move from there at any cost.

"I mean, have you heard of cri... cry... cryonics? Hehe..."

"Yeah, I'm interested in it."

"...Like... no."

From conversation today.

I sympathize, but I downvoted this post.

This is a personal story and a generalization from one person's experience. I think that as a category, that's not enough for a post on its own. It might be fine as a comment in an open thread or other less prominently placed content.

And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide.

You never get offended, but this little thing brought you to the verge of suicide!? Did you recently become a rationalist? I am not sure how to read the situation.

An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.

One way to model willpower is that it is a muscle that uses up brain energy to accomplish things. This is a common model but it is not my current working hypothesis for how things "really universally work in human brains". Rather, I see a need for "that which people vaguely gesture towards with the word willpower" as a sign that a person's total cognitive makeup contains inconsistent elements that are destructively interfering with each other. In other words, the argument against logically coherent beliefs is sort of an argument in favor of akrasia.

Some people seem to have a standard response to this idea that is consonant with the slogan "that which can be destroyed by the truth should be" and this is generally not my preferred response except as a fallback in cases of a poverty of alternative options. The problem I have with "destroy my akrasia with the truth" responses is roughly that they are sort of like censoring a part of yourself without proper justification for doing so. I generally expect constraints of inferential distances and patience to make the detailed reasoning here opaque, but for those interested, a useful place to start is to consider the analogy of "cognitive components as assets" and then play compare and contrast with modern portfolio theory (MPT).
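
(For anyone who wants to play that compare-and-contrast, the core MPT computation is small. A minimal sketch with illustrative numbers only; nothing here is specific to the cognitive analogy:)

    # Two-asset portfolio risk under modern portfolio theory. Holding
    # imperfectly correlated assets lowers total variance without
    # discarding either asset -- the analogue of keeping "conflicting"
    # cognitive components rather than destroying one of them.

    def portfolio_std(w1, s1, s2, rho):
        w2 = 1 - w1
        var = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * s1 * s2 * rho
        return var ** 0.5

    for rho in (1.0, 0.0, -0.5):  # correlation between the two assets
        print(rho, round(portfolio_std(0.5, 0.2, 0.2, rho), 4))
    # 1.0 -> 0.2     (perfectly aligned: no diversification benefit)
    # 0.0 -> 0.1414  (independent: total risk drops)
    # -0.5 -> 0.1    (opposed components partially hedge each other)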

However, explicitly learning about MPT appears not to be within the cognitive means of most people at the present time... which means that if the related set of insights is critical to optimal real life functioning as an epistemic agent then an implicit form of the same insights is likely to be embedded in people in "latent but effective form". It doesn't mean that such people are "bad" or "trying to dominate you" necessarily, it just means that they have a sort of in-theory-culturally-rectifiable disability in the context of something like "explicitly negotiated life optimization".

If this disability is emotionally affirmed as a desirable state and taken to logical extremes in a context of transhuman self modification abilities you might end up with something like dream apes:

Their ancestors stripped back the language centres to the level of higher primates. They still have stronger general intelligence than any other primate, but their material culture has been reduced dramatically – and they can no longer modify themselves, even if they want to. I doubt that they even understand their own origins any more.

Once you've reached the general ballpark of dream apes, the cognitive MPT insight has reached back around to touch on ethical questions that come up in daily life. You can imagine a sort of grid of social and political possibilities based on questions like: What if the dream ape is more (or less) ethical than me? What if a dream ape is more (or less) behaviorally effective than me, but in a "directly active" way (with learning and teaching perhaps expected to work by direct observation of gross body motions and direct inference of the justifications for those actions)? What if the dream ape has a benevolent (or hostile) attitude towards me right now? What if, relative to someone else, I'm the dream ape?

You can get an interesting intellectual puzzle by imagining that "becoming a god-like dream ape" (i.e., lesioning verbal processing but getting better at tools, science, and ethics) turned out as "scientific fact" to be the morally and pragmatically correct outcome of the transhuman possibility. In that context, imagine that one of these "super awesome transhuman dream apes" runs into a person from a different virtue ethical clade who is (1) worth saving but (2) has tried (successfully or unsuccessfully) to totally close themselves to anything except verbally explicit forms of influence, and then (3) fallen into sin somehow. In this scenario, what does the angelic dream ape do to get a positive outcome?

EDITED: Ran into the comment length limit and trimmed the thought to a vaguely convenient stopping point.

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

I felt offended reading this, even though I was expecting something along these lines and was determined not to be offended. I've come to interpret this feeling, on a 5-second level, as "Uh oh, someone's attacking my group." I'm sure I'd be a little flustered if someone said that to me in conversation. But after some time to think about it, I think my response would be "Why shouldn't math be applied to social justice?" And I really would be curious about the answer, if only because it would help me better understand people who hold this kind of viewpoint.

Also, I expect there are good reasons why it's dangerous to apply math to social justice, especially since most people aren't good at math.

1) I think your reaction to this situation owed more to your psychological peculiarities as a person (whatever they are) than to any characteristic shared by all people who identify as rationalists. There's no reason to expect people with the same beliefs as yours to never lose their cool (at least not the first time) when talking to someone with an obviously incompatible belief system.

2)

It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.

It doesn't have to be like that, at least if you don't start off with consistent and false belief systems. The way I think about such issues while effectively avoiding epistemological crises is the following: an algorithm by which I arrived at conclusions which I can pretty confidently dub "knowledge" ends up being added to my cognitive toolbox. There are many things out there that look like they were designed to be added to people's cognitive toolboxes, but not all of them can be useful, can they? Some of them look like they were specifically designed to smash other such tools to pieces. So here's a good rule of thumb: don't add anything to your cognitive toolbox that looks like an "anti-tool" to a tool that is already inside of it. Anything that you suspect makes you know less, be dumber, or require you to forsake trustworthy tools is safe & recommendable to ignore. (In keeping with the social justice topic, a subcategory of bad beliefs to incorporate are those that cause you to succumb to, rather than resist, what you know to be flaws in your cognitive hardware, such as an ingroup-outgroup bias or affect heuristics -- that's why, I think, one should avoid getting too deep into the "privilege" crowd of social justice even if the arguments make sense to one.) Of course, you should periodically empty out the toolbox and see whether the tools are in a good state, or if there's an upgraded version available, or if you were simply using the wrong hammer all along -- but generally, rely on them.

3) You like to explore the implications of a premise, which is completely incompatible with your friend's "separate magisteria" approach (a technique directly out of the Official Handbook of Belief Conservation); unfortunately, it is why you weren't able to abandon the train of thought before it derailed into emotional disturbance. You see someone saying you shouldn't use an obviously (to you) useful and relevant method for investigating something? That's a sign that says "Stop right here, there's no use in trying to extrapolate the consequences of this belief of theirs; they obviously haven't thought about it in sufficient detail to form opinions on it that you can make heads or tails of." The knowledge and depth of thought that it takes to see why math is relevant to understanding society is small enough that, if they failed to catch even that, they obviously went no further in establishing beliefs about math that could be either consistent or inconsistent with the pursuit of justice and equality. You went as far as seeing the implications and being horrified -- "How can anyone even think that?" -- but it is a thought they likely didn't get to think; the ramifications of their thought about math ended long before that, presumably at the point when it began to interfere with ideological belief conservation.

4) Get better friends. I know the type, and I've learned the hard way not to try to reason with them. Remember that one about playing chess with a pigeon?

I usually turn to the Principle of Explosion to explain why one should have core axioms in their ethics (specifically, non-contradictory axioms). If some principle you use in deciding what is or is not ethical creates a contradiction, you can justify any action on the basis of that contradiction. If the axioms aren't explicit, the chance of a hidden contradiction is higher. The idea that every action could be ethically justified is something that very few people will accept, so explaining this usually helps.

I try to understand that thinking this way is odd to a lot of people and that they may not have explicit axioms, and present the idea as "something to think about." I think this also helps me to deal with people not having explicit rules that they follow, since it A) helps me cut off the rhetorical track of "Well, I don't need principles" by extending the olive branch to the other person; and B) reminds me that many people haven't even tried to think about what grounds their ethics, much less what grounds what grounds their ethics.

I usually use the term "rule" or "principle" as opposed to "axiom," merely for the purpose of communication: most people will accept that there are core ethical rules or core ethical principles, but they may have never even used the word "axiom" before and be hesitant on that basis alone.

Almost no one these days regards axiom compiling as a way of describing emotional phenomena, such as altruism. The idea of describing such warm reasons in terms of axioms was so unintuitive that it caused your friend to think that you were looking for some other reason for social justice, other than a basic appeal to better human nature. He may have been disgusted at what he thought was an implicit disregard for the more altruistic reasons for social justice, as if they weren't themselves sufficient to do good things.

Perhaps I'm mistaken about this, but isn't a far stronger argument in favor of a consistent belief system the fact that with inconsistent axioms you can derive any result you want? In an inconsistent belief system you can rationalize away any act you intend to take, and in fact this has often been seen throughout history.

In theory, yes. In practice, ... maybe. Like saying "a human can implement a bounded TM and can in principle, without tools other than paper&pencil, compute a prime number with a million digits".

It depends on how inconsistent the axioms are in practice. If the contradictions are minor, before leveraging that contradiction to derive arbitrary results, the human may die of old age.

It depends on how inconsistent the axioms are in practice. If the contradictions are minor, before leveraging that contradiction to derive arbitrary results, the human may die of old age.

Of course, if the belief system in question becomes popular, one of its adherents may wind up doing this.

From your strong reaction I would guess that your friend's reaction somehow ruined the model of the world you had, in a way that was connected with your life goals. Therefore for some time your life goals seemed unattainable and the whole life meaningless. But gradually you found a way to connect your life goals with the new model.

Seems to me that your conclusion #2 is too abstract ("far") for a shock that I think had personal ("near") aspects. You write impersonal abstractions -- "do people really desire progress? can actions actually accomplish things?" -- but I guess it felt more specific than this; something like: "does X really desire Y? can I actually accomplish Z?" for some specific values of X, Y, and Z. Because humans usually don't worry about abstract things; they worry about specific consequences for themselves. (If this guess is correct, would you like to be more specific here?)

I would add to this that if the domain of discourse is one where we start out with a set of intuitive rules, as is the case for many of the kinds of real-world situations that "social justice" theories try to make statements about, there are two basic ways to arrive at a logically consistent belief structure: we can start from broad general axioms and reason forward to more specific rules (as you did with your friend), or we can start from our intuitions about specific cases and reason backward to general principles.

IME, when I try to reason-forward in such domains, I end up with a more simplistic, less workable understanding of the domain than when I try to reason-backward. The primary value to me of reasoning-forward for these domains is if I distrust my intuitive mechanism for endorsing examples, and want to justify rejecting some of those intuitions.

With respect to your anecdote... if your friend's experience aligns with mine, it may be that they therefore understood you to be trying to justify rejecting some of their intuitions about social mores in the name of logical consistency, and were consequently outraged (defending social mores from challenge is basically what outrage is for, after all).

I really liked this post, and I think a lot of people aren't giving you enough credit. I've felt similarly before -- not to the point of suicide, and I think you might want to find someone you can confide in about those anxieties -- but about being angered at someone's dismissal of rationalist methodology. Because ultimately, it's the methodology that makes someone a rationalist, not necessarily a set of beliefs. The categorizing of emotions as being in opposition to logic, for example, is something I've been frustrated with for quite some time, because emotions aren't anti-logical so much as they are alogical. (In my personal life, I'm an archetype of someone who gets emotional about logical issues.)

What I suspect was going on is that you felt that this person was being dismissive of the methodology and that the person did not believe reason to be an arbiter of disagreements. This reads to me like saying "I'm not truth-seeking, and I think my gut perception of reality is more important than the truth" -- a reading that sounds to me both arrogant and immoral. I've run across people like this too, and every time I feel like someone is prioritizing their kneejerk reaction over the truth, it's extremely insulting. Perhaps that's what you felt?

They seemed to be having trouble understanding what I was saying, and it was hard to get an opinion out of them. They also got angry at me for dismissing Tumblr as a legitimate source of social justice.

Relevant/funny comic.

I am confused why your friend thought good social justice arguments do not use logic to defend their claims. Good arguments of any kind use logic to defend their claims. Ergo, all the good social justice arguments are using logic to defend their claims. Why did you not say this to your friend?

EDIT: Also confused about your focus on axioms. Axioms, though essential, are the least interesting part of any logical argument. If you do not accept the same axioms as your debate partner, the argument is over. Axioms are by definition not mathematically demonstrable. In your post, you stated that axioms could be derived from other fundamental axioms, which is incorrect. Could you clarify your thinking on this?