Bad Concepts Repository

We recently established a successful Useful Concepts Repository.  It got me thinking about all the useless or actively harmful concepts I had carried around, in some cases for most of my life, before seeing them for what they were.  Then it occurred to me that I probably still have some poisonous concepts lurking in my mind, and I thought creating this thread might be one way to discover what they are.

I'll start us off with one simple example:  The Bohr model of the atom as it is taught in school is a dangerous thing to keep in your head for too long.  I graduated from high school believing that it was basically a correct physical representation of atoms.  (And I went to a *good* high school.)  Some may say that the Bohr model serves a useful role as a lie-to-children to bridge understanding to the true physics, but if so, why do so many adults still think atoms look like concentric circular orbits of electrons around a nucleus?  

There's one hallmark of truly bad concepts: they actively work against correct induction.  Thinking in terms of the Bohr model actively prevents you from understanding molecular bonding and, really, everything about how an atom can serve as a functional piece of a real thing like a protein or a diamond.

Bad concepts don't have to be scientific.  Religion is held to be a pretty harmful concept around here.  There are certain political theories which might qualify, except I expect that one man's harmful political concept is another man's core value system, so as usual we should probably stay away from politics.  But I welcome input as fuzzy as common folk advice you received that turned out to be really costly.

Comments


The concept of "deserve" can be harmful. We like to think about whether we "deserve" what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism. I prefer to invert "deserve" into the future: deserve your luck by exploiting it.

Of course, "deserve" can be a useful social mechanism to increase desired actions. But only within that context.

Also "need". There's always another option, and pretending sufficiently bad options don't exist can interfere with expected value estimations.

And "should" in the moralizing sense. Don't let yourself say "I should do X". Either do it or don't. Yes, you're conflicted. If you don't know how to resolve it on the spot, at least be honest and say "I don't know whether I want X or not-X". As applied to others, don't say "he should do X!". Apparently he's not doing X, and if you're specific about why, it is less frustrating and effective solutions are more visible. "He does X because it's clearly in his best interests, even despite my shaming. Oh..." Or again, if you can't figure it out, be honest about it: "I have no idea why he does X."

Don't let yourself say "I should do X". Either do it or don't.

That would work nicely if I were so devoid of dynamic inconsistency that “I don't feel like getting out of bed” would reliably entail “I won't regret it if I stay in bed”; but as it stands, I sometimes have to tell myself “I should get out of bed” in order to do stuff I don't feel like doing but know I would regret not doing.

(Thinking about this for a bit, I noticed that it was more fruitful for me to think of "concepts that are often used unskillfully" rather than "bad concepts" as such. Then you don't have to get bogged down thinking about scenarios where the concept actually is pretty useful as a stopgap or whatever.)

Bad Concept: Obviousness

Consider this: what distinguishes obviousness from a first impression? Like some kind of meta semantic stop sign, "it's obvious!" can be used as an excuse to stop thinking about a question. It can be shouted out as an argument with an implication to the effect of "If you don't agree with me instantly, you're an idiot," which can sometimes convince people that an idea is correct without the speaker actually supporting their points. I sometimes wonder if obviousness is just an insidious rationalization that we cling to when what we really want is to avoid thinking or gain instant agreement.

I wonder how much damage obviousness has done?

I've found the statement "that does not seem obvious to me" to be quite useful in getting people to explain themselves without making them feel challenged. It's among my list of "magic phrases" which I'm considering compiling and posting at some point.

It's among my list of "magic phrases" which I'm considering compiling and posting at some point.

Looking forward to this.

Implicitly assuming that you mapped out/classified all possible realities. One of the symptoms is when someone writes "there are only two (or three or four...) possibilities/alternatives..." instead of "The most likely/only options I could think of are..." This does not always work even in math (e.g. the statement "a theorem can be either true or false" used to be thought of as self-evidently true), and it is even less reliable in a less rigorous setting.

In other words, there is always at least one more option than you have listed! (This statement itself is, of course, also subject to the same law of flawed classification.)

There's a Discordian catma to the effect that if you think there are only two possibilities — X, and Y — then there are actually Five possibilities: X, Y, both X and Y, neither X nor Y, and something you haven't thought of.

Jaynes had a recommendation for multiple hypothesis testing - one of the hypotheses should always be "something I haven't thought of".
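Jaynes's recommendation can be sketched in a few lines of code. This is only an illustration with made-up numbers, not anything from Jaynes himself: give a catch-all hypothesis a small prior and a flat likelihood, and watch the posterior mass flow to it when the data fits none of the named hypotheses well.

```python
# A minimal sketch of the catch-all hypothesis idea.
# All priors and likelihoods below are invented for illustration.

def posterior(priors, likelihoods):
    """Bayes' rule over a dict of hypotheses: P(H|D) is proportional to P(H) * P(D|H)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

priors = {"H1": 0.55, "H2": 0.40, "something else": 0.05}

# Data that fits neither named hypothesis well. We can't compute a real
# likelihood for the catch-all, so we assign it a flat, uninformative one.
likelihoods = {"H1": 0.01, "H2": 0.02, "something else": 0.5}

post = posterior(priors, likelihoods)
print(post)  # most of the probability mass has moved to "something else"
```

The catch-all never "wins" outright when a named hypothesis explains the data well; it only absorbs probability when everything you did think of fails, which is exactly the signal that you're missing an option.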

"Your true self", or "your true motivations". There's a tendency sometimes to call people's subconscious beliefs and goals their "true" beliefs and goals, e.g. "He works every day in order to be rich and famous, but deep down inside, he's actually afraid of success." Sometimes this works the other way and people's conscious beliefs and goals are called their "true" beliefs and goals in contrast to their unconscious ones. I think this is never really a useful idea, and the conscious self should just be called the conscious self, the subconscious self should just be called the subconscious self, and neither one of them needs to be privileged over the other as the "real" self. Both work together to dictate behavior.

"Rights". This is probably obvious to most consequentialists, but framing political discussions in terms of rights, as in "do we have the right to have an ugly house, or do our neighbors not have the right not to look at an ugly house if they don't want to?" is usually pretty useless. Similarly, "freedom" is not really a good terminal value, because pretty much anything can be defined as freedom, e.g. "by making smoking in restaurants illegal, the American people have the freedom not to smell smoke in a restaurant if they don't want to."

The word "is" in all its forms. It encourages category thinking in lieu of focusing on the actual behavior or properties that make it meaningful to apply. Example: "is a clone really you?" Trying to even say that without using "is" poses a challenge. I believe it should be treated the same as goto: occasionally useful but usually a warning sign.

So some, like Lycophron, were led to omit 'is', others to change the mode of expression and say 'the man has been whitened' instead of 'is white', and 'walks' instead of 'is walking', for fear that if they added the word 'is' they should be making the one to be many. -Aristotle, Physics 1.2

ETA: I don't mean this as either criticism or support, I just thought it might be interesting to point out that the frustration with 'is' has a long history.

E-Prime.

We could support speaking this way on LW by making a "spellchecker" that would underline all the forbidden words.
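A toy version of such a "spellchecker" could just flag every form of "to be". This is only a sketch of the idea, not a proposed LW feature; a real checker would need to distinguish auxiliary from copular uses, which a regex cannot do.

```python
# Toy E-Prime checker: flag forms of "to be" in a piece of text.
import re

# Word-boundary-delimited forms of "to be", including common contractions.
BE_FORMS = r"\b(am|is|are|was|were|be|been|being|isn't|aren't|wasn't|weren't)\b"

def flag_be_forms(text):
    """Return the forbidden words found, in order of appearance."""
    return [m.group(0) for m in re.finditer(BE_FORMS, text, re.IGNORECASE)]

print(flag_be_forms("Is a clone really you? It is being debated."))
# → ['Is', 'is', 'being']
```

An underlining UI would then highlight the matched spans; the hard part is the false positives (e.g. "being" as a noun), which is why E-Prime advocates usually treat such tools as prompts for reflection rather than hard rules.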

In that sentence, I find the words "clone", "really" and "you" to be as problematic as "is".

You're perfectly comfortable with the indefinite article?

No, but I am much more comfortable with it than I am with the other words.

There is a cultural heuristic (especially in Eastern cultures) that we should respect older people by default. Now, this is not a useless heuristic, as the fact that older people have had more life experiences is definitely worth taking into account. But at least in my case (and I suspect in many other cases), the respect accorded was disproportionate to their actual expertise in many domains.

The heuristic can be very useful when respecting the older person is not really a matter of whether he/she is right or wrong, but more about appeasing power. It is important to distinguish between the two situations.

How old is the "older" person? 30? 60? 90? In the last case, respecting a 90-year-old person is usually not about appeasing power.

It seems more like retirement insurance: a social contract that while you are young, you have to respect old people, so that when you are old, you will get respect from young people. It depends on what specifically "respecting old people" means in a given culture. If you have to obey them in their irrational decisions, that's harmful. But if it just means speaking politely to them and providing them a hundred trivial advantages, I would say it is good in most situations.

Specifically, I am from Eastern Europe, where there is a cultural norm of letting old people sit in mass transit. As in: you see an old person near you, there are no free places to sit, so you automatically stand up and offer them your seat. The same for pregnant women. (There are some seats with a sign that requires you to do this, but the cultural norm is that you do it everywhere.) -- I consider this norm good, because for some people the difference in utility between standing and sitting is greater than for average people. (And of course, if you have a broken leg or something, that's an obvious exception.) So it was rather shocking for me to hear about cultures where this norm does not exist. Unfortunately, even in my country this norm (and politeness in general) has been declining in recent decades.

Now, this is not a useless heuristic, as the fact that older people have had more life experiences is definitely worth taking into account.

More relevant to the social reasons for the heuristic, they have also had more time to accrue power and allies. For most people that is what respect is about (awareness of their power to influence your outcomes conditional on how much deference you give them).

The heuristic can be very useful when respecting the older person is not really a matter of whether he/she is right or wrong, but more about appeasing power. It is important to distinguish between the two situations.

Oh, yes, those were the two points I prepared in response to your first paragraph. You nailed both, exactly!

Signalling social deference and actually considering an opinion to be strong Bayesian evidence need not be the same thing.

Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems.

For example, many Christians hold this belief under the names the Kingdom, the Rapture, and/or the Second Coming (details depend on sect). It leads to excessive discounting of the future, and consequent poor choices. In *Collapse*, Jared Diamond writes about how apocalyptic Christians who controlled a mining company caused environmental problems in the United States.

Belief in a magic problem solving genie also causes people to fail to take effective action to improve their lives and help others, because they can just wait for the genie to do it for them.

I think this would probably be a pretty destructive idea were it not for the fact that for most people who hold it, it seems to be such a far belief that they scarcely consider the consequences.

I am not sure I am comfortable with the idea of an entirely context-less "bad concept". I have the annoying habit of answering questions of the type "Is it good/bad, useful/useless, etc." with a counter-question "For which purpose?"

Yes, I understand that rare pathological cases should not crowd out useful generalizations. However given the very strong implicit context (along with the whole framework of preconceived ideas, biases, values, etc.) that people carry around in their heads, I find it useful and sometimes necessary to help/force people break out of their default worldview and consider other ways of looking at things. In particular, ways where good/bad evaluation changes the sign.

To get back to the original point, a concept is a mental model of reality, a piece of a map. A bad concept would be wrong and misleading in the sense that it would lead you to incorrect conclusions about the territory. So a "bad concept" is just another expression for a "bad map". And, um, there are a LOT of bad maps floating around in the meme aether...

This one is well known, but having an identity that is too large can make you more susceptible to being mind killed.

When I "die", I won't cease to exist, I'll be united with my dead loved ones where we'll live forever and never be separated again.

"Harmony" -- specifically the idea of root progressions -- in music theory. (EDIT: That's "music theory", not "music". The target of my criticism is a particular tradition of theorizing about music, not any body of actual music.)

This is perhaps the worst theory I know of to be currently accepted by a mainstream academic discipline. (Imagine if biologists were Lamarckians, despite Darwin.)

Could you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims.

What makes the idea of "harmony" wrong? What alternative is "right"? Schenker's theory? Westergaard's? Riemann? Partch? (I'm just engaging in Google-scholarship here; I'd never heard of these people until moments ago.) But what would make these, or some other theory, right?

Could you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims.

You're in good company, because it's never been clear to music theorists either, even after a couple millennia of thinking about the problem.

However, I do have my own view on the matter. I consider the music-theoretical analogue of "matching the territory" to be something like data compression. That is, the goodness of a musical theory is measured by how easily it allows one to store (and thus potentially manipulate) musical data in one's mind.

Ideally, what you want is some set of concepts such that, when you have them in your mind, you can hear a piece of music and, instead of thinking "Wow! I have no idea how to do that -- it must be magic!", you think "Oh, how nice -- a zingoban together with a flurve and two Type-3 splidgets", and -- most importantly -- are then able to reproduce something comparable yourself.