The Most Important Thing You Learned

My current plan does still call for me to write a rationality book - at some point, and despite all delays - which means I have to decide what goes in the book, and what doesn't.  Obviously the vast majority of my OB content can't go into the book, because there's so much of it.

So let me ask - what was the one thing you learned from my posts on Overcoming Bias, that stands out as most important in your mind?  If you like, you can also list your numbers 2 and 3, but it will be understood that any upvotes on the comment are just agreeing with the #1, not the others.  If it was striking enough that you remember the exact post where you "got it", include that information.  If you think the most important thing is for me to rewrite a post from Robin Hanson or another contributor, go ahead and say so.  To avoid recency effects, you might want to take a quick glance at this list of all my OB posts before naming anything from just the last month - on the other hand, if you can't remember it even after a year, then it's probably not the most important thing.

Please also distinguish this question from "What was the most frequently useful thing you learned, and how did you use it?" and "What one thing has to go into the book that would (actually) make you buy a copy of that book for someone else you know?"  I'll ask those on Saturday and Sunday.

PS:  Do please think of your answer before you read the others' comments, of course.

Comments


"the map is not the territory" has stuck in my mind as one of the over-arching principles of rationality. it reinforces the concept of self-doubt, implies one should work to make their map conform more closely to the territory, and is invaluable when one believes to have hit a cognitive wall. there are no walls, just the ones drawn on your map.

the post, "mysterious answers to mysterious questions" is my favorite post that dealt with this topic, though it has been reiterated (and rightly so) over a multitude of postings.

link: http://www.overcomingbias.com/2007/08/mysterious-answ.html

"Newcomb's Problem and Regret of Rationality" is one of my favorites. For all the excellent tools of rationality that stuck with me, this is the one that most globally encompassed Eliezer's general message: that rationality is about success, first and foremost, and if whatever you're doing isn't getting you the best outcome, then you're not being rational, even if you appear rational.

"A rationalist should win". Very high-level meta-advice and almost impossible to directly apply, but it keeps me oriented.

Your explanation / definition of intelligence as an optimization process. (Efficient Cross-Domain Optimization)

That was a major "aha" moment for me.

The most important thing I learned from Overcoming Bias was to stop viewing the human mind as a blank slate, ideally a blank slate, an approximation to a blank slate, or anything with properties even slightly resembling blankness or slateness. The rest is just commentary - admittedly very, very good commentary.

The posts I associate with this are everything on evolutionary psychology such as Godshatter (second most important thing I learned: study evolutionary psychology!), the free will series, the "ghost in the machine" and "ideal philosopher of perfect emptiness" series, and the Mind Projection Fallacy.

The most important thing I can recall is conservation of expectation. In particular, I'm thinking of Making Beliefs Pay Rent and Conservation of Expected Evidence. We need to see a greater commitment to deciding in advance which direction new evidence will shift our beliefs.

Most frequently referenced concepts:

  1. Mind projection fallacy and "The map is not the territory."
  2. "The opposite of stupidity is not intelligence."

Engines of Cognition was the final thing I needed in order to assimilate the idea that nothing comes for free, and that intelligence does not magically allow you to do anything: it has costs and limitations, and it obeys the second law of thermodynamics. Or rather, intelligence and thermodynamics both obey the same underlying principle.

http://www.overcomingbias.com/2008/02/second-law.html

"Obviously the vast majority of my OB content can't go into the book, because there's so much of it."

I know this is not what you asked for, but I'd like to vote for a long book. I feel that the kind of people who will be interested in it (and readers of OB) probably won't be intimidated by the page count, and I know that I'd really like to have a polished paper copy of most of the OB material for future reference. The web just isn't quite the same.

In short: Something that is Gödel, Escher, Bach-like in length probably wouldn't be a problem, though maybe there are good reasons other than "there is too much material" to keep it shorter.

A near-tie. Either:

(1) The Bottom Line, or

(2) Realizing there's actually something at stake that, like, having accurate conclusions really matters for (largely, Eliezer's article on heuristics and biases in global catastrophic risks, which I read shortly before finding OB), or

(3) Eliezer's re-definition of humility in "12 virtues", and the notion in general that I should aim to see how far my knowledge can take me, and to infer all I can, rather than just aiming to not be wrong (by erring on the side of underconfidence).

(1) wasn't a new thought for me, but I wasn't applying it consistently, and Eliezer's meditations on it helped. (2) and (3) more or less were new to me. I've gotten the most out of some of the most basic OB content, and probably continue to get the most out of reflecting on it.

The biggest "aha" post was probably the one linking thermodynamics to beliefs ( The Second Law of Thermodynamics, and Engines of Cognition, and the following one, Perpetual Motion Beliefs ), because it linked two subjects I knew about in a surprising and interesting way, deepening my understanding of both.

Apart from that, "Tsuyoku Naritai" was the one that got me hooked, though I didn't really "learn" anything by it - I like the attitude it portrays.

I agree about Engines of Cognition. It got me really interested in the parallels between information theory and thermodynamics and led me to start reading a lot more about the former, including the classic Jaynes papers. I think it gave me a deeper understanding of why e.g. the Carnot limit holds, and led me to read about the interesting discovery that the thermodynamic availability (extractable work) of a system is equal to its Kullback-Leibler divergence (a generalization of informational entropy) from its environment.
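
For reference, the result being described is usually stated as follows (my paraphrase of the standard nonequilibrium free-energy formulation, not necessarily the exact form in the commenter's sources): a system in state \rho, coupled to a heat bath at temperature T whose equilibrium state is \rho_{\mathrm{eq}}, can yield at most

    W_{\max} = k_B T \, D_{\mathrm{KL}}(\rho \,\|\, \rho_{\mathrm{eq}})

of work, where D_{\mathrm{KL}} is the relative entropy (Kullback-Leibler divergence).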

Second for me would have to be Artificial Addition, which helped me understand why attempts to "trick" a system into displaying intelligence are fundamentally misguided.

I'm going to have to choose "How to Convince Me That 2 + 2 = 3." It did quite a lot to illuminate the true nature of uncertainty.

http://www.overcomingbias.com/2007/09/how-to-convince.html

The ideas in it are certainly not the most important, but another really striking post for me is "Surprised by Brains." The lines "Skeptic: Yeah? Let's hear you assign a probability that a brain the size of a planet could produce a new complex design in a single day. / Believer: The size of a planet? (Thinks.) Um... ten percent." in particular are really helpful in fighting biases that cause me to regard conservative estimates as somehow more virtuous.

I second this one, also as related to Making Beliefs Pay Rent: what you think and what you present as argument need to be valid, and need to actually have the strength as evidence that they claim to have. Failure to abide by this principle results in empty or actively stupid thoughts.

The most important thing for me, basically, was the morality sequence and in particular The Moral Void. I was worrying heavily about whether any of the morals I valued were justified in a universe that lacked Intrinsic Meaning. The Morality sequence (and Nietzsche, incidentally) helped me internalize that it's OK after all to value certain things— that it's not irrational to have a morality— that there's no Universal Judge condemning me for the crime of parochialism if I value myself, my friends, humanity, beauty, knowledge, etc— and that even my flight from value judgments was the result of a slightly more meta value judgment.

Seems probable to me that many potential readers aren't currently too worried about the Moral Void, but those who are need a pretty substantial push in this direction.

Hard to pick a favourite, of course, but there's a warning against confirmation bias that has stuck with me: it cautions us against standing firm, urging us instead to move with the evidence like grass in the wind.

On the general discussions of what sort of book I want, I want one no more than a couple of hundred pages long which I can press into the hands of as many of my friends as possible. One that speaks as straightforwardly as possible, without all the self-aggrandizing eastern-guru type language...

The most important and useful thing I learned from your OB posts, Eliezer, is probably the mind-projection fallacy: the knowledge that the adjective "probable" and the adverb "probably" always make an implicit reference to an agent (usually the speaker).

Honorable mention: the fact that there is no learning without (inductive) bias.

The most important thing I learned may have been how to distinguish actual beliefs from meaningless sounds that come out of our mouths. Beliefs have to pay the rent. (http://www.overcomingbias.com/2007/07/making-beliefs-.html)

The Wrong Question sequence was amazing. One of the very unintuitive sequences that greatly improved my categorization methods. Especially with the 'Disguised Queries' post.

I'm going to go with "Knowing About Biases Can Hurt People", but only because I got the Mind Projection Fallacy straight from Jaynes.

I refuse to name just one thing. I can't rank a number of ideas by how important they were relative to each other, they were each important in their own right. So, to preserve the voting format, I'll just split my suggestions into several comments.

Some notes in general. During the first year I used to partially misinterpret some of your essays, but after I got a better grasp of the underlying ideas, I came to see many of the essays as not contributing any new knowledge. This is not to say that the essays were unimportant: they act as exercises, exploring the relevant ideas in excruciating detail, which makes them ideal for forming a solid intuitive understanding of those ideas, a level of ownership over habits of thought without which it hardly makes sense to bother learning them. Focusing attention on each of the explored facets of rationality allows one to think about extending and adapting them to your own background. At the same time, I think the verbosity in your writing should be significantly reduced.

I too would like to support more brevity in your writings - but maybe that just isn't your style.

Overcoming Bias: Thou Art Godshatter: understanding how intricate human psychology is, and how one should avoid inventing simplistic Fake Utility Functions for human behavior. I used to make this mistake. Also relevant: Detached Lever Fallacy, how there's more to other mental operations than meets the eye.

If my priors are right, then genuinely new evidence is a random walk. In particular: when I see something complicated that I take to be new evidence, and the story behind it seems to obviously confirm my beliefs in every particular, I need to be very suspicious.

http://www.overcomingbias.com/2007/08/conservation-of.html

http://www.overcomingbias.com/2007/09/conjunction-fal.html

http://www.overcomingbias.com/2007/09/rationalization.html
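
For concreteness, the bookkeeping identity behind Conservation of Expected Evidence can be written out (a minimal sketch of the standard Bayesian argument, not quoted from the linked posts):

    P(H) = P(H \mid E) \, P(E) + P(H \mid \neg E) \, P(\neg E)

Since the prior equals the expectation of the posterior, any expected shift upward on seeing E must be exactly balanced by an expected shift downward on seeing \neg E:

    \bigl( P(H \mid E) - P(H) \bigr) \, P(E) = - \bigl( P(H \mid \neg E) - P(H) \bigr) \, P(\neg E)

This is why a sequence of honest updates behaves like a random walk: if you could predict the direction of your next update, you should already have updated.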

A while back, I posted on my blog two lists with the posts I considered the most useful on Overcoming Bias so far.

If I just had to pick one? That's tough, but perhaps "Burdensome Details": the skill of both cutting away all the useless details from predictions, and seeing the burdensome details in the predictions of others.

An example: Even though I was pretty firmly an atheist before, arguments like "people have received messages from the other side, so there might be a god" wouldn't have appeared structurally in error. I would have questioned whether or not people really had received messages from the dead, but not the implication. Now I see the mistake - "there's something after death" and "there is a supernatural entity akin to the traditional Christian god" may be hypotheses that are traditionally (in this culture) associated with the same memeplex, but as hypotheses they're entirely distinct.

There are no genuine mysteries, only things that I am ignorant or confused about.

It's hard to answer this question, given how much of your philosophy I have incorporated wholesale into my own, but I think it's the fundamental idea that there are Iron Laws of evidence, that they constrain exactly what it is reasonable to believe, and that no mere silly human conceit such as "argument" or "faith" can change them even in the millionth decimal place.

Your debunking of philosophical zombieism really stuck with me. I don't think I've ever done a faster 180 on my stance on a philosophical argument.

The most important thing for me is the near/far bias - even though that's a relatively recent "discovery" here, it still resonates very well with why I argue with people about things, and why people whom I respect argue with each other.

  1. The Blegg / Rube series, which I'll still list as separate from...
  2. The Map / Territory distinction
  3. An Alien God

All things that, if pushed with the right questions, I'd have come to on my own, but all three put very beautifully.

Every Cause Wants To Be A Cult, Science as Attire, The Simple Truth

That clear thinking can take you from obvious but wrong to non-obvious but right, and on issues of great importance. That we frequently incur great costs just because we're not really nailing things down.

Looking over the list of posts, I suggest the ones starting with "Fake".

I've been enjoying the majority of OB posts, but here's the list of ideas I consider the most important for me:

  1. Intelligence as a process steering the future into a constrained region.

  2. The map / territory distinction.

  3. The use of probability theory to quantify the degree of belief.

Is this to be a book that somebody could give to their grandmother and expect the first page to convince her that the second is worth reading?

The series of posts about free will. I was always a determinist but somehow refused to think about free will in detail, holding the belief that determinism and free will were compatible for some mysterious reason. OB helped me to see things clearly (now it all seems pretty obvious).

I vote for "Conservation of Expected Evidence." The essential answer to supposed evidence from irrationalists.

Second place: either "Occam's Razor" or "Decoherence is Falsifiable and Testable", for their understandable explanations of the technical definitions of Occam's Razor.

The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P, and showing that P is true has no effect on the validity of A. This is the core of the "knowing biases can hurt you" problem, and while it's obvious if put in formal terms, it's counterintuitive in practice. The best way to get that to sink in, I think, is to practice demolishing bad arguments that support a conclusion you agree with.

The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P

That sort of makes sense if what you mean is "whatever we humans think about A has no effect on the truth or falsehood of P in a Platonic sense", but surely showing that A is invalid ought to change how likely you think it is that P is true?

and showing that P is true has no effect on the validity of A.

Similarly, if P is actually true, a random argument that concludes with "P is true" is more likely to be valid than a random argument that concludes with "P is false". So showing P is true ought to make you think that A is more or less likely to be valid depending on its conclusion.

(Given that this comment was voted up to 3 and nobody gave a counterargument, I wonder if I'm missing something obvious.)
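
To make the disagreement concrete, here is a minimal simulation (all parameters invented for illustration) in which the validity of arguments correlates with the truth of the propositions they support. Under any such correlation, learning that A is invalid does shift the probability of P, as this reply argues:

    import random

    random.seed(0)

    N = 100_000
    prior = 0.5              # P(P is true) before hearing any argument
    p_valid_if_true = 0.6    # assumed: valid arguments are easier to find for true claims
    p_valid_if_false = 0.2   # assumed: arguments for false claims are usually flawed

    invalid_count = 0
    true_and_invalid = 0
    for _ in range(N):
        p_is_true = random.random() < prior
        chance_valid = p_valid_if_true if p_is_true else p_valid_if_false
        argument_valid = random.random() < chance_valid
        if not argument_valid:
            invalid_count += 1
            if p_is_true:
                true_and_invalid += 1

    # Analytically: P(true | invalid) = (0.5 * 0.4) / (0.5 * 0.4 + 0.5 * 0.8) = 1/3
    print(true_and_invalid / invalid_count)  # ~0.33, down from the 0.5 prior

The clarification below amounts to saying that in practice this correlation is much weaker than intuition suggests, not that it is zero.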

I wrote that two years ago, and you're right that it's imprecise in a way that makes it not literally true. In particular, if a skilled arguer gives you what they think is the best argument for a proposition, and the argument is invalid, then the proposition is likely false. What I was getting at, I think, is that my intuition used to vastly overestimate the correlation between the validity of arguments encountered and the truth of propositions they argue for, because people very often make bad arguments for true statements. This made me reject things I shouldn't have, and easily get sidetracked into dealing with arguments too many layers removed from the interesting conclusions.

3 is still a small number. If it were 10+ then you should worry. I'm confused by this too.

The nearest correct idea I can think of to what Jim actually said, is that if you have a proposition P with an associated credence based on the available evidence, then finding an additional but invalid argument A shouldn't affect your credence in P. The related error is assuming that if you argue with someone and are able to demolish all their arguments, that this means that you are correct, and giving too little weight to the possibility that they are a bad arguer with a true opinion. Jim, is that close to what you meant?

EDIT: Whoops, didn't see Jim's response. But it looks like I guessed right. I've also made the related error in the past, and this quote from Black Belt Bayesian was helpful in improving my truth-finding ability:

To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.

"You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions"

It wasn't much of an "aha!" moment. When I first read it, I thought something along the lines of "Of course higher standards are possible, but if no one can find flaws in your argument, you're doing pretty well." But the more I thought about it, the more I realized that EY had made a good point. I later stumbled upon flaws in my long-standing arguments that I had overlooked, yet no one had called me on them.

Not only was the standard lower than I had previously realized, but it is entirely possible for someone to 1) not believe you, 2) not be able to put their refutation into words, and 3) still be right.

http://www.overcomingbias.com/2008/09/refutation-prod.html

The big problem with relying on someone else to save you is: why would they bother? No one is likely to be as motivated to find mistakes in your beliefs as you are (or at least as you should be).

I've been reading OB for a comparatively short time, so I haven't yet been through the vast majority of your posts. But "The Sheer Folly of Callow Youth" really puts in perspective the importance of truth-seeking and why it's necessary.

Quote: "Of this I learn the lesson: You cannot manipulate confusion. You cannot make clever plans to work around the holes in your understanding. You can't even make "best guesses" about things which fundamentally confuse you, and relate them to other confusing things. Well, you can, but you won't get it right, until your confusion dissolves. Confusion exists in the mind, not in the reality, and trying to treat it like something you can pick up and move around, will only result in unintentional comedy. Similarly, you cannot come up with clever reasons why the gaps in your model don't matter. You cannot draw a border around the mystery, put on neat handles that let you use the Mysterious Thing without really understanding it - like my attempt to make the possibility that life is meaningless cancel out of an expected utility formula. You can't pick up the gap and manipulate it."

Link: http://www.overcomingbias.com/2008/09/youth-folly.html

How to make sense out of metaethics. I would particularly name The Meaning of Right.

The most important thing I learned was the high value of the outside perspective. It is something that I strive to deploy deliberately by forming intentional friendships with other aspiring rationalists at Intentional Insights. We support each other's ability to achieve goals in life through what we came to call a goal buddy system, providing an intentional outside perspective on each other's thinking about life projects and priorities.