Expecting Short Inferential Distances

Predictably Wrong

Homo sapiens' environment of evolutionary adaptedness (aka EEA or "ancestral environment") consisted of hunter-gatherer bands of at most 200 people, with no writing.  All inherited knowledge was passed down by speech and memory.

In a world like that, all background knowledge is universal knowledge.  All information not strictly private is public, period.

In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else.  When you discover a new oasis, you don't have to explain to your fellow tribe members what an oasis is, or why it's a good idea to drink water, or how to walk.  Only you know where the oasis lies; this is private knowledge.  But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge.  When you explain things in an ancestral environment, you almost never have to explain your concepts.  At most you have to explain one new concept, not two or more simultaneously.

In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.

In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot.  You're not likely to think, "Hey, maybe this guy has well-supported background knowledge that no one in my band has even heard of," because it was a reliable invariant of the ancestral environment that this didn't happen.

Conversely, if you say something blatantly obvious and the other person doesn't see it, they're the idiot, or they're being deliberately obstinate to annoy you.

And to top it off, if someone says something with no obvious support and expects you to believe it - acting all indignant when you don't - then they must be crazy.

Combined with the illusion of transparency and self-anchoring, I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience - or even communicating with scientists from other disciplines.  When I observe failures of explanation, I usually see the explainer taking one step back when they need to take two or more steps back, or listeners assuming that things should be visible in one step when they take two or more steps to explain.  Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.

A biologist, speaking to a physicist, can justify evolution by saying it is "the simplest explanation".  But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase "simplest explanation" with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones.  To someone else, "But it's the simplest explanation!" may sound like an interesting but hardly knockdown argument; it doesn't feel like all that powerful a tool for comprehending office politics or fixing a broken car.  Obviously the biologist is infatuated with his own ideas, too arrogant to be open to alternative explanations which sound just as plausible.  (If it sounds plausible to me, it should sound plausible to any sane member of my band.)

And from the biologist's perspective, he can understand how evolution might sound a little odd at first - but when someone rejects evolution even after the biologist explains that it's the simplest explanation, well, it's clear that nonscientists are just idiots and there's no point in talking to them.

A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts.  If you don't recurse far enough, you're just talking to yourself.

If at any point you make a statement without obvious justification in arguments you've previously supported, the audience just thinks you're a cult victim.

This also happens when you allow yourself to be seen visibly attaching greater weight to an argument than is justified in the eyes of the audience at that time.  For example, talking as if you think "simpler explanation" is a knockdown argument for evolution (which it is), rather than a sorta-interesting idea (which it sounds like to someone who hasn't been raised to revere Occam's Razor).

Oh, and you'd better not drop any hints that you think you're working a dozen inferential steps away from what the audience knows, or that you think you have special background knowledge not available to them.  The audience doesn't know anything about an evolutionary-psychological argument for a cognitive bias to underestimate inferential distances leading to traffic jams in communication.  They'll just think you're condescending.

And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...

Comments


The explanation from the ancestral environment seems likely. However, there is also a rational argument for refusing to accept a claim unless all the steps have been laid out from your own knowledge to the claim. While there are genuine truth-seekers who have genuinely found truth, and whom we should therefore, ideally, believe, a blanket policy of simply taking such people at their word also renders us vulnerable to humbug, because we are not equipped to tell the humbug apart from true statements many steps removed from our knowledge.

At the same time, people do not universally reject claims that are many steps removed from their own experience. After all, scientists have made headway with the public. And unfortunately, humbug also regularly makes headway. There have always been niches in society for people claiming esoteric knowledge.

Eliezer, this is a great, insightful observation.

I present to you Exhibit A from the field of computer programming.

I find that an easy way to get some of the complicated inferential jumps for free is to find a similar set of inferential jumps the other person has already made in a similar subject. It is much easier to correct a "close" inferential jump than it is to create a new one out of thin air.

Example: When discussing the concept of programming, you can use the concept of an assembly line to get their head into a procedural mode of thinking. Once they think about an object visiting a bunch of stations in a factory, you can replace "object" with "program" and "station" with "line of code." They still have no idea how programming works, but they can suddenly create a bunch of inferential jumps based on assembly lines.

In my experience, they now start asking questions about programming as related to assembly lines and you can fill in the gaps as you find them.

"So what happens at the end of the line?"
"Well, the program generally loops back around and starts over."
"Oh. So it follows the same line forever?"
"Not necessarily. Sometimes the line takes a detour and heads off into a new area of the plant for awhile. But it generally will come back to the main assembly line."
"But what's the point? Like, how does that make my computer run?"
"Think of the computer like the company. The company owns a whole bunch of assembly lines all over the place and, periodically, it will ask a certain plant to start up and keep running. One of the stations in the assembly line is something like, 'Give report to company'. The company looks at the report, realizes it is a visual report, and hands it to the assembly line that processes visual reports. That assembly line takes the report, breaks it down into RGB pixels, and puts it on the monitor for you to see. For that to happen, all of these programs have to keep spinning on their assembly lines and doing the work at each station and keep sending reports back to the company. You don't have to worry about all of the lines because the computer is doing it for you."
"Wow, that's complicated."
"Yeah, it can get pretty crazy. As a programmer, I design the assembly lines."

Or whatever.
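
To make the assembly-line metaphor concrete, here is a minimal, runnable Python sketch of the picture the dialogue paints: a main line that loops forever, sometimes detours onto a side line, and periodically sends a report back to the "company". The names and structure are my own illustration, not part of the original comment.

    import time

    def side_line(item):
        # A "detour": the item leaves the main line, gets extra
        # processing on a side assembly line, then returns.
        item["polished"] = True
        return item

    def report_to_company(item):
        # One station "gives a report to the company" - here a
        # simple print stands in for handing output to the system.
        print("report:", item)

    def main_assembly_line():
        # The main line: the program loops back around and starts
        # over, visiting each "station" (line of code) in turn.
        step = 0
        while True:                    # "follows the same line forever"
            item = {"step": step}      # an object enters the line
            if step % 3 == 0:          # sometimes the line takes a detour...
                item = side_line(item)
            report_to_company(item)    # ...but it comes back to the main line
            step += 1
            time.sleep(1)              # the plant keeps running, periodically

    if __name__ == "__main__":
        main_assembly_line()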

The young seem especially vulnerable to accepting whatever they are told. Santa Claus and all that, but also any nonsense fed to them by their schools. Schools for the young are particularly effective instruments for indoctrinating a population. In contrast, the old tend to be quite a bit more resistant to new claims - for better and for worse.

An evolutionary explanation for this is fairly easy to come up with, I think. Children have a survival need to learn as much as they can as quickly as they can, and adults have a vital role as their teachers. In their respective roles, it is best for adults to be unreceptive to new claims, so that their store of knowledge remains a reliable archive of lessons from the past, and it is best for the young to accept whatever they are told without wasting a lot of time questioning it.

It is too easy to come up with a just-so story like this. How would you rephrase it to make it testable?

Here is a counterstory:

Children have a survival need to learn only well-tested knowledge; they cannot afford to waste their precious developmental years believing wrong ideas. Adults, however, have already survived their juvenile years, and so they are presumably more fit. Furthermore, once an adult successfully reproduces, natural selection no longer cares about them; neither senescence nor gullibility affects an adult's fitness. Therefore, we should expect children to be skeptical and adults to be gullible.

This counterstory doesn't work.

A child's development is not consciously controlled, and children are protected by adults, so temporarily believing incorrect things doesn't harm their development at all.

If you wish to produce a counterstory, make it an actually plausible one. Even if it were the case that children tended to be more skeptical of claims, your story would REMAIN obviously false; whereas Constant's story would remain an important factor, and would raise the question of why we don't see what would be expected given the relevant facts.

I've just learned that there is interesting research on this topic. Sorry I don't have better links.

For example, talking as if you think "simpler explanation" is a knockdown argument for evolution (which it is)

I don't quite agree - by itself, X being "simpler" is a reason to increase my subjective belief that X is true (assuming that X is in a field where simplicity generally works) but it's not enough to prove e.g. creationism false. Rather, it is the total lack of evidence for anything supernatural that is the knockdown argument - if I had reason to believe that even one instance of say, ghosts or the effects of prayer were true, then I'd have to think that creationism was possible as well.
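
One way to make that distinction precise - a standard Bayesian gloss, and my framing rather than the commenter's - is Bayes' theorem:

    P(H|E) = P(E|H) P(H) / [ P(E|H) P(H) + P(E|~H) P(~H) ]

Simplicity plausibly raises the prior P(H), so a simpler hypothesis starts out ahead. But the decisive work is done by the likelihoods: if the evidence E is far more probable under H than under its rivals, the posterior swings toward H almost regardless of the priors. That is why the total lack of evidence for alternatives does the knockdown work that "simpler" alone cannot.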

I have experienced this problem before - the teacher assumes you have prior knowledge that you just do not have, and all of what he says afterwards assumes you've made the logical leap. I wonder to what extent thoughtful people will reconstruct the gaps in their knowledge by assuming the end conclusion is correct and working backwards to what they know, in order to give themselves a useful (but possibly incorrect) bridge from B to A.

For example, I recently heard a horrible biochem lecture about using various types of protein sequence and domain homology to predict function and cellular localization. The idea that homology could be used to partially predict these things seemed logical, and I think my brain just ran with the idea, thought about how I would go about using the technique, and placed everything he said piecewise into that schema. When I actually started to question specifics at the end of the lecture, it became clear that I didn't understand anything the man was saying at all, outside of the words "homology" and "prediction"; I had just filled in what seemed logical to me. How dangerous is it to try to "catch up" when people take huge inferential leaps?

This reminds me of teaching. I think good teachers understand short inferential distances at least intuitively if not explicitly. The 'shortness' of inference is why good teaching must be interactive.

I think Vygotsky's expression "zone of proximal development" means "one inferential step away", so in theory professional teachers should understand this. I prefer to imagine knowledge as a "tech tree" in a computer game.

When teaching one student, it is possible to detect their knowledge base and use their preferred vocabulary. I remember explaining some programming topics to a manager: source code is like a job specification; functions are employees; data are processed materials; exceptions are emergency plans.

The problem is, when teaching the whole class, everyone's knowledge base is very different. In theory it shouldn't be so, because they all supposedly learned the same things in recent years, but in reality there are huge differences - so the teacher basically has to choose a subset of the class as the target audience. Writing a textbook, where there is no interaction at all, is even more difficult.

Now that I think of it, this reminds me of something Richard Dawkins used to say at some talks: that we (the modern audience) could give Aristotle a tutorial. Being a fantasist myself, I've sometimes wondered how that could be possible. Leaving aside the complications of building a time machine (I leave that to other people), I wondered what it would be like to actually meet Aristotle and explain to him some of the things we now know about life, the universe & everything.

First of all, I'd have to learn ancient Greek, of course, or no communication would be possible. That would be the easy (and the only easy) part. More complicated is that, to teach anything modern to Aristotle, one would have to teach an incredible amount of previous stuff - one would have to walk back quite a large number of inferential steps. If I wanted to explain, for example, the theory of evolution, that would require a lot of anatomy, geography, zoology, botany, and even mathematics and philosophy. One would have to be a true polymath to achieve the feat. It's not that we don't know more about the universe than Aristotle; it's that to cross the inferential 'gap' between Aristotle and us would require an inordinate amount of knowledge.

Maybe a good metaphor is based on Dennett's crane idea: we develop ideas that help us reach higher levels of understanding, but as soon as we reach those upper levels we discard them to build new ones for higher levels. To help someone on the floor, one has to 'rebuild' these old cranes no longer in use.

Actually, evolution might be the easiest one. It's inevitable if you have variation and selection. It's a really pretty theory.

I don't know how hard it would be to convey that observation and experimentation will take you farther than just theorizing.

If I brought back some tech far advanced over Aristotle's period (and I wonder what would be most convincing), it might add weight to my arguments.

And personally, even if I had a time machine and the knowledge of ancient Greek, I don't know how hard it would be to get him to listen to a woman.

One more thing beside a time machine, knowledge of ancient Greek, and a stash of cool stuff-- the ability to argue well enough to convey your ideas to Aristotle and convince him you're right.

This is probably at least as hard as it sounds.

I don't know how hard it would be to get him to listen to a woman.

I would sort of expect any woman who showed up with apparently magical powers to be put into the goddess category. Even someone like Aristotle, who probably didn't believe that gods and goddesses literally existed, would be culturally conditioned to treat a woman who appeared to have super-powers with some respect.

As someone who has done (some) teaching, I think this is absolutely correct. In fact, the most difficult thing I find about teaching is trying to find the student's starting knowledge and then working from there. If the teacher does not go back enough 'inferential steps', the student won't learn anything - or worse, they might think they know when they don't.

Excellent stuff.

To Mazur’s consternation, the simple test of conceptual understanding showed that his students had not grasped the basic ideas of his physics course: two-thirds of them were modern Aristotelians...“I said, ‘Why don’t you discuss it with each other?’” Immediately, the lecture hall was abuzz as 150 students started talking to each other in one-on-one conversations about the puzzling question. “It was complete chaos,” says Mazur. “But within three minutes, they had figured it out. That was very surprising to me—I had just spent 10 minutes trying to explain this. But the class said, ‘OK, We’ve got it, let’s move on.’...More important, a fellow student is more likely to reach them than Professor Mazur—and this is the crux of the method. You’re a student and you’ve only recently learned this, so you still know where you got hung up, because it’s not that long ago that you were hung up on that very same thing. Whereas Professor Mazur got hung up on this point when he was 17, and he no longer remembers how difficult it was back then. He has lost the ability to understand what a beginning learner faces.”

http://harvardmagazine.com/2012/03/twilight-of-the-lecture

Some of your claims about the EEA are counterintuitive to me. Basically, it's not obvious that all information not strictly private would have been public. I'm thinking, for example, of present-day isolated cultures in which shamans are trained for several years: surely not all of their knowledge can be produced in a publicly comprehensible form. There must be a certain amount of "Eat this herb -- I could tell you why, but it would take too long to explain". Or so I imagine.

So how much of your description of knowledge in the EEA is your guesstimation, and how much is the consensus view? And where can I find papers on the consensus view? My Google-fu fails me.

When you say "A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don't recurse far enough, you're just talking to yourself."

this strongly reminds me of what it is like to try talking, as an atheist, with a Christian about any religious issue. I concluded years ago that I just shouldn't try anymore, that reasonable verbal exchange is not possible...

I suppose that I should recurse... but how, and how far, I am not sure.

This is the reason it's a Bad Thing that so many of the deeper concepts of Mormonism have become public knowledge. The first question I get asked, upon revealing that I'm a Mormon, is often, "So, you believe that if you're good in this life, you'll get your own planet after you die?" There are at least three huge problems with this question, and buried deep beneath them, a tiny seed of truth. But I can't just say "The inferential distance is too great for me to immediately explain the answer to that question. Let me tell you about the Plan of Salvation, and we'll move from there," because that sounds like I'm Trying To Convert You, which is a Scary and a Bad Thing, because... out of explanations come brainwashing. Or something.

2 Nephi 28:30:

30 For behold, thus saith the Lord God: I will give unto the children of men line upon line, precept upon precept, here a little and there a little; and blessed are those who hearken unto my precepts, and lend an ear unto my counsel, for they shall learn wisdom; for unto him that receiveth I will give more...

The "your own planet" thing isn't a huge selling point that you'd want to lead with?