Compartmentalization as a passive phenomenon

We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's "clicking", was due to a "failure to compartmentalize". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:

If a pen is dropped on a moon, will it:
A) Float away
B) Float where it is
C) Fall to the surface of the moon

Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.

A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of them got the question correct, but what was interesting was that those who got it wrong were asked a follow-up question: "You've seen films of the APOLLO astronauts walking around on the Moon, why didn't they fall off?" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, "Because they were wearing heavy boots".

While these articles were totally unscientific surveys, it doesn't seem to me like this would be the result of an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, but quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point in that? Rather, it seems to indicate that my "compartmentalization" was simply a lack of a connection, and that such connections are much harder to draw than we might assume.

The world is a complicated place. One of the reasons we don't have AI yet is because we haven't found very many reliable cross-domain reasoning rules. Reasoning algorithms in general are quickly subject to a combinatorial explosion: the reasoning system might know which potential inferences are valid ones, but not which ones are meaningful in any useful sense. Most current-day AI systems need to be more or less fine-tuned or rebuilt entirely when they're made to reason in a domain they weren't originally built for.

For humans, it can be even worse than that. Many of the basic tenets in a variety of fields are counter-intuitive, or are intuitive but have counter-intuitive consequences. The universe isn't actually fully arbitrary, but for somebody who doesn't know how all the rules add up, it might as well be. Think of all the times when somebody has tried to reason using surface analogies, mistaking them for deep causes; or dismissed a deep cause, mistaking it for a surface analogy. Somebody might present us with a connection between two domains, but we have no sure way of testing the validity of that connection.

Much of our reasoning, I suspect, is actually pattern recognition. We initially have no idea of the connection between X and Y, but then we see X and Y occur frequently together, and we begin to think of the connection as an "obvious" one. For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible for me, because I haven't had enough exposure to formal physics to hammer in the right intuition.

I suspect that often when we say "(s)he's compartmentalizing!", we're operating in a domain that's more familiar to us, and thus it feels like an active attempt to keep things separate must be the cause. After all, how could they not see it, were they not actively keeping it compartmentalized?

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible. If you don't know which cross-domain rules and reasoning patterns are valid, then building up a separate set of rules for each domain is the safe approach. Discarding as much of your previous knowledge as possible when learning about a new thing is slow, but it at least guarantees that you're not polluted by existing incorrect information. Build your theories primarily on evidence found from a single domain, and they will be true within that domain. While there can certainly also be situations calling for an active process of compartmentalization, that might only happen in a minority of the cases.

Comments


GEB has a section on this.

In order to not compartmentalize, you need to test whether your beliefs are all consistent with each other. If your beliefs are all statements in propositional logic, consistency checking becomes the Boolean Satisfiability Problem, which is NP-complete. If your beliefs are statements in predicate logic, consistency checking becomes outright undecidable, which is even worse.

Not compartmentalizing isn't just difficult, it's basically impossible.
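A toy illustration of the parent's point - a sketch, assuming beliefs are encoded as propositional predicates (my own encoding, nothing standard): the only fully general way to check consistency is to try all 2^n truth assignments over n propositions.

```python
from itertools import product

def consistent(beliefs, variables):
    """Brute-force propositional consistency check: try every one of the
    2**n truth assignments and see whether any satisfies all beliefs at once."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(belief(assignment) for belief in beliefs):
            return True   # found a model: the beliefs are mutually consistent
    return False          # no assignment works: the beliefs contradict each other

# Three toy beliefs over two propositions:
#   "if it rains, the streets are wet", "it rains", "the streets are not wet"
beliefs = [
    lambda a: (not a["rain"]) or a["wet"],
    lambda a: a["rain"],
    lambda a: not a["wet"],
]
print(consistent(beliefs, ["rain", "wet"]))  # False: this belief set is inconsistent
```

With only two propositions the loop runs four times; with a realistic number of beliefs and propositions the 2^n blow-up makes the check hopeless, which is the comment's point.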

Reminds me of the opening paragraph of The Call of Cthulhu.

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.

Glenn Beck:

When I sobered up I started looking at all of the things that I believed in and decided to take everything out and only put the things back in me that I knew to be true... and then I would put it back in and then I would look at all the other things that I found were true and then I would match them and if one of them didn't fit with the other then one of them had to be wrong. (source)

P.S. The trick is to use bubble sort.

I agree, save that I think Academian's proposal should be applied and "compartmentalizing" replaced with "clustering". "Compartmentalization" is a more useful term when restricted to describing the failure mode.

Could I express what you said as:

A person is in the predicament of:

1) having a large number of beliefs
2) the computationally intractable (or outright undecidable) challenge of validating those beliefs for consistency

Therefore:

3) It is impossible to not compartmentalize

This leads to a few questions:

  • Is it still valuable to reduce, albeit not eliminate, compartmentalization?
  • Is there a fast method to rank how impactful a belief is to my belief system, in order to predict whether an expensive consistency check is worthwhile?
  • Is it possible to arrive at a (mathematically tractable) small core set of maximum-impact beliefs that are consistent? (the goal of extreme rationality?)
  • Does probabilistic reasoning change how we answer these questions?

Does probabilistic reasoning change how we answer these questions?

Edwin Jaynes discusses "lattice" theories of probability where propositions are not universally comparable in appendix A of Probability Theory: The Logic of Science. Following Jaynes's account, probability theory would correspond to a uniformly dense lattice, whereas a lattice with very sparse structure and a few dense regions would correspond to compartmentalized beliefs.

If I understand you correctly, you are saying that most people are not knowledgeable enough about the different domains in question to make (or judge) any cross-domain connections. This seems plausible.

I can think, however, of another argument that supports this but also clarifies why on Less Wrong we think that people actively compartmentalize instead of failing to make the connection: selection bias. Most people on this site are scientists, programmers or in other technical professions. It seems that most are also consequentialists. Not surprisingly, both of these facts point to people who enjoy following a chain of logic all the way to the end.

So, we tend to learn a field until we know its basic principles. For example, when learning about gravity, you can learn just enough to calculate the falling speed of an object in a gravitational field, or you can learn about the bending of space-time by mass. It seems rather obvious to me that the second method encourages cross-domain connections. If you don't know the basic underlying principles of the domains, you can't make connections.

I also see this all the time when I teach someone how to use computers. Some people build an internal model of how a computer and its programs conceptually work, and are then able to use most basic programs. Others learn by memorizing each step, looking at each program as a domain of its own instead of generalizing across all programs.

One of the reasons I'm in favor of axiomatization in mathematics is that it prevents compartmentalization and maintains a language (set theory) for cross-domain connections. It doesn't have to be about completeness.

So yeah, thumbs up for foundations-encourage-connections... they are connections :)

I basically agree, but I'd advocate category theory as a much better base language than set theory.

I wonder if there'd be a difference between the survey as written (asking what a pen would do on the moon, and then offering a chance to change the answer based on Apollo astronauts) vs. a survey in which someone asked "Given that the Apollo astronauts walked on the moon, what do you think would have happened if they'd dropped a pen?"

The first method makes someone commit to a false theory, and then gives them information that challenges the theory. People could passively try to fit the astronaut datum into their current working theory, or they could actively view it as an outside attack on their position which they had to defend against. Maybe if the students had given people the information about the astronauts first, the respondents would have applied the cross-domain knowledge more successfully.

But I totally sympathize with you about the occasional virtues of compartmentalization. The worst field I've ever found for this is health and medicine. You learn that some vitamin is an antioxidant, then you learn that some disease is caused by oxidation, you make the natural assumption that the vitamin would help cure the disease, and then a study comes out saying there's no relationship at all.

I think part of the problem with the moon question was that it suggested two wrong answers first. How would you have answered the question if it was just "If a pen is dropped on the moon, what will happen? Explain in one sentence."

I would have shrugged and said "It will fall down, slowly." But when I saw "float away" and "float where it is", those ideas wormed their way into my head for a few seconds before I could reject them. Just suggesting those ideas managed to mess me up, and I'm someone whose mental model of motion in space is so strong that I damn near cried with joy when I watched Planetes and saw people maneuvering in zero gravity exactly the way they're supposed to. (And it turns out I'm not the only one to have this exact same reaction. Weird.)

So, I'm thinking that the wrong multiple-choice answers are responsible for a lot of the confusion, the same way most people wouldn't interpret bumps in the night as angry ghosts unless they hear that the house is haunted.

But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

Look at it this way: what evolutionary pressure exists for NOT compartmentalizing?

From evolution's standpoint, if two of your beliefs really need to operate at the same time, then the stimuli will be present in your environment at close enough to the same time as to get them both activated, and that's good enough to work for passive consistency checking. For active consistency checking, we have simple input filters for rejecting stuff that conflicts with important signaling beliefs and whatnot.

OTOH, there's no evolutionary pressure for something that sifts through your entire brain contents, generating arbitrary scenarios where two pieces of information might conflict or produce some startlingly new and useful idea.

Yes, building mental connections between domains requires well-populated maps for both of them, plus significant extra processing. It's more properly treated as a skill which needs development than as a cognitive defect. In the pen-on-the-moon example, knowing that astronauts can walk around is not enough to infer that a pen will fall; you also have to know that gravity is multiplicative rather than a threshold effect. And it certainly doesn't help that most people's knowledge of non-Earth gravity comes entirely from television, where, since zero-gravity filming is impractical, the writers invariably come up with some sort of confusing phlebotinum (most commonly magnetic boots) to make them behave more like regular-gravity environments.

And it certainly doesn't help that most people's knowledge of non-Earth gravity comes entirely from television, where, since zero-gravity filming is impractical, the writers invariably come up with some sort of confusing phlebotinum (most commonly magnetic boots) to make them behave more like regular-gravity environments.

I think you're on to something. I was wondering why the "heavy boots" people singled out the boots. Why not say "heavy suits", or that the astronauts themselves were heavier than pens? Didn't 2001: A Space Odyssey start its first zero-gravity scene with a floating pen and a flight attendant walking up the wall?

Quite convincing, thanks. I'll want to think about it more, but perhaps it would be a good idea to toss the word out the window for its active connotations.

ISTM, though, that there is a knack for cross-domain generalization (and cross-domain mangling) of insights, that people have this knack in varying degrees, and that this knack is an important component of what we call "intelligence", in the sense that if we could figure out what this knack consists of, we'd have solved a good chunk of AI. Isn't this a major reason why Hofstadter, for instance, has focused so sharply on analogy-making, fluid analogies, and so on?

(This is perhaps a clue to one thing that has been puzzling me, given Eliezer's interest in AI, namely the predominance of topics such as decision theory on this blog, and the near total absence of discussion around topics such as creativity or analogy-making.)

I think what's sometimes called a "compartment" would be better called a "cluster". Learning consists of forming connections, which can naturally form distinct clusters without "barriers" causally separating them. The solution is then to simply connect the clusters (realize that the moon landing videos are relevant).

But certainly at times people erect intentional barriers to prevent connections from forming (a lawyer effortfully trying not to connect his own morals to the case), and then I would use the term "compartment". Identifying the distinction between clusters and compartments could be a useful diagnostic goal.

(This is perhaps a clue to one thing that has been puzzling me, given Eliezer's interest in AI, namely the predominance of topics such as decision theory on this blog, and the near total absence of discussion around topics such as creativity or analogy-making.)

I'd assumed that was because the focus was not on how to build an AGI but on how you define its goals.

Nitpick: "If a pen is dropped on A moon"
It doesn't specify Earth's moon. If a pen were dropped on say, Deimos, it might very well appear to do B) for a long moment ;) (Deimos is Mars' outermost moon and too small to retain a round shape. Its gravity is only 0.00256 m/s^2 and escape velocity is only 5.6 m/s. That means you could run off it.)
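Those Deimos figures are easy to reproduce from Newton's formulas; here is a quick sketch (the mass and radius below are approximate published values):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.48e15     # approximate mass of Deimos, kg
r = 6.2e3       # approximate mean radius of Deimos, m

g = G * M / r**2              # surface gravity: ~0.00257 m/s^2
v_esc = (2 * G * M / r)**0.5  # escape velocity: ~5.6 m/s

print(f"surface gravity: {g:.5f} m/s^2")
print(f"escape velocity: {v_esc:.1f} m/s")
```

A brisk run is a few metres per second, so "you could run off it" checks out.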

On the other hand, the word "dropped" effectively gives the game away. Things dropped go DOWN, not up, and they don't float in place. Would be better to say "released".

And now, back to our story...

I don't believe it was assumed that compartmentalization is something you actually "do" (make effort towards achieving). Making this explicit is welcome, but assuming the opposite to be the default view seems to be an error.

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains.

I'm not sure about this. In your examples here, people are in fact completely lacking any full understanding of gravitation and/or (I suppose) knowledge of the masses of notable celestial objects in our solar system.

Now, I have to admit that the correct answer wasn't obvious to me at first.

I up-voted just for you admitting this in your example, but let's talk about this. You knew the moon had gravity, but not much. Then you remembered it's enough for astronauts to walk on, so most likely enough to attract a pen as well.

Do you understand that the same thing would happen if you were standing on an asteroid? Yes, you can stand on an asteroid - I wouldn't recommend walking or any movement at all without a tether, but as long as you don't move you will stay on its surface. In this case, gravity would not be enough for you to walk, even with "heavy boots".

But if you just release the pen (don't throw it or toss it at all, please!) it will still fall. Every object with mass has gravity, and any two objects will be attracted to each other, even if it's by a relatively weak force. Yes, other celestial masses will exert influence, but as long as this asteroid is not on a collision course with another body, we can be reasonably sure that 3-4 feet from its surface, its gravity will be greater than any other body's.

If you don't understand gravitation, you can't really expect to answer the question correctly. As for the people who can't answer it correctly: lots of people didn't really care much for physics when they studied it, and so while they may have known this at one time (to pass a test) it did not really get integrated into their working knowledge. It may have been possible to dredge up the correct answer by asking the right question to trigger the right memory, but the fact is that they really just don't have this knowledge, even if the information is in their brain.

I'm not sure about this. In your examples here, people are in fact completely lacking any full understanding of gravitation

And even without a full understanding of gravitation and the nitty gritty of what causes it, it would suffice to know 'gravity is basically acceleration'.

I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected.

Many people (probably more people) make the same mistake when asked 'which falls faster, the 1 kg weight or the 20 kg weight?'. I guess this illustrates why compartmentalization is useful. False beliefs that don't matter in one field can kill you in another.

The example you use is in my opinion not a failure of compartmentalization but of communication.

Humans will, without fail, due to possessing sufficiently optimised time-saving heuristics, always assume when talking to a nonthreatening, nondescript and polite stranger like yourself that you are a regular person (the kind they normally interact with) talking about a situation that fits their usual frame of reference (taking place on a planetary surface, a reasonable temperature range, normal g, one atm of pressure, oxygen present enabling combustion, etc.), except when you explicitly state otherwise.

Taking two weights of different mass (all else being equal) and dropping them will not result in "neither falling faster". To see why, consider the equation for terminal velocity (ignoring buoyancy): v_t = sqrt(2mg / (ρ · A · C_d)), where ρ is the density of the fluid, A the projected area of the object, and C_d the drag coefficient.
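To make the drag point concrete, here is a numerical sketch (the air density, projected area, and drag coefficient are made-up illustrative values, not measurements of any real object):

```python
from math import sqrt

def terminal_velocity(m, rho=1.2, A=0.01, C_d=1.0, g=9.81):
    """v_t = sqrt(2*m*g / (rho * A * C_d)); buoyancy ignored.
    rho: fluid density (kg/m^3), A: projected area (m^2),
    C_d: drag coefficient (dimensionless)."""
    return sqrt(2 * m * g / (rho * A * C_d))

# Two objects identical in shape and size, differing only in mass:
# the heavier one has the higher terminal velocity, so in air it
# really does end up falling faster over a long enough drop.
print(terminal_velocity(1.0))   # ~40.4 m/s
print(terminal_velocity(20.0))  # ~180.8 m/s
```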

They of course won't think about it this way, and even if they did, they would note that over a "normal" drop distance the two fall times t1 and t2 would come out about the same, even if not exactly equal.

The rather cringe-worthy approximation comes when they unintentionally assume a sloppiness of communication on your part (we leave out all except the most important factors when asking short questions) and take you to mean that a few other things besides mass are unequal too (since the things they handle that have radically different masses from each other are rarely, if ever, identical in shape or volume).

The reason it is cringe-worthy is not that it's a bad assumption to make in their social circle, but that their social circle is such that they don't have enough interactions like this to categorize the question under "sciencey stuff" in their head!

PS: I just realized you may have mistyped and meant the old "What is heavier, 10 kg of straw or 10 kg of iron?", which illustrates the point you are trying to make a bit better. (I actually got the wrong answer when blurting out the first thing that came to mind at the tender age of 7; I realized my error a second too late to avoid the laughter of my schoolmate.) But even this is either a failure of communication or just ignorance of the concept of density.

PS: I just realized you may have mistyped and meant the old "What is heavier, 10 kg of straw or 10 kg of iron?", which illustrates the point you are trying to make a bit better

No. I meant what I wrote. The thing with the straw and or feathers is just word play, a communication problem. I am talking about an actual misunderstanding of the nature of physics.

I have seen people (science teacher types) ask the question by holding out a rock and a scrunched up piece of paper and asking which will hit the ground first when dropped. There is no sophistry - the universe doesn't do 'trick questions'. Buoyancy, friction and drag are all obviously dwarfed here by experimental error. People get the answer wrong. They expect to see the rock hit the ground noticeably earlier. Even more significantly, they are surprised when they both fall about the same speed. In fact, sometimes they go as far as to accuse the demonstrator of playing some sort of trick and insist on performing the experiment themselves.

The same kind of intuitive (mis)understanding of gravity would lead people to also guess wrong about things like what would happen on the moon.

Even better is the question "what weighs more, a pound of feathers, or a pound of gold?"

Gur zrgny vf yvtugre -- vg'f zrnfherq va gebl cbhaqf, juvpu unir gjryir bhaprf gb gur cbhaq engure guna fvkgrra, naq n gebl bhapr vf nccebkvzngryl gur fnzr nf na nibveqhcbvf bhapr.

Someone posted a while back that only a third of adults are capable of abstract reasoning. I've had some trouble figuring out exactly what it means to go through life without abstract reasoning. The "heavy boots" response is a good example.

Without abstract reasoning, it's not possible to form the kind of theories that would let you connect the behavior of a pen and an astronaut in a gravitational field. I agree that this is an example of lack of ability, not compartmentalization. Of course, scientists are capable of abstract reasoning, so it's still possible to accuse them of compartmentalizing even after considering the survey results.

I instantly distrusted the assertion (it falls into the general class of "other people are idiots" theories, which are always more popular among the Internet geek crowd than they should be), and went to the linked article:

The Piagetians used what they called a clinical interview to determine which reasoning schemes a child had mastered. They posed questions of the children and then asked about how they arrived at their answers. As mentioned above, the elementary reasoning schemes (classification, etc) were what were being used.

Because each clinical interview took two or three hours, it was only possible to get data for a small number of children. Some psychologists decided to try to create a simple pencil and paper version which could then be administered to many children and thereby obtain data about broad classes of children.

This already suggests that the data should be noisy. I can think of at least two problems:

  1. The test only determines, at best, what methods the individual used to solve this particular problem - and, at worst, determines what methods the individual claims to have used to solve the problem.

  2. The accuracy of the test may be greatly reduced by the paper-and-pencil administration thereof. Any confusion which occurs by either the evaluators or takers will obscure the data.

I read the question as asking about THE Moon, not "a moon". The question as written has no certain answer. If a moon is rotating fast enough, a pen held a few feet above its surface will be at orbital velocity. Above this level it will float away. The astronaut might also float away, unless he were wearing heavy boots.

Pens and heavy boots always do the same thing in any gravitational field, unless they modify it somehow, like by moving the moon. Acceleration due to gravity does not depend on the mass of the accelerated object.
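A short sketch of why the mass cancels, using approximate Moon figures: the gravitational force is F = GMm/r², so the acceleration a = F/m = GM/r² contains no trace of the dropped object's mass.

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.35e22  # approximate mass of the Moon, kg
r_moon = 1.737e6  # approximate radius of the Moon, m

def accel(m_object):
    """a = F/m = (G*M*m/r^2)/m = G*M/r^2 -- the object's mass cancels."""
    F = G * M_moon * m_object / r_moon**2
    return F / m_object

print(accel(0.01))   # a pen: ~1.63 m/s^2
print(accel(100.0))  # an astronaut in heavy boots: the same ~1.63 m/s^2
```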

If a moon is rotating fast enough, a pen held a few feet above its surface will be at orbital velocity.

Well, even more technically, 'may be at orbital velocity, depending on where on the moon the astronaut is standing'.

For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible for me, because I haven't had enough exposure to formal physics to hammer in the right intuition.

Absolutely. Another piece of the puzzle required to understand whether the pen 'obviously' falls or not is, 'what kind of atmosphere does the moon have'? What fraction of people know that there is no atmosphere on the surface of the moon? (Do I really know this?? I think I just remember being told this, and despite being told, I'm not certain there's absolutely no atmosphere on the moon.)

Without detailed information about the atmosphere, you really don't know. On Earth, the pen floats in water, but doesn't float in air.

(And then you have the added problem that there's a high chance people will first recall the image of the flag blowing on the moon, which is unfortunate for physics.)

On Earth, the pen floats in water, but doesn't float in air.

This is surely also true on the moon? The relative densities of the pen and the fluid you put it in don't change depending on the gravitational field they're in.

Gravity affects pressure affects density. To a first approximation, gases have density directly proportional to their pressure, and liquids and solids don't compress very much.

With air/water/pen the conclusion doesn't change. But an example where it does:
A nitrogen atmosphere at STP has a density of 1251 g/m^3.
A helium balloon at STP has a density of 179 g/m^3. The balloon floats.
Then reduce Earth's gravity by a factor of 10, and hold temperature constant.
The atmospheric pressure reduces by a factor of 10, so its density goes to 125 g/m^3.
But the helium can't expand likewise (assume the balloon is perfectly inelastic), so it's still 179 g/m^3. The balloon sinks.
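The arithmetic above can be checked in a few lines (the densities are the STP figures the comment quotes):

```python
rho_n2 = 1251.0   # nitrogen atmosphere at STP, g/m^3
rho_he = 179.0    # helium in the balloon at STP, g/m^3

# At full gravity the balloon floats: helium is less dense than the air.
print(rho_he < rho_n2)            # True: floats

# Cut gravity to 1/10: surface pressure (the weight of the air column)
# drops tenfold, and at constant temperature gas density scales with
# pressure, so the atmosphere thins to 125.1 g/m^3.
rho_atm_low_g = rho_n2 / 10

# The perfectly inelastic balloon keeps its volume, so the helium
# inside stays at 179 g/m^3 -- now denser than the surrounding air.
print(rho_he > rho_atm_low_g)     # True: sinks
```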

Another piece of the puzzle required to understand whether the pen 'obviously' falls or not is, 'what kind of atmosphere does the moon have'?

Another unobvious fact is that the force that holds up a floating object is also tied to weight - specifically, the weight of the atmosphere or liquid. Even if the atmosphere on the Moon were precisely as dense as the Earth's (it is not), the pen and the air would be lighter in the same proportion, and the pen would still fall.

Edit: i.e. what bentarm said.

I fail to understand how compartmentalization explains this. I got the answer right the first time. And I suspect most people who got it wrong did so because of the (unwarranted) assumptions they were making - meaning if they had just looked at the question and nothing else, and if they understood basic gravity, they would've got it right. But when you also try to imagine some hypothetical forces on the surface of the moon, or relate it to zero-gravity images seen on TV, and if you visualize all these before you visualize the question, you'd probably get it wrong.

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible.

If a person's knowledge is highly compartmentalized, and consists of these three facts:

  1. A human being walked across the moon.

  2. There are small rocks on the surface.

  3. The moon is a planetary body.

without any educational background, would they choose the right answer?

I believe that there is a high probability that basic intuition would lead to an accurate answer.

So what went wrong, in your case? I don't think you can attribute it to a failure of compartmentalization. It wasn't that you didn't make connections to your prior knowledge; the problem was that you made too many, and that you hadn't organized your priors into a confidence hierarchy.

Confusion occurs when tenuous connections are made and lead to an over-analysis of the question. You differ from the person in the hypothetical because you had prior knowledge of the forces involved. Connections are only helpful when they are made from strong foundational knowledge to new applications. When you make many connections from a condition of uncertainty to a new problem, your intuition fails. The result is the assignment of a low confidence level to each of many connections, while basic observations or truisms get ignored.

It seems you were confident in the areas of physics most applicable to this situation, identifying the atmosphere, gravity, and mass as the most influential factors. You attempted to remember how these forces interacted, and recognized that they had dimensions you had forgotten. The connections caused your intuition to be replaced by humbleness, and the go-to answer was a balanced combination of forces. Thus the pen would float.

It is clear that you came to the problem with much more information than my hypothetical person, armed with three foundational facts. You too had those three facts; if that was all that was in your moon compartment, the intuition would have been clearer.

(I apologize for the presumptions I made in referring to your thought process. This is a situation in which we find ourselves frequently.)