The classical understanding of categories centers on necessary and sufficient properties. If a thing has X, Y, and Z, we say that it belongs to class A; if it lacks any of them, we say that it does not. This is the model of how humans construct and recognize categories that philosophers have held since the days of Aristotle.

Cognitive scientists found that the reality isn't that simple.

Human categorization is not a neat and precise process.  When asked to explain the necessary features of, say, a bird, people cannot.  When confronted with collections of stimuli and asked to determine which represent examples of 'birds', people find it easy to accept or reject things that have all or none of the properties they associate with that concept; when shown entities that share some but not all of the critical properties, people spend much more time trying to decide, and their decisions are tentative.   Their responses simply aren't compatible with binary models.

Concepts are associational structures.  They do not divide the world cleanly into two parts.  Not all of their features are logically necessary.  The recognition of a feature produces an activation, the strength of which depends not only on the degree to which the feature is present but also on a weighting factor.  When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category.  The stronger the total activation, the more clearly the stimulus can be said to embody the concept.
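To make the mechanism concrete, here is a minimal sketch of that model in Python. The features, weights, and threshold are invented for illustration, not taken from any study:

```python
# Minimal sketch of the threshold-activation model described above.
# Feature weights and the threshold are illustrative, not empirical.

def concept_activation(features, weights):
    """Sum each feature's degree of presence, scaled by its weight."""
    return sum(weights[f] * degree for f, degree in features.items())

# Hypothetical weights for a 'bird' concept: feathers count for more
# than flight, so a penguin still clears the bar and a bat does not.
BIRD_WEIGHTS = {"has_feathers": 0.5, "lays_eggs": 0.2, "flies": 0.2, "sings": 0.1}
THRESHOLD = 0.6

robin   = {"has_feathers": 1.0, "lays_eggs": 1.0, "flies": 1.0, "sings": 1.0}
penguin = {"has_feathers": 1.0, "lays_eggs": 1.0, "flies": 0.0, "sings": 0.0}
bat     = {"has_feathers": 0.0, "lays_eggs": 0.0, "flies": 1.0, "sings": 0.0}

for name, feats in [("robin", robin), ("penguin", penguin), ("bat", bat)]:
    a = concept_activation(feats, BIRD_WEIGHTS)
    print(f"{name}: activation={a:.2f}, bird={a >= THRESHOLD}")
```

The graded output mirrors the experimental picture: a robin (activation 1.0) is accepted instantly, a penguin (0.7) clears the threshold but only barely, and a bat (0.2) is rejected.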

Does this sound familiar?  It should for us - we have the benefit of hindsight.  We can recognize that pattern - it's how neural networks function.  Or to put it another way, it's how neurons work.

But wait!  There's more!

Try applying that model to virtually every empirical fact we've acquired about how people reach their conclusions.  For example, our beliefs about how seriously we should take a hypothetical problem scenario depend not on a rigorous statistical analysis, but on a combination of how vivid the scenario feels and how easily examples of it come to mind.  People are convinced not only by the logical structure of an argument but also by the traits of the entities presenting it and the specific way in which the arguments are made.  And so on, and so forth.

Most human behavior derives directly from the behavior of the associational structures in our minds.

To put it another way:  what we call 'thinking' doesn't involve rational thought.  It's *feeling*.  People ponder an issue, then respond in the way that they feel stands out the most from the sea of associations.

Consider the implications for a while.

Comments


Does this sound familiar?

Yes. I see nothing here not already covered by this, this, and this.

Your final conclusion is like saying that the computation done by computers doesn't involve arithmetic. It's *flow of electric charge*. The charge flows around, then settles down in some stable point in the sea of possible distributions. ETA: On that point, see also this.

People are convinced not only by the logical structure of an argument but also by the traits of the entities presenting it and the specific way in which the arguments are made.

[…]

Consider the implications for a while.

I know that the main stance here on LW is that we need to improve our thinking, and I completely agree, but in this comment I would like to make the following point: as humans, we are actually already remarkably good at thinking. We’ve evolved a certain way of thinking, and it makes sense to ask why we think that way and why it works so well.

The way thought actually goes (before it is tamed via talking and writing) seems chaotic, undisciplined and opaque (or instead: organic, flexible and efficient). We have a model that transparent, linear logical thought is superior and more reliable, but I suspect this has more to do with the effectiveness of our communication than our thought process. (People who harness transparent, linear logical thought are better at communicating their ideas accurately.)

Whenever I think about something, I notice that a heuristic explanation will arrive first and that this heuristic answer is more reliable than any logical linear answer I subsequently come up with. Therefore, heuristic solution in hand, communication of ideas to myself or others becomes a problem of finding a linear logic that roughly fits the networked solution. (This could be criticized as trying to find an argument that fits the conclusion, but I think this is uncharitable. It is the uncertain process of approximating a complex multi-dimensional organic network with a static, linear causal chain.)

On the one hand, I think a networked, heuristic, gut answer should be re-expressed in a logical linear way, for the purposes of error-checking, fine-honing and communication, and you almost always gain deeper understanding from doing so. (E.g., the network gains clarity and faulty or irrelevant links are down-voted). If there is a discrepancy between a linear argument answer and a heuristic answer, you can compare the two, trying to determine if there is something important missing from the linear argument or a false connection in the heuristic argument.

On the other hand, in the case of a conflict between a heuristic answer and a logical linear answer, I will always go with the heuristic. I realize that this probably sounds like blasphemy on a rationality site, but I am being honest and I can justify this position. (To myself, heuristically – we will see if I can do so linearly.)

Suppose that reasoning logically one way gives an answer, but my gut feeling is that the answer is something else. Experience has taught me that the heuristic always arrives at the correct answer first. But this makes sense. If the heuristic is not convinced by a logical argument, then either the logical linear argument is wrong or I haven’t fully understood it, in which case I’m not qualified to have confidence in it. As soon as I have understood the argument, and if I agree with it, my heuristic incorporates it. Thus by the time I have understood a correct logical argument, the logical reasoning is just a subset of my functioning heuristic solution.

So I don’t think it’s logical to prefer a logical argument to a heuristic argument, but even so, a heuristic with a logical backbone is much more solid than one without.

I have written a bit about the relation between logical and heuristic thinking, and I think this is an excellent comment; you might consider expanding it slightly into a top-level post.

On the other hand, in the case of a conflict between a heuristic answer and a logical linear answer, I will always go with the heuristic.

Unless I'm time-limited, I usually try to find the error in my thinking. Usually it will be some factor I missed in my logical analysis, but sometimes it will be some factor my gut feeling didn't accurately weigh. That is why we need to continue learning about the availability heuristic and other biases: we need to learn them so thoroughly that our gut feelings take them into account.

This could be criticized as trying to find an argument that fits the conclusion, but I think this is uncharitable.

I think it's exactly right. All reasons are rationalizations.

Not in the way that 'rationalization' is used in natural language. There it refers to a non-rational statement offered in place of rationality, to satisfy the desire to present an argument as rational without going through the trouble of actually constructing and adopting a rational position.

The biggest functional difference: when a reason is abolished, the behavior goes away. When a rationalization is abolished, the behavior remains.

Yes, and the last bit needs more explanation/elaboration. What are the non-obvious implications?

Wittgenstein was a philosopher who described the inadequacies of "necessary and sufficient conditions" for concepts/categories long before cognitive science existed.

Maybe it's not quite as simple as "philosophers bad, cognitive scientists good"?

By Wittgenstein's time, there were already plenty of philosophers who thought definitions aren't quite captured by necessary and sufficient conditions.

When confronted with collections of stimuli and asked to determine which represent examples of 'birds', people find it easy to accept or reject things that have all or none of the properties they associate with that concept; when shown entities that share some but not all of the critical properties, people spend much more time trying to decide, and their decisions are tentative. Their responses simply aren't compatible with binary models.

In a natural environment, people's uncertainty could be uncertainty about their knowledge of the entity (does that thing really have cloven hooves? does that flying thing have feathers?), rather than about the concept (does a kosher beast have to have cloven hooves? does a bird have to have feathers?). It's possible that people's uncertainty in conditions where they are told that the beast has such and such characteristics is due to their methods of reasoning not being developed for such situations, which are rare in real life.

Voted up, despite the lack of links to related material, because I think it's an important and far underappreciated point. Both in your life and in AI design, you need to think, "How did this choice/event/phenomenon even come to my attention in the first place?"

(ETA: Hegemonicon made a largely similar point, with good citations.)

For an example of this oversight in action, see my previous qualified criticism of an AI lab's automated scientist, where it's easy to miss how much "attention focusing" the team did before their program even saw what was left of the problem.

I would revise this though:

what we call 'thinking' doesn't involve rational thought. It's feeling. People ponder an issue, then respond in the way that they feel stands out the most from the sea of associations.

What we call thinking doesn't necessarily involve rational thought, but you can readjust your thinking processes to better align with rationality. Indeed, that's the whole point of this site.

The recognition of a feature produces an activation, the strength of which depends not only on the degree to which the feature is present but also on a weighting factor. When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category.

This is also how linear classifiers in machine learning work, and many other statistical classifiers just replace "sum" with something else (support vector machines, etc.). On pattern-recognition problems like "does this image contain a tree?" or "will this person repay their loan?" they far outperform hand-tuned decision trees, which classify by asking a series of yes/no questions. Given the nature of the complex sensory information we have to process, it's not surprising that our brains work the same way.
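A toy contrast may help; everything here (features, weights, rules) is invented for illustration:

```python
# Toy contrast between the two classifier styles described above.

def linear_classifier(features, weights, bias):
    """Pools weighted evidence; strength in one feature can
    compensate for weakness in another."""
    score = sum(weights[name] * value for name, value in features.items())
    return score + bias >= 0

def decision_tree(features):
    """A series of hard yes/no questions; no partial credit."""
    if features["green_pixels"] < 0.5:
        return False  # not green enough -> not a tree
    return features["vertical_edges"] >= 0.5

# An image that is intensely green but has weak vertical edges:
image = {"green_pixels": 0.9, "vertical_edges": 0.4}
weights = {"green_pixels": 1.0, "vertical_edges": 1.0}

print(linear_classifier(image, weights, bias=-1.0))  # True: 0.9 + 0.4 - 1.0 >= 0
print(decision_tree(image))                          # False: fails the edge test
```

The linear classifier lets the strong green signal make up for the weak edge signal; the tree's second yes/no question throws the image out regardless.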

I voted this up because the associations I had while reading this post were quite vivid.

I've just recently started thinking about the way I think in the way described in this post. Whenever I think about something, there is indeed an activated network of interconnected ideas. Some observations:

  • Certain links and nodes are brighter than others due to how frequently I've thought about them before or how much interest I have in them.

  • If I understand something well, the links and nodes are easily accessible and don't change very much as I follow them.

  • If I don't understand something well, the network keeps shifting and changing while I interact with it.

  • Regardless of what I am thinking about, the network is larger than my immediate focus and most of it is vague, but I can inspect one link or group of links at a time.

Mathematicians have tried to find ways of dealing with this sort of thing, too: http://en.wikipedia.org/wiki/Fuzzy_set
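For readers who don't follow the link: a fuzzy set replaces binary membership with a degree in [0, 1]. A minimal sketch, with an invented membership function:

```python
# Minimal sketch of a fuzzy set. Membership is a degree in [0, 1]
# rather than a yes/no; the ramp below is invented for illustration.

def tall_membership(height_cm):
    """Degree to which a height belongs to the fuzzy set 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between the extremes

for h in (155, 170, 185, 195):
    print(f"{h} cm -> tall to degree {tall_membership(h):.2f}")
```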

Do you think this method of modelling would make the problem soluble? Or are there still issues?