The classical understanding of categories centers on necessary and sufficient properties. If a thing has X, Y, and Z, we say that it belongs to class A; if it lacks them, we say that it does not. This is the model of how humans construct and recognize categories that philosophers have held since Aristotle.
Cognitive scientists found that the reality isn't that simple.
Human categorization is not a neat and precise process. When asked to explain the necessary features of, say, a bird, people cannot. When confronted with collections of stimuli and asked to determine which represent examples of 'birds', people find it easy to accept or reject things that have all or none of the properties they associate with that concept; when shown entities that share some but not all of the critical properties, people spend much more time trying to decide, and their decisions are tentative. Their responses simply aren't compatible with binary models.
Concepts are associational structures. They do not divide the world cleanly into two parts, and not all of their features are logically necessary. The recognition of a feature produces an activation, whose strength depends not only on the degree to which the feature is present but also on a weighting factor. When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category. The stronger the total activation, the more clearly the stimulus can be said to embody the concept.
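The threshold model described above can be sketched as a tiny weighted-sum classifier. The feature names, weights, and threshold below are illustrative assumptions, not values from the text:

```python
# A minimal sketch of the weighted-feature activation model.
# Weights and the threshold are made-up illustrations, not empirical values.

def concept_activation(features, weights):
    """Sum weighted feature activations. Each feature value in [0, 1]
    reflects how strongly that feature is present in the stimulus."""
    return sum(weights.get(name, 0.0) * strength
               for name, strength in features.items())

def categorize(features, weights, threshold):
    """The concept 'fires' when total activation crosses the threshold;
    the margin above it is how clearly the stimulus embodies the concept."""
    activation = concept_activation(features, weights)
    return activation >= threshold, activation

# Hypothetical weights for a 'bird' concept: no single feature is necessary.
bird_weights = {"has_feathers": 0.5, "flies": 0.3, "lays_eggs": 0.2, "sings": 0.1}

robin   = {"has_feathers": 1.0, "flies": 1.0, "lays_eggs": 1.0, "sings": 1.0}
penguin = {"has_feathers": 1.0, "flies": 0.0, "lays_eggs": 1.0, "sings": 0.0}
bat     = {"has_feathers": 0.0, "flies": 1.0, "lays_eggs": 0.0, "sings": 0.0}

print(categorize(robin, bird_weights, 0.6))    # (True, 1.1)  - clear member
print(categorize(penguin, bird_weights, 0.6))  # (True, 0.7)  - member, less clearly
print(categorize(bat, bird_weights, 0.6))      # (False, 0.3) - below threshold
```

Note that the penguin still crosses the threshold despite lacking a feature people strongly associate with birds, but with a weaker activation - which is exactly the slower, more tentative judgment described above.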
Does this sound familiar? It should for us - we have the benefit of hindsight. We can recognize that pattern - it's how neural networks function. Or to put it another way, it's how neurons work.
But wait! There's more!
Try applying that model to virtually every empirical fact we've acquired regarding how people produce their conclusions. For example, our beliefs about how seriously we should take a hypothetical problem scenario depend not on a rigorous statistical analysis, but on a combination of how vividly we feel about the scenario and how frequently it appears in our memory. People are convinced not only by the logical structure of an argument but also by the traits of the entities presenting it and the specific way in which the arguments are made. And so on, and so forth.
Most human behavior derives directly from the behavior of the associational structures in our minds.
To put it another way: what we call 'thinking' doesn't involve rational thought. It's *feeling*. People ponder an issue, then respond with whatever they feel stands out most from the sea of associations.
Consider the implications for a while.
Voted up, despite the lack of links to related material, because I think it's an important and far underappreciated point. Both in your life, and in AI design, you need to think, "How did this choice/event/phenomenon even come to my attention in the first place?"
(ETA: Hegemonicon made a largely similar point, with good citations.)
For an example of this oversight in action, refer to my previous qualified criticism of an AI lab's automated scientist, where it's easy to miss how much of the "attention focusing" the team did before their program even saw what was left of the problem.
I would revise this though:
What we call thinking doesn't necessarily involve rational thought, but you can readjust your thinking processes to better align with rationality. Indeed, that's the whole point of this site.