The formal concept of the fallacious argument was born as the twin of logic itself. When the ancient Greeks first began to systematically examine the natural arguments people made as they sought to demonstrate the truth of propositions, they noted that certain types of arguments were vulnerable to counterexamples while others were not. The vulnerable arguments could not guarantee their conclusions: when they were claimed to justify a conclusion, they could not rule out the alternatives, and so they were identified as fallacious.
Although the validity of logical arguments can be determined through logic, logic alone doesn't particularly distinguish one fallacy from another. It is a curious fact that, despite this, some fallacies are made far more frequently by human beings than others. Far more.
For example, "If P, then Q. P. Therefore, Not-Q." is just as basic and elemental an error as "If P, then Q. Q. Therefore, P." But the first fallacy is hardly ever found (humans being what they are, there is probably no mistake within our reach that is never made), while the second, known as affirming the consequent, is extraordinarily common.
It is in fact generally true that we confuse a unidirectional implication with a bidirectional one. If one thing implies another, we leap to the conclusion that the second also implies the first, that the connection is equally strong in each direction, even though it is trivial to demonstrate that this isn't necessarily so. The error seems inherent to human intuition: it occurs regularly across contexts and subjects, even when people know it is logically invalid, and only careful examination counters the tendency.
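To make the asymmetry concrete, here's a quick sketch (my own illustration, with hypothetical helper names `implies` and `valid`) that brute-forces all truth assignments: a valid argument form has no assignment where the premises are true and the conclusion false, while affirming the consequent has exactly such a counterexample.

```python
from itertools import product

def implies(p, q):
    # Material implication: "If P, then Q" is false only when P is true and Q is false.
    return (not p) or q

def valid(premises, conclusion):
    # An argument form is valid if every assignment that makes all
    # premises true also makes the conclusion true.
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return (False, (p, q))  # found a counterexample
    return (True, None)

# Modus ponens: If P, then Q. P. Therefore Q.
print(valid([implies, lambda p, q: p], lambda p, q: q))
# → (True, None) — no counterexample exists

# Affirming the consequent: If P, then Q. Q. Therefore P.
print(valid([implies, lambda p, q: q], lambda p, q: p))
# → (False, (False, True)) — P false, Q true satisfies the premises but not the conclusion
```

The one-way implication survives the exhaustive check; reversing it fails on the single assignment where Q holds without P, which is precisely the alternative that intuition fails to rule out.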
Much later, Sigmund Freud began to identify ways that people would deny assertions they found emotionally threatening, what we now call 'psychological defense mechanisms'. The flaws in Freud's work as a whole are not directly relevant to this discussion and are beyond the scope of this site in any case. Suffice it to say that therapists and psychologists do not consider his theories to be either true or useful, that they do consider them to be unscientific and a self-reinforcing belief system, and that many of the concepts he introduced, which have been taken up into the culture at large, are invalid. Not all of his work is so flawed, though, particularly his early ideas.
There is a peculiar relationship between the nature of those defense mechanisms and the intuitive fallacies.
When confronted with a contradiction in their emotionally-charged arguments, people who normally reasoned quite appropriately would suddenly begin to fall into fallacies. What's more, they would be unable to see the errors in their reasoning. Even more extraordinarily, they would often reach conclusions which were superficially related to the correct ones, but which were applied to the wrong concepts, situations, and individuals. In projection, for example, motives and traits belonging to the patients are instead asserted to belong to others or even the world itself; properties within the psyche are "projected" outward. So when a therapist demonstrated to a patient that some of their beliefs were incompatible or their arguments were contradictory, the patient might assert that the therapist was the one who had the irrational concern or obsession.
In such cases it seems clear that there is an awareness of some kind that the unpleasant conclusion must be reached, but not of where the property must be attributed. Accusing the therapist of possessing the unacceptable property seems to reduce tension of some sort - it's a relief that people actively seek out and will vehemently, even violently, defend.
Guilt, hate, fear, forbidden joys and loves - there are countless ways people will deny that they possess them. But they all tend to fall into certain predictable patterns, much as the wild diversity of snowflakes still exhibits repeating, similar forms.
Why is this the case? Ultimately, it took research into concept formation before psychology could really produce an answer to that question.
Next time: associational thought and the implications for rationality.
Perhaps a better example of a "fallacy" that is really just a mismatch between expected versus actual meanings is how many doctors fail to accurately estimate the probability that a positive test result is a false positive. It's called the base rate fallacy and works like this (from here):
Given that a woman has a positive result, what is the probability that she actually has breast cancer?
It turns out that the probability she has cancer is only 7.76%, but many doctors would overestimate it.
Well, I argue that this is a mismatch between the intellectual way a mathematician would present the problem and the way a doctor experiences medicine. A doctor who frequently gives mammograms would certainly learn over time that most women with a positive result don't have breast cancer. From their point of view, the occurrence of 'false positives' -- i.e., results that were positive but false -- is 92%. Yet on a pen-and-paper test they are told that the rate of false positives is 9.6%, and they misinterpret this.
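Both figures can be checked with Bayes' theorem. Here's a minimal sketch, assuming the numbers from the standard version of this problem (1% prevalence, 80% sensitivity, 9.6% false-positive rate -- the prevalence and sensitivity are my assumption, since only the 9.6% appears above), which reproduces both the 7.76% and the 92% figures:

```python
# Assumed figures for the standard version of this screening problem:
prevalence = 0.01        # P(cancer): 1% of women screened have breast cancer
sensitivity = 0.80       # P(positive | cancer): true-positive rate
false_pos_rate = 0.096   # P(positive | no cancer): the 9.6% "false positive rate"

# P(positive), by the law of total probability
p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos_rate

# P(cancer | positive), by Bayes' theorem
p_cancer_given_positive = prevalence * sensitivity / p_positive

print(f"{p_cancer_given_positive:.2%}")      # → 7.76% (the "correct" answer)
print(f"{1 - p_cancer_given_positive:.0%}")  # → 92% (share of positives that are false)
```

The two "false positive" numbers are both right; they just condition on different things. The 9.6% is P(positive | no cancer), while the doctor's experienced 92% is P(no cancer | positive) -- one more instance of treating a unidirectional conditional as if it ran equally well both ways.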
On the one hand, you could simply explain very clearly what is meant by this other rate of false positives. Doctors are generally intelligent and can understand the circumlocution. On the other hand, you could instead give them the more natural figure -- the one that jibes with experience -- that 92% of positives are false, and remove the fallacy altogether.
I think that mathematicians have an obligation to use definitions that jibe with experience (good mathematics always does?), especially rather than calling common sense "fallacious" when, actually, it is just being more Bayesian than frequentist.