"Outside the laboratory, scientists are no wiser than anyone else."  Sometimes this proverb is spoken by scientists, humbly, sadly, to remind themselves of their own fallibility.  Sometimes this proverb is said for rather less praiseworthy reasons, to devalue unwanted expert advice.  Is the proverb true?  Probably not in an absolute sense.  It seems much too pessimistic to say that scientists are literally no wiser than average, that there is literally zero correlation.

But the proverb does appear true to some degree, and I propose that we should be very disturbed by this fact.  We should not sigh, and shake our heads sadly.  Rather we should sit bolt upright in alarm.  Why?  Well, suppose that an apprentice shepherd is laboriously trained to count sheep, as they pass in and out of a fold.  Thus the shepherd knows when all the sheep have left, and when all the sheep have returned.  Then you give the shepherd a few apples, and say:  "How many apples?"  But the shepherd stares at you blankly, because they weren't trained to count apples - just sheep.  You would probably suspect that the shepherd didn't understand counting very well.

Now suppose we discover that a Ph.D. economist buys a lottery ticket every week.  We have to ask ourselves:  Does this person really understand expected utility, on a gut level?  Or have they just been trained to perform certain algebra tricks?
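
The arithmetic itself is not subtle.  As a minimal sketch in Python - with the ticket price, jackpot, and odds invented for illustration, not taken from any real lottery:

```python
# Expected value of one lottery ticket.
# All numbers below are invented for illustration.
ticket_price = 1.00        # dollars
jackpot = 10_000_000.00    # dollars
p_win = 1 / 100_000_000    # assumed one-in-a-hundred-million odds

expected_value = p_win * jackpot - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")  # about -$0.90
```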

One thinks of Richard Feynman's account of a failing physics education program:

"The students had memorized everything, but they didn't know what anything meant.  When they heard 'light that is reflected from a medium with an index', they didn't know that it meant a material such as water.  They didn't know that the 'direction of the light' is the direction in which you see something when you're looking at it, and so on.  Everything was entirely memorized, yet nothing had been translated into meaningful words.  So if I asked, 'What is Brewster's Angle?' I'm going into the computer with the right keywords.  But if I say, 'Look at the water,' nothing happens - they don't have anything under 'Look at the water'!"

Suppose we have an apparently competent scientist, who knows how to design an experiment on N subjects; the N subjects will receive a randomized treatment; blinded judges will classify the subject outcomes; and then we'll run the results through a computer and see if the results are significant at the 0.05 significance level.  Now this is not just a ritualized tradition.  This is not a point of arbitrary etiquette like using the correct fork for salad.  It is a ritualized tradition for testing hypotheses experimentally.  Why should you test your hypothesis experimentally?  Because you know the journal will demand so before it publishes your paper?  Because you were trained to do it in college?  Because everyone else says in unison that it's important to do the experiment, and they'll look at you funny if you say otherwise?

No: because, in order to map a territory, you have to go out and look at the territory.  It isn't possible to produce an accurate map of a city while sitting in your living room with your eyes closed, thinking pleasant thoughts about what you wish the city was like.  You have to go out, walk through the city, and write lines on paper that correspond to what you see.  It happens, in miniature, every time you look down at your shoes to see if your shoelaces are untied.  Photons arrive from the Sun, bounce off your shoelaces, strike your retina, are transduced into neural firing frequencies, and are reconstructed by your visual cortex into an activation pattern that is strongly correlated with the current shape of your shoelaces.  To gain new information about the territory, you have to interact with the territory.  There has to be some real, physical process whereby your brain state ends up correlated to the state of the environment.  Reasoning processes aren't magic; you can give causal descriptions of how they work.  Which all goes to say that, to find things out, you've got to go look.

Now what are we to think of a scientist who seems competent inside the laboratory, but who, outside the laboratory, believes in a spirit world?  We ask why, and the scientist says something along the lines of:  "Well, no one really knows, and I admit that I don't have any evidence - it's a religious belief, it can't be disproven one way or another by observation."  I cannot but conclude that this person literally doesn't know why you have to look at things.  They may have been taught a certain ritual of experimentation, but they don't understand the reason for it - that to map a territory, you have to look at it - that to gain information about the environment, you have to undergo a causal process whereby you interact with the environment and end up correlated to it.  This applies just as much to a double-blind experimental design that gathers information about the efficacy of a new medical device, as it does to your eyes gathering information about your shoelaces.

Maybe our spiritual scientist says:  "But it's not a matter for experiment.  The spirits spoke to me in my heart."  Well, if we really suppose that spirits are speaking in any fashion whatsoever, that is a causal interaction and it counts as an observation.  Probability theory still applies.  If you propose that some personal experience of "spirit voices" is evidence for actual spirits, you must propose that there is a favorable likelihood ratio for spirits causing "spirit voices", as compared to other explanations for "spirit voices", which is sufficient to overcome the prior improbability of a complex belief with many parts.  Failing to realize that "the spirits spoke to me in my heart" is an instance of "causal interaction", is analogous to a physics student not realizing that a "medium with an index" means a material such as water.
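
To make that requirement concrete, here is a minimal sketch of the odds form of Bayes's Theorem - every number in it is an illustrative placeholder, not a measurement:

```python
# Posterior odds = prior odds * likelihood ratio (odds form of Bayes's Theorem).
# All numbers below are illustrative placeholders.
prior_odds = 1e-9                  # assumed prior odds of a spirit world
p_voices_given_spirits = 0.5       # assumed P("spirit voices" | spirits)
p_voices_given_no_spirits = 0.1    # assumed P("spirit voices" | imagination, etc.)

likelihood_ratio = p_voices_given_spirits / p_voices_given_no_spirits
posterior_odds = prior_odds * likelihood_ratio
print(f"Likelihood ratio: {likelihood_ratio:.1f}")   # 5.0
print(f"Posterior odds:   {posterior_odds:.1e}")     # 5.0e-09: barely dented
```

A likelihood ratio of 5 to 1 is a causal interaction and counts as evidence; it is simply nowhere near enough to overcome a prior like that.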

It is easy to be fooled, perhaps, by the fact that people wearing lab coats use the phrase "causal interaction" and that people wearing gaudy jewelry use the phrase "spirits speaking".  Discussants wearing different clothing, as we all know, demarcate independent spheres of existence - "separate magisteria", in Stephen Jay Gould's immortal blunder of a phrase.  Actually, "causal interaction" is just a fancy way of saying, "Something that makes something else happen", and probability theory doesn't care what clothes you wear.

In modern society there is a prevalent notion that spiritual matters can't be settled by logic or observation, and therefore you can have whatever religious beliefs you like.  If a scientist falls for this, and decides to live their extralaboratorial life accordingly, then this, to me, says that they only understand the experimental principle as a social convention.  They know when they are expected to do experiments and test the results for statistical significance.  But put them in a context where it is socially conventional to make up wacky beliefs without looking, and they just as happily do that instead.

The apprentice shepherd is told that if "seven" sheep go out, and "eight" sheep go out, then "fifteen" sheep had better come back in.  Why "fifteen" instead of "fourteen" or "three"?  Because otherwise you'll get no dinner tonight, that's why!  So that's professional training of a kind, and it works after a fashion - but if social convention is the only reason why seven sheep plus eight sheep equals fifteen sheep, then maybe seven apples plus eight apples equals three apples.  Who's to say that the rules shouldn't be different for apples?

But if you know why the rules work, you can see that addition is the same for sheep and for apples.  Isaac Newton is justly revered, not for his outdated theory of gravity, but for discovering that - amazingly, surprisingly - the celestial planets, in the glorious heavens, obeyed just the same rules as falling apples.  In the macroscopic world - the everyday ancestral environment - different trees bear different fruits, different customs hold for different people at different times.  A genuinely unified universe, with stationary universal laws, is a highly counterintuitive notion to humans!  It is only scientists who really believe it, though some religions may talk a good game about the "unity of all things".

As Richard Feynman put it:

"If we look at a glass closely enough we see the entire universe. There are the things of physics: the twisting liquid which evaporates depending on the wind and weather, the reflections in the glass, and our imaginations adds the atoms. The glass is a distillation of the Earth's rocks, and in its composition we see the secret of the universe's age, and the evolution of the stars. What strange array of chemicals are there in the wine? How did they come to be? There are the ferments, the enzymes, the substrates, and the products. There in wine is found the great generalization: all life is fermentation. Nobody can discover the chemistry of wine without discovering, as did Louis Pasteur, the cause of much disease. How vivid is the claret, pressing its existence into the consciousness that watches it! If our small minds, for some convenience, divide this glass of wine, this universe, into parts — physics, biology, geology, astronomy, psychology, and so on — remember that Nature does not know it! So let us put it all back together, not forgetting ultimately what it is for. Let it give us one more final pleasure: drink it and forget it all!"

A few religions, especially the ones invented or refurbished after Isaac Newton, may profess that "everything is connected to everything else".  (Since there is a trivial isomorphism between graphs and their complements, this profound wisdom conveys exactly the same useful information as a graph with no edges.)  But when it comes to the actual meat of the religion, prophets and priests follow the ancient human practice of making everything up as they go along.  And they make up one rule for women under twelve, another rule for men over thirteen; one rule for the Sabbath and another rule for weekdays; one rule for science and another rule for sorcery...

Reality, we have learned to our shock, is not a collection of separate magisteria, but a single unified process governed by mathematically simple low-level rules.  Different buildings on a university campus do not belong to different universes, though it may sometimes seem that way.  The universe is not divided into mind and matter, or life and nonlife; the atoms in our heads interact seamlessly with the atoms of the surrounding air.  Nor is Bayes's Theorem different from one place to another.

If, outside of their specialist field, some particular scientist is just as susceptible as anyone else to wacky ideas, then they probably never did understand why the scientific rules work.  Maybe they can parrot back a bit of Popperian falsificationism; but they don't understand on a deep level - the algebraic level of probability theory, the causal level of cognition-as-machinery.  They've been trained to behave a certain way in the laboratory, but they don't like to be constrained by evidence; when they go home, they take off the lab coat and relax with some comfortable nonsense.  And yes, that does make me wonder if I can trust that scientist's opinions even in their own field - especially when it comes to any controversial issue, any open question, anything that isn't already nailed down by massive evidence and social convention.

Maybe we can beat the proverb - be rational in our personal lives, not just our professional lives.  We shouldn't let a mere proverb stop us:  "A witty saying proves nothing," as Voltaire said.  Maybe we can do better, if we study enough probability theory to know why the rules work, and enough experimental psychology to see how they apply in real-world cases - if we can learn to look at the water.  An ambition like that lacks the comfortable modesty of being able to confess that, outside your specialty, you're no better than anyone else.  But if our theories of rationality don't generalize to everyday life, we're doing something wrong.  It's not a different universe inside and outside the laboratory.

Addendum:  If you think that (a) science is purely logical and therefore opposed to emotion, or (b) that we shouldn't bother to seek truth in everyday life, see "Why Truth?"  For new readers, I also recommend "Twelve Virtues of Rationality."

Comments

Tim Worstall, if a PhD economist has pleasurable dreams about winning the lottery, that is exactly what I would call "failing to understand probability on a gut level". Look at the water! A calculated probability of 0.0000001 should diminish the emotional strength of any anticipation, positive or negative, by a factor of ten million. Otherwise you've understood the probability as little symbols on paper but not what it means in real life.

Also, a good economist should be aware that winning the lottery often does not make people happy - though one must take into account that they were the sort of people who bought lottery tickets to begin with.

Tim Worstall, if a PhD economist has pleasurable dreams about winning the lottery, that is exactly what I would call "failing to understand probability on a gut level"

In that case, wouldn't you say that anyone who suffers from akrasia (which is pretty much everyone at some time) has a failure of understanding on a gut level? My subconscious mind doesn't seem to understand that it's a bad idea to eat a box of pizza every night; so I have to rely on my conscious mind to take charge, or at least try to.

Occasionally even health-conscious people eat stuff like pizza, which is arguably the equivalent of buying the occasional lottery ticket. In each case, the conscious mind is aware that one is doing something counter-productive. In the case of a lottery ticket, one is enjoying the fantasy of being free from his day-to-day financial worries, even though there is essentially zero chance of actually succeeding. In the case of pigging out, one is enjoying the feeling of being stuffed with tasty food, even though there is essentially zero chance that there will be a food shortage next week which will justify his having pigged out.

The point is not that scientists should be perfect in all spheres of human endeavor. But neither should anyone who really understands science, deliberately start believing things without evidence. It's not a moral question, merely a gross and indefensible error of cognition. It's the equivalent of being trained to say that 2 + 2 = 4 on math tests, but when it comes time to add up a pile of candy bars you decide that 2 + 2 ought to equal 5 because you want 5 candy bars. You may do well on math tests, when you apply the rules that have been trained into you, but you don't understand numbers. Similarly, if you deliberately believe without evidence, you don't understand cognition or probability theory. You may understand quarks, or cells, but not science.

Newton may have been a hotshot physicist by the standards of the 17th century, but he wasn't a hotshot rationalist by the standards of this one. (Laplace, on the other hand, was explicitly a probability theorist as well as a physicist, and he was an outstanding rationalist by the standards of that era.)

Yes, academics largely train people to follow various standard procedures as social conventions, instead of getting people to really understand the reasons for those conventions. Apparently it is very hard to teach and test regarding the underlying reasons. That is the fact that really gives me pause.

Yes, there have been many great scientists who believed in utter crap - though fewer of them, and with weaker belief, as you move toward modern times.

And there have also been many great jugglers who didn't understand gravity, differential equations, or how their cerebellar cortex learned realtime motor skills. The vast majority of historical geniuses had no idea how their own brains worked, however brainy they may have been.

You can make an amazing discovery, and go down in the historical list of great scientists, without ever understanding what makes Science work. Though you couldn't build a scientist, just like you couldn't build a juggler without knowing all that stuff about gravity and differential equations and error correction in realtime motor skills.

I still wouldn't trust such a person's opinion about a controversial issue in which they had an emotional stake. I couldn't rely on them to know the difference between evidence and a wish to believe. If they can compartmentalize their brains for a spirit world, maybe they compartmentalize their brains for scientific controversies too - who knows? If they gave in to temptation once, why not again? I'll find someone else to ask for their summary of the issues.

I know for a fact that some scientists, even some world-renowned scientists, are morons outside of their own field. I used to manage construction at a Big 10 University, and had many conversations like this one:

BRILLIANT SCIENTIST, looking over my estimate for a remodelling project on his floor: "What the heck is this, $4000 for a door? A door? I just replaced the front door of my house for $500!"

ME: "Sir, your house is made of wood, and the doors don't have to meet any particular fire code. This building is concrete and steel, and the doors have to be 90-minute fire-rated. This means, among other things, that the door slab has to be hollow metal, which means it is heavy, which means that the frame, hinges, latch, and handle all have to be much sturdier than the hardware on wood doors. Also, the carpenter who will install this door is probably getting paid more than carpenters who work residential, and he's going to have to spend more time on it because it is more complicated. Finally, the lock core has to match all the rest of the cores in this building, so as not to mess up the keying system."

BS: "Don't give me that! This is ridiculous!"

I wish I had a dime for every time this happened . . . .

Do you have any idea of whether the first flash of stubborn anger (probably status driven) ever gets undercut by later reflection?

A calculated probability of 0.0000001 should diminish the emotional strength of any anticipation, positive or negative, by a factor of ten million.

I don't play the lottery, but I sometimes have pleasurable daydreams about what I'd do if I were some great success - found the cure for cancer, proved P=NP, won a Nobel prize... objectively speaking, the probability is extremely low, but it doesn't scale my pleasure down by a million times.

"Now suppose we discover that a Ph.D. economist buys a lottery ticket every week. We have to ask ourselves: Does this person really understand expected utility, on a gut level?"

Tricky question. If we look purely at the financial return, the odds, then no. If we look at the return in utility, possibly yes.

Is $1 too much to pay for a couple of days of pleasurable dreams about what one would do if one won? Don't we think that such fleeing from reality has some value to the one entering such a fantasy, a suspension of the rules of the real world?

If we don't agree that that has some value then it's going to be terribly difficult to explain why people spend $8 to go to the movies for 90 minutes.

I don't buy lottery tickets... but I still dream about what I'd do if I won. I realised a while back that I don't actually have to pay to have those dreams.

Sorry, ambiguous wording. 0.05 is too weak, and should be replaced with, say, 0.005. It would be a better scientific investment to do fewer studies with twice as many subjects and have nearly all the reported results be replicable. Unfortunately, this change has to be standardized within a field, because otherwise you're deliberately handicapping yourself in an arms race.

Ah, yes, I see. I understand and lean instinctively towards agreeing. Certainly I agree about the standardization problem. I think it's rather difficult to determine what is the best number, though. 0.005 is just as much pulled out of a hat as Fisher's 0.05.
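
For what it's worth, a back-of-the-envelope power calculation suggests that doubling the subjects does roughly pay for the stricter threshold. A minimal sketch, assuming an idealized one-sided two-sample z-test with known unit variance and an invented effect size:

```python
# Does doubling n per group pay for tightening alpha from 0.05 to 0.005?
# Idealized one-sided two-sample z-test with known unit variance.
from statistics import NormalDist

norm = NormalDist()

def power(effect_size, n_per_group, alpha):
    """Probability of rejecting the null when the true effect is effect_size."""
    z_crit = norm.inv_cdf(1 - alpha)
    return norm.cdf(effect_size * (n_per_group / 2) ** 0.5 - z_crit)

print(f"{power(0.5, 64, 0.05):.2f}")    # 0.88 at the conventional threshold
print(f"{power(0.5, 128, 0.005):.2f}")  # 0.92 with stricter alpha, doubled n
```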

From your "A Technical Explanation of Technical Explanation":

Similarly, I wonder how many bettors on horse races realize that you don't win by betting on the horse you think will win the race, but by betting on horses whose payoffs exceed what you think are the odds. But then, statistical thinkers that sophisticated would probably not bet on horse races.

Now I know that you aren't familiar with gambling. The latter is precisely what the professional gamblers do, and some of them do bet on horse races, or sports. Professional gamblers, unlike the amateurs, are sophisticated statistical thinkers. (And horse races are acceptable for sophisticated gamblers because there's only the small vigorish involved, and there's plenty of area for specialized knowledge.)

I think you've made a common statistical fallacy. Perhaps "someone who bets on horse races is probably not a sophisticated statistical thinker." But it does not necessarily follow that "someone who is a sophisticated statistical thinker probably does not bet on horse races." Bayes's Theorem, my man. :)

I know plenty of math Ph.D.s and grad students who do gamble online and look for arbitrage in a variety of ways. Whether they're representative I don't know.

John, I consider myself a 'Bayesian wannabe' and my favorite author thereon is E. T. Jaynes. As such, I follow Jaynes in vehemently denying that the posterior probability following an experiment should depend on "whether Alice decided ahead of time to conduct 12 trials or decided to conduct trials until 3 successes were achieved". See Jaynes's Probability Theory: The Logic of Science.

The 0.05 significance level is not just "arbitrary", it is demonstrably too high - in some fields the actual majority of "statistically significant" results fail to replicate, but the failures to replicate don't get into the prestigious journals, and are not talked about and remembered.

Apparently it is very hard to teach and test regarding the underlying reasons.

Does "apparently" (in general) mean you aren't using additional sources of information? In this case, are you concluding that it's difficult simply from the fact that it isn't done? That only seems to me like evidence that it's not worth it. Unfortunately, the value driving the system is getting published, not advancing science.

Joseph, how did they get these "competing rules" in the first place? By making them up as they went along. So, in accordance with human psychology, they make up lots of different rules for different occasions that "feel different". Both sides (or all sides) of any religious battle do this, and it doesn't matter who wins, they still won't come up with a unified answer.

Shouldn't that lead to at least some (if very poor) "testing" of rules over time? Some (such as taboos which strengthen social cohesion or which inadvertently help avoid dangerous behavior) would help the group adapt, whilst others (which do neither) would be unlikely to continue.

In my head, I always translate so-called "statistically significant" results into (an often poorly-computed approximation to) a likelihood ratio of 0.05 over the null hypothesis. I believe that experiments should report likelihood ratios.

I am an infinite set atheist - have you ever actually seen an infinite set?

I am a "subjective/objective" Bayesian. If we are ignorant about a phenomenon, this is a fact about our state of mind, not a fact about the phenomenon. Probabilities are in the mind, not in the environment. Nonetheless I follow a correspondence, rather than a coherentist, theory of truth: we are trying to concentrate as much subjective probability mass as possible into (the mental representation that corresponds to) the real state of affairs. See my "The Simple Truth" and "A Technical Explanation of Technical Explanation".

Probability theory still applies.

Ah, but which probability theory? Bayesian or frequentist? Or the ideas of Fisher?

How do you feel about the likelihood principle? The Behrens-Fisher problem, particularly when the variances are unknown and not assumed to be equal? The test of a sharp (or point) null hypothesis?

It does no good to assume that one's statistics and probability theory are not built on axioms themselves. I have rarely met a probabilist or statistician whose answer about whether he or she believes in the likelihood principle or in the logically contradicted significance tests (or in various solutions of the Behrens-Fisher problem) does not depend on some sort of axiom or idea of what simply "seems right." Of course, there are plenty of scientists who use mutually contradictory statistical tests, depending on what they're doing.

A calculated probability of 0.0000001 should diminish the emotional strength of any anticipation, positive or negative, by a factor of ten million.

And there goes Walter Mitty and Calvin, then. If it is justifiable to enjoy art or sport, why is it not justifiable to enjoy gambling for its own sake?

if the results are significant at the 0.05 significance level. Now this is not just a ritualized tradition. This is not a point of arbitrary etiquette like using the correct fork for salad.

The use of the 0.05 significance level is itself a point of arbitrary etiquette. The idea that two nearly identical results - one barely meeting the arbitrary 0.05 significance level and the other barely missing it - can be separated into the categories "significant" and "not significant" is indeed a ritualized tradition, perhaps not understood by many scientists. There are important reasons for having an arbitrary point to mark significance, and for having that custom be the same throughout science (and not chosen by the experimenter). But the actual point is arbitrary etiquette.

The commonality of utensils or traffic signals in a culture is important, even though the specific forms that they take are arbitrary. The exact significance level used is arbitrary; it's important that there is a standard.

Nor is Bayes's Theorem different from one place to another.

No, but the statistical concept of "significance" depends on how an experimenter thinks that a study was designed. See for example this discussion of the likelihood principle.

If Alice conducts 12 trials with 3 successes and 9 failures, do we reject the null hypothesis p = .5 in favor of p < .5 at the 0.05 significance level? It turns out that the answer depends, in the classical frequentist sense, on whether Alice decided ahead of time to conduct 12 trials or decided to conduct trials until 3 successes were achieved. What if Alice drops dead after recording the results of the trials but not the setup? Then Bob and Chuck, finding the notebook, may disagree about significance. The "significance" depends on the design of the experiment rather than the results alone, according to classical methods.

How many scientists understand that?
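
To spell out the Alice example numerically - a minimal sketch, assuming the standard one-sided tests for the two designs:

```python
# Same data (3 successes, 9 failures), two stopping rules, two p-values.
from math import comb

n, k = 12, 3  # trials, successes

# Design 1: Alice fixed n = 12 trials in advance (binomial sampling).
# p-value = P(at most 3 successes in 12 trials | p = .5)
p_fixed_n = sum(comb(n, i) for i in range(k + 1)) / 2 ** n

# Design 2: Alice sampled until the 3rd success (negative binomial sampling).
# p-value = P(3rd success arrives on trial 12 or later | p = .5)
#         = P(at most 2 successes in the first 11 trials | p = .5)
p_until_k = sum(comb(n - 1, i) for i in range(k)) / 2 ** (n - 1)

print(f"Fixed-n design:           p = {p_fixed_n:.4f}")  # 0.0730: not significant
print(f"Sample-until-3-successes: p = {p_until_k:.4f}")  # 0.0327: significant
```

Same notebook, same tally of successes and failures, opposite verdicts at 0.05.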

John Thacker:

I consider myself a finitist, but not an ultrafinitist; I believe in the existence of numbers expressed using Conway chained arrow notation. I am also willing to reject finitism iff a physical theory is constructed which requires me to believe in infinite quantities. I tentatively believe in real numbers and differential equations because physics requires them (though I also hold out hope that e.g. holographic physics or some other discrete view may enable me to go digital again). However, I don't believe that the real numbers in physics are really made of Dedekind cuts, or any other sort of infinite set. I am willing to relinquish my skepticism if a high-energy supercollider breaks open a real number and we find an infinite number of rational numbers bopping around inside it.

I consider the Axiom of Choice to be a work of literary fiction, like "Lord of the Rings".

Bayesian probability theory works quite well on finite sets. Real-world problems are finite. Why should I need to accept infinity to use Bayes on real-world problems?

The two-envelopes problem shows the necessity of having a finite prior.

Gödel's completeness theorem shows that any first-order statement true in all models of a set of first-order axioms is provable from those axioms. Thus, the failure of Peano Arithmetic to prove itself consistent is because there are many "supernatural" models of PA in which PA itself is not consistent; that is, there exist supernatural numbers corresponding to proofs of P&~P. PA shouldn't prove itself consistent because that assertion does not in fact follow from the axioms of PA. (This view was suggested to me by Steve Omohundro.) Now, I don't believe in these supernatural numbers, but PA hasn't been given enough information to rule them out, and so it is behaving properly in refusing to assert its own consistency.

I have no desperate psychological need for absolute certainty or proof, which, even if PA proved itself sound, I couldn't have in any case, because I would have to believe in PA's soundness before I trusted its proof of soundness. Or maybe I'm in the grips of a Cartesian demon playing with my mathematical abilities.

Correspondence, not coherence, very easily justifies mathematics. Math can make successful predictions, ergo, it's probably true. No one has ever seen an infinite set, ergo, they probably don't exist, and at any rate I have no reason to believe in them.

Heh. Fair enough, John, I suppose that someone has to arbitrage the books. I'll add it to Jane Galt's observation regarding the genuine usefulness of salad forks.

I agree that 0.005 is equally pulled out of a hat. But I also agree on your earlier observation regarding there being some necessity for standardization here.

Personally, I would prefer to standardize "small", "medium", and "large" effect sizes, then report likelihood ratios over the point null hypothesis. A very strong advantage of this approach is that it lets someone do a large study and report a startling likelihood advantage of 1000 for "no effect" over "small effect", rather than just the boring old phrase "not statistically significant". This is probably worth its own post, but I may not get around to writing it.
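
As a sketch of what such a report might look like - the hypotheses and data below are invented for illustration:

```python
# Likelihood ratio of a point null (p = 0.5) over an assumed "small effect"
# alternative (p = 0.55), for coin-flip-style data. Computed in log space,
# since the raw likelihoods underflow for large n; the C(n, k) term cancels.
from math import exp, log

def log_likelihood_ratio(k, n, p0, p1):
    """log of L(p0)/L(p1) for k successes in n trials."""
    return k * (log(p0) - log(p1)) + (n - k) * (log(1 - p0) - log(1 - p1))

n, k = 1400, 700  # invented: a large study landing exactly on the null rate

lr = exp(log_likelihood_ratio(k, n, 0.5, 0.55))
print(f"Likelihood ratio, 'no effect' over 'small effect': {lr:,.0f}")  # ~1,100
```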

"And there goes Walter Mitty and Calvin, then. If it is justifiable to enjoy art or sport, why is it not justifiable to enjoy gambling for its own sake?"

You don't have to believe (at any level) that there's a higher chance of you winning than there actually is to enjoy gambling. You just have to consider that the "thrill" payoff inherent in the uncertainty itself is high enough to justify the money that will, on average, be lost. I think exactly the same argument could be made about sport.

Douglas, I have found it hard to teach when I have tried, but I'm sure another reason it is rarely done is that academic rewards for it tend to be small relative to the costs.

In sum, I agree, but one small issue I take is with the argument that when someone acts contrary to their learning, it demonstrates that they don't really understand it. I'm sure this is often the case, but sometimes it's a matter of akrasia: the person knows what they should do and why, even deep down inside, yet finds themselves unable to do it.

Humans suffer heavily from their biases. I recall that in middle school I came to the conclusion that no deities existed, yet it took me a long while to act on it because of social pressures, so I continued to behave contrary to my beliefs out of fear. It was only later in life that I gained the self-confidence and bravery to act upon my beliefs, no matter how contrary to the social norm.

You might say that I didn't really understand, and that if I did I would have acted differently, but I find this contrary to my own experience, and this is only one such example. The human brain is a minefield, and even when we understand, we may still fail to act correctly.

Hmmm...

Q) Why do I believe that special relativity is true? A) Because scientists have told me their standards of evidence, and that the evidence for special relativity meets those standards.

I haven't seen anything contract when moving close to the speed of light. I haven't measured the speed of light in a vacuum and found that it is independent of the non-accelerating motion of the observer. I haven't measured a change in mass during nuclear reactions. I simply hear what people tell me, and decide to believe it.

George Orwell put it far more elegantly, and you can read what he wrote at http://www.newenglishreview.org/blog_direct_link.cfm?blog_id=4274

I can try to apply filters to determine who I can regard as a legitimate authority on various topics. Anyone whose arguments are logically inconsistent is obviously right out. I can check credentials. I can ask people why they accept a claim, and if I disapprove of their standard of evidence, I can give their claims less credence. I can see if the topic is controversial among those whose standards of evidence I respect, and if it is, I can refrain from judgment on the grounds that if there were strong evidence either way, there would be no controversy.

Many things tend to be such that we have to act without anywhere near the amount of evidence that even the social sciences demand. How should I invest my money? What will make me more attractive to potential mates? Who should I vote for? Is (insert enemy here) really a dire threat that my country needs to fight and defeat? What career should I pursue? Which person should I hire? It's really hard to design and perform experiments to answer questions like this. Heck, we still don't even know what kind of food is best to eat!