James,

Nice of you to drop by and comment -- I still remember that really interesting discussion about price indexes we had a few months ago!

One thing I find curious in economics is that basically anything studied under that moniker is considered to belong to a single discipline, and economists of all sorts apparently recognize each other as professional colleagues (even when they bitterly attack each other in ideological disputes). This despite the fact that the intellectual standards in the various subfields of economics vary enormously in quality, ranging from very solid to downright pseudoscientific. And while I occasionally see economists questioning the soundness of their discipline, it's always formulated as questioning the soundness of economics in general, instead of a more specific and realistic observation that micro is pretty solid as long as one knows and respects the limitations of one's models, whereas macro is basically just pseudoscience.

Is the tendency for professional solidarity really that strong, or am I perhaps misperceiving this situation as an outsider?

Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields

(This post is an expanded version of a LW comment I left a while ago. I have found myself referring to it so much in the meantime that I think it’s worth reworking into a proper post. Some related posts are "The Correct Contrarian Cluster" and "What is Bunk?")

When looking for information about some area outside of one’s expertise, it is usually a good idea to first ask what academic scholarship has to say on the subject. In many areas, there is no need to look elsewhere for answers: respectable academic authors are the richest and most reliable source of information, and people claiming things completely outside the academic mainstream are almost certain to be crackpots. 

The trouble is, this is not always the case. Even those whose view of modern academia is much rosier than mine should agree that it would be astonishing if there didn't exist at least some areas where the academic mainstream is detached from reality on important issues, while much more accurate views are scorned as kooky (or would be if they were heard at all). Therefore, depending on the area, the fact that a view is way out of the academic mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.

I will discuss some heuristics that, in my experience, provide a realistic first estimate of how sound the academic mainstream in a given field is likely to be, and how justified one would be to dismiss contrarians out of hand. These conclusions have come from my own observations of research literature in various fields and some personal experience with the way modern academia operates, and I would be interested in reading others’ opinions. 

Low-hanging fruit heuristic

As the first heuristic, we should ask if there is a lot of low-hanging fruit available in the given area, in the sense of research goals that are both interesting and doable. If yes, this means that there are clear paths to quality work open for reasonably smart people with an adequate level of knowledge and resources, which makes it unnecessary to invent clever-looking nonsense instead. In this situation, smart and capable people can just state a sound and honest plan of work on their grant applications and proceed with it.

In contrast, if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that will have all the superficial trappings of science and sound scholarship, but will in fact be increasingly pointless and detached from reality. 

Arguably, some areas of theoretical physics have reached this state, if we are to trust the critics like Lee Smolin. I am not a physicist, and I cannot judge directly if Smolin and the other similar critics are right, but some powerful evidence for this came several years ago in the form of the Bogdanoff affair, which demonstrated that highly credentialed physicists in some areas can find it difficult, perhaps even impossible, to distinguish sound work from a well-contrived nonsensical imitation. [1]

Somewhat surprisingly, another example is presented by some subfields of computer science. With all the new computer gadgets everywhere, one would think that no other field could be further from a stale dead end. In some of its subfields this is definitely true, but in others, much of what is studied is based on decades-old major breakthroughs, and the known viable directions from there have long since been explored until they hit against some fundamentally intractable problem. (Or alternatively, further progress is a matter of hands-on engineering practice that doesn't lend itself to the way academia operates.) This has led to a situation where a lot of the published CS research is increasingly distant from reality, because to keep the illusion of progress, it must pretend to solve problems that are basically known to be impossible. [2]

Ideological/venal interest heuristic

Bad as they might be, the problems that occur when clear research directions are lacking pale in comparison with what happens when things under discussion are ideologically charged or a matter in which powerful interest groups have a stake. As Hobbes remarked, people agree about theorems of geometry not because their proofs are solid, but because "men care not in that subject what be truth, as a thing that crosses no man’s ambition, profit, or lust." [3]

One example is the cluster of research areas encompassing intelligence research, sociobiology, and behavioral genetics, which touches on a lot of highly ideologically charged questions. These pass the low-hanging fruit heuristic easily: the existing literature is full of proposals for interesting studies waiting to be done. Yet, because of their striking ideological implications, these areas are full of work clearly aimed at advancing the authors’ non-scientific agenda, and even after a lot of reading one is left in confusion over whom to believe, if anyone. It doesn’t even matter whose side one supports in these controversies: whichever side is right (if any one is), it’s simply impossible that there isn’t a whole lot of nonsense published in prestigious academic venues and under august academic titles. 

Yet another academic area that suffers from the same problems is the history of the modern era. On many significant events from the last two centuries, there is a great deal of documentary evidence lying around still waiting to be assessed properly, so there is certainly no lack of low-hanging fruit for a smart and diligent historian. Yet due to the clear ideological implications of many historical topics, ideological nonsense cleverly masquerading as scholarship abounds. I don't think anything resembling an accurate world history of the last two centuries could be written without making a great many contrarian claims. [4] In contrast, on topics that don't arouse ideological passions, modern histories are often amazingly well researched and free of speculation and distortion. (In particular, if you are from a small nation that has never really been a player in world history, your local historians are likely to be full of parochial bias motivated by the local political quarrels and grievances, but you may be able to find very accurate information on your local history in the works of foreign historians from the elite academia.)

On the whole, it seems to me that failing the ideological interest test suggests a much worse situation than failing the low-hanging fruit test. The areas affected by just the latter are still fundamentally sound, and tend to produce work whose contribution is way overblown, but which is still built on a sound basis and internally coherent. Even if outright nonsense is produced, it’s still clearly distinguishable with some effort and usually restricted to less prestigious authors. Areas affected by ideological biases, however, tend to drift much further into outright delusion, possibly lacking a sound core body of scholarship altogether. 

[Paragraphs below added in response to comments:] 

What about the problem of purely venal influences, i.e. the cases where researchers are under the patronage of parties that have stakes in the results of their research? On the whole, the modern Western academic system is very good at discovering and stamping out clear and obvious corruption and fraud. It's clearly not possible for researchers to openly sell their services to the highest bidder; even if there are no formal sanctions, their reputation would be ruined. However, venal influences are nevertheless far from nonexistent, and a fascinating question is under what exact conditions researchers are likely to fall under them and get away with it.

Sometimes venal influences are masked by scams such as setting up phony front organizations for funding, but even that tends to be discovered eventually and tarnish the reputations of the researchers involved. What seems to be the real problem is when the beneficiaries of biased research enjoy such status in the eyes of the public and such legal and customary position in society that they don't even need to hide anything when establishing a perverse symbiosis that results in biased research. Such relationships, while fundamentally representing venal interest, are in fact often boasted about as beneficial and productive cooperation. Pharmaceutical research is an often cited example, but I think the phenomenon is in fact far more widespread, and reaches the height of perverse perfection in those research communities whose structure effectively blends into various government agencies. 

The really bad cases: failing both tests

So far, I've discussed examples where one of the mentioned heuristics returns a negative answer, but not the other. What happens when a field fails both of them, having no clear research directions and at the same time being highly relevant to ideologues and interest groups? Unsurprisingly, it tends to be really bad. 

The clearest example of such a field is probably economics, particularly macroeconomics. (Microeconomics covers an extremely broad range of issues deeply intertwined with many other fields, and its soundness, in my opinion, varies greatly depending on the subject, so I’ll avoid a lengthy digression into it.) Macroeconomists lack any clearly sound and fruitful approach to the problems they wish to study, and any conclusion they might draw will have immediately obvious ideological implications, often expressible in stark "who-whom?" terms. 

And indeed, even a casual inspection of the standards in this field shows clear symptoms of cargo-cult science: weaving complex and abstruse theories that can be made to predict everything and nothing, manipulating essentially meaningless numbers as if they were objectively measurable properties of the real world [5], experts with the most prestigious credentials dismissing each other as crackpots (in more or less diplomatic terms) when their favored ideologies clash, etc., etc. Fringe contrarians in this area (most notably extreme Austrians) typically have silly enough ideas of their own, but their criticism of the academic mainstream is nevertheless often spot-on, in my opinion.

Other examples

So, what are some other interesting case studies for these heuristics? 

An example of great interest is climate science. Clearly, the ideological interest heuristic raises a big red flag here, and indeed, there is little doubt that a lot of the research coming out in recent years that supposedly links "climate change" with all kinds of bad things is just fashionable nonsense [6]. (Another sanity check it fails is that only a tiny proportion of these authors ever hypothesize that the predicted/observed climate change might actually improve something, as if there existed some law of physics prohibiting it.) Thus, I’d say that contrarians on this issue should definitely not be dismissed out of hand; the really hard question is how much sound insight (if any) remains after one eliminates all the nonsense that’s infiltrated the mainstream. When it comes to the low-hanging fruit heuristic, I find the situation less clear. How difficult is it to achieve progress in accurately reconstructing long-term climate trends and forecasting the influences of increasing greenhouse gases? Is it hard enough that we’d expect, even absent an ideological motivation, that people would try to substitute cleverly contrived bunk for unreachable sound insight? My conclusion is that I’ll have to read much more on the technical background of these subjects before I can form any reliable opinion on these questions. 

Another example of practical interest is nutrition. Here ideological influences aren't very strong (though not altogether absent either). However, the low-hanging fruit heuristic raises a huge red flag: it's almost impossible to study these things in a sound way, controlling for all the incredibly complex and counterintuitive confounding variables. At the same time, it's easy to produce endless amounts of plausible-looking junk studies. Thus, I'd expect that the mainstream research in this area is on average pure nonsense, with a few possible gems of solid insight hopelessly buried under it, and even when it comes to very extreme contrarians, I wouldn't be tremendously surprised to see any one of them proven right at the end. My conclusion is similar when it comes to exercise and numerous other lifestyle issues.

Exceptions

Finally, what are the evident exceptions to these trends? 

I can think of some exceptions to the low-hanging fruit heuristic. One is in historical linguistics, whose standard well-substantiated methods have had great success in identifying the structure of the world’s language family trees, but give no answer at all to the fascinating question of how far back into the past the nodes of these trees reach (except of course when we have written evidence). Nobody has any good idea how to make progress there, and the questions are tantalizing. Now, there are all sorts of plausible-looking but fundamentally unsound methods that purport to answer these questions, and papers using them occasionally get published in prestigious non-linguistic journals, but the actual historical linguists firmly dismiss them as unsound, even though they have no answers of their own to offer instead. [7] It’s an example of a commendable stand against seductive nonsense.

It’s much harder to think of examples where the ideological interest heuristic fails. What field can one point out where mainstream scholarship is reliably sound and objective despite its topic being ideologically charged? Honestly, I can’t think of one.

What about the other direction -- fields that pass both heuristics but are nevertheless nonsense? I can think of e.g. artsy areas that don't make much of a pretense to objectivity in the first place, but otherwise, it seems to me that absent ideological and venal perverse incentives, and given clear paths to progress that don't require extraordinary genius, the modern academic system is great at producing solid and reliable insight. The trouble is that these conditions often don't hold in practice.

I’d be curious to see additional examples that either confirm or disprove the heuristics I’ve proposed.

Footnotes

[1] Commenter gwern has argued that the Bogdanoff affair is not a good example, claiming that the brothers were decisively shown to be frauds after they came under intense public scrutiny. However, even if this is true, the fact still remains that they initially managed to publish their work in reputable peer-reviewed venues and obtain doctorates at a reputable (though not top-ranking) university, which strongly suggests that there is much more work in the field that is equally bad but doesn't elicit equal public interest and thus never gets really scrutinized. Moreover, from my own reading about the affair, it was clear that in its initial phases several credentialed physicists were unable to make a clear judgment about their work. On the whole, I don’t think the affair can be dismissed as an insignificant accident.

[2] Moldbug’s "What’s wrong with CS research" is a witty and essentially accurate overview of this situation. He mostly limits himself to the discussion of programming language research, but a similar scenario can be seen in some other related fields too.

[3] Thomas Hobbes, Leviathan, Chapter XI.

[4] I have the impression that LW readers would mostly not be interested in a detailed discussion of the topics where I think one should read contrarian history, so I’m skipping it. In case I’m wrong, please feel free to open the issue in the comments.

[5] Oskar Morgenstern’s On the Accuracy of Economic Observations is a tour de force on the subject, demonstrating the essential meaninglessness of many sorts of numbers that economists use routinely. (Many thanks to the commenter realitygrill for directing me to this amazing book.) Morgenstern is of course far too prestigious a name to dismiss as a crackpot, so economists appear to have chosen to simply ignore the questions he raised, and his book has been languishing in obscurity and out of print for decades. It is available for download though (warning: ~31MB PDF).

[6] Some amusing lists of examples have been posted by the Heritage Foundation and the Number Watch (not intended to endorse the rest of the stuff on these websites). Admittedly, a lot of the stuff listed there is not real published research, but rather just people's media statements. Still, there's no shortage of similar things in published research either, as a search of e.g. Google Scholar will show.

[7] Here is, for example, the linguist Bill Poser dismissing one such paper published in Nature a few years ago. 

Comments


If you are going to suggest that academic climate research is not up to scratch, you need to do more than post links to pages that link to non-academic articles. Saying "you can find lots on google scholar" is not the same as actually pointing to the alleged sub-standard research.

For a long time I too was somewhat skeptical about global warming. I recognized the risk that researchers would exaggerate the problem in order to obtain more funding.

What I chose to do to resolve the matter was to deep dive into a few often-raised skeptic arguments using my knowledge of physics as a starting point, and learning whatever I needed to learn along the way (it took a while). The result was that the academic researchers won 6-0 6-0 6-0 in three sets (to use a tennis score analogy). Most striking to me was the dishonesty and lack of substance on the "skeptic" side. There was just no "there" there.

The topics I looked into were: accuracy of the climate temperature record, alleged natural causes explaining the recent heating, the alleged saturation of the atmospheric CO2 infra-red wavelengths, and the claim that the CO2 that is emitted by man is absorbed very quickly.

In retrospect I became aware that my 'skepticism' was fueled in large part by deliberate misinformation campaigns in the grand tradition of tobacco, asbestos, HFCs, DDT, etc. The same techniques, and even many of the same PR firms, are involved. As one tobacco executive said, "Our product is doubt".

An article about assessing the soundness of the academic mainstream would benefit from also discussing the ways in which the message from, and even the research done in, academia is corrupted and distorted by commercial interests. Economics is a case in point, but it is a big issue also in drug research and other aspects of medicine.

Another thing I have noticed in looking into various areas of academic research is just how much research in every field I looked at is inconclusive, inconsequential, flawed or subtly biased (look up "desk drawer bias" for example).

Edit: good article by the way, very well reasoned.

waveman:

If you are going to suggest that academic climate research is not up to scratch, you need to do more than post links to pages that link to non-academic articles. Saying "you can find lots on google scholar" is not the same as actually pointing to the alleged sub-standard research.

I agree that I should have argued and referenced that part better. What I wanted to point out is that there is a whole cottage industry of research purporting to show that climate change is supposedly influencing one thing or another, a very large part of which appears to advance hypotheses so far-fetched and weakly substantiated that they seem like obvious products of the tendency to work this super-fashionable topic into one's research whenever possible, for reasons of both status- and career-advancement.

Even if one accepts that the standard view on climate change has been decisively proven and the issue shown to be a pressing problem, I still don't see how one could escape this conclusion.

You wrote "what I chose to do to resolve the matter was to deep dive into a few often-raised skeptic arguments using my knowledge of physics as a starting point" and "deliberate misinformation campaigns in the grand tradition of tobacco [etc.]".

Less Wrong is not the place for a comprehensive argument about catastrophic AGW, but I'd like to make a general Less-Wrong-ish point about your analysis here. It is perceptive to notice that millions of dollars are spent on a shoddy PR effort by the other side. It is also perceptive to notice that many of the other side's most popular arguments aren't technically very strong. It's even actively helpful to debunk unreasonable popular arguments even if you only do it for those which are popular on the other side. However, remember that it's sadly common that regardless of their technical merits, big politicized controversies tend to grow big shoddy PR efforts associated with all factions. And even medium-sized controversies tend to attract some loud clueless supporters on both sides. Thus, it's not a very useful heuristic to consider significant PR spending, or the popularity of flaky arguments, as particularly useful evidence against the underlying factual position.

It may be "too much information [about AGW]" for Less Wrong, but I feel I should support my point in this particular controversy at least a little, so... E.g., look at the behavior of Pachauri himself in the "Glaciergate" glaciers-melting-by-2035 case. I can't read the guy's mind, and indeed find some of his behavior quite odd, so for all I know it is not "deliberate." But accidental or not, it looks rather like an episode in a misinformation campaign in the sorry tradition of big-money innumerate scare-environmentalism. Also, Judith Curry just wrote a blog post which mentions, among other things, the amount of money sloshing around in various AGW-PR-related organizations associated with anti-IPCC positions. For comparison, a rather angry critic I don't know much about (but one who should, at a minimum, be constrained by British libel law) ties the Glaciergate factoid to grants of $500K and $3M, and Greenpeace USA seems to have an annual budget of around $30M.

Also, to comment on this:

An article about assessing the soundness of the academic mainstream would benefit from also discussing the ways in which the message from, and even the research done in, academia is corrupted and distorted by commercial interests. Economics is a case in point, but it is a big issue also in drug research and other aspects of medicine.

That would fall under the "venal" part of considering the ideological/venal factors involved. I agree that I should have cited the example of drug research; the main reason I didn't do so is that I'm not confident that my impressions about this area are accurate enough.

One fascinating question about the problem of venal influences, about which I might write more in the future, is when and under what exact conditions researchers are likely to fall under them and get away with it, considering that the present system is overall very good at discovering and punishing crude and obvious corruption and fraud. As I wrote in another comment, sometimes such influences are masked by scams such as setting up phony front organizations for funding, but even that tends to be discovered eventually and tarnish the reputations of the researchers involved. What seems to be the worst problem is when the beneficiaries of biased research enjoy such status in the eyes of the public and such legal and customary position in society that they don't even need to hide anything when establishing a perverse symbiosis that results in biased research.

One marker to watch out for is a kind of selection effect.

In some fields, only 'true believers' have any motivation to spend their entire careers studying the subject in the first place, and so the 'mainstream' in that field is absolutely nutty.

Case examples include philosophy of religion, New Testament studies, Historical Jesus studies, and Quranic studies. These fields differ from, say, cryptozoology in that the biggest names in the field, and the biggest papers, are published by very smart people in leading journals and look all very normal and impressive, but those entire fields are so incredibly screwed by the selection effect that it's only "radicals" who say things like, "Um, you realize that the 'gospel of Mark' is written in the genre of fiction, right?"

I agree about the historical Jesus studies. At one point, I got intensely interested in this topic and read a dozen or so books about it by various authors (mostly on the skeptical end). My conclusion is that this is possibly the ultimate example of an area where the questions are tantalizingly interesting, but making any reliable conclusions from the available evidence is basically impossible. In the end, as you say, we get a lot of well written and impressively researched books whose content, however, is just a rationalization for the authors' opinions held for altogether different reasons.

On the other hand, I'm not sure if you're expressing support for the radical mythicist position, but if you do, I disagree. As much as Christian apologists tend to stretch the evidence in their favor, it seems to me like radical mythicists are biased in the other direction. (It's telling that the doyen of contemporary mythicism, G.A. Wells, who certainly has no inclination towards Christian apologetics, has moderated his position significantly in recent years.)

When I wrote "What is Bunk?" I thought I had a pretty good idea of the distinction between science and pseudoscience, except for some edge cases. Astrology is pseudoscience, astronomy is science. At the time, I was trying to work out a rubric for the edge cases (things like macroeconomics.)

Now, though, knowing a bit more about the natural sciences, it seems that even perfectly honest "science" is much shakier and likelier to be false than I supposed. There's apparently a high probability that the conclusions of a molecular biology paper will be false -- even if the journal is prestigious and the researchers are all at a world-class university. There's simply a lot of pressure to make results look more conclusive than they are.

In the field of machine learning, which I sometimes read the literature in, there are foundational debates about the best methods. Ideas which very smart and highly credentialed people tout often turn out to be ineffective, years down the road. Apparently smart and accomplished researchers will often claim that some other apparently smart and accomplished researcher is doing it all wrong.

If you don't actually know a field, you might think, "Oh. Tenured professor. Elite school. Dozens of publications and conferences. Huge erudition. That means I can probably believe his claims." Whereas actually, he's extremely fallible. Not just theoretically fallible, but actually has a serious probability of being dead wrong.

I guess the moral is "Don't trust anyone but a mathematician"?

I guess the moral is "Don't trust anyone but a mathematician"?

Safety in numbers? ;)

Perhaps it's useful to distinguish between the frontier of science vs. established science. One should expect the frontier to be rather shaky and full of disagreements, before the winning theories have had time to be thoroughly tested and become part of our scientific bedrock. There was a time after all when it was rational for a layperson to remain rather neutral with respect to Einstein's views on space and time. The heuristic of "is this science established / uncontroversial amongst experts?" is perhaps so boring we forget it, but it's one of the most useful ones we have.

I guess the moral is "Don't trust anyone but a mathematician"?

Theorems get published all the time that turn out to have incorrect proofs or to be not even theorems. There was a roughly decade-long period in the late 19th century during which there was a proof of the four color theorem that everyone thought was valid. And in the middle of the 20th century there were serious issues with calculating homology groups and cohomology groups of spaces, where people kept getting different answers. And then there are a handful of examples where theorems simply got more and more conditions tacked on to them as more counterexamples to the theorems became apparent. The Euler formula for polyhedra is possibly the most blatant such example.
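To spell out the last example with one concrete counterexample (the standard "picture frame" one): the formula as originally stated claims that every polyhedron with $V$ vertices, $E$ edges, and $F$ faces satisfies

```latex
\[
V - E + F = 2,
\]
% which holds for convex polyhedra, e.g. the cube: 8 - 12 + 6 = 2.
% But a toroidal "picture frame" polyhedron (a square prism with a
% square hole through it, each annular face cut into 4 trapezoids) has
%   V = 16, \quad E = 32, \quad F = 16, \quad V - E + F = 0.
% The repaired statement adds conditions, or generalizes to surfaces of genus g:
\[
V - E + F = 2 - 2g .
\]
```

Each counterexample of this kind historically forced another condition ("convex", "simply connected", faces themselves simply connected, ...) onto the theorem, which is exactly the pattern described above.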

So even the mathematicians aren't always trustworthy.

As the first heuristic, we should ask if there is a lot of low-hanging fruit available in the given area, in the sense of research goals that are both interesting and doable. If yes, this means that there are clear paths to quality work open for reasonably smart people with an adequate level of knowledge and resources, which makes it unnecessary to invent clever-looking nonsense instead. In this situation, smart and capable people can just state a sound and honest plan of work on their grant applications and proceed with it.

In contrast, if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that will have all the superficial trappings of science and sound scholarship, but will in fact be increasingly pointless and detached from reality.

This sounds like a useful heuristic, but I think there's another one almost directly opposed to it which is worth keeping in mind. In some branches of psychology, for instance, there is so much low hanging fruit that you'd think that researchers would never have a shortage of work. But instead, entire schools of psychology have persisted based on conclusions drawn from single experiments which were never followed up with the appropriate further research to narrow down the proper interpretation. I've been told that sociology and anthropology suffer similar issues.

If a field (or sub-field) doesn't exhibit enough interest in pursuing low hanging fruit, I think that's a good sign that there's a high ratio of ideological rationalization to solid research.

We'd expect most changes to the Earth's climate to be bad (on net) for its current inhabitants because the Earth has been settled in ways that are appropriate to its current climate. Species are adapted to their current environment, so if weather patterns change and the temperature goes up or down, or precipitation increases or decreases, or whatever else, that's more likely to be bad for them than good.

Similarly, humans grow crops in places where those crops grow well, live where they have access to water but not too many floods (and where they are on land rather than underwater), and so on. If the climate changes, then the number of places on Earth that would be a good place for a city might not change, but fewer of our existing cities will be in one of those places.

There are some expected benefits of global warming (e.g., "Crop productivity is projected to increase slightly at mid- to high latitudes for local mean temperature increases of up to 1-3°C depending on the crop, and then decrease beyond that in some regions"). But, unsurprisingly, climate scientists are projecting more costs than benefits, and a net cost. News articles are likely to have a further bias towards explaining negative events rather than positive ones, and may be of uneven quality (as waveman pointed out), so if you want a thorough account of the costs and benefits you should look at something like the IPCC report, which was the source of my quote about increased crop productivity.

Here's another one, what I call the layshadow heuristic: could an intelligent layperson produce passable, publishable work [1] in that field after a few days of self-study? It's named after the phenomenon in which someone with virtually no knowledge of the field sells the service of writing papers for clients who don't want to do the work themselves, is never discovered, and sees those clients granted degrees.

The heuristic works because passing it implies very low inferential distance and therefore very little knowledge accumulation.

[1] specifically, work that unsuspecting "experts" in the field cannot distinguish from that produced by "serious" researchers with real "experience" and "education" in that field.

SilasBarta:

It's named after the phenomenon in which someone with virtually no knowledge of the field sells the service of writing papers for clients who don't want to do the work themselves, is never discovered, and sees those clients granted degrees.

I agree this is indicative of serious pathology of one sort or another, but in fairness, I find it plausible that in many fields there might be a very severe divide between real scholarship done by people on the tenure track and the routine drudgery assigned to students, even graduate students who aren't aiming for the tenure track.

The pathologies of the educational side of the modern academic system are certainly a fascinating topic in its own right.

I've been surprised by how bad the majority of scholarship is around the "inspired-by" or "metaphorical" genre of algorithms - neural networks, genetic algorithms, Baum's Hayek machine and so on. My guess is that the colorful metaphors allow you to disguise any success as due to the technique rather than a grad student poking and prodding at it until a demo seems to work.

Within the metaphorical algorithms, I've been surprised at reinforcement learning in particular. It may have started with a metaphor of operant conditioning, but it has a useful mathematical foundation related to dynamic programming.
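The dynamic-programming foundation mentioned here can be sketched with value iteration, the Bellman-backup algorithm at the core of reinforcement learning. The two-state MDP below is entirely made up for illustration:

```python
# Minimal sketch of value iteration. Repeatedly applying the Bellman
# optimality backup V(s) <- max_a sum_{s'} p(s'|s,a) * (r + gamma * V(s'))
# converges to the optimal state values of a Markov decision process.

GAMMA = 0.9  # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {
        "stay": [(1.0, "s0", 0.0)],
        "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    },
    "s1": {
        "stay": [(1.0, "s1", 2.0)],
        "go":   [(1.0, "s0", 0.0)],
    },
}

def value_iteration(transitions, gamma, tol=1e-8):
    """Sweep over states, applying the Bellman backup until values settle."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(transitions, GAMMA)
print(V)  # V(s0) is roughly 18.54, V(s1) roughly 20.0
```

At the fixed point the values satisfy the Bellman optimality equation; operant-conditioning metaphors aside, this really is just dynamic programming over expected discounted reward.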

I've been surprised by how bad the majority of scholarship is around the "inspired-by" or "metaphorical" genre of algorithms - neural networks, genetic algorithms, Baum's Hayek machine and so on. My guess is that the colorful metaphors allow you to disguise any success as due to the technique rather than a grad student poking and prodding at it until a demo seems to work.

This was my area of research in my postgrad years. Specifically ants and birds. I couldn't agree more. The techniques do work but the scholarship was absolutely abysmal.

As an economist myself (though a microeconomist) I share some of your concerns about macroeconomics. The way support and opposition for the US's recent stimulus broke down along ideological lines was wholly depressing.

I think the problem for macro is that they have almost no data to work with. You can't run a controlled experiment on a whole country and countries tend to be very different from each other which means there are a lot of confounding factors to deal with. And without much evidence, how could they hope to generate accurate beliefs?

Add to that the raw complexity of what economists study. The human brain is the most complex object known to exist, and the global economy is about 7 billion of them interacting with each other.

None of this is meant to absolve macroeconomics; it may simply be that meaningful study in this area isn't possible. Macro has made some gains: there's a list of things that don't work in development economics, and stabilisation policy is better than it was in the 1970s. But apart from that? Not much.

James,

Nice of you to drop by and comment -- I still remember that really interesting discussion about price indexes we had a few months ago!

One thing I find curious in economics is that basically anything studied under that moniker is considered to belong to a single discipline, and economists of all sorts apparently recognize each other as professional colleagues (even when they bitterly attack each other in ideological disputes). This despite the fact that the intellectual standards in various subfields of economics are of enormously different quality, ranging from very solid to downright pseudoscientific. And while I occasionally see economists questioning the soundness of their discipline, it's always formulated as questioning the soundness of economics in general, instead of a more specific and realistic observation that micro is pretty solid as long as one knows and respects the limitations of one's models, whereas macro is basically just pseudoscience.

Is the tendency for professional solidarity really that strong, or am I perhaps misperceiving this situation as an outsider?

I'm surprised that you don't mention the humanities as a really bad case where there is little low-hanging fruit and high ideological content. Take English literature for example. Barrels of ink have been spilled in writing about Hamlet, and genuinely new insights are quite rare. The methods are also about as unsound as you can imagine. Freud is still heavily cited and applied, and postmodern/poststructuralist/deconstructionist writing seems to be accorded higher status the more impossible to read it is.

Ideological interest is also a big problem. This seems almost inevitable, since the subject of the humanities is human culture, which is naturally bound up with human ideals, beliefs, and opinions. Academic disciplines are social groups, so they have a natural tendency to develop group norms and ideologies. It's unsurprising that this trend is reinforced in those disciplines that have ideologies as their subject matter. The result is that interpretations which do not support the dominant paradigm (often a variation on how certain sympathetic social groups are repressed, marginalized, or "otherized"), are themselves suppressed.

One theory of why the humanities are so bad is that there is no empirical test for whether an answer is right or not. Incorrect science leads to incorrect predictions, and even incorrect macroeconomics leads to suboptimal policy decisions. But it's hard to imagine what an "incorrect" interpretation of Hamlet even looks like, or what the impact of having an incorrect interpretation would be. Hence, there's no pressure towards correct answers that offsets the natural tendency for social communities to develop and enforce social norms.

I wonder if "empirical testability" should be included with the low-hanging fruit heuristic.

AShepard:

I'm surprised that you don't mention the humanities as a really bad case where there is little low-hanging fruit and high ideological content.

Well, I have mentioned history. Other humanities can be anywhere from artsy fields where there isn't even a pretense of any sort of objective insight (not that this necessarily makes them worthless for other purposes), to areas that feature very well researched and thought-out scholarship if ideological issues aren't in the way, and if it's an area that hasn't been already done to death for generations (which is basically my first heuristic).

I wonder if "empirical testability" should be included with the low-hanging fruit heuristic.

Perhaps surprisingly, it doesn't seem to me that empirical testability is so important. Lousy work can easily be presented with plenty of empirical data carefully arranged and cherry-picked to support it. To recognize the problem in such cases and sort out correct empirical validation from spin and propaganda is often a problem as difficult as sorting out valid from invalid reasoning in less empirically-oriented work.

On "ideologically charged" science producing good results:

Evolutionary biology, in general. Creationism went down really hard and really quickly.

Did it? Sure, it's clear cut now. But what I've read about the subject says that back in the days when it was a matter of mainstream intellectual debate, it was long and very messy, and included things like scientists on the 'right' side accepting extremely dodgy evidence for spontaneous generation of life in the test tube because they felt that to reject it would weaken the case for being able to do without divine intervention.

I don't think this is a good example. My post is intended to apply to contemporary academia, whereas the basics of evolutionary theory were proposed way back in the 19th century, and the decisive controversies over them played out back then, when the situation was very different from nowadays in many relevant ways. (Of course, creationism is still alive and well among the masses, but for generations already it has been a very low-status belief with virtually zero support among the intellectual elites.)

On the other hand, when it comes to questions in evolutionary theory that still have strong implications about issues that are ideologically charged even among the intellectual elites, there is indeed awful confusion and one can find plenty of examples where prestigious academics are clearly throwing their weight behind their favored ideological causes. The controversies over sociobiology are the most obvious example.

In contrast, when it comes to modern applications of evolutionary theory to non-ideologically-sensitive problems, the situation is generally OK -- except in those cases where the authors don't have a clear and sound approach to the problem, so they end up producing just-so stories masquerading as scientific theories. This however is pretty much the situation that should trigger my first heuristic.

You confuse two very different issues.

1) How much weight you should give to the views of academics in that area, e.g., if some claim is accepted by the mainstream establishment (or conversely viewed as a valid point of disagreement) how much should that information affect your own probability judgement.

2) How much progress/how useful is the academic discipline in question. Does it require reform.


Your arguments in the first part are only relevant to #2. The programming language research community may be mired in hopeless mathematical jealousy as they create more and more arcane type systems, while ignoring the fact that programming language design is ultimately an entirely psychological question. The languages are all Turing complete, and most offer the same functionality in some form; the only real question is one of human usability, and the community doesn't seem very interested in checking which sorts of type systems or development environments really are empirically more productive. Maybe physics is stuck and can no longer make any real progress.

Nevertheless this has no bearing on how I should treat the evidence that 99% of physics professors predict experiment X will have outcome Y. Indeed, the argument that physics is stuck is largely that they have been so successful in explaining any easily testable phenomena it is difficult to make further progress. Similarly if I see that the programming language research people say that type system Blah is undecidable I will take that evidence seriously even if it doesn't turn out to be that useful.

(Frankly, I think the harshness toward CS is a bit unfair. Academia by its nature is conservative and driven by pure research. We don't yet know whether this work will turn out to be useful down the road, since CS is such a young discipline, and many people do work in both practical and theoretical areas.)


I think #1 is the more interesting question. Here I would say the primary test should be whether or not disputes eventually produce consensus. That is, does the discipline build up a store of accepted facts and move on to new issues (with occasional Kuhnian-style paradigm shifts), or does it simply stay mired in the same issues without generating conclusions?

Pardon, I didn't notice your comment earlier -- unfortunately, you don't get notified when someone replies to top-level articles the way you do for replies to comments.

The difference you have in mind is basically the same as what I meant when I wrote about areas that are infested with a lot of bullshit work, but still fundamentally sound. Clearly CS people are smart and possess a huge amount of practically useful knowledge and skill -- after all, it's easy for anyone who works in CS research at an institution of any prominence to get a lucrative industry job working on very concrete, no-nonsense, and profitable projects. The foundations of the field are therefore clearly sound and useful.

This however still doesn't mean that there aren't entire bullshit subfields of CS, where a vast research literature is produced on things that are a clear dead-end (or aimed at entirely dreamed-up problems) while everyone pretends and loudly agrees that great contributions are being made. In such cases, the views expressed by the experts are seriously distant from reality, and it would be horribly mistaken to make important decisions by taking them at face value. People who work on such things are of course still capable of earning money doing useful work in industry, but that's only because the sort of bullshit that they have to produce must be sophisticated enough and in conformity with complex formal rules, so in order to produce the right sort of bullshit, you still need great intellectual ability and lots of useful skills.

You may be right that I should have perhaps made a stronger contrast between such fields and those that are rotten to the bottom.

I do agree that there are fields where the overall standards of the academic mainstream are not that high, but I'm not sure about the heuristics - I tend to use a different set.

One confusing factor is that in almost any field, the academic level of an arbitrary academic paper is not that high - average academic papers are published by average scientists, and are generally averagely brilliant - in other words, not that good. The preferred route is typically to prove something that's actually already well known, but there are also plenty of flawed papers. There are also plenty of papers that are perhaps interesting if you're interested in some particularly small niche of some particularly minor topic, but are of no relevance to the average reader. None of this says anything much about the quality of the mainstream orthodoxy, which can be very much higher than the quality of the average paper.

My main principle is that human beings are just not that intelligent. They are intelligent enough to follow a logical argument that is set into a system where there are tightly defined rules from which one can reason. They are NOT intelligent enough to reason sensibly AT ALL in regions where such rules are not defined. Well, perhaps a logical step or two is plausible, but anything beyond that becomes very dubious indeed - it is like trying to build a tower on a foundation of jello.

Reasoning based on vague definitions is a red flag - it encourages people to come up with any answer they want, and believe they've logically arrived at it. Reasoning based on a complicated set of not particularly related facts is a red flag, as nobody is intelligent enough to do it correctly.

Someone once said that all science is either physics or stamp collecting. It's close - you have to have some organising principles of decent mathematical quality to do reasoning with any certainty. Without that, stamp collecting is the limit of the possible.

Equally, maths is not a panacea. It's quite possible, in an academic paper, to spend a great deal of time developing a mathematical argument based on assumptions that aren't really connected to the question you're trying to answer - the maths is probably correct, but the vague and fuzzy bit where the maths is trying to connect to the problem is where it all goes wrong. To take the example everyone knows, financial models that assume average house prices can't go down as well as up may have perfectly correct mathematics, but will not predict well what will happen to those investments when house prices do go down.

In summary, those fields with widely accepted logical systems are probably doing something right. Those fields where there are multiple logical systems that are competing are probably also doing something right - the worst they can do is to reason correctly about the wrong thing. Fields where there is an incumbent system which is vague are bad, as are those fields where freeform reason is the order of the day.

DuncanS:

Someone once said that all science is either physics or stamp collecting. It's close - you have to have some organising principles of decent mathematical quality to do reasoning with any certainty. Without that, stamp collecting is the limit of the possible.

I disagree with this. In many areas there are methodologies that don't approach a mathematical level of formalization, and nevertheless yield rock-solid insight. One case in point is the example of historical linguistics I cited. These people have managed to reach non-obvious conclusions as reliable as anything else in science using a methodology that boils down to assembling a large web of heterogeneous common-sense evidence carefully and according to established systematic guidelines. Their results are a marvelous example of what some people call "traditional rationality" here.

I am reminded of this recent article from the arXiv blog:

Biologists Ignoring Low-Hanging Fruit, Says Drug Discovery Study

Molecular biologists focus most of their attention on a small set of biomolecules, while ignoring the rest, according to a study of research patterns.

[2] Moldbug’s "What’s wrong with CS research" is a witty and essentially accurate overview of this situation. He mostly limits himself to the discussion of programming language research, but a similar scenario can be seen in some other related fields too.

With the slight problem that Moldbug appears to be writing as a Systems Weenie. As someone with cursory training on multiple sides of this issue (PL/Formal Verification and systems), I don't think his assessment there is accurate.

When assessing an academic field, you should include a kind of null hypothesis: "Academia is investigating interesting problems, but I'm a weenie who doesn't take a complete or unbiased look at the state of academia." This is often true.

Further example: a couple weeks ago I emailed Daniel Dewey about his Value Learners paper. I also read the ensuing LessWrong discussion. It turned out that the fundamental idea behind value learners was published in academia as a PhD thesis in ~2003, and someone linked it.

So why didn't we all know about this? Because we were weenies who didn't look at the academic consensus before diving in ourselves.