I'm unimpressed by this method. First, the procedure as given does more to reinforce pre-existing beliefs, and point one to people who will reinforce those beliefs, than anything else. Second, the sourcing used for experts is bad or outright misleading. For example, consider global warming. Wikipedia is listed as an expert source. But Wikipedia has no expertise and is itself an attempt at a neutral summary of experts. Even worse, Conservapedia is used on both the global warming and 9-11 pages. Considering that Conservapedia is Young Earth Creationist and thinks that the idea that Leif Erikson came to the New World is a liberal conspiracy, I don't think any rational individual will consider it a reliable source. (The vast majority of American right-wingers I've ever talked to about this cringe when Conservapedia gets mentioned, so this isn't even my own politics coming into play.) On cryonics we have Benjamin Franklin listed as pro. Now, that's roughly accurate, but he was also centuries too early to have anything resembling relevant expertise. Looking at many of the fringe subjects, a large number of the so-called experts who are living today have no intrinsic justification for their expertise (actors are not experts on scientific issues, for example). TakeOnIt seems devoted, if anything, to blurring the nature of expert knowledge to the point where it becomes almost meaningless. The Bayesian Conspiracy would not approve.

Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/

Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it's worth investigating?  How do you identify "bunk" or "crackpot" ideas?

Here are some examples to give an idea. 

"Here's a perpetual motion machine": bunk.  "I've found an elementary proof of Fermat's Last Theorem": bunk.  "9-11 was an inside job": bunk.  

 "Humans did not cause global warming": possibly bunk, but I'm not sure.  "The Singularity will come within 100 years": possibly bunk, but I'm not sure.  "The economic system is close to collapse": possibly bunk, but I'm not sure.

"There is a genetic difference in IQ between races": I think it's probably false, but not quite bunk.  "Geoengineering would be effective in mitigating global warming": I think it's probably false, but not quite bunk. 

(These are my own examples.  They're meant to be illustrative, not definitive.  I imagine that some people here will think "But that's obviously not bunk!"  Sure, but you probably can think of some claim that *you* consider bunk.)

A few notes of clarification: I'm only examining factual, not normative, claims.  I also am not looking at well established claims (say, special relativity) which are obviously not bunk. Neither am I looking at claims where it's easy to pull data that obviously refutes them. (For example, "There are 10 people in the US population.")  I'm concerned with claims that look unlikely, but not impossible. Also, "Is this bunk?" is not the same question as "Is this true?"  A hypothesis can turn out to be false without being bunk (for example, the claim that geological formations were created by gradual processes.  That was a respectable position for 19th century geologists to take, and a claim worth investigating, even if subsequent evidence did show it to be false.)  The question "Is this bunk?" arises when someone makes an unlikely-sounding claim, but I don't actually have the knowledge right now to effectively refute it, and I want to know if the claim is a legitimate subject of inquiry or the work of a conspiracy theory/hoax/cult/crackpot.  In other words, is it a scientific or a pseudoscientific hypothesis?  Or, in practical terms, is it worth it for me or anybody else to investigate it?

This is an important question, especially for this community.  People involved in artificial intelligence, the Singularity, or existential risk are on the edge of the scientific mainstream, and for them it's particularly crucial to distinguish an interesting hypothesis from a bunk one.  Distinguishing an innovator from a crackpot is vital in fields that contain both.

I claim bunk exists. That is, there are claims so cracked that they aren't worth investigating. "I was abducted by aliens" has such a low prior that I'm not even going to go check up on the details -- I'm simply going to assume the alleged alien abductee is a fraud or nut.  Free speech and scientific freedom do not require us to spend resources investigating every conceivable claim.  Some claims are so likely to be nonsense that, given limited resources, we can justifiably dismiss them.

But how do we determine what's likely to be nonsense?  "I know it when I see it" is a pretty bad guide.

First idea: check if the proposer uses the techniques of rationality and science.  Does he support claims with evidence?  Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim?  Does he appeal to dogma or authority?  If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.

But what if there aren't such clear warning signs?  Our gracious host Eliezer Yudkowsky, for example, does not display those kinds of obvious tip-offs of pseudoscience -- he doesn't ask people to take things on faith, he's very alert to fallacies in reasoning, and so on.  And yet he's making an extraordinary claim (the likelihood of the Singularity), a claim I do not have the background to evaluate, but a claim that seems implausible.  What now?  Is this bunk?

A key thing to consider is the role of the "mainstream."  When a claim is out of the mainstream, are you justified in moving it closer to the bunk file?  There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians.  As far as I can tell, the best representatives of these schools don't commit the kinds of fallacies and bad arguments of the typical pseudoscientist.  How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?  Perhaps it's only reasonable to give some weight to that fact.  

Or is it? If all the scientists themselves are simply making their judgments based on how mainstream the outsiders are, then "mainstream" status doesn't confer any information.  The reason you listen to academic scientists is that you expect that at least some of them have investigated the claim themselves.  We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them.  But when they do that, don't they risk being considered crackpots themselves?  This is some version of "Tolerate tolerance."  If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.  

So the question "What is bunk?", that is, the question, "What is likely enough to be worth investigating?", apparently destroys itself.  You can only tell if a claim is unlikely by doing a little investigation.  It's probably a reflexive process: when you do a little investigation, if it's starting to look more and more like the claim is false, you can quit, but if it's the opposite, then the claim is probably worth even more investigation.  

The thing is, we all have different thresholds for what captures our attention and motivates us to investigate further.  Some people are willing to do a quick Google search when somebody makes an extraordinary claim; some won't bother; some will go even further and do extensive research.  When we check the consensus to see if a claim is considered bunk, we're acting on the hope that somebody has a lower threshold for investigation than we do.  We hope that some poor dogged sap has spent hours diligently refuting 9-11 truthers so that we don't have to.  From an economic perspective, this is an enormous free-rider problem, though -- who wants to be that poor dogged sap?  The hope is that somebody, somewhere, in the human population is always inquiring enough to do at least a little preliminary investigation.  We should thank the poor dogged saps of the world.  We should create more incentives to be a poor dogged sap.  Because if we don't have enough of them, we're going to be very mistaken when we think "Well, this wasn't important enough for anyone to investigate, so it must be bunk."

(N.B.  I am aware that many climate scientists are being "poor dogged saps" by communicating with and attempting to refute global warming skeptics.  I don't know whether there are economists who bother trying to refute Austrian economics, or electrical engineers and computer scientists who spend time being Singularity skeptics.)

 

Comments


SarahC:

A key thing to consider is the role of the "mainstream." When a claim is out of the mainstream, are you justified in moving it closer to the bunk file?

An important point here is that the intellectual standards of the academic mainstream differ greatly between various fields. Thus, depending on the area we're talking about, the fact that a view is out of the mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.

From my own observations of research literature in various fields and the way academia operates, I have concluded that healthy areas where the mainstream employs very high intellectual standards of rigor, honesty, and judicious open-mindedness are normally characterized by two conditions:

(1) There is lots of low-hanging fruit available, in the sense of research goals that are both interesting and doable, so that there are clear paths to quality work, which makes it unnecessary to invent bullshit instead.

(2) There are no incentives to invent bullshit for political or ideological reasons.

As soon as either of these conditions doesn't hold in an academic area, the mainstream will become infested with worthless bullshit work to at least some degree. For example, condition (2) is true for theoretical physics, but in many of its subfields, condition (1) no longer holds. Thus we get things like the Bogdanoff affair and the string theory wars -- regardless of who (if anyone) is right in these controversies, it's obvious that some bullshit work has infiltrated the mainstream. Nevertheless, the scenario where condition (1) doesn't hold, but (2) does is relatively benign, and such areas are typically still basically sound despite the partial infestation.

The real trouble starts when condition (2) doesn't hold. Even if (1) still holds, the field will be in a hopeless confusion where it's hardly possible to separate bullshit from quality work. For example, in the fields that involve human sociobiology and behavioral genetics, particularly those that touch on the IQ controversies, there are tons of interesting study ideas waiting to be done. Yet, because of the ideological pressures and prejudices -- both individual and institutional -- bullshit work multiplies without end. (Again, regardless of whom you support in these controversies, at least one side must, as a matter of logic, be bullshitting.) Thus, on the whole, condition (2) is even more critical than (1).

When neither (1) nor (2) holds in some academic field, it tends to become almost pure bullshit. Macroeconomics is the prime example.

SarahC:

There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians.

So, to apply my above criteria to these cases:

  • Climate science is politicized to an extreme degree and plagued by vast methodological difficulties. (Just think about the difficulty of measuring global annual average temperature with 0.1C accuracy even in the present, let alone reconstructing it far into the past.) Thus, I'd expect a very high level of bullshit infestation in its mainstream, so critics scorned by the mainstream should definitely not be dismissed out of hand.

  • Ditto for mainstream vs. Austrian macroeconomics; in fact, even more so. If you look at the blogs of prominent macroeconomists, you'll see lots of ideologically motivated mutual scorn and abuse even within the respectable mainstream. Austrians basically call bullshit on the entire mainstream, saying that the whole idea of trying to study economic aggregates by aping physics is a fundamentally unsound cargo-cult approach, so they're hated by everyone. While Austrians have their own dubious (and sometimes obviously bunk) ideas, their criticism of the mainstream should definitely be taken into account considering its extreme level of politicization and lack of any clearly sound methodology.

  • As for singularitarians, they don't really face opposition from some concrete mainstream academic group. The problem is that their claims run afoul of the human weirdness heuristic, so it's hard to get people to consider their arguments seriously. (The attempts at sensationalist punditry by some authors associated with the idea don't help either.) But my impression is that many prominent academics in the relevant fields who have taken the time to listen to the singularity arguments take them respectfully and seriously, certainly with nothing like the scorn heaped on dissenters and outsiders in heavily politicized fields.

If it's not presumptuous of me, I'd like the Bogdanov affair removed as an example. I was one of the Wikipedia administrators deeply involved in the BA edit-wars on Wikipedia, and while I originally came to it with an open mind (which is why I was asked to intervene), quickly there came to be not a single doubt in my mind that the brothers were complete con artists who possessed only a talent for self-promotion and media manipulation.

This is unlike string theory, where there are good arguments on both sides and one could genuinely be uncertain.

However, would you agree that Bogdanoff brothers' work has been, at least at some points, approved and positively reviewed by credentialed physicists with official and reputable academic affiliations? After all, they successfully published several papers and defended their theses.

Now, it may be that after their work came under intense public scrutiny, it was shown to be unsound so convincingly that it led some of these reviewers to publicly reverse their previous judgments. However, considering that the overwhelming majority of research work never comes under any additional scrutiny beyond the basic peer review and thesis defense procedures, this still seems to me like powerful evidence that the quality of many lower-profile publications in the field could easily be as bad.

However, would you agree that Bogdanoff brothers' work has been, at least at some points, approved and positively reviewed by credentialed physicists with official and reputable academic affiliations? After all, they successfully published several papers and defended their theses.

As I recall, they didn't defend their theses, and only eventually got their degrees by a number of questionable devices like replacing a thesis with publications somewhere and forcing a shift to an entirely different field like mathematics.

EDIT: The oddities of their theses are covered in http://en.wikipedia.org/wiki/Bogdanoff_affair#Origin_of_the_affair

For me the primary evidence of a bunk claim is when the claimant fails to reasonably deal with the mainstream. Let's take the creation/evolution debate. If someone comes along claiming a creationist position, but is completely unable even to describe what the evolutionary position is, or what might be good about it, then their idea is bunk. If someone is very good at explaining evolution as it really happens, but then goes on to claim something different can happen as well -- then it becomes interesting.

Anyone proposing an alternative idea needs to know precisely what it is an alternative to - otherwise they haven't done their homework, and it isn't worth my time.

Yes! This is a key point in the Alternative-Science Respectability Checklist, for example:

Someone comes along and says “I’ve discovered that there’s no need for dark matter.” A brief glance at the abstract reveals that the model violates our understanding of perturbation theory. Well, perhaps there is something subtle going on here, and our conventional understanding of perturbation theory doesn’t apply in this case. So here’s what any working theoretical cosmologist would do (even if they aren’t consciously aware that they’re doing it): they would glance at the introduction to the paper, looking for a paragraph that says “Look, we know this isn’t what you would expect from elementary perturbation theory, but here’s why that doesn’t apply in this case.” Upon not finding that paragraph, they would toss the paper away.

If someone comes along claiming a creationist position, but is completely unable to even describe what the evolutionary position is, or what might be good about it, then their idea is bunk.

Replace "creationist" and "evolutionary" in that sentence with "atheist" and "religious" respectively and you have the most common theist criticism of Dawkins.

Therefore, since theism is more-or-less the mainstream position, wouldn't following your rule force you to conclude that Dawkins' atheism is bunk?

Note that when you consider a claim, you shouldn't set out to prove it false, or to prove it true; you should set out to reach a correct conclusion about it, the truth about it. Lack of skepticism is a particular failure mode that makes experts whom you suspect of having this flaw an inappropriate source of knowledge about the claim. "Skepticism" as a fixed stance is a similarly flawed mode of investigation.

So, the question shouldn't be, "Who is qualified to refute the Friendly AI idea?", but "Who is qualified to reveal the truth about the Friendly AI idea?".

It should be an established standard to link to the previous posts on the same topic. This is necessary to actually build upon existing work, and not just create blogging buzz. In this case, the obvious reference is The Correct Contrarian Cluster, and also probably That Magical Click and Reason as memetic immune disorder.

By the way, I have spent quite a long time trying to "debunk" the set of ideas around Friendly AI and the Singularity, and my conclusion is that there's simply no reasonable mainstream disagreement with that somewhat radical hypothesis. Why is FAI/Singularity not mainstream? Because the mainstream of science doesn't have to publicly endorse every idea it cannot refute. There is no "court of crackpot appeal" where a correct contrarian can go to once and for all show that their problem/idea is legit. Academia can basically say "fuck off, we don't like you or your idea, you won't get a job at a university unless you work on something we like".

Now such ability to arbitrarily tell people to get lost is useful because there are so many crackpots around, and they are really annoying. But it is a very simple and crude filter, akin to cutting your internet connection to prevent spam email. Just losing Eliezer and Nick Bostrom's insight about friendly AI may cost academia more than all the crackpots put together could ever have cost.

Robin Hanson's way around this was to expend a significant fraction of his life getting tenure, and now they can't sack him, but that doesn't mean that mainstream consensus will update to his correct contrarian position on the singularity; they can just press the "ignore" button.

That's precisely the point I'm trying to make. We do lose a lot by ignoring correct contrarians. I think academia may be losing a lot of knowledge by filtering crudely. If indeed there is no mainstream academic position, pro or con, on Friendly AI, I think academia is missing something potentially important.

On the other hand, institutions need some kind of a filter to avoid being swamped by crackpots. A rational university or journal or other institution, trying to avoid bias, should probably assign more points to "promiscuous investigators," people with respected mainstream work who currently spend time analyzing contrarian claims, whether to confirm or debunk. (I think Robin Hanson is a "promiscuous investigator.")

http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/

is a post that I find relevant.

Peer review is about low-hanging fruit -- the stuff supported by enough evidence already that writing about it can be done easily by sourcing extensive support from prior work.

As for the damage of ignoring correct contrarians: there was a Nobel Prize in economics awarded for a paper on markets with asymmetric information that a reviewer had rejected with a comment like "If this is correct, then all of economics is wrong."

There is also the story of someone who failed to get a PhD for their work despite presenting it on multiple separate occasions, the last of which had Einstein in the room, who said it was correct (and it was).

Liked the post. One of the two big questions it's poking at is 'how does one judge a hypothesis without researching it?' To do that, one has to come up with heuristics for judging some hypothesis H* that correlate well enough with correctness to work as a substitute for actual research. The post already suggests a few:

  • Is evidence presented for H?
  • Do those supporting H share data for repeatability?
  • Is H internally inconsistent?
  • Does H depend on logical fallacies?
  • (Debatable) Is H mainstream?

I'll add a few more:

  • If H is a physical or mathematical hypothesis, try to find a quantitative statement of it. If there isn't one, watch out: crackpots are sometimes too busy trying to overthrow a consensus to make sure the math actually works.

  • Suppose some event is already expected to occur as an implication of a well-established theory. If H is meant to be a novel explanation for that event, H not only has to explain the event, it also has to explain why the well-established theory doesn't actually entail the event.

    • Application to global warming. To establish that something other than anthropogenic CO2 is the main driver of current global warming, it is not enough to simply suggest an alternative cause; it's also necessary to explain why the expected warming entailed by quantum theory and anthropogenic CO2 emissions would have failed to materialize.
  • Can H's fans/haters discuss H without injecting their politics? It doesn't really matter if they sometimes mention their politics around H, but if they can't resist the temptation to growl about 'fascists' or 'political correctness' or 'Marxists' or whatever every time they discuss H, watch out. (Unless H is a hypothesis about fascism, political correctness or Marxism or whatever, obviously.)

  • If arguments about H consistently turn into arguments about who should bear the burden of proof, there's probably too little evidence to prove H either way.

  • Hypotheses that implicitly assume current trends will continue or accelerate arbitrarily far into the future should be handled with care. (An exercise I like doing occasionally is taking some time series data that someone's fitted an exponential for and fitting an S-curve instead.)

  • If H is based on a small selection from many available data points, is there a rationale for that selection?

    • Application to a Ray Kurzweil slide. Low-hanging fruit, I admit. Anyway, look at this graph of how long it takes for inventions to enter mass use. Kurzweil plots points for only 6 inventions: the telephone, radio, TV, the PC, the cellphone and the Web. I would be interested to see how neat the graph would be if it included the photocopier, the MP3 player, the tape player, the CD player, the internet, the newspaper, the record player, the USB flash drive, the DVD player, the car, the laser, the LED, the VHS player, the camcorder and so on. The endnotes for Kurzweil's book 'The Singularity Is Near' refer to a version of this chart and estimate 'the current rate of reducing adoption time,' but don't seem to say why Kurzweil picked the technologies he did.
  • Looking at the credentials of people discussing H is a quick and dirty rule of thumb, but it's better than nothing.

  • Does whoever's talking about H get the right answer on questions with clearer answers? Someone who thinks vaccines, fluoride in the drinking water and FEMA are all part of the NWO conspiracy is probably a poor judge of whether 9/11 was an inside job.

  • How sloppily is the case for (or against) H made? (E.g. do a lot of the citations fail to match references? Are there citations or links to evidence in the first place? Is the author calling a trend on a log-linear graph 'exponential growth' when it's clearly not a straight line? Do they misspell words like 'exponential?')

  • Are possible shortcomings in H and/or the evidence for H acknowledged? If someone thinks the case for/against H is open and shut, but I'm really not sure, something isn't right.

And Daniel Davies helpfully points out that lying (whether in the form of consistent lies about H itself, or H's supporters/skeptics simply being known liars) can be an informative warning sign.


* The second question being 'do we have enough people researching obscure hypotheses and if not, how do we fix that?' I don't know how to start answering that one yet.
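The exponential-versus-S-curve exercise suggested in the checklist above can be sketched in a few lines. This is illustrative only: it uses synthetic data that genuinely follows an S-curve, and a coarse grid search stands in for a proper nonlinear fitter.

```python
import math

def logistic(t, L, k, t0):
    """S-curve with ceiling L, steepness k, midpoint t0."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Synthetic "adoption" data that actually follows an S-curve.
ts = list(range(40))
ys = [logistic(t, 100.0, 0.4, 20.0) for t in ts]

def sse(pred):
    """Sum of squared errors against the data."""
    return sum((y - p) ** 2 for y, p in zip(ys, pred))

# Exponential fit y = a * exp(b*t), via least squares on log(y).
logs = [math.log(y) for y in ys]
n = len(ts)
tbar = sum(ts) / n
lbar = sum(logs) / n
b = sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs)) / sum(
    (t - tbar) ** 2 for t in ts)
a = math.exp(lbar - b * tbar)
sse_exp = sse([a * math.exp(b * t) for t in ts])

# Logistic fit via a coarse grid search (a stand-in for a real optimizer).
sse_log = min(
    sse([logistic(t, L, k, t0) for t in ts])
    for L in (80.0, 100.0, 120.0)
    for k in (0.2, 0.4, 0.6)
    for t0 in (15.0, 20.0, 25.0))

print(f"exponential fit SSE: {sse_exp:.1f}")
print(f"logistic fit SSE: {sse_log:.3f}")
```

On data that spans the saturation phase, the exponential fit fails badly while the S-curve matches; on data from only the early, still-accelerating phase, the two would be nearly indistinguishable -- which is exactly the trap the heuristic warns about.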

To establish that something other than anthropogenic CO2 is the main driver of current global warming, it is not enough to simply suggest an alternative cause; it's also necessary to explain why the expected warming entailed by quantum theory and anthropogenic CO2 emissions would have failed to materialize.

This isn't the actual epistemic situation. The usual measure of the magnitude of CO2-induced warming is "climate sensitivity" - increase in temperature per doubling of CO2 - and its consensus value is 3 degrees. But the physically calculable warming induced directly by CO2 is, in terms of this measure, only 1 degree. Another degree comes from the "water vapor feedback", and the final degree from all the other feedbacks. But the feedback due to clouds, in particular, still has a lot of uncertainty; enough that, at the lower extreme, it would be a negative feedback that could cancel all the other positive feedbacks and leave the net sensitivity at 1 degree.
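A quick arithmetic check on these figures (a sketch only; the simplified forcing formula 5.35 * ln(C1/C0) W/m^2 and the no-feedback sensitivity parameter of 0.3 K per W/m^2 are the standard values, which also appear later in this thread):

```python
import math

# Radiative forcing from a doubling of CO2, using the standard
# simplified expression dF = 5.35 * ln(C1/C0) W/m^2.
F_2x = 5.35 * math.log(2.0)        # ~3.7 W/m^2 per doubling

# No-feedback sensitivity parameter (Planck response only).
lambda_0 = 0.3                     # K per (W/m^2)

# The physically calculable, feedback-free warming per doubling --
# the "1 degree" figure above.
dT_no_feedback = lambda_0 * F_2x
print(f"No-feedback warming per doubling: {dT_no_feedback:.2f} K")

# Conversely, the consensus ~3 K per doubling implies an effective
# sensitivity parameter of:
lambda_eff = 3.0 / F_2x
print(f"Implied effective sensitivity: {lambda_eff:.2f} K/(W/m^2)")
```

The implied effective parameter of roughly 0.8 K per W/m^2 is the same figure that turns up downthread when the two candidate sensitivities are compared.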

The best evidence that the net sensitivity is 3 degrees is the ice age record. The relationship between planetary temperature and CO2 levels there is consistent with that value (and that's after you take into account the natural outgassing of CO2 from a warming ocean). People have tried to extract this value from the modern temperature record too, but it's rendered difficult by uncertainties regarding the magnitude of cooling due to aerosols and the rate at which the ocean warms (this factor dominates how rapidly atmospheric temperature approaches the adjusted equilibrium implied by a changed CO2 level).

The important point to understand is that the full 3-degree sensitivity cannot presently be derived from physical first principles. It is implied by the ice-age paleo record, and is consistent with the contemporary record, with older and sparser paleo data, and with the independently derived range of possible values for the feedbacks. But the uncertainty regarding cloud feedback is still too great to say that we can retrodict this value, just from a knowledge of atmospheric physics.

The important point to understand is that the full 3-degree sensitivity cannot presently be derived from physical first principles.

Agreed. Nonetheless, as best I can calculate, Really Existing Global Warming (the warming that has occurred from the 19th century up to now, rather than that predicted in the medium-term future) is of similar order to what one would get from the raw, feedback-less effect of modern human CO2 emissions.

The additional radiative forcing due to increasing the atmospheric CO2 concentration from C0 to C1 is about 5.4 * ln(C1/C0) W/m^2. The preindustrial baseline atmospheric CO2 concentration was about 280 ppm, and now it's more like 388 ppm -- plugging in C0 = 280 and C1 = 388 gives a radiative forcing gain around 1.8 W/m^2 due to more CO2.

Without feedback, climate sensitivity is λ = 0.3 K/(W/m^2) - this is the expected temperature increase for an additional W/m^2 of radiative forcing. Multiplying the 1.8W/m^2 by λ makes an expected temperature increase of 0.54K.

Eyeballing the HADCRUT3 global temperature time series, I estimate a rise in the temperature anomaly from about -0.4K to +0.4K, a gain of 0.8K since 1850. The temperature boost of 0.54K from current CO2 levels takes us most of the way towards that 0.8K increase. The remaining gap would narrow if we included methane and other greenhouse gases also. Admittedly, we won't have the entire 0.54K temperature boost just yet, because of course it takes time for temperatures to approach equilibrium, but I wouldn't expect that to take very long because the feedbackless boost is relatively small.
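The arithmetic in the last two paragraphs is easy to reproduce (using the 5.4 coefficient and λ = 0.3 from the text; the logarithm is the natural log):

```python
import math

C0, C1 = 280.0, 388.0   # preindustrial and current CO2, in ppm
coeff = 5.4             # W/m^2 per e-folding of CO2, as in the text

dF = coeff * math.log(C1 / C0)   # additional radiative forcing
lam = 0.3                        # no-feedback sensitivity, K/(W/m^2)
dT = lam * dF                    # expected equilibrium warming

print(f"Forcing gain: {dF:.2f} W/m^2")        # ~1.8 W/m^2
print(f"No-feedback warming: {dT:.2f} K")     # ~0.5 K, within rounding of 0.54K
```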

I wouldn't expect that to take very long

This might actually be a nice exercise in choosing between hypotheses. Suppose you had no paleo data or detailed atmospheric physics knowledge, but you just had to choose between 1 degree and 3 degrees as the value of climate sensitivity, i.e. between the hypothesis that all the feedbacks cancel, and the hypothesis that they triple the warming, solely on the basis of (i) that observed 0.8K increase (ii) the elementary model of thermal inertia here. You would have to bear in mind that most anthropogenic emissions occurred in recent decades, so we should still be in the "transient response" phase for the additional perturbation they impose...

This might actually be a nice exercise

Now you've handed me a quantitative model I'm going to indulge my curiosity :-)

You would have to bear in mind that most anthropogenic emissions occurred in recent decades, so we should still be in the "transient response" phase for the additional perturbation they impose...

I think we can account for this by tweaking equation 4.14 on your linked page. Whoever wrote that page solves it for a constant additional forcing, but there's nothing stopping us rewriting it for a variable forcing:

C dT(t)/dt = Q(t) - T(t)/λ

where T(t) is now the change in temperature from the starting temperature, Q(t) the additional forcing, C the effective heat capacity, and I've written the equation in terms of my λ (climate sensitivity) and not theirs (feedback parameter).

Solving for T(t),

T(t) = (1/C) ∫₀ᵗ e^(-(t-s)/(Cλ)) Q(s) ds + A e^(-t/(Cλ))

If we disregard pre-1850 CO2 forcing and take the year 1850 as t = 0, we can drop the free constant. Next we need to invent a Q(t) to represent CO2 forcing, based on CO2 concentration records. I spliced together two Antarctic records to get estimates of annual CO2 concentration from 1850 to 2007; a quartic in t is a good approximation for the concentration.

The zero year is 1850. Dividing the quartic by 280 gives the ratio of CO2 at time t to preindustrial CO2; taking the log of that and multiplying by 5.35 gives the forcing due to CO2, i.e. Q(t).

Plugging that into the T(t) formula, we can plot T(t) as a function of years after 1850.

The upper green line is a replication of the calculation I did in my last post - it's the temperature rise needed to reach equilibrium for the CO2 level at time t, which doesn't account for the time lag needed to reach equilibrium. For t = 160 (the year 2010), the green line suggests a temperature increase of 0.54K as before. The lower red line is T(t): the temperature rise due to the Q(t) forcing, according to the thermal inertia model. At t = 160, the red line has increased by only 0.46K; in this no-feedback model, holding the atmospheric CO2 concentration constant at today's level would leave 0.08K of warming in the pipeline.

So in this model the time lag causes T(t) to be only 0.46K, instead of the 0.54K expected at equilibrium. Still, that's 85% of the full equilibrium warming, and the better part of the 0.8K increase; this seems to be evidence for my guess that we wouldn't have to wait very long to get close to the new equilibrium temperature.
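Since the model is just a first-order ODE, it's easy to sanity-check this numerically. Here's a rough sketch; the heat capacity and the CO2 history below are stand-ins I've assumed (the quartic fit isn't reproduced here), so the numbers will only approximate the 0.46K/0.54K figures above.

```python
import math

# Forward-Euler integration of the thermal-inertia model
#   C * dT/dt = Q(t) - T / lam
# The heat capacity and CO2 history are assumed stand-ins,
# not the original values.
lam = 0.3   # no-feedback climate sensitivity, K per (W/m^2)
C = 30.0    # effective heat capacity, W*yr/m^2/K (assumed)

def forcing(t):
    # Stand-in for the quartic CO2 fit: exponential growth from
    # 285 ppm in 1850 (t = 0) to 389 ppm in 2010 (t = 160).
    ppm = 285.0 * math.exp(t * math.log(389.0 / 285.0) / 160.0)
    return 5.35 * math.log(ppm / 280.0)  # W/m^2

dt = 0.05
T = 0.0
for i in range(int(160.0 / dt)):
    T += dt * (forcing(i * dt) - T / lam) / C

equilibrium = lam * forcing(160.0)  # warming with no thermal lag
print(round(T, 2), round(equilibrium, 2))
```

With these stand-in parameters the transient T comes out a few hundredths of a degree below the no-lag equilibrium value, the same qualitative gap as above.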

Suppose you had no paleo data and no detailed knowledge of atmospheric physics, but had to choose between 1 degree and 3 degrees as the value of climate sensitivity, i.e. between the hypothesis that all the feedbacks cancel and the hypothesis that they triple the warming, solely on the basis of (i) the observed 0.8K increase and (ii) the elementary model of thermal inertia here.

If I knew that little, I guess I'd put roughly equal priors on each hypothesis, so the likelihoods would be the main driver of my decision. But to run this toy model, should I pretend the only variable forcing I know of is anthropogenic CO2? I'm going to here, because we're assuming I don't have 'detailed atmospheric physics knowledge,' and also because I haven't run the numbers for other variable forcings.

To decide which sensitivity is more likely, I'll calculate which value of λ produces a 0.8K increase from CO2 emissions by 2010 with this model and the above Q(t); then I'll see whether that λ is closer to the '3 degrees' sensitivity (λ between 0.8 and 0.9) or the '1 degree' sensitivity (λ = 0.3). For a 0.8K increase, λ = 0.646, so I'd choose the higher sensitivity, whose λ range is closer to 0.646.
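That λ-fitting step can be sketched as a bisection. Because I'm using a stand-in CO2 history and an assumed heat capacity rather than the original quartic fit, the recovered λ won't reproduce 0.646 exactly; the point is only to show the procedure.

```python
import math

# Find the lam that makes the toy thermal-inertia model produce a 0.8K
# rise by 2010. Forcing and heat capacity are my own stand-ins.
C = 30.0  # effective heat capacity, W*yr/m^2/K (assumed)

def forcing(t):
    # Stand-in CO2 history: exponential from 285 ppm (1850) to 389 ppm (2010).
    ppm = 285.0 * math.exp(t * math.log(389.0 / 285.0) / 160.0)
    return 5.35 * math.log(ppm / 280.0)  # W/m^2

def warming_by_2010(lam, dt=0.05, years=160.0):
    T = 0.0
    for i in range(int(years / dt)):
        T += dt * (forcing(i * dt) - T / lam) / C
    return T

# Warming is monotonic in lam over this range, so bisect until the
# model reproduces the observed 0.8K increase.
lo, hi = 0.1, 1.5
for _ in range(40):
    mid = (lo + hi) / 2
    if warming_by_2010(mid) < 0.8:
        lo = mid
    else:
        hi = mid
print(round(mid, 3))
```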

This is the bunk-detection strategy on TakeOnIt:

  1. Collect top experts on either side of an issue, and examine their opinions.
  2. If step 1 does not make the answer clear, break the issue down into several sub-issues, and apply step 1 to each sub-issue.
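The two steps above can be sketched as a recursive procedure. This is a hypothetical illustration: the data layout and the 3:1 "clear answer" threshold are my own inventions, not how TakeOnIt actually works.

```python
# Hypothetical sketch of the two-step strategy; the vote data and the
# 3:1 threshold are invented for illustration.
def expert_verdict(issue):
    """Return 'bunk', 'credible', or None if expert opinion is split."""
    pro, con = issue["expert_votes"]["pro"], issue["expert_votes"]["con"]
    if con >= 3 * pro:
        return "bunk"
    if pro >= 3 * con:
        return "credible"
    return None  # unclear: fall through to step 2

def assess(issue):
    verdict = expert_verdict(issue)  # step 1: poll the top experts
    if verdict is not None:
        return verdict
    # Step 2: break the issue into sub-issues and repeat step 1 on each.
    return {sub["name"]: assess(sub) for sub in issue["sub_issues"]}

perpetual_motion = {"name": "perpetual motion",
                    "expert_votes": {"pro": 1, "con": 30},
                    "sub_issues": []}
print(assess(perpetual_motion))  # → bunk
```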

Examples that you alluded to in your post (I threw in cryonics because that's a contrarian issue often brought up on LW):

Global Warming
Cryonics
Climate Engineering
9-11 Conspiracy Theory
Singularity

In addition, TakeOnIt will actually predict what you should believe using collaborative filtering. The way it works is that you enter your opinions on several issues that you strongly believe you've got right. It then detects the cluster of experts you typically agree with, and extrapolates what your opinion should be on other issues, based on the assumption (explained here) that you should continue to agree with the experts you've previously agreed with.
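The extrapolation step can be sketched like this. It's a minimal sketch: the experts, issues, and agreement-weighted vote are my own simplification, not TakeOnIt's actual algorithm.

```python
# Minimal sketch of agreement-weighted prediction; the experts, issues,
# and weighting scheme here are made up for illustration.
experts = {
    "expert_a": {"warming_is_real": +1, "911_inside_job": -1, "homeopathy_works": -1},
    "expert_b": {"warming_is_real": -1, "911_inside_job": +1, "homeopathy_works": +1},
}
my_opinions = {"warming_is_real": +1, "911_inside_job": -1}

def agreement(mine, theirs):
    # +1 for each shared issue we agree on, -1 for each disagreement,
    # averaged over the shared issues.
    shared = [q for q in mine if q in theirs]
    if not shared:
        return 0.0
    return sum(1 if mine[q] == theirs[q] else -1 for q in shared) / len(shared)

def predict(question):
    # Weight each expert's stance by how often I've agreed with them.
    score = sum(agreement(my_opinions, ops) * ops.get(question, 0)
                for ops in experts.values())
    return "agree" if score > 0 else "disagree"

print(predict("homeopathy_works"))  # → disagree
```

With this toy data, the prediction mirrors the homeopathy example mentioned below: agreeing with expert_a on past issues pulls the prediction toward expert_a's rejection of homeopathy.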

You can see the predictions it's made for my opinions here. One of the predictions is that I should believe homeopathy is bunk.

I'm unimpressed by this method. First, the procedure as given does more to reinforce pre-existing beliefs, and to point one to people who will reinforce those beliefs, than anything else. Second, the sourcing used for experts is bad or outright misleading. For example, consider global warming. Wikipedia is listed as an expert source. But Wikipedia has no expertise; it is itself an attempt at a neutral summary of experts. Even worse, Conservapedia is used on both the global warming and 9-11 pages. Considering that Conservapedia is Young Earth Creationist and thinks that the idea that Leif Erickson came to the New World is a liberal conspiracy, I don't think any rational individual will consider it a reliable source. (And the vast majority of American right-wingers I've ever talked to about this cringe when Conservapedia gets mentioned, so this isn't even my own politics coming into play.) On cryonics we have Benjamin Franklin listed as pro. That's roughly accurate, but it is also clear that he lived centuries too early to have anything resembling relevant expertise. Looking at many of the fringe subjects, a large number of the so-called experts who are living today have no intrinsic justification for their expertise (actors are not experts on scientific issues, for example). TakeOnIt seems devoted, if anything, to blurring the nature of expert knowledge to the point where it becomes almost meaningless. The Bayesian Conspiracy would not approve.

TakeOnIt records the opinions of BOTH experts and influencers - not just experts. Perhaps I confused you by not being clear about this in my original comment. In any case, TakeOnIt groups opinions by the expertise of those who hold the opinions. This accentuates - not blurs - the distinction between those who have relevant expertise and those who don't (but who are nonetheless influential). It also puts those who have expertise relevant to the question topic at the top of the page. You seem to be saying readers will easily mistake an expert for an influencer. I'm open to suggestions if you think it could be made clearer than it is.

There is value in recording the opinions of anyone perceived as an expert by some segment of the general population: it builds a track record for each supposed expert, so that statistical analysis can reveal that the opinions of some so-called experts are just noise, and produce a result influenced mainly by the real experts.
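That statistical step could be as simple as scoring each commentator against questions that have since been settled. A minimal sketch, with entirely made-up data:

```python
# Score each commentator's track record on settled questions, so that
# noisy "experts" can be down-weighted. All data here is made up.
resolved = {"cold_fusion_works": False, "continents_drift": True}

stated = {
    "physicist": {"cold_fusion_works": False, "continents_drift": True},
    "celebrity": {"cold_fusion_works": True,  "continents_drift": True},
}

def hit_rate(opinions):
    # Fraction of a commentator's graded opinions that turned out right.
    graded = [q for q in opinions if q in resolved]
    return sum(opinions[q] == resolved[q] for q in graded) / len(graded)

weights = {name: hit_rate(ops) for name, ops in stated.items()}
print(weights)  # → {'physicist': 1.0, 'celebrity': 0.5}
```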

See The Correct Contrarian Cluster.

"How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?"

This is not what's actually going on. To quote Eliezer:

"With regard to academia 'showing little interest' in my work - you have a rather idealized view of academia if you think that they descend on every new idea in existence to approve or disapprove it. It takes a tremendous amount of work to get academia to notice something at all - you have to publish article after article, write commentaries on other people's work from within your reference frame so they notice you, go to conferences and promote your idea, et cetera. Saying that academia has 'shown little interest' implies that I put in that work, and they weren't interested. This is not so. I haven't yet taken my case to academia. And they have not said anything about it, or even noticed I exist, one way or the other. A few academics such as Nick Bostrom and Ben Goertzel have quoted me in their papers and invited me to contribute book chapters - that's about it."

(http://en.wikipedia.org/wiki/Talk:Eliezer_Yudkowsky)

I think it's worth emphasizing that ideas aren't "worth investigating" or "not worth investigating" in themselves; different people will have different opportunities to investigate things at different costs, and will have different info and care about the answers to different degrees.

We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them. But when they do that, don't they risk being considered crackpots themselves? This is some version of "Tolerate tolerance." If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.

(Original post.)

More generally, one can't optimize a process for generating answers by also feeding it such answers in particular cases where they already happen to be available. Adding this one rule collapses the whole process, as it begins to reuse arbitrary and trivial data instead of actually doing any work. In particular, this is the reason for the groupthink failure mode. (And Löb's theorem!)

Thus, it's more precise to say that the problem results from taking on faith that intolerance by others is justified, rather than from protesting against excessive tolerance shown by others. When you believe others are wrong in showing excessive tolerance, you make that judgment by yourself; you would be wise not to make it unless you know enough. On the other hand, if you observe that others in your group (or in the mainstream) don't tolerate a certain class of pursuits, concluding from that alone that this class of pursuits doesn't deserve tolerance is a failure mode, since this social dynamic could red-flag anything, no matter its merit. All it takes is the ability to reliably induce that one inferential step: a person newly introduced to a question looks at the existing consensus and leaps to a conclusion from that alone, without actually considering the question.

First idea: check if the proposer uses the techniques of rationality and science. Does he support claims with evidence? Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim? Does he appeal to dogma or authority? If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.

More:

Does he use math or formal logic when a claim demands it? Does he accuse others of suppressing his views?

The Crackpot Index is helpful, though it is physics-centric.
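A checklist like the one above can be mechanized in the same spirit. The flags and point values here are illustrative inventions, not Baez's actual index.

```python
# Toy checklist scorer inspired by the Crackpot Index; these flags and
# point values are illustrative, not Baez's actual list.
POINTS = {
    "no_supporting_evidence": 10,
    "data_not_shared": 5,
    "appeals_to_dogma_or_authority": 5,
    "no_math_where_math_is_needed": 5,
    "claims_views_are_suppressed": 10,
}

def crackpot_score(flags):
    # Sum the points for every red flag observed in the claim.
    return sum(POINTS[f] for f in flags if f in POINTS)

score = crackpot_score({"data_not_shared", "claims_views_are_suppressed"})
print(score)  # → 15
```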

There isn't any universal distinguishing rule, but in general you want to ask: would a world where this were false look just like our own? A couple of useful specific guidelines:

  1. Is this something people would be disposed to believe even if it were false?

  2. Is this something that would be impossible to disprove even if it were false?

Flying saucers, psychic powers, and the Singularity are good examples here: suppose we lived in a world where they were not real; what would it look like? Answer: people would still believe in them, because we are disposed to do so (I can personally vouch for that, having spent a little time as a teenager looking into flying saucers, quite a bit more time looking into psychic powers, and having been a Singularitarian until a few years ago), and there would be no way to disprove them, because each comes with a story about why it is unobservable. Such a world would look just like our own.

For a borderline case, I'll suggest cold fusion. Clearly it's something we would like to believe, but it was nicely testable (the required conditions could be created with present-day technology, and low-temperature fusion reactions obviously aren't going to be motivated to hide from us or fail to work in the presence of skeptics), so it was worth investigating - and it was duly investigated and refuted. (Belief in cold fusion now would of course be bunk.)

Bryan Caplan spends time refuting Austrians - he thinks Austrian Economics is a mistake that wastes the time of a lot of quality free market economists.