Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).
It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem - Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risks rather than AI risk because he had a higher-than-average estimate of the threat of nuclear war.
It seems like it would be a good idea to know what the probability of each of these risks actually is. Is there a sensible way to correct for the fact that the people studying these risks are those who had high estimates of them in the first place?
Note, however, that this systematically fails to account for the selection bias whereby doom-mongering organisations arise from groups of individuals with high risk estimates.
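The selection effect being described can be made concrete with a toy Bayesian model (a sketch only - the two-hypothesis setup and every number in it are illustrative assumptions, not anyone's actual estimates):

```python
# Toy model of the selection effect (all numbers hypothetical).
# Hypotheses: the world is "dangerous" with some prior probability.
# Each of n people independently forms a high personal risk estimate
# with probability p_high_d if the world is dangerous, p_high_s if safe.
# Anyone with a high estimate founds an institute, so what an outside
# observer actually sees is "at least one such institute exists".

def posterior(prior_d, p_high_d, p_high_s, n_people):
    """P(dangerous | at least one of n people formed a high estimate)."""
    p_obs_d = 1 - (1 - p_high_d) ** n_people
    p_obs_s = 1 - (1 - p_high_s) ** n_people
    num = p_obs_d * prior_d
    return num / (num + p_obs_s * (1 - prior_d))

# Naive reading: treat the expert as one randomly sampled person (n = 1).
print(posterior(0.10, 0.5, 0.05, 1))       # roughly 0.53: a large update

# Selection-aware reading: the expert is drawn from a large pool, and
# someone was almost certain to end up with a high estimate either way.
print(posterior(0.10, 0.5, 0.05, 10_000))  # roughly 0.10: back to the prior
```

The point of the sketch: once you condition on the fact that whoever held the highest estimate is the one who founded the organisation, the mere existence of a worried expert carries very little evidence, and you are left needing to weigh the expert's actual arguments rather than their headline probability.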
In the case of Yudkowsky, he started out enthusiastic about the Singularity - and was actively working on accelerating it:
This was written before he hit on the current doom-mongering scheme. According to your proposal, it appears that we should be assigning such writings extra credence - since they reflect the state of play before the financial motives crept in.
Yes, those writings were also free from financial motivation, and less subject to the author's need to justify them than currently produced ones. However, notice that other thoughts of his, also from before there was any financial motivation, militate against them rather strongly.
An analogy: if someone wants a pet and begins by thinking that they would be happier with a cat than a dog, and writes why, and then thinks about it more and decides that no, they'd be happier with a dog, and writes why, and then gets a dog, and writes why that was the best decision at the time with the evidence available, and in fact getting a dog was actually the best choice, the first two sets of writings are much more free from this bias than the last set. The last set is valuable because it was written with the most information available and after the most thought. The second set is more valuable than the first set in this way. The first set is in no similar way more valuable than the second set.
As an aside, that article is awful. Most glaringly, he said: