Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year). 

It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem - Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risks and not AI risk because he had a higher than average estimate of the threat of nuclear war. 

It seems like it might be a good idea to know what the probability of each of these risks is. Is there a sensible way for these people to correct for the fact that the people studying these risks are those who have high estimates of them in the first place?


However, there's a problem - Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war.

This isn't right. Eliezer got into the AI field because he wanted to make a Singularity happen sooner, and only later determined that AI risk is high. Even if Eliezer thought that nuclear war is a bigger risk than AI, he would still be in AI, because he would be thinking that creating a Singularity ASAP is the best way to prevent nuclear war.

Is there a sensible way for these people to correct for the fact that the people studying these risks are those who have high estimates of them in the first place?

I suggest that if you have the ability to evaluate the arguments on an object level, then do that; otherwise, try to estimate P(E|H1) and P(E|H2), where E is the evidence you see, H1 is the "low risk" hypothesis (i.e., AI risk is actually low), and H2 is the "high risk" hypothesis, and apply Bayes' rule.
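To make that suggestion concrete, here is a minimal sketch of the two-hypothesis update in Python; the function is just Bayes' rule, and the numbers plugged in are purely illustrative, not anyone's actual estimates:

    # Bayes' rule for two exhaustive hypotheses:
    # H1 = "AI risk is actually low", H2 = "AI risk is actually high".
    def posterior_high_risk(prior_h2, p_e_given_h1, p_e_given_h2):
        prior_h1 = 1.0 - prior_h2
        p_e = p_e_given_h2 * prior_h2 + p_e_given_h1 * prior_h1
        return p_e_given_h2 * prior_h2 / p_e

    # Illustrative only: if the evidence (an apparent expert warning about AI risk)
    # is twice as likely under H2 as under H1, a prior of 0.1 moves to about 0.18.
    print(posterior_high_risk(prior_h2=0.1, p_e_given_h1=0.3, p_e_given_h2=0.6))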

Here's a simple argument for high AI risk. "AI is safe" implies that either superintelligence can't be created by humans, or any superintelligence we do create will somehow converge to a "correct" or "human-friendly" morality. Either of these may turn out to be true, but it's hard to see how anyone could (justifiably) have high confidence in either of them at this point in our state of knowledge.

As for P(E|H1) and P(E|H2), I think it's likely that even if AI risk is actually low, there would be someone in the world trying to make a living out of "crying wolf" about AI risk, so that alone (i.e., an apparent expert warning about AI risk) doesn't increase the posterior probability of H2 much. But what would be the likelihood of that person also creating a rationalist community and trying to "raise the sanity waterline"?

I think people should discount risk estimates fairly heavily when an organisation is based around doom mongering. For instance, The Singularity Institute, The Future of Humanity Institute and the Bulletin of the Atomic Scientists all seem pretty heavily oriented around doom. Such organisations initially attract those with high risk estimates, and they then actively try and "sell" their estimates to others.

Obtaining less biased estimates seems rather challenging. The end of the world would obviously be an unprecedented event.

The usual way of eliciting probability is with bets. However, with an apocalypse, this doesn't work too well. Attempts to use bets have some serious problems.

I think people should discount risk estimates fairly heavily when an organisation is based around doom mongering.

That's why I refuse to join SIAI or FHI. If I did, I'd have to discount my own risk estimates, and I value my opinions too much for that. :)

One should read material written by the people in the organization before it was formed, and grant it extra credence in proportion to how much one suspects the organization of having written its bottom line first.

Note, however, that this systematically fails to account for the selection bias whereby doom-mongering organisations arise from groups of individuals with high risk estimates.

In the case of Yudkowsky, he started out all "yay, Singularity" - and was actively working on accelerating it:

Since then, Yudkowsky has become not just someone who predicts the Singularity, but a committed activist trying to speed its arrival. "My first allegiance is to the Singularity, not humanity," he writes in one essay. "I don't know what the Singularity will do with us. I don't know whether Singularities upgrade mortal races, or disassemble us for spare atoms.... If it comes down to Us or Them, I'm with Them."

This was written before he hit on the current doom-mongering scheme. According to your proposal, it appears that we should be assigning such writings extra credence - since they reflect the state of play before the financial motives crept in.

Yes, those writings were also free from financial motivation and less subject to the author's need to justify them than currently produced ones. However, notice that other thoughts, also from before there was a financial motivation, militate against them rather strongly.

An analogy: if someone wants a pet and begins by thinking that they would be happier with a cat than a dog, and writes why, and then thinks about it more and decides that no, they'd be happier with a dog, and writes why, and then gets a dog, and writes why that was the best decision at the time with the evidence available, and in fact getting a dog was actually the best choice, the first two sets of writings are much more free from this bias than the last set. The last set is valuable because it was written with the most information available and after the most thought. The second set is more valuable than the first set in this way. The first set is in no similar way more valuable than the second set.

As an aside, that article is awful. Most glaringly, he said:

To Asimov, only three laws were necessary

I don't see a special problem...evaluate the arguments, try to correct for biases. Business as usual. Or do you suspect there is a new type of bias at work here?

One way of testing this is to see whether people are willing to discuss existential risk threats that cannot be solved by giving them money. Such comments do exist (see for example Stephen Hawking's comments about the danger of aliens). It is however interesting to note that he's made similar remarks about the threat of AI. (See e.g. here). I'm not sure whether such evaluations are relevant.

Also, I don't think it follows that people like Yudkowsky and Hellman necessarily decide to study the existential risks they do because they have a higher than average estimate for the threats in question. They may just have internalized the threats more. Most humans simply don't internalize existential risks in a way that alters their actions, even if they are willing to acknowledge high probabilities of problems.

An attitude of "faster" might help a little to deal with the threat from aliens.

Our actions can probably affect the issue - at least a little - so money might help.

Hawking's comments are pretty transparently more about publicity than fundraising, though.

I'd prefer humanity choose to cooperate with aliens if we are in the stronger position. But I agree that we shouldn't expect them to do the same, and that this does argue for generic importance of developing technology faster. (On the other hand, intelligent life seems to be really rare, so trying to outrace others might be a bad idea if there isn't much else, or if the reason there's so little is because of some future filtration event.)

People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).

Nuclear weapons have been available on the "black market" (thanks to sloppy soviet handling practices) for decades, yet no terrorist or criminal group has ever used a nuclear fission initiation device. Nuclearrisk.org claims "terrorists may soon get their own button on the vest", citing Al-Qaeda's open desire to acquire nuclear weapons.

I am unable, if I assume fully honest and rational assessments, to reconcile these points of fact with one another. They disagree with each other. Given, furthermore, that many of these assessments of risk seem to carry the implicit assumption that if a single nuke is used, the whole world will start glowing in the dark (see http://news.stanford.edu/news/2009/july22/hellman-nuclear-analysis-071709.html for an example, from Martin Hellman himself), it gets even more absurd.

In other words, folks need to be careful, when crafting expert opinions, to avoid déformation professionnelle.

Nuclear weapons have been available on the "black market" (thanks to sloppy soviet handling practices) for decades, yet no terrorist or criminal group has ever used a nuclear fission initiation device

Cite please. From Pinker's new book:

It’s really only nuclear weapons that deserve the WMD acronym. Mueller and Parachini have fact-checked the various reports that terrorists got “just this close” to obtaining a nuclear bomb and found that all were apocryphal. Reports of “interest” in procuring weapons on a black market grew into accounts of actual negotiations, generic sketches morphed into detailed blueprints, and flimsy clues (like the aluminum tubes purchased in 2001 by Iraq) were overinterpreted as signs of a development program.

Each of the pathways to nuclear terrorism, when examined carefully, turns out to have gantlets of improbabilities. There may have been a window of vulnerability in the safekeeping of nuclear weapons in Russia, but today most experts agree it has been closed, and that no loose nukes are being peddled in a nuclear bazaar. Stephen Younger, the former director of nuclear weapons research at Los Alamos National Laboratory, has said, “Regardless of what is reported in the news, all nuclear nations take the security of their weapons very seriously.” Russia has an intense interest in keeping its weapons out of the hands of Chechen and other ethnic separatist groups, and Pakistan is just as worried about its archenemy Al Qaeda. And contrary to rumor, security experts consider the chance that Pakistan’s government and military command will fall under the control of Islamist extremists to be essentially nil. Nuclear weapons have complex interlocks designed to prevent unauthorized deployment, and most of them become “radioactive scrap metal” if they are not maintained. For these reasons, the forty-seven-nation Nuclear Security Summit convened by Barack Obama in 2010 to prevent nuclear terrorism concentrated on the security of fissile material, such as plutonium and highly enriched uranium, rather than on finished weapons.

The dangers of filched fissile material are real, and the measures recommended at the summit are patently wise, responsible, and overdue. Still, one shouldn’t get so carried away by the image of garage nukes as to think they are inevitable or even extremely probable. The safeguards that are in place or will be soon will make fissile materials hard to steal or smuggle, and if they went missing, it would trigger an international manhunt. Fashioning a workable nuclear weapon requires precision engineering and fabrication techniques well beyond the capabilities of amateurs. The Gilmore commission, which advises the president and Congress on WMD terrorism, called the challenge “Herculean,” and Allison has described the weapons as “large, cumbersome, unsafe, unreliable, unpredictable, and inefficient.” Moreover, the path to getting the materials, experts, and facilities in place is mined with hazards of detection, betrayal, stings, blunders, and bad luck. In his book On Nuclear Terrorism, Levi laid out all the things that would have to go right for a terrorist nuclear attack to succeed, noting, “Murphy’s Law of Nuclear Terrorism: What can go wrong might go wrong.” Mueller counts twenty obstacles on the path and notes that even if a terrorist group had a fifty-fifty chance of clearing every one, the aggregate odds of its success would be one in a million. Levi brackets the range from the other end by estimating that even if the path were strewn with only ten obstacles, and the probability that each would be cleared was 80 percent, the aggregate odds of success facing a nuclear terrorist group would be one in ten. Those are not our odds of becoming victims. A terrorist group weighing its options, even with these overly optimistic guesstimates, might well conclude from the long odds that it would be better off devoting its resources to projects with a higher chance of success. None of this, to repeat, means that nuclear terrorism is impossible, only that it is not, as so many people insist, imminent, inevitable, or highly probable.
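For what it's worth, the arithmetic in that last passage checks out; a quick sketch, treating each obstacle as an independent hurdle (using Mueller's and Levi's guesstimates, and obviously a simplification):

    # Mueller: 20 obstacles, a fifty-fifty chance of clearing each -> about one in a million.
    # Levi: 10 obstacles, an 80% chance of clearing each -> about one in ten.
    print(0.5 ** 20)  # ~9.5e-07, roughly one in a million
    print(0.8 ** 10)  # ~0.107, roughly one in ten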

Cite, please.

200 Soviet nukes lost in Ukraine -- article from Sept 13, 2002. There have been reported losses of nuclear submarines at sea since then as well (though those are unlikely to be recoverable). Note: even if that window is closed now, it was open then, and no terrorist group used that channel to acquire nukes in the entirety of that window of opportunity -- nor is there, as your citation notes, even so much as a recorded attempt to do so.

When dozens of disparate extremist groups fail to even attempt to acquire a specific category of weapon, we can safely generalize to a principle governing how 'terrorists' interact with 'nukes' (in this case): they are exceedingly unlikely to want to do so.

In this case, I assert it is because all such groups are inherently political, and as such the knowable political fallout (pun intended) of using a nuclear bomb is sufficient that it in and of itself acts as a deterrent against their use: I am possessed of a strong belief that any terrorist organization that used a nuclear bomb would be eradicated by the governments of every nation on the planet. There is no single event more likely to unify the hatred of all mankind against the perpetrator than the rogue use of a nuclear bomb; we have stigmatized them to that great an extent.

200 Soviet nukes lost in Ukraine -- article from Sept 13, 2002. There have been reported losses of nuclear submarines at sea since then as well (though those are unlikely to be recoverable).

A Pravda article about an accounting glitch is not terribly convincing. Accounting problems do not even mean that the bombs were accessible at any point (assuming they existed), much less that they have been available 'on the "black market" (thanks to sloppy soviet handling practices) for decades'! Srsly.

(Nor do lost submarines count; the US and Russia have difficulties in recovering them, black-market groups are right out, even the drug cartels can barely build working shallow subs.)

A Pravda article about an accounting glitch is not terribly convincing. Accounting problems do not even mean that the bombs were accessible at any point (assuming they existed), much less that they have been available 'on the "black market" (thanks to sloppy soviet handling practices) for decades'! Srsly.

You've missed the point of what I was asserting with that article.

I was demonstrating that the Soviets did not keep proper track of their nuclear weapons, to the point where even they did not know how many they had. The rest follows from there with public-knowledge information, not the least of which is the extremity of corruption that existed in the CCCP.

Risk mitigation groups would gain some credibility by publishing concrete probability estimates of "the world will be destroyed by X before 2020" (and similar for other years). As many of the risks are rather short events (think nuclear war / asteroid strike / singularity), the world would be destroyed by a single cause, so the respective probabilities can be summed. I would not be surprised if the total probability came out well above 1. Has anybody ever compiled a list of separate estimates?

On a related note, how much of the SIAI is financed on credit? Any group which estimates high risks of disastrous events should be willing to pay higher interest rates than the market average, as the expected amount of repayment is reduced by the nontrivial probability of everyone dying before the contract matures.
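As a back-of-the-envelope illustration of that last point (the rates and doom probability below are hypothetical, not anyone's published figures): a lender who shares the borrower's risk estimate only breaks even if the rate compensates for the chance of never being repaid.

    # Hypothetical one-year loan that is repaid only if the world survives to maturity.
    # Break-even condition for the lender: (1 - p_doom) * (1 + r) = 1 + r_market
    def break_even_rate(r_market, p_doom):
        return (1.0 + r_market) / (1.0 - p_doom) - 1.0

    # With a 5% market rate and a 10% annual doom estimate, the fair rate is ~16.7%.
    print(break_even_rate(r_market=0.05, p_doom=0.10))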


So you more highly value your immediate personal comfort than you do the long-term survival of the human race?

I don't care at all about the long-term survival of the human race. Is there any reason I should? I care about the short-term survival of humanity but only because it affects me and other people that I care about. But going to prison would also affect me and the people I care about so it would be a big deal. At least like 25% as bad as the end of humanity.

I suspect what you lack is imagination and determination.

Certainly that is true in this case. I'm not going to put a lot of work into developing an elaborate plan to do something that I don't think should be done.

I don't care at all about the long-term survival of the human race. Is there any reason I should?

Define "long-term", then, as "more than a decade from today". I.e.; "long-term" includes your own available lifespan.

But going to prison would also affect me and the people I care about so it would be a big deal. At least like 25% as bad as the end of humanity.

Would you be so kind as to justify this assertion for me? I find my imagination insufficient to the task of assigning equivalent utility metrics to "me in prison" == 0.25x "end of the species".

Certainly that is true in this case. I'm not going to put a lot of work into developing an elaborate plan to do something that I don't think should be done.

... I really hate it when people reject counterfactuals on the basis of their being counterfactuals alone. It's a dishonest conversational tactic.


Would you be so kind as to justify this assertion for me? I find my imagination insufficient to the task of assigning equivalent utility metrics to "me in prison" == 0.25x "end of the species".

Well, I give equivalent utility to "death of all the people I care about" and "end of the species." Thinking about it harder I feel like "death of all the people I care about" is more like 10-100X worse than my own death. Me going to prison for murder is about as bad as my own death, so it's more like .01-.1x end of humanity. Can you imagine that?

... I really hate it when people reject counterfactuals on the basis of their being counterfactuals alone. It's a dishonest conversational tactic.

I was considering writing a long thing about your overconfidence in thinking you could carry out such a plan without any (I am presuming) experience doing that kind of thing. I was going to explain how badly you are underestimating the complexity of the world around you and overestimating how far you can stray from your own personal experience and still make reasonable predictions. But this is just a silly conversation that everyone else on LW seems to hate, so why bother?

Me going to prison for murder is about as bad as my own death, so it's more like .01-.1x end of humanity. Can you imagine that?

I'm curious, now, as to what nation or state you live in.

Thinking about it harder I feel like "death of all the people I care about" is more like 10-100X worse than my own death.

Well -- in this scenario you are "going to die" regardless of the outcome. The only question is whether the people you care about will. Would you kill others (who were themselves also going to die if you did nothing) and allow yourself to die, if it would save people you cared about?

(Also, while it can lead to absurd consequences -- Eliezer's response to the Sims games for example -- might I suggest a re-examination of your internal moral consistency? As it stands it seems like you're allowing many of your moral intuitions to fall in line with evolutionary backgrounds. Nothing inherently wrong with that -- our evolutionary history has granted us a decent 'innate' morality. But we who 'reason' can do better.)

I was considering writing a long thing about your overconfidence in thinking you could carry out such a plan without any (I am presuming) experience doing that kind of thing.

I didn't list any plan. This was intentional. I'm not going to give pointers on how to do exactly what this topic entails to others who might be seeking them out for reasons I personally haven't vetted. That, unlike what some others have criticized about this conversation, actually would be irresponsible.

That being said, the fact that you're addressing this to the element you are is really demonstrating a further non sequitur. It doesn't matter whether or not you believe the scenario plausible: what would your judgment be of the rightfulness of carrying out the action yourself in the absence of democratic systems?

that everyone else on LW seems to hate, so why bother?

  1. Why allow your opinions to be swayed by the emotional responses of others?

  2. In my case, I'm currently sitting at -27 on my 30-day karma score. That's not even the lowest I've been in the last thirty days. I'm not really worried about my popularity here. :)


"because Eliezer is probably the world expert on AI risk"

There are no experts on AI risk. There's nowhere to get the expertise from. He read some SF, got caught up in an idea, did not study (through self-study or otherwise) CS or any actually relevant body of knowledge to the point of producing anything useful, and he is a very convincing writer. The experts? You'll get experts in 2050. He's a dilettante.

People follow some sort of distribution in their risk estimates. Eliezer is just at the far, far-off end of the bell curve on the risk estimate for AI, among those with writing skills. He does make some interesting points, but he's not a risk estimator.
