Nuclear weapons seem like the marquee example of rapid technological change after crossing a critical threshold.

Looking at the numbers, it seems to me like:

  • During WWII, and probably for several years after the war, the cost per TNT equivalent of manufacturing nuclear weapons was comparable to the cost of conventional explosives (AI Impacts estimates a manufacturing cost of ~$25M per weapon).
  • Amortizing the cost of the Manhattan Project, dropping all the nuclear weapons produced in WWII would have been cost-competitive with traditional firebombing (which this thesis estimates at 5k GBP (=$10k?) per death, vs. ~100k deaths per nuclear weapon), and by 1950, when stockpiles had grown to >100 weapons, nuclear weapons were an order of magnitude cheaper. (They are also much easier to deliver, and at that point the development cost was comparable to the manufacturing cost.) A rough version of this arithmetic is sketched below.

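As a sanity check on the bullets above, here is a rough back-of-the-envelope sketch of the arithmetic. The ~$2B Manhattan Project cost is the commonly cited total; the other inputs are just the rough estimates quoted above, so the output is illustrative rather than authoritative.

```python
# Back-of-the-envelope version of the cost comparison above. The ~$2B
# Manhattan Project figure is the commonly cited total; the other numbers are
# the rough estimates quoted in the bullets, so the output is illustrative only.

MANHATTAN_PROJECT_COST = 2e9       # ~$2B development cost (commonly cited)
COST_PER_WEAPON = 25e6             # AI Impacts' manufacturing estimate, $ per weapon
DEATHS_PER_WEAPON = 1e5            # ~100k deaths per weapon, per the post
FIREBOMBING_COST_PER_DEATH = 1e4   # ~5k GBP ≈ $10k per death, per the cited thesis

def nuclear_cost_per_death(n_weapons):
    """Amortize development cost over n_weapons, plus per-weapon manufacturing cost."""
    total_cost = MANHATTAN_PROJECT_COST + n_weapons * COST_PER_WEAPON
    return total_cost / (n_weapons * DEATHS_PER_WEAPON)

for n in (2, 100):  # the two weapons dropped in WWII vs. the ~100-weapon 1950 stockpile
    cpd = nuclear_cost_per_death(n)
    print(f"{n:>3} weapons: ~${cpd:,.0f} per death, "
          f"{FIREBOMBING_COST_PER_DEATH / cpd:.0f}x the cost-effectiveness of firebombing")
```

With these inputs the two WWII weapons come out roughly at firebombing's ~$10k per death, and the ~100-weapon 1950 stockpile comes out roughly an order of magnitude cheaper, matching the claims above.
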
Separately, it seems like a 4 year lead in nuclear weapons would represent a decisive strategic advantage, which is much shorter than for any other technology. My best guess is that a 2 year lead wouldn't do it, but I'd love to hear an assessment of the situation from someone who understands the relevant history/technology better than I do.

So my understanding is: it takes about 4 years to make nuclear weapons and another 4 years for them to substantially overtake conventional explosives (against a 20 year doubling time for the broader economy). Having a 4 year lead corresponds to a decisive strategic advantage.
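
For a sense of scale, here is a minimal sketch of how little a lead of a few years buys at ordinary growth rates, assuming the ~20 year economic doubling time mentioned above; the point is only to illustrate why a decisive 4 year lead is unusual.

```python
# How big an edge is an N-year lead at ordinary growth rates? A minimal sketch
# using the ~20-year economic doubling time mentioned above; illustrative only.

DOUBLING_TIME_YEARS = 20

for lead_years in (2, 4):
    advantage = 2 ** (lead_years / DOUBLING_TIME_YEARS)
    print(f"A {lead_years}-year lead ≈ {advantage:.2f}x the lagging side's overall capacity")

# A 4-year lead is only ~1.15x in broad economic terms, which is why a
# technology where a 4-year lead is decisive stands out.
```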

Does that understanding seem roughly right? What's most wrong or suspect? I don't expect to do a detailed investigation since this is pretty tangential to my interests, but the example is in the back of my mind slightly influencing my views about AI, and so I'd like it to be roughly accurate or tagged as inaccurate. Likely errors: (a) you can get a decisive strategic advantage with a smaller lead, (b) cost-effectiveness improved more rapidly after the war than I'm imagining, or (c) those numbers are totally wrong for one reason or another.

I think the arguments for a nuclear discontinuity are really strong, much stronger than for any other technology. Physics fundamentally has a discrete list of kinds of potential energy, which have different characteristic densities, with a huge gap between chemical and nuclear energy densities. And the dynamics of war are quite sensitive to energy density (whereas nuclear power, where energy density matters much less, doesn't seem to have been a major discontinuity). And the dynamics of nuclear chain reactions predictably make it hard for nuclear weapons to be "worse" in any way other than being more expensive (you can't really make them cheaper by making them weaker or less reliable). So the continuous progress narrative isn't making a strong prediction about this case.

(Of course, progress in nuclear weapons involves large-scale manufacturing. Today the economy grows at roughly the same rate as in 1945, but information technology can change much more rapidly.)

35 comments

I think we probably had a decisive advantage (and could e.g. have prevented the Soviets from developing nuclear weapons) but didn't push it. I'd love to hear from someone with a better knowledge of the history though.

Von Neumann apparently thought so:

In 1948, Von Neumann became a consultant for the RAND Corporation. RAND (Research ANd Development) was founded by defense contractors and the Air Force as a "think tank" to "think about the unthinkable." Their main focus was exploring the possibilities of nuclear war and the possible strategies for such a possibility.

Von Neumann was, at the time, a strong supporter of "preventive war." Confident even during World War II that the Russian spy network had obtained many of the details of the atom bomb design, Von Neumann knew that it was only a matter of time before the Soviet Union became a nuclear power. He predicted that were Russia allowed to build a nuclear arsenal, a war against the U.S. would be inevitable. He therefore recommended that the U.S. launch a nuclear strike at Moscow, destroying its enemy and becoming a dominant world power, so as to avoid a more destructive nuclear war later on. "With the Russians it is not a question of whether but of when," he would say. An oft-quoted remark of his is, "If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?"

Just a few years after "preventive war" was first advocated, it became an impossibility. By 1953, the Soviets had 300-400 warheads, meaning that any nuclear strike would be met with effective retaliation.

I guess the U.S. didn't launch a first strike because it would have been politically unacceptable to kill millions of people in a situation that couldn't be viewed as self-defense. Tangentially, this seems relevant to a long-running disagreement between us, about how bad it is if AI can't help us solve moral/philosophical problems, but only acquire resources and keep us in control. What counts as a decisive strategic advantage depends on one's values and philosophical outlook in general, and this is an instance of moral/philosophical confusion potentially being very costly, if the right thing to do from the perspective of the "real values" (e.g., CEV) of the Americans was to do (or threaten) a first strike, in order to either take over more of the universe for themselves or to prevent greater existential risk in the long run.

I agree that "failure to sort out philosophy/values early" has some costs, and this is a reasonable example. The question is: what fraction of the value of the future is sacrificed each subjective year?

Off the top of my head my guess is something like 1-2% per doubling. It sounds like your number is much larger.

(It's a little bit hard to say exactly what counts. I'm talking about something like "value destroyed due to deficiencies in state of the art understanding" rather than "value destroyed due to all philosophical errors by everyone," and so am not counting e.g. the costs from a "better dead than red" mentality.)

I do agree that measured in wall-clock time this is going to become a problem fast. So if AI accelerates problems more than philosophy/values, then we pay an additional cost that depends on the difference between (cumulative additional philosophy/values challenges introduced from AI) - (cumulative additional philosophy/values progress due to AI). I'd eyeball that number at ~1 doubling by default, so I see this cost as a further 1-2% of the value of the future.

All of this stands against a 10-20% loss due to AI risk proper, and a 0.1-1% risk of extinction from non-AI technologies in the marginal pre-AGI years. So that's where I'm coming from when I'm not super concerned about this problem.

(These numbers are very made up, my intention is to give a very rough sense of my intuitive model and quantitative intuitions. I could easily imagine that merely thinking about the overall structure of the problem would change my view, not to mention actually getting into details or empirical data.)

How big of a loss do you think the US sustained by not following von Neumann's suggestion to pursue a decisive strategic advantage? (Or if von Neumann's advice was actually wrong according to the Americans' "real values", how bad would it have been to follow it?)

What do you think is the state of the art understanding in how one should divide resources between saving/investing, personal/family consumption, and altruistic causes? How big of a loss from what's "actually right" do you think that represents? (Would it be wrong for someone with substantial moral uncertainty to estimate that loss to be >10%?)

(It’s a little bit hard to say exactly what counts. I’m talking about something like “value destroyed due to deficiencies in state of the art understanding” rather than “value destroyed due to all philosophical errors by everyone,” and so am not counting e.g. the costs from a “better dead than red” mentality.)

But one of my concerns is that AI will exacerbate the problem that the vast majority of people do not have a state of the art understanding of philosophy, for example by causing a lot of damage based on their incorrect understandings, or freezing or extrapolating from a base of incorrect understandings, or otherwise preempting a future where most people eventually fix their philosophical errors. At the same time AI is an opportunity to help solve this problem, if AI designers are foresighted enough (and can coordinate with each other to avoid races, etc.). So I don't understand why you deliberately exclude this.

Is it because you think AI is not a good opportunity to solve this problem? Can you give your sense of how big this problem is anyway?

So if AI accelerates problems more than philosophy/values, then we pay an additional cost that depends on the difference between (cumulative additional philosophy/values challenges introduced from AI) - (cumulative additional philosophy/values progress due to AI). I'd eyeball that number at ~1 doubling by default, so I see this cost as a further 1-2% of the value of the future.

I'm not sure that's the relevant number to look at. Even if AI doesn't accelerate problems more than philosophy/values, we'd still want AIs that can accelerate philosophy/values even more, to reduce the "normal" losses associated with doublings, which would be substantial even at 1% per doubling if added up over tens of doublings.
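
To illustrate how such per-doubling losses add up, here is a purely arithmetic sketch using the 1-2% per doubling figures discussed above (those figures are themselves the thing under debate, so this only illustrates the compounding, not the true values).

```python
# Purely arithmetic illustration of how small per-doubling losses compound,
# using the 1-2%-per-doubling figures discussed above (not a claim about the
# true numbers).

for loss_per_doubling in (0.01, 0.02):
    for doublings in (10, 30):
        value_remaining = (1 - loss_per_doubling) ** doublings
        print(f"{loss_per_doubling:.0%} per doubling over {doublings} doublings: "
              f"~{1 - value_remaining:.0%} of the value lost")

# Even 1% per doubling loses ~26% of the value over 30 doublings, which is the
# sense in which the "normal" losses are substantial when added up.
```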

I agree that people failing to act on the basis of state-of-the-art understanding is a potentially large problem, and that it would be good to use AI as an opportunity to address that problem. I didn't include it just because it seems like a separate thing (in this context I don't see why philosophy should be distinguished from people acting on bad empirical views). I don't have a strong view on this.

I agree that AI could fix the philosophy gap. But in terms of urgency of the problem, if AI accelerates everything equally, it still seems right to look at the cost per subjective unit of time.

10% doesn't sound like a plausible estimate for the value destroyed from philosophy giving worse answers on saving/consumption/altruism. There are lots of inputs other than philosophical understanding into that question (empirical facts, internal bargaining, external bargaining, etc.), and that problem is itself one of a very large number of determinants of how well the future goes. If you are losing 10% EV on stuff like this, it seems like you are going to lose all the value with pretty high probability, so the less brittle parts of your mixture should dominate.

I didn’t include it just because it seems like a separate thing

I see. I view them as related because both can potentially have the same solution, namely a solution to meta-philosophy that lets AIs make philosophical progress and convinces people to trust the philosophy done by AIs. I suppose you could try to solve the people-acting-on-bad-philosophical-views problem separately, by convincing them to adopt better views, but it seems hard to change people's minds this way on a large scale.

There are lots of inputs other than philosophical understanding into that question (empirical facts, internal bargaining, external bargaining, etc.)

I can easily imagine making >10% changes in my allocation based on changes in my philosophical understanding alone, so I don't see why it matters that there are also other inputs.

that problem is itself one of a very large number of determinants of how well the future goes

On one plausible view, a crucial determinant of how well the future goes is how much of the universe is controlled by AIs/people who end up turning their piece of the universe over to the highest-value uses, which in turn is largely determined by how much they save/invest compared to AIs/people who end up with wrong values. That seems enough to show that 10% loss due to this input is plausible, regardless of other determinants.

so the less brittle parts of your mixture should dominate.

What does this mean and how is it relevant?

Would it be wrong for someone with substantial moral uncertainty to estimate that loss to be >10%?
I can easily imagine making >10% changes in my allocation

Is your argument that 10% is the expected loss, or that it's plausible that you'd lose 10%?

I understood Paul to be arguing against 10% being the expected loss, in which case potentially making >10% changes in allocation doesn't seem like a strong counterargument.

Is your argument that 10% is the expected loss, or that it’s plausible that you’d lose 10%?

I think >10% expected loss can probably be argued for, but giving a strong argument would involve going into the details of my current moral/philosophical/empirical uncertainties and my resource allocations, then considering various ways my uncertainties could be resolved (various possible combinations of philosophical and empirical outcomes), estimating my expected loss in each scenario, and then averaging the losses. This is a lot of work, I'm a bit reluctant to do it for privacy/signaling reasons, and I don't know if Paul would consider my understanding in this area to be state of the art (he didn't answer my question as to what he thinks the state of the art is). So for now I'm pointing out that in at least some plausible scenarios the loss is at least 10%, and mostly just trying to understand why Paul thinks 10% expected loss is way too high rather than make a strong argument of my own.

I understood Paul to be arguing against 10% being the expected loss, in which case potentially making >10% changes in allocation doesn’t seem like a strong counterargument.

Does it help if I restated that as, I think that with high probability if I learned what my "true values" actually are, I'd make at least a 10% change in my resource allocations?

Does it help if I restated that as, I think that with high probability if I learned what my "true values" actually are, I'd make at least a 10% change in my resource allocations?

Yes, that's clear.

So for now I'm pointing out that in at least some plausible scenarios the loss is at least 10%, and mostly just trying to understand why Paul thinks 10% expected loss is way too high rather than make a strong argument of my own.

I had an argument in mind that I thought Paul might be assuming, but on reflection I'm not sure it makes any sense (and so I update away from it being what Paul had in mind). But I'll share it anyway in a child comment.

Potentially confused argument:

Suppose 1) you're just choosing between spending and saving, 2) by default you're going to allocate 50-50 to each, and 3) you know that there are X considerations, such that after you consider each one, you'll adjust the ratio by 2:1 in one direction or the other.

If X is 1, then you expect to adjust the ratio by a factor of 2. If X is 10, you expect to adjust by a factor of sqrt(10)*2.

So, the more considerations there are that might affect the ratio, the more likely it is that you'll end up with allocations close to 0% or 100%. And so, depending on how the realized value is related to the allocation ratio, skipping one of the considerations might not change the EV that much.
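
Here is a minimal simulation of that toy model, assuming a 50-50 starting split and independent considerations that each multiply the spend:save odds by 2 or 1/2 with equal probability; the X = 30 case is an extra data point beyond the X = 1 and X = 10 discussed above, and the whole thing is illustrative only.

```python
# Minimal simulation of the toy model above: start from a 50-50 spend/save
# split and let each of X independent considerations multiply the spend:save
# odds by 2 or 1/2 with equal probability. The X = 30 case is an extra data
# point beyond the parent comment's X = 1 and X = 10; illustrative only.
import random

def fraction_extreme(n_considerations, trials=100_000):
    """Fraction of runs ending with <10% or >90% allocated to spending."""
    extreme = 0
    for _ in range(trials):
        log2_odds = sum(random.choice((1, -1)) for _ in range(n_considerations))
        spend_share = 2.0 ** log2_odds / (1 + 2.0 ** log2_odds)
        if spend_share < 0.1 or spend_share > 0.9:
            extreme += 1
    return extreme / trials

for x in (1, 10, 30):
    print(f"X = {x:>2}: {fraction_extreme(x):.0%} of runs end below 10% or above 90%")

# More considerations push the final allocation toward 0% or 100%, consistent
# with the parent comment's point.
```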

Tangentially, this seems relevant to a long-running disagreement between us, about how bad it is if AI can't help us solve moral/philosophical problems, but only acquire resources and keep us in control.

This impression isn't drawn so much from the rationality community or academia, and I haven't looked into the transhumanist/singularitarian literature as much, but my impression is that everyone presumes a successfully human-aligned superintelligence would be able to find solutions that peacefully satisfy as many parties as possible. One stereotypical example given is how superintelligence may be able to achieve virtual post-scarcity not just for humanity now but for the whole galactic future. So the expectation is that a superintelligent AI (SAI) would be a principal actor in determining humanity's future. My impression of the AI alignment community coming from the rationalist side is that an SAI will inevitably be able to control the light cone no matter its goals, so the best we can do is align it with human interests. So while an SAI might acquire resources, it's not clear an aligned SAI would keep humans in control, for different values of 'aligned' and 'in control'.

So while an SAI might acquire resources, it’s not clear an aligned SAI would keep humans in control, for different values of ‘aligned’ and ‘in control’.

I was referring to Paul's own approach to AI alignment, which does aim to keep humans in control. See this post where he mentions this, and perhaps this recent overview of Paul's approach if you're not familiar with it.

Thanks for clarifying. I didn't know you were specifically referring to Paul's approach. I've got to familiarize myself with it more.

The Soviet Union had three things going for them in getting to nuclear parity —

  1. As a command economy, they were able to assign a vast number of people to work on their equivalent of the Manhattan Project and pour basically unlimited resources into it, despite having a much weaker economy overall.
  2. They had probably the most effective espionage corps of all time, and stole many of the core technologies instead of needing to re-invent them.
  3. With the notable exception of Churchill, the majority of the world was war-weary and genuinely wanted to believe that Stalin would uphold his Yalta Conference promises.

Let's add 4: America was fighting in two theaters and the USSR was basically fighting in one (which isn't to deny that its part of the war was by far the bloodiest). Subduing Japan and supporting the Nationalists in China (the predecessors of the Taiwanese government) took enormous amounts of US military resources.

I'd downplay #2: WWII had all kinds of superweapon development programs, from the Manhattan Project to bioweapons to the Bat Bomb. The big secret, the secret that mattered, was which one would work. After V-J day the secret was out and any country with a hundred good engineers could build one, including South Africa. To the extent that nuclear nonproliferation works today, it works because isotope enrichment requires unusual equipment and leaves detectable traces that allow timely intervention.

This is an important historical case study for predicting future technological change, and I agree it is one of the strongest examples of discontinuous progress. This was a short and readable write-up of the key points, and I appreciated the outlining of ways you might be wrong, to aid discussion (and it did lead to some good discussion). For these reasons, I curated it.

Based on my understanding, it was not known in the 1940s that the US had a 4-year advantage. Stalin started claiming to "know the nuclear secret" in 1947, but some US planners expected that it would take 10-20 years for the Soviet Union to create the bomb.

Also, in 1949, the number of US nukes was not enough to stop the Soviet army from a ground invasion of Western Europe, which it would surely have launched in the case of an attack on Moscow.

Also, most of the Soviet nuclear program was decentralised across secret, unknown locations, which could not be destroyed without thermonuclear weapons or precise missiles, neither of which existed at the time.

Also, the US could not spend all its nukes on a first strike, as that would mean effective disarmament, so the capability of a first strike was limited.

Thus I think that there was an advantage, but not a "decisive, knowable" advantage, and such an advantage in the case of nuclear weapons would have required a 10-year lead, not a 4-year one.

I like the style of your analysis. I think your conclusion is wrong because of wonky details about World War 2. 4 years of technical progress at anything important, delivered for free on a silver platter, would have flipped the outcome of the war. 4 years of progress in fighter airplanes means you have total air superiority and can use enemy tanks for target practice. 4 years of progress in tanks means your tanks are effectively invulnerable against their opponents, and slice through enemy divisions with ease. 4 years of progress in manufacturing means you outproduce your opponent 2:1 at the front lines and overwhelm them with numbers. 4 years of progress in cryptography means you know your opponent's every move and they are blind to your strategy.

Meanwhile, the kiloton bombs were only able to cripple cities "in a single mission" because nobody was watching out for them. Early nukes were so heavy that it's doubtful whether the slow clumsy planes that carried them could have arrived at their targets against determined opposition.

There is an important sense in which fission energy is discontinuously better than chemical energy, but it's not obvious that this translates into a discontinuity in strategic value per year of technological progress.

Separately, it seems like a 4 year lead in nuclear weapons would represent a decisive strategic advantage, which is much shorter than for any other technology. My best guess is that a 2 year lead wouldn't do it, but I'd love to hear an assessment of the situation from someone who understands the relevant history/technology better than I do.

The US had a 4 year lead in nuclear weapons. The US tested its first nuclear weapon on July 16, 1945. The Soviet Union tested its first nuclear weapon on August 29, 1949. That is 4 years, 1 month and 13 days during which the US had a complete monopoly on nuclear weapons. Yet, during that time, the US did not use, and could not have used, its nuclear weapons to assert superiority over the Soviet Union.

The US weapons of that era were all roughly in the 20 kt range, and even as late as 1948, the US arsenal numbered fewer than 100 bombs. The delivery mechanisms for these bombs were B-29 and B-36 bombers, which were relatively slow compared to the new jet-powered fighters and interceptors of the late '40s and early '50s. It's not clear that the US would have been able to drop a single nuclear bomb on the Soviet Union during that time, much less enough bombs to seriously dent a well-functioning industrial economy.

This gets at an important distinction between physical goods and software. With physical goods, there is often a long delay between invention and mass production. The US invented nuclear bombs in 1945. But it wasn't able to mass-produce them until after 1950. This is different from AI (and software products more generally) for which mass production is trivial, once the code has been written. It is for this reason that I do not think that nuclear bombs are a good analogy for AI risk.

The AIs still have to make atoms move for anything Actually Bad to happen.

There are a lot of Actually Bad things an AI can do just by making electrons move.

(or the AIs have to at least make atoms fall apart #atomicbombs #snark)

I think a highly relevant detail here is that the biggest bottleneck in the development of nuclear weapons is the refinement of fissionable material, which is a tremendously intensive industrial process (and still remains the major barrier to obtaining nukes). Without it, development would have been a lot more abrupt (and likely successful on the German side).

Just commenting that the progress to thermonuclear weapons represented another discontinuous jump (1-3 orders of magnitude in yield).

Also, whether von Neumann was right depends on the probability of the Cold War ending peacefully. If we retrospectively conclude that we had a 90% chance of total thermonuclear war (and just got very lucky in real life) then he was definitely right. If we instead argue from the observed outcome (or historical studies conclude that the eventual outcome was not due to luck but rather due to the inescapable logic of MAD), then he was totally nuts.

Near-misses are not necessarily a very strong guide to estimating retrospective risk. Both sides were incentivized to hide their fail-safes against escalation: to credibly commit to having a twitchy retaliation finger, and at the same time to not actually retaliate (the game is first chicken, then ultimatum, and never prisoner's dilemma). So I would be very wary of trusting the historical record on "what if Petrov had not kept a cool mind".

Looking at the history of nuclear weapons development, it seems like the development of rocket/missile technology for the purposes of delivering nuclear warheads, in addition to the rapid development of nuclear explosives themselves, gave a strategic advantage as well. One could make a case study out of the development of rocket/missile technology.

I agree that nuclear weapons are the best example of both discontinuity and decisive strategic advantage; I also agree the United States possessed a decisive strategic advantage with nuclear weapons, and that we elected not to employ it (out of ignorance and/or the value alignment of the US government).

However, I am skeptical of the metrics we are considering here. What is the motivation for using cost/TNT equivalent? Intuitively, this seems to be the same mistake as measuring AGI's power in operations per second; there is a qualitative difference at work which such numbers obscure. Further, this is not the mechanism that militaries use in their planning. To illustrate the point, consider the last paragraph of page 3 in this memo (which is one of the sources provided in Gentzel's information hazards of the Cold War post here). I will quote one line in particular to prime the intuition:

(Single rather than multi-weapon attack on each target is the rule in order to conserve delivery forces.)

I think the key insight was hit on in the AI Impacts blog post which accompanied the cost analysis:

And in particular, there are an empty three orders of magnitude between the most chemical energy packed into a thing and the least nuclear energy packed into a thing.

Dollars and TNT equivalents give the illusion that these things are fungible; if we consider the bombing campaigns in WWII where more destructive power was delivered than with the nuclear bombing missions, we notice that those were entire campaigns and the nuclear bombings were missions. This is the implication of that big jump in energy: it only takes one successful long-range bomber to kill or cripple any city on the face of the earth.

I think our ability to reason about this, especially in a fashion where we can usefully compare it to AGI, is improved by sticking more closely to the strategic implications. One possibility: in lieu of cost, time; in lieu of energy, targets destroyed. So the new metric would be time per target. This seems to map more closely to considerations of how quickly an AGI can accomplish its goals.

Edit: accidentally a word

It's not clear that the US could have employed nuclear weapons decisively. The US had nuclear weapons, true, but the only way of delivering those weapons was by plane, and aircraft of that era were relatively easy to shoot down. The only nuclear-capable bomber the US had at the end of World War 2 was the B-29 Superfortress, and it was clear that it was increasingly obsolete in the face of the new generation of jet-powered fighters and interceptors coming into service with both the US and Soviet air forces.

Moreover, as this AskHistorians question points out, the US nuclear inventory was extremely thin prior to about 1950 or so. According to Schlosser, the US barely had a dozen weapons in 1948, and of those, the majority were unusable, due to lack of maintenance. It's not clear that the US could have bombed the Soviet Union into oblivion, even if it had chosen to do so - the Soviets would have been able to shoot down most of our nuclear-capable bombers, and what got through would have been completely insufficient to inflict lasting damage on an industrialized economy.

You raise some good points, but I think they do not change the conclusion. I think the key intuition where we differ is that I believe the requirements for decisive strategic advantage are lower than most people expect.

1) The historical data about the production and deployment of nuclear weapons is predicated on the strategic decision not to exploit them. When comparing weapon volume and delivery, we would be thinking about the counterfactual case where we did decide to exploit them, ie: wartime production continues unabated; weapons are maintained with intent to use them; delivery systems are still being deployed in a wartime context (after air superiority campaigns, etc).

2) Bombing the Soviets into oblivion was not the criterion for victory. The overwhelming total destructive capability developed during the Cold War was not calculated for delivery - it was calculated to have enough capacity left over after receiving a first strike to deliver sufficient destruction. The correct criteria for victory are the ones that were in play during WWII: enough to make any given power surrender. With American war production unaltered, I suggest that very few successful nuclear missions would have been required - just the capitals of the belligerents, and their primary war production centers. The dozen weapons in 1948, had they been well-maintained, would have sufficed. I note that the Lend-Lease program was necessary to provide the Soviets with enough air power to defend against the Nazis; lacking further support, they would have had little with which to defend against American air forces.

The source linked in the AskHistorians answer specifies:

“There has been no attempt to estimate the quantity of atomic bombs which would be required to conduct a prolonged war of attrition,”

And:

The analysis quickly fesses up to the fact that the only nation they’re concerned about is Russia, because they’re the only one who is projected to be even remotely on par with the United States from a military point of view for the next decade.

I am making a few assumptions here - namely that the decision point for exploiting strategic advantage was counterfactually made around the same time the US forces met the Soviet forces in Berlin, if not before. Further, I would not consider local problems like insurgencies or rebellions to challenge the fundamental advantage, because they are largely only a question of how much of the local resources can be safely exploited. The key element, in my view, is that no government could oppose the will of the United States, as distinct from a stricter standard like the United States assumes total control over all population centers. It is sufficient to be the only one who can fully mobilize any resources.

What are your thoughts?

While I think the US could have threatened the Soviets into not producing nuclear weapons at that point in time, I have trouble seeing how the US could put in place the requisite controls/espionage to prevent India/China/the UK, etc., from developing nuclear weapons later on.

Why would the controls the United States counterfactually put in place to maintain nuclear monopoly be less effective than the ones which are actually in place to maintain nuclear advantage? There would be no question of where or when nuclear inspectors had access, and war would’ve been a minimal risk.

I'm curious what type of nuclear advantage you think America has. It is still bound by MAD due to nukes on submarines.

I think that the US didn't have sufficient intelligence capability to know where to inspect. Take Israel as an example.

The CIA was saying in 1968 that "...Israel might undertake a nuclear weapons program in the next several years", when Israel had already built a bomb in 1966.

As of about 10 years ago MAD conditions no longer apply. I don't have a source for this because it was related to me directly by someone who was present at the briefing, but around 2006-07 our advanced simulations concluded that the United States had something like a ~60% chance of completely eliminating Russia's second strike capacity if they had no warning, and still a ~20% chance if they did.

I was able to find a Foreign Affairs article that discusses some of the reasons for this disparity here. The short version is that we were much more successful in maintaining our nuclear forces than Russia.

I am not certain how the intervening years have affected this calculus, but based on the reaction to Russia's recent claims of nuclear innovation, I suspect it has not changed much. I am reading a book called The Great American Gamble: Deterrence Theory and Practice from the Cold War to the Twenty-First Century, by Keith B. Payne, which I expect will shed considerably more light on the subject.

Interesting. I didn't know Russia's defences had degraded so much.

I feel the need to add an important caveat: MAD as a strategic situation may not apply, but MAD as a defense policy still does. Moving away from the defense policy is what the Foreign Affairs article warns against, and it is what the book I am reading right now concludes is the right course. In historical terms, it argues in favor of Herman Kahn over Schelling.

Promoted to frontpage.