The FHI's mini advent calendar: counting down through the big five existential risks. The first one is an old favourite, forgotten but not gone: nuclear war.

Nuclear War
Current understanding: medium-high
Most worrying aspect: the missiles and bombs are already out there
It was a great fear during the fifties and sixties; but the weapons that could destroy our species lie dormant, not destroyed. 

There has been some recent progress: the sizes of the arsenals have been diminishing, fissile material is under tighter control than it used to be, and there is more geopolitical peace and cooperation.

But nuclear weapons still remain the easiest method for our species to destroy itself. Recent modelling has confirmed the old idea of nuclear winter: soot rising from cities burning after nuclear strikes could envelop the world in a dark cloud, disrupting agriculture and food supplies and causing mass starvation and death far beyond the areas directly hit. And creeping proliferation has spread these weapons to smaller states in unstable areas of the world, increasing the probability that nuclear weapons get used, with the potential for escalation. The risks are not new, and several times (the Cuban missile crisis, the Petrov incident) our species has been saved from annihilation by the slimmest of margins. And yet the risk seems to have slipped off the radar for many governments: emergency food and fuel reserves are diminishing, and we have few “refuges” designed to ensure that the human species could endure a major nuclear conflict.

On the first day of Xrisks, FHI gave to me...

I recently had a Q&A with one of the authors of the recent nuclear winter papers, on the extent of x-risk from nuclear winter. They thought it very small relative to the catastrophic risk.

Here is a discussion of food sources during nuclear winter.

Thanks! We may have to revise our rankings at some point (since we have nobody working on nuclear war currently, we haven't fully analysed it).

PS: in your Q&A, you don't seem to have mentioned anthropic bias - using past events to estimate survival probability is affected by this.
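
To illustrate the worry with a toy example (illustrative numbers only, and using the extreme version of the correction - real anthropic reasoning is subtler and contested): a naive update on "we survived N near-misses" favours high survival probabilities, but if observers only ever find themselves in surviving histories, the observation may carry much less information.

```python
# Toy sketch of the anthropic worry. All numbers are made up for illustration.
N = 5                          # assumed number of historical near-misses
hypotheses = [0.2, 0.5, 0.8]   # candidate per-event survival probabilities
prior = {p: 1 / len(hypotheses) for p in hypotheses}

def normalise(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Naive update: treat "we survived all N events" like ordinary evidence.
naive = normalise({p: prior[p] * p ** N for p in hypotheses})

# Anthropic-shadow extreme: every observer, by definition, sits in a surviving
# history, so P(this observation | p) = 1 and the prior is not moved at all.
anthropic = normalise({p: prior[p] * 1.0 for p in hypotheses})

print("naive:    ", naive)      # heavily favours p = 0.8
print("anthropic:", anthropic)  # identical to the prior
```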

Anthropics folk can do this. For a subject matter expert I figure it would have added complication and noise.

Fair enough. But it does mean we can't take his estimate at face value.

Could nuclear winter really kill everyone? It feels like there's a lot of investigation yet to be done on sorting the existential risks from the global catastrophic risks.

Yes, there does need to be. We're working on some aspects of this, especially Anders Sandberg (who's considering population fragmentation and vulnerability to other risks, the minimal population needed for technology, and so on).

Don't forget the threat of high-altitude detonations creating electromagnetic pulses big enough to destroy every un-shielded microchip in Europe, a.k.a. every microchip in Europe. Even a rogue state can manage that.

how's that an x-risk?

It is admittedly not an existential risk for our species, but it is an existential risk for our civilization.

I'd think not, if it's just Europe. It's a good question how long it would take to re-create the present technological level:

  • from the remaining shielded tech
  • just from books

Well, the fact that Europe managed to bounce back from WWII is encouraging here.

There are two things that could make nuclear war an x-risk.

One is that nuclear weapons could be used unconventionally - to disturb the magma chambers of supervolcanoes, and/or to bomb existing nuclear reactors and stores of nuclear waste.

The other is the possibility of radical improvements in nuclear bomb technology. The first is new, cheap ways of enriching uranium via laser enrichment. Another is building hydrogen (thermonuclear) bombs without a uranium fission trigger, using some kind of electromagnetic pinch, lasers, or cold fusion.

And I forgot the cobalt bomb as a stationary doomsday device, which could be built with a price tag of 10-100 billion USD.

Classical nuclear winter is survivable, in my opinion.

Anders Sandberg has done calculations for cobalt bombs.

He said: «However, when I calculated the necessary amount of cobalt and from that the necessary yield of the bomb I found that they were definitely in the very, very impractical range (many thousands of tons of metal, at least 960 megatons of yield).» http://www.overcomingbias.com/2009/10/self-assured-destruction.html

But that is only about 10 times more than the Tsar Bomba, and it could and should be made as a stationary device, not a transportable bomb. A typical nuclear reactor weighs several thousand tons, so one cobalt bomb would be about as heavy and complex as a nuclear reactor, and therefore it is feasible.

How is it that you believe Sandberg's calculations more than Szilard's?

Actually, when I did my calculations my appreciation of Szilard increased. He was playing a very clever game.

Basically, in order to make a cobalt bomb you need 50 tons of neutrons absorbed into cobalt. The only way of doing that requires a humongous hydrogen bomb. Note when Szilard did his talk: before the official announcement of the hydrogen bomb. The people who could point out the problem with the design would be revealing quite sensitive nuclear secrets if they said anything - the neutron yield of hydrogen bombs was very closely guarded, and was only eventually reverse-engineered by studies of fallout isotopes (to the great annoyance of the US, apparently).
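
For a sense of scale, here is a minimal back-of-envelope sketch of just the cobalt-mass side of that requirement (my own illustrative assumptions, not the actual calculation): if the blanket has to capture around 50 tons of neutrons, with one neutron absorbed per Co-59 nucleus, the metal alone runs into the thousands of tons.

```python
# Back-of-envelope: cobalt metal needed to absorb ~50 tons of neutrons,
# assuming one captured neutron per Co-59 nucleus (Co-59 + n -> Co-60).
# Illustrative assumptions only, not the actual weapon-physics calculation.
NEUTRON_MOLAR_MASS_G = 1.009   # g/mol
CO59_MOLAR_MASS_G = 58.93      # g/mol

neutron_mass_tons = 50.0
moles_of_neutrons = neutron_mass_tons * 1e6 / NEUTRON_MOLAR_MASS_G  # 1 ton = 1e6 g

# Lower bound: 100% capture efficiency, one Co-59 nucleus per neutron.
cobalt_tons = moles_of_neutrons * CO59_MOLAR_MASS_G / 1e6

print(f"{cobalt_tons:,.0f} tons of cobalt at 100% capture efficiency")
# ~2,900 tons; with realistic capture fractions well below 100%, this climbs
# into the "many thousands of tons of metal" range quoted above.
```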

Szilard knew that 1) he was not revealing anything secret, 2) criticising his idea required revealing secrets, and 3) the bomb was impractical, so even if somebody tried they would not get a superweapon thanks to his speech.

I think cobalt bombs can be done; but you need an Orion drive to launch them into the stratosphere. The fallout will not be even, leaving significant gaps. And due to rapid gamma absorption in sea water the oceans will be semi-safe. Just wash your fishing boat so fallout does not build up, and you have a good chance of survival.

Basically, if you want to cause an xrisk by poisoning the biosphere, you need to focus on breaking a key link rather than generic poisoning. Nukes for deliberate nuclear winter or weapons that poison the oxygen-production of the oceans are likely more effective than any fallout-bomb.

Thank you for the clarification of your position.

I think that you do not need to move the bomb to the stratosphere. Smith, in Doomsday Men, gave an estimate that a doomsday cobalt bomb would weigh about as much as the battleship Missouri, that is, 70,000 tons. So you could detonate it in place - and the energy of the explosion would carry the isotopes to the upper atmosphere.

Also, if we go into technical detail about global radiological contamination, I think it would be better to use not only cobalt but other isotopes as well. Gold was discussed as another one. But the best could be some kind of heavy gas like radon, because (I think) it does not dissolve in the sea but tends to stay in the lower atmosphere. This is not a fact, just my opinion about making a nuclear doomsday device more effective, and while I think this particular opinion is wrong, someone who really wanted to make such a device could find ways to make it much more effective by choosing different isotopes for the bomb's blanket.

Calling this an x-risk seems to be a case of either A) stretching the definition considerably, or B) being unduly credulous of the claims of political activists. A few points to consider:

1) During the height of the cold war, when there were about an order of magnitude more nuclear weapons deployed than is currently the case, the US military (which had a vested interest in exaggerating the Soviet threat) put estimated casualties from a full-scale nuclear exchange at 30-40% of the US population. While certainly horrific, this falls far short of extinction. Granted, this was before anyone had thought of nuclear winter, but:

2) Stone age humanity survived the last ice age in Europe, where climate conditions were far worse than even the most pessimistic nuclear winter scenarios. It strains credibility to imagine that modern societies would do worse.

3) The nuclear winter concept was invented by peace activists looking for arguments to support the cause of nuclear disarmament, which a priori makes them about as credible as an anti-global-warming scientist funded by an oil company.

4) Nuclear winter doesn't look any better when you examine the data. The whole theory is based on computer models of complex atmospheric phenomena that have never been calibrated by comparing their results to real events. As anyone who's ever built a computer model of a complex system can tell you, such uncalibrated models are absolutely meaningless - they provide no information about anything but the biases of those who built them.

So really, the inclusion of nuclear war on x-risk lists is a matter of media manipulation trumping clear thought. We've all heard the 'nuclear war could wipe out humanity' meme repeated so often that we buy into it despite the fact that there has never been any good evidence for it.

The primary problem with nuclear war is that it isn't obvious that humans can get back to our current tech level without the now-consumed resources (primarily fossil fuels) that we used to bootstrap ourselves up to our current tech level. If that's an issue, then any event that effectively pushes the tech level much below 1900 is about the same as an existential risk; it will just take longer for something else to finish us off. There's been some discussion on LW about how possible it is to get back to current tech levels without the non-renewables to bootstrap us up, and there doesn't seem to be any real consensus on the matter. This should probably be on the list of things that someone at FHI should spend some time examining.

On the other hand, future civilizations have the benefit of 20th century science unless the catastrophe also manages to destroy all physics textbooks.

Well, the textbooks need not just to survive but to be accessible, and many advanced physics textbooks depend on other maths texts and the like. But most of those are widespread, so this seems like a reasonable assumption. So it may be possible to avoid spending as much on experimentation. But until the 20th century, experimentation in physics was not a massive cost. A more serious issue is going to be engineering, but again, the assumption that basic textbooks will be accessible seems reasonable. The really serious issue is how much one can take advantage of economies of scale and comparative advantage in order to get to the point where one has enough free resources to do things like build solar panels.

Moreover, there's a large amount of what may be institutional knowledge, or knowledge that simply isn't commonly written down in textbooks. For example, China has for decades had trouble making its own high-performance jet engines (1), and during the cold war the USSR had a lot of trouble making high-performance microchips. In both cases there were likely other problems at play in addition to technical know-how, but this suggests that there may be more serious issues for many technologies than just basic textbooks.

Yes, this is the general reason why global catastrophic risks (including global warming, global pandemics and so on) shade into existential risks.

In fact, my personal view of x-risk is that these are the most likely and worrying failure modes: some catastrophe cripples human civilization and technology, we never really recover, and then, much further in the future, we die off from some natural extinction event.

"We're not sure if we could get back to our current tech level afterwards" isn't an xrisk.

It's also purely speculative. The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren't cost-competitive. Plus there's wind, geothermal, hydroelectric, solar and nuclear. We're a long, long way away from the "all non-renewables are exhausted" scenario.

"We're not sure if we could get back to our current tech level afterwards" isn't an xrisk.

Yes it is. Right now, we can't deal with a variety of basic x-risks that require advanced technology to counter. Big asteroids hit every hundred million years or so, and many other disasters can easily wipe out a technologically non-advanced species. If our tech level is reduced to even a late-19th-century level and stays static, then civilization is simply dead and doesn't know it until something comes along to finish it off.

The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren't cost-competitive.

The problem is exactly that: they aren't cost-competitive, and have much lower EROEI. That makes them much less useful, and it's not even clear whether they could be used to climb back to our current tech level. For example, even getting an EROEI above 1 on oil shale requires a fair bit of advanced technology. Similarly, most of the remaining coal is in much deeper seams than the coal mined historically (we've consumed most of the coal that was easy to get to).

Plus there's wind, geothermal, hydroelectric, solar and nuclear. We're a long, long way away from the "all non-renewables are exhausted" scenario.

All of these require high tech levels to start with, or have other problems. Geothermal only works in limited locations. Solar requires extremely high tech levels to even have a positive energy return. Nuclear power has similar issues, along with needing massive processing infrastructure before economies of scale kick in. Both solar and wind have terrible trouble providing consistent power, which is important for many uses such as manufacturing. Efficient batteries are one answer to that, but they also require advanced tech. It may help to keep in mind that even with the advantages we had the first time around, the vast majority of early electric companies simply failed. There's an excellent book which discusses many of these issues - Maggie Koerth-Baker's "Before the Lights Go Out." It focuses more on the current American electric grid, but covers many of these issues in that context.

Now you're just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we'd be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.

I'm not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you've bought into a line of political propaganda that has little relation to reality - there's a large body of evidence that we're nowhere near running out of fossil fuels, and the energy industry experts whose livelihoods rely on making correct predictions mostly seem to be lined up on the side of expecting abundance rather than scarcity. I don't expect you to agree, but anyone who's curious should be able to find both sides of this argument with a little googling.

Now you're just changing the definition to try to win an argument.

So Nick Bostrom, who is one of the major thinkers about existential risk, seems to think that this justifies discussion in the context of existential risk: http://www.nickbostrom.com/existential/risks.html . In section 5.1 of that link he writes:

The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)

Moving on, you wrote:

If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we'd be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.

So I agree we need to be careful to keep focused on existential risks as proximate causes. I had a slightly annoying discussion with someone earlier today who was arguing that "religious fanaticism" should constitute an existential risk. But in some contexts the line certainly is blurry. If, for example, a nuclear war wiped out all but 10 humans, and then they died from lack of food, I suspect you'd say that the existential risk that got them was nuclear war, not famine. In this context, the question has to be asked: if something doesn't completely wipe out humanity, but leaves it limping along to the point where things that wouldn't normally be existential risks could easily finish it off, should that be classified as an existential risk? Even if one doesn't want to call that "existential risk", it seems clear that it shares the most relevant features of existential risk (e.g. relevant to understanding the Great Filter, likely understudied and underfunded, would still result in a tremendous loss of human value, would prevent us from travelling out among the stars, would make us feel really stupid if we fail to prevent it and it happens, etc.).

I'm not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you've bought into a line of political propaganda that has little relation to reality - there's a large body of evidence that we're nowhere near running out of fossil fuels,

This and the rest of that paragraph suggest that you didn't read my earlier paragraph that closely. Nothing in my comment said that we were running out of fossil fuels, or even that we were running out of fuels with EROEI above 1. There's a lot of fossil fuel left. The issue in this context is that the remaining fossil fuels take technology to harness efficiently, and while we generally have that technology, a society trying to come back from drastic collapse may not. That's a very different worry than any claim about us running out of fossil fuels in the near or even indefinite future.

Nuclear winter doesn't look any better when you examine the data.

Have you done this? I've asked random climate physics folk (pretty smart and cynical people) to take a look at the nuclear winter models, and they found them reasonable on the basic shape of the effect, although they couldn't vouch for the fine details of magnitude. So it doesn't look to me like the idea of nuclear winter is being pushed by just a narrow clique of activists.

2) This doesn't take into account anthropic effects - we have to have survived to get to where we are now. Looking at the past and saying "hey, we survived that!" doesn't mean that the probabilities were that high.

3) The idea is sufficiently well developed now that its origins are irrelevant (there are few hippies pushing the idea currently).

4) They are computer models, based on extensive knowledge of atmospheric conditions and science. Are they definitely reliable? No. Are they more likely right than wrong? Probably - it's not like the underlying science is particularly difficult.

At what probability of the models being wrong would you say that we can ignore the threat? Are you convinced that the models have at least that probability of being wrong? And if so, based on what - it's not like there's a default position "nuclear winter can't happen" that has a huge prior in its favour, that the models then have to overcome.

This is a topic I frequently see misunderstood, and as a programmer who has built simple physics simulations I have some expertise on the topic, so perhaps I should elaborate.

If you have a simple, linear system involving math that isn't too CPU-intensive you can build an accurate computer simulation of it with a relatively modest amount of testing. Your initial attempt will be wrong due to simple bugs, which you can probably detect just by comparing simulation data with a modest set of real examples.

But if you have a complex, non-linear system, or just one that's too big to simulate in complete detail, this is no longer the case. Getting a useful simulation then requires that you make a lot of educated guesses about what factors to include in your simulation, and how to approximate effects you can't calculate in any detail. The probability of getting these guesses right the first time is essentially zero - you're lucky if the behavior of your initial model has even a hazy resemblance to anything real, and it certainly isn't going to come within an order of magnitude of being correct.

The way you get to a useful model is through a repeated cycle of running the simulator, comparing the (wrong) results to reality, making an educated guess about what caused the difference, and trying again. With something relatively simple like, say, turbulent fluid dynamics, you might need a few hundred to a few thousand test runs to tweak your model enough that it generates accurate results over the domain of input parameters that you're interested in.
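
As a concrete (and deliberately trivial) picture of that cycle, here is a toy one-parameter "model" being calibrated against a handful of made-up observations - nothing like a real climate simulation, just the shape of the loop:

```python
# Toy calibration loop: run the model, compare with observations, nudge the
# uncertain parameter, repeat. Model, data and numbers are all made up.
def toy_model(forcing: float, scaling: float) -> float:
    """Crude stand-in: predicted temperature drop for a given soot forcing."""
    return scaling * forcing

observations = [(1.0, 0.8), (2.0, 1.7), (3.0, 2.4)]  # (forcing, observed drop)

scaling = 1.5        # initial educated guess for the free parameter
learning_rate = 0.05

for step in range(200):
    # Compare simulated results with "reality"...
    error = sum(toy_model(f, scaling) - obs for f, obs in observations)
    # ...then adjust the uncertain parameter to shrink the mismatch.
    scaling -= learning_rate * error / len(observations)

print(f"calibrated scaling ≈ {scaling:.2f}")  # settles near the data's slope (~0.82)
```

The loop only works when you have real observations to close it with - and that's exactly what's missing for a full-scale nuclear exchange.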

If you can't run real-world experiments to generate the phenomena you're interested in, you might be able to substitute a huge data set of observations of natural events. Astronomy has had some success with this, for example. But you need a data set big enough to encompass a representative sample of all the possible behaviors of the system you're trying to simulate, or else you'll just get a 'simulator' that always predicts the few examples you fed it.

So, can you see the problem with the nuclear winter simulations now? You can't have a nuclear war to test the simulation, and our historical data set of real climate changes doesn't include anything similar (and doesn't contain anywhere near as many data points as a simulator needs, anyway). But global climate is a couple of orders of magnitude more complex than your typical physics or chemistry sims, so the need for testing would be correspondingly greater.

The point non-programmers tend to miss here is that lack of testing doesn't just mean the model is a little off. It means the model has no connection at all to reality, and either outputs garbage or echoes whatever result the programmer told it to give. Any programmer who claims such a model means something is committing fraud, plain and simple.

There are particulate events in the climate record that a model of nuclear winter could be calibrated against -- any major volcanic eruption, for example. Some have even approached the level of severity predicted for a mild nuclear winter: the "Year Without A Summer" following the 1815 Tambora eruption is the first one I can think of.

This isn't perfect: volcanoes mainly release fine rock ash instead of the wood and hydrocarbon soot that we'd expect from burning cities, which behaves differently in the atmosphere, and while we can get some idea of the difference from looking at events like large-scale forest fires there are limits on how far we can extrapolate. But we should have enough to at least put some bounds on what we could expect.

Different components in the model can be tested separately. How stratospheric gases disperse can be tested. How black soot rises in the atmosphere, in a variety of heat conditions, can be tested. How black soot affects absorption of solar radiation can be simulated in the laboratory, and tested in indirect ways (as Nornagest mentioned, by comparing with volcanic eruptions).

Yes, and that's why you can even attempt to build a computer model. But you seem to be assuming that a climate model can actually simulate all those processes on a relatively fundamental level, and that isn't the case.

When you set out to build a model of a large, non-linear system you're confronted with a list of tens of thousands of known processes that might be important. Adding them all to your model would take millions of man-hours, and make it so big no computer could possibly run it. But you can't just take the most important-looking processes and ignore the rest, because the behavior of any non-linear system tends to be dominated by unexpected interactions between obscure parts of the system that seem unrelated at first glance.

So what actually happens is you implement rough approximations of the effects the specialists in the field think are important, and get a model that outputs crazy nonsense. If you're honest, the next step is a long process of trying to figure out what you missed, adding things to the model, comparing the output to reality, and then going back to the drawing board again. There's no hard, known-to-be-accurate physics modeling involved here, because that would take far more CPU power than any possible system could provide. Instead it's all rules of thumb and simplified approximations, stuck together with arbitrary kludges that seem to give reasonable results.

Or you can take that first, horribly broken model, slap on some arbitrary fudge factors to make it spit out results the specialists agree look reasonable, and declare your work done. Then you get paid, the scientists can proudly show off their new computer model, and the media will credulously believe whatever predictions you make because they came out of a computer. But in reality all you've done is build an echo chamber - you can easily adjust such a model to give any result you want, so it provides no additional evidence.

In the case of nuclear winter there was no preexisting body of climate science that predicted a global catastrophe. There was just a couple of scientists who thought it would happen, and built a model to echo their prediction.

The point non-programmers tend to miss here is that lack of testing doesn't just mean the model is a little off. It means the model has no connection at all to reality, and either outputs garbage or echoes whatever result the programmer told it to give. Any programmer who claims such a model means something is committing fraud, plain and simple.

This really is a pretty un-Bayesian way of thinking - the idea that we should totally ignore incomplete evidence, and by extension that we should choose to believe an alternative hypothesis ('no nuclear winter') with even less evidence, merely because it is assumed for unstated reasons to be the 'default belief'.

An uncalibrated sim will typically give crazy results like 'increasing atmospheric CO2 by 1% raises surface temperatures by 300 degrees' or 'one large forest fire will trigger a permanent ice age'. If you see an uncalibrated sim giving results that seem even vaguely plausible, this means the programmer has tinkered with its internal mechanisms to make it give those results. Doing that is basically equivalent to just typing up the desired output by hand - it provides evidence about the beliefs of the programmer, but nothing else.