Related: Existential Risk, 9/26 is Petrov Day

Existential risks, which in Nick Bostrom's words would "either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential," are a significant threat to the world as we know it. Indeed, they may be among the most pressing issues facing humanity today.

The likelihood of some risks may stay relatively constant over time. A basic model of asteroid impact, for instance, is that there is some probability each year that a "killer asteroid" hits the Earth, and that this probability is more or less the same from year to year. This is what I refer to as a "stable risk."
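As a rough illustrative sketch of what "stable" means here (the annual probability $p$ is purely hypothetical notation, not an estimate from the post or any source), a constant yearly hazard implies:

$$P(\text{at least one impact within } T \text{ years}) = 1 - (1 - p)^T$$

Under this simple model the risk accumulates smoothly over long horizons, but the chance of disaster in any single year never spikes or falls.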

However, the likelihood of other existential risks seems to fluctuate, often quite dramatically. Many of these "unstable risks" are related to human activity.

For instance, the likelihood of a nuclear war on a scale sufficient to pose an existential threat seems contingent on various geopolitical factors that are difficult to predict in advance. Whatever the precise figure, this likelihood has clearly changed throughout recent history: nuclear war was obviously not an existential risk before nuclear weapons were invented, and it was fairly clearly more of a risk during the Cuban Missile Crisis than it is today.

Many of these unstable, human-created risks seem to depend largely on advanced technology. Potential risks like gray goo rely on theorized technologies that have yet to be developed (and indeed may never be developed). While this is good news for the present day, it also means that we have to stay vigilant for the emergence of new threats as human technology advances.

GiveWell's recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time. However, it strikes me as perhaps more likely that the risk of human extinction is increasing over time—or at the very least becoming less stable—as technology increases the amount of power available to individuals and civilizations.

After all, the very concept of human-created unstable existential risks is a recent one. Even if Julius Caesar, Genghis Khan, or Queen Victoria had for some reason decided to destroy human civilization, it seems almost certain that they would have failed, even with all the resources of their empires at their disposal.

The same cannot be said for Kennedy or Khrushchev.

Comments (9)

Perhaps a better title would be "Known and Unknown Risks", since there is no inherent difference between "stable" and "unstable" risks.

For example, suppose there is a killer asteroid impact in 2015, but the asteroid in question is not detected until 2014. Then the likelihood of extinction rises dramatically at the moment of confirmed detection. If an emergency-built, nuclear-powered asteroid deflector subsequently knocks the asteroid off its collision course, the relevant x-risk drops back to the baseline or lower.

Similarly, the likelihood of nuclear annihilation at any given point can be traced to certain events occurring (or becoming known to the risk estimator): the discovery of nuclear fission, the nuclear arms race, the invention of ICBMs, the shooting down of the U-2 spy plane, and so on.

There's some utility in distinguishing how much impact individual human actions have, though, and stability/instability seems a good proxy for it. Asteroid detection or deflection, for example, is a massive undertaking requiring investments on the scale of tens or hundreds of billions of USD (or equivalent), and the smaller decisions or indecisions of an individual politician or NASA administrator are only remotely related to the actual event. By contrast, the popular history(1) of MAD shows several occasions on which the entire system balanced on minutiae of policy, technical errors, or the decisions of a single military officer.

(1) In this case, this is a statement about my knowledge rather than about the accuracy of the history; it's not a topic I've studied in detail.

Perhaps a better title would be "Known and Unknown Risks", since there is no inherent difference between "stable" and "unstable" risks.

I consider the difference to be extremely important to future decisionmaking, so I'm confused as to why you think this is the case. Can you explain further?

My point is that every disaster-likelihood estimate goes through periods of both small and large fluctuations; the only difference between "stable" and "unstable" risks is which phase you are currently in. It would not be a good model to decide that one type of risk is inherently "stable" and another is inherently "unstable".

I think you need to zoom out by a level of abstraction here. Does the argument make sense to you then?

GiveWell's recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time.

To be clear, those are arguments for a long-term decline in risk. One of the arguments was that we will eventually have passed major technological transitions, such as interstellar colonization or advanced artificial intelligence. I would expect short-term risk to go up as we go through those transitions, before falling to very low levels thereafter (if we survive that long).

Thanks for the clarification. I was definitely confused when I first read that document, because it seemed to paint a much rosier picture than what I consider to be the case. That said, I agree that if we pass certain transitions safely, short-term risk will be much less of a concern.

This idea is interesting, but is there much of a distinction between natural vs. unnatural as categories and stable vs. unstable? It would seem like there would be near-perfect overlap between natural and stable.

While I agree in general, there are still some natural risks that strike me as not necessarily stable. For instance, I'm not sure if the Yellowstone supervolcano erupting can be considered a stable risk. The same goes for devastating epidemics, though those seem unlikely to be existential risks at all.