The prospect of a dangerous collection of existential risks and risks of major civilizational-level catastrophes in the 21st century, combined with a distinct lack of agencies whose job it is to mitigate such risks, indicates that the world may be in something of an emergency at the moment. Firstly, what do we mean by risks? Well, Bostrom has a paper on existential risks, in which he lists the following risks as being "most likely":

  • Deliberate misuse of nanotechnology,
  • Nuclear holocaust,
  • Badly programmed superintelligence,
  • Genetically engineered biological agent,
  • Accidental misuse of nanotechnology (“gray goo”),
  • Physics disasters,
  • Naturally occurring disease,
  • Asteroid or comet impact,
  • Runaway global warming,
  • Resource depletion or ecological destruction,
  • Misguided world government or another static social equilibrium stops technological progress,
  • “Dysgenic” pressures (we might evolve into a less brainy but more fertile species, homo philoprogenitus, “lover of many offspring”),
  • Our potential or even our core values are eroded by evolutionary development,
  • Technological arrest,
  • Take-over by a transcending upload,
  • Flawed superintelligence,
  • [Stable] Repressive totalitarian global regime,
  • Hanson's cosmic locusts scenario [Added by author]

To which I would add various possibilities for major civilization-level disasters that aren't existential risks, such as milder versions of all of the above, or the following:

  • convergence of computer viruses and cults/religions,
  • advanced personal weapons or surveillance devices such as nanotech, micro-UAV bugs (cyberpunk dystopia),
  • erosion of privacy and freedom through massively oppressive government,
  • highly effective meta-religions such as Scientology or a much more virulent version of modern evangelical Christianity

This collection is daunting, especially given that the human race doesn't have any official agency dedicated to mitigating risks to its own medium-to-long-term survival. We face a long list of challenges, and we aren't even formally trying to mitigate many of them in advance. In many past cases, mitigation of risks occurred on a last-minute, ad-hoc basis, such as individuals in the Cold War deciding not to initiate a nuclear exchange, particularly during the Cuban missile crisis.

So, a small group of people have realized that the likely outcome of a large and dangerous collection of risks combined with a haphazard, informal methodology for dealing with risks (driven by the efforts of individuals, charities and public opinion) is that one of these potential risks will actually be realized - killing many or all of us or radically reducing our quality of life. This coming disaster is ultimately not the result of any one particular risk, but the result of the lack of a powerful defence against risks.

One could argue that I [and Bostrom, Rees, etc.] am blowing the issue out of proportion. We have survived so far, right? (Wrong, actually - anthropic considerations indicate that survival so far is not evidence that we will survive for much longer, and technological progress indicates that risks in the future are worse than risks in the past.) Major civilizational disasters have already happened many, many times over.

Most ecosystems that ever existed were wiped out by natural means, almost all species that have ever existed have gone extinct, and without human intervention most existing ecosystems will probably be wiped out within a 100-million-year timescale. Most civilizations that ever existed collapsed. Some went really badly wrong, like communist Russia. Complex, homeostatic objects that don't have extremely effective self-preservation systems empirically tend to get wiped out by the churning of the universe.

Our western civilization lacks an effective long-term (order of 50 years plus) self-preservation system. Hence we should reasonably expect to either build one, or get wiped out, because we observe that complex systems which seem similar to societies today - such as past societies - collapsed.

And even though our society does have short-term survival mechanisms such as governments and philanthropists, they often behave in superbly irrational, myopic or late-responding ways. The responses to the global warming problem (late, weak, still failing to overcome coordination problems) and to the invasion of Iraq (plainly irrational) are cases in point from recent history, and there are numerous examples from the past, such as close calls in the Cold War and the spectacular chain of failures that led from World War I to World War II and the rise of Hitler.

This article could be summarized as follows:

The systems we have for preserving the values and existence of our western society, and the human race as a whole are weak, and the challenges of the 21st-22nd century seem likely to overwhelm them.

I originally wanted to write an article about ways to mitigate existential risks and major civilization-level catastrophes, but I decided to first establish that there are actually such things as serious existential risks and major civilization-level catastrophes, and that we haven't got them handled yet. My next post will be about ways to mitigate existential risks.



135 comments

An important consideration not yet mentioned is that risk mitigation can be difficult to quantify, compared to disaster relief efforts, where if you save a house full of children, you become a hero. Coupled with the fact that people extrapolate the future using the past (which misses all existential risks), the incentive to do anything about it drops pretty much to nil.

Right, this is a well-known human bias: people use the most serious disaster that has already happened as an upper limit on possible disasters.
  • Hanson's cosmic locusts scenario

Googling found me this commentary

The result is that [interstellar] colonizers will tend to evolve towards something akin to a locust swarm, using all [resources] for colonization and nothing for anything else.

on Robin Hanson's "Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization".

I sometimes wonder why people think this outcome is bad. It is what we will probably get - unless we manage to eliminate competition and overrule natural selection. In that case, we will still probably get something very similar - since expansion is probably the best way to defend yourself against aliens.
Mind if I ask: as opposed to considering it good?
Indeed. Successfully colonising space is conventionally part of our Glorious Future.

It is one thing to say "Something must be done!" with a tone of righteous superiority. It is another thing entirely to specify what must be done. Many of these risks do not seem existential to me, and some (like dystopia) should really be properly buried as ideas (Bostrom actually dismisses this idea in that paper). The ones that do seem realistically existential seem almost impossible to prepare against on any realistic scale - aliens, gray goo, uploads, and massive global warfare/conquest don't seem like they're going to be sensitive to many invest... (read more)

Truths about the world should be stated if they seem important to our collective utility function, and I resent being criticised for tone, righteousness, etc. I would expect that here I can freely state important truths and expect to be criticised based only upon the accuracy and utility-relevance of those statements. If we start criticising people for "sounding righteous", we are incentivizing people to write posts that sound pleasant over posts that are accurate. This is suboptimal for rational group behaviour.

Furthermore, you shouldn't criticise me for not saying what to do about these problems. If LW implements a general policy of punishing anyone who posts a problem without also posting the solution, because that constitutes "righteous superiority", we are incentivizing people to minimize the amount of time they spend thinking about the solution, write down the first solution that comes into their heads, and publish it right next to the problem - priming everyone on Less Wrong with that likely poor-quality solution as a reference point with the special status of being in the post. We are also providing people with an incentive not to publish what they think are important problems that they can't think of solutions to, biasing LW away from even tackling the hard problems.

I didn't say that we can't do anything about these risks. Absence of evidence of an ability to risk-mitigate is not evidence of absence of an ability to risk-mitigate.
This is a huge claim. You're claiming first of all that the odds of succumbing to a truly existential event are higher than not. You don't (IMO) provide evidence to support this - you provide some evidence that we may have had really catastrophic events in the past, but, again, only 100% is existential, and you do not finish off your examples - "Hitler could have won" and "Hitler could have won and created a repressive regime that lasted for the remainder of human history" are two very different claims, and the former is not existential.

Second, you claim that if we take some steps, we can expect such events not to happen - it is because we lack an "effective long-term preservation system" that we can expect to be destroyed completely. Thus, you have, to my understanding, made two claims: one about the likelihood of existential events, and one about the likelihood of us being able to mitigate them. Again, to my evaluation, you have provided compelling evidence for neither of these conclusions; indeed you've provided virtually no evidence for either of these conclusions (probability, not possibility).

That is the root of my criticism of not providing solutions: you claim solutions are possible, desirable, and effective, and you do not provide any evidence to support this claim. Thus, my criticism of your tone as "righteous" is because you seem to be making a strong, "deep" claim without providing adequate supporting evidence or argument. It is not a criticism of your word choice. I have absolutely no problem with people posting about problems that occur to them that they don't know how to solve. I do have a problem with people making strong claims with a definitive tone without providing adequate supporting evidence.

I admit this may all hinge on a disagreement in definition over "existential." I take existential to require true obliteration. Gray goo would reach this, as would the-simulation-loses-power or every-atom-splits or humanity-is-enslaved-by-something-for
Given that most societies that ever existed were wiped out, often violently or otherwise catastrophically; that we have a list of 6 near misses; and that almost all homeostatic complex systems loosely analogous to civilization - such as ecosystems, or long-lived organisms like coral reefs, or even organisms in general - have existed and then been wiped out again, I think that this is a reasonable claim.
If we had actually had 6 "near misses", then that would be pertinent evidence. In which case, maybe they should be listed, their probabilities and potential impact estimated.
I now get what led to this confusion. You've referred to both "existential" and "major civilizational-level catastrophes" without much effort to distinguish between the two, though they differ in both extent and probability by a few orders of magnitude. I assumed from the Bostrom paper citation and the long list of existential threats that the article in general was about existential risks, which, on a rereading, it isn't. My concern over showing that something could reasonably be done remains, but you do provide appropriate evidence regarding civilization-level catastrophes. It might be worth a sentence or two clarifying that your concern is civ-level or greater, rather than specifically existential, though I may be the only one who misread the focus here.
Well, I used two different phrases. I drew the distinction in the first sentence, and several other times throughout the article. What else did I not do that I should have done? What probability do you assign to human civilization being wiped out over the next, say, 10,000 years? Less than 0.1% or less than 1%, I presume, since it must be a few orders of magnitude less than 100%? How about this: "The prospect of a dangerous collection of existential risks and risks of major civilizational-level catastrophes ... "?
This comment is a much more useful criticism than the previous one. I will be making some changes to the article.
Ah, ok, now I understand why this post is being binned: I sound righteous. Can you give me some hints as to what about this post triggered your righteousness detector, because this was not intended...
Eliezer Yudkowsky:
I don't think that's the main thrust of his complaint. Lack of specifics is the main problem. If you say "Something must be done!" but not what, then the tone of the writing is moot, so far as righteousness-detectors go.
But at the end of the day, this is supposed to be a rationalist community. All I did was communicate a true fact, without attempting to "sound righteous" - which is a form of social signalling. If we cannot state true facts without false accusations of social signalling being levelled - well, then we have a long way to go as a rationalist group. Telling someone off because they tripped your righteousness detector when all they are trying to do is present an accurate piece of the map is not good group epistemic rationality.
As discussed I screwed this post up. I wanted to split up the two tasks logically: 1. establish that there really is a problem 2. say what to do about it - IN THE NEXT POST! I could have rolled it all into one big post, but that's a lot of material.

You should put the summary at the beginning of the article.

The article starts: "The prospect of a dangerous collection of existential risks and risks of major civilizational-level catastrophes ..." - which is a summary, right?
You confused me nonetheless. If you have a summary at the beginning (and I didn't perceive it as such), why do you put another, different one at the end, and mark it as the summary?
I like the idea of one at the beginning and one at the end.

A lot of things on the list of risks are "Things I've read about in science fiction." That's no reason to dismiss them, of course, but it does make it easy to put them in the same mental category as other events in science fiction - "interesting but fanciful."

Beware generalizing from fictional evidence. "Dysgenic pressures" in particular don't seem like they're actually worth fearing in reality, given the Flynn effect, no matter how many times you've seen Idiocracy.
Also beware that reversed stupidity is not intelligence. The existence of the Flynn effect does not imply that "dysgenic" or "eugenic" (scare quotes because there's no value-neutral way to say what counts as an improvement) trends aren't worth thinking about. Suppose hypothetically that genetic trends were leading to lowered average potential intelligence, but that this effect was exactly cancelled by an environmental Flynn effect. This is only a win if you think the status quo is optimal; if you think that more intelligence is better within the range we can apprehend, then IQ not rising fast enough is sad for the same reason that falling IQ would be sad. Cf. the reversal test.
I actually find the inclusion of this as a "most likely" scenario mildly offensive, since Bostrom explicitly says he finds it seriously improbable: "In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot." There's something... unpalatable about intellectuals bemoaning the "lesser folk" breeding the species into oblivion, particularly when it seems to contradict both evidence and theory.
The fact that reproduction in our society is now only caused by wanting to reproduce is disturbing, and is derivable without ever looking in a sci-fi book.
It's disturbing that people have more control over their lives? Why? Because it will result in slightly lower average IQ in the medium term? Because it means our descendants will be monomaniacal fitness-maximizers rather than eudaimonic agents in the long-long term?
Parents don't just pass their genes on to their children, they pass on some of their ideas. "Dysmemics" seems a bigger problem than "dysgenics".
There are many reasons. For starters, it is going to cause religions such as Roman Catholicism to take over because they have lots of kiddies, which is a meme/gene combo. Essentially, giving people "freedom" without oversight creates a fertile environment for aggressive gene/meme combinations to take over. With better oversight, each individual could probably have almost the same average amount of freedom but society avoids the long-term pitfalls. This thread is making me want to have 10 rational children just to counteract the effect...
This fact you speak of is false, unless by "wanting to reproduce" you actually mean "sexually active, either voluntarily or not, and either inclined to reproduce or poorly informed about or unable to access birth control or unlucky and not so unwilling to reproduce that one will get around any and every obstacle to abortion including psychological attacks, physical prevention, massive social stigma, expense, and physical and emotional pain". Or for the male version, "careless" would do.
I am not sure what you are trying to say here. Are you trying to make the point that women post menopause may want to reproduce but not be able to?
...I'm saying that people can and do reproduce without wanting to, even in our society. It is simply not the case that reproduction is caused only by wanting to reproduce.
It seems to me that not wanting to get an abortion is covered under "wanting to reproduce", at least for the evolutionary considerations I am interested in. From an individual rights perspective, I would retract my statement, but in the context of worrying about dysgenic selection pressures, I think my statement stands.
I think that's a completely inappropriate classification of the catchall "not wanting to get an abortion". It's rarely medically necessary, and it's painful and expensive, even pro-choicers have qualms about it sometimes, it carries enough of a stigma that it can be dangerous for reasons beyond medical complication - there are so many reasons not to have an abortion that it's not at all difficult to imagine a woman whose desire not to reproduce is thereby outweighed, even if you're dismissing as stupid all of the possible religious objections.
Well for the argument I am making this isn't relevant. I can see that sometimes the problems and difficulties of some groups - e.g. women seeking abortions or couples unable to conceive are neglected or trivialized, and I am not trying to do that, so I could replace "wanting to reproduce" with "being able to either reproduce or not, and deciding to reproduce". The argument I am making is about the evolutionary forces shaping society on average, for example by Catholics having more children and a gene for religious belief therefore taking over. I am not trying to argue that every single woman who has a child does so wholeheartedly.
Note that optional reproduction doesn't have to be 100% true for Roko's premise to hold. Even if 75% of children are 'oops babies', the other 25% will have significant effects on the gene distribution (or rather, the vast multitude that weren't born because of people exercising choice will have an effect).
I'm not sure what you mean here. How is it different now from any other period in history, and what effect do you think that'll have?
It is different from what happened in our EEA, for starters. The canonical example of dysgenics is Catholics taking over the world by having more children on average.
To whoever voted this comment down: did your brain provide a particular reason that it was unnecessary to worry about catholics taking over the world by having babies, or did it just output a feeling that it was somehow wrong -- maybe even racist -- to worry about such things?
(Not the downvoter.) Racist? Catholics are not a race.
Some minds tend to jump to the "Racist!" accusation every time they hear a disparaging comment about a group of people, regardless of what those people have in common.
I wasn't implying that it was a sensible feeling -- I was just describing a sort of internal flinch. ETA: Here in California, it is to some extent a race issue. We have a large and growing Hispanic population, who are very strongly catholic. If that population continues to grow, without moderation of their religious leanings, it could significantly impact the politics of the state.
The very fact that you're denying that it's racist is EVEN MORE RACIST! P.S. I make no apologies for my recent trend in comment quality...
Catholics have a religion that help them reproduce in the modern world. They may well be more valuable in nature's eyes than the screw-ups who allow their reproductive potential to be sabotaged by their unfamiliar environment. However, Catholicism is not the only system of thought that promotes family values in modern times. See the Amish.
Catholics really aren't that bad.
Well, they support the pope, who causes millions of deaths in Africa by promoting the irrational belief that you shouldn't use condoms, for example, which is caused by the same thing that causes Catholics to have lots of kids.
According to this Wikipedia page, there were maybe 2.4 million deaths due to AIDS in the whole world in 2007. I doubt the Pope was responsible for most of them.
How many deaths, directly or indirectly derived from the Pope's prohibition, would be enough for his influence to be considered negative in this case?
That's more than balanced by extra births - if the example of Catholics taking over the world by having more children on average has anything to it. The Pope's strategy encourages risk - but the overall effect is positive in terms of helping Catholicism spread. With 1 billion members it must be doing something right.
This is an interesting and thought provoking claim in terms of winning. Perhaps singularitarians should start a religion.
Well, there's the cryonics death cult. Those guys think that, if you perform expensive rituals over your dead body, it might live forever in paradise. It's like the Egyptian pharaohs have been reincarnated ;-)
Cryonics is cheap, not expensive. We are those guys who think that, if you perform cheap rituals over your dead body, you might live forever in paradise.
It hardly seems like the Pope can be blamed for AIDS-related deaths based on people not using condoms. Given that he advocates "Use abstinence and don't use condoms", and the effectiveness of abstinence is not increased by using condoms, following his advice will not lead to more AIDS. If people follow the advice "Don't use abstinence and don't use condoms" then they're not following his advice and I don't see why he should be blamed for it. If not being abstinent was a live option for Catholics, then I'm sure condoms would be reconsidered. However, if people are already going to disregard his advice regarding abstinence, I don't see why he should have to give them more advice about what to do in that case.

Imagine that the Pope claims that God has issued two new commandments:

  • Walk on your hands at all times.
  • Never wear shoes.

Would you then argue that it's not his fault that most Catholics have dirty feet?

Indeed I would. I would in that case make fun of Catholics for following such a silly religion, and happily tell people who didn't follow one or both of those that they're being bad Catholics. But for anyone who follows the walk-on-your-hands-all-the-time religion, it's certainly their own fault if they're not up to the task.
People who follow, or try to follow, the whole of the Pope's advice can work to reduce the availability and social acceptability of condoms, which will reduce condom use among people who may or may not care what the Pope has to say. Additionally, since abstinence is apparently very difficult for a lot of people, trying to be abstinent will not reliably result in abstinence; I suspect the number of people who go "well, I can't seem to manage abstinence, but at least I'm not using condoms! That part's easy!" is depressingly high.
I don't see why any such person would continue calling himself a Catholic in that situation. Clearly the options there are 'not a Catholic' or 'Catholic who believes he's going to Hell'. And non-Catholics shouldn't listen to the Pope at all. It might be worth saying that Catholicism is somehow harmful to society, but it's hardly a fault of the Pope that he informs people about Catholic doctrine.
This... is religion we are talking about. I am pretty sure that you can confess to a priest that you have been un-abstinent and be forgiven for it and not be destined for hell, although I am not an expert on Catholic doctrine. Adding condom use on top of lack of abstinence would have the consequence of having to do more penance, most likely. I don't expect the Pope to do anything other than advise people according to Catholic doctrine. That's the job description. That doesn't make it a harmless activity. I wouldn't expect someone whose job title was "assassin" to not kill public figures for money, because that is the task of assassins. That doesn't make it a harmless activity.
Of course, if assassins were a socially acceptable profession and a high-profile assassin killed someone, it would not be appropriate to call the assassin out for doing his job; rather, one should question the wisdom of allowing assassins in the first place. If you've got a problem with the Church, then "The Pope should not have done his job" is an inappropriate way to make that complaint.
It wouldn't be beyond the scope of the job of the Pope to choose less harmful doctrines to concentrate on. For instance, instead of concentrating on the evils of condom use, he could encourage charitable giving, which (while less uniquely Catholic) is something that the church approves of.
It really isn't something that he's concentrated on, just something the press went on about a lot. On the Africa trip, he was answering written questions from reporters, so it's not like he brought it up out of nowhere. Also, it's something the Church gets criticized for, so it was appropriate for the Church to recently re-evaluate their position on it. The Pope has certainly come out and encouraged charitable giving, without any reporters pestering him about it.
You do have to be repentant, which kind of implies that you've changed your ways and are not going to do that sort of thing again. The ways of penance are a bit mysterious, but it couldn't possibly be a concern since it would probably involve 5 prayers instead of 4 or something like that. A better response, though, is that the person who has condoms and then is unexpectedly not abstinent will likely use them, and someone who is against condom use will probably not find himself in that position, which seems to be harmful. But I still don't think it's the Pope's problem to be giving advice to people for what to do if they're going to go around breaking the rules; why then follow his advice at all?
It's hardly without precedent to give people backup plans for what to do if they break rules:

  • "Don't you dare go to a party where there is any alcohol, young lady! But if you do, and you get drunk, for the love of God call me and I'll pick you up, don't drive!"
  • "Do not do anything that would cause you to get set on fire. If you do, stop, drop, and roll."
  • "Don't wear socks with sandals. But if you must, at least have them be short socks, not knee-length jobs in a heavy fabric."
  • "Don't drink the water there - if you do, boil it first."
But the Church expects people to be perfect, and isn't particularly concerned about minimizing harm in the case that you break the rules. Picture this conversation:

"Don't have sex before marriage, and don't use condoms."
"Okay, no sex before marriage. But what if I do - then should I use a condom?"
"What do you mean, what if I do? Don't do it at all. Have zero sex before marriage. Also, don't use condoms. It's simple."
"Oh, okay. So avoid having sex before marriage, and if I do have sex before marriage, then I shouldn't use a condom."
...

The sorts of rules you cite make sense when you're trying to minimize physical harms. When your job is protecting the immortal souls of people, that's a secondary concern at best. "Don't do something that damns your immortal soul to an eternity of torment. But if you do, make sure you wear a sweater" makes about as much sense in this context.
Though that doesn't immediately make it non-fictional evidence, dysgenic pressure (as well as the Flynn effect and the possibility of genetic engineering as possible counters) is also briefly mentioned in Nick Bostrom's fundamental paper Existential Risks, section 5.3.

You cannot use the anthropic principle here. Unless you postulate some really weird distribution of risks unlike any other distribution of anything in the universe (and by the outside view you cannot do that), then if risks were likely we would have had many near misses - either barely getting away from total destruction of humanity, or events that caused widespread but not complete destruction. We have neither.

Global warming and Iraq war are tiny problems, vastly below any potential to threaten survival of civilization. Totalitarian regimes have very short half-l... (read more)

Agreed that we have evidence about the distribution of risks for asteroids, nuclear war, etc., based on historical data. But we also have empirical experience with disasters that follow power laws, so that most of the expected damage comes from the most extreme disasters.
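That power-law point can be illustrated with a quick simulation. This is only a sketch: the tail exponent alpha = 1.2 and the event count are illustrative assumptions, not figures from the paper.

```python
import random

# Sample disaster "damages" from a heavy-tailed Pareto distribution and
# measure what share of total damage the worst 1% of events account for.
# alpha = 1.2 is an assumed tail exponent, chosen only for illustration.
random.seed(0)
alpha = 1.2
n = 100_000

# Inverse-transform sampling: if U ~ Uniform(0, 1), then U**(-1/alpha)
# follows a Pareto(alpha) distribution with minimum value 1.
damages = sorted((random.random() ** (-1.0 / alpha) for _ in range(n)),
                 reverse=True)

top_1_percent = damages[: n // 100]
share = sum(top_1_percent) / sum(damages)
print(f"Share of total damage from the worst 1% of events: {share:.0%}")
```

For a tail this heavy, the worst 1% of events typically carry a large share of the total damage - the qualitative behaviour the comment describes; by contrast, for a thin-tailed (e.g. exponential) distribution that share would be only a few percent.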
Thanks Carl, that paper is extremely relevant
It seems reasonable to me that the distribution of risks will change as technology improves. And technology is improving faster and faster.
THERE HAVE BEEN NEAR MISSES! We nearly nuked ourselves on 4 occasions in the Cold War that are public knowledge. Nazi Germany only lost WWII because Hitler made very silly mistakes [a victory leading plausibly to a Eurasia-wide totalitarian state, which may have been stable] - I would regard the survival of neo-enlightenment Europe from WWII to be a lucky event. Russia is still a big mess. I must admit, however, that the collapse of the USSR is a very encouraging data point; it seems that there are certain stability-providing mechanisms, thanks. No, I disagree. Look at Africa, for example, which seems to get more screwed over time. Also, China seems to be on the knife-edge between actually evolving into a liberal democracy and evolving into a techno-enabled totalitarian dystopia. Look at the Great Firewall, Tiananmen Square, organ harvesting from political dissidents, etc.
Eliezer Yudkowsky:
Toba supereruption and genetic bottleneck probably strongest example of near-miss.
The genetic bottleneck around the time of the eruption was not as "near" as all that - in part since there were Neanderthals around at that time as an additional backup mechanism, complementing the surviving humans. Plus, of course, Homo floresiensis! ;-) Estimates put us down to the last 5,000-10,000 backup copies of the human genome, and figures from before the eruption appear to have not been dramatically higher. There just weren't that many homos around at the time.
True. I had forgotten the genetic bottleneck. Also I think I conflated the risks that will increase with technology (Nukes, Bio, Totalitarian) with the risks that were the actual closest near misses - the anthropic principle doesn't care.
The proposed genetic bottleneck around the time of the eruption was long ago - when the human population may have been very small anyway. Today, we have six billion humans, and there are better defenses against such things - in terms of stocked underground bunkers. So a modern volcanic eruption would have to be vastly more destructive to kill all humans. The probabilities involved are minuscule, and shrink with every passing day. It is only because of a "Pascal's wager"-style argument that people can be made to consider such risks.
I can't find it, but there's an article explaining how the Axis was more or less doomed from the start. In short, the United States had twice the production capacity of all the other participants combined. I'm saying Hitler's mistakes only hastened the inevitable.
I'm not sure we should argue politics but... American intervention was not inevitable. Even mere matériel supply wasn't inevitable. There were a number of ways America could've been out of the picture or impotent; one of the cited turning points/mistakes was the failure of the Battle of Britain to bring England to terms, or the escape of their army at Dunkirk. Letting America into the war was arguably one of Hitler's greatest mistakes (either by commission or omission, and there was even a historical parallel warning against America that Hitler was intimately familiar with - WWI). America may've been tops in industry, but it's hard to see it launching a transoceanic invasion into Europe with no allied powers closer than... Africa? Asia?
Seems implausible to me, and also it seems to me that this would not be sufficient evidence to claim that Axis was almost certainly doomed from the start, though it would push that way. Also, consider this in relation to Carl's point about long-tailed power-law risk distributions. The fact that WWII was only a quite near miss as opposed to a very very near miss then looks less reassuring.
Looking at [] I see that the US's GDP (a good proxy, I think, for industrial production) was 800 at the start of the war, while total Axis GDP was 685. The rest of the Allies represented 829. So by itself, the US was 17% more than the entire Axis alliance, and just under half of the Allies (i.e. the rest of the world). Pretty impressive. The last column has the USA at 1474, or >3x total Axis output (466), and at 64% of the Allies. Incidentally, this means that at the end of the war, the US was at >2x what the Axis were at the beginning of the war. So the US did not have twice what the rest of the world had; but it did have twice the Axis by the end, and presumably this was foreseeable. So we can change Smith's point from the USA being able to industrially epic pwn the Axis, to merely pwn them.
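The ratios above can be checked mechanically. A quick sketch, using the GDP figures quoted in the comment (units as given there; treat the numbers themselves as the commenter's, not independently verified):

```python
# GDP figures quoted in the comment above (start vs end of WWII).
us_start, axis_start, other_allies_start = 800, 685, 829
us_end, axis_end = 1474, 466

# US vs the whole Axis at the start: ~17% more.
print(round(us_start / axis_start - 1, 2))
# US share of the total Allied figure at the start: just under half.
print(round(us_start / (us_start + other_allies_start), 2))
# US vs total Axis output at the end: more than 3x.
print(round(us_end / axis_end, 2))
# US at the end vs the Axis's *starting* output: more than 2x.
print(round(us_end / axis_start, 2))
```

So the stated ratios (17% more, just under half, >3x, >2x) are internally consistent with the quoted figures.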
Vladimir_Nesov · 14y: [] (fixed link)
Thanks; edited.
Thank you for providing data.
We /did/ nuke each other - in Japan. Some people even died. Civilisation, however, did not end. It seems pretty speculative to classify 20th century history as some sort of "near miss". 6 billion humans represents the enormous success of our species - each human is a backup copy of our DNA. To classify this as a "near disaster" seems strange.
The use of two kiloton-yield weapons in a one-sided war in Japan is not exactly the same thing as the use of nearly 100,000 megaton-yield weapons in the cold war. In terms of pure explosive yield, the situations differ by a factor of 100,000,000, so I call bullshit on your analogy.
That hypothetical explosion never happened. Estimates of its probability seem necessarily speculative to me. If you want to "establish that there are actually such things as serious existential risks and major civilization-level catastrophes" then invoking things that never happened seems like rather weak evidence.
I am invoking the near misses in the cold war. But now you have changed your tack from "Civilisation however, did not end" (i.e. the effect of a nuclear war is not an existential disaster) to "Estimates of its probability seem necessarily speculative to me", which doesn't really matter. What the probability actually is is what matters, and you didn't comment on that.
I did - I said your estimate of a "near miss" was "speculative". In fact, the world didn't end, and you haven't presented evidence that that was actually a likely outcome. Calling the "cold war" a "near miss" doesn't count for very much. We had zero use of nuclear weapons in anger during that era.
Well, there possibly was the Toba supereruption [], which would fit being a near miss. Arguably, we were very close [] too during the cold war, and several times over - not total extinction, but a nuclear war would've left us very crippled.
We're much safer against even very rare natural disasters like Toba (and others that act through climate) than we were historically. The kind of disaster that could wipe us out gets less and less probable every decade. I'm not even sure the kind of asteroid that wiped out the dinosaurs would be enough to wipe out humanity now, given a few years of prior warning (well, it would kill most people, but that's not even close to getting rid of the entirety of humanity). I seriously dispute the idea that we were very close to nuclear war. I even more seriously dispute the idea that it would have had any long-term effects on human civilization if it had happened. Even in the middle of WW2, people's life expectancy was far higher than historically typical, violent death rates were far lower, and I'd even take a guess that average personal freedoms compared quite well to the historical record.
Whether those catastrophes could destroy present humanity wasn't the point; the point was whether near misses in potential extinction events have ever occurred in our past. Consider it this way: under your assumption that our world is more robust nowadays, what would count as a near miss today would certainly have wiped out a frailer humanity back then; conversely, what counted as a near miss back then would not be nearly that bad nowadays. By constraining the definition of a "near miss" in that way, it becomes impossible to show any such near miss in our history. That is at best one step away from saying we're actually safe and shouldn't worry all that much about existential risks. Speaking of which, arguing over the definition of an existential risk, and from there concluding that catastrophes such as a nuclear war aren't existential risks, blurs the point. Let us rephrase the question: how much would you want to avoid a nuclear war, or a supereruption, or an asteroid strike? How much effort, time, and money should we put into the cause of avoiding such catastrophes? While it is true that a catastrophe that doesn't wipe out humanity forever isn't as bad as one that does, such an event can still be awfully bad, and deserving of our attention and efforts to prevent it. We're talking billions of human lives lost or spent in awful conditions for decades, centuries, millennia, etc. If that is no cause for serious worry, pray tell what is?
Total extinction has an expected value that's pretty much indistinguishable from minus infinity. Global thermonuclear war? Oh sure, it would kill some people, but the expected number of deaths and amount of suffering from, let's say, malaria or lack of access to fresh water in the next 100 years is far higher than the expected death and suffering from a global thermonuclear war in the next 100 years. Even our most recent total war, WW2, killed a laughably small portion of the fighting population relative to historical norms. There's no reason to suspect WW3 would be any different, so the number of deaths would most likely be rather limited. And as countries with low birth rates (that is, pretty much all countries today) have a historical record of trying very hard not to get into any war that could endanger their population (as opposed to sending bombs to other countries and such), the chance of such a war is tiny. So let's say a 1% chance of global thermonuclear war killing 100 million people in the next 100 years (expected 1 million deaths) versus 1 million deaths a year from malaria, and 2 million from diarrhea. I think we have our priorities wrong if we care much about global thermonuclear wars. (Of course people might disagree with these estimates, in which case they would see a global thermonuclear war as a more important issue than I do.)
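The expected-value comparison above can be spelled out. Note that the 1% probability and 100 million figures are the commenter's own assumptions, not data:

```python
# The commenter's assumed inputs (not data): a 1% chance per century of a
# global thermonuclear war killing 100 million people.
p_war = 0.01
war_deaths_if_it_happens = 100_000_000
years = 100

expected_war_deaths = p_war * war_deaths_if_it_happens  # over the century
malaria_deaths = 1_000_000 * years                      # ~1M/year, same horizon

print(expected_war_deaths)                  # 1 million expected war deaths
print(malaria_deaths / expected_war_deaths) # malaria ~100x larger on these inputs
```

On those inputs, malaria dominates by a factor of 100, which is the whole force of the argument; readers who assign a higher war probability or death toll will get a different ranking.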
Under those assumptions your estimates are sound, really. However, should we only count the direct deaths incurred as a consequence of a direct nuclear strike? Or should we also take into account the nuclear fallout, radiation, nuclear winter, ecosystems crashing down, massive economic and infrastructure disruption, etc.? How much worse does it get if we take such considerations into account? Aside from those considerations, I really agree with your idea of getting our priorities right, based on numbers. That's exactly the reason why I'd advocate antiagathic research above a lot of other things, which actually kill fewer people and cause less suffering than aging itself does, but not everyone seems to agree with that.
Right now 350-500 million people a year suffer from malaria, and billions live in places with massive economic and infrastructure disruption, with health prospects most likely worse than a first-world person would have in a post-thermonuclear-war environment. I doubt fallout would be that bad in the long term. Sure, there would be a higher cancer rate, but people would abandon the most irradiated places, take some precautions, and the overall loss of healthy lifespan would most likely be of the same order of magnitude as a couple of decades of medical progress. For all I know, people after a potential 2100 thermonuclear war might live longer and healthier than us.
And what do you think the effect of a full-scale global nuclear war on the poorest one fifth of the world would be? Do you think that they would be unaffected or not affected much?
By 2100 hopefully we won't have the third world any more []. Swapping nuclear warfare for end of third world poverty would be a good exchange for most people. And nuclear warfare is a remote possibility, while third world poverty is real and here with us now. Also notice how much better is life in Hiroshima [] compared to Congo [].
What should be realized here, however, is that Hiroshima could become a relatively OK place because it could receive a huge amount of help [] by being part of a country with such a high GDP. Hiroshima didn't magically get better. A large-scale nuclear war would destroy our economy, and thus our capability to respond and patch the damage that way. For that matter, I'm not even sure our undisturbed response systems would be able to deal with more than a few nuked cities []. Also please consider that Hiroshima was nuked by an 18 kt bomb, which is nothing like the average 400-500 kt nukes we have now.
How could it receive huge amounts of help if in 1949, when rebuilding started, Japan did not have a high GDP? Now we have a much higher GDP, and if all our major cities are too expensive to rebuild, we can just move to other cities. Based on similar situations (WW2, the fall of the Soviet Union), the disruption of the economy will most likely not last long, so people after a global nuclear war will most likely have plenty of money to use.
Yes indeed. Do you expect that to remain true after a nuclear war too? More basically, I suppose I could summarize my idea as follows: you can poke a hole in a country's infrastructure or economy, and the hole will heal with time because the rest is still healthy enough to help with that - just as a hole poked into a life form can heal, provided that the hole isn't big enough to kill the thing, or send it into a downward spiral of degeneration. But yes, society isn't quite an organism in the same sense. There you probably could have full-scale cataplasia, and yet see something survive someplace, and perhaps even, from there, start again from scratch (or better, or worse, than scratch).
As I said, the economies of countries destroyed in WW1 and WW2 picked up where they left off extremely quickly, and definitely did not result in a lasting return to the stone age as some imagine. This makes me guess the economic disruption of a global thermonuclear war wouldn't last that long either. This is an outside view, and it's pretty clear, but I understand some people would rather take an inside view, which would be much more pessimistic.
I think you have some typos in your last paragraph that may reverse some of the meaning, so I can't tell if I agree with you. In particular, I'm concerned with the conjunction of totalitarian issues with standard of living. It's certainly true that China and the USSR give examples of peaceful rollback of totalitarian regimes (partial in China and complete in the USSR). The USSR looks to me to have had a continually increasing standard of living, including under Stalin, with the lone exception of the war. So totalitarian aspects of a regime may be rather independent of wealth.

Re: One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion.

Bostrom and Rees have both written books on the topic - and so presumably stand to gain financially from promoting the idea that existential risk is something to be concerned about.

It could also be argued that we are probably seeing a sampling bias here - of all the people on the planet, those with the highest estimate of DOOM are those most likely to alert others to the danger. So: their estimates may well be from the very top end of the distribution.

We're really only just now able to identify these risks and start posing theoretical solutions to be attempted. Our ability to recognize and realistically respond to these threats is catching up. I think saying that we lack good self-preservation mechanisms is a little unfair as criticism.

Re: One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion. We have survived so far, right? (Wrong, actually - anthropic considerations indicate that survival so far is not evidence that we will survive for a lot longer, and technological progress indicates that risks in the future are worse than risks in the past).

Existence is not evidence, but the absence of previous large-scale disasters should certainly count for something. We have no evidence of civilisation previously arising and then collapsing, which we would expect to see if civilisation was fragile.

Many disasters that would be sufficient to wreck civilization will probably leave at least some survivors. The inhabitants of Easter Island ended up pretty screwed, but they didn't go extinct. Similarly, the collapse of the Mayan civilization left plenty of people alive, many of whom ended up settling in a different area than the former center of civilization. If a major disaster occurs that doesn't manage to kill off basically all animal life on Earth, I suspect that there will still be at least a few people carrying on one hundred years later, even if they have to live as subsistence farmers or hunter-gatherers.

Libertarianism is the best available self-preservation mechanism. It is the social and memetic equivalent of genetic behavioral dispersion: members of many species behave slightly differently, which reduces the likelihood of a large percentage falling to the same cause.

Of the eighteen existential risks Bostrom listed, that would help against maybe three. If you disagree, tell me how that would help with any of them other than resource depletion and evolution.
But is Libertarianism the best available species-preservation mechanism against existential risks like asteroid impact, nuclear holocaust or cosmic locusts?
Libertarian affective death spiral candidate identified?
I would have a much easier time taking libertarianism seriously if its advocates weren't all in obvious affective death spirals. Libertarianism does not handle tragedy of the commons scenarios well at all, and that's exactly what most existential risks are.
Not all libertarians are in an affective death spiral, obvious or otherwise. It's true that many are, but I, for example, recognize tragedy of the commons scenarios and accept that some regulation can be useful to mitigate these problems. I believe there are some specific legitimate purposes of government, such as outlawing aggression, internalizing costs, and coordination (e.g., everyone drives on the right side of the road; it would have worked for everyone to drive on the left, but as a society we had to pick one and go with it). Further, I think that every law should be validated as achieving such an objective with minimal intervention. I understand how you can form this view, seeing all the pro-business conservatives seizing on libertarian rhetoric to oppose regulation but then neglecting the responsibility part when they want subsidies, or all the people who correctly notice that most laws are counterproductive and then incorrectly conclude that all laws are counterproductive. But when you claim that all advocates of libertarianism are like that, you are attacking a strawman.
"Libertarian" doesn't carve out a very precise cluster in people-space any more. Pretty much anyone who's reflexively wary of government intervention in the private market can call herself a libertarian. Some libertarians will support meaningful government intervention in tragedy of commons type problems; some may even go so far as to support some level of government assisted/coerced redistribution of wealth. You can argue 'till you're blue in the face that that's not a "real" libertarian, but usage defines meaning, and I think enough such people self-identify that way that the word has become fairly imprecise.
This kind of absurdly absolutist statement achieves nothing but displaying personal animus toward an ideology. It is true that many libertarians are in death spirals, but I know of no political group that does not have large numbers of supporters in affective death spirals. In case you were unaware, this is a universal tendency for idea-based groups. And politics is the mind-killer, no matter your politics. I agree with this, though:

Self-perpetuation in the strictest sense isn't always the point. The goal isn't to simply impose the same structure onto the future over and over again. It's continuity between structures that's important.

Wanting to live a long life isn't the same as having oneself frozen so that the same physical configuration of the body will persist endlessly. The collapse of ecosystems over a hundred-million-year-long timespan is not a failure, no more than our changing our minds constitutes a failure of self-preservation.

In many of the hypothetical "disasters", civilisation doesn't end - it is just that it is no longer led by humans. That seems a practically inevitable long-term outcome to me (humans are rather obviously too primitive and slug-like to go the distance).

The classification of such outcomes as "disasters" needs a serious rethink, IMO.

You'd think that's actually pretty much what most of us humans care about.
Prepare for disappointment, then. My estimate of the chances of humans persisting for much longer is pretty tiny. Future civilisation is likely to be descended from current civilisation - but humans are much more likely to survive in museums than anywhere else. That outcome is not necessarily a disaster - it could be one of the best possible outcomes. Having humans in charge would be really, really bad for civilisation's health and spaceworthiness.
A fair point. So what you're telling me is that we should desire a future civilization that is descended from our own, probably one that will have some points in common with current humanity, like some of our values and desires (or values and desires that grew out of our own) [], etc.?
It is not my wish to advise what people should or should not desire. However, there being no humans around does not necessarily a disaster make. Maybe the humans transcended their bodies, adopting a new, high-technology medium, which finally allows our brains to be copied and backed up. A disaster? Or the ancient dream of conquering death come true? That would seem to depend on your perspective.

Re: technological progress indicates that risks in the future are worse than risks in the past

Technological progress has led to the current 6 billion backup copies of the human genome. Yet you argue it leads to increased risk? I do not follow your thinking. Surely technological progress has decreased existential risks, making civilisation's survival substantially more likely.

Technological progress seems to be necessary, but not sufficient, to ensure our civilization's long-term survival. Correct me if I'm wrong, but you seem quite adamant in arguing against the idea that our current civilization is in danger of extinction, when so many other people argue the other way around. This seems like it has the potential to degenerate into a fruitless debate, or even a flame war. Yet you probably have some good points to make; why not think it over and make a post about it, if your opinion is so different, and substantiated by facts and good reasoning, as I am sure it must be?
That's a function of the venue of this discussion. The blog's founder goes to existential risk conferences - and so here we see the opinions of his supporters. Doom prophecies are an old phenomenon. The explanation appears to me to be mainly sociological: warning others about risk makes you look as though you are contributing positively. If the risk doesn't actually exist, then it needs manufacturing - so that you can still alert others to the danger. Of course, the bigger the risk, the more important it is to tell people about it. Existential risks are the "biggest" risks of all - so they are the most important ones to tell people about. Plus, alerting people to the risk might help you to SAVE THE WORLD! Thus the modern success of the "doom" meme.
I see your point; sometimes we may have already written the bottom line, and all that comes afterward is trying to justify it []. However, if an existential risk is conceivable, how much would you be ready to pay, or do, to investigate it? Your answer could plausibly range from nothing to everything you have. There ought to be a healthy middle there. I could certainly understand how someone would arrive at saying that the problem isn't worth investigating further, because that person has a definite explanation of why other people care about that particular question, their reason being biased. I'd for instance think of religion as an example of that. I wouldn't read the Bible and centuries of apologetics and debates to decide whether God exists. I'd just check to see if, at first, people started to justify the existence of a god for reasons other than it existing. That's certainly a much more efficient way of looking at the problem. Is there no sum of money, no amount of effort, however trivial, that could nevertheless be expended on such an investigation, considering its possible repercussions, however unlikely those seem to be?
By all means, discuss the risks we face. However, my counsel is to bear in mind the sociological explanation for the "the end is nigh" phenomenon. 2012 isn't the first year in which the end of the world has been predicted. Are you actually concerned about the risk? Or are you attempting to signal to others what a fine fellow you are by alerting them to potential danger? Or is it that you wish to meet and form alliances with other people who want to help with the fine and noble cause of helping to SAVE THE WORLD? We understand the sociological explanation for the "DOOM" bias. Let us therefore exercise due caution in its immediate vicinity.
You keep throwing accusations of rationalization, but I wonder what it would look like if the earth really did revolve around the sun?
You mean that the idea that the end of the world is nigh is a classical failed model - much like geocentrism was...?
You are not introducing data that distinguishes the hypothesis that the cause of saving the world is worthwhile from the hypothesis that it's but an attire, serving status and rationalization thereof. It's name-calling, not an argument.
The argument was being made that "so many other people argue the other way around". Lots of people arguing something is not a very good reason for thinking it is true. Ideas can become popular because they are good at spreading, not because of their truth value. In the case of risks, it is pretty obvious how this could happen. Warning people about risks has a positive effect on your reputation. It is well known to psychologists that humans concoct risks that are not real for signalling purposes: "Psychologists have dubbed the phenomenon The Boy Who Cried Wolf Effect, named after Aesop's fable about a shepherd who fakes wolf attacks. In real life, experts say, these "shepherds," mostly women, aren't acting out of boredom. These damsels in distress are very often motivated by an intense desire for attention and may feel unfairly neglected by those close to them, often romantic partners. Others are simply crying out to a world they feel ignores them." * [] I do not know to what extent memetic and evolutionary-psychology explanations explain the observed effect. My estimate - from what I have seen - is that the extent is probably quite large. So: I think that discussion of the extent to which these beliefs may be caused by signalling-related biases is quite appropriate. Under this model, agents exaggerate the risks, and tell others about those risks, which makes them feel good. It also makes the recipients grateful. They then go on to infect others with the DOOM meme. The infected agents construct elaborate rationales to explain the repeated historical failures of the DOOM predictions to come true. Yes, all those other folk who thought the same thing were wrong - but this time it is different, because ...
Your argument is that it's plausible that the idea is propagating independently of its truth (which is obviously true, when the idea is construed at the level of crude approximation []), but it's not an argument against the idea's truth, especially if the idea is recreated apart from its fame. Also, your version of the idea is about fuzzies [], while ours is about utility, prompting different kinds of actions []. The empty buzz of doomsaying was around for a long time, never crossing over towards serious study.
The idea that the DOOM meme is a plague is indeed an argument against its truth value. DOOM being ancient and persistent argues for its basis in human universals, rather than it being a realistic assessment of historical events. If DOOM was a new phenomenon, that might make it a more interesting object of study - but I don't see credible evidence supporting that. DOOM is clearly ancient - see: [] As for "fuzzies" vs "utility" - that just seems like an attempt to rubbish my position. Those who think they are going to RAISE THE ALARM and help SAVE THE WORLD remind me of "Total Recall": "What's bullshit, Mr. Quaid? That you're having a paranoid episode triggered by acute neuro-chemical trauma? Or that you're really an invincible secret agent from Mars who's the victim of an interplanetary conspiracy to make him think he's a lowly construction worker?"
Pirsig: The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Reversed stupidity is not intelligence [].
I am not sure my message is getting through. Apocalyptic cults have a long history - and they have been studied by scientists. See: [] [] It is not particularly surprising that we see some modern incarnations that embrace the latest instruments of destruction. Are the DOOM-mongers interested in this? Not AFAICS. All you will hear about from them is the DOOM. After all, what could possibly be more important than THE END OF THE WORLD? You should go and warn your loved-ones about the danger immediately!
People who were shouting that the Sun is shining were actually fools, as certified by scientific research. Still, the message remains valid. Also, the fact that they happen to believe in something that is true doesn't make them right [] if they believe it for reasons other than that it's true [].
Your bizarre analogies and obscure cultural references are having the effect of making me lose interest. What has this to do with what I have said? You are arguing that lots of previously inaccurate DOOM prophecies don't mean we don't face DOOM? I don't think I claimed that they did mean that - merely that it was pertinent evidence on the topic. That DOOM prophecies are a well-known sociological phenomenon which has a lot to do with signalling, status, self-esteem, etc - and not much to do with the end of the world - illuminates the behaviour of today's prophets of DOOM, in my view. These days, DOOM is big business. The Lifeboat Foundation has raised hundreds of thousands of dollars in the name of this kind of thing. We should understand how DOOM is marketed. Belief in DOOM is not necessarily invigorating and stimulating - it can have some substantial down-sides - for example, helplessness and a failure to engage in long-term planning. I expand on this theme in the following video: "Tim Tyler: Doom!" * []
