This is a "basics" article, intended for introducing people to the concept of existential risk.

On September 26, 1983, Soviet officer Stanislav Petrov saved the world.

Three weeks earlier, Soviet interceptors had shot down a commercial jet, thinking it was on a spy mission. All 269 passengers were killed, including active U.S. senator Lawrence McDonald. President Reagan called the Soviet Union an "evil empire" in response. It was one of the most intense periods of the Cold War.

Just after midnight on September 26, Petrov sat in a secret bunker, monitoring early warning systems. He did this only twice a month, and it wasn’t his usual shift; he was filling in for the shift crew leader.

One after another, five missiles from the USA appeared on the screen. A siren wailed, and the words "ракетном нападении" ("Missile Attack") appeared in red letters. Petrov checked with his crew, who reported that all systems were operating properly. The missiles would reach their targets in Russia in mere minutes.

Protocol dictated that he press the flashing red button before him to inform his superiors of the attack so they could decide whether to launch a nuclear counterattack. More than 100 crew members stood in silence behind him, awaiting his decision.

"I thought for about a minute," Petrov recalled. "I thought I’d go crazy... It was as if I was sitting on a bed of hot coals."

Petrov broke protocol and went with his gut. He refused to believe what the early warning system was telling him.

His gut was right. Russian satellites had misinterpreted shiny reflections on the Earth’s surface as missile launches. Russia was not under attack.

If Petrov had pressed the red button, and his superiors had launched a counterattack, the USA would have detected the incoming Russian missiles and launched their own missiles before they could be destroyed on the ground. Soviet and American missiles would have passed in the night sky over the still, silent Arctic before detonating over hundreds of targets — each detonation more destructive than all the bombs dropped in World War II combined, including the atomic bombs that vaporized Hiroshima and Nagasaki. Most of the Northern Hemisphere would have been destroyed.

Petrov was reprimanded and offered early retirement. To pay his bills, he took jobs as a taxi driver and a security guard. The biggest award he ever received for saving the world was a "World Citizen Award" and $1000 from a small organization based in San Francisco. He spent half the award on a new vacuum cleaner.

During his talk at Singularity Summit 2011 in New York City, hacker Jaan Tallinn drew an important lesson from the story of Stanislav Petrov:

Contrary to our intuition that society is more powerful than any individual or group, it was not society that wrote history on that day... It was Petrov.

...Our future is increasingly determined by individuals and small groups wielding powerful technologies. And society is quite incompetent when it comes to predicting and handling the consequences.

Tallinn knows a thing or two about powerful technologies making a global impact. Kazaa, the file-sharing program he co-developed, was once responsible for half of all Internet traffic. He went on to develop the internet calling program Skype, which in 2010 accounted for 13% of all international calls.

Where could he go from there? After reading dozens of articles about the cognitive science of rationality, Tallinn realized:

In order to maximize your impact in the world, you should behave as a prudent investor. You should look for underappreciated [concerns] with huge potential.

Tallinn found the biggest pool of underappreciated concerns in the domain of "existential risks": things that might go horribly wrong and wipe out our entire species, like nuclear war.

The documentary Countdown to Zero shows how serious the nuclear threat is. At least 8 nations have their own nuclear weapons, and the USA has given nuclear weapons to 5 others. There are enough nuclear weapons around to destroy the world several times over, and the risk of a mistake remains even now that the Cold War is over. In 1995, Russian president Boris Yeltsin had the "nuclear suitcase" — capable of launching a barrage of nuclear missiles — open in front of him. Russian radar had mistaken a weather rocket for a US submarine-launched ballistic missile. Like Petrov before him, Yeltsin disbelieved his equipment and refused to press the red button. Next time we might not be so lucky.

But it's not just nuclear risks we have to worry about. As Sun Microsystems’ co-founder Bill Joy warned in his much-discussed article Why the Future Doesn’t Need Us, emerging technologies like synthetic biology, nanotechnology, and artificial intelligence may quickly become even more powerful than nuclear bombs, and even greater threats to the human species. Perhaps the International Union for Conservation of Nature will need to reclassify Homo sapiens as an endangered species.

Academics are beginning to accept that humanity lives on a knife's edge. Astrophysicist Martin Rees and philosopher John Leslie have written books about existential risk, titled Our Final Hour: A Scientist's Warning and The End of the World: The Science and Ethics of Human Extinction. In 2008, Oxford University Press published Global Catastrophic Risks, inviting experts to summarize what we know about a variety of existential risks. New research institutes have been formed to investigate the subject, including the Singularity Institute in San Francisco and the Future of Humanity Institute at Oxford University.

Governments, too, are taking notice. In the USA, NASA was given a congressional mandate to catalogue all near-Earth objects that are one kilometer or more in diameter, because an impact with such a large object would be catastrophic. President Bush established the National Nanotechnology Initiative to ensure the safe development of molecule-sized materials and machines. (Precisely self-replicating molecular machines could multiply themselves out of control, consuming resources required for human survival.) Many nations are working to reduce nuclear armaments, which pose the risk of human extinction by global nuclear war.

The public, however, remains mostly unaware of the risks. Existential risk is an unpleasant and scary topic, and may sound too distant or complicated to discuss in the mainstream media. For now, discussion of existential risk remains largely constrained to academia and a few government agencies.

The concern for existential risks may appeal to one other group: analytically-minded "social entrepreneurs" who want to have a positive impact on the world, and are accustomed to making decisions based on calculation. Tallinn fits this description, as does PayPal co-founder Peter Thiel. These two are among the largest donors to Singularity Institute, an organization focused on the reduction of existential risks from artificial intelligence.

What is it about the topic of existential risk that appeals to people who act by calculation? The analytic case for doing good by reducing existential risk was laid out decades ago by moral philosopher Derek Parfit:

The Earth will remain inhabitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history.

...Classical Utilitarians... would claim... that the destruction of mankind would be by far the greatest of all conceivable crimes. The badness of this crime would lie in the vast reduction of the possible sum of happiness...

For [others] what matters are... the Sciences, the Arts, and moral progress... The destruction of mankind would prevent further achievements of these three kinds.

Our technology gives us great power. If we can avoid using this power to destroy ourselves, then we can use it to spread throughout the galaxy and create structures and experiences of value on an unprecedented scale.

Reducing existential risk — that is, carefully and thoughtfully preparing to not kill ourselves — may be the greatest moral imperative we have.

108 comments

Whilst I really, really like the last picture - it seems a little odd to include it in the article.

Isn't this meant to seem like a hard-nosed introduction to non-transhumanist/sci-fi people? And doesn't the picture sort of act against that - by being slightly sci-fi and weird?

Actually, both that and the Earth image at the beginning of the article seem a little out of place. At least the latter would fit well into a print article (where you can devote half a page or a page to thematic images and still have plenty of text for your eyes to seek to), but online it forces scrolling on mid-sized windows before you can read comfortably. I think it'd read more smoothly if it was smaller, along the lines of the header images in "Philosophy by Humans" or (as an extreme on the high end) "The Cognitive Science of Rationality".

Agreed, especially since it is presented with no explanation or context. If the aim was "here's a picture of what we might achieve," I would personally aim for more of a Shock Level 2 image rather than an SL3 one-- presuming, of course, that this is being written for someone around SL1 (which seems likely). That said, I might omit it altogether.
I thought this article was for SL0 people - that would give it the widest audience possible, which I thought was the point? If it's aimed at SL0s, then we'd be wanting to go for an SL1 image.
SL0 people think "hacker" refers to a special type of dangerous criminal and don't know or have extremely confused ideas of what synthetic biology, nanotechnology, and artificial intelligence are.
Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0s - maybe without mentioning exotic technologies? And would they change their charitable behavior? I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).
I agree with your estimates/answers. There are certainly SL0 existential risks (most people in the US understand nuclear war), but I think the issue in question is that the risks most targeted by the "x-risks community" are above those levels-- asteroid strikes are SL2, nanotech is SL3, AI-foom is SL4. I think most people understand that x-risks are important in an abstract sense but have very limited understanding of what the risks the community is targeting actually represent.
Not only is the picture slightly sci-fi and weird, it's also wrong. I mean, my thought processes on seeing it went something like this: "Oh, hey, it's a ringworld. Presumably this is meant to hint at the glorious future that might be ahead of us if we don't get wiped out, and therefore the importance of not getting wiped ou ... no, wait a moment, it's kinda like a ringworld but it's really really really small. Much smaller than the earth. What the hell's the point of that?"
The picture is of a Stanford torus.
Don't those have to be fully enclosed?
Yes. The part that looks like a sky in the picture is some transparent material that holds the atmosphere in.
Faster build, reduced cost, not such heavy stresses placed on the materials.
I meant "what's the point of that, as opposed to not bothering?". Not "what's the point of that, as opposed to building a full-sized ringworld?".
Not much smaller than the Earth at all! With more physics and attention, one could produce better numbers, but as a crude ballpark (using data from Wikipedia):

Surface area of the Earth: 510,072,000 km^2
Circumference of the ring, if it's placed at 1 AU: 2 * pi AU = 939,951,956 km

So, if the ring is a little over half a kilometer in width, it has the same surface area as the Earth - and could be smaller still, if we just compare habitable area.
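The back-of-the-envelope figures above are easy to check; here is a minimal sketch (Python, using the standard value of the astronomical unit):

```python
import math

AU_KM = 149_597_870.7            # 1 astronomical unit, in km
EARTH_SURFACE_KM2 = 510_072_000  # Earth's total surface area, in km^2

# Circumference of a ring with a 1 AU radius
circumference = 2 * math.pi * AU_KM

# Width the ring needs for its area to match Earth's surface area
width = EARTH_SURFACE_KM2 / circumference

print(f"circumference: {circumference:,.0f} km")  # roughly 940 million km
print(f"required width: {width:.2f} km")          # roughly 0.54 km
```

which confirms the "a little over half a kilometer" figure.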

The scale of curvature there makes it clear it's not 1 AU in radius.

Fair enough, I suppose. But then it's not really a ring world so much as a... what? Space station?
Yeah, pretty much. If it were bigger, I might call it a Culture orbital.
Agreed on this. The ringworld thing comes out of nowhere and doesn't clearly follow from the content of the article. Unless the point is to wink-wink-nudge-nudge at the idea that we might have to do some weird-looking and weird-sounding things in order to save the world... in which case I still don't like the picture.
I read it as "but there's still hope for a big wonderful future", but this is tentative. In any case, thanks for the exposure to Richard Fraser's art.
Or, apparently, a small wonderful future. Look how tiny that ring is!
Also, I'd say both of those pictures seem to have the effect of inducing far mode.
I'm in favor of including the last picture as part of the article, because it shows the possible world we gain by averting existential risk. I don't believe that "context" is necessary; the image is self-explanatory. Nitpicking on ringworld vs. Stanford torus is not relevant, or interesting. The overall connotations and message are clear. "Sci-fi" of today becomes "reality" of tomorrow.

Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the biggest tasks that we have. Otherwise people are just stuck with what they see right in front of their eyes. For a big-picture issue like existential risk, it fits that one would want to also introduce a vague sketch of the possibilities of the big-picture future.

Suggesting that the Earth picture itself doesn't belong in the post shows some kind of general bias against visuals, or something. You think that a picture about saving human life on Earth isn't appropriately paired with a picture of the Earth? What image could be more appropriate than that?

the image is self-explanatory.

I didn't understand it. It didn't self-explain to me.

Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the big tasks we have.

Woah! That's quite a leap! But hold on a second! This isn't meant to be literature, is it? It doesn't seem to me that an explanation of this kind benefits from having hidden meanings and whatnot, especially ideological ones like that.

Nitpicking on ringworld vs. stanford torus is not relevant, or interesting.


Suggesting that the Earth picture itself doesn't belong in the post shows some kind of general bias against visuals, or something.

This is a Fully General Counterargument that you could use on objections to any image, no matter what the image is, and no matter what the objection is.

As for me, I'm not really Blue or Green on whether to keep the image. It's really pretty, but the relevance is dubious at best and nonexistent at worst.

I'm a genius transhumanist who likes sci-fi, and the connotations and message of the image were not clear to me. I wasn't even sure what it was supposed to be a picture of (my first guess was something from the Halo games, though I couldn't imagine the relevance). Is this more something that would be clear to the general populace and not folks like me, and thus should be included in a post to appeal to the general populace?
Strange enough. After all, while I am a transhumanist to some degree and also enjoy sci-fi, I am far from being a genius. Still, the message of the pictures was immediately obvious. This would suggest what you said: they may be appealing to general people, while not necessarily as appealing to those already very familiar with sci-fi and transhumanism.
I would count myself among "general people". I didn't get it at all. In fact, having read the comments, I'm still not sure I get it. It's a pretty picture and all, but why is it there?
The first picture is a dark image of a planet with a slightly threatening atmosphere. It looks like the upper half of a mushroom cloud, but it could also be seen as the Earth violently torn apart. This is why I think, given the context, that it symbolises the threat of a nuclear war and, more universally, the threat of a dystopia. The last picture shows a beautiful utopia. I thought it's there to give a message of the type: "If everything goes well, we can still achieve a very good future." That is, while the first picture symbolises the threat of a dystopia, the last one symbolises the hope and possibility of a utopia. Of course, this is merely my interpretation. There are very many ways one can interpret these pictures.
Note: "interesting", "clear", and perhaps even "relevant" are 2-place words.
Well, how about a picture of human life? Or even a picture of human life being saved; it might not be a bad idea to suggest a similarity between a doctor saving a patient's life and an x-risk-reduction policy saving many peoples' lives. Well, or something like that but a little more subtle as a metaphor.
Needlessly distracting. Most people have enough trouble appreciating the scale of existential risk that their minds often shut down when thinking about it, or just try to change the subject. Adding into it other ideas which are larger scale and even more controversial is not a recipe for getting them to pay attention.
The squid-shaped dingbats are pretty bad, too.

On September 26, 1983, Soviet officer Stanislav Petrov saved the world.

Allegedly saved the world. It actually seems pretty unlikely that the world was saved by Petrov. For one thing, Wikipedia says:

There are varying reports whether Petrov actually reported the alert to his superiors and questions over the part his decision played in preventing nuclear war, because, according to the Permanent Mission of the Russian Federation, nuclear retaliation is based on multiple sources that confirm an actual attack.

because, according to the Permanent Mission of the Russian Federation, nuclear retaliation is based on multiple sources that confirm an actual attack.

Given that this is coming from the sort of people who thought that setting up the Dead Hand was a good idea, and given that ass-covering and telling the public less than the truth was standard operating procedure in Russia, and given everything we know about the American government's incompetence, paranoia, greed, destructive experiments & actions (like setting PAL locks to zero, to pick a nuclear example) and that nuclear authority really was delegated to individual officers (this and other scandalous aspects came up recently in the New Yorker, actually)...

I see zero reason to place any credence in their claims. This is fruit of the poisonous tree. They have reason to lie. I have no more reason to disbelieve Petrov than other similar incidents (like the Cuban Missile Crisis's submarine incident).

Very interesting. But the standard account says that Russian authorities were afraid of American attack at the time, and likely to make the wrong decision regardless of standard procedure. So the parent by itself doesn't address the relevant claim. Also, the Wikipedia quote made it sound like Petrov might have reported sighting missiles after all (perhaps with a disclaimer). This is neither cited nor credible. If one of his superiors arguably saved the world by following protocol, high probability Putin's people would have mentioned it in their press release.
And that's why I hate the Petrov story. It's ridiculous how otherwise sensible people are willing to believe it.
This seems to beg the question: what is wrong with our established methods for dealing with these risks? The information you posted, if it is credible, would completely change the story that this post tells. Rather than a scary story about how we may be on the brink of annihilation, it becomes a story about how our organizations have changed to recognize the risks posed by technology, in order to avert these risks. In the Cold War, our methods of doing so were crude, but they sufficed, and we no longer have the same problems.

Is x-risk nevertheless an under-appreciated concern? Maybe, but I don't find this article convincing. You could make the argument that, along with the development of a technology, understanding of its risks and how to mitigate them also advances. Then it would not require a dedicated effort to understand these risks in advance. So, why is the best approach to analyse possible future risks, rather than working on projects which solve immediate problems, and dealing with issues as they arise?

Don't get me wrong, I respect what the guys at SIAI do, but I don't know the answer to this question. And it seems quite important.
Presumably, in the long term, extinction risk will decrease, as civilisation spreads out. Increased risks have been accompanied by increased risk control - and it is not obvious how these things balance out. Pinker suggests going by death by violence in his latest book - and indeed the risk of death by violence is in decline. Superpowers and world-spanning companies duking it out does not necessarily lead to global security in the short term, though. Most current trends seem positive and probably things will continue to improve - but it is hard to be sure - since technological history is still fairly short.

There aren't enough nuclear weapons to destroy the world, not by a long shot. There aren't even enough nuclear weapons to constitute an existential risk in and of themselves, though they might still contribute strongly to the end of humanity.

EDIT: I reconsidered, and yes, there is a chance that a nuclear war and its aftereffects permanently cripples the potential of humanity (maybe by extinction), which makes it an existential risk. The point I want to make, which was more clearly made by Pfft in a child post, is that this is still something very different from what Luke's choice of words suggests.

How many people will die is of course somewhat speculative, but I think if the war itself killed 10%, that would be a lot. More links on the subject: The Effects of a Global Thermonuclear War; Nuclear Warfare 101, 102, and 103.

"Destroy the world" can mean many things. There aren't nearly enough nuclear weapons to blast apart the Earth itself; the planet will continue to exist, of course. The raw destructive power of nukes may not be enough to kill most of humanity, yes. Targeted on major cities, it'll still kill an enormous number of people, an overwhelming majority of the targeted country for industrial (i.e., urban) countries. But that's forgetting all the "secondary effects": direct radioactive fallout, radioactive contamination of rivers and water sources, nuclear winter, ... those are pretty likely to obliterate, in the next few years, most of the remaining humanity. Maybe not all of us. Maybe a few would survive, on a scorched Earth, without much left of technological civilization. That's pretty much "destroy the world" to me.
This survey's median estimates rate nuclear war as ten times as likely to kill a billion people in the 21st century as to cause human extinction:
How many of the respondents had any specific expertise on nuclear wars?
A handful, who had given presentations to the rest of the group with discussion. Also climate folks.
Do you know anything about what their estimates were?
Not broken out.
The article says "There are enough nuclear weapons around to destroy the world several times over". That suggests some kind of clear-cut quantitative measure, and does not describe the actual situation.

Hi there, I'm the artist whose image you've used to illustrate this article. Good article, and I've learned a thing or two. Thank you for using my image and placing it as a link back to my page; all links are good, etc. I don't have a problem with my work being used, and indeed it's pleasant to come across it like this. In future, however, could you please ask me first and provide a written acknowledgement in the text.



Oh, and for those who were discussing the ring's exactitudes... It's a 200 km diameter torus with a width of 10 km. The atmosphere is "held in" by the strange alien structures looping about the outside of the ring, probably some sort of induced electrostatics. I made this image with the idea of showing a culture that was simultaneously extremely technically advanced and also quite blasé about the existence of the technology. The inhabitants in the towns below may not even glance at the structures above that protect their world. There was no social comment intended; imply what you will :) It was originally intended to be animated. Maybe I'll have another attempt at that, but I think it could do with some finishing work first.
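Taking the artist's stated dimensions at face value, the earlier size debate can be settled with a quick estimate; this sketch treats the habitable band as a cylinder strip of the given diameter and width (a simplification, since the real surface curves in two directions):

```python
import math

DIAMETER_KM = 200  # torus diameter, per the artist
WIDTH_KM = 10      # habitable band width, per the artist
EARTH_SURFACE_KM2 = 510_072_000

# Inner band area ~= circumference * width
strip_area = math.pi * DIAMETER_KM * WIDTH_KM

print(f"habitable strip: {strip_area:,.0f} km^2")  # roughly 6,300 km^2
print(f"Earth's surface is ~{EARTH_SURFACE_KM2 / strip_area:,.0f}x larger")
```

So the commenters were right that this is far closer to a space station than to a ringworld: a living area roughly the size of a small island nation.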

These two are among the largest donors to Singularity Institute, an organization focused on the reduction of existential risks from artificial intelligence.

Should this be the Singularity Institute?


It's as if people are being deliberately mischievous by writing both "the SIAI" (which should be "SIAI"), and on the other hand, "Singularity Institute" (which should be "the Singularity Institute").

Luke is probably confused by the fact that the organization is often called "Singinst" by its members. But that expression grammatically functions as a name, like "SIAI" (or, now, "SI"), and thus does not take the definite article.

The full name, however, ("the Singularity Institute") functions grammatically as a description, and thus does take the definite article. Compare: the United Nations, the Brookings Institution, the Institute for Advanced Study, the London School of Economics, the Center for Inquiry, the National Football League.

Abbrevations differ as to whether they function as names or descriptions: IAS, but the UN. SI(AI) is like the former, not the latter.

If the abbreviation is an acronym (i.e. pronounced as a word rather than a string of letter names), then it will function as a name: ACORN, not "the ACORN" (even though, in full, it's "the Association...").

I think Luke may have been trying to take after Singularity University, which doesn't use "the", because that seems to be the convention for universities? But yes, I agree the lack of a definite article here is grating. It creates impression that writer of sentence is Russian.
Specifically, it's the convention for university names following the formula "X University" (as opposed to "University of/for/in X"). These should be thought of as analogous to geographic place-names (which is what they basically are): "Hamilton County", "Bikini Atoll", "Harvard University", etc. ("Singularity University" would be analogous to "Treasure Island".) There are a few rare exceptions: The George Washington University, The Ohio State University (both articles often "mistakenly" omitted!), the Bering Strait. Anyway, why in the world would SI want to "take after" SU? The risk of confusion between these two organizations is large enough as it is.
The main thing was that he used both in the same article. I assumed that the Singularity Institute was correct because I've seen it more frequently, but consistency is the big thing.
Things are not always that simple:
I didn't make any claim about "simplicity", and nor does anything in the link contradict anything I wrote. Indeed, it confirms my point: some things take "the", others don't, and it isn't a matter of on-the-spot whim. Note that I did not propose any general rule for determining which category something falls into without prior knowledge. My comment about descriptions versus names does not have any predictive implications. I could have talked about "weak" and "strong" instead.
There have been quite a few posts on Language Log about which proper names are preceded by the, e.g.
In the time of John Muir and Theodore Roosevelt, "Yosemite" was apparently "The Yosemite". I've been curious as to when it dropped its article.
The About Us/Our Mission page uses the definite article, as do some other places on the site. The Strategic Plan ("UPDATE: AUGUST 2011") consistently uses no article.
I believe the Strategic Plan was authored by Luke, and hence the criticism also applies there.
Dropping "the" is a conscious, intentional decision by everyone at Singularity Institute as of several months ago and pre-dates Luke's involvement (but post-dates your visit last summer).

That only changes the target of my criticism (now all of you, instead of just Luke), not the criticism itself, obviously.

The "the" isn't droppable, because it was never part of the name in the first place: it was never "The Singularity Institute"; but rather "the Singularity Institute". That is, the article is a part of the contextual grammar. Attempting to "drop" it would be like me declaring that "komponisto" must always be followed by plural verb forms.

(Some organizations do have "The" in the name itself, e.g. The Heritage Foundation. They could decide to drop the "The", and then their logo would say "Heritage Foundation". But one would still write "at the Heritage Foundation"; one just wouldn't write "at The Heritage Foundation".)

I don't know of any example of an "Institute" where people don't use an article in such a context -- which suggests that any such example that might exist isn't high-status enough for me to have heard of it. Even the one that I thought might be an example -- the Mathematical Sciences Research Institute -- also has a grammatical "the"!

You guys should want to be like IAS and MSRI (after all, you'd rather the people at those places be working for you instead!) I don't understand the rationale for this gratuitous eccentricity.

Military units are the only counterexample I can think of, but using "the" is correct for them too, I think. Glancing at Wikipedia, it is inconsistent within articles. Perhaps Singularity Institute is an aspiring paramilitary force.
Indeed - "A dinner at Singularity Institute" would be pronounced "A dinner at ; Singularity Institute", with an awkward pause inserted due to the obviously missing article. Contrast with "A dinner at the Singularity Institute".
Would you explain why?
In testimony to Congress about 15 years ago, the director of the CIA used "CIA" without the definite article, which certainly suggests that he preferred it to be referred to that way. How it is referred to by the public, however, is probably not up to the leaders of the CIA but rather up to the media and maybe bloggers and tweeters. Note that the American Broadcasting Company, the National Broadcasting Company, the Columbia Broadcasting System, and the Public Broadcasting Service are able to decide how they will be referred to by the public (because they have unparalleled access to the public's ear), and they have decided they'd like to be referred to without the definite article. All that suggests that there is some advantage to being referred to without the definite article. (Perhaps the definite article has the effect of "distancing" the referent in the mind of the listener.)

Did you miss this comment? Abbreviations are treated separately from the corresponding full names. One doesn't say "the ABC", but one does say "the American Broadcasting Company". Et cetera.

Likewise, "SIAI" (not "the SIAI"), but "the Singularity Institute for Artificial Intelligence".

One may be either "at CIA" (especially if you're an insider) or "at the CIA", but as far as I know one is always "at the Central Intelligence Agency".

active U.S. senator Lawrence McDonald.

Larry McDonald was a congressman, not a senator.

It's worth noting that "Humanity" ≠ "Human-like (or better) intelligences that largely share our values" ≠ "Civilization." This gives us three different kinds of existential risk.

Robin Hanson, as I understand him, seems to expect that only the third will survive, and seems to be okay with that. Many Less Wrongers, on the other hand, seem not so concerned with humanity per se, but would care about the survival of human-like intelligences sharing our values. And someone could care an awful lot about humanity per se, and want t...

His screen would have flashed "ракетное нападение." What you wrote is correct but in a grammatical form which suggests it was taken from inside a larger sentence involving words like "about a rocket attack"... Russian words change depending on their use within the sentence.

*"Somebody set up us the bomb."*

This is a good intro to human extinction, but Bostrom coined "existential risk" specifically to include more subtle ways of losing the universe's potential value. If you're not going to mention those, might as well not use the term.

Starting a paragraph with "...and" there is very jarring.

Paul Crowley:
Thanks! I would have suggested a fix but I couldn't think of one in the time I had to post.
I got caught on that as well.

I'd like to point out some lukeprog fatigue here, if anyone else wrote this article it would have way more points by now.

I doubt it. I'd bet that if someone else had written it, it would have fewer votes. It's a slightly expanded version of a post that has already been made, re-run, and linked to in yearly posts. It's fine to hear the same old story again, but it doesn't deserve more than half a dozen votes. It mostly scrapes through because luke seems to be writing something of a sequence on the subject, so this would fit more neatly into a link collection.
Thanks for the pointer - it made me realize my lukeprog fatigue and correct for it by upvoting.

If I had been one of the people facing that missile warning and red button, I wouldn't have pressed it even if I knew the warning was real. What use would it be to launch a barrage of nuclear weapons against ordinary citizens simply because their foolish leaders did so to you? It would only make things worse, and certainly wouldn't save anyone. The primitive need for revenge can be extremely dangerous with today's technology.

Mutually assured destruction is essentially a precommitment strategy: if you use nuclear weapons on me I commit to destroying you and your allies, a larger downside than any gains achievable from first use of nuclear weapons.

With this in mind, it's not clear to me that it'd be wrong (in the decision-theoretic sense, not the moral) to launch on a known-good missile warning. TDT states that we shouldn't differentiate between actions in an actual and a simulated or abstracted world: if we don't make this distinction, following through with a launch on warning functions to screen off counterfactual one-sided nuclear attacks, and ought to ripple back through the causal graph to screen off all nuclear attacks (a world without a nuclear war in it is better along most dimensions than the alternative). It's not a decision I'd enjoy making, but every increment of uncertainty increases the weighting of the unilateral option, and that's something we really really don't want. Revenge needn't enter into it.

(This assumes a no-first-use strategy, which the USSR at Petrov's time claimed to follow; the US claimed a more ambiguous policy leaving open tactical nuclear options following conventi... (read more)

From a game-theoretic perspective, if the other side knew you thought that way then they should launch on your watch. MAD only works if both sides believe the other is willing to retaliate. If one side is willing to push the button and the other is not willing to retaliate, then the side willing to push the button nukes the other and takes over the world. If you can be absolutely certain the other side never finds out you aren't willing to retaliate, then yours is the optimal policy.
"Willing" can be unpacked. Having the other party believe you are operating under a mixed strategy would be optimal, so long as: a) each side values the other side winning more than mutual destruction, which as humans they probably do, and b) accidental/irrational launches are possible but not significantly more likely when facing a perceived mixed strategy.

If, say, the USSR and the USA were willing to strike first to win, but not willing to incur a 95% risk of mutual destruction for a 5% chance of total victory, the optimal retaliatory strategy is to (have the other believe you will) retaliate based on a roll of 1d20 - a roll of a natural one has one refrain from retaliating. That way, an accidental launch has a 5% chance of not destroying the world.

In practice, declaring a mixed strategy will probably be seen as setting up an excuse to update one's actions based on the expected payoff given the circumstances that have actually occurred - i.e. to use CDT rather than TDT. Declaring an updateless strategy is a good way to convey that one is operating under a mixed one.
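The expected-payoff comparison above can be sketched in a few lines of Python. The payoff numbers here are hypothetical, chosen only so that condition (a) holds (each side prefers losing to mutual destruction); the 19-in-20 retaliation probability is the 1d20 rule from the comment.

```python
# Illustrative sketch of the mixed-strategy deterrence argument.
# All payoff values are hypothetical, chosen only so that each side
# prefers the other side winning (losing, payoff 0-ish) to mutual
# destruction.

P_RETALIATE = 19 / 20  # declared retaliation probability (refrain on a natural 1 on 1d20)

# Hypothetical payoffs from the attacker's point of view:
WIN = 1.0                   # successful first strike, no retaliation
MUTUAL_DESTRUCTION = -100.0 # both sides destroyed
STATUS_QUO = 0.0            # no one launches

def expected_payoff_first_strike(p_retaliate: float) -> float:
    """Attacker's expected payoff from striking first, given the
    defender retaliates with probability p_retaliate."""
    return (1 - p_retaliate) * WIN + p_retaliate * MUTUAL_DESTRUCTION

# With a 95% retaliation probability, a first strike is still far
# worse than the status quo, so deterrence holds...
assert expected_payoff_first_strike(P_RETALIATE) < STATUS_QUO

# ...while an *accidental* launch now has a 5% chance of not ending
# the world, instead of 0% under certain retaliation.
survival_chance = 1 - P_RETALIATE
```

The point of the sketch is just that deterrence is robust to shaving a little probability off retaliation: the attacker's expected payoff stays deeply negative, while the accident-survival chance moves from zero to something positive.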
This is why you would not have been hired to sit in front of the button, even given the Soviets' dubious hiring techniques. Also, if you had been raised in Soviet Russia, your thoughts on the topic might have been different.
I wouldn't say that. Someone who cares about the issues is likely to lie for signalling purposes and do what he or she can to get the role.
But less likely to have had the foresight to have gotten into the right job track at age 15.
I could indeed simply lie and play the role of an obedient soldier to get the position I was looking for. However, it is of course true that if I had been born and lived in a country where people are continuously fed nationalist propaganda, I would be less likely to disobey the rules or to think it's wrong to retaliate.
Followup question: if someone was about to be placed in front of that red button, would you rather it be someone who had previously expressed the same opinion, or someone who had credibly committed to retaliate in case of a nuclear strike (however useless or foolish such retaliation might be)? Conversely, if someone were to be placed in front of the corresponding red button of a country your leaders were about to launch a barrage of nuclear weapons against, which category would you prefer they be in?
Not that I disagree with your conclusion, but there was significant selection pressure in the process of qualifying to get into the chair in front of the button. Political leaders don't like to give power to subordinates who are not likely to implement the leadership's desires. Having gone through the process and its accompanying ideological training makes Petrov's refusal to risk nuclear armageddon even more impressive. Even though moral courage was [ETA: not] a criterion in selecting him, Petrov showed more than anyone could reasonably expect.
The primitive need for revenge can be even more vital with today's technology. It is the only thing holding the most powerful players in check.

Discussion of existential risk does occur in the mainstream media, sort of; it's mainly blockbuster movies like Independence Day, War of the Worlds, 2012, The Day After Tomorrow and so on. I am confident that people understand the concept, though probably not the phrase. I respectfully suggest that the author amend the original post to acknowledge that discussion of existential risk does occur, perhaps mentioning that the discussion is often trivial or often for entertainment purposes.

Whilst there have been a wide abundance of ex... (read more)

The link for "Countdown to Zero" points to the wrong place (I presume).

Fixed, thanks.

A quote relevant to the final section of this post...

The Earth is the cradle of the mind, but one cannot live forever in a cradle.

Konstantin Tsiolkovsky


There are some points that I dislike about this introduction: The first one is the implicit speciesism resulting from the focus on extinction of Homo sapiens as a species. It would have made sense to use Bostrom's definition of existential risk, which focuses on earth-originating intelligent life instead. Replacement of humans by posthumans is not an existential risk. Transhumanism usually advocates the well-being of all sentience, not just humans. This can refer both to non-human animals (e.g. in natural ecosystems) and to posthumans spreading into space.

Maybe... (read more)

Some people hereabouts are concerned about some types of posthuman and "earth-originating intelligent life".

If you are writing for a general audience, I think you lose most of that audience here:

But it's not just nuclear risks we have to worry about. As Sun Microsystems’ co-founder Bill Joy warned in his much-discussed article Why the Future Doesn’t Need Us, emerging technologies like synthetic biology, nanotechnology, and artificial intelligence may quickly become even more powerful than nuclear bombs, and even greater threats to the human species. Perhaps the International Union for Conservation of Nature will need to reclassify Homo sapiens as an endangered species.

Or at the very least, explain why self-replicating things are more dangerous.