PART 2:

If humanity is screwed, why sacrifice anything to reduce potential risks? Why forgo the convenience of fossil fuels, or exhort governments to rethink their nuclear weapons policies? Eat, drink, and be merry, for tomorrow we die! A 2013 survey in four English-speaking countries showed that among the respondents who believe that our way of life will probably end in a century, a majority endorsed the statement “The world’s future looks grim so we have to focus on looking after ourselves and those we love.”

Few writers on technological risk give much thought to the cumulative psychological effects of the drumbeat of doom.

First, the “drumbeat of doom” is misleading, for reasons discussed above (and below). Second, it’s not entirely true that few writers have thought hard about the relevant psychological effects. For example, in Morality, I repeatedly emphasize the astronomical value thesis (a reason for hope), endorse what Paul Romer calls “conditional optimism” (which Pinker also cites in Enlightenment Now), and quote Cormac McCarthy’s witticism that “I’m a pessimist but that’s no reason to be gloomy!” Furthermore, I underline the dangers associated with what Jennifer Jacquet calls the “anthropocebo effect,” which refers to “a psychological condition that exacerbates human-induced damage—a certain pessimism that makes us accept human destruction as inevitable.”

What’s more, in one of the founding documents of the field, Bostrom conjectures that the depressing nature of the topic may be one reason that it has received so little scholarly attention—to which he adds that “the point [of existential risk studies] is not to wallow in gloom and doom but simply to take a sober look at what could go wrong so we can create responsible strategies for improving our chances of survival.” Other scholars within the field, including Owen Cotton-Barratt, Toby Ord, and Max Tegmark, have highlighted the importance of “existential hope,” which brings to the foreground a sense of how good things could turn out (if we play our cards right). These are just a few of many examples that could be adduced to corroborate the claim that many “writers on technological risk” have in fact thought carefully, and deeply, about the psychological effects of contemplating doomsday scenarios.

As Elin Kelsey, an environmental communicator, points out, “We have media ratings to protect children from sex or violence in movies, but we think nothing of inviting a scientist into a second grade classroom and telling the kids the planet is ruined. A quarter of (Australian) children are so troubled about the state of the world that they honestly believe it will come to an end before they get older.”5

Here, Pinker cites a website called “Ocean Optimism,” which, on a separate page, supports the last sentence above by citing the paper “Hope, Despair, and Transformation,” which itself cites a report titled “Children’s Fears, Hopes, and Heroes.” (Why not cite the original? It’s not clear.) The original source of this data also notes that “44% of children are worried about the future impact of climate change,” and “43% of children worried about pollution in the air and water.” At least from one perspective, this is quite encouraging: young people are taking the very real dangers of environmental degradation seriously. While fear can be crippling—it is, at times, the “brother to panic”—it can also be a great motivator.

According to recent polls, so do 15 percent of people worldwide, and between a quarter and a third of Americans.6 In The Progress Paradox, the journalist Gregg Easterbrook suggests that a major reason that Americans are not happier, despite their rising objective fortunes, is “collapse anxiety”: the fear that civilization may implode and there’s nothing anyone can do about it.

Is there any empirical data to support this thesis, though? So far as I can tell, the answer is “no.” Thus, one might wonder: What if worrying about the end of the world provides the necessary impetus to recycle more, fly less, donate to the Future of Life Institute, plant a tree, stop using plastic bags, earn a degree in philosophy, ecology, or computer science, educate others about the “intertwined” promise and peril of technology, and vote for political leaders who care about the world beyond the next election cycle? In my own case—that is, on a personal level—realizing (i) how much potential future value there is to lose by succumbing to an existential catastrophe, and (ii) the extent to which the existing and emerging risks to humanity are historically unprecedented, is what inspired me to dive into and contribute to the literature. For me, futurological fear has been the greatest driver of intellectual activism to ensure a good future for humanity.

In fact, Easterbrook argues that “collapse anxiety is essential to understanding why Americans do not seem more pleased with the historically unprecedented bounty and liberty in which most live.” But nowhere does he provide an argument or evidence for “collapse anxiety” being essential. For example, after listing a number of desirable trends pertaining to life expectancy, health, education, and comfort, he simply asserts that “collapse anxiety hangs over these achievements, engendering subliminal fear that prosperity will end.”

It’s also worth noting that Easterbrook—again, someone whom Pinker cites approvingly, and whose book Google amusingly categorizes as “Self-Help”—claims that “if a collapse were coming, its signs ought to be somewhere. That is not what trends show. Practically everything is getting better.” But this is demonstrably false with respect to, say, climate change and global biodiversity loss, both of which are driving what is only the sixth mass extinction event in life’s 3.8-billion-year history. These phenomena are not “getting better” in any sense, and indeed their catastrophic effects, some of which are already irreversible, will almost certainly linger for millennia or longer. (Some biologists have even suggested that the Anthropocene extinction will be our greatest legacy on Earth.) There are also ominous trends with respect to the growing power and accessibility of dual-use emerging technologies, as discussed more below. Suffice it to say that Easterbrook appears to suffer from the same scotoma that, I claimed above, has led Pinker to embrace an overly roseate picture of where we are and, more importantly, where we’re going.

Of course, people’s emotions are irrelevant if the risks are real.

Wait! A moment ago Pinker was arguing that alerting people to certain risks was itself bad because it can lead to nihilism (“The world’s future looks grim so we have to focus on looking after ourselves and those we love”) and “collapse anxiety,” which distorts one’s perception of just how good contemporary life is. Of course, “people’s emotions” are irrelevant to whether some proposition about existential risk is true or not, since truth is a mind-independent property. But, as we discussed above, people’s emotions are extremely relevant to the paramount task of motivating people to care about these issues, vote for the right political candidates, and so on. Thus, it’s precisely when the risks are real that understanding the emotional responses of humans to global-scale danger is most relevant. Again, as Bostrom writes, the point isn’t to wallow in gloom and doom, it’s to do something—yet doing something requires good psycho-emotional management.

But risk assessments fall apart when they deal with highly improbable events in complex systems. Since we cannot replay history thousands of times and count the outcomes, a statement that some event will occur with a probability of .01 or .001 or .0001 or .00001 is essentially a readout of the assessor’s subjective confidence. This includes mathematical analyses in which scientists plot the distribution of events in the past (like wars or cyberattacks) and show they fall into a power-law distribution, one with “fat” or “thick” tails, in which extreme events are highly improbable but not astronomically improbable.7 The math is of little help in calibrating the risk, because the scattershot data along the tail of the distribution generally misbehave, deviating from a smooth curve and making estimation impossible. All we know is that very bad things can happen.

Which is, I would urge, sufficient for allocating modest resources to investigate “bad things,” including speculative “bad things,” to ensure that they don’t occur. (Recall that Pinker acknowledges above that “the stakes, quite literally, could not be higher.”) It’s also worth noting that many scholars who study existential risks don’t believe that the probability of a global-scale disaster is 0.01 or lower. For example, an “informal” survey of experts conducted by the Future of Humanity Institute at Oxford University yielded a median estimate for human extinction before 2100 of 19 percent. This is pretty typical of estimates by scholars with genuine expertise on the topic, as I discuss in section 1 of my paper “Facing Disaster.” Indeed, just as one is more likely to die from a meteor than a lightning strike, such estimates suggest that people are far more likely to die from an existential catastrophe than either. Insofar as genuine expertise should be heeded—and I believe that it would be an act of anti-intellectualism to ignore such experts—Pinker is wrong that we’re dealing with “highly improbable events.”
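To see why such survey figures license this comparison, consider a rough back-of-the-envelope calculation; the lightning figure below is an order-of-magnitude assumption on my part, not a number from the survey:

```python
# Rough per-person risk comparison. The lightning figure is my own
# order-of-magnitude assumption, not a value from the FHI survey.

p_extinction_by_2100 = 0.19    # FHI informal survey median, cited above
p_die_given_extinction = 1.0   # by definition, extinction kills everyone

personal_extinction_risk = p_extinction_by_2100 * p_die_given_extinction
lifetime_lightning_risk = 1e-5  # assumed ~1-in-100,000 lifetime odds

print(personal_extinction_risk)                             # 0.19
print(personal_extinction_risk / lifetime_lightning_risk)   # ~19,000x larger
```

If the survey median is even roughly right, the implied per-person risk dwarfs the familiar hazards people actually worry about.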

That takes us back to subjective readouts, which tend to be inflated by the Availability and Negativity biases and by the gravitas market (chapter 4).8

First, it’s worth reemphasizing that existential risk scholars are, on the whole, acutely aware of the confounding effects of cognitive biases—including the availability and negativity biases. As Pinker himself has noted in the past, one of the benefits of knowing about bad modes of intellection is that this knowledge can itself serve as a bulwark against problematic thinking. Second, does Pinker provide any evidence to support the claim that “subjective readouts” by scientists—in our case, existential risk scholars—“tend to be inflated by the Availability and Negativity biases”? The citation provided in footnote 8 is this: “Overestimating the probability of extreme risks: Pinker 2011, pp. 368-73,” where “Pinker 2011” refers to Better Angels. I encourage readers to investigate these pages for themselves to try and identify what’s relevant to the sentence above, which starts a new paragraph in Enlightenment Now. Indeed, there is not a single mention of the availability or negativity biases on these pages. Pinker does mention “power law” phenomena, but only says the following:

(a) “Terrorist attacks obey a power-law distribution, which means they are generated by mechanisms that make extreme events unlikely, but not astronomically unlikely.”

(b) “Combine exponentially growing damage with an exponentially shrinking chance of success, and you get a power law, with its disconcertingly thick tail. Given the presence of weapons of mass destruction in the real world, and religious fanatics willing to wreak untold damage for a higher cause, a lengthy conspiracy producing a horrendous death toll is within the realm of thinkable probabilities.” And…

(c) “In practice, as you get to the tail of a power-law distribution, the data points start to misbehave, scattering around the line or warping it downward to very low probabilities. The statistical spectrum of terrorist damage reminds us not to dismiss the worst-case scenarios, but it doesn’t tell us how likely they are.”

So, citation 8 appears to be misplaced at the end of Pinker’s sentence—and again, I would argue that “unlikely” extreme events and events with “horrendous death toll[s]” that are “within the realm of thinkable probabilities” should be enough to fund precisely those organizations, focused on existential risks, that Pinker seems to denigrate. (After all, Pinker tells us that existential risk is a “useless category.”)
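Since the “thick tail” idea is doing real work in Pinker’s argument, it’s worth making it concrete. The following sketch is my own illustration, with an arbitrarily chosen exponent, comparing a power-law tail against an exponential (“thin”) tail:

```python
import math

# P(X > x) falls polynomially for a power law but exponentially for an
# exponential distribution. alpha = 2 is an arbitrary illustrative choice;
# empirical estimates of the exponent vary by domain.
alpha = 2.0

for x in (10, 100, 1000):
    power_tail = x ** -alpha   # power-law survival function
    thin_tail = math.exp(-x)   # exponential survival function
    print(f"x={x:5d}   power law: {power_tail:.1e}   exponential: {thin_tail:.1e}")

# x=   10   power law: 1.0e-02   exponential: 4.5e-05
# x=  100   power law: 1.0e-04   exponential: 3.7e-44
# x= 1000   power law: 1.0e-06   exponential: 0.0e+00  (underflows)
```

This is exactly what “unlikely, but not astronomically unlikely” means: under a power law, a catastrophe a hundred times worse than anything yet observed is rare, but not dismissible.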

Third, and just as importantly, there are numerous cognitive biases that Pinker conspicuously ignores to make his case—biases that can lead one to underestimate the probability of a global catastrophe or human extinction. For example, the “observation selection effect” occurs when one’s data is skewed by the fact that gathering such data is dependent upon the existence of observers like us. In other words, there are some types of catastrophes that are incompatible with the existence of certain observers, meaning that observers will always find themselves in worlds in which those types of catastrophes have not previously occurred—a fact that could lead such observers to underestimate the probability of those catastrophes (the toy simulation after this passage makes the bias vivid). As Milan Ćirković puts the point, “people often erroneously claim that we should not worry too much about existential disasters, since none has happened in the last thousand or even million years. This fallacy needs to be dispelled.” Other biases especially relevant in this context include the disjunction fallacy, overconfidence, progress trap delusions, brain lag, and what Günther Anders calls “apocalyptic blindness,” which

determines a notion of time and future that renders human beings incapable of facing the possibility of a bad end to their history. The belief in progress, persistently ingrained since the Industrial Revolution [contra Pinker], causes the incapability of humans to understand that their existence is threatened, and that this could lead to the end of their history.

In my view, a fair (rather than tendentious) presentation of these issues would note—as I do in Morality—the various biases that can push one in either direction of over- or under-estimating the likelihood of doom.
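Here is the promised toy simulation of the observation selection effect—a sketch of my own, with made-up parameters, not anything drawn from Ćirković’s work:

```python
import random

# Observers exist only in worlds whose history contains no extinction-level
# catastrophe, so the historical record available to any observer
# systematically understates the true risk. All parameters are illustrative.

TRUE_RISK = 0.05   # assumed per-century probability of an extinction event
CENTURIES = 20
WORLDS = 100_000

random.seed(0)
surviving = 0
for _ in range(WORLDS):
    extinct = any(random.random() < TRUE_RISK for _ in range(CENTURIES))
    if not extinct:
        surviving += 1  # a survivor's past contains zero such catastrophes

print(f"True per-century risk:                    {TRUE_RISK}")
print("Risk inferred from any survivor's record:  0.0")
print(f"Fraction of worlds with observers left:   {surviving / WORLDS:.2f}")
```

Every surviving observer, inspecting their own past, would estimate the catastrophe rate at zero no matter how high the true risk is; the “last thousand or even million years” tell us far less than they seem to.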

Those who sow fear about a dreadful prophecy may be seen as serious and responsible, while those who are measured are seen as complacent and naive. Despair springs eternal. At least since the Hebrew prophets and the Book of Revelation, seers have warned their contemporaries about an imminent doomsday.

Sure, but the epistemological foundation of religious prophecies could not be more different from the epistemological foundation of scientific warnings about climate change, the Anthropocene extinction, nuclear conflict, and even engineered pandemics and misaligned superintelligence. I stress this point repeatedly in my books Morality and, especially, The End. It’s important because some non-experts might inadvertently conflate these two categories simply because the message presented—“Be wary!”—sounds vaguely similar. Unfortunately, by mentioning “prophets,” the “Book of Revelation,” and “seers,” Pinker contributes to this problem. Indeed, Pinker frequently vacillates—both above and below in his chapter—between talking about the existential warnings of scientists and the apocalyptic logorrhea of religionists. This further muddles the discussion.

Forecasts of End Times are a staple of psychics, mystics, televangelists, nut cults, founders of religions, and men pacing the sidewalk with sandwich boards saying “Repent!”9 The storyline that climaxes in harsh payback for technological hubris is an archetype of Western fiction, including Promethean fire, Pandora’s box, Icarus’s flight, Faust’s bargain, the Sorcerer’s Apprentice, Frankenstein’s monster, and, from Hollywood, more than 250 end-of-the-world flicks.10 As the engineer Eric Zencey has observed, “There is seduction in apocalyptic thinking. If one lives in the Last Days, one’s actions, one’s very life, take on historical meaning and no small measure of poignance.”11

Here footnote 11 provides the following citation: “Quoted in Ronald Bailey, ‘Everybody Loves a Good Apocalypse,’ Reason, Nov. 2015.” First, Bailey is a former climate-denying libertarian who edited the 2002 book Global Warming and Other Eco Myths: How the Environmental Movement Uses False Science to Scare Us to Death, although he has more recently acknowledged that climate change is real but “will be solved through economic growth.” Second, Bailey doesn’t provide a citation to Zencey’s quote, meaning that Pinker references a secondary source that quotes a scholar without providing a primary source citation to verify the accuracy of the quote. As it happens, I reached out to Zencey—not an “engineer,” but a political economist—and asked him about the quote. His response was illuminating:

I appreciate your effort to nail down the source, and I especially appreciate the opportunity to set the record a great deal straighter than it has been. That quotation has bedeviled me. It is accurate but taken completely out of context. … You’d be doing me a service if you set the record straight.

The original source of the quote is a highly contemplative 1988 article in The North American Review titled “Apocalypse and Ecology.” In it, Zencey claims that, in response to catastrophic environmental degradation, he once anticipated a “coming transcendence of industrial society,” a kind of “apocalyptic redemption” that would usher in an epoch marked by “the freedoms we would enjoy if only political power were decentralized and our economy given over to sustainable enterprises using renewable fuels and minimizing resources.” This was ultimately an optimistic form of apocalypticism; as Zencey puts it, “we were optimists, filled with confidence in the power of education.” Yet in our exchange, Zencey is quite explicit that

too many people use that quotation [about “apocalyptic thinking”] to make it seem that I line up against the idea that we face an ecological apocalypse. If on reading the essay [“Apocalypse and Ecology”] you think I wasn’t sufficiently apocalyptic about the damage humans are doing to the ecosystems that are their life-support system, I can only plead that in 1988 we knew far less than we know now about how rapidly our ecological problems would foreclose upon us, and I wanted the ecology movement to reach an audience, not leave itself vulnerable to being apparently disproven in the short run.

All of this being said, it’s worth noting once more—at the risk of belaboring the point—that the sort of “apocalyptic thinking” referenced by Bailey and Pinker does not characterize the kind of concern ubiquitous among existential risk scholars.

Scientists and technologists are by no means immune. Remember the Y2K bug?12

This is yet another suspicious citation, provided in footnote 12. It points to a 407-word-long New York Times article titled “Revisiting Y2K: Much Ado About Nothing?,” which also includes a video by the Retro Report. Much of what Pinker says below draws directly from, and parallels, this short article and video, including the quotes of Bill Clinton and Jerry Falwell, and the reference to bolts in a bridge—almost as if Pinker is simply copying (in his own words) this information. He continues:

In the 1990s, as the turn of the millennium drew near, computer scientists began to warn the world of an impending catastrophe. In the early decades of computing, when information was expensive, programmers often saved a couple of bytes by representing a year by its last two digits. They figured that by the time the year 2000 came around and the implicit “19” was no longer valid, the programs would be long obsolete. But complicated software is replaced slowly, and many old programs were still running on institutional mainframes and embedded in chips. When 12:00 A.M. on January 1, 2000, arrived and the digits rolled over, a program would think it was 1900 and would crash or go haywire (presumably because it would divide some number by the difference between what it thought was the current year and the year 1900, namely zero, though why a program would do this was never made clear). At that moment, bank balances would be wiped out, elevators would stop between floors, incubators in maternity wards would shut off, water pumps would freeze, planes would fall from the sky, nuclear power plants would melt down, and ICBMs would be launched from their silos.

And these were the hardheaded predictions from tech-savvy authorities (such as President Bill Clinton, who warned the nation, “I want to stress the urgency of the challenge. This is not one of the summer movies where you can close your eyes during the scary part”).

To be clear, Clinton may have been an authority who was tech-savvy, but he was not a savvy authority on tech, which makes it odd, in my view, to cite him in the context of evaluating the Y2K warnings.

Cultural pessimists saw the Y2K bug as comeuppance for enthralling our civilization to technology. Among religious thinkers, the numerological link to Christian millennialism was irresistible. The Reverend Jerry Falwell declared, “I believe that Y2K may be God’s instrument to shake this nation, humble this nation, awaken this nation and from this nation start revival that spreads the face of the earth before the Rapture of the Church.” A hundred billion dollars was spent worldwide on reprogramming software for Y2K Readiness, a challenge that was likened to replacing every bolt in every bridge in the world.

As a former assembly language programmer I was skeptical of the doomsday scenarios, and fortuitously I was in New Zealand, the first country to welcome the new millennium, at the fateful moment. Sure enough, at 12:00 A.M. on January 1, nothing happened (as I quickly reassured family members back home on a fully functioning telephone). The Y2K reprogrammers, like the elephant-repellent salesman, took credit for averting disaster, but many countries and small businesses had taken their chances without any Y2K preparation, and they had no problems either. Though some software needed updating (one program on my laptop displayed “January 1, 19100”), it turned out that very few programs, particularly those embedded in machines, had both contained the bug and performed furious arithmetic on the current year.

This is half true, according to the Retro Report video. Therein, the narrator states that “not everything needed to be fixed, including most embedded chips.” Of course, “most” chips not needing to be fixed does not entail that “very few” needed fixing: the latter phrase suggests a tiny minority, whereas the former merely tells us that the problematic chips were not a majority.
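As an aside, for readers unfamiliar with the mechanics at issue, here is a minimal sketch—my own construction, not code from any cited source—of the two failure modes Pinker describes above: the two-digit rollover and the “January 1, 19100” display.

```python
# Failure mode 1: arithmetic on two-digit years. Many legacy systems
# stored only the last two digits, so the year 2000 rolled over to 00.
def years_elapsed(start_yy: int, current_yy: int) -> int:
    return current_yy - start_yy

print(years_elapsed(85, 99))  # 14  (correct in 1999)
print(years_elapsed(85, 0))   # -85 (nonsense on 2000-01-01)

# Failure mode 2: the "19100" display. C's struct tm stores the year as
# an offset from 1900; naive code prepended a literal "19" to that offset.
tm_year = 2000 - 1900                  # the offset held in the year 2000
print("January 1, 19" + str(tm_year))  # "January 1, 19100"
```

Bugs of the first kind silently corrupt computations; bugs of the second are largely cosmetic. The open question in 1999 was how much deployed code contained bugs of the first, consequential kind.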

The threat turned out to be barely more serious than the lettering on the sidewalk prophet’s sandwich board.

It is perplexing how Pinker arrives at this conclusion—especially given the citation of footnote 12. Indeed, this may be a particularly striking example of Pinker’s preferential elevation of data that support his narrative while quietly ignoring data that don’t. For example, the New York Times article asks: “Was it all just a huge goof that faked out even the president?,” to which it responds, “no, according to this week’s Retro Report video. While a lot was overblown—we spent an estimated $100 billion to combat Y2K—there were also legitimate concerns.” The article adds that “among other things, Retro Report concludes that the financial markets were able to reopen quickly after 9/11 thanks to lessons learned from the work on Y2K.” As one might expect, Pinker doesn’t mention either (a) the legitimacy issue, or (b) the longer-term benefits of having taken the Y2K threat seriously.

The video also contains a number of statements that Pinker chooses to ignore in this discussion. For example, with respect to dodging a global disaster, Paul Saffo laments that “you never get credit for the disasters you avert, especially if you’re a programmer and nobody understands what you’re doing to begin with.” Similarly, John Koskinen, who led Clinton’s “Council on Year 2000 Conversion,” asserts that “we have sort of a lack of confidence that things can get done [in America]. People did not grasp the magnitude of the effort. The easier thing to keep in your mind was, ‘All that noise about it and nothing happen, it must have just been a hoax.’” The narrator of the video then notes that “the Senate’s final report on Y2K found that government and industry did successfully avert a crisis at an estimated cost of 100 billion dollars,” although it adds that such efforts may have overspent by 30 percent. The point is this: The article and video that Pinker cites are quite balanced, whereas Pinker’s presentation of the topic is not, which indicates, to me, that Pinker has an agenda.

The Great Y2K Panic does not mean that all warnings of potential catastrophes are false alarms, but it reminds us that we are vulnerable to techno-apocalyptic delusions.

Again, the quotes above suggest that, at least from one legitimate perspective, this wasn’t a “techno-apocalyptic delusion,” although (i) there is ongoing debate about how necessary certain measures were (i.e., there isn’t a settled view about whether Y2K slipped from alarm to alarmism), and (ii) there were many conspiracy theorists, religious fanatics, gun-loving survivalists, and so on, who exploited the “dread factor” of Y2K for their own purposes.

How should we think about catastrophic threats? Let’s begin with the greatest existential question of all, the fate of our species. As with the more parochial question of our fate as individuals, we assuredly have to come to terms with our mortality. Biologists joke that to a first approximation all species are extinct, since that was the fate of at least 99 percent of the species that ever lived. A typical mammalian species lasts around a million years, and it’s hard to insist that Homo sapiens will be an exception. Even if we had remained technologically humble hunter-gatherers, we would still be living in a geological shooting gallery.13 A burst of gamma rays from a supernova or collapsed star could irradiate half the planet, brown the atmosphere, and destroy the ozone layer, allowing ultraviolet light to irradiate the other half.14 Or the Earth’s magnetic field could flip, exposing the planet to an interlude of lethal solar and cosmic radiation.

I must say, the last sentence is a bit odd given that Pinker consistently selects the most optimistic set of data or data interpretations to support his case (at least in this chapter), yet this is a more pessimistic account of what might happen. For example, some scientists contend that a magnetic field flip would not do much more than cause power outages, interfere with radio communications, and possibly damage satellites. Even the worst-case scenario isn’t that bad: a flip could render the ozone layer vulnerable to coronal mass ejections (CMEs), resulting in ozone holes that increase the rate of skin cancer. The other risk that Pinker identifies—i.e., gamma-ray bursts—is so improbable that it’s hardly worth mentioning, especially in a chapter that references far more likely risk scenarios, from climate change to global pandemics.

An asteroid could slam into the Earth, flattening thousands of square miles and kicking up debris that would black out the sun and drench us with corrosive rain. Supervolcanoes or massive lava flows could choke us with ash, CO2, and sulfuric acid. A black hole could wander into the solar system and pull the Earth out of its orbit or suck it into oblivion. Even if the species manages to survive for a billion more years, the Earth and solar system will not: the sun will start to use up its hydrogen, become denser and hotter, and boil away our oceans on its way to becoming a red giant.

Technology, then, is not the reason that our species must someday face the Grim Reaper. Indeed, technology is our best hope for cheating death, at least for a while.

This is utterly perplexing. First, the propositions “technology could save us” and “technology could destroy us” are not mutually exclusive. Which is to say, both could be true at the same time, just as the propositions “this AK-47 could save me” and “this AK-47 could kill me” could simultaneously be true. As mentioned above, one of the primary causes for existential concern about emerging technologies is that all, or nearly all, are dually usable for both morally good and bad ends, and this “dual-use” property appears to be an intrinsic feature of such artifacts (i.e., the “promise and peril” of advanced technology is a package deal; to neutralize either is to eliminate both).

Second, Pinker is correct that “technology is our best hope for cheating death,” but only if we’re talking about natural “kill mechanisms” like asteroid/comet impacts and—perhaps—supervolcanic eruptions (although see below). We don’t know how to guard against gamma-ray bursts, supernovae, or black holes, and while venturing into space could enable us to survive the death of our solar system, there are strong reasons for believing that space expansionism is a recipe for immense suffering, if not total annihilation. This being said, no one questions that using technology to eliminate risks from nature is a good thing—it is, in fact, a common refrain from existential risk scholars—but nor does anyone who seriously studies the future of humanity believe that the greatest dangers to our collective survival are natural phenomena. Rather, it’s large-scale human activity, dual-use emerging technologies, and value-misaligned superintelligence that, without question, constitute the most urgent global-scale hazards.

As long as we are entertaining hypothetical disasters far in the future, we must also ponder hypothetical advances that would allow us to survive them, such as growing food under lights powered with nuclear fusion, or synthesizing it in industrial plants like biofuel.15

For the sake of clarity, the citation provided in footnote 15 is to Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, by David Denkenberger and Joshua Pearce. Although the structure of Pinker’s sentence might imply that Denkenberger and Pearce endorse growing food under lights that are powered by nuclear fusion, this is not the case. Rather, Denkenberger and Pearce claim that growing food under such lights would be too inefficient and pricey; a configuration of this sort would only be practicable for “high value” commodities like drugs. This citation is also rather odd (and misleading) because Denkenberger takes the topic of existential risks very seriously—indeed, his organization ALLFED notes that “there is an estimated 10 percent chance of a complete loss of food production capability this century.” Furthermore, in response to my criticisms of Pinker’s understanding of technological risk, Denkenberger tells me, “I agree that [Pinker, in this chapter] is being too dismissive of the risks from technology.” Note once more the causal link between worrying about existential risks and working to mitigate them. In a sense, “collapse anxiety” is precisely what motivates Denkenberger’s important research.

Even technologies of the not-so-distant future could save our skin. It’s technically feasible to track the trajectories of asteroids and other “extinction-class near-Earth objects,” spot the ones that are on a collision course with the Earth, and nudge them off course before they send us the way of the dinosaurs.16 NASA has also figured out a way to pump water at high pressure into a supervolcano and extract the heat for geothermal energy, cooling the magma enough that it would never blow its top.17

The term “figured out” is rather misleading. What NASA did was propose a “thought experiment” for how a supervolcano might be defused. But, importantly, this is not practically feasible right now and won’t be for the foreseeable future (e.g., we are barely able to dig sufficiently deep into the ground); doing this would require truly huge amounts of water; and there remains a chance that pumping water into a supervolcano could actually trigger a supereruption, which could bring about a catastrophic volcanic winter.

Our ancestors were powerless to stop these lethal menaces, so in that sense technology has not made this a uniquely dangerous era in the history of our species but a uniquely safe one.

Once again, this is true if we’re talking about natural risks. With respect to the growing swarm of anthropogenic risks discussed above, just the opposite is the case.

For this reason, the techno-apocalyptic claim that ours is the first civilization that can destroy itself is misconceived.

Which “techno-apocalypticists” make this claim? Pinker doesn’t provide a citation, which is unsurprising, since no reasonable person would claim that “ours” (whatever that means exactly) is the very first civilization in human history that’s capable of destroying itself. After all, asserting this would entail denying that the Mayan, Roman, and Easter Island civilizations ever collapsed, which is patently absurd. If by “[our] civilization” one means the McLuhanian “global village” in which all 7.6 billion contemporary humans live, then yes, since 1945—but not before—we have indeed possessed the historically unique ability to wreak planetary-scale harm that could bring about the implosion of all major human societies around the world.

1 comment

The space warre scenario in the linked paper is a logical consequence of extant technologies, with increases in scale rather than major technical leaps being the primary hurdle.

We can either get good at solving collective action problems like the space-warre scenario, or we can fail to make it to space.