On super-plagues, I've understood the consensus position to be that even though you could create one with a very high death toll, actual human extinction would be very unlikely. E.g.:

Asked by Representative Christopher Shays (R-Conn.) whether a pathogen could be engineered that would be virulent enough to “wipe out all of humanity,” Fauci and other top officials at the hearing said such an agent was technically feasible but in practice unlikely.

Centers for Disease Control and Prevention Director Julie Gerberding said a deadly agent could be engineered with relative ease that could spread throughout the world if left unchecked, but that the outbreak would be unlikely to defeat countries’ detection and response systems.

“The technical obstacles are really trivial,” Gerberding said. “What’s difficult is the distribution of agents in ways that would bypass our capacity to recognize and intervene effectively.”

Fauci said creating an agent whose transmissibility could be sustained on such a scale, even as authorities worked to counter it, would be a daunting task.

“Would you end up with a microbe that functionally will … essentially wipe out everyone from the face of the Earth? … It would be very, very difficult to do that,” he said.

Asteroid strikes do sound more plausible, though there too I would expect many people to be aware of the possibility, and thus to devote considerable effort to ensuring the safety of any space operations capable of actually diverting asteroids.

I'm not an expert on bioweapons, but I note that the paper you cite is dated 2005, before the advent of synthetic biology. The recent report from FHI seems to consider bioweapons to be a realistic existential risk.

turchin: The problem with this consensus position is that it fails to imagine that several deadly pandemics could run simultaneously, and that existential terrorists could deliberately organize this by manipulating several viruses. A rather simple AI could help engineer deadly plagues in droves; it need not be superintelligent to do so. Personally, I see it as a big failure of the whole x-risk community that such risks are ignored and not even discussed.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 25th Aug 2017 · 32 comments

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligent singleton is the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, which would result in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)