You could argue that mere destruction would be easier than converting everything to orgasmium, but both seem hard enough to basically require a superintelligence.

We could kill everyone today or in the near future by diverting a large asteroid to crash into Earth, or by engineering a super-plague. Doing either would take significant resources, but comes nowhere near requiring a superintelligence. By comparison, converting everything to orgasmium seems much harder and lies far beyond our current technological capabilities.

On super-plagues, I've understood the consensus position to be that even though you could engineer one with a very large death toll, actual human extinction would be very unlikely. E.g.:

Asked by Representative Christopher Shays (R-Conn.) whether a pathogen could be engineered that would be virulent enough to “wipe out all of humanity,” Fauci and other top officials at the hearing said such an agent was technically feasible but in practice unlikely.

Centers for Disease Control and Prevention Director Julie Gerberding said a deadly agent could be engineered

...

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres, 25th Aug 2017

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)