While biotech risks are existential at the current time, they lessen as we develop more technology. If we can have hermetically sealable living quarters and bioscanners that sequence samples and look for novel viruses and bacteria, we should be able to detect and lock down infected areas without requiring brain scanners and red goo.

I think we can make similar interventions for most other classes of existential risk. The only one that really requires invasive surveillance is AI. How dangerous tool AI is depends on what intelligence actually is, which is an open question. So I don't think red goo and brain scanners will become a necessity, conditional on my view of intelligence being correct.

I think the risks will grow before they diminish, because technology is getting cheaper.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 1 min read · 25th Aug 2017 · 32 comments
Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligent singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, which would result in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)