1) The math may work out for this, but you're giving up a lot of potential-existence-time to do so (halfway or more to the heat death of the universe).

2) We haven't gotten off this planet, let alone to another star, so it seems a bit premature to plan to get out of many-eon light cones.

3) If there is an event that shows offense to be stronger than defense (and you're a defender), it's too late to get away.

4) Wherever you go, you're bringing the seeds of such an event with you - there's nothing that will make you or your colony immune from whatever went wrong for the rest of the known intelligent life in the universe.

(1) Agreed, although I would get vastly more resources to personally consume! Free energy is probably the binding constraint on computation time, which in turn is probably the post-singularity binding constraint on meaningful lifespan.

(2) An intelligence explosion might collapse to minutes the time between when humans could walk on Mars and when my idea becomes practical to implement.

(3) Today offense is stronger than defense, yet I put a high probability on personally being able to survive another year.

(4) Perhaps. But what might go wrong is a struggle for li...

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 1 min read · 25th Aug 2017 · 32 comments

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first seems especially relevant to the good denizens of this website. :-)