It would seem that increasing certainty about "Does the rocket launch successfully?" would be more important than "How early does the rocket launch?". Most actions that shoot for an early launch would seem to increase the risk that something goes wrong in the launch, or that the launching colonization would be insufficient or suicidal. Otherwise it just seems like the logic of "better to die soon to get to heaven faster, so as to have 3 days + infinity instead of just infinity there". I think that I ought to turn down any offers of shady moral actions in exchange for however many virgins in heaven (and this should not be sensitive, at least not greatly, to the number of virgins). So if it is used to argue "let's get seriously rockety", I don't think the analysis adds anything beyond "rockets are cool".

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 1 min read · 25th Aug 2017 · 32 comments

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)