Third article TL;DR: The claim is that a superintelligence singleton is the most obvious solution to preventing all non-AI risks.

However, the main problem is that this introduces new risks: the risk of creating such a singleton (the risk of unfriendly AI), the risks of implementing it (the AI would probably have to fight a war for global domination against other AIs, nuclear nation-states, etc.), and the risk of singleton failure (if it halts, it halts forever).

As a result, we merely move risks from one side of the equation to the other, and even replace known risks with unknown ones.

I think other solutions are possible, in which many agents unite into some kind of police force to monitor each other, as David Brin suggested in The Transparent Society. Such a police force might consist not of citizens but of AIs.

Yes, good points. As for the point that we only move risks from one side of the equation to the other, and even replace known risks with unknown ones: another way to put the paper's thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it's an argument for why MIRI's work matters.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 1 min read · 25th Aug 2017 · 32 comments

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, which would result in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)