Yes, good points. As for "As a result, we only move risks from one side of the equation to the other, and even replace known risks with unknown risks," another way to put the paper's thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it's an argument for why MIRI's work matters.

I think that unilateralist biological risks will soon be here. I modeled their development in my unpublished article about multipandemics, and compared their number with the historical number of computer viruses. There was about 1 new virus a year in the early 1980s, 1,000 a year in 1990, millions a year in the 2000s, and millions of malware samples a day in the 2010s, according to a report cited on CNN. But the peak of damage was in the 1990s, as viruses were more destructive at the time and aimed at data deletion, and few antivirus programs were available. Thus it takes around 10 years to move from the technical possibility of creating a virus at home to a global multipandemic.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 1 min read · 25th Aug 2017 · 32 comments



Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligent singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)