Is there anything we can realistically do about it without crippling the whole of biotech?

Perhaps have any bioprinter, or other such tool, be constantly connected to a narrow AI that verifies it doesn't accidentally or intentionally print ANY viruses, bacteria, or prions.

turchin: Jump ASAP to friendly AI or to another global control system, perhaps using many interconnected narrow AIs as an AI police. Basically, if we don't create a global control system, we are doomed. But it could be decentralised to avoid the worst aspects of totalitarianism. Regarding FAI research, it is a catch-22: if we slow down AI research effectively, biorisks will start to dominate; if we accelerate AI, we are more likely to create it before AI safety theory is ready for implementation. I could send anyone interested my article about biorisks and all this, which I don't want to publish openly on the internet, as I am hoping for a journal publication.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 1 min read · 25th Aug 2017 · 32 comments


Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who have expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligent singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, which would result in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)