Interesting topics :) About your second paper:

You say you provide “a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges.” So it sounds like the list excludes those whose morality implies that it would be right to kill everyone, or who want to kill everyone, but who have simply kept quiet about it.

In footnote 2, you write, “Note that, taken to its extreme, classical utilitarianism could also, arguably, engender an existential risk,” and you refer to an argument by David Pearce. That’s an important note. It also goes beyond individuals who have themselves “expressed omnicidal urges,” since the argument comes from Pearce, not from a classical utilitarian reporting her own urges. By the way, I think it is fine to say that classical utilitarianism could, arguably, engender an existential risk. But note that killing everyone need not be an existential risk in the sense that Earth-originating intelligent life goes away or fails to realize its potential. If there is a risk here, the main one is presumably that classical utilitarianism implies it would be right to kill all of us in order to spread well-being beyond Earth, and that would not be an existential catastrophe in the sense just mentioned.

A fun and important exercise would be to start from your own morality, the author’s, and analyze whether it implies that it would be right to kill everyone. Without knowing much at all about your morality, I would guess that one could make a case that it does, and that it would take a complex investigation to determine whether the replies you could give are really successful.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 1 min read · 25th Aug 2017 · 32 comments

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)