Ok, reading your first essay, my first thought is this:

Let's say that you are correct, and the future will see a wide variety of human and post-human species, cultures, and civilizations, which look at the universe in very different ways, have very different mindsets, and use technology and science in entirely different ways that may totally baffle outside observers. To quote your essay:

The point is that different species may be in the same situation with respect to each others’ ability to manipulate the physical world. A species X could observe something happening in the universe but have no way, in principle, of understanding the causal mechanisms behind the explanandum-phenomenon. By the same token, species X could wiggle some feature of the universe in a way that species Y finds utterly perplexing. The result could be a very odd and potentially catastrophic sort of “mutually asymmetrical warfare,” where the asymmetry here refers to fundamental differences in how various species understand the universe and, therefore, are able to weaponize it. Unlike a technologically “advanced” civilization on Earth fighting a more technologically “primitive” society, such space conflicts would be more like Homo sapiens engaged in an all-out war with bonobos—except that this asymmetry would be differently mirrored back toward us.

If that is true, then it seems to me that the civilizations with the biggest advantages would be the very diverse ones. Suppose that at some point in the future a certain civilization (call it, for the sake of example, the "inner solar system civilization") contains a dozen totally different varieties of humans, transhumans, post-humans, perhaps uplifted animals, and perhaps some kinds of AIs, living in wildly different environments with very different goals, ideas, and ways of looking at the universe. Suppose further that these groups develop in contact with one another, remain generally on good terms, and share ideas and information (even when they don't fully understand each other). That diverse civilization would have a huge advantage in any conflict with a monopolar civilization in which everyone shares a single worldview: the diverse civilization would command a dozen different kinds of technology, worldviews, and ways of "wiggling some feature of the universe," while the monopolar civilization would have only one. The diverse civilization would probably also advance more quickly overall.

So, if that is true, then the civilizations with the greatest advantage in any possible conflict would be the very diverse ones, composed of many different sub-civilizations living in harmony with one another. Those civilizations, I suspect, would also tend to be the most peaceful and the least likely to start an interstellar war merely because another civilization seemed different or weird to them. A diverse civilization that is already sharing ideas among a dozen different species of posthumans would more likely be interested in exchanging ideas and knowledge with more distant civilizations than in waging war on them.

Maybe I'm just being overly optimistic, but this seems like it may be a way out of the problem you are describing.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

by philosophytorres · 25th Aug 2017 · 32 comments

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligent singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, which would result in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)