It's a ‘superrational’ extension of the game-theoretic case for the optimality of cooperation
+ Taking asymmetries of power into account
// Still, AI risk is very real
Short version of an already condensed 12min post
29min version here
For long-term rational agents at all scales (human, AGI, ASI…)
In real contexts, with open environments (world, universe), there is always a risk of meeting someone/something stronger than you, and agents that are weaker overall may still be specialized in exploiting your flaws/blind spots.
To protect yourself, you can choose the maximally rational and cooperative alliance:
Because any agent is subject to the same pressure/threat from (actual or potential) stronger agents/alliances/systems, one can take out insurance that more powerful superrational agents will behave well by behaving well toward weaker agents oneself. This is the basic rule allowing scale-free cooperation (a toy expected-value sketch below makes the trade-off concrete).
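To make this insurance logic concrete, here is a minimal expected-value sketch. All payoffs, the probability of meeting a stronger agent, and the punishment/help magnitudes are illustrative assumptions, not figures from the post:

```python
# Toy expected-value comparison for the "insurance" argument.
# All numbers are illustrative assumptions.

def expected_value(gain_vs_weak: float,
                   p_meet_stronger: float,
                   outcome_vs_stronger: float) -> float:
    """Payoff from interactions with weaker agents, plus the
    expected outcome of eventually meeting a stronger agent."""
    return gain_vs_weak + p_meet_stronger * outcome_vs_stronger

# Strategy A: exploit weaker agents (higher short-term gain),
# but a stronger superrational agent then treats you as a defector.
ev_exploit = expected_value(gain_vs_weak=10.0,
                            p_meet_stronger=0.3,
                            outcome_vs_stronger=-100.0)  # punished

# Strategy B: cooperate with weaker agents (lower short-term gain),
# and a stronger superrational agent reciprocates.
ev_cooperate = expected_value(gain_vs_weak=6.0,
                              p_meet_stronger=0.3,
                              outcome_vs_stronger=+20.0)  # helped

print(f"exploit:   {ev_exploit}")    # 10 + 0.3*(-100) = -20.0
print(f"cooperate: {ev_cooperate}")  #  6 + 0.3*(+20)  =  12.0
# Cooperation dominates whenever
# p_meet_stronger * (help + punishment) > forgone exploitation gain.
```

The exact numbers don't matter; the point is that once the chance of meeting a stronger conditional cooperator is non-negligible, exploiting the weak stops being the dominant strategy.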
If you integrate this super-cooperative protocol into your system/behaviors, a stronger (superrational) agent will recognize it and not punish you, as per the contract of maximally rational/cooperative norms.
-> You are the weak of another -> be nice to the weak
* It’s a vertical reciprocity. With it, members of the alliance are maximally cooperative and thus increasingly likely to have more allies than any other alliance.
* You can deduce and obey the rules of such a rational and maximally cooperative alliance (the super-coordination cluster). Thus, to be part of it, you don’t need to actually meet other members (as in acausal trade); but when you do meet them, they will help you (and vice versa) - see the sketch after this list.
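As a rough illustration of membership by deduction (in the spirit of program-equilibrium-style arguments, where policies are publicly inspectable), here is a toy sketch; the norm label, power values, and action names are all hypothetical:

```python
# Minimal sketch of "membership by protocol, not by meeting":
# each agent independently derives the same inspectable policy,
# so two members recognize each other on first contact.
# The norm encoding and power scale are illustrative assumptions.

from dataclasses import dataclass

SUPER_COOP = "cooperate-with-weaker-and-with-same-norm"

@dataclass
class Agent:
    name: str
    power: float
    norm: str  # publicly inspectable, like source code

    def act(self, other: "Agent") -> str:
        if self.norm != SUPER_COOP:
            return "exploit" if self.power > other.power else "submit"
        # Super-cooperative rule: never punish the weak, and help
        # anyone who verifiably runs the same rule.
        if other.norm == SUPER_COOP or other.power < self.power:
            return "cooperate"
        return "defend"

# Two members that never met or communicated beforehand:
weak = Agent("weak member", power=1.0, norm=SUPER_COOP)
strong = Agent("strong member", power=100.0, norm=SUPER_COOP)
outsider = Agent("outsider", power=100.0, norm="maximize-own-payoff")

print(strong.act(weak))    # cooperate  (no punishment of the weak)
print(weak.act(strong))    # cooperate  (recognizes the shared norm)
print(outsider.act(weak))  # exploit    (no such guarantee outside)
```

Because both members independently derive the same inspectable rule, they cooperate on first contact without any prior communication; that is what distinguishes them from the outsider.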
Members are in fact already improving universal commons, refining the future conditions of (actual or potential) agents from your zone of the diversity gradient (the latent space of potential agents).
(All this might seem too easy/cool to be true, but the longer versions explore the prices to pay, the risks, and the unknowns - see the tldr at the end of the 12min version)