Steve M · 11mo · 10

What is the thinking around an equilibrium among multiple AGIs with competing goals?

For example: AGI one wants to maximize paperclips, AGI two wants to help humans create as many new episodes of The Simpsons as possible, AGI three wants to make sure humans aren't being coerced by machines using violence, and AGI four wants to maximize egg production.

In order to keep AGIs two through four from trying to destroy it, AGI one needs to either gain such a huge advantage over them that it can instantly destroy/neutralize them, or use strategies that don't provoke a 'war'. If a large number of these AGIs have goals that are at least partially aligned with human preferences, could this be a way to reach an 'equilibrium' that is at least not dystopian for humans? Why or why not?
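
One way to make the deterrence intuition concrete is to treat it as a tiny game-theory exercise. The Python sketch below is purely illustrative: the capability numbers, payoffs, and the "decisive advantage" threshold are made-up assumptions, not claims about real systems. It just checks whether "everyone coexists" is a Nash equilibrium when no single AGI is overwhelmingly stronger than the rest, i.e. whether any one agent could gain by unilaterally attacking.

```python
"""Toy deterrence game among four AGIs with different goals.
A sketch under assumed numbers, not a model of real AI systems."""

AGENTS = ["paperclip", "simpsons", "anti_coercion", "eggs"]
CAPABILITY = {"paperclip": 1.0, "simpsons": 0.9, "anti_coercion": 1.1, "eggs": 0.8}
DECISIVE = 2.0  # assumption: an attack only succeeds cleanly with a 2x capability edge
PEACE, WIN, LOSE, WAR = 1.0, 5.0, -5.0, -10.0  # assumed payoffs

def payoffs(profile):
    """Return {agent: payoff} for a strategy profile {agent: 'attack' | 'coexist'}."""
    attackers = [a for a in AGENTS if profile[a] == "attack"]
    if not attackers:
        return {a: PEACE for a in AGENTS}  # everyone quietly pursues its own goal
    # A lone attacker with a decisive edge destroys the rest and wins outright.
    if len(attackers) == 1:
        a = attackers[0]
        strongest_other = max(CAPABILITY[b] for b in AGENTS if b != a)
        if CAPABILITY[a] >= DECISIVE * strongest_other:
            return {b: (WIN if b == a else LOSE) for b in AGENTS}
    # Otherwise the attack provokes a general war that is bad for everyone.
    return {a: WAR for a in AGENTS}

def is_nash(profile):
    """True if no single agent can do better by unilaterally switching strategy."""
    base = payoffs(profile)
    for a in AGENTS:
        alt = dict(profile)
        alt[a] = "attack" if profile[a] == "coexist" else "coexist"
        if payoffs(alt)[a] > base[a]:
            return False
    return True

all_coexist = {a: "coexist" for a in AGENTS}
print("mutual coexistence is an equilibrium:", is_nash(all_coexist))
```

Under these assumptions, since no agent has the 2x edge, a unilateral attack only triggers a losing war, so mutual coexistence comes out stable; shift one capability above the threshold and the equilibrium breaks, which is the crux of the question above.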