Steve M

All AGI Safety questions welcome (especially basic ones) [~monthly thread]
Steve M · 2y · 10

What is the thinking around equilibrium between multiple AGIs with competing goals?

For example, AGI one wants to maximize paperclips, AGI two wants to help humans create as many new episodes of the Simpsons as possible, AGI three wants to make sure humans aren't being coerced by machines using violence, and AGI four wants to maximize egg production.

In order to keep AGIs two through four from trying to destroy it, AGI one needs to either gain such a huge advantage over them that it can instantly destroy or neutralize them, or use strategies that don't prompt a 'war'. If a large number of these AGIs have goals that are at least partially aligned with human preferences, could this be a way to reach an 'equilibrium' that is at least not dystopian for humans? Why or why not?
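
To make the 'equilibrium' part concrete, here is a minimal game-theoretic sketch (my own toy model, not anything established): each AGI picks 'coexist' or 'attack' in a one-shot normal-form game, and we check which strategy profiles are Nash equilibria, i.e. profiles where no single AGI gains by unilaterally switching. The agent names just echo the example above, and the payoff numbers are arbitrary assumptions standing in for 'war among rough peers is costly'.

```python
from itertools import product

# Illustrative agent names from the example above; payoffs are made-up numbers.
AGENTS = ["paperclips", "simpsons", "anti-coercion", "eggs"]
ACTIONS = ["coexist", "attack"]

def payoff(agent_idx, profile):
    """Toy payoff: coexisting yields steady value, starting a war with peers
    is costly, and being targeted by any attacker is worse still."""
    my_action = profile[agent_idx]
    attacked_by_others = any(
        a == "attack" for i, a in enumerate(profile) if i != agent_idx
    )
    score = 1.0 if my_action == "coexist" else -0.5  # war among peers is costly
    if attacked_by_others:
        score -= 2.0                                 # being targeted is worse
    return score

def is_nash(profile):
    """A profile is a Nash equilibrium if no single agent gains by deviating."""
    for i in range(len(AGENTS)):
        current = payoff(i, profile)
        for alt in ACTIONS:
            if alt != profile[i]:
                deviation = profile[:i] + (alt,) + profile[i + 1:]
                if payoff(i, deviation) > current:
                    return False
    return True

equilibria = [p for p in product(ACTIONS, repeat=len(AGENTS)) if is_nash(p)]
for eq in equilibria:
    print(dict(zip(AGENTS, eq)))
# Prints only the all-"coexist" profile: with these payoffs, no AGI can profit
# by unilaterally starting a war, so mutual coexistence is the lone equilibrium.
```

In this toy model, attacking is simply costly because no AGI holds a decisive advantage, so all-coexist comes out as the unique equilibrium; give one agent a payoff bonus for attacking (a decisive advantage) and that equilibrium disappears, which is roughly the dichotomy in the question.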

7 · What exactly does 'Slow Down' look like? · 2y · 0