As is well known, long-termism rests on three core assumptions:
1. The moral equality of generations.
2. The vast potential of the future.
3. The ability to influence the future.
While the third assumption is commonly criticized, the first two receive, for some reason, far less attention, especially in the context of the most likely scenario for the development of artificial superintelligence (ASI). Talk of myriads of meaningful lives makes little sense once we stop imagining a utopian, densely populated galaxy and instead consider the motivations of the agent that will actually be shaping that galaxy.
In most development models, the first agent to achieve superintelligence will become a singleton. Its behavior will, with high probability, be determined by instrumental convergence.
- An