As is well known, longtermism rests on three core assumptions:
1. The moral equality of generations.
2. The vast potential of the future.
3. The ability to influence the future.
While the third assumption is commonly criticized, the first and second receive far less attention, especially in the context of the most likely ASI development scenario. Talk of myriads of meaningful lives makes little sense once we stop imagining a utopian, densely populated galaxy and instead consider the motivations of the agent that will actually be shaping that galaxy.
In most development models, the first agent to reach artificial superintelligence (ASI) becomes a singleton, and its behavior is, with high probability, determined by instrumental convergence.
Thus, the entire longtermist argument is predicated on the ASI deciding (not us!) to prioritize the creation of new agents, each potentially unaligned with it, over cost reduction and its own safety. And for the first assumption to hold for those agents, they would have to be conscious, which is not strictly necessary for the ASI's purposes and would therefore likely be absent.
I believe this must be weighed in any assessment of modern longtermism, which seems to implicitly assume an uncontrolled proliferation of agents that a singleton would have no need for. My estimate is that the future will most likely be morally 'empty'.
What work have I missed that contradicts this?