Hi everyone,
I've recently discovered LessWrong and love it! So first, let me thank you all for fostering such a wonderful community.
I've been reading a lot of the AI material and I find myself asking a question you all have surely considered, so I wanted to pose it.
If I believe that human beings are evolutionarily descended from apes, and I ask myself whether apes -- had they been able to allow or prevent human evolution -- should have allowed it or stopped it, I'm honestly not sure what the answer should be.
It seems like apes would in all likelihood be better off without humans around, so from the perspective of apes, they should...
I really like this article, thanks for writing it! I think you correctly point out that AI successionism is psychologically convenient for many people, especially those working in AI or benefiting from it, and you do a great job illustrating how that happens.
That said, I’m not sure the memetic story settles the underlying philosophical question: are humans actually the optimal long-term stewards of value on Earth (or beyond), and if not, is it at least plausible that advanced AI could become a better steward?
Absent fairly definitive answers to those questions, it seems premature to dismiss AI successionism outright, even if we should be cautious about the motivations and social dynamics that make it appealing.