Exploring non-anthropocentric aspects of AI existential safety — LessWrong