Power-Seeking AI and Existential Risk