Worst-case thinking in AI alignment