Worst-case thinking in AI alignment