AI Regulation May Be More Important Than AI Alignment For Existential Safety