The Sensible Way Forward for AI Alignment