The Alignment Trap: AI Safety as Path to Power