The Alignment Paradox: How Solving AI Safety Might Guarantee Managed Abdication