Alignment Is Not One Problem: A 3D Map of AI Risk