Entropic Alignment: What If AI Safety Is a Natural Law, Not a Rulebook? — LessWrong