When Alignment Succeeds by Compressing Humans: On Predictability, Reference Drift, and Epistemic Blindness in AI Governance