Non-Adversarial Goodhart and AI Risks