Anthropic's Core Views on AI Safety — LessWrong