How evals might (or might not) prevent catastrophic risks from AI — LessWrong