Why imperfect adversarial robustness doesn't doom AI control