This is a linkpost for https://arxiv.org/abs/2403.13793

To understand the risks posed by a new AI system, we must understand what it can and cannot do. Building on prior work, we introduce a programme of new “dangerous capability” evaluations and pilot them on Gemini 1.0 models. Our evaluations cover four areas: (1) persuasion and deception; (2) cyber-security; (3) self-proliferation; and (4) self-reasoning. [Evals for CBRN capabilities are under development.] We do not find evidence of strong dangerous capabilities in the models we evaluated, but we flag early warning signs. Our goal is to help advance a rigorous science of dangerous capability evaluation, in preparation for future models.

At last, DeepMind talks about its dangerous capability evals. With details! Yay!

(These evaluations were done in January 2024; the Gemini 1.0 models were released in December 2023. DeepMind hasn't yet made RSP-like commitments — that is, specific commitments about risk assessment (for extreme risks), safety and security practices as a function of risk assessment results, and training and deployment decisions as a function of risk assessment results. Demis recently suggested that DeepMind might make RSP-like commitments this year.)

Random interesting note: DeepMind hired 8 superforecasters to make relevant predictions, most notably about when certain eval thresholds will trigger.


See the summary on Twitter by last author Toby Shevlane.