Talent Needs of Technical AI Safety Teams
Co-Authors: @yams, @Carson Jones, @McKennaFitzgerald, @Ryan Kidd

MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety teams. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2]

In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions. The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI safety field. All interviews were conducted on the condition of anonymity.

Needs by Organization Type

| Organization type | Talent needs |
| --- | --- |
| Scaling Lab (e.g., Anthropic, Google DeepMind, OpenAI) Safety Teams | Iterators > Amplifiers |
| Small Technical Safety Orgs (<10 FTE) | Iterators > Machine Learning (ML) Engineers |
| Growing Technical Safety Orgs (10-30 FTE) | Amplifiers > Iterators |
| Independent Research | Iterators > Connectors |

Here, ">" means "are prioritized over."

Archetypes

We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren't as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying mo