This winter, MATS will run its seventh program. In early to mid-2024, 46% of alumni from our first four programs (Winter 2021-22 to Summer 2023) completed a survey about their career progress since participating in MATS. This report presents key findings from the responses of these 72 alumni.
Co-Authors: @Rocket, @LauraVaughan, @McKennaFitzgerald, @Christian Smith, @Juan Gil, @Henry Sleight, @Matthew Wearden, @Ryan Kidd
The ML Alignment & Theory Scholars program (MATS) is an education and research mentorship program for researchers entering the field of AI safety. This winter, we held the fifth iteration of the MATS program, in which 63 scholars received mentorship from 20 research mentors. In this post, we motivate and explain the elements of the program, evaluate our impact, and identify areas for improving future programs.
Key details about the Winter 2023-24 Program:
Co-Authors: @Rocket, @Juan Gil, @Christian Smith, @McKennaFitzgerald, @LauraVaughan, @Ryan Kidd
The ML Alignment & Theory Scholars program (MATS, formerly SERI MATS) is an education and research mentorship program for emerging AI safety researchers. This summer, we held the fourth iteration of the MATS program, in which 60 scholars received mentorship from 15 research mentors. In this post, we explain the elements of the program, lay out some of the thinking behind them, and evaluate our impact.
Key details about the Summer 2023 Program:
I also find it plausible that the top 1-5 scholars are responsible for most of the impact, and we want to investigate this further. Unfortunately, it's difficult to evaluate the impact of a scholar's research and career trajectory until 3-12 months or more after the program, so we decided to separate that analysis from the retrospective of the Summer 2023 program.
We've begun collecting this type of information for past cohorts via alumni surveys and other sources, and we hope to publish another report in the next few months that more closely tracks the impact we expect MATS to have.