Give me feedback! :)
Why does the AI safety community need help founding projects?
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
I think the distribution of mentors we draw from is slowly growing to include a higher percentage of highly respected academics and industry professionals. I think this increases the average quality of our mentor applicant pool, but I understand that this might be controversial. Note that I still think our most impactful mentors are well-known within the AI safety field, and most of the top-50 most impactful AI safety researchers apply to mentor at MATS.
This is a hard question to answer precisely, as we have changed the metrics by which we evaluate potential mentors several times. By my lights, the average quality of mentors we accept has grown each program. I weakly think that the average quality of mentors applying has also grown, though much more slowly.
In contrast to the apparent exponential growth in AI conference attendees, the cumulative number of AI publications since 2013 has grown quadratically (data from the Stanford HAI 2025 AI Index Report). Quadratic growth in cumulative publications suggests that a linearly growing number of researchers is producing publications at a constant per-researcher rate (see the sketch below). Extrapolating a little, this growth rate suggests there will be 3.7M cumulative publications by 2030 (counting from 2013).
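To spell out that reasoning step, here is a minimal derivation, assuming a linear researcher head-count and a constant per-researcher publication rate (a simplification of mine, not a claim from the Index Report). If the number of active researchers is $R(t) = R_0 + kt$ and each publishes $p$ papers per year, then annual output is

$$A(t) = p\,R(t) = p(R_0 + kt),$$

which is linear in $t$, and cumulative output since 2013 is

$$C(t) = \int_0^t A(s)\,ds = p R_0\, t + \frac{pk}{2}\, t^2,$$

which is quadratic in $t$, matching the observed publication curve.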
If the AI researcher growth rate is linear, the exponential growth in AI conference attendees might instead be due to increased industry presence. Alternatively, the attendee growth rate may itself be quadratic rather than exponential.
Open question: how fast has the field of cybersecurity grown since the launch of the internet?
Attendance at the top-four AI conferences has been growing at 1.26x/year on average (data from Our World in Data; a sketch of the computation follows). I skipped 2020-2021 for all conferences and 2022 for ICLR, as these conferences were virtual due to the COVID-19 pandemic and had inflated virtual attendance.
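For concreteness, here is a minimal sketch of how such an average growth factor can be computed. The attendance figures below are illustrative placeholders, not the Our World in Data numbers; skipped virtual years are bridged by annualizing the growth across the gap.

```python
import math

# Illustrative attendance counts (year -> attendees) for one conference.
# These are placeholder numbers, NOT the Our World in Data figures.
attendance = {2016: 5000, 2017: 6300, 2018: 7900, 2019: 10000,
              2020: 20000, 2021: 19000,  # virtual-only COVID years
              2022: 20000, 2023: 25000, 2024: 31500}

skip = {2020, 2021}  # exclude virtual-only years from the trend

years = sorted(y for y in attendance if y not in skip)

# Year-over-year growth factors, annualized across gaps (e.g. 2019 -> 2022
# spans three years, so take the cube root of the ratio).
ratios = [(attendance[b] / attendance[a]) ** (1 / (b - a))
          for a, b in zip(years, years[1:])]

# Geometric mean of the annualized factors = average growth multiple per year
growth = math.exp(sum(map(math.log, ratios)) / len(ratios))
print(f"average growth: {growth:.2f}x/year")  # ~1.26x/year on these numbers
```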
One could infer from these growth rates that the academic field of AI is growing at 1.26x/year. Interestingly, the AI safety field (including technical and governance) seems to be growing at a similar 1.25x/year; either rate implies a doubling time of roughly three years (ln 2 / ln 1.26 ≈ 3).
The top-10 most-cited papers that MATS contributed to are (all with at least 290 citations):
Compare this to the top-10 highest-karma LessWrong posts that MATS contributed to (all with over 200 karma):
80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all alumni with a findable LinkedIn or personal website (242 of 292 alumni, ~83% coverage; the extrapolation to the full population is sketched below). 10% are working on AI capabilities, but only ~6 at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including non-AI-safety/capabilities software engineering, teaching, data science, consulting, and quantitative trading.
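Here is a minimal sketch of the coverage-based extrapolation, assuming the ~50 alumni without findable profiles resemble the surveyed ones; 193 is used as the observed count, since only the lower bound "193+" appears below.

```python
total_alumni = 292      # alumni who completed the program before 2025
surveyed = 242          # alumni with a findable LinkedIn or personal website
observed_safety = 193   # surveyed alumni still on AI safety (lower bound)

coverage = surveyed / total_alumni        # ~0.83: fraction with profiles
safety_rate = observed_safety / surveyed  # ~0.80: fraction on AI safety

# Scale the observed rate up to the full alumni population.
extrapolated = safety_rate * total_alumni  # ~233 (vs. the ~234 quoted below,
                                           # which follows from a slightly
                                           # higher observed count)
print(f"coverage={coverage:.0%}, rate={safety_rate:.0%}, "
      f"extrapolated~{extrapolated:.0f}")
```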
Of the 193+ MATS alumni working on AI safety (extrapolated: 234):
10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, and ARENA.
Errata: I mistakenly included UK AISI in the "non-profit AI safety organization" category instead of "government agency". I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.