Give me feedback! :)
Why does the AI safety community need help founding projects?
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
In contrast to the apparently exponential growth in AI conference attendees, the cumulative number of AI publications since 2013 has grown quadratically (data from the Stanford HAI 2025 AI Index Report). Quadratic growth in cumulative publications suggests that a linearly growing number of researchers are each producing publications at a constant rate. Extrapolating a little, this growth rate implies ~3.7M cumulative publications (counting from 2013) by 2030.
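For concreteness, here is a minimal sketch of that extrapolation in Python; the publication counts below are illustrative placeholders, not the actual AI Index series:

```python
import numpy as np

# Placeholder cumulative publication counts (millions); the real series is
# in the Stanford HAI 2025 AI Index Report.
years = np.array([2013, 2015, 2017, 2019, 2021, 2023])
cumulative = np.array([0.2, 0.5, 0.9, 1.4, 2.0, 2.7])  # illustrative only

# Fit cumulative(t) = a*t^2 + b*t + c. A quadratic cumulative curve implies
# a linearly growing annual publication rate, consistent with a linearly
# growing researcher population publishing at a constant per-capita rate.
t = years - 2013
a, b, c = np.polyfit(t, cumulative, deg=2)

# Extrapolate the fitted quadratic to 2030.
t_2030 = 2030 - 2013
print(f"Projected cumulative publications by 2030: {a*t_2030**2 + b*t_2030 + c:.1f}M")
```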
If the AI researcher growth rate is linear, the exponential growth in AI conference attendees might be due to increased industry presence. It's also possible that the attendee growth rate is actually quadratic rather than exponential.
Open question: how fast did the field of cybersecurity grow since the launch of the internet?
Attendees at the top-four AI conferences have been growing at 1.26x/year on average (data from Our World in Data). I skipped 2020-2021 for all conferences and 2022 for ICLR, as these conferences were held virtually due to the COVID-19 pandemic and had anomalously high virtual attendance.
One could infer from these growth rates that the academic field of AI is growing at 1.26x/year. Interestingly, the AI safety field (including technical and governance) seems to be growing at a similar 1.25x/year.
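Here's a minimal sketch of how such an average growth factor can be computed, with placeholder attendance numbers standing in for the Our World in Data series:

```python
# Placeholder attendance for one conference; virtual COVID-era years are
# simply omitted from the series, mirroring the exclusion described above.
attendance = {2016: 4000, 2017: 5000, 2018: 6400,
              2019: 8000, 2022: 16000, 2023: 20200}

years = sorted(attendance)
first, last = years[0], years[-1]

# Average growth factor: the geometric mean of year-over-year ratios,
# i.e. (end / start) ** (1 / elapsed_years).
growth = (attendance[last] / attendance[first]) ** (1 / (last - first))
print(f"Average growth: {growth:.2f}x/year")  # ~1.26x with these inputs
```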
The top-10 most-cited papers that MATS contributed to (all with at least 290 citations) are:
Compare this to the top-10 highest-karma LessWrong posts that MATS contributed to (all with over 200 karma):
Here is a plot of the annual citations received by MATS, EleutherAI, and Apart Research, aligned so that they start in the same year. The three organizations are somewhat comparable, as they all leverage large networks of external collaborators: MATS mentors/fellows, the EleutherAI Discord, and Apart sprint participants.
The EleutherAI data fits a logistic curve perfectly, asymptoting to ~18.5k citations/year. I can't fit the other two, as at least 4 data points are needed to fit a logistic curve.
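For anyone who wants to replicate this, here's a minimal sketch of such a fit using SciPy's curve_fit, with a standard three-parameter logistic and placeholder citation counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Three-parameter logistic: annual citations saturating at L."""
    return L / (1 + np.exp(-k * (t - t0)))

# Placeholder annual citation counts; the real numbers come from the
# organizations' Google Scholar pages.
t = np.array([0, 1, 2, 3, 4])  # years since first data point
citations = np.array([800, 3000, 9000, 15000, 17500])

# p0 gives the optimizer a reasonable starting point for (L, k, t0).
(L, k, t0), _ = curve_fit(logistic, t, citations, p0=[18_000, 1.0, 2.0])
print(f"Fitted asymptote: ~{L:,.0f} citations/year")
```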
I made a Google Scholar page for MATS. This was inspired by @Esben Kran's Google Scholar for Apart Research. EleutherAI subsequently made one too. I think all AI safety organizations and research programs should consider making Google Scholar pages to better share research and track impact.
Great post! I will fund this project on Manifund.com.
80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all alumni with an available LinkedIn or personal website (242/292 ≈ 83% of alumni). 10% are working on AI capabilities, but only ~6 at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including non-AI-safety/capabilities software engineering, teaching, data science, consulting, and quantitative trading.
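As a sketch of the bookkeeping (assuming, though it isn't stated explicitly, that the "extrapolated" figures scale the surveyed counts up to the full cohort):

```python
total_alumni = 292
surveyed = 242  # alumni with an available LinkedIn or personal website
print(f"Survey coverage: {surveyed / total_alumni:.0%}")  # ~83%

# Scale a surveyed headcount up to the full cohort. With 193 surveyed alumni
# on AI safety this gives ~233, close to the post's extrapolated 234 (which
# presumably uses unrounded counts).
on_safety_surveyed = 193
extrapolated = on_safety_surveyed / surveyed * total_alumni
print(f"Extrapolated total on AI safety: ~{extrapolated:.0f}")
```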
Of the 193+ MATS alumni working on AI safety (extrapolated: 234):
10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, and ARENA.
Errata: I mistakenly included UK AISI in the "non-profit AI safety organization" category instead of "government agency". I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.