Give me feedback! :)
Why does the AI safety community need help founding projects?
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
The top-10 most-cited papers that MATS contributed to are (all with at least 290 citations):
Compare this to the top-10 highest-karma LessWrong posts that MATS contributed to (all with over 200 karma):
Here is a plot of the annual citations received by MATS, EleutherAI, and Apart Research, adjusted so they start in the same year. The three organizations are somewhat comparable, as each leverages a large network of external collaborators: MATS mentors/fellows, the EleutherAI Discord, and Apart sprint participants.
The EleutherAI data fits a logistic curve perfectly, asymptoting to ~18.5k citations/year. I can't fit the others, as fitting a logistic curve requires at least 4 data points.
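For anyone who wants to replicate the fit, here's a minimal sketch using scipy.optimize.curve_fit; the citation numbers below are illustrative placeholders, not the actual Scholar data.

```python
# A sketch of the fit described above, with placeholder data rather than
# the real EleutherAI citation counts.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Three-parameter logistic: L = asymptote (citations/year),
    k = growth rate, t0 = inflection year."""
    return L / (1 + np.exp(-k * (t - t0)))

years = np.array([0, 1, 2, 3, 4, 5])                       # years since first citation
cites = np.array([200, 1500, 6000, 13000, 17000, 18200])   # placeholder data

# Initial guesses help the optimizer converge: asymptote near the max,
# inflection near the middle of the observed range.
(L, k, t0), _ = curve_fit(logistic, years, cites,
                          p0=[cites.max(), 1.0, years.mean()])
print(f"asymptote ≈ {L:,.0f} citations/year, growth ≈ {k:.2f}/yr, inflection ≈ year {t0:.1f}")
```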
I made a Google Scholar page for MATS, inspired by @Esben Kran's Google Scholar page for Apart Research. EleutherAI subsequently made one too. I think all AI safety organizations and research programs should consider making Google Scholar pages to better share research and track impact.
Great post! I will fund this project on Manifund.com.
Gemini 3 estimates that there are 15-20k core ML academics and 100-150k supporting PhD students and postdocs worldwide. If the TMLR sample is representative, this indicates that there are:
I analyzed the research interests of the 454 Action Editors on the Transactions on Machine Learning Research (TMLR) Editorial Board to determine what proportion of ML academics are interested in AI safety (credit to @scasper for the idea).
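To make the extrapolation concrete, here's a sketch of the arithmetic; `safety_fraction` is a hypothetical placeholder, since the measured TMLR proportion isn't quoted here.

```python
# A sketch of the extrapolation, assuming the 454 TMLR Action Editors are a
# representative sample of ML academics. safety_fraction is a HYPOTHETICAL
# placeholder, not the measured proportion from the analysis above.
safety_fraction = 0.05  # hypothetical: e.g. 5% of editors list safety-related interests

for label, lo, hi in [("core ML academics", 15_000, 20_000),
                      ("supporting PhD students/postdocs", 100_000, 150_000)]:
    print(f"{label} interested in AI safety: "
          f"~{safety_fraction * lo:,.0f}-{safety_fraction * hi:,.0f}")
```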
> a MATS-like program that is more oriented around doing ambitious understanding of the nature of intelligence
Sounds like PIBBSS/PrincInt!
80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all alumni with findable LinkedIn profiles or personal websites (242/292 ≈ 83% of alumni; the extrapolation arithmetic is sketched below). 10% are working on AI capabilities, but only ~6 at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including software engineering outside AI safety/capabilities, teaching, data science, consulting, and quantitative trading.
Of the 193+ MATS alumni working on AI safety (extrapolated: 234):
10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, ARENA, etc.
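For reference, a sketch of the extrapolation arithmetic, assuming unsurveyed alumni match the surveyed distribution; per-category rounding can shift the total by ~1 versus the figure quoted above.

```python
# Counts are from the post; the extrapolation assumes the 50 alumni without
# findable profiles match the surveyed distribution.
surveyed, total = 242, 292
safety_observed = 193

coverage = surveyed / total                # 242/292 ≈ 83%
safety_share = safety_observed / surveyed  # 193/242 ≈ 80%
extrapolated = round(safety_share * total) # ≈ 233 (post says 234, likely per-category rounding)

print(f"survey coverage ≈ {coverage:.0%}")
print(f"alumni still in AI safety ≈ {safety_share:.0%}")
print(f"extrapolated safety alumni ≈ {extrapolated}")
```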
Errata: I mistakenly included UK AISI in the "non-profit AI safety organization" category instead of "government agency". I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.