Give me feedback! :)
Why does the AI safety community need help founding projects?
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
SFF-2025 S-Process funding results have been announced! This year, the S-process gave away ~$34M to 88 organizations. The mean (median) grant was $390k ($241k).
Also of interest: MATS mentor applications have been growing linearly at a rate of +70/year. We are now publicly advertising for mentor applications for the first time (previously, we sent out mass emails and advertised on Slack workspaces), which might explain the sub-exponential growth to date. Given the exponential growth in applicants and the linear growth in mentors, the latter still seems to be the limiting constraint.
I've updated the OP to reflect my intention better. Also, thanks for reminding me re. advertising on LW; MATS actually hasn't made an ad post yet and this seems like a big oversight!
Any alternative framing/text you recommend? I think it's a pretty useful statistic for AI safety field-building.
Here are the AI capabilities organizations where MATS alumni are working (1 at each except for Anthropic and GDM, where there are 2 each):
Alumni also work at these organizations, which might be classified as capabilities or safety-adjacent:
80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all alumni with a findable LinkedIn profile or personal website (242/292 ≈ 83% coverage). 10% are working on AI capabilities, but only ~6 at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including non-AI-safety/capabilities software engineering, teaching, data science, consulting, and quantitative trading.
Of the 193+ MATS alumni working on AI safety (extrapolated: 234):
10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, ARENA, etc.
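For transparency, here is a sketch of the survey arithmetic behind the headline figures above. The counts (292 pre-2025 alumni, 242 surveyed, 193 observed working on AI safety) come from the post; simple proportional scaling is my assumption about how the extrapolated count was derived, and it lands one off the 234 I reported, so the exact method may differ slightly.

```python
# Survey arithmetic sketch (proportional extrapolation is an assumption).
total_alumni = 292       # alumni who completed the program before 2025
surveyed = 242           # alumni with a findable LinkedIn or personal website
safety_observed = 193    # surveyed alumni still working on AI safety

coverage = surveyed / total_alumni          # ≈ 0.83
safety_share = safety_observed / surveyed   # ≈ 0.80

# Scale the observed count up to the full alumni population.
# Gives 233 here; the post reports 234, so the rounding/method may differ.
safety_extrapolated = round(safety_share * total_alumni)

print(f"coverage: {coverage:.0%}")
print(f"safety share: {safety_share:.0%}")
print(f"extrapolated safety count: {safety_extrapolated}")
```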
Errata: I mistakenly included UK AISI in the "non-profit AI safety organization" category instead of "government agency". I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.