Give me feedback! :)
Why does the AI safety community need help founding projects?
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
Yes, I would generally support picking the latter as they have a "faster time to mentorship/research leadership/impact" and the field seems currently bottlenecked on mentorship and research leads, not marginal engineers (though individual research leads might feel bottlenecked on marginal engineers).
We should prioritize people who already have research or engineering experience, or a very high iteration speed, as we are operating under time constraints: AGI is coming soon. Additionally, I think "research taste" will be more important than engineering ability given AI automation, and research taste takes a long time to build; it is better to select people with existing research experience that they can adapt from another field (this also promotes interdisciplinary knowledge transfer).
I talk more about it here.
Yoshua Bengio, Paul Christiano, and Geoffrey Irving seem more like technical AI safety experts than AI policy experts, but they arguably have strong influence on governments.
What should be done? I think:
I suspect that some LWers would interpret this as a (bad) argument for countries to build datacenters so they can exercise political control over AGI. I don't think that argument works.
Exactly! Also:
AI safety field-building in Australia should accelerate. My rationale:
I think that the distribution of mentors we are drawing from is slowly growing to include a higher percentage of highly respected academics and industry professionals. I think this increases the average quality of our mentor applicant pool, but I understand that this might be controversial. Note that I still think our most impactful mentors are well-known within the AI safety field, and most of the top-50 most impactful researchers in AI safety apply to mentor at MATS.
80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all alumni with a findable LinkedIn or personal website (242/292 ~ 83% of alumni). 10% are working on AI capabilities, but only ~6 are at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including non-AI-safety/capabilities software engineering, teaching, data science, consulting, and quantitative trading.
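For transparency, here is a minimal sketch of the arithmetic behind the extrapolated figure below. It is a hypothetical reconstruction: it assumes the ~80% rate observed among surveyed alumni also holds for alumni without a findable profile, and the rate itself is rounded, so the outputs are approximate.

```python
# Minimal sketch of the extrapolation arithmetic (numbers taken from the stated
# figures above; the 80% rate is rounded, so the outputs are approximate).

surveyed = 242       # alumni with a findable LinkedIn or personal website
total_alumni = 292   # all alumni who completed the program before 2025

coverage = surveyed / total_alumni   # ~0.83, i.e. the "242/292 ~ 83%" coverage
safety_rate = 0.80                   # share of surveyed alumni still in AI safety

observed = round(safety_rate * surveyed)           # ~194 alumni observed in AI safety
extrapolated = round(safety_rate * total_alumni)   # ~234 if the same rate holds for
                                                   # alumni without a findable profile

print(f"survey coverage: {coverage:.0%}")          # 83%
print(f"observed in AI safety: ~{observed}")
print(f"extrapolated across all alumni: ~{extrapolated}")
```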
Of the 193+ MATS alumni working on AI safety (extrapolated to ~234 across all alumni):
10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, and ARENA.
Errata: I mistakenly included UK AISI in the "non-profit AI safety organization" category instead of "government agency". I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.