Give me feedback! :)
Why does the AI safety community need help founding projects?
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
Thanks for sharing, Akshyae! Based on the DMs I received after posting this, I think your experience is unfortunately common. Great job sticking at it and launching Explainable!
Can you share which?
True, but we accepted 75% of all scholars into the 6-month extension in the last program, so the pressure might not be that large now.
Of note: when I first approached you about becoming a MATS mentor, I don't think you had significant field-building or mentorship experience and had relatively few papers. Since then, you have become one of the most impactful field-builders, mentors, and researchers in AI safety, by my estimation! This is a bet I would take again.
Edit: I mistakenly said "27% at frontier labs" when I should have said "27% at for-profit companies". Also, note that this is 27% of those working on AI safety (80% of alumni), so ~22% of all alumni.
Regarding adversarial selection, we can compare MATS to SPAR. SPAR accepted ~300 applicants in their latest batch, ~3x MATS (it's easier to scale if you're remote, don't offer stipends, and allow part-timers). I would bet that the average research impact of SPAR participants is significantly lower than that of MATS participants, though there might be plenty of confounders. It might be worth doing a longitudinal study comparing various training programs' outcomes over time, including PIBBSS, ERA, etc.
I think your read of the situation re. mentor ratings is basically correct: increasingly many MATS mentors primarily care about research execution ability (generally ML), not AI safety strategy knowledge. I see this as a feature, not a bug, but I understand why you disagree. I think you are prioritizing a different skillset than most mentors that our mentor selection committee rates highly. Interestingly, most of the technical mentors that you rate highly seem to primarily care about object-level research ability and think that strategy/research taste can be learned on the job!
Note that I think the pendulum might start to swing back towards mentors valuing high-level AI safety strategy knowledge as the Iterator archetype is increasingly replaced or supplemented by AI. The Amplifier archetype seems increasingly in demand as orgs scale, and we might see a surge in Connectors as AI agents improve to the point that Connectors' theoretical ideas become more testable. Also note that we might have different opinions on the optimal ratio of "visionaries" to "experimenters" in an emerging research field.
I like this comment. I think it's easy to overfit on the most salient research agendas, especially if there are echo chambers and tight coupling between highly paid frontier AI staff and nonprofit funders. The best ways I know to combat this at MATS are:
Note that I expect overfitting to decrease with further scale and diversity, provided the above practices are adhered to!
80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all alumni with findable LinkedIn profiles or personal websites (242 of 292 alumni, ~83% coverage). 10% are working on AI capabilities, but only ~6 at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including non-AI safety/capabilities software engineering, teaching, data science, consulting, and quantitative trading.
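To make the headcounts below explicit, here is a sketch of the arithmetic, assuming the 80% rate also holds for the ~50 alumni without findable profiles (that extrapolation is my reading of how the second figure was derived):

$$0.80 \times 242 \approx 193 \ \text{(observed)}, \qquad 0.80 \times 292 \approx 234 \ \text{(extrapolated)}$$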
Of the 193+ MATS alumni working on AI safety (extrapolated: 234):
10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, ARENA, etc.
Errata: I mistakenly included UK AISI in the "non-profit AI safety organization" category instead of "government agency". I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.