I found that over 10% of fellows did another fellowship after their fellowship. This doesn’t feel enormously efficient.
This matches my finding that 10% of each MATS cohort, on average, did MATS before. I don't think this is particularly bad; it's somewhat analogous to undergrads doing several internships in different fields or with different companies before graduating.
Interesting insight, though with MATS claiming 446 alumni in this post vs the 218 you found, I suspect there'll be some bias (e.g. the other profiles are no longer working in AI/AI safety and so MATS is less recognisable, or they're senior enough in AI safety to have removed it from their profile, e.g. to reduce cold messaging). I'd expect the former to be more likely.
I do wonder what proportion of fellows who return for another fellowship elsewhere do so predominantly for the funding (as opposed to mentorship) and would benefit from a higher availability of grants.
We invest heavily in fellowships, but do we know exactly where people go afterwards and what impact the fellowships have? To begin answering this question, I manually analyzed over 600 alumni profiles from 9 major late-stage fellowships (fellowships that I believe could lead directly into a job afterwards). These profiles represent current participants and alumni from MATS, GovAI, ERA, Pivotal, Talos Network, Tarbell, Apart Labs, IAPS, and PIBBSS.
Executive Summary
Key Insights from the mini-project
Of the fellows in the target fellowships I looked at, 21.5% (139) did at least one other fellowship in addition to their target fellowship: 12.4% (80) had done a fellowship before their target fellowship, and 11.1% (72) did one after.
Since these fellowships are all ‘late-stage’ (none is designed to be much more senior than the others), I think it is quite surprising that over 10% of alumni do another fellowship after their target fellowship.
I also think it’s quite surprising that only 12.4% of fellows had done an AI Safety fellowship before, only slightly more than the share who did one after. This suggests that these fellowships mostly take people from outside the ‘standard fellowship stream’.
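The before/after split above is straightforward to compute once each profile's fellowships are in chronological order. A minimal sketch of that calculation, where the `Alum` structure and sample records are illustrative assumptions rather than the real dataset:

```python
from dataclasses import dataclass

# Hypothetical record: fellowships in chronological order, with the
# "target" (late-stage) fellowship the person was sampled from.
@dataclass
class Alum:
    fellowships: list[str]  # chronological order
    target: str             # the late-stage fellowship they were sampled from

def before_after_rates(alumni: list[Alum]) -> tuple[float, float]:
    """Fraction of alumni with at least one fellowship before / after the target."""
    before = sum(1 for a in alumni if a.fellowships.index(a.target) > 0)
    after = sum(1 for a in alumni
                if a.fellowships.index(a.target) < len(a.fellowships) - 1)
    n = len(alumni)
    return before / n, after / n

# Made-up sample, not real profiles.
sample = [
    Alum(["SPAR", "MATS"], "MATS"),   # did a fellowship before MATS
    Alum(["ERA", "GovAI"], "ERA"),    # did a fellowship after ERA
    Alum(["Pivotal"], "Pivotal"),     # neither
    Alum(["Tarbell"], "Tarbell"),     # neither
]
b, a = before_after_rates(sample)
print(b, a)  # 0.25 0.25
```

Note that the "before" and "after" groups can overlap (someone can do a fellowship both before and after their target), which is why 12.4% + 11.1% exceeds the 21.5% who did at least one other fellowship.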
Individual fellowships
Whilst most fellowships stayed close to the average, here are some notable trends:
Firstly, 20.2% (17) of ERA fellows did a fellowship after ERA, whilst only 9.5% (8) had done one before. This suggests ERA is, somewhat surprisingly, an earlier-stage fellowship than the others, and more of a feeder into them. I expect this will surprise people, since ERA is as prestigious and competitive as most of the others.
Secondly, MATS was the other way round, with 15.1% (33) having done a fellowship before and only 6.9% (15) doing a fellowship after. This is unsurprising, as MATS is often seen as one of the most prestigious AI Safety Fellowships.
Thirdly, 32.3% of Talos Network fellows did another fellowship before or after Talos, much higher than the 21.5% average. This suggests Talos is more enmeshed in the fellowship ecosystem than the other fellowships.
Links between fellowships
On the technical side, I found very strong links between MATS and each of SPAR, AI Safety Camp, and ARENA (13, 9, and 7 fellows respectively went directly from one to the other), which is unsurprising.
Perhaps more surprisingly, on the governance side I found equally strong links between GovAI and each of ERA, IAPS, and Talos, which also had 13, 9, and 7 links respectively. All of these fellowships are roughly half the size of MATS, which makes this especially surprising.
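A "link" here means a fellow went directly from one fellowship to the other, i.e. the two appear as consecutive entries on their profile. A minimal sketch of that counting, with made-up histories (the names and counts are illustrative, not the real data):

```python
from collections import Counter

# Hypothetical fellowship histories, each in chronological order.
histories = [
    ["SPAR", "MATS"],
    ["SPAR", "MATS", "ARENA"],
    ["GovAI", "ERA"],
]

# Count directed, adjacent pairs: each pair is one "direct" transition.
links = Counter(
    pair
    for fellowships in histories
    for pair in zip(fellowships, fellowships[1:])
)
print(links[("SPAR", "MATS")])  # 2
```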
For fun, I also put together a Sankey Visualisation of this. It’s a little jankey but I think it gives a nice visual view of the network. View the Sankey Diagram Here.
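For anyone wanting to reproduce the diagram: a Sankey trace only needs node labels plus parallel source/target/value arrays, which fall straight out of the transition counts. A sketch with illustrative numbers (I don't know which library the original diagram used; the counts here are made up):

```python
from collections import Counter

# Hypothetical directed transition counts between fellowships.
transitions = Counter({
    ("SPAR", "MATS"): 13,
    ("AI Safety Camp", "MATS"): 9,
    ("GovAI", "ERA"): 13,
})

# Map each fellowship name to an integer node index, then build the
# parallel source/target/value arrays a Sankey trace expects.
labels = sorted({name for pair in transitions for name in pair})
index = {name: i for i, name in enumerate(labels)}
sources = [index[a] for a, b in transitions]
targets = [index[b] for a, b in transitions]
values = [transitions[pair] for pair in transitions]

# These arrays plug into a plotting library's Sankey trace, e.g. plotly's
# go.Sankey(node=dict(label=labels),
#           link=dict(source=sources, target=targets, value=values)).
```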
Preliminary Directional Signals: IRG Data
As part of the IRG project I participated in this summer (during which I produced this database), I used this data to produce the following datapoints:
However, these results were produced very quickly: they combined AI tools for data extraction with manual, subjective judgement on whether someone worked in AI Safety. Whilst I expect they are in the right ballpark, view them as directional rather than conclusive.
Notes on the Data
Open Questions: What can this dataset answer?
Beyond the basic flow of talent, this dataset is primed to answer deeper questions about the AIS ecosystem. Here are a few useful questions I believe the community could tackle directly with this data. For the first four, the steps are quite straightforward and would make a good project; the last may require some thinking (an approach escapes me at the moment):
The Dataset Project
I wanted to release this dataset responsibly to the community, as I believe fellowship leads, employers, and grantmakers could gain valuable insights from it.
Request Access: If you'd like access to the raw dataset, please message me or fill in this form. Since the dataset contains personal information, I will be adding people on a person-by-person basis.
Note: If you're not affiliated with a major AI Safety Organization, please provide a brief explanation of your intended use for this data.
Next Steps
Firstly, I’d be very interested in working on one of these questions, particularly over the summer. If you’d be interested in collaborating with or mentoring me, have an extremely low bar for reaching out to me.
I would be especially excited to hear from people who have ideas for how to deal with the counterfactual impact question.
Secondly, if you’re an organisation and would like some kind of similar work done for your organisation or field, also have an extremely low bar for reaching out.
If you have access to, or funding for, AI tools like clay.com, I’d be especially interested in hearing from you.