Based on my recollections of being around in 2015, your number from then seems too high to me (I would have guessed there were at most 30 people doing what I would have thought of as AI x-risk research back then). Can I get a sense of who you're counting?
Based on updated data and estimates from 2025, I estimate that there are now approximately 600 FTEs working on technical AI safety and 500 FTEs working on non-technical AI safety (1100 in total).
I think it's suggestive to compare with e.g. the number of FTEs related to addressing climate change, for a hint at how puny the numbers above are:
Using our definition's industry approach, UK employment in green jobs was an estimated 690,900 full-time equivalents (FTEs) in 2023. (https://www.ons.gov.uk/economy/environmentalaccounts/bulletins/experimentalestimatesofgreenjobsuk/july2025)
Jobs in renewable energy reached 16.2 million globally in 2023 (https://www.un.org/en/climatechange/science/key-findings)
While I like the idea of the comparison, I don't think the gov't definition of "green jobs" is the right comparison point. (e.g. those are not research jobs)
Thanks for this work.
In your technical dataset, Apart Research appears twice, with 10 and 40 FTEs listed respectively. Is that intentional? Is it meant to track the core team vs. volunteers participating in hackathons, etc.?
Can you say a bit more about how these numbers are estimated? E.g. does one person look at the websites and write down how many people they see, estimate from other public info, or directly ask the orgs when possible?
I'm surprised that Anthropic and GDM have such small numbers of technical safety researchers in your dataset. What are the criteria for inclusion / how did you land on those numbers?
What changed that made the historical datapoints so much higher? E.g. you now think that there were >100 technical AIS researchers in 2016, whereas in 2022 you thought that there had been <50 technical AIS researchers in 2016.
I notice that the historical data revisions are consistently upward. This looks consistent with a model like: in each year x, you notice some "new" people that should be in your dataset, but you also notice that they've been working on TAIS-related stuff for many years by that point. If we take that model literally, and calibrate it to past revisions, it suggests that you're probably undercounting right now by 50-100%. Does that sound plausible to you?
Thanks for assembling this dataset!
The goal of this post is to analyze the growth of the technical and non-technical AI safety fields in terms of the number of organizations and the number of FTEs working in each field.
In 2022, I estimated that there were about 300 FTEs (full-time equivalents) working in the field of technical AI safety research and 100 on non-technical AI safety work (400 in total).
Based on updated data and estimates from 2025, I estimate that there are now approximately 600 FTEs working on technical AI safety and 500 FTEs working on non-technical AI safety (1100 in total).
Note that this post is an updated version of my old 2022 post Estimating the Current and Future Number of AI Safety Researchers.
The first step in analyzing the growth of the technical AI safety field was to create a spreadsheet listing the names of known technical AI safety organizations, when they were founded (and, where relevant, closed), and an estimated number of FTEs for each organization. The technical AI safety dataset contains 70 organizations working on technical AI safety with a total of 645 FTEs working at them (68 active organizations and 620 active FTEs in 2025).
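To illustrate how per-year counts can be derived from such a spreadsheet, here is a minimal sketch in Python. The file name, column names, and the choice to count an organization as active through its closure year are assumptions for illustration, not necessarily the exact pipeline used for this post.

```python
import pandas as pd

# Hypothetical CSV with the same columns as the dataset table below:
# Name, Founded, Year of Closure, Category, FTEs
orgs = pd.read_csv("technical_ai_safety_orgs.csv")

def active_counts(year: int) -> tuple[int, int]:
    """Count organizations active in a given year (founded on or before that
    year and not yet closed), along with their total FTEs."""
    founded = orgs["Founded"] <= year
    # Treat a missing closure year as "still active"; an org closing in year Y
    # is counted as active through Y (an assumption, not necessarily the
    # convention used in the post).
    not_closed = orgs["Year of Closure"].isna() | (orgs["Year of Closure"] >= year)
    active = orgs[founded & not_closed]
    return len(active), int(active["FTEs"].sum())

# Per-year totals that could feed the scatter plots described below.
for year in range(2010, 2026):
    n_orgs, n_ftes = active_counts(year)
    print(year, n_orgs, n_ftes)
```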
Then I created two scatter plots showing the number of technical AI safety research organizations and the number of FTEs working at them, respectively. On each graph, the x-axis shows the years from 2010 to 2025 and the y-axis shows either the number of active organizations or the estimated total number of FTEs working at them. I also fit models to the scatter plots and found that an exponential model fit both the organization and FTE data best.
The two graphs show relatively slow growth from 2010 to 2020; around 2020, the number of technical AI safety organizations and FTEs begins to increase rapidly and continues growing quickly through today (2025).
The exponential models describe a 24% annual growth rate in the number of technical AI safety organizations and a 21% annual growth rate in the number of technical AI safety FTEs.
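For reference, an exponential fit of this kind can be reproduced with a sketch like the one below. The yearly series here is a synthetic placeholder rather than the post's actual data, and scipy's `curve_fit` is one possible choice of fitting routine, not necessarily the one used for the post.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic placeholder series with roughly 21% annual growth; replace with
# the real per-year FTE totals derived from the dataset (these numbers are
# NOT the post's data).
years = np.arange(2010, 2026)
ftes = 12 * 1.21 ** (years - 2010)

# Exponential growth model: N(t) = N0 * (1 + r)^(t - 2010),
# where r is the annual growth rate.
def exp_model(t, n0, r):
    return n0 * (1.0 + r) ** (t - 2010)

(n0_hat, r_hat), _ = curve_fit(exp_model, years, ftes, p0=[10.0, 0.2])
print(f"Fitted annual growth rate: {r_hat:.1%}")
```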
I also created graphs showing the number of technical AI safety organizations and FTEs by category. The top three categories by number of organizations and FTEs are Misc technical AI safety research, LLM safety, and interpretability.
Misc technical AI safety research is a broad category that mostly consists of empirical AI safety research that is not purely focused on LLM safety, such as scalable oversight, adversarial robustness, and jailbreak research, as well as research that spans a variety of areas and is difficult to place in a single category.
I also applied the same analysis to a dataset of non-technical AI safety organizations. The non-technical AI safety landscape, which includes fields like AI policy, governance, and advocacy, has also expanded significantly. The non-technical AI safety dataset contains 45 organizations working on non-technical AI safety and a total of 489 FTEs working at them.
The graphs plotting the growth of the non-technical AI safety field show an acceleration in the growth rate around 2023, though a linear model fits the data well over the years 2010-2025.
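A linear trend like this can be checked with a similar sketch. Again, the yearly series below is a synthetic placeholder rather than the post's actual data, and `numpy.polyfit` is just one reasonable way to fit the line.

```python
import numpy as np

# Synthetic placeholder series for active non-technical AI safety FTEs per
# year (illustrative only, not the post's data).
years = np.arange(2010, 2026)
ftes = 5.0 + 25.0 * (years - 2010)

# Least-squares linear fit: FTEs ~ slope * year + intercept.
slope, intercept = np.polyfit(years, ftes, deg=1)
print(f"Fitted growth of about {slope:.0f} FTEs per year")

# R^2 as a quick check of how well a straight line describes the series.
pred = slope * years + intercept
ss_res = np.sum((ftes - pred) ** 2)
ss_tot = np.sum((ftes - ftes.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```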
In the previous post from 2022, I counted 45 researchers on Google Scholar with the AI governance tag. There are now over 300 researchers with the AI governance tag, evidence that the field has grown.
I also created graphs showing the number of non-technical AI safety organizations and FTEs by category.
Thanks to Ryan Kidd from SERI MATS for sharing data on AI safety organizations, which was useful for writing this post.
The following graph compares the old dataset and model from the Estimating the Current and Future Number of AI Safety Researchers (2022) post with the updated dataset and model.
The old model is the blue line and the new model is the orange line.
The old model predicts 484 active technical FTEs in 2025, while the actual value is 620. The percentage error between the predicted and actual value is about 22% ((620 - 484) / 620 ≈ 0.22).
Technical AI safety dataset:

| Name | Founded | Year of Closure | Category | FTEs |
|---|---|---|---|---|
| Machine Intelligence Research Institute (MIRI) | 2000 | 2024 | Agent foundations | 10 |
| Future of Humanity Institute (FHI) | 2005 | 2024 | Misc technical AI safety research | 10 |
| Google DeepMind | 2010 | | Misc technical AI safety research | 30 |
| GoodAI | 2014 | | Misc technical AI safety research | 5 |
| Jacob Steinhardt research group | 2016 | | Misc technical AI safety research | 9 |
| David Krueger (Cambridge) | 2016 | | RL safety | 15 |
| Center for Human-Compatible AI | 2016 | | RL safety | 10 |
| OpenAI | 2016 | | LLM safety | 15 |
| Truthful AI (Owain Evans) | 2016 | | LLM safety | 3 |
| CORAL | 2017 | | Agent foundations | 2 |
| Scott Niekum (University of Massachusetts Amherst) | 2018 | | RL safety | 4 |
| Eleuther AI | 2020 | | LLM safety | 5 |
| NYU He He research group | 2021 | | LLM safety | 4 |
| MIT Algorithmic Alignment Group (Dylan Hadfield-Menell) | 2021 | | LLM safety | 10 |
| Anthropic | 2021 | | Interpretability | 40 |
| Redwood Research | 2021 | | AI control | 10 |
| Alignment Research Center (ARC) | 2021 | | Theoretical AI safety research | 4 |
| Lakera | 2021 | | AI security | 3 |
| SERI MATS | 2021 | | Misc technical AI safety research | 20 |
| Constellation | 2021 | | Misc technical AI safety research | 18 |
| NYU Alignment Research Group (Sam Bowman) | 2022 | 2024 | LLM safety | 5 |
| Center for AI Safety (CAIS) | 2022 | | Misc technical AI safety research | 5 |
| Fund for Alignment Research (FAR) | 2022 | | Misc technical AI safety research | 15 |
| Conjecture | 2022 | | Misc technical AI safety research | 10 |
| Aligned AI | 2022 | | Misc technical AI safety research | 2 |
| Apart Research | 2022 | | Misc technical AI safety research | 10 |
| Epoch AI | 2022 | | AI forecasting | 5 |
| AI Safety Student Team (Harvard) | 2022 | | LLM safety | 5 |
| Tegmark Group | 2022 | | Interpretability | 5 |
| David Bau Interpretability Group | 2022 | | Interpretability | 12 |
| Apart Research | 2022 | | Misc technical AI safety research | 40 |
| Dovetail Research | 2022 | | Agent foundations | 5 |
| PIBBSS | 2022 | | Interdisciplinary | 5 |
| METR | 2023 | | Evals | 31 |
| Apollo Research | 2023 | | Evals | 19 |
| Timaeus | 2023 | | Interpretability | 8 |
| London Initiative for AI Safety (LISA) and related programs | 2023 | | Misc technical AI safety research | 10 |
| Cadenza Labs | 2023 | | LLM safety | 3 |
| Realm Labs | 2023 | | AI security | 6 |
| ACS | 2023 | | Interdisciplinary | 5 |
| Meaning Alignment Institute | 2023 | | Value learning | 3 |
| Orthogonal | 2023 | | Agent foundations | 1 |
| AI Security Institute (AISI) | 2023 | | Evals | 50 |
| Shi Feng research group (George Washington University) | 2024 | | LLM safety | 3 |
| Virtue AI | 2024 | | AI security | 3 |
| Goodfire | 2024 | | Interpretability | 29 |
| Gray Swan AI | 2024 | | AI security | 3 |
| Transluce | 2024 | | Interpretability | 15 |
| Guide Labs | 2024 | | Interpretability | 4 |
| Aether research | 2024 | | LLM safety | 3 |
| Simplex | 2024 | | Interpretability | 2 |
| Contramont Research | 2024 | | LLM safety | 3 |
| Tilde | 2024 | | Interpretability | 5 |
| Palisade Research | 2024 | | AI security | 6 |
| Luthien | 2024 | | AI control | 1 |
| ARIA | 2024 | | Provably safe AI | 1 |
| CaML | 2024 | | LLM safety | 3 |
| Decode Research | 2024 | | Interpretability | 2 |
| Meta superintelligence alignment and safety | 2025 | | LLM safety | 5 |
| LawZero | 2025 | | Misc technical AI safety research | 10 |
| Geodesic | 2025 | | CoT monitoring | 4 |
| Sharon Li (University of Wisconsin Madison) | 2020 | | LLM safety | 10 |
| Yaodong Yang (Peking University) | 2022 | | LLM safety | 10 |
| Dawn Song | 2020 | | Misc technical AI safety research | 5 |
| Vincent Conitzer | 2022 | | Multi-agent alignment | 8 |
| Stanford Center for AI Safety | 2018 | | Misc technical AI safety research | 20 |
| Formation Research | 2025 | | Lock-in risk research | 2 |
| Stephen Byrnes | 2021 | | Brain-like AGI safety | 1 |
| Roman Yampolskiy | 2011 | | Misc technical AI safety research | 1 |
| Softmax | 2025 | | Multi-agent alignment | 3 |
| Total (70 organizations) | | | | 645 |
Non-technical AI safety dataset:

| Name | Founded | Category | FTEs |
|---|---|---|---|
| Centre for Security and Emerging Technology (CSET) | 2019 | research | 20 |
| Epoch AI | 2022 | forecasting | 20 |
| Centre for Governance of AI (GovAI) | 2018 | governance | 40 |
| Leverhulme Centre for the Future of Intelligence | 2016 | research | 25 |
| Center for the Study of Existential Risk (CSER) | 2012 | research | 3 |
| OpenAI | 2016 | governance | 10 |
| DeepMind | 2010 | governance | 10 |
| Future of Life Institute | 2014 | advocacy | 10 |
| Center on Long-Term Risk | 2013 | research | 5 |
| Open Philanthropy | 2017 | research | 15 |
| Rethink Priorities | 2018 | research | 5 |
| UK AI Security Institute (AISI) | 2023 | governance | 25 |
| European AI Office | 2024 | governance | 50 |
| Ada Lovelace Institute | 2018 | governance | 15 |
| AI Now Institute | 2017 | governance | 15 |
| The Future Society (TFS) | 2014 | advocacy | 18 |
| Centre for Long-Term Resilience (CLTR) | 2019 | governance | 5 |
| Stanford Institute for Human-Centered AI (HAI) | 2019 | research | 5 |
| Pause AI | 2023 | advocacy | 20 |
| Simon Institute for Longterm Governance | 2021 | governance | 10 |
| AI Policy Institute | 2023 | governance | 1 |
| The AI Whistleblower Initiative | 2024 | whistleblower support | 5 |
| Machine Intelligence Research Institute | 2024 | advocacy | 5 |
| Beijing Institute of AI Safety and Governance | 2024 | governance | 5 |
| ControlAI | 2023 | advocacy | 10 |
| International Association for Safe and Ethical AI | 2024 | research | 3 |
| International AI Governance Alliance | 2025 | advocacy | 1 |
| Center for AI Standards and Innovation (U.S. AI Safety Institute) | 2023 | governance | 10 |
| China AI Safety and Development Association | 2025 | governance | 10 |
| Transformative Futures Institute | 2022 | research | 4 |
| AI Futures Project | 2024 | advocacy | 5 |
| AI Lab Watch | 2024 | watchdog | 1 |
| Center for Long-Term Artificial Intelligence | 2022 | research | 12 |
| SaferAI | 2023 | research | 14 |
| AI Objectives Institute | 2021 | research | 16 |
| Concordia AI | 2020 | research | 8 |
| CARMA | 2024 | research | 10 |
| Encode AI | 2020 | governance | 7 |
| Safe AI Forum (SAIF) | 2023 | governance | 8 |
| Forethought Foundation | 2018 | research | 8 |
| AI Impacts | 2014 | research | 3 |
| Cosmos Institute | 2024 | research | 5 |
| AI Standards Labs | 2024 | governance | 2 |
| Center for AI Safety | 2022 | advocacy | 5 |
| CeSIA | 2024 | advocacy | 5 |
| Total (45 organizations) | | | 489 |