Ryan Kidd
  • Co-Executive Director at ML Alignment & Theory Scholars Program (2022-present)
  • Co-Founder & Board Member at London Initiative for Safe AI (2023-present)
  • Manifund Regrantor (2023-present)  |  RFPs here
  • Advisor, Catalyze Impact (2023-present)  |  ToC here
  • Advisor, AI Safety ANZ (2024-present)
  • Ph.D. in Physics at the University of Queensland (2017-2023)
  • Group organizer at Effective Altruism UQ (2018-2021)

Give me feedback! :)

Comments

Ryan Kidd's Shortform
Ryan Kidd · 21d

80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all available alumni LinkedIn profiles and personal websites (242/292 alumni ≈ 83% coverage). 10% are working on AI capabilities, but only ~6 are at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including non-AI-safety/capabilities software engineering, teaching, data science, consulting, and quantitative trading.

Of the 193+ MATS alumni working on AI safety (extrapolated: 234):

  • 34% are working at a non-profit org (Apollo, Redwood, MATS, EleutherAI, FAR.AI, MIRI, ARC, Timaeus, LawZero, RAND, METR, etc.);
  • 27% are working at a for-profit org (Anthropic, Google DeepMind, OpenAI, Goodfire, Meta, etc.);
  • 18% are working as independent researchers, probably with grant funding from Open Philanthropy, LTFF, etc.;
  • 15% are working as academic researchers, including PhDs/Postdocs at Oxford, Cambridge, MIT, ETH Zurich, UC Berkeley, etc.;
  • 6% are working in government agencies, including in the US, UK, EU, and Singapore.

10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, ARENA, etc.

Errata: I mistakenly included UK AISI in the "non-profit AI safety organization" category instead of "government agency". I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.
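
A minimal sketch of the arithmetic behind these headline figures, assuming the extrapolation simply scales the shares observed among surveyed alumni up to the full pre-2025 cohort (the post doesn't spell out its exact method, so the rounding differs slightly):

```python
# Reproducing the survey coverage, the 80% headline, and the extrapolated
# headcount. Only the three input counts below come from the post; the
# scaling method is an assumption.

surveyed = 242          # alumni with a findable LinkedIn or personal website
total_alumni = 292      # all alumni who completed the program before 2025
safety_observed = 193   # surveyed alumni confirmed to be working on AI safety

coverage = surveyed / total_alumni           # ~0.83, the "242/292 ~ 83%"
safety_share = safety_observed / surveyed    # ~0.80, the headline figure
safety_extrapolated = safety_share * total_alumni  # ~233 (the post reports ~234)

print(f"survey coverage: {coverage:.0%}")
print(f"working on AI safety (observed share): {safety_share:.0%}")
print(f"working on AI safety (extrapolated headcount): {safety_extrapolated:.0f}")
```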

Ryan Kidd's Shortform
Ryan Kidd · 1y

Why does the AI safety community need help founding projects?

  1. AI safety should scale
    1. Labs need external auditors for the AI control plan to work
    2. We should pursue many research bets in case superalignment/control fails
    3. Talent leaves MATS/ARENA and sometimes struggles to find meaningful work for mundane reasons, not for lack of talent or ideas
    4. Some emerging research agendas don’t have a home
    5. There are diminishing returns at scale for current AI safety teams; sometimes founding new projects is better than joining an existing team
    6. Scaling lab alignment teams are bottlenecked by management capacity, so their talent cut-off is above the level required to do “useful AIS work”
  2. Research organizations (inc. nonprofits) are often more effective than independent researchers
    1. The “block funding model” is more efficient, as researchers can spend more time researching rather than seeking grants, managing, or handling other traditional PI duties that can be outsourced
    2. Open source/collective projects often need a central rallying point (e.g., EleutherAI, dev interp at Timaeus, selection theorems and cyborgism agendas seem too delocalized, etc.)
  3. There is (imminently) a market for for-profit AI safety companies, and value-aligned people should capture this free energy or let worse alternatives flourish
    1. If labs or API users are made legally liable for their products, they will seek out external red-teaming/auditing consultants to prove they “made a reasonable attempt” to mitigate harms
    2. If government regulations require labs to seek external auditing, there will be a market for many types of companies
    3. “Ethical AI” companies might seek out interpretability or bias/fairness consultants
  4. New AI safety organizations struggle to get funding and co-founders despite having good ideas
    1. AIS researchers are usually not experienced entrepreneurs (e.g., they don’t know how to write grant proposals for EA funders, pitch decks for VCs, manage/hire new team members, etc.)
    2. There are not many competent start-up founders in the EA/AIS community, and when they do join, they often don’t know where they can help most impactfully
    3. Creating a centralized resource for entrepreneurial education/consulting and co-founder pairing would solve these problems
Ryan Kidd's Shortform
Ryan Kidd · 1y

I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:

  • Funding for AI safety PhDs (e.g., with these supervisors), particularly in exploratory research connecting AI safety theory with empirical ML research.
  • An AI safety PhD advisory service that helps prospective PhD students choose a supervisor and topic (similar to Effective Thesis, but specialized for AI safety).
  • Initiatives to critically examine current AI safety macrostrategy (e.g., as articulated by Holden Karnofsky) like the Open Philanthropy AI Worldviews Contest and Future Fund Worldview Prize.
  • Initiatives to identify and develop "Connectors" outside of academia (e.g., a reboot of the Refine program, well-scoped contests, long-term mentoring and peer-support programs).
  • Physical community spaces for AI safety in AI hubs outside of the SF Bay Area or London (e.g., Japan, France, Bangalore).
  • Start-up incubators for projects, including evals/red-teaming/interp companies, that aim to benefit AI safety, like Catalyze Impact, Future of Life Foundation, and YCombinator's request for Explainable AI start-ups.
  • Initiatives to develop and publish expert consensus on AI safety macrostrategy cruxes, such as the Existential Persuasion Tournament and 2023 Expert Survey on Progress in AI (e.g., via the Delphi method, interviews, surveys, etc.).
  • Ethics/prioritization research into:
    • What values should be instilled in artificial superintelligence?
    • How should AI-generated wealth be distributed?
    • What should people do in a post-labor society?
    • What level of surveillance/restriction is justified by the Unilateralist's Curse?
    • What moral personhood will digital minds have?
    • How should nations share decision-making power regarding transformative AI?
  • New nonprofit startups that aim to benefit AI safety.
Ryan Kidd's Shortform
Ryan Kidd · 3d

SFF-2025 S-Process funding results have been announced! This year, the S-Process gave away ~$34M to 88 organizations. The mean (median) grant was $390k ($241k).

Ryan Kidd's Shortform
Ryan Kidd · 4d

Also of interest: MATS mentor applications have been growing linearly, at a rate of about +70/year. We are now publicly advertising for mentor applications for the first time (previously, we only sent out mass emails and advertised on Slack workspaces), which might explain the sub-exponential growth so far. Given the exponential growth in applicants and the linear growth in mentors, the latter still seems to be the limiting constraint.
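
A rough illustration of why linear mentor growth loses to exponential applicant growth. The starting counts below are hypothetical placeholders (the actual MATS numbers aren't given here); only the growth rates, ~70%/year for applicants (mentioned elsewhere in this thread) and +70/year for mentor applications, come from the comments:

```python
# Illustrative projection only: starting values are hypothetical, not MATS data.
applicants = 1000      # hypothetical current applicant count
mentor_apps = 200      # hypothetical current mentor application count

for year in range(6):
    print(f"year {year}: ~{applicants:.0f} applicants, "
          f"~{mentor_apps:.0f} mentor applications, "
          f"ratio ~{applicants / mentor_apps:.1f}")
    applicants *= 1.70   # exponential growth: +70% per year
    mentor_apps += 70    # linear growth: +70 applications per year
```

Under any such parameters, the applicant-to-mentor ratio keeps growing, which is why mentor capacity remains the binding constraint.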

Ryan Kidd's Shortform
Ryan Kidd · 5d

Josh Landes shared that BlueDot Impact's application rate has been increasing by 4.7x/year.

Ryan Kidd's Shortform
Ryan Kidd · 7d

I've updated the OP to reflect my intention better. Also, thanks for reminding me re. advertising on LW; MATS actually hasn't made an ad post yet and this seems like a big oversight!

Ryan Kidd's Shortform
Ryan Kidd · 7d

Any alternative framing/text you recommend? I think it's a pretty useful statistic for AI safety field-building.

Ryan Kidd's Shortform
Ryan Kidd · 7d

I'm interested in determining the growth rate of AI safety awareness to aid field-building strategy. Applications to MATS have been increasing by 70% per year. Do any other field-builders have statistics they can share?

Ryan Kidd's Shortform
Ryan Kidd · 19d

Here are the AI capabilities organizations where MATS alumni are working (1 at each except for Anthropic and GDM, where there are 2 each):

  • Anthropic
  • Barcelona Supercomputing Cluster
  • Conduit Intelligence
  • Decart
  • EliseAI
  • Fractional AI
  • General Agents
  • Google DeepMind
  • iGent AI
  • Imbue
  • Integuide
  • Kayrros
  • Mecha Health
  • Mistral AI
  • MultiOn
  • Norm AI
  • NVIDIA
  • Palantir
  • Phonic
  • RunRL
  • Salesforce
  • Sandbar
  • Secondmind
  • Yantran

Alumni also work at these organizations, which might be classified as capabilities or safety-adjacent:

  • Freestyle Research
  • Leap Labs

Posts

  • Apply to MATS 9.0! (43 karma, 4d, 0 comments)
  • MATS 8.0 Research Projects (17 karma, 6d, 0 comments)
  • MATS is hiring! (8 karma, 5mo, 0 comments)
  • Apply to MATS 8.0! (64 karma, 6mo, 5 comments)
  • MATS Spring 2024 Extension Retrospective (26 karma, 7mo, 1 comment)
  • [Job ad] LISA CEO (18 karma, 7mo, 4 comments)
  • Implications of the inference scaling paradigm for AI safety (94 karma, 8mo, 70 comments)
  • MATS mentor selection (44 karma, 8mo, 12 comments)
  • [Job Ad] MATS is hiring! (10 karma, 1y, 0 comments)
  • MATS AI Safety Strategy Curriculum v2 (43 karma, 1y, 6 comments)

Wikitag Contributions

  • MATS Program (2 years ago, +255/-37)
  • MATS Program (2 years ago, +14/-46)