Ryan Kidd

Give me feedback! :)

Past

  • Ph.D. in Physics from the University of Queensland (2017-2022)
  • Group organizer at Effective Altruism UQ (2018-2021)

Comments

Can you estimate dark triad scores from the Big Five survey data?

You might be interested in this breakdown of gender differences in the research interests of 1331 applicants to the MATS Summer 2024 and Winter 2024-25 Programs. For each research direction, the plot shows the percentage of male applicants who indicated interest minus the percentage of female applicants who did.

The most male-dominated research interest is mech interp, possibly due to the high male representation in software engineering (~80%), physics (~80%), and mathematics (~60%). The most female-dominated research interest is AI governance, possibly due to the high female representation in the humanities (~60%). Interestingly, cooperative AI was a female-dominated research interest, which seems to match the result from your survey in which female respondents were less in favor of "controlling" AIs than male respondents and more in favor of "coexistence" with AIs.

This is potentially exciting news! You should definitely visit the LISA office, where many MATS extension program scholars are currently located.

Last program, 44% of scholar research was on interpretability, 18% on evals/demos, 17% on oversight/control, etc. In Summer 2024, we intend for 35% of scholar research to be on interpretability, 17% on evals/demos, 27% on oversight/control, etc., based on our available mentor pool and research priorities. Interpretability will still be the largest research track and still has the greatest interest from potential mentors and applicants. The plot below shows the research interests of 1331 MATS applicants and 54 potential mentors who applied for our Summer 2024 or Winter 2024-25 Programs.

Oh, I think we forgot to ask scholars if they wanted Microsoft at the career fair. Is Microsoft hiring AI safety researchers?

Thank you so much for conducting this survey! I want to share some information on behalf of MATS:

  • In comparison to the AIS survey gender ratio of 9 M:F, MATS Winter 2023-24 scholars and mentors were 4 M:F and 12 M:F, respectively. Our Winter 2023-24 applicants were 4.6 M:F, whereas our Summer 2024 applicants were 2.6 M:F, closer to the EA survey ratio of 2 M:F. This data seems to indicate a large recent change in gender ratios of people entering the AIS field. Did you find that your AIS survey respondents with more AIS experience were significantly more male than newer entrants to the field?
  • MATS Summer 2024 applicants and interested mentors similarly prioritized research to "understand existing models" (e.g., interpretability and evaluations) over research to "control the AI" or "make the AI solve it" (e.g., scalable oversight and control/red-teaming), which in turn they prioritized over "theory work" (e.g., agent foundations and cooperative AI; note that some cooperative AI work is primarily empirical).
  • The forthcoming summary of our "AI safety talent needs" interview series generally agrees with this survey's findings regarding the importance of "soft skills" and "work ethic" in impactful new AIS contributors. Watch this space!
  • In addition to supporting core established AIS research paradigms, MATS would like to encourage the development of new paradigms. For better or worse, the current AIS funding landscape seems to have a high bar for speculative research into new paradigms. Has AE Studio considered sponsoring significant bounties or impact markets for scoping promising new AIS research directions?
  • Did survey respondents mention how they proposed making AIS more multidisciplinary? Which established research fields are more needed in the AIS community?
  • Did EAs consider AIS exclusively a longtermist cause area, or did they anticipate near-term catastrophic risk from AGI?
  • Thank you for the kind donation to MATS as a result of this survey!

I found this article useful. Any plans to update this for 2024?

Wow, high praise for MATS! Thank you so much :) This list is also great for our Summer 2024 Program planning.

Another point: Despite our broad call for mentors, there were only ~2 individuals who expressed interest in mentorship whom we did not ultimately decide to support. It's possible our outreach could be improved, and I'm happy to discuss in DMs.


I don't see this distribution of research projects as "Goodharting" or "overfocusing" on projects with clear feedback loops. As MATS is principally a program for prosaic AI alignment at the moment, most research conducted within the program should be within this paradigm. We believe projects that frequently "touch reality" often offer the highest expected value in terms of reducing AI catastrophic risk, and we principally support non-prosaic, "speculative," and emerging research agendas for their "exploration value," which might aid potential paradigm shifts, as well as to round out our portfolio (i.e., "hedge our bets").

However, even with the focus on prosaic AI alignment research agendas, our Summer 2023 Program supported many emerging or neglected research agendas, including projects in agent foundations, simulator theory, cooperative/multipolar AI (including s-risks), the nascent "activation engineering" approach our program helped pioneer, and the emerging "cyborgism" research agenda.

Additionally, our mentor portfolio is somewhat conditioned on the preferences of our funders. While we largely endorse our funders' priorities, we are seeking additional funding diversification so that we can support further speculative "research bets". If you are aware of large funders willing to support our program, please let me know!
