Ryan Kidd

Give me feedback! :)

Current

Past

  • Ph.D. in Physics from the University of Queensland (2017-2022)
  • Group organizer at Effective Altruism UQ (2018-2021)

Comments

Wow, high praise for MATS! Thank you so much :) This list is also great for our Summer 2024 Program planning.

Another point: despite our broad call for mentors, only ~2 individuals who expressed interest in mentorship were not ultimately supported. It's possible our outreach could be improved; I'm happy to discuss in DMs.

I don't see this distribution of research projects as "Goodharting" or "overfocusing" on projects with clear feedback loops. As MATS is currently principally a program for prosaic AI alignment, most research conducted within the program should fall within that paradigm. We believe projects that frequently "touch reality" often offer the highest expected value in terms of reducing AI catastrophic risk. We principally support non-prosaic, "speculative," and emerging research agendas for their "exploration value," which might aid potential paradigm shifts, and to round out our portfolio (i.e., "hedge our bets").

However, even with the focus on prosaic AI alignment research agendas, our Summer 2023 Program supported many emerging or neglected research agendas, including projects in agent foundations, simulator theory, cooperative/multipolar AI (including s-risks), the nascent "activation engineering" approach our program helped pioneer, and the emerging "cyborgism" research agenda.

Additionally, our mentor portfolio is somewhat conditioned on the preferences of our funders. While we largely endorse our funders' priorities, we are seeking additional funding diversification so that we can support further speculative "research bets". If you are aware of large funders willing to support our program, please let me know!

There seems to be a bit of pushback against "postmortem," and our team is ambivalent, so I changed it to "retrospective."

FYI, the Net Promoter Score is 38%.
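
For context, NPS is conventionally computed as the percentage of promoters (ratings of 9–10) minus the percentage of detractors (ratings of 0–6) on a 0–10 "how likely are you to recommend" scale. A minimal sketch, using hypothetical ratings:

```python
def net_promoter_score(ratings):
    # Standard NPS: % promoters (9-10) minus % detractors (0-6)
    # on a 0-10 recommendation scale.
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical survey responses:
print(net_promoter_score([10, 9, 9, 8, 8, 7, 10, 9, 6, 5, 10, 8]))  # ~33.3
```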

Do you think "46% of scholar projects were rated 9/10 or higher" is better? What about "scholar projects were rated 8.1/10 on average"?
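
Both framings summarize the same underlying ratings: one is a tail share, the other a mean. A quick sketch of how each statistic falls out of a hypothetical list of project ratings:

```python
# Hypothetical 1-10 ratings for scholar projects:
ratings = [9, 10, 8, 7, 9, 6, 10, 8, 9, 5, 8, 9]

share_9_plus = 100 * sum(r >= 9 for r in ratings) / len(ratings)  # share of standout projects
mean_rating = sum(ratings) / len(ratings)                         # central tendency

print(f"{share_9_plus:.0f}% of projects rated 9/10 or higher")  # 50%
print(f"average rating: {mean_rating:.1f}/10")                  # 8.2
```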

We also asked mentors to rate scholars' "depth of technical ability," "breadth of AI safety knowledge," "research taste," and "value alignment." We omitted these results from the report to prevent bloat, but your comment makes me think we should re-add them.