Apply for MATS Winter 2023-24!
Applications are now open for the Winter 2023-24 cohort of MATS (previously SERI MATS). Our mentors are: Adrià Garriga Alonso, Alex Turner, Buck Shlegeris, Caspar Oesterheld, Daniel Murfet, David ‘davidad’ Dalrymple, Erik Jenner, Ethan Perez, Evan Hubinger, Francis Rhys Ward, Jeffrey Ladish, Jesse Clifton, Jesse Hoogland, Lee Sharkey, Neel Nanda, Owain Evans, Stephen Casper, Vanessa Kosoy, and researchers at Sam Bowman's NYU Alignment Research Group, including Asa Cooper Stickland, Julian Michael, Shi Feng, and David Rein.

Submissions for most mentors are due on November 17 (and for Neel Nanda on November 10). Many mentors ask challenging candidate selection questions, so make sure you allow adequate time to complete your application.

We encourage prospective applicants to fill out our brief interest form to receive program updates and application deadline reminders. You can also fill out our recommendation form to let us know about someone who might be a good fit, and we will share our application with them.

We are currently funding-constrained and accepting donations to support further research scholars. If you would like to support our work, you can donate here!

Program Details

MATS is an educational seminar and independent research program (40 h/week) in Berkeley, CA that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with the Berkeley AI safety research community. MATS provides scholars with housing in Berkeley, as well as travel support, a co-working space, and a community of peers.

The main goal of MATS is to help scholars develop as AI safety researchers. You can read more about our theory of change here.

Based on individual circumstances, we may be willing to alter the time commitment of the program and arrange for scholars to leave or start early. Please tell us your availability when applying. Our tentative timeline for the MATS Winter 2023-24 program is below.
AGI Should Have Been a Dirty Word
Epistemic status: passing thought.
It is absolutely crazy that Mark Zuckerberg can say that smart glasses will unlock personal superintelligence or whatever incoherent nonsense and be taken seriously. That reflects poorly on AI safety's comms capacities.
Bostrom's book should have laid claim to superintelligence! It came out early enough that it should have been able to plant its flag and set the connotations of the term. It should have made it so Zuckerberg could not throw around the word so casually.
I would go further and say that the early safety writing on AGI should have been enough that the labs were too scared to say in public in...