No professors were actively interested in the topic, and programs like SPAR, which we helped build, would quickly saturate with applicants. Currently, we are experimenting with a promising system with Georgia Tech faculty as mentors and experienced organizers as research managers.
At UChicago we use experienced organizers as research managers and have found this to be successful overall. Outside mentorship is typically still required, but it's just the cherry on top and doesn't require a large time commitment from those outside people.
I believe SPAR is a really good resource but is becoming increasingly competitive. For very junior-level people (e.g., impressive first-year college students with no research experience), there is a lowish probability of being accepted to SPAR, which presents a really good opportunity for university groups to step in.
We will spend less time upskilling new undergraduates who will get those skills from other places soon anyway.
I believe that one of the highest-impact things UChicago's group does is give these "very junior level" people their first research experience. This can shorten the time until these students are qualified to join an AI safety org by ~1 year, but the total time from "very junior level" to "at an org that does good research" is still probably 3+ years. That falls outside of "AI 2027" timelines, but because university groups have much more leverage in a longer-timeline world (5+ years), I think this makes quite a bit of sense.
(Full disclosure: I'm quite biased on the final point because I give those short timelines a much lower probability than most other organizers -- both at UChicago and in general. On some level I suspect this is why I come up with arguments for why it makes sense to care about longer timelines even if you believe in short ones. In general, I still tend to think that the students who are most serious about making an impact in a short-timeline world drop out of college -- three students at UChicago have done this -- and for the students who remain, it makes sense to think about a 5+ year world.)
More on giving undergrads their first research experience: yes, a first research experience is high impact, but we want to reserve these opportunities for the strongest people. This first research experience is often most fruitful when they work with a highly competent team. We are turning our focus to assembling such teams and finding fits for the most value-aligned undergrads.
We always find it hard to form pipelines because individuals are just so different! I don't even feel comfortable using 'undergrad' as a label if I'm honest...
Thanks for the insights! The structured research program at UChicago you describe is exactly what we're trying to do now, but it's too soon to say whether or not it's working.
I think giving people their first research experience can be valuable, although we tend to find that the students attracted to these opportunities already have, or are getting, research experience, even if they are at a "very junior level." Something like <5% of new undergraduates who self-select into wanting to do research with us don't already have exposure to technical research, so I'm unsure how valuable that is as a goal.
I think what's high-impact here is getting people to their first published work related to AI safety, which can dramatically shorten timelines to FTE work.
This post is an organizational update from Georgia Tech’s AI Safety Initiative (AISI) and roughly represents our collective view. We share lessons and takes from the 2024-25 academic year, describe what we’ve done, and detail our plans for the next academic year.
Hey, we’re organizers of Georgia Tech’s AI Safety Initiative (AISI) - thanks for dropping by! The purpose of this post is to document and share our activities with fellow student AI safety organizations. Feel free to skip to whichever sections you need. We welcome all constructive discussion! Put any further questions, feedback, or disagreements in the comments and one of us will certainly respond. A brief outline:
This post is primarily authored by Yixiong (co-director) and Parv (collaborative initiatives lead). First and foremost, we’d like to give a HUGE shoutout to our team - Ayush, Stepan, Alec, Eyas, Andrew, Vishnesh, Tanush, Harshit, and Jaehun - for volunteering countless hours despite busy schedules. None of this would’ve been possible without y’all <3. We would also like to thank Open Philanthropy, our faculty advisors, and external collaborators for supporting our mission.
AISI significantly expanded its education, outreach, and research activities in the past year; here’s what we’ve been up to:
This is our introductory offering - a technical fellowship distilled from BlueDot Impact’s course. See our syllabus here. In the past year, we received over 160 applications and hosted 16 cohorts of 6-8 students each. Cohorts met for ninety minutes every week, and after six weeks students had the opportunity to apply for AISI’s research programs and support. This is an effective program every AIS student organization should have, though it’s high recall and low precision: we estimate that >90% of our current organizers and engaged members are past fellows, but <20% of fellows become engaged members. Here are our takes on how to run this well:
These are facilitation and accountability groups for alignment research upskilling based on the AI Alignment Research Engineer Accelerator (ARENA). We offer this instead of directing people to ARENA itself because the program has limited capacity and seems to focus on working professionals looking for a career change into AI safety. Since we began in late March, 8 people have joined, and a majority plan to become full-time AI safety researchers after graduation. It is too early to say how successful this project is, but we have found the ARENA materials extremely useful for upskilling. Virtual attendance has been a consistent challenge, so we plan to run in-person groups next semester with food and boba.
Previously, only our most involved members (most of them organizers) did technical or governance research (and they have done amazing work!). We have long struggled to offer research opportunities due to a shortage of qualified mentors in AI safety. No professors were actively interested in the topic, and programs like SPAR, which we helped build, would quickly saturate with applicants. Currently, we are experimenting with a promising system with Georgia Tech faculty as mentors and experienced organizers as research managers. In April, we put out a call for researchers, to which over 30 students responded, and we currently have 6 projects running over the summer.
We also got to do lots of unique projects this year as opportunities crossed our desks. We view these as longer-term investments in building our network, portfolio, and reach.
We organized a track focused on LLM evals at Georgia Tech’s main AI hackathon, with support from Apart Research and Anthropic. Yixiong wrote a more detailed retrospective here. To summarize:
In February, we demonstrated red-teaming techniques to Congressional lawmakers at the Exhibition on Advanced AI with CAIP, successfully jailbreaking ChatGPT to generate bioweapons information and reveal sensitive medical data. Publicizing this was great for our on-campus credibility and renewed interest in our governance work.
We responded to two national AI strategy RFIs: the National AI Action Plan and the National AI R&D Strategic Plan. We put out a call for researchers and chose ~12 members, followed by roughly 3 weeks of panicked meetings and writing. This was the first time we were able to directly include public policy students and faculty, and that effort has since turned into our governance working group. We think responding to RFIs is a great low-cost way to engage on-campus audiences, get ideas out, and create a reference document for on-campus AI governance efforts.
We held the largest AI safety event in Atlanta, with introductory workshops from Changlin Li and Tyler Tracy as well as a keynote from Jason Green-Lowe of CAIP. This was very rewarding to set up and seemed to reach many non-undergraduates, and it’s something we’ll consider running again next year.
We piloted a version of AI Futures’ tabletop exercise edited to be more accessible for non-technical and low-context GT faculty and staff - thank you so much to James Vollmer for the materials! We think this is useful for working with public policy and national security experts, and it was one of the few things able to attract senior faculty, even if they thought the game conditions were ridiculous. We’ll keep iterating and hope to bring this to public policy groups next year.
We gave a talk at the GT Open Source Program Office Spring Conference arguing against open-weight frontier models. Despite the presentation later being described as “provocative,” we think most people were surprisingly receptive to our work and wanted to learn more.
We participated in a panel on Ethical AI at the Online MS in Computer Science annual conference, which draws about 250 attendees and covers a program with over 10,000 graduates. The reception was similarly good and opened the door for conversations with faculty.
On reflection, we realized our strategic thinking was often limited to the usual activities of a college club - a rather unhelpful label for creating impact. The question is better framed as ‘what should we do as a group of X students, at X university, with X resources?’ With this in mind, we plan to focus part of our effort on redirecting the huge amount of resources Georgia Tech has toward work in AI safety. This looks like:
Yes, this is partially due to timelines. No, we’re not saying undergrads don’t matter. Seeing how soon some of our concerns may materialize, we think it makes sense to leverage our talent-heavy environment (yay universities!) to assemble teams that can tackle real problems. We will spend less time upskilling new undergraduates who will get those skills from other places soon anyway. Concretely, we will:
We think it’s likely that AI will be a major voting issue in upcoming elections, especially as it relates to privacy, misuse, and, most importantly, unemployment. If we can inform key decision-makers - local industry heads, politicians, and community leaders - about what transformative AI looks like and why we’re concerned, we have a chance to make real change. We want to collaborate with orgs like BlueDot Impact and the AI Safety Awareness Foundation to do the last-mile work of distributing their resources and running their workshops. Exactly how we do this is still up in the air, and we’d love to hear good ideas!