This summer, UChicago XLab is running two programs:
The deadline for the Second Look Fellowship has passed, but we will continue to review applications on a rolling basis for especially qualified candidates until March 15th. Second Look is also looking to hire a full-time generalist co-founder. If you're interested, email me at zroe@uchicago.edu.
The deadline for the Summer Research Fellowship is March 15th at 11:59 PM in your timezone. The rest of this post describes the details of this opportunity. Although I am not directly involved in running this program, I strongly endorse it and encourage anyone with a background in AI safety to apply!
XLab 2026 Summer Research Fellowship:
Apply here.
The Existential Risk Laboratory (XLab) at the University of Chicago is accepting applications for the 2026 Summer Research Fellowship (SRF), a 10-week, in-person research program for students and early-career researchers working on AI safety and security, nuclear security, or the confluence of these issues.
Unlike more established fields, the field of AI safety and security has not yet reached a consensus on what the foundational problems are. The pace of progress, combined with the field's youth, means that important questions and approaches have likely been overlooked. At the same time, the literature is shallow enough that dedicated junior researchers can reach the intellectual frontier within months of focused work. The confluence of AI and nuclear security is arguably even more neglected: vanishingly few researchers have fluency in both domains, leaving critical questions almost entirely unexamined. And within nuclear security itself, the nonproliferation field has traditionally underinvested in right-of-boom issues such as nuclear winter and civil defense, relying on Cold War–era assumptions about deterrence and escalation dynamics that are increasingly outdated. The Summer Research Fellowship aims to give early-career researchers the time, resources, and intellectual community they need to identify consequential but overlooked questions across all of these areas.
Our program is designed to enable fellows to develop their own research agendas. Each fellow scopes their own research question, justifies its importance, identifies appropriate methods, and delivers a significant written product: a journal article, workshop submission, white paper, or equivalent. Fellows are advised by a domain-expert mentor, but the intellectual direction of the project is theirs. This is not a research assistantship; it is practice for leading your own research program.
We’re looking for applicants committed to careers in AI safety or nuclear security who just need the time, resources, and community to develop their ideas. If that describes you, we want to hear from you.
Logistics
Applications are due March 15th at 11:59 PM in your timezone. Please don't spend more than 3 hours on the application. Applicants with existing research proposals should be able to finish the application in about 30 minutes.
The program will run for 10 weeks from June 15th to August 22nd, in-person on the University of Chicago campus.
Stipend: All fellows receive a $10,000 stipend, University of Chicago dorm housing, a full dining hall meal plan, and funds for weekday lunches.
Compute: Technical fellows receive $4,000 in compute and API credit support.
Mentorship: Fellows have regular one-on-one check-ins with a research manager throughout the program. Admitted fellows, with program staff assistance, will be responsible for identifying and reaching out to appropriate domain experts to advise their work.
Workspace: Dedicated coworking space on the University of Chicago campus.
Programming: Workshops on research methodology, including forecasting, open-source intelligence, and research process skills, alongside Q&A sessions with policy practitioners and active researchers on open questions and career paths.
Community: A cohort of 15–20 peers working on related problems. Social events, peer review sessions, and shared living spaces foster lasting professional relationships.
Continued involvement: Fellows may have the opportunity to continue their work with XLab as affiliate researchers depending on their contributions during the summer.
Commitment: Fellows are expected to make the program their primary commitment for the duration of the fellowship. Fellows should not be concurrently working substantial hours at other labs or internships, or taking a full course load, unless doing so is directly relevant to their project.
Who should apply
The SRF is a demanding program. Our ideal applicant has:
demonstrated commitment to a career in AI safety, security, and governance, or in nuclear security
substantive familiarity with the field and its open questions
shown an ability to work and think independently
This program is particularly valuable for people who have built real knowledge of the field but haven’t yet had the opportunity to develop a substantial, sole- or lead-authored research product or explore their interests full-time. No technical background is required, but we expect fellows pursuing technical projects to have completed ARENA or similar material.
If you’re on the fence, we encourage you to apply. Some of our strongest fellows were deeply uncertain about whether they were ready or whether they were even the kind of person who could contribute. If you have a serious interest in these issues and have ideas you want to develop, we want to see your application.
Note: We are only able to accept fellows with US work authorization: US citizens, permanent residents, or international students currently studying in the United States (typically on an F-1 visa). We are unable to sponsor visas. We strongly encourage applicants with US work authorization who are studying at international universities to explicitly note this in their application.