TL;DR: Apply to join a 13-week research programme in AI safety. You’ll write a technical paper in a team of 3-4 with supervision from an experienced researcher. The programme is full-time in London.
About LASR:
London AI Safety Research (LASR) Labs is an AI safety research programme focused on reducing existential risk from advanced AI. We focus on action-relevant questions tackling concrete threat models.
LASR participants are matched into teams of 3-4 and work with a supervisor to write an academic-style paper, with support and management from LASR.
We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. 90% of alumni from previous cohorts have gone on to work in AI safety/security, including at UK AISI, Apollo, OpenAI's dangerous capabilities evaluations team, and Coefficient Giving. Many have continued working with their supervisors, or are doing AI safety research in their PhD programmes. LASR will also be a good fit for someone hoping to publish in academia: a significant portion of our papers have been accepted to top ML conferences, including 50% of papers from our spring 2025 cohort accepted to NeurIPS, one of which was an oral presentation at NeurIPS 2025.
Participants will work full-time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, Bluedot Impact, ARENA, and Pivotal. The office will host various guest sessions, talks, and networking events.
Programme details:
The programme will run from July 20th - October 16th (13 weeks). You will receive an £11,000 stipend to cover living expenses in London, and we will also provide food, office space and travel.
In week 0, you will learn about and critically evaluate a handful of technical AI safety research projects with support from LASR. Developing an understanding of which projects might be promising is difficult and often takes many years, but is essential for producing useful AI safety work. Week 0 gives participants space to develop their research prioritisation skills and to learn about different agendas and their respective routes to value. At the end of the week, participants will express their project preferences, and we will match them into teams.
In the remaining 12 weeks, you will write and then submit an AI safety research paper (as a preprint, workshop paper, or conference paper).
During the programme, flexible and comprehensive support will be available, including:
Cutting-edge guidance on automating research workflows
Workshops on writing, engineering and research
Talks from leading AI safety researchers
Career coaching
Accountability and productivity assistance
Who should apply?
We are looking for applicants with the following skills:
Technical ability: Machine learning engineering experience and strong quantitative skills.
Research ability: Willingness to experiment, iterate, and dive into execution under uncertainty. An ability to develop a theory of change for a project focused on impact.
Communication skills: An ability to clearly articulate the outcomes and implications of experiments, coupled with transparent reasoning.
For more detail on how we think about and measure technical and research ability, refer to “Tips for Empirical Alignment Research” by Ethan Perez, which outlines in detail the specific skills valued within an empirical AI safety research environment.
There are no specific requirements for experience, but we anticipate successful applicants will have done some of these things:
Conducted research in a domain relevant to the topics below or research at the intersection of your domain and frontier AI systems.
Gained experience working with LLMs.
Worked on research or machine learning in industry.
Completed, or be in the process of completing, a PhD in a relevant field such as Computer Science, Physics, or Maths.
Research shows that people from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.
Note: this programme takes place in London. Participants without an existing right to work in the UK will be provided with support for visas under the Government Authorised Exchange programme. Please get in touch if you have any visa-related questions: contact[at]arcadiaimpact.org
Topics and supervisors:
All of the projects will be targeted towards reducing X-risk and focused on a concrete threat model. Historically, we’ve had projects focused on AI control, evaluation, and alignment, using both black-box and white-box methods.
Our supervisors for the current round (Winter 2026) are Kola Ayonrinde, Noah Siegel, Dmitrii (Dima) Krasheninnikov, Stefan Heimersheim, David Africa, and Robert Kirk. Some of the topics include mechanistic interpretability, evaluation awareness, model organisms, training dynamics, and CoT faithfulness.
The supervisors for the Summer 2026 round will be announced in the next couple of months. We’ve tended to work with supervisors from Google DeepMind, the UK AI Security Institute (AISI), and top UK universities.
Timeline:
Application deadline: March 30th at 23:59 GMT. Offers will be sent in May, following a skills assessment and an interview.
How to Apply:
You can apply on the LASR Labs website at lasrlabs.org
How is LASR different from other programmes?
There are many similar programmes in AI safety, including MATS, PIBBSS, the Pivotal Research Fellowship, and ERA. We expect all of these programmes to be excellent opportunities to gain relevant skills for a technical AI safety career. LASR Labs might be an especially good option if:
You’re open to learning in-depth about many different kinds of projects
You want to focus on producing an academic-style paper
You like working in a team, with an emphasis on group accountability
You are interested in developing research taste around AI safety projects