TL;DR: apply by October 27th to join a 13-week research programme in AI safety. You’ll write a technical paper in a team of 3-4, with supervision from an experienced researcher. The programme runs full-time in London.
Apply to be a participant here. We’re also looking for a programme manager, and you can read more about the role here.
London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs) is an AI safety research programme focussed on reducing the risk of loss of control to advanced AI, tackling action-relevant questions about concrete threat models.
LASR participants are matched into teams of 3-4 and will work with a supervisor to write an academic-style...
Edit: Applications for this round are now closed! If you are interested in future rounds, you can express interest here: https://airtable.com/appbzbkQ3OwRBaojt/shruJmwbbk07e1i7y
TL;DR: apply by April 24th, 23:59 GMT+1, to join a 12-week programme and write a technical AI safety paper in a team of 4, with supervision from an experienced researcher. Work full-time from the LISA offices in London, alongside AI safety organisations including Apollo Research, Bluedot Impact and Leap Labs.
Apply to be a participant here
Express interest in being a supervisor here
London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs) is a research programme in which participants work in small teams to publish an AI safety paper and an accompanying blog post.
Teams of 4 will work with a supervisor to write an academic paper,...