Update on the UK AI Taskforce & upcoming AI Safety Summit
While preparing for an upcoming Convergence Analysis post on the UK AI Taskforce and our recommendations, I looked into the taskforce's reports, its plans for the upcoming AI Safety Summit, recommendations from other orgs, and some miscellaneous UK AI events. I doubt we'll include this in our post, but I thought the updates were worth sharing, so here is a brief summary.

The UK AI taskforce

In April 2023, the UK government committed £100 million to its new AI Foundation Models Taskforce, led by Ian Hogarth. The taskforce was created following the government's March white paper, *A pro-innovation approach to AI regulation*, and was modeled on the 2020 Vaccine Taskforce, with similar "agility and delegated authority". The government announced an AI Safety Summit on the 1st and 2nd of November at Bletchley Park, and put out a call for expressions of interest, looking for:

> people with diverse skills and backgrounds to work in or with the Taskforce, to catalytically advance AI safety this year with a global impact. We're particularly interested in building out "safety infrastructure" and developing risk assessments that can inform policymakers and spur global coordination on AI safety.

In September, the taskforce released their first report. In summary:

* Their expert advisory board consists of Yoshua Bengio, Paul Christiano, Matt Collins, Anne Keast-Butler, Alex van Someren, Helen Stokes-Lampard, and Matt Clifford.
* Initially, their lone but influential AI researcher was Nitarshan Rajkumar, but they're now hiring technical AI experts, including Yarin Gal, head of the Oxford Applied & Theoretical ML Group, and David Krueger, who leads a deep learning and AI alignment group at Cambridge. They plan to scale up the team by an order of magnitude and encourage applications.
* The taskforce is partnering with several organizations:
    * ARC Evals, a non-profit focused on AI evaluations. The report emphasizes a close collaboration with ARC Evals in the lead-up to the summit, alon