While preparing for an upcoming Convergence Analysis post on the UK AI Taskforce and our recommendations, I looked into their reports, their plans for the upcoming AI safety summit, recommendations from other orgs, and some miscellaneous UK AI events. I doubt we’ll include this in our post, but I thought the updates were worth sharing, so here is a brief summary. 

The UK AI taskforce

In April 2023, the UK government committed £100 million to its new AI Foundation Models Taskforce, led by Ian Hogarth. The taskforce was created in response to the March white paper A pro-innovation approach to AI regulation, and was modeled on the 2020 Vaccine Taskforce, with similar “agility and delegated authority”. 

The government announced an AI Safety Summit on the 1st and 2nd of November at Bletchley Park, and put out a call for expressions of interest, looking for:

people with diverse skills and backgrounds to work in or with the Taskforce, to catalytically advance AI safety this year with a global impact. We're particularly interested in building out "safety infrastructure" and developing risk assessments that can inform policymakers and spur global coordination on AI safety.

In September, the taskforce released their first report. In summary: 

  • Their expert advisory board consists of Yoshua Bengio, Paul Christiano, Matt Collins, Anne Keast-Butler, Alex van Someren, Helen Stokes-Lampard, and Matt Clifford.
  • Initially, their lone but influential AI researcher was Nitarshan Rajkumar, but they are now hiring technical AI experts, including Yarin Gal, head of the Oxford Applied & Theoretical ML Group, and David Krueger, who leads a deep learning and AI alignment group at Cambridge. They plan to scale up their team by an order of magnitude and encourage applications.
  • The taskforce is partnering with several organizations.
  • They've brought in Ollie Ilott, who previously led the Prime Minister’s domestic private office, as director of the taskforce. As far as I can tell, he’s new to the topic of AI. 

The UK AI safety summit

The organizers have released some more details on the upcoming summit. In summary:

  • The summit will focus on frontier AI, which they define as "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models", though they’ll also consider “specific narrow AI which can hold potentially dangerous capabilities”.
  • They mention doing scenario research to "identify where the frontier of AI development currently is and where it may advance to".
  • They're focused on "misuse risks" and "loss of control risks", though they add "This is not to minimize the wider societal risks that AI - at the frontier and not - can have, including misinformation, bias and discrimination and the potential for mass automation". 
  • There’s no mention of existential risk. The original white paper only mentions X-risk once, when arguing for coherence between risk assessors so as not to ignore “‘high impact but low probability’ risks such as existential risks posed by artificial general intelligence or AI biosecurity risks.” 

There are some official pre-summit events, details about which will apparently be published on @SciTechgovuk and other social media channels:

  • October 11th at The Alan Turing Institute: Exploring existing UK strengths on AI safety and opportunities for international collaboration
  • October 12th at the British Academy: Possibilities of AI for the public good: the Summit and beyond
  • October 17th at techUK: Opportunities from AI; Potential risks from AI; Solutions that exist in the tech sector
  • October 25th at the Royal Society: Horizon scanning AI safety risks across scientific disciplines. 

And two upcoming opportunities for public engagement:

  • October 18th: LinkedIn Q&A with Secretary of State Michelle Donelan.
  • November 1st: Watch keynote speeches on the Summit livestream.

Misc.

Several organizations have published recommendations for the taskforce and for the summit. 

Some news articles (e.g. here, here) claim that the government is rushing to finalize an agreement among world leaders before the summit in November. 

There’s also the AI Fringe, a series of events on safe and responsible AI across the UK from October 30th to November 3rd, separate from but complementary to the government’s summit. Their events “will feature a series of keynotes, fireside chats, panels, roundtables and workshops to expand the conversation on AI.” 

Public perception of existential risk

I’ll finish with some personal thoughts on existential risk. In my opinion, it’s a shame that the taskforce, their summit, and the AI Fringe barely address existential risk. There are brief mentions of AI biosecurity risks and hints that the summit may include discussion about “losing control” of AI, but I feel the topic is still largely neglected in their report and summit plans (and at the AI Fringe, even though they found time for a session on moon base psychophysics with lunar-themed entertainment).

It makes sense to me that they are focused on short- and medium-term risks, but I would like to see investment in mitigating longer-term, larger-scale risks. I also suspect (though I won’t try to provide evidence here) that a good chunk of the public see existential risk from AI as a fringe idea[1], and I think that experts and the government should combat that by publicly recognizing and addressing existential risks. Indeed, despite the limited mention of existential risk, some think the UK is too focused on it: in Why the UK AI Safety Summit will fail to be meaningful, Dr Keegan McBride, research lecturer at the Oxford Internet Institute, writes that the “summit and the UK’s current AI strategy is primarily concerned with existential risks” even though “the idea that AI will bring about the end of the world in the near future is not grounded in reality”. Politico also published How Silicon Valley doomers are shaping Rishi Sunak’s AI plans, which, like Dr McBride’s piece, argues that effective altruists are pushing the UK’s policy in the wrong direction: 

Effective altruists claim that super-intelligent AI could one day destroy humanity, and advocate policy that’s focused on the distant future rather than the here-and-now. Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.

I strongly disagree with both these articles, and I’m interested in how people think we can improve public perception of EA and advocacy for existential safety work, as well as how people feel about the UK’s AI safety efforts more broadly. Do you think the UK’s response is promising or lackluster? Should they focus on existential risk, or will that follow from short and medium-term risk reduction? In what direction should we try to steer the taskforce? 

  1. ^

    My colleague, Dr Justin Bullock, has written an article on public perception of AI risk and safety published in the Humanist Perspectives journal. He argues that the recent AIPI and AIMS survey data shows that the public do recognise the existential risk from AI.

Comments

On the bright side, Connor Leahy from Conjecture is going to be at the summit, so there will be at least one strong voice for existential risk present there.

Update, 16th October:

The LinkedIn Q&A with Secretary of State for Science, Michelle Donelan MP, has been moved to today.

The programme for the summit has been released. Brief summary:

Day 1

Roundtables on "understanding frontier AI risks":
1. Risks to Global Safety from Frontier AI Misuse
2. Risks from Unpredictable Advances in Frontier AI Capability
3. Risks from Loss of Control over Frontier AI
4. Risks from the Integration of Frontier AI into Society

Roundtables on "improving frontier AI safety":
1. What should Frontier AI developers do to scale responsibly?
2. What should National Policymakers do in relation to the risks and opportunities of AI?
3. What should the International Community do in relation to the risks and opportunities of AI?
4. What should the Scientific Community do in relation to the risks and opportunities of AI?

Panel discussion on "AI for good – AI for the next generation".

Day 2

"The Prime Minister will convene a small group of governments, companies and experts to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good. In parallel, UK Technology Secretary Michelle Donelan will reconvene international counterparts to agree next steps."