Review

On November 1st and 2nd, the UK held an international AI Summit. Speeches were given, institutes founded, roundtables held, and 28 countries and the EU signed the Bletchley Declaration. This is a brief overview of events leading up to and at the summit, following up on last month’s Update on the UK AI Taskforce & upcoming AI Safety Summit.

Pre-summit

Prime Minister Rishi Sunak gave a speech in late October at the Royal Society to introduce the themes of the summit. He was optimistic about the promise of AI but said he felt compelled to highlight the UK intelligence community’s stark warnings, citing dangers like AI-supported biochemical weapons, cyber-attacks, disinformation, and “in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely…through the kind of AI sometimes referred to as ‘super intelligence’”. However, he did downplay near-term existential risk: “This is not a risk that people need to be losing sleep over right now.” 

He talked about the importance of third-party testing of models and his pride in the £100m taskforce, and he announced a new AI Safety Institute, which will evaluate new types of AI across many aspects of risk. He argued that AI safety is an international concern, and that the summit would be attended by civil society, AI companies, and leading countries, adding “yes – we’ve invited China.” 

Taking inspiration from the Intergovernmental Panel on Climate Change, he proposed a global panel, nominated by the countries and orgs attending the summit, to publish a State of AI Science report. He argued that the UK’s tax and visa regimes make it ideal for European AI work, and announced several government projects: the construction of a £1b supercomputer; £2.5b for quantum computing; and £100m for using AI for breakthrough treatments for previously incurable diseases. 

These projects build on the government’s existing initiatives, such as the £100m announced for BridgeAI to encourage the use of AI in “low-adoption sectors, such as construction, agriculture and creative industries”, and £290m in a “broad package of AI skills initiatives”. 

Ahead of the summit, the UK asked leading AI developers to outline their AI Safety Policies across nine areas:

  • Responsible capability scaling
  • Evals and red-teaming
  • Model reporting & information sharing
  • Security controls including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritizing research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits. 

Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft, and OpenAI complied, and you can see their reports here. I won’t describe them in this post, so if you’re interested in those, check out Nate Soares’ Thoughts on the AI Safety Summit company policy requests and responses, which includes Matthew Gray’s thoughts after a close reading of the policies. 

The Summit

The summit’s attendees included academics, civil society, the governments of 27 countries, AI industry orgs and leaders (partially listed here), and multilateral organizations. The focus was on frontier AI, which they defined as "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models", though they also considered “specific narrow AI which can hold potentially dangerous capabilities”.


Day One

King Charles opened the summit with a virtual address, comparing the rise of powerful AI to the discovery of electricity, the splitting of the atom, the world wide web, and the harnessing of fire. Like Sunak, he highlighted the potential for AI to help with medicine, carbon-neutral energy, and so on, and the necessity of international cooperation.

At the summit, 28 countries and organizations including the US, EU, and China signed the Bletchley Declaration, calling for international co-operation to manage the risks of AI. The declaration notes that their agenda for addressing frontier AI risk will focus on:

  • Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • Building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

The majority of the first day was devoted to a series of roundtable discussions: four on Understanding Frontier AI Risks and four on Improving Frontier AI Safety. Summaries from the roundtables are available here. They mostly follow the theme of “there’s some risk now, there’ll be more sooner or later, we should research/invest in/govern AI safety, but hopefully without losing the promise of AI”, but for slightly more detail, here are the eight subjects (in bold) with my summaries of their summaries of what the participants agreed on, with reduced redundancy and hedging:

Risks to global safety from Frontier AI Misuse: GPT-4 et al. already make cyberattacks and biochemical weapon design “slightly easier”, and these risks will grow as capabilities do. Some companies are putting safeguards on their models, but this needs to be supported by government action. 

Risks from Unpredictable Advances in Frontier AI Capability: frontier AI is way ahead of predictions from a few years ago, and increasing investment means this trend will continue. Advanced AI is promising for health, education, environment, and science. However, it’s dangerous enough that all frontier models must be developed and tested rigorously, regardless of their potential benefits. Open-sourcing risks the spread of powerful models to incautious or malicious actors, and perhaps should be discouraged. However, evaluation tools are safer to share. 

Risks from Loss of Control over Frontier AI: Current AI is controllable and non-agentic. The future is uncertain, so we should consider incentives for safe development. Some decisions should never be deferred to an AI. 

Risks from the Integration of Frontier AI into Society: Current frontier AI is an existential threat to democracy, human & civil rights, and fairness. We need to clarify how existing tools and laws apply to AI, and we need better evals, including societal metrics that reflect real-life deployment contexts. 

What should Frontier AI developers do to scale responsibly? We don’t know whether capability scaling is inevitable, but we should prepare for risks anyway. Responsible scaling policies are promising, but probably not sufficient. Improving these systems should happen over months, not years. Governance will be necessary as well as company policies. The UK and US AI Safety Institutes will be important for all this.

What should National Policymakers do in relation to the risk and opportunities of AI? International cooperation is necessary. Regulation, correctly done, can support innovation rather than stifling it. 

What should the International Community do in relation to the risk and opportunities of AI? International cooperation is necessary. 

What should the Scientific Community do in relation to the risk and opportunities of AI? We need models that are safe by design and tools like non-removable off switches. We should be cautious about open-source models, and wary of a concentration of power like what happened with the internet. We need to coordinate on a list of open research questions. However, these open questions and other problems aren’t purely technical; they’re sociotechnical. 

There was also a panel discussion on AI for the next generation, though no report was published from it, and some more roundtables the next day reached similar conclusions. 

Day Two

On the second day, Sunak convened “a small group of governments, companies and experts to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good” while “UK Technology Secretary Michelle Donelan will reconvene international counterparts to agree next steps”. There are fewer reports from this day, but the chair released a statement about the summit and the Bletchley Declaration. This 10-page statement highlights some of the participants’ suggestions, which I'll summarize under the following three categories:

Inequality

Equity is important, and we must ensure that “the broadest spectrum [is] able to benefit from AI and [is] shielded from its harms”. Multi-stakeholder collaboration is essential, as is reducing barriers to entry for women and minority groups. The impact of AI may be unequal when AI is trained on biased or discriminatory data sets, which may perpetuate harm. To improve this, we should promote and work with the UN’s AI for Good programme and initiatives such as the UK’s AI for Development programme.

Further, AI could exacerbate international inequality, as poorer countries may lack the technology stack required to design and deploy AI while still being affected by its use elsewhere. 

Law and evaluation

Voluntary commitments are not sufficient; legal regulation will be necessary. In some cases, models should be proven to be safe before they’re deployed. Governments shouldn’t just test models pre- and post-deployment, but also during development and training runs. Supporting this, we need to develop better tools for predicting a model’s capabilities before training. Safety testing shouldn’t be restricted to development but should include testing in real, deployed contexts. 

The government should set standards for models’ propensity for accidents or mistakes, devised so that they can be measured reproducibly. 

Knowledge sharing and community building

Open source models may be extra risky, though they do promote innovation and transparency. 

We shouldn’t focus only on the frontier or only on the current state of AI, but on both. Harms already exist, such as the spread of false narratives and the implications for democratic elections, AI-enhanced crime, and AI increasing inequality and amplifying biases. AI will likely be used to interfere in elections soon. 

While the summit didn’t focus on military use of AI, it is important, and the chair of the summit welcomed the Summit on Responsible AI in the Military Domain that was co-hosted by the Netherlands and Korea in February 2023.

The chair of the summit emphasized the need for international cooperation, and welcomed the Council of Europe’s work to negotiate the first intergovernmental treaty on AI, the G7 Hiroshima AI Process, and the Global Challenge to Build Trust in the Age of Generative AI. While discussing the tradeoff between domestic and international action, the statement notes that “several countries welcomed the forthcoming review of the 2019 OECD Recommendation on Artificial Intelligence, which informed the principles agreed by the G20”. The statement also praises the UNESCO Recommendation on the Ethics of AI, which has the “broadest current international applicability”.

For more details, you can see a summary of the UK’s AI policies and government updates from the summit here. The chair made two more statements on the second day, one short one here that doesn’t cover anything new, and another here, which announces a “State of the Science” report, covered below.

Speeches and announcements

Kamala Harris attended, though Joe Biden did not, staying in the US and signing an executive order. You can see a concise summary of the order here and a detailed breakdown in Zvi’s summary of the executive order and follow-up collation of Reactions to the Executive Order. At the summit, Harris announced a new US institution, the National Institute of Standards and Technology (NIST) AI Safety Consortium.

Several notable figures made speeches:

  • Dario Amodei gave a speech to the summit on Anthropic’s Responsible Scaling Policy, transcribed here.
  • Elon Musk had a long interview with Rishi Sunak, in which he praised the UK’s response and especially the decision to include China at the summit.
  • Several more talks are available here, including from Stuart Russell, Max Tegmark, Jaan Tallinn, and more. 

The summit will be the first in a series, with the next being virtual and hosted by South Korea in mid-2024, and the third hosted by France in late 2024.

State of the Science report

During the summit, the chair announced a new report with the purpose of understanding the capabilities and risks of Frontier AI. The report will be chaired by Yoshua Bengio, Turing Award winner and member of the UN’s Scientific Advisory Board. 

The report will “facilitate a shared science-based understanding of the risks associated with frontier AI and to sustain that understanding as capabilities continue to increase”. It won’t introduce new material, but rather summarize the best existing research, and it will be published ahead of the next AI Safety Summit (though I don’t know whether this means the virtual summit in mid-2024, or the in-person one in late-2024). 

The AI Safety Institute

As mentioned briefly above, the UK has introduced its new AI Safety Institute (AISI) as “an evolution of the UK’s Frontier AI Taskforce”. This presentation to parliament has the full details on the AISI, but here’s my very brief summary:

The institute’s three core functions are to:

  • Develop and conduct evaluations on advanced AI systems
  • Drive foundational AI safety research
  • Facilitate information exchange

The AISI is not a regulator, and won’t determine government regulation. Rather, it will inform UK and international policymaking and provide tools for governance and regulation (e.g. secure methods to fine-tune systems with sensitive data, platforms to solicit collective input and participation in model training and risk assessment, techniques to analyze training data for bias). 

Initially funded with £100 million, “the Institute will be backed with a continuation of the Taskforce’s 2024 to 2025 funding as an annual amount for the rest of this decade, subject to it demonstrating the continued requirement for that level of public funds.” 

To see what the Institute, née Taskforce, was up to before this change, and who they’re working with, check out my previous post. To hear what we at Convergence Analysis think of the Summit and the AISI’s plans, stay tuned! We are working on a post. 
 
