I’m delighted to be here at the Royal Society, the place where the story of modern science has been written for centuries.

Now, I’m unashamedly optimistic about the power of technology to make life better for everyone.

So, the easy speech for me to give - the one in my heart I really want to give…

…would be to tell you about the incredible opportunities before us.

Just this morning, I was at Moorfields Eye Hospital.

They’re using Artificial Intelligence to build a model that can look at a single picture of your eyes…

…and not only diagnose blindness, but predict heart attacks, strokes, or Parkinson’s.

And that’s just the beginning.

I genuinely believe that technologies like AI will bring a transformation as far-reaching…

…as the industrial revolution, the coming of electricity, or the birth of the internet.

Now, as with every one of those waves of technology, AI will bring new knowledge…

…new opportunities for economic growth, new advances in human capability…

…and the chance to solve problems that we once thought beyond us.

But like those waves, it also brings new dangers and new fears.

So, the responsible thing for me to do – the right speech for me to make – is to address those fears head on…

…giving you the peace of mind that we will keep you safe…

…while making sure you and your children have all the opportunities for a better future that AI can bring.

Now, doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.

So, I won’t hide them from you.

That’s why today, for the first time, we’ve taken the highly unusual step…

…of publishing our analysis on the risks of AI…

…including an assessment by the UK intelligence communities.

These reports provide a stark warning.

Get this wrong, and AI could make it easier to build chemical or biological weapons.

Terrorist groups could use AI to spread fear and destruction on an even greater scale.

Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.

And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely…

…through the kind of AI sometimes referred to as ‘super intelligence’.

Indeed, to quote the statement made earlier this year by hundreds of the world’s leading AI experts:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Now, I want to be completely clear:

This is not a risk that people need to be losing sleep over right now.

I don’t want to be alarmist.

And there is a real debate about this - some experts think it will never happen at all.

But however uncertain and unlikely these risks are, if they did manifest themselves, the consequences would be incredibly serious.

And when so many of the biggest developers of this technology themselves warn of these risks…

…leaders have a responsibility to take them seriously, and to act.

And that is what I am doing today – in three specific ways.

First, keeping you safe.

Right now, the only people testing the safety of AI…

…are the very organisations developing it.

Even they don’t always fully understand what their models could become capable of.

And there are incentives, in part, to compete to build the best models quickest.

So, we should not rely on them marking their own homework, as many of those working on this would agree.

Not least because only governments can properly assess the risks to national security.

And only nation states have the power and legitimacy to keep their people safe.

The UK’s answer is not to rush to regulate.

This is a point of principle – we believe in innovation; it’s a hallmark of the British economy…

…so we will always have a presumption to encourage it, not stifle it.

And in any case, how can we write laws that make sense for something we don’t yet fully understand?

So, instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government.

To do that, we’ve already invested £100m in a new taskforce…

…more funding for AI safety than any other country in the world.

And we’ve recruited some of the most respected and knowledgeable figures in the world of AI.

So, I’m completely confident in telling you the UK is doing far more than other countries to keep you safe.

And because of this – because of the unique steps we’ve already taken – we’re able to go even further today.

I can announce that we will establish the world’s first AI Safety Institute – right here in the UK.

It will advance the world’s knowledge of AI safety.

And it will carefully examine, evaluate, and test new types of AI…

…so that we understand what each new model is capable of…

…exploring all the risks, from social harms like bias and misinformation, through to the most extreme risks of all.

The British people should have peace of mind that we’re developing the most advanced protections for AI of any country in the world.

Doing what’s right and what’s necessary to keep you safe.

But AI does not respect borders.

So we cannot do this alone.

The second part of our plan is to host the world’s first ever Global AI Safety Summit next week, at Bletchley Park – the iconic home of computer science.

We’re bringing together the world’s leading representatives…

…from civil society…

…to the companies pioneering AI…

…and the countries most advanced in using it.

And yes – we’ve invited China.

I know there are some who will say they should have been excluded.

But there can be no serious strategy for AI without at least trying to engage all of the world’s leading AI powers.

That might not have been the easy thing to do, but it was the right thing to do.

So, what do we hope to achieve at next week’s Summit?

Right now, we don’t have a shared understanding of the risks that we face.

And without that, we cannot hope to work together to address them.

That’s why we will push hard to agree the first ever international statement about the nature of these risks.

Yet AI is developing at breathtaking speed.

Every new wave will become more advanced, better trained, with better chips, and more computing power.

So we need to make sure that as the risks evolve, so does our shared understanding.

I believe we should take inspiration from the Intergovernmental Panel on Climate Change…

…which was set up to reach an international scientific consensus.

So, next week, I will propose that we establish a truly global expert panel…

…nominated by the countries and organisations attending…

…to publish a State of AI Science report.

Of course, our efforts also depend on collaboration with the AI companies themselves.

Uniquely in the world, those companies have already trusted the UK with privileged access to their models.

That’s why the UK is so well-placed to create the world’s first Safety Institute.

And at next week’s Summit I will work together with the companies and countries to deepen our partnerships.

My vision, and our ultimate goal, should be to work towards a more international approach to safety…

…where we collaborate with partners to ensure AI systems are safe before they are released.

And so to support this, we will make the work of our Safety Institute available to the world.

That’s the right thing to do morally, in keeping with the UK’s historic role on the international stage.

And it’s also the right thing economically, for families and businesses up and down the country.

Because the future of AI is safe AI.

And by making the UK a global leader in safe AI, we will attract even more of the new jobs and investment that will come from this new wave of technology.

Just think for a moment about what that will mean for our country.

The growth it will catalyse, the jobs it will create, the change it can deliver – for the better.

And that’s the third part of our plan – to make sure that everyone in our country can benefit from the opportunities of AI.

We’ve already got strong foundations.

Third in the world for tech, behind only the US and China.

The best place in Europe to raise capital.

All of the leading AI companies - choosing the UK as their European headquarters.

The most pro-investment tax regime…

The most pro-entrepreneur visa regime, to attract the world’s top talent…

…and the education reforms to give our own young people the skills to succeed.

And we’re going to make it even easier for ambitious people with big ideas to start, grow, and compete in the world of AI.

That’s not just about having the technical skills, but the raw computing power.

That’s why we’re investing almost a billion pounds in a supercomputer thousands of times faster than the one you have at home.

And it’s why we’re investing £2.5bn in quantum computers, which can be exponentially quicker than even those supercomputers.

To understand this, consider how Google’s Sycamore quantum computer…

…can solve a maths problem in 200 seconds that would take the world’s fastest supercomputer 10,000 years.

And as we invest more in our computing power, we’ll make it available for researchers and businesses, as well as government…

…so that when the best entrepreneurs in the world think about where they want to start and scale their AI businesses, they choose the UK.

And finally, we must target our scientific efforts towards what I think of as AI for good.

Right across the western world, we’re searching for answers to the question of how we can improve and increase our productivity.

Because that’s the only way over the long-term to grow our economy and raise people’s living standards.

And in a million different ways, across every aspect of our lives, AI can be that answer.

In the public sector, we’re clamping down on benefit fraudsters…

…and using AI as a co-pilot to help clear backlogs and radically speed up paperwork.

Just take for example, the task of producing bundles for a benefits tribunal.

Before, a week’s work could produce around 11.

Now – that takes less than an hour.

And just imagine the benefits of that rolled out across the whole of government.

In the private sector, start-ups like Robin AI are revolutionising the legal profession…

…writing contracts in minutes, saving businesses and customers time and money.

London-based Wayve is using sophisticated AI software to create a new generation of electric, self-driving cars.

But more than all of this – AI can help us solve some of the greatest social challenges of our time.

It can help us finally achieve the promise of nuclear fusion, providing abundant, cheap, clean energy with virtually no emissions.

It can help us solve world hunger, by making food cheaper and easier to grow…

…and preventing crop failures by accurately predicting when to plant, harvest or water your crops.

And AI could help find novel dementia treatments or develop vaccines for cancer.

That’s why today we’re investing a further £100m to accelerate the use of AI…

…on the most transformational breakthroughs in treatments for previously incurable diseases.

Now I believe nothing in our foreseeable future will be more transformative for our economy, our society, and all our lives, than this technology.

But in this moment, it is also one of the greatest tests of leadership we face.

It would be easy to bury our heads in the sand and hope it’ll turn out alright in the end.

To decide it’s all too difficult, or the risks of political failure are too great.

To put short-term demands ahead of the long-term interest of the country.

But I won’t do that.

I will do the right thing, not the easy thing.

I will always be honest with you about the risks.

And you can trust me to make the right long-term decisions…

…giving you the peace of mind that we will keep you safe…

…while making sure you and your children have all the opportunities for a better future that AI can bring.

I feel an extraordinary sense of purpose.

When I think about why I came into politics…

Frankly, why almost anyone came into politics…

It’s because we want to make life better for people…

…to give our children and grandchildren a better future.

And we strive, hour after hour, policy after policy, just trying to make a difference.

And yet, if harnessed in the right way, the power and possibility of this technology…

…could dwarf anything any of us have achieved in a generation.

And that’s why I make no apology for being pro-technology.

It’s why I want to seize every opportunity for our country to benefit in the way I’m so convinced that it can.

And it’s why I believe we can and should, look to the future with optimism and hope.

Thank you.

trevor:

And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely...

The UK’s answer is not to rush to regulate.

This is a point of principle – we believe in innovation, it’s a hallmark of the British economy…

…so we will always have a presumption to encourage it, not stifle it.

This is pretty unfortunate, as far as speeches go, especially because it foreshadows similar policies to be adopted in the US. But it's not surprising, given the pace at which AI's information-processing capabilities are already becoming the keys to the kingdom for major governments and militaries around the world, e.g. for SOTA information warfare in the hybrid-warfare era solidified by the Ukraine War.

The bit that came immediately after those lines also felt pretty important:

And in any case, how can we write laws that make sense for something we don’t yet fully understand?

So, instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government.

To do that, we’ve already invested £100m in a new taskforce…

…more funding for AI safety than any other country in the world.

And we’ve recruited some of the most respected and knowledgeable figures in the world of AI.

So, I’m completely confident in telling you the UK is doing far more than other countries to keep you safe.

And because of this – because of the unique steps we’ve already taken – we’re able to go even further today.

I can announce that we will establish the world’s first AI Safety Institute – right here in the UK.

It will advance the world’s knowledge of AI safety.

And it will carefully examine, evaluate, and test new types of AI…

…so that we understand what each new model is capable of…

…exploring all the risks, from social harms like bias and misinformation, through to the most extreme risks of all.

The British people should have peace of mind that we’re developing the most advanced protections for AI of any country in the world.

To me this seemed like good news - "don't rush to regulate, actually take the time for experts to figure out what makes sense" sounds like the kind of approach that might actually give sensible regulation rather than something that was quickly put together and sounded good but doesn't actually make any sense.

trevor:

I think it's clear that the overall tone of this speech was pro-acceleration, just giving lip service to AI safety. You never know for sure with high-level politician speeches, of course, since plausible deniability and playing both sides are baked in; this is their area of expertise and has been for centuries. And in spite of that, this speech was still unusually strong against AI safety.

He ends the speech on this note:

And that’s why I make no apology for being pro-technology.

It’s why I want to seize every opportunity for our country to benefit in the way I’m so convinced that it can.

And it’s why I believe we can and should, look to the future with optimism and hope.

He also explicitly says he understands AI safety concerns and considers them unlikely, which makes his rejection of them in favor of acceleration even worse. Money is still being dangled rather than spent, and it could go to AI safety's enemies at any time; they are still holding the cards. This makes it even more relevant to evaluate the odds that we're already in the timeline where major governments and militaries are enamored with SOTA AI-powered psychological manipulation, rather than leaving that question unexamined.

this speech was still unusually strong against AI safety.

I think that's a reasonable read if you're operating in a conceptual framework where acceleration and safety must be mutually exclusive, but the sense I got was that that's not the framework he's operating under. My read of the speech is as pro-acceleration and pro-safety. Invest a lot in AI development, and also invest a lot in ensuring its safety.

I think it's definitely possible that Rishi Sunak might be operating in an epistemic environment where both AI capabilities and AI alignment seem easy, but that's also bad news. 

If leaders think that alignment is easy, then that's setting humanity up for a situation where leaders pick the alignment engineers who are the best at loudly saying "yes, I can do it, pick me pick me pick meeeeee" and then everyone dies because the leadership stacked their team with people with the strongest tendency to imagine themselves succeeding, when in reality humans solving alignment might be like chimpanzees doing bridge engineering or rocket science.

If we had regulation in the UK ASAP, governments would still be able to exploit existing systems without burning the remaining timeline before the finish line. But this indicates that people will probably have to keep trying to solve alignment under race dynamics instead of during a regulatory pause, and £100m is probably not worth that, especially because that £100m will give the UK leverage over the AI safety community, instead of regulation, which would give it leverage over AI capabilities companies.

It's important not to ignore that this speech is to the general public.
While I agree that "in the most unlikely but extreme cases" is not accurate, it's not clear that this reflects the views of the PM / government, rather than what they think it's expedient to say.

Even if they took the risk fully seriously, and had doom at 60%, I don't think he'd say that in a speech.

The speech is consistent with [not quite getting it yet], but also consistent with [getting it, but not thinking it's helpful to say it in a public speech]. I'm glad Eliezer's out there saying the unvarnished truth - but it's less clear that this would be helpful from the prime minister.

It's worth considering the current political situation: the Conservatives are very likely to lose the next election (due no later than Jan 2025 - but elections often happen earlier [this lets the governing party pick their moment, have the element of surprise, and look like calling the election was a positive choice]).
Being fully clear about the threat in public could be perceived as political desperation. So far, the issue hasn't been politicized. If not coming out with the brutal truth helps with that, it's likely a price worth paying. In particular, it doesn't help if the UK government commits to things that Labour will scrap as soon as they get in.

Perhaps more importantly from his point of view, he'll need support from within his own party over the next year - if he's seen as sabotaging the Conservatives' chances in the next election by saying anything too weird / alarmist-seeming / not-playing-to-their-base, he may lose that.

Again, it's also consistent with not quite getting it, but that's far from the only explanation.

We could do a lot worse than Rishi Sunak followed by Keir Starmer.
Relative to most plausible counterfactuals, we seem to have gotten very lucky here.

So, next week, I will propose that we establish a truly global expert panel, nominated by the countries and organisations attending, to publish a State of AI Science report

This could result in something. Right now there seems to be a sort of quiet majority position among AI researchers that the risk is substantial and potentially near, but that was just a survey of AI researchers who had not really been invited to engage deeply with the topic and reach an actionable conclusion.

This could be the point when this becomes global common knowledge, the atmosphere in government and in the public consciousness could tilt.

£100m for safety research

£2.5bn in quantum computers, which can be exponentially quicker than those computers still.

This could be read as investing 25 times as much into the most inscrutable, potent form of capabilities research? There are very few people who even have intuitions for what QCs are capable of; I don't feel great about this xD

Well, this is nice to see! Perhaps a little late, but still good news...