- P(Human extinction by 2030) = 25% * 30% = 7.5%
- P(Permanent world dictatorship by winner by 2030) = 25% * 40% = 10%
Imagine that you received such a letter. Then quitting the race, as the letter asks, would reduce P(your dictatorship) from 3.3% to 0. But how does it reduce P(extinction)? Maybe a better argument is that alignment is hard AND that alignment to anyone's dictatorship or another dystopian future (e.g. North Korea letting most of its population starve) is WAY harder?
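A minimal sketch of where the 3.3% presumably comes from, assuming the letter's 10% chance of a permanent dictatorship is split evenly among roughly three frontier labs (the even split and the lab count are assumptions, not stated in the letter):

$$
P(\text{your dictatorship}) \approx \frac{P(\text{permanent dictatorship by winner})}{3} = \frac{25\% \times 40\%}{3} = \frac{10\%}{3} \approx 3.3\%
$$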
An AI lab head could do a lot to prevent extinction if they did not run an AI lab. For starters, they could make it their (and their org's) full-time job to convince their entire network that extinction is coming. Then they could try convincing the public and run for office in the US to push for an AI pause.
But yes, I haven't spelled out a detailed alternative plan for what to do if you're a well-networked billionaire trying to fix AI risk, and it would be worth doing so.
This might have relevant stuff: Support the movement against AI xrisk
This letter is addressed to you, the head of an AI lab.
I understand there are many reasons you might care a lot about creating ASI.
I'm sure you also have other frameworks for approaching life decisions besides using probabilities like this. You can still spend 15 minutes thinking through this framework's conclusions, and then discard it.
If I were in your shoes as an AI lab head, here are the probabilities I would assign, as of 2025-10, to the following outcomes by 2030.
I would not personally accept this deal if I were in your shoes.
This may already be obvious to you.
I do want your lab to be forcibly shut down in order to prevent these outcomes, but before pushing for that, I wanted to make one last attempt to persuade you to voluntarily shut down your lab.