AI companies are explicitly trying to build AIs that are smarter than humans, despite clear signs that it might lead to human extinction. It will be tragic and ironic if humanity’s largest project ever is an all-out race to destroy ourselves. But can we really stop building more and more powerful AI? Or do we just need to try to “steer” it and hope for the best?
Climate change and other societal failures have led more and more people to realize that the world is not the sensible, ordered place we’ve often been taught to believe it is. The world is a mess, countries can’t cooperate, and there’s no one in charge of making sure we don’t do crazy things like build technology that has a good chance of killing us all. And it looks like that’s what we’re going to do.
So does that mean we should just give up on actually managing the development of AI sensibly? Hope for the best, plan for the worst… well, not the worst, but… a scenario where (if we’re lucky) at least some people survive… maybe the ones who live in the right countries, the ones who own shares of AI companies, the ones with bunkers, …
I’m not doing that. I’m not going to do that. I don’t care how bad the odds look, I plan to go down fighting. Because I believe, deeply, that my cause is just and the truth is on my side. And that means we can win.
When I say “the truth is on my side”, I don’t mean that we’ll definitely lose control of superhuman AI. What I mean is: There is a big risk, and the risk is not worth taking if there’s any non-horrible thing we can do to avoid it.
And there is! Scaling up AI is a massive project: it relies on concentrated supply chains, unprecedented investments, and government support. We can get rid of advanced AI chips and the factories used to produce them, and reduce the amount of computation available for AI instead of aggressively scaling it up. The governments of the world can work together to verify and enforce such an arrangement.
We’ve done this for nuclear weapons. There has still been a gradual proliferation of nuclear capabilities, but the difference here is: superintelligence doesn’t exist yet, so there could be much more political will to prevent it from being developed. Imagine how the US would’ve reacted if North Korea had been trying to build the world’s first nuclear weapon, instead of just its own first nuclear weapon -- I don’t think it would’ve happened.
I don’t know if this is the best plan. But I haven’t heard a better one. And it really seems like time is running out. We don’t know how fast AI progress will be, but it’s just not reasonable to count on it stalling out before we get to real AI. To gamble everyone’s lives on such a prediction. Again, it’s the uncertainty that’s the real knockdown argument here.
Many people I talk to think this proposal is radical or unrealistic, but it’s actually common sense -- if you believe the risk is real, and see no better alternative. Of course, there’s no guarantee that we can make it happen -- the world is a bit of a mess. But I think the barriers here are political, not technical.
So I’m starting an organization to help us do the sane thing. Evitable’s mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligence.
Polls show that most people don’t want superintelligence. But I think people don’t realize just how bad the situation is, or don’t feel like there’s anything they can do about it. And when everything you care about is threatened, you don’t give up, you fight to protect it.
If we can get people to understand how dire the situation is, I think that’s half the battle. The other half is showing them that — at least for now — they still have power.
You don’t have to believe superintelligence is a real thing to support this mission. You just have to believe that countries should not be throwing all of their weight behind AI companies in their efforts to build it.
You don’t have to believe that superintelligence is an extinction risk. You just have to believe that it’s not the right choice for humanity to build it right now.
But I also think more and more people will realize that the risk of extinction from superintelligence is real and urgent. And then there are the other risks. Total unemployment. Extreme, unprecedented concentration of power. The end of human culture and relationships as we know them. All of this is at stake.
As more and more people realize our situation, the only argument against stopping AI will be: “it’s inevitable”. It’s not.