This is a linkpost for https://networksocieties.com/p/the-perpetual-technological-cage
The country or countries that first develop superintelligence will make sure others cannot follow…
You seem to think that superintelligence, however defined, will by default be taking orders from meatbags, or at least care about the meatbags' internal political divisions. That's kind of heterodox on here. Why do you think that?
That's a fair point; I should have been more explicit.
My post examines the risk conditional on the labs solving alignment well enough to keep the ASI under their control.
So yes, I agree that the primary risk is alignment failure: an ASI that no one controls.
I'm just pointing out that even if the labs develop aligned superintelligence, we face a second risk: a global, perpetual monopoly on power.
Unless superintelligence is developed under a global consensus, the risks will be shared by all, but the upside won't be. This is why I signed the superintelligence statement.
The superintelligence statement is the following: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in."
If superintelligence were to benefit all humanity, I might accept some risk in building it because of its immense potential, including its ability to help reduce other existential threats. We may need that help to survive the Great Filter. But as things stand, the risks would be shared by all humanity while the benefits would be concentrated in the US and/or China.
The country or countries that first develop superintelligence will make sure others cannot follow, just as the first nuclear powers built a non-proliferation regime to keep the bomb out of other hands. Ukraine was persuaded to give up its nuclear arsenal in exchange for security assurances from the US, the UK, and Russia under the 1994 Budapest Memorandum.
If the US and China build superintelligence without major international reform, they might offer limited funding and technology to other nations in exchange for cooperation, but they would prevent any other country from building it, by force if necessary.
Yet this time, the stakes go far beyond nuclear weapons. Nuclear non-proliferation merely limited the means of destruction; a monopoly on superintelligence would limit the means of invention itself. It would mean that technological progress for everyone outside their borders could be capped forever. That's why now is not the time: not only because scientists believe it's unsafe, or because there is no strong public buy-in, but also because the US and/or China may be constructing a perpetual technological cage for the rest of humanity.
--
Join YouCongress to debate, vote, and propose policies for safe, democratic AI governance — before a few nations decide for all of us.