The Perpetual Technological Cage

by Hector Perez Arenas
22nd Oct 2025
2 min read
This is a linkpost for https://networksocieties.com/p/the-perpetual-technological-cage
Unless superintelligence is developed under a global consensus, the risks will be shared by all, but the upside won't. This is why I signed the superintelligence statement.

The superintelligence statement is the following:

We call for a prohibition on the development of superintelligence, not lifted before there is
    1. broad scientific consensus that it will be done safely and controllably, and
    2. strong public buy-in.
 
Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.


If superintelligence were to benefit all humanity, I might accept some risk in building it because of its immense potential, including its ability to help reduce other existential threats. We may need help to survive the Great Filter. But at the moment, the risks would be shared by all humanity while the benefits would be concentrated in the US and/or China.

The country or countries that first develop superintelligence will likely ensure that others cannot follow, just as the first nuclear powers created non-proliferation treaties to deter latecomers. Ukraine, for instance, was persuaded to give up its nuclear arsenal in exchange for security assurances from the US and Europe.

If the US and China build superintelligence without major international reform, they might offer limited funding and technology to other nations in exchange for cooperation, but they will ensure that no other country builds it, by force if necessary.

Yet this time, the stakes go far beyond nuclear weapons. Nuclear non-proliferation merely limited the means of destruction; a monopoly on superintelligence would limit the means of invention itself. It would mean that technological progress for humanity outside those borders could be capped forever. That's why now is not the time: not only because scientists believe it's unsafe, and not only because there is no strong public buy-in, but also because the US and/or China may be constructing a perpetual technological cage for the rest of humanity.

--

Join YouCongress to debate, vote, and propose policies for safe, democratic AI governance — before a few nations decide for all of us.