The Cadca Transition Map - Navigating the Path to the ASI Singleton

by cadca
26th Jun 2025

Progress

“Acquisition, acquisition, acquisition, merger! Until there was only one business left in every field.”

― Vincent H. O'Neil

 

Human progress is mapped in dozens of different metrics: food supply, gross domestic product, computational power. As civilization has developed, many of these metrics have trended upwards, often growing exponentially over time. Other metrics have bucked that trend, falling instead: child mortality, food scarcity. Among these is one peculiar value of note: the number of sovereign states. Today, there are ~195 sovereign countries in the world. A thousand years ago, there were hundreds of sovereign kingdoms, nations, and states. Ten thousand years ago, humanity lived across ten thousand different tribes.

This is an odd trend, and it raises a peculiar question: why does this seemingly innocuous number follow the trajectory of much more troubling statistics? There are many contributing factors, most of which revolve around a central axis: technological improvements in communication, production, and security have let sovereign states grow in power and influence. Through unification or conquest, those ten thousand tribes eventually aggregated into 195 countries. This is a well-known and well-documented trend, observed by historians and sociologists like Charles Tilly, who famously argued that states consolidated power through competition and warfare.

As mentioned earlier, many of those metrics have seen exponential growth over time. However, there’s a new development that seems poised to deliver those potent returns on a drastically compressed timescale. If you’re reading this, I assume you’re familiar with the idea of the ‘intelligence explosion’, or of the ‘technological singularity’. If humanity is looking at a new technological revolution in artificial intelligence, then the ramifications will be vast, and the metrics we discussed will likely see far greater progress than currently expected.

This brings us back to that peculiar metric: the number of sovereign states. If AI delivers an Intelligence Revolution, particularly in the fields that have historically pressured this metric, then we can expect great change to come to the world in the next few decades. Nations will likely fall, and blocs will likely consolidate. This is rational extrapolation from observed patterns of political and social history, and its ramifications will significantly affect millions, if not billions, of lives across the globe.

 

The Model

This is the Cadca Transition Map. It’s a model I created to help contextualize the effects of the Intelligence Revolution. It is a 2D plane, defined by the axes I’ve labelled ‘Centralization’ and ‘Integration’. To perform a focused analysis of the purely geopolitical and structural dynamics, this model provisionally holds the technical alignment problem constant. This is a thought experiment designed to answer the question: even if we achieve perfectly obedient, controllable AI, could the game theory of its deployment still lead us to catastrophe? It separates the challenge of building the tool from the challenge of using it wisely. This allows us to see a long-term layer of risk that is often overlooked in favor of short-term challenges.

The x-axis represents ‘centralization’, which, in this context, specifically refers to the reduction in sovereign actors towards one. The left-hand extreme is decentralized, which implies many different agents. The right-hand extreme is centralized, which implies one supreme, uncontested agent. The y-axis represents ‘integration’, which, in this context, refers to the agency that artificial intelligence has in governance. Fully unintegrated, the bottom-most extreme, refers to governance entirely under the control and management of humans. Fully integrated, the top-most extreme, refers to governance entirely under the control and management of AI. 

The yellow dot represents our current starting position: fully unintegrated, with only the most minor tasks beginning to see automation, and decentralized, with political power split among more sovereign actors than this model expects will ever exist again. In a later section, I will discuss the vectors that will determine our path across this map, but for now, understand that the most significant pressures push our dot both up and towards the right.

Each of the four corners represents a different state of affairs. These are speculative, but based on rational extrapolation. 

  • The bottom-left represents ‘fragmented nation-states’, which mirrors the modern world. In this corner, competing nations conflict over valuable resources and political power. Human policymakers are entirely in control, with some automation beginning to take hold in low-level aspects of governance.
    • On the left side of this quadrant, power is split between hundreds of nation-states. They all pursue their own agendas, though they can be expected to cooperate when mutually beneficial.
    • On the right side of this quadrant, these states begin to consolidate their power into larger international coalitions. Individual states forfeit their sovereignty to these coalitions in order to remain competitive in the world.
    • At the bottom of this quadrant, AI plays a minimal role in governance. It is used for automating menial tasks, like bookkeeping and software engineering.
    • At the top of this quadrant, AI plays an increased role in governance. As capabilities increase, AI is integrated into the economy, the national security apparatus, and the political spheres, playing significant roles as analysts. Still, most decision-making power lies in human hands.

 

  • The bottom-right represents a ‘panoptic state’. This would be a world similar to our current one, but with a characteristic consolidation of state power from many small nations into larger international blocs.
    • On the left side of this quadrant, one might expect to see one or two hegemonic blocs emerge, with smaller independent states heavily pressured or even puppeted by these blocs. These hegemons may force the rest of the world to adopt their technological and governance standards—their social credit system, their internet protocols, their surveillance architecture.
    • On the right side of this quadrant, one might expect to see those blocs further consolidate into true global states. Pressure from increasing centralization would inevitably force one state to emerge supreme, either through significant coercion or global warfare.
    • At the bottom of this quadrant, one might expect AI to play a minimal role in governance. It would be responsible for data analysis on a colossal scale: mass surveillance, predictive crime reporting, societal monitoring. However, human bureaucrats still make the majority of decisions.
    • At the top of this quadrant, one might expect AI to play a larger but still strictly managed role. It might be used to automate much of the lower-level implementation: law enforcement, tax collection, economic planning. Significant to note here is a remaining human elite, who would still make the high-level decisions beyond the purview of AI.

 

  • The top-left represents the ‘artificial agora’. This would be a world similar to ours, but where artificial intelligence is capable and cheap enough to massively upset the balance of power, creating a rapidly changing and potentially dangerous world.
    • On the left side of this quadrant, one might expect to see a world dominated by countless AI entities of varying origins and goals. Governments, corporations, NGOs, and even individuals can develop and distribute these powerful entities. Power is completely fractured and based on data, processing power, and code.
    • On the right side of this quadrant, one might expect to see fewer AIs, concentrated in a few global super-states. These super-states would be primarily managed by their AIs, and those AIs would likely be in direct competition with each other.
    • At the bottom of this quadrant, AIs are powerful tools but ultimately still tools. They are hyper-competent, but their motivations and controls are still recognizably human. The danger is that these tools are so powerful and fast that their actions are impossible to fully control or understand.
    • At the top of this quadrant, one might expect AI to fully supplant human autonomy. At this level, AIs would essentially be the final decision-makers. They would likely compete with one another in pursuing their numerous goals, and in their conflicts, humans would be near-insignificant.

 

  • The top-right represents the ‘supersovereign.’ This would be a world where a single, unified artificial intelligence (or a tightly integrated council of AIs) has become the sole governing entity for all of humanity. It has the intelligence and the power to manage global affairs with unparalleled efficiency. This is Bostrom’s notion of a ‘singleton’, specifically in ASI form.
    • On the left side of this quadrant, one might expect to see a small council of advanced AI agents that rule collectively. Each AI would likely have its own specialization (logistics, research, security), and the council would make decisions together.
    • On the right side of this quadrant, instead of several independent agents, all of civilization is governed by one AI agent that wields absolute authority over its domain. It would unilaterally manage every facet of governance.
    • At the bottom of this quadrant, AI manages most aspects of governance, but major decision-making and agenda-setting are still ultimately in the hands of a few individuals at the top of the pyramid.
    • At the top of this quadrant, humanity is fully removed from the process of governance, replaced by AI. The AI supersovereign will have full authority over global civilization, with zero input from humans.

This model is meant to help understand and communicate the future of human society. The starting point is deliberately in the bottom left, but we could extrapolate down and to the left to frame the history of human society: it has always been a story of technological innovation and integration. It has always been a story of political consolidation and centralization. The model assumes that these forces continue, and illustrates possible, or even inevitable, ‘states’ that society might go through.
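
To make the geometry of the map concrete, here is a minimal sketch of how its state space could be represented in code. The coordinates, the 0.5 threshold, and the starting values are my own illustrative assumptions, not part of the map itself; the model is qualitative.

```python
from dataclasses import dataclass

@dataclass
class MapState:
    """A position on the transition map; both coordinates lie in [0, 1].

    centralization: 0 = many sovereign actors, 1 = one uncontested actor.
    integration:    0 = fully human governance, 1 = fully AI governance.
    """
    centralization: float
    integration: float

    def quadrant(self) -> str:
        # Read the quadrant off the halves of each axis.
        right = self.centralization >= 0.5
        top = self.integration >= 0.5
        if top and right:
            return "supersovereign"
        if top:
            return "artificial agora"
        if right:
            return "panoptic state"
        return "fragmented nation-states"

# A rough, assumed starting position: highly decentralized, barely integrated.
today = MapState(centralization=0.1, integration=0.05)
print(today.quadrant())  # -> fragmented nation-states
```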

 

Vectors

Currently, I’ve plotted our starting point in the bottom left corner. This is because I believe our current state to be simultaneously the least centralized and the least integrated we will ever be. As AI advances, forces will inevitably push us out of this corner towards the others. I refer to these forces as ‘vectors’, because they have both a direction and a magnitude. Here are some example vectors that I see arising. 

 

Centralization (Rightwards, slightly upwards)

This vector is illustrated by the reduction in sovereign states that have existed over time. As communication becomes faster and cheaper, militaries become more powerful and better equipped, and populations grow larger and more productive, the most powerful states will invariably subsume less powerful ones. This is the principle of ‘survival of the fittest’ on a global political scale. With AI, this concentration becomes even more pronounced. The organizations that can muster the most compute will be able to train the most powerful AI.

Automation (Upwards, slightly rightwards)

For the purpose of this conversation, I consider ‘integration’ synonymous with ‘automation’. After all, AI is ultimately just automated intelligence, and intelligence is one of humanity’s most powerful tools. Nations that integrate AI more quickly are externally more competitive and internally more productive. Thus, they will be poised to expand and consolidate faster than their non-empowered counterparts. This can quickly negate downward vectors, as the conversation shifts from “we don’t know how to integrate this technology safely” to “we must integrate this technology quickly in order to stay ahead of our enemies”.

Societal Reluctance (Downwards, slightly leftwards)

There is currently significant apprehension across society about the adoption of AI and its role in our world. Much of the current discussion centers around job displacement, environmental impact, intellectual property, etc. Beyond that, much of our cultural understanding of AI is rooted in antagonism. It is the ‘big bad’ in many of our sci-fi stories, whether overtly or covertly. The magnitude of this vector is most likely to weaken over time, but would likely see a major spike in the event of an AI disaster.

Political Inertia (Leftwards, slightly downwards)

Policy change is notoriously difficult and slow to effect across national and international scales. Lawmakers already struggle to enact meaningful change in response to emergent, rapidly changing crises. The scale and scope of AI could just as plausibly exacerbate this as counter it. Beyond that, agents are naturally incentivised to pursue their own goals, and to seek power as an instrument to that end. Thus, many agents will be hesitant to relinquish their power to larger coalitions that might not always pursue the same goals.

AI Proliferation (Leftwards, upwards)

The dissemination of powerful AI to the public is a notable vector that runs perpendicular to most others. By making these AIs open-source, more organizations and individuals can access them and use their power. This decentralizing force is coupled with an increase in integration across the board, as the tools become easier to access and modify and less expensive to run. Alternatively, antitrust actions against large tech companies that develop AI, like Google, could plausibly have the same effect.


It is important to note that, aggregating all of these vectors, I anticipate a significant average tendency towards the top-right of the map. The vectors that push towards centralization and integration are overall more powerful than their opposites. Thus, the question shifts from if we reach a supersovereign to how we get there. This is the purpose of the map: to chart our course there. The important thing to note here is that I do believe we have the power to choose this course, because, ultimately, we decide the magnitude of the vectors. If we wish to avoid the panoptic state, we empower upward vectors and leftward vectors. If we wish to avoid the agora, we empower the downward and rightward vectors. 
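
As a rough illustration of that aggregation, the sketch below encodes each example vector as a direction on the map plus a placeholder magnitude, then sums them into a net drift. The magnitudes are my own guesses, chosen only to reflect the relative strengths argued above, not estimates of any kind; under those assumptions, the drift is positive on both axes, i.e. towards the top-right.

```python
# Directions follow the section headings above: (delta_centralization, delta_integration).
VECTORS = {
    "centralization":      (+1.0, +0.2),   # rightwards, slightly upwards
    "automation":          (+0.2, +1.0),   # upwards, slightly rightwards
    "societal reluctance": (-0.2, -1.0),   # downwards, slightly leftwards
    "political inertia":   (-1.0, -0.2),   # leftwards, slightly downwards
    "ai proliferation":    (-0.7, +0.7),   # leftwards, upwards
}

# Placeholder magnitudes, assuming the consolidating forces dominate.
MAGNITUDES = {
    "centralization":      1.0,
    "automation":          1.0,
    "societal reluctance": 0.4,
    "political inertia":   0.5,
    "ai proliferation":    0.6,
}

def net_drift(vectors, magnitudes):
    """Weighted sum of all vectors: the overall pull on the yellow dot."""
    dx = sum(magnitudes[name] * vx for name, (vx, vy) in vectors.items())
    dy = sum(magnitudes[name] * vy for name, (vx, vy) in vectors.items())
    return dx, dy

dx, dy = net_drift(VECTORS, MAGNITUDES)
print(f"net drift: ({dx:+.2f}, {dy:+.2f})")  # both positive -> towards the top-right
```

Changing the magnitudes is exactly what I mean by choosing our course: shift the weights, and the drift points somewhere else on the map.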

Equally notable are the inherent effects each quadrant would have on the vectors themselves. For example, the agora would be chaotic and unstable by nature, so we could expect to see a surge in societal reluctance within a relatively short amount of time. The panoptic state, on the other hand, is stable by nature: societal reluctance can be carefully controlled or even forcefully overwritten. Thus, it is no great leap in logic to believe that a panoptic state could persist for a very long time, with its status quo reinforced by advanced AI. Even so, the pressure towards greater automation and maximum efficiency will inevitably push it upwards.

This brings me to the logical conclusion of the model. Considering the impact of these vectors, I see only two end states for continued AI development. The first is the inevitable development of a supersovereign. It is the ultimate embodiment of progress, shaped and encouraged by the exigent environmental pressures. Even if society ventures into a different quadrant, I do believe those pressures will inevitably draw it towards the top-right corner. The second I refer to as a ‘flashout’, a fanciful way of saying ‘complete civilizational collapse’: nuclear war, an ASI-engineered bioweapon, human extinction. The possibilities far outstrip the imagination. I think this outcome is unlikely, but far from impossible. It should be a sobering point of reflection about what we truly want for AI and for ourselves.

 

The Fundamental Variable

“All models are wrong, but some are useful.”

― George E. P. Box

There is an element of this situation that the model is simply insufficient to account for: human values. Trying to model the entire breadth of human values, and how they would interact with a rapidly changing world, is, surprisingly, quite difficult. Yet any shift in values could have a massive impact on what the world looks like in any of these quadrants. For example, a panoptic state could plausibly be a totalitarian regime straight out of Orwell; equally plausible is a megacorporation that puts less value on state authority and more on economic growth. The myriad AIs of the agora would be as varied and unpredictable as the people who develop them. Even the supersovereign itself could be set with any configuration of values; a utopia builder is just as plausible as a paperclip machine. Both are fully integrated, fully centralized. The main difference is the values that we enshrine in the systems themselves.

Having established a model of the structural dynamics driving geopolitical change, we can now address the question of values. The model is deliberately value-neutral, but the path society takes across its landscape is not. The route to a centralized, integrated state is critically important, as different paths create irreversible, path-dependent outcomes for the character of the final state. The how inexorably shapes the what. With that consideration, it’s important to reiterate the dangers of entering either the agora or the panoptic state on the path to the supersovereign. A supersovereign that evolves from a panoptic state will almost invariably be authoritarian, as that is the system it would be built upon. A supersovereign emerging from the artificial agora, on the other hand, would only be possible by eradicating every ASI that threatens it; so, while predicting such an ASI’s terminal goal is nigh-impossible, it would likely be highly aggressive and aligned with suboptimal goals. Either path poses long-term risks for the health and safety of humanity.

Even if we do get a properly aligned supersovereign with proper values, the mechanics of governance still need to be discussed. Should any individual human or group of humans have the power to influence the supersovereign and its decisions? A supersovereign would be immune to human greed, fear, and wrath. It could operate with perfect rationality and create a state free of corruption. Does reintroducing humanity into decision-making threaten that? These are questions that far too few people are discussing. Human society needs to discuss them together, because the forecasts are in, and it seems we may need to make a decision within the next few decades.

Much of the conversation about AI safety is about the technical challenges that researchers face. This is inarguably a crucial topic, even if it is a bit inaccessible to most people. However, even the most finely calibrated machine is bound to falter if misused. We must discuss precision, but we must also discuss accuracy: we must come to a consensus about what future we want to aim for, or, failing a consensus, at least articulate what we want in clear, concrete terms. Doomerism is not a valid substitute for long-term planning.

Thanks for reading my first post! In the course of writing this, I've gone down a rabbit hole concerning CEV; rather than including those thoughts here, I will likely share them in my next post.