Multipolar AI is Underrated

by Allison Duettmann
17th May 2025
Setup

AGI timelines are rapidly shortening, with a growing number of predictions and scenarios suggesting that something close to AGI could emerge within 2-5 years. Currently, most plausible short-timeline scenarios lean heavily toward centralized or "unipolar" outcomes: either via AI labs closely collaborating with or being nationalized by the government in a "positive" scenario, or through an uncontrollable breakout of an AGI singleton in a "negative" scenario.

This post is an invitation to consider alternative scenarios – scenarios that are decentralized or “multipolar”. To provide a starting point for doing so, I highlight a) some advantages and disadvantages of unipolar vs. multipolar AI scenarios, b) technology seeds that could move us closer to a multipolar scenario, and c) factors to take into account when considering the likelihood of each scenario. Rather than trying to convince you that multipolar scenarios are necessarily better or more likely, I hope to show that multipolar scenarios are at least underexplored enough in terms of both desirability and likelihood to warrant more attention. 

Unipolar vs. Multipolar AI

The concepts "multipolar" and "unipolar" describe scenarios across a wide spectrum. At the extreme unipolar end, a single superintelligent AGI system, centrally controlled by one global authority, might maintain global stability through stringent surveillance and enforcement, managing all risks from other AI actors or disruptive technologies. At the opposite multipolar extreme, a diverse network of AI agents, human actors, and hybrid entities cooperates within decentralized frameworks of distributed protocols, institutions, and shared norms. Intermediate scenarios might involve a few governmental bodies managing distinct nationalized AI projects, or a broad array of corporations, nonprofits, and public-interest groups collaborating through shared AI initiatives. Here, my focus is on evaluating the feasibility and safety advantages of moving away from the extreme unipolar scenario toward multipolar arrangements.

Why Fear a Multipolar Scenario?

The extreme unipolar scenario is often proposed not only because it seems more likely but because it is perceived as safer. Multipolar scenarios indeed carry specific risks, including:

  • Unchecked tech proliferation: The widespread dissemination of AI-enhanced biotechnology and sophisticated cyberattack capabilities could provide numerous actors with the ability to launch civilization-threatening attacks.
  • Inherent instability: Continuous risk of individual AGI actors attempting takeovers creates an environment of persistent instability.
  • Multipolar traps and coordination failures: Society risks becoming stuck in persistent inadequate equilibria due to coordination problems and multipolar traps.
  • Geopolitical urgency: Currently, the U.S. holds a unique advantage that could facilitate a successful unitary takeover, potentially locking in Enlightenment values globally. However, this window of opportunity may rapidly diminish if rival powers like China catch up or actively sabotage U.S. efforts.

Why Fear a Unipolar Scenario?

While multipolar scenarios have widely discussed and intuitive risks, the dangers associated with seemingly safer unipolar alternatives, e.g. those in which one organization controls AGI development, remain underexplored yet substantial. These include:

  • Value lock-in: Centralized control could permanently embed values ignorant of or opposed to those desired by large segments of the global population.
  • Institutional stagnation: History demonstrates that large institutions frequently suffer from gradual mission creep that stifles innovation and progress, as exemplified by the history of institutions like IRBs.
  • Internal corruption: Centralization of ultimate power in one organization incentivizes intense internal competition for such power, frequently favoring malevolent actors who are not afraid to be ruthless in getting to the top.
  • Single points of failure: Systems that localize technological capability in one point increase vulnerability to catastrophic accidents and create irresistible honeypots for external attacks or extortion.
  • Susceptibility to AI deception: A unipolar organization typically allocates fewer resources for rigorous red-teaming than an entire multipolar ecosystem, increasing vulnerability to deception, e.g. by sophisticated AI systems.
  • First-strike instabilities: We are currently in a multipolar world of many powerful rival actors holding civilization-destroying weapons. The mere credibility or perceived likelihood of a unitary takeover by a single effort could incentivize rival powers to initiate preemptive catastrophic kinetic attacks.
  • Historical precedents: Unipolar governance scenarios historically correlate with totalitarianism, severe human rights abuses and even genocide, while multipolar institution designs have proven more resilient and supportive of human flourishing.

It is hard to weigh the risks of the multipolar and unipolar scenarios against each other. However, given how many unaddressed risks the unipolar scenario carries, it is odd that it is so popular. Going by their risks alone, the multipolar scenario might very well turn out to be the lesser evil, but it is at least underexplored enough to warrant careful consideration. Ideally, an attractive multipolar scenario would proactively develop mechanisms to mitigate its inherent risks, rather than simply ignoring or downplaying them. Let's look at a few building blocks for multipolar paths that try to do this.

Which Multipolar AI Paths to Build Instead?

There aren't many coherent visions of these scenarios, but a few starting points include:

  • Comprehensive AI Services: In this view, advanced AI systems are interpreted as "services" that deliver specialized results for a task given some resources and time. Since most of what humans do can be thought of as a task, including the task of creating a new service, the overall ecosystem of services is comprehensively intelligent. As AI advances, we will automate AI R&D tasks using AI services as well, leading to faster automation of the comprehensive services that drive civilization forward, without building one superintelligent AGI.
  • Cooperative AI: Cooperative AI studies the development of cooperative intelligence in AI systems. Cooperation built everything civilization has achieved, while failures of cooperation caused humanity's most important problems. As AIs increasingly take on tasks and roles formerly in the domain of humans, new multi-agent dynamics between AIs, and between humans and AIs, can give rise to failure modes and opportunities for cooperation that are qualitatively different from human-to-human cooperation. Ideally, we want to both address new collusion and deception risks, while also leveraging increasingly ubiquitous AI assistants to improve our institutions and overcome societal cooperation challenges.
  • Gaming the Future: A proposal to expand the interaction architecture of civilization that already facilitates voluntary cooperation amongst humans to include AI systems. Over centuries, humans developed systems of institutions, protocols, norms, rights and technologies that increasingly disincentivized involuntary interference and incentivized cooperation for mutual benefit. Updating this system to make it fit for cooperating with AI systems that differ substantially from humans involves strong computer security and the gradual embodiment of AI systems in digital networks and protocols that adapt human systems of checks and balances to the unique risks and opportunities these AI systems bring. Ultimately, advances in AI automation could profoundly reshape society by substantially increasing the potential gains from cooperation – more deals will seem attractive due to their absolute benefits even if their relative upside is low. This AI Paretotopia enhances long-term stability and prosperity by continuously reinforcing more mutually beneficial cooperation.

Each of these views addresses some risks arising in multipolar human-AI settings while trying to reap the new benefits that can emerge from having a more diverse set of intelligences collaborating. I would love to see more detailed scenarios in this spirit. If you know of any, please let me know. Foresight Institute is also interested in funding the creation of more scenarios and projects building the respective technology seeds.

Technology Seeds for Secure Multipolar AI Worlds

In addition to outlining alternatives to the singleton AGI scenarios, it would be useful to have concrete prototypes that can function as technology seeds of a beneficial multipolar world. Here are a few areas Foresight Institute is interested in funding through our grants program, divided into three larger categories. We are not the only ones interested in these approaches – Future of Life Institute, Forethought Foundation, Joe Carlsmith and many others have all independently proposed technology seeds, some of which overlap partially with the ones proposed below.

  1. Building Decentralized Security & Compute Foundations

Isolation of Components for Resilient Defense-in-Depth

The benefits of multipolarity, such as avoiding the creation of single points of failure, not only apply at the level of institutions but translate across the technology stack. We know how to build more resilient systems with multipolarity in mind, from the ground up. Take a formally verified system such as the seL4 microkernel, which was used to secure autonomous systems that successfully withstood a DARPA red-teaming challenge.

seL4 achieves security through the decentralization of individually breakable parts. For instance, seL4 uses "capabilities" as access control mechanisms, such that components can only interact with memory regions, threads, and other objects if explicitly given the capability to do so. This type of security setup embodies the Principle of Least Authority (PoLA), which mandates that each component of the overall system has access only to the minimum authority necessary to complete its task, ensuring that an attacker who compromises one part of the system gains no more than that part's minimal authority.
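seL4's capabilities are kernel objects managed in C, but as a loose, language-level analogy, here is a minimal Python sketch of the object-capability pattern behind PoLA (all names are hypothetical):

```python
class Capability:
    """An unforgeable token granting specific rights on a specific object."""
    def __init__(self, obj, rights):
        self._obj, self._rights = obj, frozenset(rights)

    def invoke(self, right, *args):
        if right not in self._rights:
            raise PermissionError(f"capability does not grant '{right}'")
        return getattr(self._obj, right)(*args)


class MemoryRegion:
    def __init__(self, size):
        self._data = bytearray(size)

    def read(self, offset, n):
        return bytes(self._data[offset:offset + n])

    def write(self, offset, payload):
        self._data[offset:offset + len(payload)] = payload


region = MemoryRegion(1024)
# Each component receives only the authority its task requires:
logger_cap = Capability(region, rights={"read"})    # read-only component
writer_cap = Capability(region, rights={"write"})   # write-only component

writer_cap.invoke("write", 0, b"hello")
print(logger_cap.invoke("read", 0, 5))              # b'hello'

try:
    logger_cap.invoke("write", 0, b"oops")          # exceeds granted authority
except PermissionError as e:
    print(e)                                        # least authority enforced
```

A compromised logger in this sketch can leak data but never corrupt it; the blast radius of any single breach is bounded by the capabilities that component holds.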

Finally, seL4's isolation properties are proven through mathematical methods, creating a formal assurance of resilience. Combining mathematical provability with these decentralizing design features and rigorous DARPA red-teaming creates robust "defense in depth", in which different security approaches are used in combination to fill potential gaps in one another. seL4 is a great example of how to design systems bottom-up in a decentralized manner such that individually breakable parts can nevertheless make up a resilient whole.

Improving Red-teaming

The release of many AI models has shown that even extensive internal or outsourced red-teaming often misses issues that are identified only once a much larger public tinkers with the models in hard-to-conceive ways after broader deployment. As AI systems get more complex, opening up AI models earlier and in more meaningful ways would more effectively mobilize the collective, cost-free intelligence of such broad userbases to find and patch issues fast.

One actor in particular could become a valuable red-team reinforcement: why not mandate the NSA to responsibly disclose the many zero-days, backdoors, and other vulnerabilities in US companies' systems that it has already discovered or bought up on darknet marketplaces? Even better if it provided a time window before the vulnerability is publicly released to incentivize fast fixes, in addition to offering funding or technical support to help fix them.

As AI systems get increasingly used by malicious actors to find and exploit vulnerabilities and flaws in other AI systems, defense complexity will likely soon outpace human capacity. Bug bounty programs that explicitly incentivize AI-driven automated vulnerability discovery, paired with the aforementioned responsible disclosure practices, might become necessary to keep up.
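Today's mechanical tooling hints at what AI-driven discovery could automate at scale. As a minimal sketch, here is a property-based fuzz test using the hypothesis library against a deliberately buggy, hypothetical parser; an AI-driven pipeline would generate the targets and invariants itself:

```python
from hypothesis import given, strategies as st


def parse_port(s: str) -> int:
    """Hypothetical parser under test: intended to accept only 0-65535."""
    return int(s)  # bug: no range check


@given(st.text())
def test_parser_never_returns_invalid_port(s):
    try:
        port = parse_port(s)
    except ValueError:
        return  # rejecting malformed input is acceptable behavior
    # hypothesis mechanically searches for counterexamples, e.g. "-1" or "99999"
    assert 0 <= port <= 65535
```

Run under pytest, this finds the missing range check within seconds; the point is that invariant-driven search, not human intuition, does the work.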

Privacy Tooling to Protect Human Autonomy 

Big-data-driven AI progress in the hands of the wrong (nation-state) actors, who find the promise of automated 24/7 targeted surveillance just too tempting, poses serious risks to personal privacy and identity, in addition to providing vectors for centralized systems to lock in power.

Fortunately, there is a whole stack of privacy-preserving technologies like self-sovereign identity (SSI), differential privacy, homomorphic encryption, zero-knowledge proofs (zk proofs), and secure multi-party computation (MPC) that could help mitigate some of these threats. However, for now at least, implementing many of these approaches in systems would make them a lot more difficult and costly to use than the standard, non-privacy-preserving solutions. R&D funding is needed to make them more competitive. 
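To make one of the cheaper items on this stack concrete, here is a minimal sketch of differential privacy applied to a counting query; the data and names are hypothetical:

```python
import random


def dp_count(records, predicate, epsilon=0.5):
    """Differentially private count: true count + Laplace(0, 1/epsilon) noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise of scale 1/epsilon
    suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two iid exponentials is Laplace-distributed.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise


patients = [{"age": 70, "diabetic": True},
            {"age": 34, "diabetic": False},
            {"age": 55, "diabetic": True}]
print(dp_count(patients, lambda p: p["diabetic"]))  # noisy answer near 2
```

The analyst learns approximately how many patients are diabetic without any single record being identifiable from the output, which is exactly the property that makes such queries legal in domains like healthcare.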

Some areas for innovation, such as the handling of healthcare data or financial data are legally inaccessible to non-private AI systems, so private AI, even if likely less competitive in principle, could have a competitive advantage in these domains because it might be the only legal solution. We could build technology seeds in these areas first for proof of principle before scaling to other areas. 

  2. AI for Empowering Human Sensemaking & Cooperation

AI Fiduciaries for Cooperation on Shared Goals

A disadvantage of multipolar systems is that they can get stuck in “multipolar traps” – situations where the self-interest of many individuals compels them to act against their collective interest, leading to poorer outcomes for everyone.  

A centralized broker coming in to facilitate cooperation is not the only solution to such cases. Personal AIs might soon help provide some of the multi-way commitment dynamics that civil society currently lacks. Humans have become pretty good at cooperating; over centuries, we have evolved better norms and more complex institutions for making credible commitments. Cooperation is one of the big multipliers for individual prosperity and civilizational progress alike.

But things aren't perfect. First, it's hard to find someone out there in the sea of 7+ billion people whose wants are complementary to ours, allowing us to trade (search costs), especially because nobody broadcasts all of their preferences publicly but tends to share only a very concrete subset. Even when a potential match is made, we need to negotiate and agree on the terms, which can take its sweet time, especially when the stakes are high (bargaining costs). Once that's done, ideally we enshrine everything in a legally binding document, unless, of course, you know and trust the party personally or their excellent reputation online. Otherwise, if things go wrong and they bail, you risk an expensive, complex process to enforce whatever deal you agreed to (enforcement costs).

Modernity developed rights, contracts, institutions, reputation systems, and more as technologies for cooperation, but they are costly in time and money. AI systems acting as fiduciaries for their humans could dramatically reduce transaction costs further by taking on part of the process. Imagine your personal AI scouring the web 24/7 to identify opportunities for mutually beneficial deals before negotiating the terms, likely interacting primarily with other individuals' AI agents.
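As a toy sketch of just the search-cost step (agent names and schema are hypothetical), imagine each fiduciary AI publishing a machine-readable subset of its human's offers and wants, which other agents scan for complementary matches:

```python
# Each personal AI publishes a deliberately limited preference subset.
agents = {
    "alice_ai": {"offers": {"web design"},  "wants": {"tax advice"}},
    "bob_ai":   {"offers": {"tax advice"},  "wants": {"web design"}},
    "carol_ai": {"offers": {"translation"}, "wants": {"web design"}},
}


def find_matches(agents):
    """Return pairs whose offers and wants are mutually complementary."""
    matches, names = [], list(agents)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if (agents[a]["offers"] & agents[b]["wants"]
                    and agents[b]["offers"] & agents[a]["wants"]):
                matches.append((a, b))
    return matches


print(find_matches(agents))  # [('alice_ai', 'bob_ai')]
```

Real fiduciary agents would of course negotiate over far richer preference representations, but even this brute-force pairing shows how continuously running agents collapse search costs that humans rarely pay.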

Andrew Critch et al. have explored a special AI design that might unlock entirely new cooperative equilibria. Imagine that, quite unlike humans, who are opaque to one another, you have open source bots capable of reading each other's code. If bots can reliably verify how another bot will respond to a request, they might cooperate in cases where uncertainty would otherwise prevent collaboration.
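As a toy illustration, and not Critch et al.'s actual construction (which relies on provability logic to avoid self-reference problems), here is a loop-free stand-in in which a bot cooperates exactly when its opponent's source code matches its own:

```python
import inspect


def cooperate_bot(opponent_source: str) -> str:
    return "C"


def defect_bot(opponent_source: str) -> str:
    return "D"


def mirror_bot(opponent_source: str) -> str:
    # Cooperate iff the opponent's code is identical to our own: a crude,
    # recursion-free stand-in for proof-based verification of behavior.
    return "C" if opponent_source == inspect.getsource(mirror_bot) else "D"


def play(bot_a, bot_b):
    """Each bot sees the other's source before choosing its move."""
    src_a, src_b = inspect.getsource(bot_a), inspect.getsource(bot_b)
    return bot_a(src_b), bot_b(src_a)


print(play(mirror_bot, mirror_bot))  # ('C', 'C'): verification enables cooperation
print(play(mirror_bot, defect_bot))  # ('D', 'D'): defection detected, no exploitation
```

Mutual source inspection lets mirror_bot cooperate without ever being exploitable by defect_bot, an equilibrium unavailable to mutually opaque players.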

Crucially, such mechanisms for bringing down enforcement costs are not only useful for one-on-one cooperation. On a societal scale, (partially) open source AI assistants might not only help with finding and negotiating multi-way agreements but also make it easier for each party to commit to them, because we can test in advance whether everyone will follow through on the agreement.

This might facilitate high-stakes agreements amongst nation-state actors on a global scale, but more importantly, it might also help solve multipolar traps hindering bottom-up civil society cooperation. Building up the societal capacity for collaboration without relying on centralizing cooperation brokers could create powerful compensating dynamics against the centralization of power over time.

Encrypted AI for Human-in-the-Loop AI Cooperation

It’s often believed that centralized AI architectures will have a long-term advantage by leveraging economies of scale and big data to handle complex problems. Is that necessarily so? Perhaps not. Andrew Trask shows how to train a homomorphically encrypted neural network on unencrypted training data. In a nutshell, the network's intelligence is protected from theft, so valuable AIs can be trained in insecure environments. For instance, the model could be sent to user A with a public key, enabling them to train it on their own data before sending it back to get decrypted and re-encrypted with a different key to be sent to user B to train on their data. The model owner retains control over their network while each user controls their own data. 
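Trask's demo encrypts a full neural network; as a much smaller sketch of the same principle, here is an encrypted linear model using the additively homomorphic Paillier scheme via the python-paillier (phe) library. The user computes a prediction on their own plaintext data against weights they can never read:

```python
# pip install phe  (python-paillier: additively homomorphic encryption)
from phe import paillier

# The model owner generates a keypair and encrypts the model weights.
pubkey, privkey = paillier.generate_paillier_keypair(n_length=1024)
weights = [0.5, -1.2, 0.3]
enc_weights = [pubkey.encrypt(w) for w in weights]

# A user evaluates the model on their own plaintext features. Additive
# homomorphism supports (encrypted weight) * (plaintext feature) and sums
# of encrypted values, so the user never sees the weights and obtains
# only an encrypted prediction.
features = [1.0, 2.0, -0.5]
enc_prediction = sum(ew * x for ew, x in zip(enc_weights, features))

# Only the holder of the secret key can read the result.
print(privkey.decrypt(enc_prediction))  # 0.5*1.0 - 1.2*2.0 + 0.3*(-0.5) = -2.05
```

The same asymmetry scales up conceptually: the model's intelligence stays protected in an insecure environment, while control over decryption stays with whoever holds the key.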

Now different organizations can cooperate on shared projects without having to worry about exposing their IP. This allows for selective cooperation and lowers the need to centralize access to data and capabilities across organizations.

What’s more, if the AI model is encrypted and can only make encrypted predictions, then to it, the real world is encrypted. A human controls the secret key and can decide to decrypt individual predictions only or fully unlock the model itself. Another neat embodiment of PoLA. In practice, homomorphic encryption is not in a state yet where it could be implemented across large scale AI operations. But, as pointed out in the section on private AI above, there might be pockets of domains where private approaches to handling data are the only legal solution to make progress – so let’s start there.

Specialized Tool AI R&D Scientists 

We're on track to develop more agentic systems. A common conception is that agency goes hand in hand with centralization of capabilities. Imagine a future version of DeepResearch paired with a future version of Operator, developing situational awareness over time. It's not hard to see how an AI system that gets better at solving an ever more general domain of problems, paired with an agentic ability to influence the world, brings us one step closer to an AGI singleton. Can we push back against humans being out-competed and gradually disempowered by AI agents?

A diversity of specialized research assistants is gradually surfacing across the scientific landscape, from BrainGPT, an open source neuroscience model that is already on par with human researchers when it comes to suggesting and predicting outcomes of neuroscience experiments, to FutureHouse's open source Paper Q&A, which can answer questions across biology papers. These tools share a focus on open source development and interoperability to support human problem-solving. From individual AI scientists, we see the first interoperable 'Research Fleets' emerge: Stanford's 'Virtual Lab' combines a meta-agent that coordinates sub-agents using tools like Rosetta and AlphaFold2.

Increasingly capable specialists that become superintelligent in a narrow problem space, but are woven into an open source framework of other human and AI specialists focused on a diversity of domains, come closer to the Comprehensive AI Services model described earlier. The more incentives we provide for developing such specialized, domain-specific tool AI scientists, the quicker they could be copied to other domains, interoperate with each other, and be used and built on by the scientific community, improving human researchers' problem-solving ability. Whether or not these AI scientists will steer clear of AI agency down the line might not matter as long as they are embedded early in a system of checks and balances.

AI-Automated Forecasting

Sense-making will be increasingly important and difficult in a rapidly changing world in which various AI developments affect not only the rate of AI progress but also of GDP growth and societal complexity. This is especially so if sense-making sources like traditional media are gradually superseded by social media filter bubbles, bot content, deepfakes etc. 

In theory, tools like prediction markets can support better sense-making by incentivizing users to put their money or reputation where their mouth is. The wisdom of the crowd (e.g. the market as a whole) often outperforms not only individual forecasters but also traditional expert surveys across topics. In an ideal scenario, predictions of AI development might open up a ‘Design Ahead’ window between current technological capabilities and future possibilities which could be leveraged to prepare for risks on the horizon. 
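As a minimal sketch of how such crowd estimates are formed, here is the standard trick of pooling probability forecasts by averaging log-odds, with optional extremization; the bot forecasts are made-up numbers:

```python
import math


def logit(p):
    return math.log(p / (1 - p))


def sigmoid(x):
    return 1 / (1 + math.exp(-x))


def pool_forecasts(probs, extremize=1.0):
    """Aggregate independent probability forecasts by averaging log-odds.

    extremize > 1 pushes the pooled estimate away from 0.5, a common
    correction when forecasters share much of their information.
    """
    mean_logit = sum(logit(p) for p in probs) / len(probs)
    return sigmoid(extremize * mean_logit)


bot_forecasts = [0.62, 0.70, 0.55, 0.81]  # hypothetical bot outputs
print(pool_forecasts(bot_forecasts))               # pooled crowd estimate
print(pool_forecasts(bot_forecasts, extremize=2))  # extremized variant
```

Log-odds pooling is only one of several aggregation rules, but it captures the core mechanism: many noisy, partially independent views combine into an estimate more accurate than most of its inputs.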

Unfortunately, in practice, prediction markets are bottlenecked by the fact that humans face high opportunity costs to predict on the many questions we want answers to, not to mention continuously updating their predictions across a complex array of topics. 

AI forecasting bots don't face this opportunity cost and could forecast critical issues 24/7. First experiments exist in the shape of the Metaculus AI bot forecasting competition and the more in-depth, prediction-based FutuResearch reports. Gradually, an ecosystem of bots might emerge, trained on predictions by top forecasters, domain knowledge, historic trends, or the news across fields. Over time, we might not only want AI bots to compete on better forecasts but also add features for AIs to deliberate with each other to see if doing so improves market accuracy overall, as it does for human forecasters.

Why is forecasting important for building better multipolar scenarios? First, predicting which path we're on would be useful in determining whether there are vectors for influencing the path. Second, prediction markets are, in themselves, a more multipolar technology, because they rely on a diversity of actors bringing their local knowledge together and composing it into larger problem-solving, rather than centralizing analysis in one system. Strengthening collective sense-making systems that conserve pluralism could be a buffer against stagnation and the corrupt or deceptive information flows associated with negative unipolar scenarios.

  3. Neurotech for Upgrading Human Cognition

NeuroAI

While neural networks at first only crudely resembled human neural networks, the gap between neuroscience and AI is closing, albeit very slowly. Patrick Mineault et al. show how this trend might turn neuroscience and neurotechnology into tools for making AI systems safer. For instance, rather than relying on RLHF and similar techniques, we might be able to fine-tune existing AI models to better align with what our human brains like, by recording AI's impact on human neural states as a feedback loop. One goal of the emerging field of NeuroAI is brain-informed AI supervision, enhancing both the interpretability and safety of AI behaviors through close alignment with human cognitive processes.

Brain-Computer Interfaces (BCIs)

Brain-computer interfaces (BCIs) represent theoretical pathways for significantly enhancing human cognition, as well as human communication and collaboration with advanced AI systems. Numerous innovative, non-invasive approaches are being explored, including technologies such as ultrasound-based interfaces for neural reading, writing, and mood modulation. For instance, Forest Neurotech is actively developing ultrasound methods for minimally invasive monitoring and therapeutic interventions in neurological and psychiatric conditions. While a focus on less invasive BCIs might be a positive step toward wider adoption, whether they can improve human-AI communication will ultimately depend on the achievable bandwidth. At a minimum, however, advances in BCIs could substantially augment the human side of future human-AI interactions, closing the potentially emerging cognitive gap between humans and increasingly advanced AI systems.

Whole Brain Emulation (WBE) Paradigms

The human brain remains our only working example of general intelligence that integrates capabilities while (mostly) maintaining alignment with human values. Very crudely, Whole Brain Emulation (WBE) involves creating computational replicas of human brains, potentially offering human-aligned AGI that could be accelerated to superhuman performance levels through improved compute efficiency. 

“High-fidelity emulation” paradigms, such as detailed connectomics-based simulations, are normatively promising because they rely on exact mappings of a brain. Yet, their feasibility within short AGI timelines remains highly uncertain, despite promising recent breakthroughs—such as the connectomics revolution led by projects like E11’s fruit fly brain mapping, potentially advancing toward mouse and eventually human brain emulations within the next decade. 

Given the urgency of shorter AGI timelines, "lo-fi emulation" approaches offer a potentially cost-effective alternative. Rather than mapping the entire brain, these approaches use behavioral and neural data recordings, for instance collected through computer-vision cage recordings and simultaneous coarse-grained neural recording methods in the case of a mouse. The goal is to combine those with multimodal machine learning methods as a shortcut to create functionally predictive models of a model organism's behavior and brain state. Netholabs' work on lo-fi mouse emulation is a real-life tech seed of this approach.

While the neuro approaches are possibly the most speculative technology seeds, they could also be the most effective long-term at counter-balancing the centralization of intelligence and power in a single AGI(-building entity). Not only might improved cognition aid us in developing better plans for dealing with AGI risks, but the very existence of a diversity of enhanced mind architectures might kickstart the creation of institutions of checks and balances among a multipolar society of minds.

From Technology Seeds to Technological Realities

To build these technology seeds of a more human-empowered world, Foresight Institute has a Secure AI & Neuro AI Grant Program ($3-5M annually), in addition to a $1M prize focused specifically on the lo-fi emulation area.

We are looking for projects that are open source, interoperable, and rapidly scalable. However, we also acknowledge that any solutions come with new risks. For instance, potential sentience risks of failing to account for emerging moral patienthood in WBEs come to mind. We need evals to avoid false positives and false negatives and generally build with new risks and responses to these risks in mind.

Reality Check: Likelihood of Multipolar vs. Unipolar AI

I've looked at some reasons to believe unipolar AI scenarios might not be inherently safer than multipolar AI scenarios, and at some technology seeds that could grow into components of multipolar AI scenarios that address some of the problems associated with risky multipolar scenarios. However, unipolar scenarios aren't more prevalent just because they are considered safer, but also because they are considered more likely.

Is there any point in exploring multipolar scenarios or are they just too unrealistic to even bother? In this final section, let’s look at a few factors that are relevant for considering this question. 

This section is highly speculative. Each factor considered below is highly dynamic and continuously in flux. Rather than analyzing them in depth here, they are mostly given as an incomplete and hyperbolic checklist of questions to keep in mind when considering the relative likelihood of different scenarios.

  • How might geopolitical forces—like export controls, nationalization risks, and military conflict—shape whether we end up in a unipolar or multipolar AI world?
    Factors to consider: Trump administration’s deregulation, Stargate program, revocation of Biden-era safety policies favoring acceleration of private efforts with possible heightened risk for sudden nationalization vs. export controls and Deepseek-style innovation leading to balkanization of AI efforts; a projected 30% chance of a Taiwan invasion by 2030 raises the question: if U.S. AI supremacy is evident when China is ready to invade, might this itself trigger kinetic conflict?
  • How do the shifting positions of major AI leaders affect the multipolar vs. unipolar balance in AI development?
    Factors to consider: Mark Zuckerberg supports open source with guardrails; Elon Musk favors openness but has also called for pauses; Sam Altman pushes for for-profit dominance while wistfully hinting at a preference for openness; Dario Amodei supports stronger regulation and possible security protection by U.S. intelligence sector.
  • Do open source models actually increase the influence of open-source AI in shaping a multipolar future—or do they just reinforce dominant powers like the U.S.?
    Factors to consider: Emergence of Deepseek and LLaMA; open-source flywheels creating a more multipolar ecosystem over time vs. skepticism about open source impacting frontier model development; the U.S. might benefit more from open source than China because what it lacks in energy it can make up for in talent.
  • Does the dominance of centralized compute guarantee unipolar outcomes, or could efficiency gains enable a multipolar AI ecosystem?
    Factors to consider: Scaling Era favoring centralized systems vs. growing global economic complexity in the age of advanced AI reducing central planning effectiveness; hardware and model efficiency potentially enabling powerful edge devices and decentralization.
  • Will AI research automation lead to centralized general-purpose systems or to decentralized mixtures-of-experts that distribute capability across many actors?
    Factors to consider: Generalized systems like DeepResearch developing situational awareness vs. specialized systems like BrainGPT and FutureHouse supporting the Comprehensive AI Services model that achieves decentralization through specialization and interoperability of systems.
  • Will a decentralized agent economy sustain a multipolar AI world, or will a few dominant AI agents outcompete all others?
    Factors to consider: Increasing experimentation with a diversity of autonomous, immutable agents powered by cryptocurrency and Web3 integration vs. colluding mega-minds of agents out-competing the rest; rethinking traditional institutions like property rights and capital to create new Schelling points for coordination that protect human actors in an AI economy.
  • Could emulations work in time for traditional AI timelines, and would they favor centralized or distributed control?
    Factors to consider: viability of lo-fi emulations potentially altering AGI trajectories; emulation scenarios as centralized mega-hiveminds vs. diverse, decentralized emulation networks; the question of whether hyper-competition among ems will lead to races to the bottom toward subsistence.
  • Do historical patterns of centralization and decentralization suggest multipolar or unipolar AI scenarios will be more stable?
    Factors to consider: Unipolar single points of failure vs. multipolar "small-kills-all" risks; AGI timelines are critical since longer timelines allow small actors to catch up and open-source advantages to compound; oscillation of unipolar vs. multipolar: systems often start decentralized, then centralize due to economies of scale, then decentralize into a new paradigm because of benefits of specialization etc.

Conclusion

Unipolar AI scenarios, which are popular both because they are seen as more likely and because they are seen as safer, exhibit a number of risks that are underexplored. Conversely, it is possible to conceive of multipolar scenarios and technology seeds that avoid many of these risks, while also avoiding at least some of the risks often associated with multipolar scenarios themselves. Given that the jury is still out on whether we are headed straight into a unipolar AGI world, with current societal and technological developments being compatible with either scenario, it would be useful to incentivize more work on beneficial multipolar AI scenarios and technology seed design.

At Foresight Institute, we are funding some work in this area and are gearing up to fund more. I would be grateful for a) pointers to beneficial multipolar AI scenarios, b) projects building the technology seeds discussed above, c) any feedback and criticism on any of the above.