This note was written as part of a research avenue that I don’t currently plan to pursue further. It’s more like work-in-progress than Forethought’s usual publications, but I’m sharing it as I think some people may find it useful.
Introduction
There have been various proposals to develop AGI via an international project.[1]
In this note, I:
In an appendix, I give a plain English draft of a treaty to set up my ideal version of an international project. Most policy proposals of this scale stay very high-level. This note tries to be very concrete (at the cost of being almost certainly off-base in the specifics), in order to envision how such a project could work, and assess whether such a project could be feasible and desirable.
I tentatively think that an international AGI project is feasible and desirable. More confidently, I think that it is valuable to develop the best versions of such a project in more detail, in case some event triggers a sudden and large change in political sentiment that makes an international AGI project much more likely.
Is an international AGI project desirable?
By “AGI” I mean an AI system, or collection of systems, that is capable of doing essentially all economically useful tasks that human beings can do and doing so more cheaply than the relevant humans at any level of expertise. (This is a much higher bar than some people mean when they say “AGI”.)
By an “international AGI project” I mean a project to develop AGI (and from there, superintelligence) that is sponsored by and meaningfully overseen by the governments of multiple countries. I’ll particularly focus on international AGI projects that involve a coalition of democratic countries, including the United States.
Whether an international AGI project is desirable depends on what the realistic alternatives are. I think the main alternatives are 1) a US-only government project, 2) private enterprise (with regulation), 3) a UN-led global project.
Comparing an international project with each of those alternatives, here are what I see as the most important considerations:
Greater constraints on the power of any individual country, reducing the risk of an AI-enabled dictatorship.
More legitimate.
More likely to result in some formal benefit-sharing agreement with other countries.
Potentially a larger lead over competitors (due to consolidation of resources across countries), which could enable:
More bureaucratic, which could lead to:
More actors, which could make infosecurity harder.
Greater likelihood of a monopoly on the development of AGI, which could reduce racing and leave more time to manage misalignment and other risks.
More government involvement, which could lead to better infosecurity.
More centralised, which could lead to:
More feasible.
Fewer concessions to authoritarian countries.
Less vulnerable to stalemate in the Security Council.
Less legitimate.
Less likely to include China, which could lead to racing or conflict.
My tentative view is that an international AGI project is the most desirable feasible proposal to govern the transition to superintelligence, but I’m not confident in this view.[2] My main hesitations are around how unusual this governance regime would be, risks from worse decision-making and bureaucracy, and risks of concentration of power, compared to well-regulated private development of AGI.[3]
For more reasoning that motivates an international AGI project, see AGI and World Government.
If so, what kind of international AGI project is desirable?
Regardless of whether an international project to develop AGI is the most desirable option, there’s value in figuring out in advance what the best version of such a project would be, in case at some later point there is a sudden change in political sentiment, and political leaders quickly move to establish an international project.
Below, I set out:
I’m sure many of the specifics are wrong, but I hope that by being concrete, it’s easier to understand and critique my reasoning, and move towards something better.
General desiderata
In approximately descending order of importance, here are some desiderata for an international AGI project:
My view is that most of the gains come from having an international AGI project that (i) has a de facto or de jure monopoly on the development of AGI, and (ii) curtails the ability of the front-running country to slide into a dictatorship. I think it’s worth thinking hard about what the most-politically-feasible option is that satisfies both (i) and (ii).
A best guess proposal
In this section I give my current best guess proposal for what an international AGI project should look like (there’s also a draft of the relevant treaty text in the appendix). My proposal draws heavily from Intelsat, which is my preferred model for international AGI governance.
I’m not confident in all of my suggestions, but I hope that by being concrete, it’s easier to understand and critique my reasoning, and move towards something better. Here’s a summary of the proposal:
More detail, with my rationale:
How the project comes about:
Membership:
Non-members:
Vote distribution: Decisions are made by weighted voting based on equity:
Voting rule:
AI development:
On larger training runs:
Compute:
Infosecurity:
Why would the US join this project?
The Intelsat for AGI plan allows the US to entrench its dominance in AI by creating a monopoly on the development of AGI, a monopoly that the US largely controls. There are both “carrot” and “stick” reasons to do this rather than to go solo. The carrots include:
The sticks include:
Many of these demands might seem unlikely — they are far outside the current realm of political possibility. However, the strategic situation would be very different if we were close to AGI. In particular, if the relevant countries know that the world is close to AGI, and that a transition to superintelligence may well follow very soon afterwards, then they know they risk total disempowerment if some other countries develop AGI before them. This would put them in an extremely different situation than they are in now, and we shouldn’t assume that countries will behave as they do today. What’s more, insofar as the asks being made of the US in the formation of an international project are not particularly onerous (the US still controls the vast majority of what happens), these threats might not even need to be particularly credible.[12]
It’s worth dividing the US-focused case for an international AGI project into two scenarios. In the first scenario, the US political elite don’t, on the whole, think that an intelligence explosion is coming. They think that AI will be a really big deal, but “only” as big a deal as, say, electricity or flight or the internet. In the second scenario, the US political elite do think that an intelligence explosion is a real possibility: for example, a leap forward in algorithmic efficiency of five orders of magnitude within a year is on the table, as is a new growth regime with a one-year doubling time.
In the first scenario, cost-sharing has comparatively more weight; in the second scenario, the US would be willing to incur much larger costs, as they believe the gains are much greater. Many of the “sticks” become more plausible in the second scenario, because it’s more likely that other countries will do more extreme things.
The creation of an international AGI project is more likely in the second scenario than in the first; however, I think that the first scenario (or something close to it) is more likely than the second. One action people could take is trying to make the political leadership of the US and other countries more aware of the possibility of an intelligence explosion in the near term.
Why would other countries join this project?
If the counterfactual is that the US government builds AGI solo (either as part of a state-sponsored project, a public-private partnership, or wholly privately), then other countries would be comparatively shut out of control over AGI and AGI-related benefits if they don’t join. At worst, this risks total disempowerment.
Appendix: a draft treaty text
This appendix gives a plain English version of a treaty that would set up a new international organisation to build AGI, spelling out my above proposal in further detail.
Preamble
This treaty’s purpose is to create a new intergovernmental organisation (Intelsat for AGI) to build safe, secure and beneficial AGI.
“Safe” means:
“Secure” means:
“Beneficial” means:
“AGI” means:
This treaty forms the basis of an interim arrangement. Definitive arrangements will be made not more than five years after the development of AGI or in 2045, whichever comes sooner.
Founding members
Five Eyes countries:
Essential semiconductor supply chain countries (excluding Taiwan):
Non-founding members
All other economic areas (primarily countries) and major companies (with a market cap above $1T) are invited to join as members. This includes China, the EU, and Chinese Taipei.
Obligations on member countries
Member countries agree to contribute to AGI development via financing and/or in-kind services or products.
They agree to:
Benefits to member countries
In addition to the benefits received by non-members in good standing, member countries receive:
Benefits to member non-countries
Companies and individuals can purchase equity in Intelsat for AGI. They receive a share of Intelsat for AGI’s profits in proportion to their investment, but do not receive voting rights.
Benefits to non-member countries
Non-member countries are either in good standing or not in good standing.
Non-members that are in good standing receive:
Countries that are not in good standing do not receive these benefits, and are cut out of any AI-related trade.
Management of Intelsat for AGI
Intelsat for AGI contracts one or more companies to develop AGI.
Governance of Intelsat for AGI
Intelsat for AGI distinguishes between major decisions and all other decisions. Major decisions include:
Decisions are made by weighted voting, with vote share in proportion to equity. Major decisions are made by supermajority (⅔) vote share. All other decisions are made by majority of vote share.
Equity is held as follows. The US receives 52% of equity, and the other founding members together receive 15%. 10% of equity is reserved for all countries that are in good standing (5% distributed equally on a per-country basis, 5% distributed on a population-weighted basis). The remaining 23% of equity can be bought in stages by non-founding members, including companies, though companies do not receive voting rights.
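To make the voting arithmetic concrete, here is a minimal sketch. The share names, and the pooling of the non-US founding members’ 15% into a single figure, are my own simplification of the numbers above:

```python
# Hypothetical vote shares (percent) implied by the equity split above.
# "other_founding" pools the non-US founding members' combined 15%.
SHARES = {
    "us": 52.0,
    "other_founding": 15.0,
    "good_standing_pool": 10.0,
    "purchasable": 23.0,
}

def passes(vote_share_in_favor: float, major: bool) -> bool:
    """Major decisions need a two-thirds supermajority of vote share;
    all other decisions need a simple majority."""
    threshold = 200 / 3 if major else 50.0
    return vote_share_in_favor > threshold

assert sum(SHARES.values()) == 100.0
assert passes(SHARES["us"], major=False)        # the US alone carries ordinary decisions
assert not passes(SHARES["us"], major=True)     # but 52% falls short of two thirds
assert passes(SHARES["us"] + SHARES["other_founding"], major=True)  # founders together clear the bar
```

One consequence worth noting: under this split, the US alone can pass ordinary decisions, and the founding members acting together meet the two-thirds threshold for major decisions.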
50% of all Intelsat for AGI compute is located on US territory, and 50% on the territory of one or more other founding member countries.
The intellectual property of work done by Intelsat for AGI, including the resulting models, is owned by Intelsat for AGI.
AI development will follow a responsible scaling policy, to be agreed upon by a supermajority of voting share.
Thanks to many people for comments and discussion, and to Rose Hadshar for help with editing.
This article was created by Forethought. See the original on our website.
Note that this is distinct from creating a standards agency (“an IAEA for AI”) or a more focused research effort just on AI safety (“CERN/Manhattan Project on AI safety”).
See here for a review of some overlapping considerations, and a different tentative conclusion.
What’s more, I’ve become more hesitant about the desirability of an international AGI project since first writing this, since I now put more probability mass on the software-only intelligence explosion being relatively muted (see here for discussion), and on alignment being solved through ordinary commercial incentives.
This situation underrepresents the majority of the earth's population when it comes to decision-making over AI. However, it might also be the best feasible option when it comes to international AGI governance — assuming that the US is essential to the success of such plans, and that the US would not agree to having less influence than this.
Which could be a shortening of “International AI project” or “Intelsat for AI”.
Taiwan is able to invest, like other countries, under the name “Chinese Taipei”, as it does with the WTO and the Asia-Pacific Economic Cooperation. In exchange for providing fabs, it could potentially get equity at a reduced rate.
One argument: I expect the total amount of labour working on safety and other beneficial purposes to be much greater once we have AI researchers we can put to the task; so we want to buy more time after the point at which we have such AI researchers. Even if these AI researchers are not perfectly aligned, if they are only around human-level, I think they can be controlled or simply paid (using similar incentives to those human workers face).
Plausibly, the US wouldn’t stand for this. A more palatable variant (which imposes less of a constraint on the US) is that each founding member owns a specific fraction of all the GPUs. Each founding member has the ability to destroy its own GPUs at any time, if it thinks that other countries are breaking the terms of their agreement. Thanks to Lukas Finnveden for this suggestion.
For example, using Shamir’s Secret Sharing or a similar method.
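A toy sketch of how Shamir’s Secret Sharing works, for illustration only: the prime, the share counts, and the demo secret are arbitrary choices of mine, and this is not production cryptography.

```python
import random

# Shamir's Secret Sharing over a prime field: encode the secret as the
# constant term of a random degree-(k-1) polynomial; any k points on the
# polynomial determine it, and hence the secret, but k-1 points reveal nothing.
PRIME = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

def make_shares(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares such that any k shares reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Multiply by the modular inverse of the denominator
        # (via Fermat's little theorem, since PRIME is prime).
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = make_shares(secret=123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Applied to the proposal above, something like this would let a decryption key (say, for model weights) be split among member countries so that no single member, or small clique, could act alone.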
This could be particularly effective if the President at the time was unpopular among ML researchers.
Airbus was a joint venture between France, Germany, Spain and the UK to compete with Boeing in jet airliner technology, partly because they didn’t want an American monopoly. Airbus now holds the majority of the market.
This was true in the formation of Intelsat, for example.