“Open Global Investment” (OGI) is a model of AI governance set forth by Nick Bostrom in a 2025 working paper.
Under OGI, AI development is led by corporations operating within a government-set framework that backs their efforts to create AGI while enforcing safety rules. These corporations are required to be open to investment from a wide range of sources, including individuals worldwide and foreign governments. Versions of OGI differ in how many corporations participate (one vs. many) and in which country they are based (most likely the US). Versions with N participating companies are called OGI-N.
Bostrom argues that this system is:
Bostrom compares OGI-1 and OGI-N with some alternative models based on the Manhattan Project, CERN, and Intelsat. The following overview table is based on Bostrom's stated views where available; in many places it otherwise reflects our own judgment:
Features | OGI-1 | OGI-N | Manhattan Project | CERN for AGI | Intelsat for AGI | Status Quo |
---|---|---|---|---|---|---|
Will incumbents be open to it? | Medium | Medium-high | Low | Low | Low | High |
Is it open to investment? | Yes; public | Yes; public | No | No | Yes, govs | Some private, some public |
Who gets a share in the benefits/control? | Investors, host gov | Investors, host gov | Host gov | All govs | All govs | Investors, lab leads |
Does it involve massive gov funding? | No | No | Yes | Yes | Yes | No |
How much does it concentrate power? | Low | Very low | High | High | Medium | Medium |
Effect on international conflict | Reduction | Reduction | Increase | Reduction | Reduction | Baseline |
Adaptability of framework | High | High | Low | Low | Medium | High |
Setup speed | Months | Months | Years | Years | Years | None |
Amount of novel regulations/norms/laws required? | Medium | Medium | Medium | Medium | Medium | Low |
Difficulty of securing IP? | Medium | Medium | Low | High | High | Medium |
Does it preclude other projects? | No | No | No | No | No | No |
Disincentive to other projects? | Medium | Medium | High | High | Medium | Low |
Increased risk of government seizure? | No | No | Yes | Yes | Hard to say | Baseline |
Is it private or public? | Private | Private | Public | Public | Public | Private |
Ability to enforce safety standards in project | Medium | Medium | Medium-high | Medium-high | Medium-high | Low |
Who controls the project? | Host gov & lab leads | Host gov & lab leads | Host gov | UN/all govs | Participating govs | Lab leads |
Profits taxed by | Host gov | Host gov | N/A | N/A | Participating govs | Host gov |
OGI is meant as a feasible compromise rather than as a perfectly fair system. Versions of OGI with multiple AI companies are not hugely different from the status quo. It is also meant more as a temporary transitional arrangement than as a model for governing superintelligence itself. The hope is that OGI would keep development relatively safe and conflict-free while humanity comes up with something better for the long term.
The Open Global Investment (OGI) model has received three main types of criticism: it won’t work; even if it does, it won’t do enough; and it will have bad side-effects. Bostrom has given responses to each of these criticisms, emphasizing that OGI is meant as a politically feasible, near-term proposal for AGI governance. In Bostrom’s view, OGI should be compared with other AGI governance proposals within the Overton window, such as a Manhattan Project for AGI, or the current default: nothing.
One assumption of the Open Global Investment model, namely that companies will not be nationalized and that investors and corporations will enjoy strong property rights, does not hold across the globe. For example, the Chinese government limits foreign financial investment and exerts significant control over domestic corporations. Even the US government may not be able to credibly make such commitments.
To this, Bostrom responds that the lack of suitable laws and norms for open global investment is a criticism not of the model's desirability but of its probability of being implemented. He also notes that high investor demand for US financial products, such as bonds, implies that investors remain confident that the US will continue to respect financial commitments.
Other critiques note that the OGI model does not address rogue AI or misuse, does not change the status quo, and does not build toward a pause.
Bostrom replies that the OGI model can be extended to handle most of these issues; extending it is part of the plan, since OGI is a governance model for the near to medium term. The exception is pausing AI, which would be a better fit for a separate proposal, one not intended as an alternative path to AGI governance.
Wei Dai argues that the OGI model rewards lab leaders and early investors in AGI companies for increasing existential risk relative to the counterfactual, which is not something we want to incentivize.
Bostrom responds that we currently don't tax people for increasing existential risk, and that it might be bad to retroactively punish people financially for acting on the incentives our current system creates. He suggests that moral praise and condemnation are more appropriate tools for this. He does say that increased taxes would be compatible with the OGI model, though not taxes approaching 100%.