Abstract
Most proposals for governing artificial general intelligence (AGI) focus on institutional design, corporate incentives, or international coordination. Yet one crucial factor is systematically underestimated: human unpredictability, rooted in greed and struggles for power, and evidenced by the repeated failures of historical attempts at global cooperation. This paper argues that no mathematical model or governance framework can fully capture this variable, and that ignoring it leads to utopian expectations detached from historical reality.
Introduction
This paper offers a critical commentary on Bostrom’s “Open Global Investment” (OGI) model. While Bostrom’s framework proposes innovative mechanisms for global AGI governance, it assumes that human actors can coordinate and act in line with collective interests. History suggests the opposite: human unpredictability and greed have consistently undermined even the strongest institutions. From the collapse of utopian socialist experiments (Kolakowski, 1978) to the monopolization of technological breakthroughs (Hughes, 2004), this factor is precisely what the OGI model neglects.
The Missing Equation
There is no equation that encompasses the full range of human opportunism, irrationality, and shortsightedness. Attempts to formalize human behavior—whether through rational choice theory (Becker, 1976), game theory (von Neumann & Morgenstern, 1944), or behavioral economics (Kahneman, 2011)—have had partial success, but none have eliminated the uncertainty of human action. Unlike physical systems, social systems involve agents who can consciously break predicted equilibria out of passion, ideology, or sheer unpredictability.
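To make the point concrete, the following minimal sketch (in Python, with a hypothetical deviation rate epsilon standing in for passion, ideology, or caprice) pits game theory’s cleanest prediction against small doses of unpredictability. In a one-shot Prisoner’s Dilemma the unique Nash equilibrium is mutual defection; the simulation shows how quickly that point prediction degrades once agents occasionally break from it. This is an illustration of the argument, not a claim about any specific governance model.

```python
import random

# Standard one-shot Prisoner's Dilemma payoffs (row player's payoff):
# both cooperate -> 3, both defect -> 1, sucker -> 0, temptation -> 5.
# The unique Nash equilibrium is (defect, defect).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(epsilon: float) -> str:
    """Return an agent's move: the equilibrium strategy 'D', abandoned
    with probability epsilon (a hypothetical stand-in for passion,
    ideology, or sheer unpredictability)."""
    return "C" if random.random() < epsilon else "D"

def simulate(epsilon: float, rounds: int = 100_000) -> tuple[float, float]:
    """Average payoff per round, and the share of rounds that match
    the equilibrium prediction (D, D)."""
    total, predicted = 0, 0
    for _ in range(rounds):
        a, b = play(epsilon), play(epsilon)
        total += PAYOFF[(a, b)]
        predicted += (a, b) == ("D", "D")
    return total / rounds, predicted / rounds

if __name__ == "__main__":
    for eps in (0.0, 0.05, 0.20):
        payoff, match = simulate(eps)
        print(f"epsilon={eps:.2f}  avg payoff={payoff:.2f}  "
              f"equilibrium play={match:.1%}")
```

At a 5% deviation rate the equilibrium prediction already fails in roughly one interaction in ten; at 20% it fails in about a third. The formal model stays informative, which matches the claim of partial success above, but the residual uncertainty never disappears.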
Historical Evidence
- Energy and monopoly: Nikola Tesla’s vision of wireless energy failed not due to technical impossibility but because investors demanded measurable profit (Carlson, 2013).
- Revolutions and power: Marxist theory predicted emancipation but produced new hierarchies and repression (Figes, 1996).
- Attempts at global governance: From the League of Nations to the UN, institutional designs have consistently clashed with realpolitik and national interests (Ikenberry, 2001).
These examples highlight a constant: utopian frameworks collapse when confronted with unaccounted-for human greed, fear, or ambition.
Conclusion
Any AGI governance model that ignores the “equation of human unpredictability” risks repeating historical cycles of power concentration and social fragmentation. A realistic framework must begin not only with technological and institutional design but with explicit recognition of human limitations as systemic constants. Without this corrective factor, the promise of AGI will not lead to a singularity of well-being, but to a singularity of control.
References
- Becker, G. (1976). The Economic Approach to Human Behavior.
- Bostrom, N. (2025). Open Global Investment as a Governance Model for AGI. Working paper.
- Carlson, W. (2013). Tesla: Inventor of the Electrical Age.
- Figes, O. (1996). A People’s Tragedy: The Russian Revolution.
- Hughes, T. (2004). Human-Built World: How to Think About Technology and Culture.
- Ikenberry, G. J. (2001). After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order after Major Wars.
- Kahneman, D. (2011). Thinking, Fast and Slow.
- Kolakowski, L. (1978). Main Currents of Marxism.
- von Neumann, J., & Morgenstern, O. (1944). Theory of Games and Economic Behavior.
Đulović Nermin