I don’t believe the standard story of the resource curse. I also don’t think Norway and the Congo are useful examples, because they differ in too many other ways. According to o3, “Norway avoided the resource curse through strong institutions and transparent resource management, while the Congo faced challenges due to weak governance and corruption.” To me this is a case where existing AI models still fall short: the textbook story leaves out key factors and never comes close to proving that good institutions alone prevented the resource curse.
Regarding the main content, I find the scenario implausible. The “social-freeze and mass-unemployment” narrative seems to assume that AI progress will halt exactly at the point where AI can do every job but is still somehow not dangerous. You also appear to assume a new stable state in which a handful of actors control AGIs that are all roughly at the same level.
More directly, full automation of the economy would mean that AI can perform every task in companies already capable of creating military, chemical, or biological threats. If the entire economy is automated, AI must already be dangerously capable.
I expect reality to be much more dynamic, with many parties simultaneously pushing for ever-smarter AI while understanding very little about its internals. Human intelligence is nowhere near the maximum, and far more dangerous intelligence is possible. Many major labs now treat recursive self-improvement as the default path. I expect that approaching superintelligence this way, without any deeper understanding of the internal cognition, will give us systems that we cannot control and that will get rid of us. For these reasons, I have trouble worrying about job replacement. You also seem to avoid mentioning the extinction risk in this text.
I don’t believe the standard story of the resource curse.
What do you think is the correct story for the resource curse?
I find the scenario implausible.
This is not a scenario; it is a class of concerns about the balance of power and economic misalignment that we expect to be a force in many specific scenarios. My actual scenario is here.
The “social-freeze and mass-unemployment” narrative seems to assume that AI progress will halt exactly at the point where AI can do every job but is still somehow not dangerous.
We do not assume AI progress halts at that point. We say several times that we expect AIs to keep improving. They will take the jobs, and they will keep on improving beyond that. The jobs do not come back if the AI gets even smarter. We also have an entire section dedicated to mitigating the risks of AIs that are dangerous, because we believe that is a real and important threat.
More directly, full automation of the economy would mean that AI can perform every task in companies already capable of creating military, chemical, or biological threats. If the entire economy is automated, AI must already be dangerously capable.
Exactly!
I expect reality to be much more dynamic, with many parties simultaneously pushing for ever-smarter AI while understanding very little about its internals.
"Reality will be dynamic, with many parties simultaneously pushing for ever-smarter AI [and their own power & benefit] while understanding very little about [AI] internals [or long-term societal consequences]" is something I think we both agree with.
I expect that approaching superintelligence this way, without any deeper understanding of the internal cognition, will give us systems that we cannot control and that will get rid of us. For these reasons, I have trouble worrying about job replacement.
If we hit misaligned superintelligence in 2027 and all die as a result, then job replacement, long-run trends of gradual disempowerment, and the increased risk of human coups indeed do not come to pass. However, suppose we don't hit misaligned superintelligence immediately, and instead some humans pull a coup with the AIs, or the advanced AIs obsolete humans very quickly (very plausible if you think AI progress will be fast!) and the world is now states battling against each other with increasingly dangerous AIs while feeling little need to care for collateral damage to humans. In that case, it sure will have been a low-dignity move from humanity if literally no one worked on those threat models!
You also seem to avoid mentioning the extinction risk in this text.
The audience is primarily not LessWrong, and the arguments for working on alignment & hardening go through based on merely catastrophic risks (which we do mention many times). Also, the series is already enough of an everything-bagel as it is.
Following up with some resource curse literature that understands the problem as incentive misalignment:
On how state revenue sources shape institutional development and incentives, Karl (1997) writes,
"Thus the fate of oil-exporting countries must be understood in a context in which economies shape institutions and, in turn, are shaped by them. Specific modes of economic development, adapted in a concrete institutional setting, gradually transform political and social institutions in a manner that subsequently encourages or discourages productive outcomes. Because the causal arrow between economic development and institutional change constantly runs in both directions, the accumulated outcomes give form to divergent long-run national trajectories. Viewed in this vein, economic effects like the Dutch Disease become outcomes of particular institutional arrangements and not simply causes of economic decline. This deeper explanation is revealed in the relentless interaction between a mode of economic development and the political and social institutions it fosters.
[...]
How are frameworks for decision-making created and reproduced in late-developing countries? I argue that determining the "structuring principle" for these countries—that is, the appropriate starting point for identifying how ranges of choice are constructed—should begin with their leading sector. This means examining the export dependence that molds their economies, societies, and state institutional capacities, and that, in turn, is either reinforced or transformed by them. My effort to understand this set of interactions begins with differentiating the asset specificity, tax structure, and other features inherent in the exploitation of one particular commodity, petroleum. It terminates by examining the state, where the impact of particular economic models and [...]
A central corollary of this argument is that countries dependent on the same export activity are likely to display significant similarities in the capacity of their states to guide development. In other words, countries dependent on mining should share certain properties of "stateness," especially their framework for decision-making and range of choice, even though their actual institutions are quite different in virtually all other respects. This should be true unless significant state building has occurred prior to the introduction of the export activity.
The specific mechanism for the creation of this institutional sameness lies in the origin of state revenues. It matters whether a state relies on taxes from extractive activities, agricultural production, foreign aid, remittances, or international borrowing because these different sources of revenues, whatever their relative economic merits or social import, have a powerful (and quite different) impact on the state's institutional development and its abilities to employ personnel, subsidize social and economic programs, create new organizations, and direct the activities of private interests. Simply stated, the revenues a state collects, how it collects them, and the uses to which it puts them define its nature. Thus it should not be surprising that states dependent on the same revenue source resemble each other in specific ways (and consequently so do the decisions made by their leaders)."
I'd note that Karl's argument has nearly 5,000 citations and is one of the most common (if not the dominant) explanations of the resource curse.
From Cooper (2002) Chapter 7:
"Oil can turn a gatekeeper state into a caricature of itself. Unlike agriculture, which involves vast numbers of people in the production and marketing of exports, oil requires little labor, and much of it from foreigners. It also entails relationships between the few global firms capable of extracting it and the state rulers who collect the rents. It defines a spigot economy: whoever controls access to the tap, collects the rent."
On the importance of taxing citizens to state development, Centeno (1997) notes:
"The key to the relationship between war and state making in Western Europe is what Finer (1975) calls the “extraction-coercion” cycle. [...] For the “extraction-coercion cycle” to begin, the relevant states must not have alternative sources of financing while the domestic economy must be capable of sustaining the new fiscal and bureaucratic growth. Conflict-induced extraction will only occur if easier options are not available. Even then, the relevant societies might not be able to produce enough surplus to make the effort productive. Thus, for example, the availability of Latin American silver and the willingness of bankers to risk massive sums freed the Spanish Hapsburgs from imposing greater fiscal control over their provinces as a means to pay for their wars. Conversely, the relative scarcity of such external supports drove the expansion of the early English state."
On how non-taxation revenue inhibited state development in Latin America, which therefore did not follow Tilly's pattern of "war making states," Centeno (1997) argues:
"As in the European cases, war produced immediate deficits, but with one prominent exception, the Latin American states did not respond to these with increased extractions, at least not in the form of domestic taxes. [...] If they could not borrow on international markets (as was the case from roughly 1830 to 1870), Latin American states could sell access to a commodity. Guano allowed Peru to become what Shane Hunt (1973) has called a “rentier state.” The availability of guano revenues retarded the development of the state by allowing it to exist without the remotest contact with the society on which it rested and without having to institute a more efficient administrative machine. Guano did allow the removal of the regressive contribucion (in 1855), but it also permitted the state to avoid modernizing its fiscal structure while borrowing large amounts of money. A contemporary British observer (Markham 1883, p. 37; my emphasis) noted that “a wise government would have treated this source of revenues as temporary and extraordinary. The Peruvians looked upon it as if it was permanent, abolishing other taxes, and recklessly increasing expenditure.” Much like the guano bonanza in the Peruvian case, the conquest of nitrate territories allowed the Chilean state to expand without having to “penetrate” its society and confront the rampant inequality (Loveman 1979, p. 169; Sater 1986, p. 227). By 1900, nitrate and iodine were accounting for 50% of Chilean revenues and 14% of GDP (Mamalakis 1977, pp. 19–21; Sater 1986, p. 275)."
Happy to cite some more of the literature if it's helpful.
I've only skimmed this, but from what I've seen, you seem to be placing far too much emphasis on relatively weak/slow-acting economic effects.
If humanity loses control and it's not due to misaligned AI, it's much more likely to be due to an AI-enabled coup, AI propaganda, or AI-enabled lobbying than to humans having insufficient economic power. And the policy responses to these might look quite different.
There's a saying "when all you have is a hammer, everything looks like a nail" that I think applies here. I'm bearish on economics of transformative AI qua economics of transformative AI as opposed to multi-disciplinary approaches that don't artificially inflate particular factors.
We mention the threat of coups—and Davidson et al.'s paper on it—several times.
Regarding the weakness or slowness of economic effects: it is true that the fundamental thing forcing economic incentives to percolate to the surface and actually have an effect is selection pressure, and selection pressure is often slow-acting. However, remember that the time that matters is not necessarily calendar time.
Of course, it's true that if takeoff is fast enough then you might get a singleton, and different strategies apply (though singletons, whether human organizations or AIs, immediately create vast risk if they're misaligned). And if you have enough coordination, then you can in fact avoid selection pressures, but a world with such effective coordination seems quite alien to ours or to any that has historically existed, and is unlikely to be achieved in the short time remaining until powerful AI arrives, unless some incredibly powerful AI-enabled coordination tech arrives quickly. This also requires not just coordination, but coordination between well-intentioned actors who are not corrupted by power. If you enable perfect coordination between, say, the US and Chinese governments, you might just get a dual oligarchy controlling the world and ruling over everyone else, rather than a good lightcone.
If humanity loses control and it's not due to misaligned AI, it's much more likely to be due to an AI-enabled coup, AI propaganda, or AI-enabled lobbying than to humans having insufficient economic power.
AI-enabled coups and AI-enabled lobbying both get majorly easier and more effective the more humanity's economic role has been erased. Fixing them is also part of maintaining the balance of power in society.
I agree that AI propaganda, and more generally AI threats to the information environment & culture, are a big & different deal that intelligence-curse.ai doesn't address except in passing. You can see the culture section of Gradual Disempowerment (by @Jan_Kulveit @Raymond D & co.) for more on this.
There's a saying "when all you have is a hammer, everything looks like a nail" that I think applies here. I'm bearish on [approaches] as opposed to multi-disciplinary approaches that don't artificially inflate particular factors.
I share the exact same sentiment, but for me it applies in reverse. Much "basic" alignment discourse seems to admit exactly two fields—technical machine learning and consequentialist moral philosophy—while sweeping aside considerations about economics, game theory, politics, social changes, institutional design, culture, and generally the lessons of history. A big part of what intelligence-curse.ai tries to do is take this more holistic approach, though of course it can't focus on everything, and in particular neglects the culture / info environment / memetics side. Things that try to be even more holistic are my scenario and Gradual Disempowerment.
The three factors you identified (fast progress, vulnerabilities during times of crisis, and AI progress increasing the chance of viable strategies being leveraged) apply just as much, if not more, to coups, propaganda, and AI lobbying.
Basically, I see two strategies that could make sense: either we attempt to tank these societal risks, following the traditional alignment strategy, or we decide tanking is too risky and mitigate the societal risks that are most likely to take us out (my previous comment identified some specific risks).
Either of these strategies seems defensible to me, but in neither does it make sense to prioritise the risks from the loss of economic power.
Really enjoyed your essay series. I appreciated that it offered a positive future vision and then a roadmap for how to get there. Both are important. Too many people seem to be sleepwalking into a sketchy AGI future.
Here's my vision from a 2022 Future of Life Institute contest: "A future where sentient beings thrive due to widespread agreement on core values; improvements in education; Personal Agent AIs; social simulations; and updated legal systems (based on the core values) that are fair, nimble, and capable of controlling dangerous humans and AGIs. Of the core values, Truth and Civility are particularly impactful in keeping the world moving in a positive direction." Full scenario here.
Compare with yours:
We want to live in a world where:
- Humans can create economic value for themselves and can disrupt existing elites well after AGI.
- Everyone has an unprecedentedly high standard of living, both to meet their needs and to keep money flowing in the human economy.
- No single actor or oligarchy—whether that be governments, companies, or a handful of individuals—monopolizes AGI. By extension, no single actor monopolizes power.
- Regular people are in control of their destiny. We hold as a self-evident truth that humans should be the masters of their own futures.
Close enough.
Reflections and findings about the FLI contest are here.
Thoughts on Averting the Intelligence Curse via AI Safety via Law here.
Thoughts on Diffusing and Democratizing AI through next-generation virtual assistants (Personal Agents) here.
Anthony Aguirre's argument for pursuing narrow(er) AI over AGI here.
Hopefully something of interest.
I have already proposed the following radical solution to all problems related to the Intelligence Curse: have the AGI aligned to a certain treaty. Instead of obeying all orders except the ones determined by the Spec, the AGI would harvest at most a certain share of resources and help humans only in certain ways[1] that amplify humanity and don't cause it to degrade, like teaching humans the facts that mankind has already discovered or pointing out mistakes in humans' works. Or it could protect mankind from some other existential risks that are hard to deal with, like a nuclear war that might be caused by an accident.
It also seems to me that this type of alignment might actually be even easier to generalize to AGI than the ones causing the Curse. Or, even more radically, the types of alignment that cause the Curse might be totally impossible to achieve, but can be faked, as done by Agent-5 in the race ending of the AI-2027 forecast.
Update: a prompt by Ashutosh Shrivastava with a similar premise is mentioned in AI overview #114.
I think claiming the above is a "radical solution to all problems related to the Intelligence Curse" is an overstatement. The three treaty elements you mention could be useful as part of AI-human social contracts, thus getting at a part of the Averting (i.e., AI Safety) piece. But many more treaty elements (Laws, Rules) are also needed IMO.
The Diffusing and Democratizing (and maybe other) pieces are also needed for an effective solution.
(Also, unclear what you mean by "obeying all orders except the ones determined by the Spec." What Spec?)
Now that I can answer, I will: if the ASI is ONLY willing to teach humans the facts that other humans have already discovered, and not to do other work for them, then the ASI won't replace the people whose work requires education. The Intelligence Curse is thus prevented.
We've published an essay series on what we call the intelligence curse. Most content is brand new, and all previous writing has been heavily reworked.
Visit intelligence-curse.ai for the full series.
Below is the introduction and table of contents.
We will soon live in the intelligence age. What you do with that information will determine your place in history.
The imminent arrival of AGI has pushed many to try to seize the levers of power as quickly as possible, leaping towards projects that, if successful, would comprehensively automate all work. There is a trillion-dollar arms race to see who can achieve such a capability first, with trillions more in gains to be won.
Yes, that means you’ll lose your job. But it goes beyond that: this will remove the need for regular people in our economy. Powerful actors—like states and companies—will no longer have an incentive to care about regular people. We call this the intelligence curse.
If we do nothing, the intelligence curse will work like this:
But this prophecy is not yet fulfilled; we reject the view that this path is inevitable. We see a different future on the horizon, but it will require a deliberate and concerted effort to achieve it.
We aim to change the incentives driving the intelligence curse, maintaining human economic relevance and strengthening our democratic institutions to withstand what will likely be the greatest societal disruption in history.
To break the intelligence curse, we should chart a different path on the tech tree, building technology that lets us:
- Avert AI catastrophes with technology for safety and hardening that does not require centralizing control.
- Diffuse AI that differentially augments rather than automates humans and decentralizes power.
- Democratize institutions, bringing them closer to regular people as AI grows more powerful.
In this series of essays, we examine the incoming crisis of human irrelevance and provide a map towards a future where people remain the masters of their destiny.
Chapters
1. Introduction
We will soon live in the intelligence age. What you do with that information will determine your place in history.
2. Pyramid Replacement
Increasingly powerful AI will trigger pyramid replacement: a systematic hollowing out of corporate structures that starts with entry-level hiring freezes and moves upward through waves of layoffs.
3. Capital, AGI, and Human Ambition
AI will make non-human factors of production more important than human ones. The result may be a future where today's power structures become permanent and frozen, with no remaining pathways for social mobility or progress.
4. Defining the Intelligence Curse
With AGI, powerful actors will lose their incentive to invest in regular people, just as resource-rich states today neglect their citizens because their wealth comes from natural resources rather than taxing human labor. This is the intelligence curse.
5. Shaping the Social Contract
The intelligence curse will break the core social contract. While this suggests a grim future, understanding how economic incentives reshape societies points to a solution: we can deliberately develop technologies that keep humans relevant.
6. Breaking the Intelligence Curse
Avert AI catastrophes with technology for safety and hardening without requiring centralizing control. Diffuse AI that differentially augments rather than automates humans and decentralizes power. Democratize institutions, bringing them closer to regular people as AI grows more powerful.
7. History is Yours to Write
You have a roadmap to break the intelligence curse. What will you do with it?