Preventing tech diffusion would in my opinion require a large and coordinated effort. That might happen if a small group stages a coup and explicitly tries to outgrow the world. Or it might happen if a US-led bloc tries hard to outgrow authoritarian countries.
I disagree, for reasons described in this old post (section "Leaks & Spies") https://www.lesswrong.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage
The key insight is that ordinary human leaking and spying and tech diffusion happens at ordinary human pace. If e.g. China is doing an intelligence explosion such that it's making tech and economic progress at 100x normal speed, the process of ideas gradually diffusing to the US and causing catch-up growth won't be 100x faster to compensate. Instead e.g. the lag between "the Chinese are selling an exciting new product" and "our engineers have dissected it and deduced some interesting ideas about how it was made and replicated these ideas in our own manufacturing processes" will be... approximately the same? And so for practical purposes it might as well not be happening.
Moreover, and independently of the above, I'd argue that if one nation is first to do the intelligence explosion, or more generally is substantially ahead in the intelligence explosion (e.g. a few months) then the rate of tech diffusion via leaks and spying will go way down. Because the # of humans with the important sensitive knowledge will drop to zero, and the # of human spies will drop to zero (you can't have a human spy in a virtual corporation of superintelligent AI geniuses in a datacenter), and it won't be hard to stop all the AIs from leaking (unlike how hard it is to stop an entire industry of humans from leaking). As for cyberattacks, if you are deeper into the intelligence explosion than your adversaries I'm fairly confident that it becomes way harder for them to cyberattack you.
So yeah, I think even a small nation could easily outgrow the rest of the world IF (a) it has a sizeable head start in the intelligence explosion (years definitely, months maybe) AND (b) the rest of the world doesn't physically attack it or do a super strong trade embargo/blockade against it. (And a trade embargo/blockade might not work either; it depends on how much of a manufacturing base & raw materials are needed to bootstrap to an advanced robot economy.)
Thanks!
The key insight is that ordinary human leaking and spying and tech diffusion happens at ordinary human pace. If e.g. China is doing an intelligence explosion such that it's making tech and economic progress at 100x normal speed, the process of ideas gradually diffusing to the US and causing catch-up growth won't be 100x faster to compensate. Instead e.g. the lag between "the Chinese are selling an exciting new product" and "our engineers have dissected it and deduced some interesting ideas about how it was made and replicated these ideas in our own manufacturing processes" will be... approximately the same? And so for practical purposes it might as well not be happening.
I don't think it will be the same? If the economy is growing 100X as fast then that means:
- Physical objects are being transported 100X as fast.
- New products are being sold and adopted 100X as fast.
Both these things help with diffusion between countries as well.
And if designing new tech is 100X faster, I'm not seeing why we would expect copying tech to be <100X faster. With copying, you start with additional clues -- e.g. a physical artefact or using a digital product or public materials (like OAI's o3 demo) -- and design the tech from there. Why would designing new tech from scratch be sped up by more than designing from clues? I.e. we've got super-smart AI doing both.
the rate of tech diffusion via leaks and spying will go way down
Yeah, agree spying will go way down once humans are replaced, and cyber hacks will go down too. Though I would have thought a small minority of tech diffusion comes from spies + hacks? E.g. I'd have thought selling and publishing about o1 was more important than those things, though I could be wrong and spies might be unusually important for AI diffusion in particular.
...
I don't think you've responded to my argument that companies in the leading country could make way more short-term revenues by producing and selling their technologies in other countries? (See the argument in the box.) Because ASI cognitive capabilities are massively complementary with human labour and physical capital inputs.
Re: producing and selling tech in other countries: Yes I expect them to do that. I agree with that bit.
I don't think the economy growing 100x as fast due to an ASI-led industrial explosion means that human firms dissecting the products will be able to produce their own competing replicas 100x as fast as it usually takes. Like, they are still human! Operating at human speed! I feel like you haven't really engaged with my arguments.
Sure, some physical objects will be transported 100x faster (though not all?) and some new tech will be diffusing 100x faster e.g. some new products will be selling like hotcakes. But that doesn't undermine the basic picture I sketched.
Why would designing new tech from scratch be sped up by more than designing from clues? I.e. we've got super-smart AI doing both.
OH I get it, you are imagining that you have similarly smart ASIs on both sides of this exchange whereas I was imagining one side is ASI and the other side is humans. I agree that if e.g. Germany, France, Brazil, etc. all had their own ASI labor force, then the process of tech diffusion might well be (almost) 100x sped up when the pace of progress is 100x'd. But that's not the relevant situation to be imagining; the situation to imagine is one where e.g. China or the US has ASI but no one else does for the next few months (years?) at least. Or, more nuanced, one country is deeper into the intelligence explosion than everyone else, even if everyone else also has quite powerful AIs.
Ah, cool.
I was roughly imagining:
- Chinese AI used in Europe. US AI also used in Europe. Deep integration with the physical economies due to the massive complementarities.
So then the Q is why Europe can't quickly use its AI to copy the new physical tech?
And similarly for China, which can use its worse AI (and the US AI as well if the status quo continues).
Also, even absent copying the new tech, the amazing new physical robots will be made on European soil. (But maybe US companies still capture the profits?)
- Chinese AI used in Europe. US AI also used in Europe. Deep integration with the physical economies due to the massive complementarities.
If it's 6mo behind during the intelligence explosion, Chinese AI will be very uncompetitive. Perhaps its main use case will be trying to reverse engineer US products lol.
So then the Q is why Europe can't quickly use its AI to copy the new physical tech?
Let's suppose that the current overall speedup of the US-run intelligence & industrial explosion is "100x normal human speed." Then (I've argued above) unassisted German humans would still be at roughly 1x human speed at the task of reverse-engineering the US-designed products. Which means it takes months if not years, which is too late to matter. If the Chinese AIs are 6 months behind, but still superhuman, then the reverse-engineering process goes faster than 1x but slower than 100x. Depends on takeoff speeds.
How to estimate this?
Well, in AI 2027, when the algo progress speedup is 100x, a six-month-behind AI project would still not yet be at top human level at research (though they'd be almost at the Superhuman Coder Milestone). Now, speed-at-reverse-engineering stuff is different from this, but maybe this is an OK proxy. Eyeballing it, later on there seems to remain something like a 100x gap between the R&D multiplier at time t and the multiplier at time t-6mo.
So yeah. And remember, after reverse-engineering stuff, you then have to contend with the fact that your stuff is probably already obsolete. Even if it takes you only a month or two to reverse-engineer (consistent with having AIs that are only 3 months behind the frontier, at least on AI 2027 takeoff speeds).
Also, even absent copying the new tech, the amazing new physical robots will be made on European soil. (But maybe US companies still capture the profits?)
Yes. Similarly, many amazing new factories and mines and plantations and windmills and railroads were built on African, American, Indian soil by European colonizers. Much of this was peaceful & free-trade-y, e.g. "factories." Doesn't change the bottom line conclusion. (And also, decolonialism happened, but I don't think we should expect an analogous thing to happen in a hypothetical outgrow-the-world industrial explosion scenario.)
Yeah let's think.
If there's an AI-2027-style SIE, then 6 months is a gap of ~6 OOMs of effective compute, which is maybe equivalent to having 1000X more researchers thinking 4X faster and much smarter. (Maybe 2X the gap from top expert to median expert.)
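As a rough decomposition check on that OOMs arithmetic (the split into parallel, serial, and residual "quality" factors is my own illustrative assumption, not something pinned down by the thread):

```python
# Rough decomposition check: does ~6 OOMs of effective compute line up with
# "1000X more researchers thinking 4X faster and much smarter"? The residual
# factor below is what "much smarter" has to account for.
import math

ooms = 6
effective_compute_ratio = 10 ** ooms  # 1,000,000x

parallel = 1000   # 1000X more researchers (assumed)
serial = 4        # thinking 4X faster (assumed)
residual_quality = effective_compute_ratio / (parallel * serial)

print(residual_quality)               # 250.0
print(math.log10(residual_quality))   # ~2.4 OOMs left over for "smarter"
```

So under this split, roughly 2.4 of the 6 OOMs would need to cash out as qualitative intelligence gains rather than quantity or speed.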
That's a big advantage. Much bigger than the US had over China over the last 70 years. So yeah it's pretty plausible that the reverse engineering would be too slow for Germany to catch up using Chinese AI...
Though once the SIE fizzles out Germany will have access to ~equally good AI. And if the industrial explosion has still got many years to go that could give them time to catch up.
There's also the question of how Germany is able to use the US AI. For example, suppose two US projects are neck and neck. Or suppose there's one US project, but multiple downstream 'product' companies that can fine-tune and sell access to the model. In that case, competition could enable German companies to access AI for a wide range of tasks. Including 'R&D' that really involves copying/adapting US tech developed elsewhere. In this case, Germany doesn't have to use worse AI.
OTOH, maybe there's just one US project and it's single-handedly making tech progress in the US and selling its systems abroad. In this case, it could have strong incentive to prevent German companies using its AI to copy US tech. Because it developed that tech itself! This is a case where the US is strongly coordinated because all US economic activity is just one company.
Another scenario. There's one US AGI project. It sells API access to existing companies in the US, at a very healthy profit, and those companies all make amazing new tech and own the IP. In this case, the US project still has incentive to sell its AI to German companies that will use it to copy the tech developed in US companies. (In fact, there's no reason the US companies would be ahead of the German companies in developing the tech in this scenario.) Here it seems like USG needs to step in to make sure US companies keep the profits.
Like, here's one way of putting it. There's definitely a chance that one project outgrows the world. Especially if there's a monopoly on AI within the US with the support of the govt. That project would be internally coordinated and avoid trades that disadvantage it relative to RoW. But suppose one project doesn't outgrow the world. There's then a question of why the larger entity that does outgrow the world is the US. Like, why are other US companies in on the action, but no non-US companies? ASML and TSMC will have a lot to offer! My answer to this question is that USG would have to make specific efforts, significantly restricting trade, to make this happen.
Then the scenario where amazing tech is built in Germany by US companies. Let's put aside manipulation and military exploitation for a second. The GDP produced by these companies is German GDP and the German government can tax it. So, if most physical manufacturing is not on US soil and most GDP is from physical stuff being produced, then >25% GDP will be produced on non-US soil and could be taxed by non-US governments. But yeah, I agree that the analogy of colonization is chilling here.
(sorry, this isn't very well organised. But overall my tentative position is that it is plausible US outgrows the world here, but it seems less certain and more complicated than what appears to be your position)
<3
To be clear, I think it's pretty obvious the US could outgrow the world in the scenario you describe, and am arguing for a stronger claim -- that China or even maybe Israel could outgrow the world if takeoff speeds are like they are in AI 2027 and they start with a 6-month lead in the intelligence explosion and there's no war or trade embargo.
Though once the SIE fizzles out Germany will have access to ~equally good AI. And if the industrial explosion has still got many years to go that could give them time to catch up.
OK, I agree that IF the natural shape of things is for an SIE to fizzle out and then for there to be a period of many years in which an industrial explosion gradually occurs, then yes, perhaps trailing nations could catch up or almost catch up at least, being substantially worse off in relative terms than they were when they started but not completely obsolete. However I don't think that's the natural shape of things. I think the intelligence explosion and industrial explosion will overlap / blur into each other somewhat, and I think the industrial explosion will be fast enough that by the time you've overcome your adversary's six-month lead and managed to get AIs that are almost as superintelligent where it counts, they'll be deep enough into the industrial explosion that they'll have more than 50% of the GWP already (in the relevant sense, i.e. weighted towards usefulness-for-future-growth) and can just cut you off and outgrow you. Like, concretely, suppose Israel is 6 months ahead in the intelligence explosion and has 5% of the world's compute, the US having more than an OOM more. (OK actually that case doesn't work because I think the US would actually catch up pretty fast with that compute advantage...) So suppose Israel has ASI and the US needs 6 more months to get ASI. Unlikely setup yes but suppose it happens. Then I think Israel will probably have some pretty nifty converted factories producing some pretty awesome cheap effective robot designs PLUS effectively unlimited income from various software and IP services and products (drugs, video games, movies, SAAS, consulting, ...) which it will be spending to suck rare minerals and other valuable materials in like a black hole. They'll be months, not years, away from beginning to tile their desert with fusion reactors and strip mines. Or nanobots, or whatever the new economic engine is.
Sure, at some point they need to convert this massive economic advantage into actually conquering the territory of the globe, otherwise the rest of the globe will eventually catch up. (Or they can do the space route as you discuss). But when the US is just trying to convert its factories to build the first robots, Israel will already have super-advanced factories producing high quantities of super-advanced robots, weapons, etc. Not a good time to be the US.
Again referencing my earlier post -- a 20-year lead in military tech over the last century or so has arguably corresponded to a pretty decisive military advantage, decisive enough to let a small nation like Israel beat a large nation like the USA. If 20 is too much of a stretch, think 40. A six month lead in the industrial & intelligence explosions, assuming things are generally going 100x faster, should correspond to the military equivalent of a 50 year lead. Just think about what the military of Israel in year 19XX+50 could do to the military of the USA in year 19XX.
Then the scenario where amazing tech is built in Germany by US companies. Let's put aside manipulation and military exploitation for a second. The GDP produced by these companies is German GDP and the German government can tax it. So, if most physical manufacturing is not on US soil and most GDP is from physical stuff being produced, then >25% GDP will be produced on non-US soil and could be taxed by non-US governments. But yeah, i agree that the analogy of colonization is chilling here.
Yeah, I agree that if the country with the superintelligences allows other countries to tax, they'll be getting a substantial fraction of the profits. And of course there'll also be money rolling in from the payments for raw materials and labor too.
The question is what % of the lightcone they can buy with that money. Property rights over distant galaxies will by default be up to the whims of the superintelligent coalition that builds the spacecraft, not the German government. With all their $, the Germans would either need to get von Neumann probes quickly (but how? they don't have the tech yet...) or rely on the morality of the superintelligences to give them some share, perhaps in return for $$$ or labor or raw materials.
Great - I agree that if you can get to >50% physical capital (as valued by how useful it is for the new supercharged economy) within 6 months of the SIE fizzling out, then you can outgrow the world in the scenario you describe. Sounds like you're more bullish on a very fast industrial explosion than I am -- I think it's hard to know whether we'll get physical capital doubling times of <3 months within a few months of SIE. On longer timelines to an SIE, this seems more plausible as there's more time for robotics to improve in the meantime.
Another blocker -- cooperation from pre-existing companies
I want to mention another blocker to someone outgrowing the world via a very fast industrial explosion. You'll be able to go faster if you draw heavily on the knowledge, skills and machines of existing companies. So you'll need their cooperation. But they might not cooperate unless you give them a big fraction of the economic value. Which would prevent you from outgrowing the world.
Of course, you could "go it alone" and leapfrog the companies that won't do business with you. But then it's even harder to do a very fast industrial explosion. E.g. take ASML and TSMC. I predict it will be much more efficient for the leader in AI to work with those companies (e.g. selling them API access to ASI, or acquiring them) than to leapfrog. If they try to leapfrog, the laggard can team up with them and recover some of the lost lead.
Concretely, let's say OpenBrain gets to AGI first. Then its AIs are combined with ASML's existing knowledge to create EUV++ machines. Are the profits from these machines going to ASML or to OpenBrain? Seems unclear to me. ASML's bargaining position might be strong -- they have a strong monopoly and can wait 6 months and deal with the laggard instead.
So maybe other companies have a lot of economic leverage post-AGI, and they convert that to economic value. And, as discussed, maybe other governments tax the activities happening in their domains.
Will $ be helpful?
But you suggest these "mere $" might not be very useful, if ASI has colonised space and doesn't care. But $ can be used to buy AI and buy crazy new physical tech. So if other actors have lots of $, they can convert that to ability to colonise the stars themselves.
Q about your Israel scenario?
Are you basically assuming a merge between the lab and govt here? And then a nation-wide effort to utilise Israel's existing human and physical capital to kick-start the industrial explosion within its own borders?
Asking because, absent a merge, I'm still not seeing a reason for all actors outside Israel to be left in the dust. The AI company would trade with some Israeli companies and (many more) non-Israeli ones. In which case Israel per se would only outgrow the world if the AI company outgrows the world by itself.
I agree there's lots of uncertainty about what the industrial explosion will look like; AI 2027 was my extremely unconfident best guess. I think this cuts both ways though; I think it's entirely plausible (>5% likely, though less than 50%) that the army of superintelligences in the datacenter could follow the following strategy in less than three months:
(1) Keep doing the intelligence explosion to create vastly superhuman AIs that are basically qualitatively different from human scientists at the task of figuring out how to rapidly design and build nanobots etc.
(2) Build nanobots etc. by doing tons of parallel experiments in various biolabs and physics labs and whatnot around the world (maybe thousands of such facilities could be acquired simply by spending OpenBrain's money?). The theory will have been worked out beforehand by the godlike AI scientists from step 1, and the experiments won't be fucking around, they'll be doing the minimum necessary experiments to efficiently cut down the search space and estimate the relevant parameters to fix the most viable designs, and then building those designs.
Sure sounds wild, sounds sci-fi even. But historically things that sound at least this wild and sci-fi sure do seem to happen sometimes e.g. nukes would have sounded this way to people beforehand, and the industrial revolution ("machines that move themselves and make other machines? Enabling peasants to live like kings after only a half-dozen generations of growth?") probably would have sounded this way to people in 1500 too. e.g. the modern cell phone would have sounded like this to someone in 1925 or even maybe 1975. And importantly, all these past examples of wild things happening happened with mere humans doing the R&D. I think we should firmly reserve >5% credence for "ASIs will basically just be gods as far as we can tell, just like how humans are basically gods from the perspective of dolphins or monkeys."
Re: Cooperation from pre-existing companies: I don't think this is going to really change the basic picture. Taking your example with OpenBrain and ASML. OpenBrain has ASI, no one else does. ASML's internal data is useful for the ASIs, maybe, and ASML's machinery is useful to the ASIs, probably. But the ASIs can probably find a workaround if ASML is obstinate. For example they can hack ASML and steal the data. Or they can do a hostile takeover maybe, and buy ASML. As for the machines, I bet they could offer giant gobs of money to ASML's suppliers to supply OpenBrain instead and then OpenBrain could build their own machines in less than six months. ASML will also be a fallible human institution, vulnerable to OpenBrain's ASI-powered trickery, persuasion, lobbying, and shenanigans (e.g. bribing some of the employees to quit and join OpenBrain, possibly even bribing the BoD).
And ASML is probably one of the best examples you could find, of the closest thing to a monopoly against the power of the AI company. Other stuff (robots, physical manufacturing, raw materials) is already less monopolistic and thus even harder to coordinate against OpenBrain.
Re: money:
But you suggest these "mere $" might not be very useful, if ASI has colonised space and doesn't care. But $ can be used to buy AI and buy crazy new physical tech. So if other actors have lots of $, they can convert that to ability to colonise the stars themselves.
No, the best AIs and physical tech will be kept by OpenBrain in house, and the second-best will be sold by OpenBrain, and the third-best will be way worse (since by hypothesis OpenBrain has ASI and no one else does). You can't use your $ to buy AI and crazy new physical tech unless OpenBrain lets you, and they won't let you if they suspect you'll be able to use it to colonize the stars yourselves, since OpenBrain wants those stars.
Re: Q about Israel: I was imagining close cooperation between the govt and the company, yes. but also, I think it's not necessary, I think the company itself could outgrow the rest of the world combined potentially. (Through trading with it. The company produces (a) cash cow products and (b) new sci-fi industrial tech that bootstraps towards a self-sustaining robot economy with a short doubling time, and it sells (a) to get cash to buy raw materials and physical labor with which to produce (b). After it's produced enough (b), it outgrows the rest of the world combined & all the cash becomes basically worthless.)
Thanks for this!
Yep, I agree that scenario gets >5%!
Agree ASML is one of the trickiest cases for OpenBrain. Though I imagine there are many parts of the semiconductor supply chain with TSMC-level monopolies (though weaker than ASML's). And I don't think hacking will work. These companies already protect themselves from this, knowing that it's a threat today. Data is stored on local physical machines that aren't internet-connected, and in people's heads.
And I think you'll take many months of delay if you go to ASML's suppliers rather than ASML. ASML built their factories using many years' worth of output from suppliers. Even with massive efficiency gains over ASML (despite not knowing their trade secrets) it will take you months to replicate.
I agree that the more OpenBrain have super-strategy and super-persuasion stuff, the more likely they can capture all the gains from trade. (And military pressure can help here too, like with colonialism.)
Also, if OpenBrain can prevent anyone else developing ASI for years, e.g. by sabotage, I think they have a massive advantage. Then ASML loses their option of just waiting a few months and trading with someone else. I think this is your strongest argument tbh.
Biggest cruxes imo:
I'm curious how confident you are a company with a 6 month lead could outgrow the rest of the world by themselves?
Posting a brief exchange between myself and Sam Winter-Levy on this topic, relating to this piece of his.
Sam
Hi Tom,
Sam Winter-Levy here—I'm a fellow at the Carnegie Endowment for International Peace in DC. Really interesting piece on economic growth explosions. It overlaps with some work we're doing at Carnegie on "decisive strategic advantages," just wanted to share some quick reactions in case helpful!
I know you purposefully set aside military considerations, but I'm not sure I agree with the premise that outgrowing the world is a path to a "decisive strategic advantage without acting aggressively towards other countries." In particular, I think that so long as nuclear deterrence and MAD hold, they are a binding obstacle to any alleged DSA, no matter how much AI turbocharges economic growth. Today, the US economy is 15x Russia's and 1000x North Korea's, but the US has no ability to impose "complete domination" over either of them, to put it mildly, nor would it even if AI dramatically boosted the US economy still further—in large part because of Russia and North Korea's ability to credibly threaten to impose massive costs on the US through nuclear escalation on any issue of sufficient importance to either regime.
More broadly, I don't think most claims about the existence of a DSA take nuclear deterrence seriously enough. Before nuclear weapons existed, a state’s ability to defend itself from coercion depended crucially on its economic size, since relative forces mattered and force size/quality depended ultimately on having a large population and an advanced economy. But that's not true once you have a second-strike nuclear capability; nuclear weapons break the connection you emphasize between economic power and the ability to impose your political preferences on other nuclear-armed states. So I think even granting the argument that AGI could lead to some sort of economic explosion for one state, the amount of coercive leverage that would grant that state is likely to be highly limited, nowhere close to the Bostrom DSA definition you use, so long as nuclear deterrence remains intact.
In case of interest, here's a piece we published recently on these issues, coauthored with Nikita Lalwani (former director for tech/nat sec on the Biden NSC). We make a version of the above argument about the importance of MAD for the existence of any alleged DSA, and then lay out how exactly AGI might threaten nuclear deterrence. Would welcome any feedback! We're still tentatively thinking through these DSA issues ourselves.
Best,
Sam
Tom
Hi Sam,
Thanks for this great pushback!
I think you're right that this is an oversight of the piece. But here are some responses:
If a country gains this kind of massive economic advantage, it will gain a massive tech advantage as well that, at some point, is likely to allow it to remove the second-strike capability. This is more speculative, but we are here talking about the limits of technology and 1-month doubling nanobots etc. Still, this at least should be addressed by the post.
I would go further, and say that if we condition on the US growing to become 70-99% of world GDP and growing super-exponentially for a time, then the fact that it can become a superpower in space pretty much neutralizes MAD, because its industry is by and large off-world. A nuclear attack on the US 50 years after the SIE ended would then be basically a non-factor, because lots of US people will be in space and able to annihilate North Korea (as an example).
And there's no nuclear weapons equivalent outside of something like vacuum decay, which is very, very far off from our current state, such that we don't need to worry about it.
But if there’s multiple competing AI companies, the German companies providing the physical inputs might capture most of the gains from trade.
I doubt it. How many competing AI companies will there be? It won't be a perfectly efficient market, and also, superintelligences will be smart enough to coordinate well. There would have to be extremely strong regulation forcing aligned superintelligences to refrain from coordinating to get most of the surplus; otherwise, they'd coordinate and get most of the surplus.
(Analogy: Sure, it's 1500 and Europeans have good boats now and are about to sail all around the world and initiate a great economic growth followed by industrial revolution. But most of the raw materials and labor in the world is outside Europe! And there are multiple competing European companies and nations! So e.g. American and African and Asian polities providing the physical inputs and labor should capture most of the gains from trade.)
Thanks for this!
In this case, they’d need the capital stock to self-replicate once per month. ($1tr * 2^9 = $512tr.) I’d bet against physical capital self-replicating this quickly immediately after AGI – here I estimate that after AGI physical capital will have a ~one year doubling time. But it might be possible, for example if AGI can make very rapid technological progress.
Why look at the number immediately after AGI, here, rather than further into the intelligence explosion? (Where we expect growth speeds to get much faster.) Or earlier in the intelligence explosion, for that matter.
(Also, more minor point: Even if we start the clock at AGI, it's fine if the initial doubling speed is just ~2.4 months, if the rate of production continuously increases so as to double every 3 doublings.)
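A quick sketch of the doubling arithmetic in this exchange. The constant-doubling calculation matches the quote; for the accelerating case, modelling "rate doubles every 3 doublings" as the doubling time halving after every third doubling is just one possible discrete reading, so treat it as an assumption:

```python
# Constant one-month doublings: $1tr of capital -> $512tr after 9 doublings,
# matching the figure in the quoted passage.
capital = 1.0  # trillions of dollars (starting robot capital stock)
for _ in range(9):
    capital *= 2
print(capital)  # 512.0

def total_time(initial_doubling_months: float, doublings: int = 9) -> float:
    """Months needed for `doublings` doublings if the doubling time
    halves after every third doubling (a discrete acceleration model)."""
    t, d = 0.0, initial_doubling_months
    for k in range(doublings):
        t += d
        if (k + 1) % 3 == 0:
            d /= 2
    return t

# Under this discrete schedule, an initial doubling time of ~1.7 months
# completes 9 doublings in ~8.9 months; a smoother acceleration model
# permits a somewhat longer initial doubling time for the same total.
print(total_time(1.7))
```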
Why look at the number immediately after AGI, here, rather than further into the intelligence explosion?
Here was the thought:
If the lab hasn't outgrown RoW by the time RoW develops AGI then we're now in a position where RoW has more physical capital than the lab. This means that, in the next physical doubling, RoW will produce more physical capital than the lab, and so do more learning by doing than the lab has ever done. After that doubling, RoW will then have better tech than the lab and so their physical capital will double more quickly. So, after that doubling, RoW will have both more physical capital and their physical capital will double more quickly. This means they're on a trajectory to always have more physical capital and the lab can't outgrow them.
So this is implicitly assuming that the number of units produced is the hard bottleneck on the technology of physical capital, and the lab's additional AI cognitive labour doesn't help. This is in line with the classic economist view that you bump up against ultimate limits to what can be inferred from data. But I think qualitatively smarter AI could change this a lot. Maybe the lab has much smarter AI than RoW and so can learn much more per unit produced.
So, thinking about it more, I think this calc makes sense if we start the clock at the point when the SIE (software-only intelligence explosion) fizzles out. Because then, once RoW catches up, they'll have ~equal cognitive inputs to the lab. So the lab really does need to overtake RoW on physical capital before this point. (Maybe the lab can get more chips than RoW? But this is a type of physical capital.)
Anyway, accounting for this makes it more plausible that the lab can outgrow RoW. They need the 1-month-doubling-time robots not when they first have AGI, but when the software intelligence explosion has fizzled, which means they have more powerful AI and longer to do robot experiments.
Another reason my calc understates the lab's chance is that there may be higher value from experiments conducted in serial. RoW constructs all their physical capital in parallel in one doubling. Whereas the lab does it over multiple doublings one after the other. That could make a big difference.
How to do better?
The more nuanced way to game this out would be to represent, at each time-step, the physical capital, cognitive labour, and technology (for physical capital) of both the lab and RoW.
Then have a production function for how inputs of physical capital produced and cognitive labour improve technology in each time-step. Probably something like g_A = (K^a * C^b)^lambda * A^(-beta), where a + b = 1, lambda represents the benefits of doing research in serial, and beta accounts for ideas getting harder to find. (This accounts for your other point about the rate of production doubling every 3 doublings. The parameter beta controls whether it's "3" or some other number.)
Then in each timestep simulate each actor's change in physical capital, cognitive labour, and technology.
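A minimal sketch of this kind of two-actor simulation (all functional forms, initial conditions, and parameter values here are illustrative assumptions, not the author's calibration):

```python
def g_A(K_produced, C, A, a=0.5, b=0.5, lam=1.0, beta=0.5):
    # Technology growth rate: g_A = (K^a * C^b)^lambda * A^(-beta),
    # with a + b = 1; lam captures gains from serial research;
    # beta captures ideas getting harder to find.
    return (K_produced ** a * C ** b) ** lam * A ** (-beta)

def step(state, dt=1.0 / 12):
    # One monthly timestep. Output (assumed proportional to A * K) is
    # reinvested as new physical capital; that newly produced capital,
    # together with cognitive labour, drives technology growth.
    K, C, A = state
    produced = 0.3 * A * K * dt                      # new capital this step
    A_next = A * (1 + g_A(produced, C, A) * dt)
    return (K + produced, C, A_next)

# Run the lab and the rest of the world (RoW) side by side for a year.
# (K, C, A) tuples: the lab has better tech, RoW has far more capital.
lab, row = (1.0, 10.0, 2.0), (100.0, 10.0, 1.0)
for _ in range(12):
    lab, row = step(lab), step(row)
```

Extending this with separate chip stocks, trade, and technological diffusion between the two actors is straightforward but omitted here.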
My overall view
Maybe the most plausible scenarios involve ~6 month leads, ~$250b on physical capital, ~1 month robot doubling times at the end of the SIE, and doubling times becoming faster over time.
I made a simple model of the dynamic here. (Still not modelling the gains from serial experiments or the possibility that the lab learns more per unit produced.)
So overall I think I'd still bet against the lab being in a position to do this on the economic fundamentals; but it is more plausible than I'd thought.
Great post.
This sort of superexponential growth vastly increases the amount of energy in the system, and it seems to me that this could very easily be enough to overcome the activation energy required to split groups (e.g., countries) that are generally seen as stable.
If power/wealth becomes much more unevenly distributed within the AGI-owning group (top 1% currently at 67% of total wealth in USA, maybe ~20% of income?), why would they continue to support the rest of the group? Or, why exactly that group and not some other arbitrary group of their choosing? The government enforces/maintains the group boundary. What gives the government power to oppose the elites? The population. If the population is relatively poor, how can they maintain control of the government, and where would its power come from?
If the government cannot enforce the group boundary, decreasing the size of the group can greatly improve the group's ability to prevent diffusion, and can easily make coordination/shared ideology much stronger.
Ideology seems like it could play a major role if groups can be formed/broken at will by elites, and I don't see why democratic/nationalistic ideologies would be favored in this case.
In this scenario, the US leads on AI but its AIs are used extensively in other countries. And new physical technologies are mostly produced in other countries. This makes preventing technological diffusion very hard!
Not really. Remember, these new technologies will be designed by superintelligences. Many of them simply won't be reverse-engineerable by humans because humans won't be smart enough to understand the principles of their operation and design. Others will be, but the manufacturing process will be finicky and tricky and beyond human reach. Still others will be simple enough that humans could eventually figure it out and replicate it -- but by the time they do, it'll be obsolete anyway, because remember we are in the midst of an industrial explosion.
The amount of tech that doesn't fall into one of the above categories will be a small fraction of the total.
As discussed on the other thread, I'm imagining other countries have access to powerful AI as well. Or even equally good AI, if they are sold access or if another actor catches up after the SIE fizzles (as it must).
OK. And that's a crux then, I think that other countries mostly won't have access to similarly-powerful AI before it's too late. I agree they'd catch up eventually if left alone, but I don't expect them to be left alone. If the first faction to get to superintelligence calculates that, six months from now, a rival faction will have similarly-powerful AI, and therefore be able to compete with them during the industrial explosion etc. to divide up the world between them, then they'll think "if we can slow down that rival faction, we won't have to share power over the world with them" and then they'll think "I wonder if there are ways to slow down rival factions whilst we consolidate our advantage and do our industrial explosion... perhaps by using some weapons or political machinations cooked up in the next three months or so?" and they'll probably think of something fairly effective, since they are superintelligences without any significant rivals to contend with.
Do you think the same for a company within the US? That with a 6-month lead, or even just a 3-month lead going off recent trends, it would find a way to sabotage other companies?
I think it's plausible, but:
(I think a re-emerging crux here might be the power of AI's persuasion+strategy skills)
Yes, I do think the same for a company within the US. I think (a) it might be willing to do illegal things (companies do illegal things all the time when they think they can probably get away with it) and (b) some political maneuverings take very little time indeed; think about how much has happened in US politics since Trump took office less than a year ago. Elon's star rose and fell for example, DOGE happened, etc. And this is in the before times; during the singularity there'll be a general sense of crisis and emergency that makes 2025 feel like boring business as usual. A particular move I find all too plausible is "We should consolidate our compute into one big project, that shares model weights and info etc., so that we can go faster and beat china. (and, quietly, in the fine print, the AIs that should run on most of this compute should be the smartest ones and/or the people in charge should be the leadership of the most advanced company.)" In other words, we should basically grab all the compute from rival companies and give it to our project, though legally that's not what's happening and the narrative sugarcoats it.
The part where I am confused is why this scenario is considered distinct from the standard ASI misalignment problem. A superintelligence that economically destroys and subjugates every country except, perhaps, the country where it is based is pretty close to the standard paperclip outcome, right?
Whether I am turned into paperclips or completely enslaved by a US-based superintelligence is a rather trivial difference IMO, and I think it could be treated as another variant of alignment failure.
Very interesting analysis.
Second, the company acquires >50% of the world’s physical capital.
I don't think this would change your argument too much, but it seems that if you had lots of skilled labor, you would not actually need greater than 50% of the world's physical capital to outgrow the rest of the world.
This post is speculative and tentative. I’m exploring new ideas and giving my best guess; the conclusions are lightly held.
Bostrom (2014) says that an actor has a “decisive strategic advantage” if it obtains “a level of technological and other advantages sufficient to enable it to achieve complete world domination”.
One obvious route to a decisive strategic advantage is military dominance.
This post explores a different route. Could the country that leads in AI outgrow the rest of the world economically? If so, they could get a decisive strategic advantage without acting aggressively towards other countries and without breaking any international norms.
I’ll focus here on the economic fundamentals, setting aside considerations like whether other countries would intervene militarily to prevent one country from pulling ahead.
I consider two arguments for why a country could outgrow the rest of the world.
First, the Superexponential Growth Argument. After developing AGI, economic growth may become faster and faster over time – superexponential growth. If countries follow superexponential growth trajectories, an initial economic lead becomes bigger and bigger over time. The leader eventually produces >99% of global output.
I consider a series of objections to this argument and find one of them fairly convincing: laggards might keep up via “technological diffusion”, copying technologies that the leader worked hard to develop.
My tentative conclusion here is that the leading AI country (i.e. the US or China) likely could outgrow the rest of the world, but only if it makes a concerted and well-coordinated effort to prevent technological diffusion. Whether the leading AI country will in fact make such an effort is unclear. The most likely way this happens is probably that a coalition of US+allies extend current export controls on AI chips to outgrow authoritarian countries.
Second, I discuss the Grabbing New Resources Argument. A country could outgrow the world by seizing control of unclaimed resources, especially in space. Less than 1 billionth of the sun’s energy hits earth. If a country dominates the solar system, they’ll dominate economically. A key uncertainty here is whether grabbing space resources involves a winner-takes-all dynamic (e.g. if a country with 60% of world GDP could grab >90% of the solar system’s resources). Absent such a dynamic, grabbing space resources could allow a country to lock in its dominance, but not to become dominant in the first place.
Lastly, an appendix quickly discusses whether just one company could outgrow the rest of the world. This is less likely than a country, but strikingly plausible.
In recent history, economic growth has been roughly exponential. Countries have doubled their output roughly every 30 years.
Under exponential growth, the relative size of different countries’ GDP is constant over time. If US GDP is 8X bigger than UK GDP today, and both countries grow at the same exponential rate, then this 8X gap won’t change over time.
As we transition to AGI (including automating both cognitive work and physical labour), there’s good reason to think that growth will become superexponential, with the growth rate increasing continuously over time. The key argument here is that AGI allows us to invest economic output into creating more labour, which unlocks a powerful feedback loop of more output → more labour → more output… This new feedback loop (in combination with already-existing economic feedback loops) drives superexponential growth. (Though most economists don’t expect AGI to cause a significant acceleration in growth – see discussion here and here.)
Let’s start with a simplified set-up where there’s no trade and no technology diffusion between countries. In this scenario, if the growth of each country becomes superexponential, then an initial lead in GDP will amplify over time as the bigger country will grow more quickly.
Naively, whichever country starts off the biggest will eventually outgrow the rest of the world, becoming an arbitrarily large fraction of world GDP. (Though we’ll see below that this scenario is unrealistic in many respects.)
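A toy numerical illustration of this amplification (the functional form and numbers are assumptions for illustration, not a forecast): if each economy's per-period growth rate is proportional to its own size, a simple form of superexponential growth, then the initially larger economy's share of combined output approaches 100%.

```python
def leader_share(y_lead, y_rest, k=1e-4, cap=1e6):
    # Each economy's per-step growth rate is k * (its own output),
    # so bigger economies grow faster. Run until the leader hits `cap`
    # and report its share of combined output at that point.
    while max(y_lead, y_rest) < cap:
        y_lead *= 1 + k * y_lead
        y_rest *= 1 + k * y_rest
    return y_lead / (y_lead + y_rest)

# An economy starting merely 2x larger ends with essentially all output.
share = leader_share(2.0, 1.0)
```

By the time the larger economy has grown a million-fold, the smaller one has barely doubled, so the leader's share is within a rounding error of 100%.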
For this superexponential growth dynamic to allow a country to become the vast majority of world GDP, growth would have to eventually become very fast. Economic output would have to double every couple of years.[1] But it seems plausible to me that, in the limit of technology, both cognitive labour and physical infrastructure could double much faster than this, in mere weeks.
Ok, that’s the basic Superexponential Growth Argument for thinking that one country could outgrow the world. Now let’s go through a series of objections.
In the previous scenario, the growth rate of each individual country depended on its own output. Bigger countries grew faster. That would make sense if countries were self-contained economic units, with the inputs to growth coming only from within.
But in fact, countries trade. If the US and UK trade extensively, then the relevant economic unit is not one individual country, but the combined economy of US+UK. Rather than “bigger countries grow faster”, we’ll see “bigger trading blocs grow faster”.
For example, the US is the world’s biggest economy at ~25% of GDP. Let’s assume it stays at 25% as the world enters a period of superexponential growth. Accounting for trade, the US couldn’t “go it alone” and outgrow the world. The rest of the world could form a trading bloc with 75% of GDP and outgrow the US.[2] (In this toy example, all the world simultaneously gets access to AI technology that allows it to start growing superexponentially, ignoring the fact that some countries lead on AI.)
This doesn’t mean that outgrowing the rest of the world is impossible. It just means that you need to start off with >50% of world GDP to do it. In this case, you could simply not trade with the remaining <50% and still ultimately outgrow them. And then if you do want to trade (because there are still big gains from trade), you have a strong BATNA and could in theory only accept trades that maintain your ability to outgrow the rest of the world (though we’ll see below that in practice this might be tricky due to technological diffusion).
So, could one country outgrow the world? It’s quite plausible that the US could rise to >50% of global output given its lead on AI. During the industrial revolution, Britain’s share of world GDP increased by 8X;[3] a similar increase during the AI transition would put the US at about 70%.[4] And similarly, if China ends up leading on AI then they might rise to >50%, especially if developing superintelligence requires a massive industrial buildout.[5]
Even if no individual country has >50% of GDP, a trading bloc might do so. US + allies already have >50% of GDP.[6] So they could form a trading bloc and outgrow the rest of the world.
That’s the first objection. The Superexponential Growth Argument still goes through, but we need one country (or trading bloc) to start with >50% of GDP. That’s a lot, but it’s still a plausible assumption. On to the next objection!
The economic models I’ve been using are obviously very oversimplified. Like many economic models, they treat “output” as a simple scalar quantity. Whereas in reality, there are many distinct types of output (roads, EUV machines, solar panels, robots) and you need many such outputs to sustain growth.
And, importantly, no one country can produce every type of essential output. For example, the semiconductor supply chain is incredibly complicated, with many distinct essential steps that are performed in different countries across the globe. No one country could make semiconductors all by themselves without a massive short-term productivity hit.
How does this affect the analysis?
It means that the gains from trade discussed above will likely be very large. Cutting off trade entirely requires recreating essential steps of production from scratch, with significant impact on total output in the short-term. Imagine the US having to rediscover all the progress made by ASML before it can produce more chips. So a sudden “no-trade” scenario is unlikely.
This doesn’t mean one country couldn’t outgrow the world. Again, consider the US and imagine that after AGI it’s >50% of GDP. If it suddenly ceases trade with the rest of the world, the outcome might be that all countries take a 3X short-term hit to output, still leaving the US with >50% of world output and on track to outgrow the world. With that strong a BATNA, the US might negotiate a trade deal that avoids that short-term hit to output for everyone, but leaves the US with the ability to ultimately outgrow the world. Or the US might gradually decouple from the rest of the world, such that its output never falls in absolute terms but its growth slows (relative to the counterfactual without decoupling). Gradual decoupling seems especially plausible.
It might seem unlikely that a country would accept any significant counterfactual decrease in its short-term GDP, even for the promise of future economic dominance. But GDP growth will already be unprecedentedly high. And the cost may be very short lived. If GDP eventually doubles every year or even every week, it will not be long before the country’s economy has recovered and it has become economically dominant. A few weeks of slower growth may seem a very small price to pay for economic dominance over rival nations!
That’s the second objection. It suggests that cutting off trade with other countries might be quite costly. But it’s realistic that a country would pay that cost. It could be a relatively small, temporary cost, well worth the benefit of economic dominance. And, with this strong negotiating position, a leading country might negotiate a trade deal where it avoids this cost entirely and can still outgrow the world. So I’m not convinced by this objection.
If you’re on the technological frontier, you have to do the difficult work of making scientific and technological discoveries. R&D is hard. By contrast, if you’re behind the frontier, you can potentially just steal or copy the new technologies that others have already discovered.
Indeed, during the 20th century many countries saw rapid “catch up growth”, where countries behind the economic frontier like China and Japan grew much more quickly than those at the frontier (e.g. the US) by adopting existing technologies.
More recently, the Chinese model DeepSeek-R1 was developed by copying (and improving upon) the training techniques developed by US companies. And China will no doubt benefit from TSMC’s innovations when developing its own semiconductor supply chain.
The same forces that have historically driven catch-up growth will make it harder for one country to outgrow the world. The leading country must discover new technologies from scratch; other countries can just copy their discoveries.
Of course, if the leading country is specifically trying to outgrow the world (rather than to just increase their own absolute level of GDP[7]), they could take steps to block tech diffusion. They could guard new technologies much more closely.
I don’t know how hard this would be. But it might be extremely hard. Even just OpenAI’s announcement of o1, with the graph of performance vs inference compute, made o1-style systems much easier to reproduce. If you sell API access to your AI, other countries can train their models on your trajectories. Blocking spies is hard. Preventing hacking is hard. For physical artefacts like chips or phones or robots, it seems unrealistic to prevent another country from obtaining any copies that they could disassemble and learn from.
At the very least, blocking technological diffusion would require a large government effort. Suppose the US develops AGI. Individual US businesses will compete to sell new technologies, both digital services and physical artefacts, to the broader global market. The US government would need to block many such sales, even though those sales would increase US GDP. (Blocking the sales would decrease US GDP but increase the US’ fraction of future world GDP by preventing technological diffusion.[8])
If the US was one perfectly coordinated actor that specifically wanted to outgrow the world, then perhaps it would make this large effort. But the US is not (currently!) coordinated in this way. Competitive dynamics within the US will drive technological diffusion, making it harder for the US to outgrow the world.
Further, a strongly coordinated effort to outgrow the rest of the world would conflict with many US values. People in the US value prosperity, freedom, liberalism, justice and democracy. Forcibly excluding other democratic countries from trade so that the US can dominate, and making the whole world (including the US) poorer in the process, would be unappealing to many elements of US society.
One reason the US might make a strongly coordinated effort to outgrow the rest of the world is if a small group stages a coup. The group might be generally power-seeking and so explicitly aim to outgrow the world. They could also obtain an unprecedented degree of control over the US by having obedient AIs automate the government and broader economy. With this control, they could block trades that enable technological diffusion.
Another possibility is that a trading bloc consisting of US + allies coordinate to outgrow authoritarian countries for ideological or national security reasons. This has already started to some extent with export controls on AI chips. If the US + allies restrict access to AGI, that could be sufficient for them to outgrow the world. And likewise, if China ultimately leads on AI then a China-led trading bloc might outgrow the world.
That’s the third objection. Preventing tech diffusion would in my opinion require a large and coordinated effort. That might happen if a small group stages a coup and explicitly tries to outgrow the world. Or it might happen if a US-led bloc tries hard to outgrow authoritarian countries.
This is the part of this piece where I have the most uncertainty. I don’t know how strong and rapid the forces of technological diffusion will be in a post-AGI world. I don’t know how much effort it would take to block those forces. And I don’t know how much effort a country will make. The box below has some additional thoughts on why I expect a lot of technological diffusion by default.
Let’s consider a scenario where the US leads on AGI. Here’s why I tentatively expect significant technological diffusion.

The US will have abundant cognitive labour. That cognitive labour will be strongly complementary with other economic inputs like human manual labour, physical capital, and raw materials. Those other physical inputs are very much distributed worldwide. So getting the most economic value out of AGI will involve significant trading with other countries: US AI companies will contribute their cognitive labour; other countries will contribute their human manual labour, physical capital and raw materials. There will be very large gains from trade, and US AI companies will compete with each other fiercely to make these trades.

In practice, US-built superintelligent AI systems will instruct human workers in (e.g.) Germany on how to use existing German factories and machines to build new and improved physical technologies like robots. If there’s just one US AI company, they might sell cognitive labour at monopoly prices and extract most of the gains from trade. But if there are multiple competing AI companies, the German companies providing the physical inputs might capture most of the gains from trade. Either way, the manufacturing will take place on German soil. The US government could ban AI companies from selling their services to companies abroad, but this would massively reduce their revenues and risk US AI companies losing business to Chinese AI companies.

For this reason, I expect most post-AGI manufacturing to occur in non-US countries, even assuming the US has a big lead on AGI. The majority of relevant physical inputs (human workers, existing factories) are not in the US! And that means the new suite of amazing physical technologies (self-replicating robots) will mostly be located outside of the US. This “outside of the US” dynamic will self-perpetuate, with (e.g.) each generation of self-replicating robots starting out distributed across the world and building the next generation to be similarly distributed.

In this scenario, the US leads on AI but its AIs are used extensively in other countries. And new physical technologies are mostly produced in other countries. This makes preventing technological diffusion very hard!

There are many further questions here. Could US companies capture most of the economic surplus even if physical production is located abroad? Could they then, over time, buy up most of the world’s physical capital, so that US companies ultimately own both the cognitive and non-cognitive inputs to production? Could US companies instruct their AIs not to help non-US companies copy the new technologies, thereby preventing technological diffusion?
Now I’ll consider a final objection.
Imagine there are 100 people, each of whom control an equal fraction of GDP. The Superexponential Growth Argument tells us that 51 of them could get together, make a trading bloc, and completely outgrow the other 49.
So now 49 people have fallen into economic irrelevance and 51 remain. What’s then to stop 26 of the remaining people repeating the process? They could get together and outgrow the other 25. And once they’ve done that, the process could repeat again!
We might expect people to anticipate all of this in advance. They might prefer a regime where no faction ever outgrows the rest to a regime where they initially increase their share of world GDP but are later made economically irrelevant.
Similarly, a trading bloc of countries might worry that if they outgrow the rest of the world, they’d be setting a precedent for a subset of the bloc to later break away.
This is an interesting objection. But there’s a few ways it could fail:
Points #2 and #4 seem especially compelling to me.
The table below summarises the basic Superexponential Growth Argument and the objections that I’ve considered.
Consideration | Implication for whether a country could outgrow the world after AGI |
Basic argument | Whichever country has the largest GDP can outgrow the world. |
Objection: trade | If a country (or trading bloc) has >50% of GDP, they can outgrow the world. It’s plausible that the US will have >50% after it develops AGI. |
Objection: no country is self-sufficient | If a country (or trading bloc) has >50% of GDP and is willing to sacrifice GDP in the short term, they can outgrow the world. |
Objection: tech diffusion | If a country (or trading bloc) has >50% of GDP, is willing to sacrifice GDP in the short term, is strongly internally coordinated and tries hard to prevent technological diffusion, they can outgrow the world. This could happen if the US+allies try to outgrow authoritarian countries for natsec or ideological reasons, extending existing export controls on AI chips. (Or vice-versa with China+allies.) It could also happen if a small group stages a coup in the US. |
Objection: setting a dangerous precedent | Countries might not worry about the precedent, e.g. because they can credibly commit to not repeat the process, or because they’re united by a shared ideology. And a country’s government can prevent internal splinter groups from breaking off. |
Table 1: A summary of the different considerations about whether a country could outgrow the world
Overall, I think the Superexponential Growth Argument is pretty strong, especially for a trading bloc. But I don’t know if it succeeds because the technology diffusion objection seems strong.
Now I’ll turn to another distinct argument for thinking that a country could outgrow the world – the Grabbing New Resources Argument.
To outgrow the world, a country could simply seize control of unclaimed resources like the high seas, patents, and (especially) non-earth-based solar energy. Less than 1 billionth of the sun’s energy lands on earth. When technology makes it feasible to enter space and harness all the sun’s energy, that could increase output by a factor of 1 billion (assuming raw materials aren’t a bottleneck). A country could use a temporary technological, economic or military lead to grab all the resources of the solar system. This would leave them controlling >99.9% of resources, an extremely strong position from which they could plausibly ensure they eventually control all resources beyond the solar system.
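The “billionth” figure follows from geometry, using standard astronomical values: the fraction of solar output intercepted by earth is just earth’s cross-sectional disc divided by the sphere at earth’s orbital radius.

```python
import math

L_sun = 3.8e26        # total solar power output, watts
d = 1.496e11          # mean earth-sun distance, metres
R_earth = 6.371e6     # earth radius, metres

flux = L_sun / (4 * math.pi * d ** 2)        # ~1360 W/m^2 at earth's orbit
intercepted = flux * math.pi * R_earth ** 2  # power hitting earth's disc
fraction = intercepted / L_sun               # ~4.5e-10: under a billionth
```

Note the fraction simplifies to R_earth^2 / (4 d^2), so the solar luminosity cancels out of the final ratio.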
One key question here is: could a country seize >99% of the solar system’s energy without already being >99% of world GDP?
You might expect that, if the US was 60% of world GDP, then they’d only be able to grab 60% of new resources. Especially if other countries anticipate that, if they let the US grab everything, the US will become strong enough to completely dominate them. If so, the process of grabbing new resources won’t extremise world GDP at all.
On the other hand, the US might be able to credibly commit that they won’t infringe on other countries’ sovereignty. And there might be first-mover advantages, or winner-takes-all dynamics in grabbing space resources. I won’t try to settle this here.
Even if grabbing new resources doesn’t extremise world GDP – even if each country grabs the same fraction of space resources as their GDP – still, grabbing new resources could play a critical role. It could make an otherwise temporary imbalance in GDP permanent. Suppose that, due to the dynamics of the Superexponential Growth Argument, the US is far ahead on technology and so is 90% of world GDP. Absent grabbing new resources, they will eventually hit the ultimate limits on technology and stop growing. Other countries will then catch up. Their large lead in GDP will be temporary. But suppose instead that they grab the resources of the solar system while they’re 90% of GDP. This would convert their temporary technological lead into a permanent lead in the fraction of resources that they control.
So the Grabbing New Resources Argument shows two things:
Outgrowing the world is an interesting path to a decisive strategic advantage because it doesn’t require aggressive behaviour towards other nations.
The Superexponential Growth Argument is pretty strong. It’s plausible that the country that leads on AGI will end up as >50% of world GDP, especially if they prevent other countries from developing near-frontier AI. From there, they could outgrow the rest of the world, but only if they’re able to sufficiently block technological diffusion. This might take a very large effort and a significant degree of internal coordination. That effort might be made by a US-led trading bloc trying to exclude authoritarian countries, or by a power-seeking group that has staged a coup. Ultimately, I’m pretty unsure whether technological diffusion will prevent the US (or a US-led bloc) becoming >90% world GDP.
Then the Grabbing New Resources Argument comes in. The US will likely be a large fraction of world GDP and have a decent technological and military lead. It could likely make this lead permanent, and perhaps increase its fraction of GDP further, by grabbing the resources of the solar system.
As a reminder, I’ve purposefully set aside non-economic considerations, like whether a country could gain dominance militarily, or whether other countries would intervene militarily to stop a country from outgrowing the world.
This isn’t likely, but does seem surprisingly plausible to me. Here’s how it could happen.
First, a frontier AI company establishes a monopoly on frontier AI. The leading company might get a temporary lead by being the first to automate AI R&D and having a spurt of rapid algorithmic progress. They might embed this lead by buying up the vast majority of AI compute – e.g. because their far-superior demos attract more investment, or because they can make significantly more productive use of that compute with their superior algorithms. Alternatively, they could embed the lead by lobbying the government for favourable regulation or even by sabotaging their rivals’ development efforts (e.g. cyber attacks).
Maybe these strategies allow the company to maintain their monopoly on frontier AI for months or years, during which time they fiercely lobby the government (pointing out that disruption would have severe economic consequences), fight in court to avoid antitrust actions, and try to make the government overreliant on their services.
If the company gains this monopoly then they will ultimately control ~100% of the world’s quality-adjusted cognitive labour.[9] To outgrow the world, they need to be able to produce >50% of the world’s GDP by themselves. But GDP isn’t produced by cognitive labour alone – you need complementary physical actuators. This brings us to the company’s next step.
Second, the company acquires >50% of the world’s physical capital. This is a big lift. Physical capital, and the know-how of how to produce it, is currently highly distributed. The company needs to aggressively leverage its advantage in cognitive labour to achieve this.
We can think of this as a race against time. Within a few months or years, the company may lose its monopoly on cognitive labour – another company may automate AI R&D and acquire a similar amount of computer chips.[10] Before that time, the company needs to acquire more physical capital than the rest of the world combined. They can’t buy the physical capital directly – even spending $1 trillion / year (which is more than USG spends on physical capital!) wouldn’t be enough to overtake the rest of the world, which has a total productive physical capital stock of ~$400 trillion.[11]
So the critical question is whether the company could quickly build physical capital that self-replicates very rapidly – rapidly enough that the company controls >50% of the world’s physical capital by the time they lose their edge on cognitive labour. Let’s generously assume that they acquire $1 trillion of physical capital and then (having acquired it) have a further 9 months to grow it to above the $400 trillion of physical capital controlled by the rest of the world. In this case, they’d need the capital stock to self-replicate once per month. ($1tr * 2^9 = $512tr.)
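As a rough check on this arithmetic, the required self-replication rate follows directly from the head start and the target capital stock. A minimal sketch, using the figures assumed above ($1 trillion starting stock, $400 trillion rest-of-world stock, 9-month window):

```python
import math

def required_doubling_time(start_capital_tr, target_capital_tr, months_available):
    """Doubling time (in months) needed to grow start_capital_tr
    past target_capital_tr within months_available."""
    doublings_needed = math.log2(target_capital_tr / start_capital_tr)
    return months_available / doublings_needed

# Figures from the text: $1tr starting stock, $400tr target, 9 months.
t = required_doubling_time(1, 400, 9)
print(f"required doubling time: {t:.2f} months")  # ~1.04 months

# Sanity check: doubling once per month for 9 months gives $512tr > $400tr.
print(1 * 2**9)  # 512
```

So under these assumptions the company’s capital stock needs to double roughly once a month, as stated.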
This toy spreadsheet model allows you to play around with the assumptions for this calculation.
I’d bet against physical capital self-replicating this quickly immediately after AGI – here I estimate that after AGI physical capital will have a ~one year doubling time. But it might be possible, for example if AGI can make very rapid technological progress.
Edited to add: I now think the right AI capability to consider for this thought experiment is not AGI but the capability we have just after the software intelligence explosion. As a result, I think it’s more plausible that physical capital will double sufficiently quickly for the company to outgrow the world. See here for discussion.
During this second step of acquiring physical capital, the company would again have to lobby hard to resist government intervention. (And of course, the government would have very strong national security reasons to intervene, as the company would be gaining significant industrial might that could be converted into military power.)
Compared to a country, a company is more likely to be well coordinated internally and so more likely to only do trades that help it outgrow the world. Technological diffusion will be less of a problem.
Overall, I think it’s plausible a company could outgrow the world, but unlikely. They’d need to establish a strong monopoly on frontier AI and quickly develop very powerful self-replicating physical technologies, all without government intervention.
Thanks very much for comments from Damon Binder, Max Dalton, Ryan Greenblatt, Rose Hadshar, Basil Halperin, Tom Houlden, Chad Jones, William MacAskill, Fin Moorhouse, Phil Trammell, and Lizka Vaintrob.
This was created by Forethought. See our website for more research.
Why every couple of years? Suppose two countries are both following the same superexponential growth trajectory. The leader starts off with its GDP 50% bigger than the laggard. Then, at today’s 3% growth rate, the leading country is 14 years ahead. As both countries progress along the same superexponential trajectory, the leader will remain 14 years ahead. But, as growth accelerates, the size of its GDP lead increases. If GDP eventually doubles every 2 years then the leader will be 7 doublings ahead of the laggard – a factor of 128. The leader would produce >99% of their combined GDP.
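These numbers can be reproduced directly. A quick sketch, using the footnote’s assumptions (50% GDP gap, 3% annual growth, eventual 2-year doubling time):

```python
import math

# Leader starts with GDP 50% larger; at 3%/year growth, how many years ahead?
lead_years = math.log(1.5) / math.log(1.03)
print(f"lead: {lead_years:.1f} years")  # ~13.7, i.e. about 14 years

# If growth accelerates to one doubling every 2 years, that same ~14-year
# time lead becomes a lead measured in GDP doublings:
doublings_ahead = 14 / 2        # 7 doublings
gdp_ratio = 2 ** 7              # leader's GDP is 128x the laggard's
leader_share = gdp_ratio / (gdp_ratio + 1)
print(f"{doublings_ahead:.0f} doublings, ratio {gdp_ratio}x, share {leader_share:.1%}")
```

The constant time lead translating into an ever-growing GDP ratio is the crux: 128/129 is just over 99%.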
Couldn’t the US trade with the individual countries in this trading bloc in such a way that US GDP rises above 50%? I’m skeptical. Suppose the US has 25% of world GDP, and the rest of the world are trading with each other. If the US doesn’t trade with the rest of the world, its fraction of world GDP will shrink, eventually to 0%. This is not a strong BATNA from which the US can negotiate trades that significantly increase its fraction of world GDP. In addition, there’s a very natural and fair-seeming deal in which the gains from trade are split so as to keep countries’ shares of world GDP constant. And indeed, over the past 100 years the US hasn’t traded its way to a majority of world GDP.
Between 1500 and 1900 Britain’s share of world GDP increased from 1% to 8% – see table 1.
Currently the ratio of GDP US:RoW is 1:3. If US GDP increased by a factor of 8 then this would be 8:3. The US would produce 72% of world GDP.
China produces around 20% of world GDP, so the ratio of GDP between China and the rest of the world is currently 1:4. Increasing by a factor of 8 would be 8:4, or two thirds of world GDP.
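The share arithmetic in these two footnotes follows one formula: if a country starts with share s of world GDP and its GDP grows by a factor g while the rest of the world’s GDP is unchanged, its new share is g·s / (g·s + (1 − s)). A quick sketch:

```python
def new_share(s, g):
    """World-GDP share after a country's GDP grows by factor g
    while the rest of the world's GDP is unchanged."""
    return g * s / (g * s + (1 - s))

# US: 25% share, 8x growth -> 8:3 ratio -> ~72.7% of world GDP.
print(f"US:    {new_share(0.25, 8):.1%}")
# China: 20% share, 8x growth -> 8:4 ratio -> two thirds of world GDP.
print(f"China: {new_share(0.20, 8):.1%}")
```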
Wait, wouldn’t outgrowing the world be the best way to maximise your country’s GDP? Not in the short term. Maximally increasing your GDP involves trades which lead to technological diffusion. The trades make your country richer, and make the global economic pie bigger, but they decrease your country’s fraction of the pie. If you’re specifically trying to outgrow the world, you’d avoid these trades. You’d be poorer counterfactually, but a bigger fraction of world GDP. And, in the long-run, you might eventually be richer if world GDP hits a plateau and you have a bigger fraction at that plateau. (Though you might not be, if continued trade allows the world to reach a higher plateau, as it does with today’s technology.)
Wait, didn’t we say earlier that because the US’ BATNA here is outgrowing the world, they should be able to find a win-win trading deal where they still outgrow the world? Yes, but there we were imagining that the US and the rest of the world (RoW) were both unified actors. In that case, the US could in principle share technological insights with RoW at a high price, both US and RoW would benefit, and the US would still outgrow RoW. But in this paragraph we are discussing deals between a US firm and a RoW firm. Such deals have a large positive externality for RoW due to the technological diffusion: not only does the recipient firm get to use the technology directly, but RoW firms also learn more about how to create that technology themselves. Such firm-to-firm deals help the US firm and the RoW firm directly, but the externality reduces the US’ share of future global GDP. So, to outgrow the world, the US would need to ban such deals and only sell its technology when RoW pays the US for the positive externality (which should be possible in principle, but would require a lot of very impressive coordination).
AI cognitive labour will eventually dwarf human cognitive labour, and the company would control ~100% of AI cognitive labour, so they’d control ~100% of all cognitive labour.
The company could try to maintain their monopoly indefinitely by continually buying up the world’s supply of compute or by sabotaging other projects. If it succeeded, outgrowing the world would be much easier.
Based on ~$100 trillion world GDP and a capital-to-GDP ratio of 4.