Thanks for this!
Yep, I agree that scenario gets >5%!
Agree ASML is one of the trickiest cases for OpenBrain. Though I imagine there are many parts of the semiconductor supply chain with TSMC-level monopolies (which are weaker than ASML's). And I don't think hacking will work. These companies already protect themselves against it, knowing it's a threat today: data is stored on local physical machines that aren't internet-connected, and in people's heads.
And I think you'll incur many months of delay if you go to ASML's suppliers rather than ASML itself. ASML built its factories using many years' worth of output from those suppliers. Even with massive efficiency gains over ASML (despite not knowing their trade secrets), it will take you months to replicate.
I agree that the more OpenBrain has super-strategy and super-persuasion capabilities, the more likely it can capture all the gains from trade. (And military pressure can help here too, as with colonialism.)
Also, if OpenBrain can prevent anyone else from developing ASI for years, e.g. by sabotage, I think they have a massive advantage. Then ASML loses the option of just waiting a few months and trading with someone else. I think this is your strongest argument tbh.
Biggest cruxes imo:
I'm curious how confident you are that a company with a 6-month lead could outgrow the rest of the world by itself?
As discussed on the other thread, I'm imagining other countries have access to powerful AI as well. Or even equally good AI, if they are sold access or if another actor catches up after the SIE fizzles (as it must).
Great - I agree that if you can get to >50% of physical capital (as valued by how useful it is for the new supercharged economy) within 6 months of the SIE fizzling out, then you can outgrow the world in the scenario you describe. Sounds like you're more bullish on a very fast industrial explosion than I am -- I think it's hard to know whether we'll get physical capital doubling times of <3 months within a few months of the SIE. On longer timelines to an SIE, this seems more plausible, as there's more time for robotics to improve in the meantime.
Another blocker -- cooperation from pre-existing companies
I want to mention another blocker to someone outgrowing the world via a very fast industrial explosion. You'll be able to go faster if you draw heavily on the knowledge, skills and machines of existing companies. So you'll need their cooperation. But they might not cooperate unless you give them a big fraction of the economic value. Which would prevent you from outgrowing the world.
Of course, you could "go it alone" and leapfrog the companies that won't do business with you. But then it's even harder to do a very fast industrial explosion. E.g. take ASML and TSMC. I predict it will be much more efficient for the leader in AI to work with those companies (e.g. selling them API access to ASI, or acquiring them) than to leapfrog. If they try to leapfrog, the laggard can team up with them and recover some of the lost lead.
Concretely, let's say OpenBrain gets to AGI first. Then its AIs are combined with ASML's existing knowledge to create EUV++ machines. Are the profits from these machines going to ASML or to OpenBrain? Seems unclear to me. ASML's bargaining position might be strong -- they have a strong monopoly and can wait 6 months and deal with the laggard instead.
So maybe other companies have a lot of economic leverage post-AGI, and they convert that to economic value. And, as discussed, maybe other governments tax the activities happening in their domains.
Will $ be helpful?
You suggest these "mere $" might not be very useful if ASI has colonised space and doesn't care. But $ can be used to buy AI and crazy new physical tech. So if other actors have lots of $, they can convert that into the ability to colonise the stars themselves.
Q about your Israel scenario?
Are you basically assuming a merger between the lab and government here? And then a nation-wide effort to utilise Israel's existing human and physical capital to kick-start the industrial explosion within its own borders?
Asking because, absent a merger, I'm still not seeing a reason for all actors outside Israel to be left in the dust. The AI company would trade with some Israeli companies and (many more) non-Israeli ones. In which case, Israel per se would only outgrow the world if the AI company outgrows the world by itself.
Yeah let's think.
If there's an AI-2027-style SIE, then 6 months is a gap of ~6 OOMs of effective compute, which is maybe equivalent to having 1,000X more researchers thinking 4X faster and much smarter. (Maybe 2X the gap between a top expert and the median expert.)
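As a sanity check on those numbers, here's one way the 6-OOM gap could decompose, treating the 1,000X researcher count and 4X speed-up from above as given and attributing the residual to "smarter". The split is illustrative, not a claim about how the gap actually divides:

```python
import math

# Hypothetical decomposition of a ~6-OOM effective-compute gap into
# parallel researchers, serial speed, and a residual "quality" factor.
gap_ooms = 6.0

researchers_ooms = math.log10(1000)   # 1,000X more researchers -> 3 OOMs
speed_ooms = math.log10(4)            # thinking 4X faster -> ~0.6 OOMs
quality_ooms = gap_ooms - researchers_ooms - speed_ooms

print(f"researchers: {researchers_ooms:.1f} OOMs")
print(f"speed:       {speed_ooms:.1f} OOMs")
print(f"'smarter':   {quality_ooms:.1f} OOMs")
```

So on these numbers, roughly 2.4 OOMs of the gap would have to come from the AIs being qualitatively smarter rather than merely more numerous or faster.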
That's a big advantage. Much bigger than the US had over China over the last 70 years. So yeah, it's pretty plausible that reverse engineering would be too slow for Germany to catch up using Chinese AI...
Though once the SIE fizzles out, Germany will have access to ~equally good AI. And if the industrial explosion still has many years to go, that could give them time to catch up.
There's also the question of how Germany is able to use the US AI. For example, suppose two US projects are neck and neck. Or suppose there's one US project, but multiple downstream 'product' companies that can fine-tune and sell access to the model. In that case, competition could allow German companies to access AI for a wide range of tasks, including 'R&D' that really involves copying/adapting US tech developed elsewhere. In this case, Germany doesn't have to use worse AI.
OTOH, maybe there's just one US project and it's single-handedly making tech progress in the US and selling its systems abroad. In this case, it could have a strong incentive to prevent German companies from using its AI to copy US tech, because it developed that tech itself! This is a case where the US is strongly coordinated, because all US economic activity runs through one company.
Another scenario: there's one US AGI project. It sells API access to existing companies in the US at a very healthy profit, and those companies all make amazing new tech and own the IP. In this case, the US project still has an incentive to sell its AI to German companies that will use it to copy the tech developed by US companies. (In fact, there's no reason the US companies would be ahead of the German companies in developing the tech in this scenario.) Here it seems like the USG needs to step in to make sure US companies keep the profits.
Like, here's one way of putting it. There's definitely a chance that one project outgrows the world, especially if there's a monopoly on AI within the US with the support of the government. That project would be internally coordinated and would avoid trades that disadvantage it relative to RoW. But suppose one project doesn't outgrow the world. There's then a question of why the larger entity that does outgrow the world is the US. Like, why are other US companies in on the action, but no non-US companies? ASML and TSMC will have a lot to offer! My answer is that the USG would have to make specific efforts, significantly restricting trade, to make this happen.
Then there's the scenario where amazing tech is built in Germany by US companies. Let's put aside manipulation and military exploitation for a second. The GDP produced by these companies is German GDP, and the German government can tax it. So, if most physical manufacturing is not on US soil and most GDP is from physical stuff being produced, then >25% of GDP will be produced on non-US soil and could be taxed by non-US governments. But yeah, I agree that the analogy to colonization is chilling here.
(Sorry, this isn't very well organised. But overall my tentative position is that it's plausible the US outgrows the world here, though it seems less certain and more complicated than what appears to be your position.)
Posting a brief exchange between myself and Sam Winter-Levy on this topic, relating to this piece of his.
Sam
Hi Tom,
Sam Winter-Levy here—I'm a fellow at the Carnegie Endowment for International Peace in DC. Really interesting piece on economic growth explosions. It overlaps with some work we're doing at Carnegie on "decisive strategic advantages," just wanted to share some quick reactions in case helpful!
I know you purposefully set aside military considerations, but I'm not sure I agree with the premise that outgrowing the world is a path to a "decisive strategic advantage without acting aggressively towards other countries." In particular, I think that so long as nuclear deterrence and MAD hold, they are a binding obstacle to any alleged DSA, no matter how much AI turbocharges economic growth. Today, the US economy is 15x Russia's and 1000x North Korea's, but the US has no ability to impose "complete domination" over either of them, to put it mildly, nor would it even if AI dramatically boosted the US economy still further—in large part because of Russia and North Korea's ability to credibly threaten to impose massive costs on the US through nuclear escalation on any issue of sufficient importance to either regime.
More broadly, I don't think most claims about the existence of a DSA take nuclear deterrence seriously enough. Before nuclear weapons existed, a state’s ability to defend itself from coercion depended crucially on its economic size, since relative forces mattered and force size/quality depended ultimately on having a large population and an advanced economy. But that's not true once you have a second-strike nuclear capability; nuclear weapons break the connection you emphasize between economic power and the ability to impose your political preferences on other nuclear-armed states. So I think even granting the argument that AGI could lead to some sort of economic explosion for one state, the amount of coercive leverage that would grant that state is likely to be highly limited, nowhere close to the Bostrom DSA definition you use, so long as nuclear deterrence remains intact.
In case of interest, here's a piece we published recently on these issues, coauthored with Nikita Lalwani (former director for tech/nat sec on the Biden NSC). We make a version of the above argument about the importance of MAD for the existence of any alleged DSA, and then lay out how exactly AGI might threaten nuclear deterrence. Would welcome any feedback! We're still tentatively thinking through these DSA issues ourselves.
Best,
Sam
Tom
Hi Sam,
Thanks for this great pushback!
I think you're right that this is an oversight of the piece. But here are some responses:
Ah, cool.
I was roughly imagining:
So then the Q is: why can't Europe quickly use its AI to copy the new physical tech?
And similarly for China, which can use its worse AI (and the US AI as well if the status quo continues).
Also, even absent copying the new tech, the amazing new physical robots will be made on European soil. (But maybe US companies still capture the profits?)
Thanks!
The key insight is that ordinary human leaking and spying and tech diffusion happens at ordinary human pace. If e.g. China is doing an intelligence explosion such that it's making tech and economic progress at 100x normal speed, the process of ideas gradually diffusing to the US and causing catch-up growth won't be 100x faster to compensate. Instead e.g. the lag between "the chinese are selling an exciting new product" and "our engineers have dissected it and deduced some interesting ideas about how it was made and replicated these ideas in our own manufacturing processes" will be... approximately the same? And so for practical purposes it might as well not be happening.
I don't think it will be the same? If the economy is growing 100X as fast then that means:
Both these things help with diffusion between countries as well.
And if designing new tech is 100X faster, I'm not seeing why we would expect copying tech to be <100X faster. With copying, you start with additional clues -- e.g. a physical artefact, use of a digital product, or public materials (like OAI's o3 demo) -- and design the tech from there. Why would designing new tech from scratch be sped up by more than designing from clues? I.e. we've got super-smart AI doing both.
the rate of tech diffusion via leaks and spying will go way down
Yeah, agree spying will go way down once humans are replaced, and cyber hacks will go down too. Though I would have thought a small minority of tech diffusion comes from spies + hacks? E.g. I'd have thought selling and publishing about o1 was more important than those things, though I could be wrong and spies might be unusually important for AI diffusion in particular.
...
I don't think you've responded to my argument that companies in the leading country could make way more short-term revenue by producing and selling their technologies in other countries? (See the argument in the box.) Because ASI cognitive capabilities are massively complementary with human labour and physical capital inputs.
Why look at the number immediately after AGI, here, rather than further into the intelligence explosion?
Here was the thought:
If the lab hasn't outgrown RoW by the time RoW develops AGI, then RoW has more physical capital than the lab. This means that, in the next physical doubling, RoW will produce more physical capital than the lab, and so do more learning-by-doing than the lab has ever done. After that doubling, RoW will have better tech than the lab, and so its physical capital will double more quickly. So, after that doubling, RoW will have both more physical capital and a faster doubling time. That puts RoW on a trajectory to always have more physical capital, and the lab can't outgrow them.
So this is implicitly assuming that the number of units produced is the hard bottleneck on the technology of physical capital, and the lab's additional AI cognitive labour doesn't help. This is in line with the classic economist view that you bump up against ultimate limits to what can be inferred from data. But I think qualitatively smarter AI could change this a lot. Maybe the lab has much smarter AI than RoW and so can learn much more per unit produced.
So, thinking about it more, I think this calc makes sense if we start the clock at the point when the SIE (software-only intelligence explosion) fizzles out. Because then, once RoW catches up, they'll have ~equal cognitive inputs to the lab. So the lab really does need to overtake RoW on physical capital before this point. (Maybe the lab can get more chips than RoW? But this is a type of physical capital.)
Anyway, accounting for this makes it more plausible the lab can outgrow RoW. They need the 1 month doubling time robots not when they first have AGI, but when the software intelligence explosion has fizzled. Which means they have more powerful AI and longer to do robot experiments.
Another reason my calc understates the lab's chance is that there may be higher value from experiments conducted in serial. RoW constructs all their physical capital in parallel in one doubling. Whereas the lab does it over multiple doublings one after the other. That could make a big difference.
How to do better?
The more nuanced way to game this out would be to represent, at each time-step, the physical capital, cognitive labour, and technology (for physical capital) of both the lab and RoW.
Then have a production function for how inputs of physical capital produced and cognitive labour improve technology in each time-step. Probably something like g_A = (K^a * C^b)^lambda * A^(-beta), where a + b = 1, lambda represents the benefits of doing research in serial, and beta accounts for ideas getting harder to find. (This accounts for your other point about the rate of production doubling every 3 doublings; the parameter beta controls whether it's "3" or some other number.)
Then in each timestep simulate each actor's change in physical capital, cognitive labour, and technology.
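The setup above can be sketched in a few lines. This is a minimal two-actor toy, not the full model: all parameter values and starting states are illustrative assumptions, cognitive labour is held fixed, and the serial-experiments point is ignored (lambda = 1):

```python
# Toy two-actor version of the dynamic: lab (small capital, big cognitive
# lead) vs RoW (much more capital, less cognitive labour). All numbers
# are illustrative assumptions, not estimates.
a, b = 0.5, 0.5   # capital vs cognitive-labour shares (a + b = 1)
lam = 1.0         # lambda: benefits of serial research (1 = none)
beta = 0.5        # ideas getting harder to find

def g_A(K_produced, C, A):
    """Growth rate of physical-capital technology in one timestep."""
    return (K_produced**a * C**b)**lam * A**(-beta)

# state: physical capital K, cognitive labour C, technology A
lab = {"K": 1.0, "C": 100.0, "A": 1.0}
row = {"K": 50.0, "C": 1.0, "A": 1.0}

for t in range(40):
    for actor in (lab, row):
        produced = 0.1 * actor["A"] * actor["K"]   # new capital this step
        actor["A"] *= 1 + 0.01 * g_A(produced, actor["C"], actor["A"])
        actor["K"] += produced

print(f"lab: K={lab['K']:.1f}, A={lab['A']:.2f}")
print(f"RoW: K={row['K']:.1f}, A={row['A']:.2f}")
```

The interesting question is then which effect wins: RoW's head start in capital (and hence learning-by-doing) or the lab's cognitive advantage feeding through g_A into faster technology growth.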
My overall view
Maybe the most plausible scenarios involve ~6 month leads, ~$250b on physical capital, ~1 month robot doubling times at the end of the SIE, and doubling times becoming faster over time.
I made a simple model of the dynamic here. (Still not modelling the gains from serial experiments and ...)
So overall I think I'd still bet against the lab being in a position to do this on the economic fundamentals; but it's more plausible than I'd thought.
Thanks for this helpful comment.
Some quick follow-up Qs.
First:
Of course, scale does have an effect; larger animals have slower metabolisms (smaller specific surface area, difficulty with heat dissipation). However, you can see that there is no order-of-magnitude difference in metabolic rate between a hummingbird and an elephant, so this is not the primary contributing factor.
GPT-5 thought that the ratio of metabolic rates was 100X. Two OOMs!
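For what it's worth, even a back-of-envelope check using Kleiber's law (whole-body metabolic rate scaling roughly as M^(3/4), so mass-specific rate as M^(-1/4)) gives well over an order of magnitude between a hummingbird and an elephant, before accounting for hummingbirds running hotter than the allometric trend. The masses below are rough illustrative values:

```python
# Back-of-envelope: per-kg metabolic rate ratio via Kleiber's law.
# Kleiber: whole-body rate B ~ M^(3/4), so per-kg rate ~ M^(-1/4).
m_hummingbird = 0.004   # kg (~4 g, rough)
m_elephant = 4000.0     # kg (~4 tonnes, rough)

ratio = (m_elephant / m_hummingbird) ** 0.25   # per-kg rate ratio
print(f"predicted per-kg metabolic rate ratio: ~{ratio:.0f}X")
```

That's ~32X from allometry alone, which is already "order-of-magnitude" territory and consistent with measured ratios being higher still.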
Second, you say that nanotech could only replicate very fast if it harvests ready-made compounds in the biosphere. But plant cells that make their food from photosynthesis can replicate in hours/days, e.g. duckweed.
Third, I'd be interested if you have sources for your claims about J/kg and W/kg.
Do you think the same for a company within the US? That with a 6-month lead (or even just a 3-month lead, going off recent trends) it would find a way to sabotage other companies?
I think it's plausible, but:
(I think a re-emerging crux here might be the power of AI's persuasion + strategy skills.)