LESSWRONG
World Modeling · AI · Frontpage
The Industrial Explosion

by rosehadshar, Tom Davidson
26th Jun 2025
Linkpost from www.forethought.org
19 min read
55 comments, sorted by top scoring
Vladimir_Nesov · 11d

I think biorobots (macroscopic biotech) should be a serious entry in a list like this, something that's likely easier to develop than proper nanotechnology, but already has key advantages over human labor or traditional robots for the purposes of scaling industry, such as an extremely short doubling time and not being dependent on complicated worldwide supply chains.

Fruit flies can double their biomass every 1-3 days. Metamorphosis reassembles biomass from one form to another. So a large amount of biomass could be produced using the short doubling time of small "fruit fly" things, and then merged and transformed through metamorphosis into large functional biorobots, with capabilities that are at least at the level seen in animals.

These biorobots can then proceed to build giant factories and mines of the more normal kind, which can manufacture compute, power, and industrial non-bio robots. Fusion power might let this quickly scale well past the mass of billions of humans. If the relevant kind of compute can be produced with biotech directly, then this scales even faster, instead of at some point being held back by not having enough AIs to control the biorobots and waiting for construction of fabs and datacenters.

(The "fruit flies" are the source of growth, packed with AI-designed DNA that can specialize the cells to do their part in larger organisms reassembled with metamorphosis from these fast-growing and mobile packages of cells. Let's say there are 1000 "fruit flies" 1 mg each at the start of the industrial scaling process, and we aim to produce 10 billion 100 kg robots. The "fruit flies" double in number every 2 days, which is 1e15x more mass than the initial 1000 "fruit flies", and so might take as little as 100 days to produce. Each ~100 kg of "fruit flies" can then be transformed into a 100 kg biorobot on the timescale of weeks, with some help from the previous biorobots or initially human and non-bio robot labor.)
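The parenthetical arithmetic checks out, and can be verified in a few lines (all figures are the comment's own assumptions, not established facts about biology):

```python
import math

# All inputs are the comment's assumptions.
initial_mass_mg = 1000 * 1             # 1000 "fruit flies" at 1 mg each
target_mass_mg = 10e9 * 100 * 1e6      # 10 billion robots at 100 kg (1 kg = 1e6 mg)
doubling_time_days = 2                 # assumed biomass doubling time

growth_factor = target_mass_mg / initial_mass_mg
doublings = math.log2(growth_factor)
days = doublings * doubling_time_days

print(f"growth factor: {growth_factor:.0e}")   # ~1e15
print(f"doublings needed: {doublings:.1f}")    # ~49.8
print(f"days at 2 days/doubling: {days:.0f}")  # ~100
```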

Davidmanheim · 9d

The idea that near-term AI will be able to design biological systems to do arbitrary tasks is a bit silly, based on everything we know about the question. That is, you'd need a very strong, ASI-level understanding of biology to accomplish this, at which point the question of an industrial explosion is solidly irrelevant.

Vladimir_Nesov · 9d

That is, you'd need a very strong, ASI-level understanding of biology to accomplish this

That's in some sense close to the premise, though I think fast high-fidelity chemistry/biology simulators (or specialized narrow AIs) should be sufficient to get this done even at near-human level, with enough subjective time and simulation compute. My point is that "fruit flies"/biorobots should be an entry on a list that contains both traditional robots and nanotech as relevant for post-AGI industry scaling. There are some perceived difficulties with proper nanotech that don't apply to this biorobot concept.

In the other direction, a sufficiently effective software-only singularity would directly produce strong ASIs on existing hardware, without needing more compute manufactured first, and so won't need to bother with human labor or traditional robots, which again doesn't fit the list in this post. So the premise from the post is more that software-only singularity somewhat fizzles, and then AGI-supercharged industry "slowly" scales to build more compute, until enough time has passed and enough compute has been manufactured that nanotech-level things can be developed. In this setting, the question is whether macroscopic biotech could be unlocked even earlier.

(So I'm not making a general/unconditional prediction in this thread. Outside the above premises I'm expecting a software-only singularity that produces strong ASI on existing hardware without having much use for scaling traditional industry first, though it might also start scaling initially for some months to 1-2 years, perhaps mostly to keep the humans distracted, or because AGIs were directly prompted by humans to make this happen.)

Davidmanheim · 8d

Given the premises, I guess I'm willing to grant that this isn't a silly extrapolation, and absent them it seems like you basically agree with the post? 

However, I have a few notes on why I'd reject your premises.

On your first idea, I think high-fidelity biology simulators require so much understanding of biology that they are subsequent to ASI, rather than a replacement. And even then, you're still trying to find something by searching an exponential design space - which is nontrivial even for AGI with feasible amounts of "unlimited" compute.  Not only that, but the thing you're looking for needs to do a bunch of stuff that probably isn't feasible due to fundamental barriers (Not identical to the ones listed there, but closely related to them.)

On your second idea, a software-only singularity assumes that there is a giant compute overhang for some specific buildable general AI that doesn't even require specialized hardware. Maybe so, but I'm skeptical; the brain can't be simulated directly via Deep NNs, which is what current hardware is optimized for. And if some other hardware architecture using currently feasible levels of compute is devised, there still needs to be a massive build-out of these new chips - which then allows "enough compute has been manufactured that nanotech-level things can be developed." But that means you again assume that arbitrary nanotech is feasible, which could be true, but as the other link notes, certainly isn't anything like obvious.

Vladimir_Nesov · 8d

(It's useful to clearly distinguish exploration of what follows from some premises, and views on whether the premises are important/likely/feasible. Issues with the latter are no reason at all to hesitate or hedge with the former.)

But that means you again assume that arbitrary nanotech is feasible, which could be true, but as the other link notes, certainly isn't anything like obvious.

I mentioned arbitrary nanotech, but it's not doing any work there as an assumption. So it being infeasible doesn't change the point about macroscopic biotech possibly being first, which is technically still the case if nanotech doesn't follow at all.

Various claims that nanotech isn't feasible are indeed the major reason I thought about this macroscopic biotech thing, since existing biology is a proof of concept, so some of the arguments against feasibility of nanotech clearly don't transfer. It still needs to be designed, and the difficulty of that is unclear, but there seem to be fewer reasons to suspect it's not feasible (at a given level of capabilities).

Davidmanheim · 8d

The macroscopic biotech that accomplishes what you're positing is addressed in the first part, and in the earlier comment where I note that you're assuming an ASI-level understanding of bio for exploring an exponential design space for something that isn't guaranteed to be possible. The difficulty isn't unclear; it's understood not to be feasible.

Tom Davidson · 6d

Fwiw, I'm happy to grant some chance that we skip the "robot" phase and go straight to nanotech or advanced small-scale biotech. The three stages of the post weren't meant to preclude skipping a stage, and I agree with you that we should broaden our 'nanotech' category to include small-scale biotech.

simeon_c · 10d

It might be a dumb question but aren't there major welfare concerns with assembling biorobots?

Tom Davidson · 11d

Thanks for this!

 

We do mention fruit fly doubling times as a biological anchor for how fast doubling times could become as robots become smaller.

 

Potential blockers to this include:

  • Are there any examples of metamorphosis doing anything like this? From a quick glance it's about abrupt changes to one organism during its growth. But you're suggesting it could also allow millions of fruit-fly-sized-organisms to combine together into a large functional bio robot. That seems like a big jump.
    • And I don't think this is a nit pick from me. There's a clear pattern in biology where bigger and more sophisticated organisms take longer to reproduce. So unclear you can hack around that constraint as you're saying.
  • How will the fruit flies flexibly adapt their behaviour to the economic needs and situation, like human physical workers do? Unclear this can all be packed into their AI-designed DNA. And unclear if they can learn to receive instructions from the AI.
Thomas Larsen · 11d

This post seems systematically too slow to me, and to underrate the capabilities of superintelligence. One particular point of disagreement:

It seems reasonable to use days or weeks as an upper bound on how fast robot doublings could become, based on biological analogies. This is very fast indeed.[20]

When I read this, I thought this would say "lower bound". Why would you expect evolution to find globally optimal doubling times? This reads to me a bit like saying that the speed of a Cheetah or the size of a Blue Whale will be an upper bound on the speed/size of a robot. Why???

The case for lower bound seems clear: biology did it, probably a superintelligence could design a more functional robot than biology. 

Tom Davidson · 11d

It's not clear it's a lower bound bc it's unclear whether fruit flies have the physical and (especially) cognitive capabilities to reconstruct the whole economy. It's not enough to double quickly. You need to be able to make the robots that make the robots... that make anything.

 

But I agree that we might do way better than evolution. We might design things that double faster than fruit flies and can reconstruct the whole economy. So I agree I was wrong to describe this as an upper bound.

 

Seems to me more like an estimate of the upper bound that could be biased in either direction. The upper bound might be faster bc we outperform evolution. Or it might be slower if fruit flies lack the capabilities to reconstruct the whole economy.

 

Keen to hear about other areas where you think we're being too conservative. It's definitely possible to point to particular assumptions that seem too conservative. But there's often counter-considerations. To give one quick example, only 20% of output today is reinvested, 80% is consumed. If this keeps happening during robot doublings, they'll happen 5X slower than our analysis. Our analysis implicitly assumes 100% reinvestment. 

Davidmanheim · 9d

I'm very confused by this response - if we're talking about strong quality superintelligence, as opposed to cooperative and/or speed superintelligence, then the entire idea of needing an industrial explosion is wrong, since (by assumption) the superintelligent AI system is able to do things that seem entirely magical to us.

Davidmanheim · 9d

How strong a superintelligence are you assuming, and what path did it follow? If it's already taken over mass production of chips to the extent that it can massively build out its own capabilities, we're past the point of industrial explosion. And if not, where did these capabilities (evidently far stronger than even the collective abilities of humanity, given what's presumed) emerge from?

Tom Davidson · 6d

Don't think I follow this. My last comment was about the ultimate limits to (nano)robot doubling times, after lots of time to experiment/iterate, not imagining AI designing this stuff a priori.

 

The post assumes abundant AI cognitive labour on the level of top humans but nothing stronger. 

Davidmanheim · 5d

Yeah, I think Thomas was arguing the opposite direction, and he argued that you "underrate the capabilities of superintelligence," and I was responding to why that wasn't addressing the same scenario as your original post.

Noosphere89 · 10d

I want to flag a concern of @Davidmanheim that the section on AI-directed labor seems to be relying too heavily on the assumption of a normal distribution of worker quality, when a priori a fat-tailed distribution is a better fit to real-life data. This means that the AI-directed labor force could be dramatically underestimated if the best workers are many, many times better than the average.

Quotes below:

(David Manheim's 1st tweet) @rosehadshar/@Tom Davidson - assuming normality is the entire conclusion here, it's assuming away the possibility of fat-tail distribution in productivity. (But to be fair, most productivity measurement is also measuring types of performance that disallow fat tails.)

(Benjamin Todd's response) The studies I've seen normally show normally distributed output, even when they try to use objective measures of output.

You only get more clearly lognormal distributions in relatively elite knowledge work jobs:
https://80000hours.org/2021/05/how-much-do-people-differ-in-productivity

Though I agree the true spread could be understated, due to non-measured effects e.g. on team morale.

(David Manheim's 2nd tweet, in response to @Benjamin_Todd) When you measure the direct output parts of jobs - things like "dishes prepared" for cooks or "papers co-authored" - you aren't measuring outcomes, so you get little variation. When there is an outcome component, like profit margin or citation counts, it's fat-tailed.

(David Manheim's 3rd Tweet) So for manual workers, it makes sense that you'd find number of items produced has a limited range, but if you care about consistency, quality, and (critically) ability to scale up, the picture changes greatly.
 

Tom Davidson · 6d

Adding my reply tweet:

Interesting, thanks. I wouldn't have thought that consistency and quality were still capable of massive improvement for manual workers already in the top 10%. Also, not sure it's realistic to assume that people that today are unproductive would rise to the very top 0.1% even with ideal AI coaching. But I do think the spread could be much bigger if AI is allowed to redesign factories from scratch. There's a lot of uncertainty here, even more than I'd realized

Would be interested in any evidence you have of fat tails for manual workers

 

And David's reply:

I don't have any evidence like that, but I also think that it wouldn't show up in numbers that get collected. (I would argue that workers who have output that passes QA at linearly better rates must have exponentially better "quality" in a sense, but it's not obviously true.)

 


 

Mars_Will_Be_Ours · 10d

I think that you may be significantly underestimating the minimum possible doubling time of a fully automated, self replicating factory, assuming that the factory is powered by solar panels. There is a certain amount of energy which is required to make a solar panel. A self replicating factory needs to gather this amount of energy and use it to produce the solar panels needed to power its daughter factory. The minimum amount of time it takes for a solar panel to gather enough energy to produce another copy is known as the energy payback time, or EPBT. 

Energy payback time (EPBT) and energy return on energy invested (EROI) of solar photovoltaic systems: A systematic review and meta-analysis is a meta-analysis which reviews a variety of papers to determine how long it takes various types of solar panels to produce the amount of energy needed to make another solar panel of the same type. It also provides energy return on energy invested, a ratio signifying the amount of excess energy you can harvest from an energy-producing device before you need to build another one. If it's less than 1, the technology is not an energy source.

The energy payback time for solar panels varies between 1 and 4 years, depending on the technology specified. This imposes a hard limit on a solar powered self replicating factory's doubling time, since it must make all the solar panels required for its daughter to be powered. Hence, it will take at least a year for a solar powered fully automated factory to self replicate. Wind has similar if less severe limitations, with Greenhouse gas and energy payback times for a wind turbine installed in the Brazilian Northeast finding an energy payback time of about half a year. This means that a wind powered self replicating factory must take at least half a year to self-replicate.  

Note that neither of these papers account for how factories are not optimized to take advantage of intermittent energy and as such, do not estimate the energy cost of the energy storage required to smooth out intermittencies. Since some pieces of machinery, such as aluminum smelters and chip fabs, cannot tolerate a long shutdown, a significant amount of energy storage will be required to keep these machines idling during cloudy weather or wind droughts. Considerations such as this will significantly increase the length of time it will take for a fully automated factory to self-replicate. Accounting for energy storage and the amount of energy needed to build a fully automated factory, I estimate that it would take years for a factory powered by solar or wind to self replicate. 
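As a rough sketch, the EPBT bound on doubling time can be written out directly. The EPBT ranges are the ones cited in this comment; the storage multiplier is a made-up illustrative figure, not a number from either paper:

```python
# A self-replicating factory must at minimum harvest the energy embodied in
# its daughter's power supply, so its doubling time is bounded below by the
# energy payback time (EPBT) of its power source.
epbt_years = {
    "solar PV": (1.0, 4.0),                   # range from the cited meta-analysis
    "wind (Brazilian NE study)": (0.5, 0.5),  # ~half a year from the cited paper
}
storage_overhead = 1.5  # hypothetical multiplier for storage/idling losses

for tech, (lo, hi) in epbt_years.items():
    print(f"{tech}: doubling time >= {lo:.1f}-{hi:.1f} yr bare, "
          f"~{lo * storage_overhead:.1f}-{hi * storage_overhead:.1f} yr with storage")
```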

ryan_greenblatt · 9d

I expect fusion will outperform solar and is reasonably likely to be viable if there is an abundance of extremely superhuman AIs.

Notably, there is no hard physical reason why the payback time for solar panels has to be a year rather than, e.g., a day or two. For instance, there exist plants which can double this quickly (see e.g. duckweed), and the limits of technology could allow for much faster doubling times. So, I think your analysis holds true for current solar technology (which may be relevant to part of this post), but it certainly doesn't hold in the limit of technology, and it may or may not be applicable at various points in a takeoff depending on how quickly AIs can advance the relevant tech.

Mars_Will_Be_Ours · 9d

I mostly agree with your thinking. If there are multiple superintelligent AIs, then one of them will likely figure out a method of viable fusion with a short payback period.

On the payback time of solar, it probably can be reduced significantly. Since the efficiency of solar panels cannot be increased much more (the Shockley-Queisser limit for single-junction cells, the thermodynamic limit for any solar panel), the only way to reduce the payback period is to reduce the amount of embodied energy in the panel. I expect that the embodied energy of solar panels will stop falling once they start being limited by their fragility. If a solar panel cannot survive a windstorm, then it cannot be useful on Earth.

Your mention of biological lifeforms with a faster doubling time sent me on a significant tangent. Biological lifeforms provide an alternative approach, though any quickly doubling lifeform needs to either use photosynthesis for energy or eat photosynthetic plants. I expect there to be two main challenges to this approach. First, for the lifeform to be useful to a superintelligence, it needs to be hypercompetitive relative to native Earth life. This means that it needs to be much better at photosynthesis or digesting plant material compared to native Earth life. Such traits would allow it to fulfill the second requirement while remaining a functional lifeform. Second, the superintelligence needs to be able to effectively control the lifeform and have it produce arbitrary biomolecules on demand. Otherwise, the lifeform is not very useful to the superintelligence. I believe the first challenge is almost certainly solvable since photosynthesis on Earth is at best 5% efficient. The second will be more difficult. If the weakness in an organism a superintelligence needs to use to produce arbitrary biomolecules is too easily exploited, a virus, bacteria or parasite will evolve to exploit it, causing the population of the shackled synthetic organism to crash. If the synthetic organism has been designed such that it cannot evolve, its predators will keep it in check. Contrastingly, if the organism's weakness is not sufficiently embedded in the genome, then the synthetic organism will evolve to lose its weakness. Variants of the synthetic organism which will not produce arbitrary biomolecules on demand will outcompete those which will since producing arbitrary biomolecules costs energy. 

Daniel Kokotajlo · 11d

Nice work!

This is qualitatively and quantitatively similar to what I expect and AI 2027 depicts. I'm curious to get more quantitative guesses/estimates out of you. It seems like you think things will go, maybe, 2x - 4x slower than AI 2027 depicts?

Also: You have this great chart:

| Sector | How many times production must double to halve the cost | Source |
| --- | --- | --- |
| Chemical industries | 1 - 10 | Nagy et al (2013), Supporting Information 1 |
| Hardware industries | 1 - 2.5 | Nagy et al (2013), Supporting Information 1 |
| Energy industries | 2 - 10 | Nagy et al (2013), Supporting Information 1 |
| Other industries (mostly electrical) | 2 - 5 | Nagy et al (2013), Supporting Information 1 |
| Aggregate economy | 3 | Bloom et al (2020), Table 7 |
| Moore's law | 0.2 | Bloom et al (2020), Table 7 |
| Agricultural sectors | 2 - 10 | Bloom et al (2020), Table 7 |

You then say:

Overall, it looks likely that the number of robots will double 1-5 times before the robot growth rate doubles.

I feel like Wright's Law should probably be different for different levels of intelligence. Like, for modern humans in hardware, it takes 1 - 2.5 doublings of production to halve cost, and in computer chips, it takes 0.2. But I feel like for superintelligences, the # of doublings of production needed to halve cost should be lower in both domains, because they can learn faster than humans can / require fewer experiments / require less hands-on-experience.

Tom Davidson · 11d

Thanks!

Yep I'd be excited to hash this out and get quantitative. Doesn't seem like we're too far apart. 

I feel like Wright's Law should probably be different for different levels of intelligence. Like, for modern humans in hardware, it takes 1 - 2.5 doublings of production to halve cost, and in computer chips, it takes 0.2. But I feel like for superintelligences, the # of doublings of production needed to halve cost should be lower in both domains, because they can learn faster than humans can / require fewer experiments / require less hands-on-experience.

This is a great Q. 

My intuition would have been that superintelligence learns (say) 10X more per experiment, and 10X more per unit produced. If that's right, Wright's Law won't change. You'll just get a one-time effect when humans are replaced by superintelligence. That one-time effect will mean that when you double production, you actually 20X the "effective production". So you'll get a sudden reduction in your doubling times, and then go back to the Wright's Law pattern that we're forecasting.

Or, to put it another way, let's assume that with human intelligence you'd get 1-month doubling times when you have 100 billion robots. Then with superintelligence you'll get it with just 10 billion robots, because you learn 10X as much per robot produced.
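This "one-time effect" can be sketched with a toy Wright's-law model (the constants here are illustrative, not numbers from the post):

```python
def wright_cost(cum_production, c0=1.0, doublings_to_halve=2.0):
    """Wright's law: cost halves every `doublings_to_halve` doublings of
    cumulative production, i.e. cost = c0 * production ** (-b)."""
    b = 1.0 / doublings_to_halve
    return c0 * cum_production ** (-b)

# If superintelligence learns 10x as much per unit produced, it acts like a
# 10x multiplier on "effective production": a one-time shift of the curve,
# not a change of its slope.
human_cost = wright_cost(100e9)      # 100 billion robots, human-level learning
asi_cost = wright_cost(10 * 10e9)    # 10 billion robots, 10x per-unit learning
assert human_cost == asi_cost        # same point on the same curve
```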

Daniel Kokotajlo · 10d

Interesting, plausible.

Would you say this goes in the other direction too? If a bunch of mediocre 10-year olds were producing robots, perhaps because their wealthy parents were forcing them to & funding them, would you model it as a one-time 10x penalty where they need to produce 10x as many robots to get to the same price point, but after that they'd be on the same curve, and after they got to e.g. 1M/yr production their robots would be just as good and just as cheap as Tesla when they are producing 100k/yr?

I think my main objection is that it just really seems like skill/intelligence should make a difference here. Like, prediction: I bet that if we had data on all car companies, we'd find that the slope of wright's law is somewhat different from company to company... Claude seems to agree: https://claude.ai/share/5fe19152-0958-4b6e-8f09-0989aa4c75bc

Tom Davidson · 6d

hmm the 10-year olds thought experiment is interesting.

I think that they might just plateau at a much earlier point entirely? I.e., they just can't make functioning robots at all, or can't bring them below a hugely expensive price, and then they stop learning from experience?

 

So the translation might be that we'd expect the experience curve to hit a plateau with human intelligence but to keep going to a higher plateau with superintelligence?

 

I bet that if we had data on all car companies, we'd find that the slope of wright's law is somewhat different from company to company.

Agreed. Over a few OOMs, that could be a temporarily different slope due to different starting levels of technology and/or different "amount learned per unit produced", but still the slope of the curves would become the same if you kept going for multiple OOMs. I.e. my explanation is entirely compatible with some companies trouncing others.

 

It seems like your assumption has some kinda wild consequences. If the slope is different then, as you go farther out on the curve, the ratio "amount learned by superintelligence per unit produced"/"amount learned by humans per unit produced" becomes increasingly extreme. It starts off at 10X, but ends up >1000X. But why would we expect this ratio to increase?

Daniel Kokotajlo · 5d

What's this about hitting plateaus though? Do experience curves hit plateaus? 

Re: the ratio becoming extreme: You say this is implausible, but it's exactly what happens when you hit a plateau! When you hit a plateau, that means that even as you stack on more OOMs of production, you can't bring the price below a certain level. 

Another argument that extreme ratios aren't implausible: It's what happens whenever engineers get something right on the first try, or close to the first try, that dumber people or processes could have gotten right eventually through trial and error. Possible examples: (1) Modern scientists making a new food product detect a toxic chemical in it and add an additional step to the cooking process to eliminate it. In ancient times, native cultures would have stumbled across a similar solution after thousands of years of cultural selection. (2) Modern engineers build a working rope bridge over a chasm, able to carry the desired weight (100 men?) on the first try since they have first principles physics and precise measurements. Historically ancient cultures would have been able to build this bridge too but only after ~thousand earlier failed attempts that either broke or consumed too much rope (been too expensive).

(For hundreds of thousands of years, the 'price' of firewood was probably about the same, despite production going up by OOMs, until the industrial revolution and mechanized logging)

Tom Davidson · 5d

Thanks - great points.

 

I'd guess that experience curves do hit plateaus as you approach the limits of what's possible with the current level of technology. Then you need R&D to get onto the next s-curve. If we're combining experience curves with R&D into entirely new approaches, then I'd expect they only approach a plateau when we approach ultimate tech limits, or perhaps the ultimate limits of what humans are smart enough to ever design (like with the 10-year-olds).

 

Agree the ratio can become extreme if humans hit a plateau but superintelligence doesn't. But this looks like the same experience curve continuing for AIs and hitting a ceiling for humans. Whereas I thought you expected the experience curve for humans to keep going and the one for AIs to keep going at a permanently steeper rate.

 

I suppose if the reason for experience curves is that humans get exponentially less productive at improving tech when it becomes more complex, then maybe this exponential decay won't apply as much to superintelligence and they could have a curve with a better slope... I think the normal understanding is that experience curves happen more because it takes exponentially more work to improve the tech when it becomes more complex -- but this does seem plausible.

 

I like your examples about modern science vs historical trial and error. Feels like a case of massive meta-learning. Humans (through a lot of trial and error) learnt the scientific method. Then that method is way more sample efficient. Similarly, perhaps superintelligence will learn (from other areas) new ways of structuring tech development with similar gains. Then they could have massive ratios over humans, like 1:10,000. Then that either manifests as a truly massive one-time gain (before going back to the same experience curve as humans!), or perhaps it comes in gradually and looks more like a permanently steeper experience curve.

Daniel Kokotajlo · 4d

Cool. So, I feel pretty confident that via some combination of different-slope experience curves and multiple one-time gains, ASI will be able to make the industrial explosion go significantly faster than... well, how fast do you think it'll go, exactly? Your headline graph doesn't have labels on the x-axis. It just says "Time." Wanna try adding date labels?

otto.barten · 11d

Climate change exists because doing something that's bad for the world (carbon emissions) is not priced. Climate change isn't much worse than it already is because most people still can't afford to live very climate-unfriendly lives.

In this scenario, I'm mostly worried that without any constraints on what people can afford, not only might carbon emission go through the roof, but all other planetary boundaries that we know and don't know yet might also be shattered. We could of course easily solve this problem by pricing externalities, which would not be very costly in an abundant world. Based on our track record, I just don't think that we'll do that.

Will we still have rainforest after the industrial explosion? Seems quite unlikely to me.

Vladimir_Nesov · 11d

Will we still have rainforest after the industrial explosion? Seems quite unlikely to me.

This argument doesn't stop at the biosphere or at the surface. By the same token, it shouldn't be likely that we'll have Earth or the Sun still remaining as celestial bodies, in an entirely literal sense. It might be possible in principle to decide not to disassemble them for fuel and raw materials, but also the direction pointed by the market argument is not entirely without merit.

otto.barten · 11d

Agree about the celestial bodies. Can you explain what you mean by "but also the direction pointed by the market argument is not entirely without merit", and why the cited paper is relevant?

I would be reasonably optimistic if we had a democratic world government (or perhaps a UN-intent-aligned ASI blocking all other ASI) that we'd decide to leave at least some rainforest and the sun in one piece. I'm worried about international competition between states though where it becomes practically impossible due to such competition to not destroy earth for stuff. Maybe Russia will in the end win because it holds the greatest territory. Or more likely: the winning AI/industrial nation will conquer the rest of the world and will transform their earth to stuff as well.

Maybe we should have international treaties limiting the amount of nature a nation may convert to stuff?

Reply
[-]Vladimir_Nesov10d3-1

There are whole (very distant) galaxies just at the boundary of practical reachability, that can be reached if colonization probes leave earlier, but never again if the probes get delayed. So there is a lot of value in getting this process underway a little earlier.

Reply
[-]ryan_greenblatt10d106

We lose around 1 / 10 billionth of the resources every year due to expansion. This is pretty negligible compared to potential differences in how well these resources end up being utilized (and other factors).

Reply
[-]Vladimir_Nesov10d5-4

An additional hop to the nearby stars before starting the process would delay it by 10-50 years, which costs about 10 galaxies in expectation. This is somewhere between 1e8x and 1e14x more than the Solar System, depending on whether there is a way of using every part of the galaxy.

Mass is computation is people is value. Whether there is more than 1e8x-1e14x of diminishing returns in utility from additional galaxies after the first 4e9 galaxies is a question for aligned superphilosophers. I'm not making this call with any confidence, but I think it's very plausible that marginal utility remains high.
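The arithmetic behind these figures can be checked directly. Here is a rough sketch using the thread's own numbers (≈4e9 reachable galaxies, and the estimate above of ~1/10-billionth of resources lost per year to expansion):

```python
# Rough check of the thread's numbers (both figures are the commenters' estimates)
reachable_galaxies = 4e9        # "the first 4e9 galaxies"
loss_fraction_per_year = 1e-10  # "~1/10 billionth of the resources every year"

for delay_years in (10, 50):
    lost = reachable_galaxies * loss_fraction_per_year * delay_years
    print(delay_years, lost)  # 10 -> 4.0 galaxies, 50 -> 20.0 galaxies
```

A 10-50 year delay then costs roughly 4-20 galaxies, consistent with "about 10 galaxies in expectation".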

Reply
[-]ryan_greenblatt10d84

I'm not claiming that marginal utility is low, just that marginal utility is much higher for other things than speeding things up by a few years.

Reply
[-]Vladimir_Nesov10d2-1

I'm not seeing a tradeoff. If you speed things up by a few years, that's also a few years earlier that local superintelligences get online at all of the stars in the reachable universe and start talking to each other at the speed of light, in particular propagating any globally applicable wisdom for the frontier of colonization, or observations made from star-sized telescopes and star-sized physics experiments, or conclusions reached by star-sized superintelligences, potentially making later hops of colonization more efficient.

So maybe launching drones to distant galaxies is not the appropriate first step in colonizing the universe, but this doesn't change the point that the Sun should still be eaten in order to take whatever step is actually more useful faster. Not eating the Sun at all doesn't even produce Sun-sized value. It really does need to be quite valuable for its own sake, compared to the marginal galaxies, for leaving the Sun alone to be the better option.

Reply
[+]otto.barten8d-70
[-]Donald Hobson9d40

This scenario is post superhuman AI. So rainforest exists iff the AI likes rainforest. Same goes for humans.

Reply
[-]otto.barten8d10

It looks to me like this is a scenario where superhuman AI is intent-aligned. If that's true, rainforests exist if humans prefer rainforests over mansions or superyachts or some other post-AGI luxury they could build from the same atoms. I'm afraid they won't.

Reply
[-]Petropolitan9d*4-5

Downvoted the post (which I do very rarely) because it considers neither Amdahl's Law nor the factors of production, which are Economics 101.

Fully automated robot factories can't make robot factories out of thin air, they need energy and raw materials which are considered secondary factors of production in economics. As soon as there appears a large demand for them, their prices will skyrocket.

These are called so because they are acquired from primary factors of production, which in classical economics consist of land, labor and capital. Sure, labor is cheap with robots but land and capital will become very costly because they are complements. These primary and secondary factors will become bottlenecks, making the discussion of theoretical doubling rates moot.

Note that during most of the European 2nd millennium, including the times of Adam Smith and Karl Marx, labor was the most abundant and cheap primary factor, so a reversal from today's expensive labor would not be extraordinary.

P. S.

The following might not apply to the "post-AGI" world but this post gives a hint how hard automating manufacturing actually is: https://www.lesswrong.com/posts/r3NeiHAEWyToers4F/frontier-ai-models-still-fail-at-basic-physical-tasks-a

Reply
[-]Tom Davidson6d20

I agree a longer investigation of bottlenecks from raw materials would be useful.

 

But our estimate of robot doubling times does implicitly account for resource costs. We estimate how long it takes $1 of physical capital to create $1 of value-add. That value-add could come in the form of mining new resources or in the form of turning inputs (of raw materials or physical capital) into more useful outputs. The calc is saying: take the world's stock of physical capital; imagine it's used exclusively for mining materials and building more factories; how long until it can mine the materials and build the physical capital to essentially double that initial stock?

 

Yes, raw material prices will rise as stocks diminish, as has happened historically. But also, as has happened historically, we'll find new sources of raw materials and we'll innovate to do without the scarce resources. By extrapolating historical trends, we're assuming that these competing forces will play out as they have before. I understand you think we should add a pessimistic adjustment here, implicitly assuming that rising prices will win out over innovation more than has happened historically. But why?

Reply
[-]Petropolitan11h21

The normal timescale for building a new mine using mature, well-established technologies is over a decade from exploration to feasibility, plus 1.8 years of construction planning and environmental permitting (unlike a robot factory, which you can build almost wherever you like, a mine has to be built on the actual minerals), plus 2.6 years of construction and production set-up: https://www.statista.com/statistics/1297832/global-average-lead-times-for-mineral-resources-from-discovery-to-production These are the timescales on which commodity cycles boom and bust.
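Summing the cited lead times gives a sense of the total. This is a rough sketch; the exploration figure is hedged as "over a decade", so 10 years is used here as a lower bound:

```python
# Lead times for a new mine, per the Statista figures cited above
exploration_to_feasibility = 10.0  # assumed lower bound for "over a decade"
planning_and_permits = 1.8         # construction planning and permitting
construction_and_setup = 2.6       # construction and production set-up

total_years = exploration_to_feasibility + planning_and_permits + construction_and_setup
print(round(total_years, 1))  # 14.4 -- i.e. ~15+ years end to end
```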

I do not assume that "innovation may lose", I actually do not consider new technologies for mining resources whatsoever in my reasoning, only the application of existing technologies to the deposits formerly uneconomical to extract. I try to explain that all these processes are just much slower than the timescales you are discussing.

And you seem to have missed my argument about capital: if the interest rates skyrocketed due to transformative AI (see, e. g., https://www.lesswrong.com/posts/k6rkFMM2x5gqJyfmJ/on-ai-and-interest-rates), how would you finance all these mines?

Reply
[-]Donald Hobson9d3-3

So, in this world, you have a post-FOOM superintelligent AI.

What does it take such an AI to bootstrap nanotech? If, as I suspect, the answer is 1 lab and a few days, then the rest of this analysis is mostly irrelevant. 

The doubling time of nanotech is so fast that the AI only wants macroscopic robots to the extent that they speed up the nanotech, or fulfill the AI's terminal values. 

Thus the AI's strategy, if it somehow can't make nanotech quickly, will depend on what the bottleneck is. Time? Compute? Lab equipment? 

Reply
[-]Vladimir_Nesov9d20

Compute could be a bottleneck, not just for AI but also for simulations of physical world systems that are good enough to avoid too many real experiments and thus dramatically speed up progress in designing things that will actually do what they need to do.

Without scaling industry first you can't get much more compute. And if you can't immediately design far future tech without much more compute, then in the meantime you'd have to get by with hired human labor and clunky robots, building more compute, thus speeding up the next phase of the process.

Reply
[-]Donald Hobson8d20

Compute could be a bottleneck, not just for AI but also for simulations of physical world systems that are good enough to avoid too many real experiments and thus dramatically speed up progress in designing things that will actually do what they need to do.

 

Imagine you have clunky nanotech. Sure, it has its downsides. It needs to run at liquid-nitrogen temperatures and/or in high vacuum. It needs high-purity lab supplies. It's energy-inefficient. It's full of rare elements. But if, being nanotech, it can make a wide range of molecularly precise designs in a day or less, and, having self-replicated to fill the beaker, can try ~10^9 different experiments at once, then with experimental power like that you don't really need compute.

So I suspect any compute bottleneck needs to happen before even clunky nanotech. And that would require even clunky nanotech to be Really hard to design. 

Reply
[-]Oscar11d20

AI direction could make most workers much closer in productivity to the best workers. The difference between the productivity of the average and the best manual workers is perhaps around 2-6X

Based on the derivation, it seems you mean the difference in productivity of workers doing similar tasks in the same industry, which seems important to specify. Otherwise as written, I would say the "difference between the productivity of the average and the best manual workers" is >1000x between e.g. surgeons in rich countries and e.g. farm hands/construction workers/salespeople, etc in poor countries.

But it's not clear to me the relevant multiplier is the one you pick within one country and industry. E.g. if we have abundant cheap AI cognitive labour, couldn't I set up a company producing widgets in e.g. India, employ heaps of low-skill workers for cheap but make them very productive with AI training and direction, and make a killing?

Maybe the bottleneck here is more on political economy and institution quality, such that even with AGI not all poor countries suddenly become rich just because they have productive AI-led firms.

Overall I feel a bit confused how big I think the one-time boost would be, but if we are counting across countries I would suspect >10x. Perhaps in practice the US (or whoever has the intelligence explosion) would limit access to cognitive abundance to itself and maybe a few allies.

Reply
[-]Tom Davidson11d30

Based on the derivation, it seems you mean the difference in productivity of workers doing similar tasks in the same industry, which seems important to specify.

Yes exactly right - thanks for flagging!

 

But it's not clear to me the relevant multiplier is the one you pick within one country and industry. E.g. if we have abundant cheap AI cognitive labour, couldn't I set up a company producing widgets in e.g. India, employ heaps of low-skill workers for cheap but make them very productive with AI training and direction, and make a killing?

Yep this is a way we might be too conservative. Our analysis makes more sense for the US but might be too conservative for other countries. OTOH, as you say maybe there are limits for other countries' output that isn't about worker ability. 

 

It's also a bit unclear whether you'll really be able to ratchet up worker productivity to the level of top workers just via AI instruction

Reply
[-]denkenberger15h10

More manual workers: ~2x

Globally, manual workers far outnumber knowledge workers. However, it's true that you could lure most subsistence farmers to factories with high wages and then use tractors to farm the land (demand for food would go way up with the increased global incomes, though this might be mitigated eventually with plant based/cultivated meat). I think many knowledge workers would take lower unemployment/UBI rather than becoming manual workers. So I still doubt you could double the manual workforce quickly.

Reply
[-]denkenberger21h10

How many times production must double to halve the cost

Moore’s Law: 0.2 Bloom et al (2020), Table 7

 

I got directed to “The Fall of the Labor Share and the Rise of Superstar Firms”, which doesn’t have a Table 7. AI says, “For transistors and integrated circuits, the cost reduction typically follows an experience curve where costs decrease by approximately 20-30% for every cumulative doubling of production volume.” This would be ~2 doublings to halve the cost, which is more consistent with my understanding and closer to the other examples. The difference is that the cumulative number of transistors doubles much faster per calendar year, and therefore the cost per transistor falls much faster per calendar year. If it only had to double 0.2 times to halve the cost, that would be a ~97% cost reduction for every cumulative doubling.
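A quick sketch of the experience-curve arithmetic behind these figures (standard learning-curve math, not taken from either source):

```python
import math

def doublings_to_halve(reduction_per_doubling: float) -> float:
    """Cumulative production doublings needed to halve unit cost, given the
    fractional cost reduction per doubling (a standard experience curve)."""
    return math.log(0.5) / math.log(1.0 - reduction_per_doubling)

print(doublings_to_halve(0.30))  # ~1.9 doublings at 30% reduction per doubling
print(doublings_to_halve(0.20))  # ~3.1 doublings at 20%

# Conversely, halving cost in only 0.2 doublings implies a per-doubling reduction of:
print(1 - 0.5 ** (1 / 0.2))  # ~0.97, i.e. a ~97% cost reduction per doubling
```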

Reply
[-]ankitmaloo7d10

There seems to be a fundamental assumption that post-superintelligence factories would look exactly like factories today. A lot of the work in factories, and the machines that are designed, is built with actual humans in mind. The machines which automate the entire process look very different and improve efficiency dramatically.

Most superintelligent systems predicated on today's research and direction are looking at using reinforcement learning. At some point, presumably, we will figure out how to make an agent learn from the environment (still in the RL realm), and then we will have so-called superintelligent systems. My contention here is that RL is by definition an optimizer that figures out algorithms to do tasks, which may or may not match human-designed algorithms (there is a 2023 DeepMind paper where they taught robots to play football, and they eventually played). Most work happens in software even for robotics, and with enough compute you could arguably replicate years of learning within a week. Doubling times of a year are not too fast.

That being said: robotics is unlikely to follow the human-like distribution of labor. Some of the places where we will see the first adoption and highest gains are where there is historically a shortage of labor (e.g. fab assembly lines, rare-earth metal extraction) or where you need a specialization to qualify. That is already replicated in software.

The other aspect is what assembly lines or factories would look like if they are fully automated. I feel we haven't even started to think about this in depth. At a very high level, the advanced form of robots will be like any other machines, similar to the leap from washing clothes by hand to using a washing machine. Given that the barriers to adoption are smaller (barriers and adoption times are high if humans have to learn how to use the machines, versus just taking the final output and reviewing its quality), the pace should be much, much faster in theory.

Reply
[-]BryceStansfield10d10

I'm not really seeing the point of AI-augmented human labour here.

It seems like it's meant to fill the gap between now and the production of either generalised or specialised macrorobotics, but it seems to me that that niche is better filled by existing machinery.

 

Why go through the clunky process of instructing a human how to do a task, when you can commandeer an old factory, and repurpose some old drones to do most of the work for you? Human beings might *in theory* have a much higher ceiling for precise work, but realistically you can't micromanage someone into being good at a physical task; they need to build muscle memory, and that's gonna be hard to come by with the constantly changing industrial processes a super intelligence would presumably be implementing.

On the other hand, you could macgyver old commercial machinery into any shape you want, quickly spin up a virtual training environment, and have an agent trained up on any industrial process you want in presumably minutes.

 

I think you might be assuming that industrial robots are hard, just because humans are bad at designing them. But I reckon a little bit of superintelligence would go a long way in hacking together workable robotics.

Reply
[-]cadca11d10

I do think that this is an under-discussed aspect of the intelligence explosion. I might even argue that, instead of the intelligence explosion simply accelerating the industrial explosion, the intelligence explosion would depend on a large, rapid expansion in compute and energy production; something that would only be possible with an economic shift like this.

I do wonder about the presentation of the individual stages. I agree with them in concept, but I think there's a disconnect between their names and their intended characteristics. Like, yes, nanotechnology would be the logical end-goal of stage three, but only the end-goal, and only based on the technology we understand now. I think it might be clearer to communicate the stages by naming them after the main vector of improvement throughout each stage, i.e. 'optimization of labor' for stage one, 'automation of labor' for stage two, 'miniaturization' for stage three.

That being said, I also want to push back on the theory of stage one. The three increases you posit are ~2x from more productive workers, ~2x from more laborers due to mass occupational shifts, and ~3x from organizational optimization, altogether totaling ~10x. While I do think ~10x is fairly reasonable, I don't think it would necessarily be a 'one-time gain'; it would make more sense that adopting and adapting to these changes would take time, and that productivity, instead of following the sharp step-change you currently depict, would look more exponential, leading into the more pronounced later stages.

Reply
[-]rosehadshar10d30

I think it might be a bit clearer to communicate the stages by naming them based on the main vector of improvement throughout the entire stage, i.e. 'optimization of labor' for stage one, 'automation of labor' for stage two, 'miniaturization' for stage three.

I think these names are better names for the underlying dynamics, at least - thanks for suggesting them. (I'm less clear they are better labels for the stages overall, as they are a bit more abstract.)

Reply
[-]Tom Davidson11d30

Yep agreed you might need energy expansion for the IE. You might be interested in our post on this: https://www.forethought.org/research/three-types-of-intelligence-explosion

 

Yeah, by "one-time gain" I don't mean to imply it would literally all happen at once -- I think it would be spread out over years.

But I do think there's a cap there (holding technology levels fixed) based on only having so many humans to direct.

Perhaps innovations in company organisation will take years for superintelligence to discover and will allow massive gains to output though, like >100X. That would be a way in which "one-time gain" is more misleading than helpful, and your idea of continued productivity growth more fitting.

Reply
[-]cadca10d10

That’s fair, I think I might’ve just gotten the wrong impression from the graph. Personally, I wouldn’t think there would be a hard cap, as the IE would naturally boost technology levels instead of holding them fixed. However, I do agree that, either way, there will eventually come a point where a generalized machine laborer will be more efficient and more productive than a human laborer. 

I just read your ‘Three Types’ essay, and I thought it was also really good! Particularly interesting to me was the idea of how, as the IE cascades towards the full-stack IE, the concentration of power becomes more decentralized. I’ve been working on a model to anticipate the geopolitical and social impacts of AI development (please check it out!), and I hadn’t previously considered how the IE itself could have centralizing/decentralizing vectors. 

Great work! I’ll definitely be keeping an eye out for more stuff from you guys. 
 

Reply
[-]hold_my_fish11d10

This analysis assumes that there hasn't already been mass deployment of generalist robots before an intelligence explosion, right? But such deployment might happen.

As a real-world example, consider the state of autonomous driving. If human-level AI were available today, Tesla's fleet would be fully autonomous -- they are limited by AI, not by the volume of cars. Even for purely-autonomy-focused Waymo, their scale-up seems more limited by AI than by car production.

Drones are another example to consider. There are a ton of drones out there of various types and purposes. If human-level AI existed, it could immediately be put to use controlling drones.

So in both those cases, the hardware deployment is well ahead of the AI you'd ideally like to have to control it. The same might turn out to be true of the sort of generalist robot that could, if operated by human-level AI, build and operate a factory.

Reply
Crossposted to the EA Forum.

Summary

To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion"). 

AI will also have to turbocharge the physical world (the "industrial explosion"). Think robot factories building more and better robot factories, which build more and better robot factories, and so on. 

The dynamics of the industrial explosion have gotten remarkably little attention.

This post lays out how the industrial explosion could play out, and how quickly it might happen.

We think the industrial explosion will unfold in three stages:

  1. AI-directed human labour, where AI-directed human labourers drive productivity gains in physical capabilities.
    1. We argue this could increase physical output by 10X within a few years.
  2. Fully autonomous robot factories, where AI-directed robots (and other physical actuators) replace human physical labour.
    1. We argue that, with current physical technology and full automation of cognitive labour, this physical infrastructure could self-replicate about once per year.
    2. A 1-year robot doubling time is very fast!
  3. Nanotechnology, where physical actuators on a very small scale build arbitrary structures within physical limits.
    1. We argue, based on experience curves and biological analogies, that we could eventually get nanobots that replicate in a few days or weeks. Again, this is very fast!

Intro

The incentives to push towards an industrial explosion will be huge. Cheap abundant physical labour would make it possible to alleviate hunger and disease. It would allow all humans to live in the material comfort that only the very wealthiest can currently achieve. And it would enable powerful new technologies, including military technologies, which rival states will compete to develop.

The speed of the industrial explosion matters for a few reasons:

  • Some types of technological progress might not accelerate until after the industrial explosion has begun, because they are bottlenecked by physical infrastructure for running experiments.
  • If the industrial explosion is fast, a nation that leads on AI and robotics could quickly develop a decisive economic and military advantage over the rest of the world.
  • All things equal, a faster industrial explosion means the AI could overthrow humanity sooner – many routes to AI takeover go via AI controlling new physical infrastructure.
  • The industrial explosion will feed back into the intelligence explosion, by rapidly increasing the world’s supply of energy and compute.

This post presents an initial analysis of the dynamics of the industrial explosion. We argue that:

  • The industrial explosion will start after the intelligence explosion starts, and it will initially proceed more slowly than the intelligence explosion.
  • Schematically, we can think of the industrial explosion unfolding in three stages:
    • AI-directed human labour, where AI-directed human labourers drive productivity gains in physical capabilities.
    • Fully autonomous robot factories, where AI-directed robots (and other physical actuators) replace human physical labour.
    • Nanotechnology, where physical actuators on a very small scale build arbitrary structures within physical limits.
  • The industrial explosion could ultimately become extremely fast, with the amount of physical labour potentially doubling in days or weeks.
Three stages of the industrial explosion

                                     Three stages of the industrial explosion.

The industrial explosion will start after the intelligence explosion, and will proceed more slowly

The industrial explosion will likely start after the intelligence explosion because physical tasks will be automated after cognitive tasks. Cognitive tasks are easier to automate for a few reasons:

  • Cognitive tasks, especially for key domains like AI R&D, are less wide-ranging.
  • There’s much more data on how tasks are completed (like documents, meeting notes, potentially computer screen recordings). By contrast, it’s expensive to gather data for tasks in the physical world.
  • The tasks are entirely virtual, avoiding many tricky real-world frictions. 

As well as starting later, the industrial explosion will also be slower than the intelligence explosion. The first reason is that the current rate of technological improvement for AI cognition is faster than the rate of technological improvement in robotics. AI chips double in FLOP/$ every ~2 years. AI algorithms double in efficiency every year or less. We think that robot technology doubles in efficiency more slowly than this, perhaps every 1-4 years.1 So the technologies that will drive the intelligence explosion are increasing much faster than those that will drive the industrial explosion.
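To illustrate the gap, these doubling times can be compounded over a fixed horizon (the doubling times are the post's estimates; the 10-year horizon is an arbitrary illustration):

```python
def improvement_factor(doubling_time_years: float, horizon_years: float = 10.0) -> float:
    """Total factor of improvement over a horizon at a constant doubling time."""
    return 2.0 ** (horizon_years / doubling_time_years)

print(improvement_factor(2.0))  # chips (FLOP/$ doubling every ~2 years): 32x per decade
print(improvement_factor(1.0))  # algorithms (doubling every ~1 year): 1024x per decade
print(improvement_factor(4.0))  # robots at the slow end (every ~4 years): ~5.7x per decade
```

Even at the fast end of the robotics range, robot technology merely matches algorithmic progress, while the intelligence-explosion inputs compound together.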

The second reason the industrial explosion will be slower is that the feedback loop of “robots make more robots” has a bigger time lag than the feedback loop of “AI makes smarter AI”:

  • As soon as we train a powerful AI system, we’ll be able to run millions of copies in parallel. By contrast, once we develop perfect humanoid robots it will take years before we produce millions.2
  • It takes longer to build a complex factory from scratch than to train an AI system from scratch.3

So the industrial explosion will start after the intelligence explosion, and happen more slowly.

Three stages of industrial explosion

Schematically, we can think of the industrial explosion unfolding in three phases:4

  • AI-directed human labour, where AI-directed human labourers drive productivity gains in physical capabilities.
  • Fully autonomous robot factories, where AI-directed robots (and other physical actuators) replace human labour.
  • Nanotechnology, where physical actuators on an atomic scale build arbitrary structures within physical limits.

AI-directed human labour

In the first phase, AI-directed human labour will drive large gains in the productivity of physical production.

Today, human physical labour is not maximally productive: 

  • Some workers are much more productive than others. You can see this from differences in salary worldwide and within countries, which are significant both between individuals and across one person’s career.
  • Even the most productive workers might not be at the physical limits of human productivity.

AI could bring the economic productivity of human manual workers close to or beyond the productivity of the very best human workers today.5 For example:

  • Human manual workers could wear sensors (e.g. phones with cameras and microphones) which allow real-time AI monitoring of their actions. With this vast amount of data, AI could generate specific real-time advice to each worker.
  • Using this data, AI could generate process improvements which increase the efficiency of whole factories and industries.
  • AI could also coordinate the actions of many disparate humans on complicated projects by tracking all their actions in real time and adjusting plans accordingly. 

Because AI-directed human labour only requires advances in cognitive capabilities, this phase will probably happen before fully autonomous robot factories or nanotechnology. It could in principle be rolled out quite quickly, though in practice this will depend on human adaptability, regulation and other human factors.

This phase will involve lots of humans doing physical labour, as their cognitive labour is no longer useful.6

Fully autonomous robot factories

After increasing the size of the physical economy by a moderate factor,7 AI-directed human labour will run into natural limits: humans can only work so efficiently.

At that point, further demand for physical labour could drive the development of robots and other physical actuators that can fully automate human physical labour.

In practice, physical labour will become increasingly automated in a gradual way:

  • Today, physical labour is done by a mixture of humans and specialised robots, and the process of production is ultimately directed by humans.
  • In the phase of AI-directed human labour, the process of production will come to be directed by AI systems.
  • The proportion of physical labour done by humans will begin to fall, as physical capabilities increase and better robots (and other physical actuators) are produced.
  • As robots continue to improve, humans will only perform tasks where humans have strong comparative advantage.
  • Eventually robots will be better than the best human manual labourers at all tasks.8

Of course, humans may choose not to fully automate physical labour. But absent human bottlenecks, economic incentives and increasing physical capabilities would eventually lead to robots (and other physical actuators) that can fully replace human workers.

If physical labour is fully automated, then an array of AI-directed robots and other physical actuators will be able to autonomously do all economic tasks, including making more robots. In other words, the robots can self-replicate. This is important, as it creates the positive feedback loop that’s required for an industrial explosion.

Indeed, autonomous robots may initially be specialised for making more robots rather than for other tasks, because this task will be so economically valuable. 

Sometimes ‘self-replicating robots’ is used as a shorthand for these AI-directed physical actuators. But it’s important to realise that:

  • Individual robots probably won’t be able to replicate themselves. It’s more plausible that there will be a whole array of actuators in a set of factories, each producing parts for machines which produce parts for machines… which are collectively capable of self-replicating.
  • The physical actuators won’t just be humanoid robots. Initially, many of the actuators may be humanoid robots, as most physical equipment is currently designed for human labourers.9 But ultimately there will likely be large efficiency gains from relaxing the constraint of human compatibility, and producing physical equipment and robots which are more optimised.

Nanotechnology

Eventually, fully automated physical labour will run into physical limits: it won’t be possible to build physical objects any faster.

But smaller objects are faster to build. We see this empirically, with bacteria and other small organisms self-replicating faster than larger organisms. There are also basic engineering principles which support this conclusion (for example, smaller objects have a bigger surface area to volume ratio, so can absorb more materials per unit mass).10
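The scaling argument can be made concrete with a toy model (purely illustrative, assuming growth is limited by material intake through the surface):

```python
def relative_doubling_time(linear_size: float, reference_size: float = 1.0) -> float:
    """If material intake scales with surface area (L^2) and mass with volume (L^3),
    intake per unit mass scales as L^2 / L^3 = 1/L, so doubling time scales as L."""
    return linear_size / reference_size

# On this crude model, a replicator 1,000x smaller doubles ~1,000x faster.
print(relative_doubling_time(1e-3))  # 0.001
```

Real replicators deviate from this (diffusion limits, overhead per unit, etc.), but the direction of the effect matches the bacteria-versus-large-organism comparison above.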

Because smaller objects are faster to build, there will be returns to designing smaller and smaller machines, with faster and faster throughput.

In the limit, an industrial explosion could enter into the third phase, nanotechnology, where physical actuators on a very small scale build a very wide range of structures.11

How fast could an industrial explosion be?

The speed of the industrial explosion will likely change over time. We can consider: 

  • How fast will the industrial explosion be initially?
  • How quickly will the industrial explosion accelerate?
  • How fast will the industrial explosion eventually become?

It’s hard to make substantive claims about the speed of the industrial explosion, as it requires making so many assumptions. Nevertheless, we can make some general claims.

Initial speed

One-time gain from AI-directed human labour

For the first phase, AI-directed human labour, we could operationalise the speed of the industrial explosion in terms of productivity.

How large an increase in total productivity might AI-directed human labour give?

AI direction could make most workers much closer in productivity to the best workers. The difference between the productivity of the average and the best manual workers is perhaps around 2-6X:

  • In the US, labourers earn $13-26 an hour, a ~2X spread.
  • Studies of the variation in output within a given firm suggest the very best workers are 1.5-2X more productive than the mean.12
  • AI will also make whole firms more productive by improving planning and organisation. Top firms are ~3X more productive than the mean.13

We should round this up further though, to account for the possibilities that:

  • AI leads to performance improvements beyond that of the best human workers today.
  • There is a one-time increase in total productivity from large numbers of cognitive labourers switching to manual labour. This could easily be another 2X.

We’re uncertain about how large these uplifts might be, but it looks like – combining the gains from more productive individual workers, more productive firms, and more total human workers – the overall increase in physical output here might be about 10X.

Factor | Increase in physical output
More productive workers | ~2X
Better run organisations | ~3X
More manual workers | ~2X
Total | ~10X
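
As a quick check on the arithmetic, the factors above multiply out as follows (a minimal sketch; the factor values are the rough estimates from the text):

```python
# BOTEC: combined one-time gain in physical output from AI-directed human labour.
# Factor values are the rough estimates given in the text above.
factors = {
    "more productive workers": 2,   # average workers raised towards the best
    "better run organisations": 3,  # top firms are ~3X more productive than the mean
    "more manual workers": 2,       # cognitive labourers switching to manual labour
}

total = 1
for multiplier in factors.values():
    total *= multiplier

print(total)  # 12, which the text rounds down to ~10X as an order-of-magnitude figure
```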

Initial doubling times for autonomous robots

As the industrial explosion transitions from AI-directed human labour to increasing and eventually full automation of physical labour, we can start to operationalise the speed of the industrial explosion in terms of robot doubling times: the time it takes to double the number of robots (and other types of physical actuators) in the world.14

The most recent doubling in the number of robots in the world took 6 years. It’s hard to say how quickly self-replicating robots could double in number, but in an appendix we use a couple of approaches to tentatively estimate that with current physical technology (but abundant AI cognitive labour) this might be on the order of a year, rather than a month or a decade. It could be faster still if AI can quickly drive rapid technological progress without an industrial explosion happening first (for example, by quickly developing advanced nanotechnology).

Acceleration

If robot technology remains constant, the growth rate of robots and other physical actuators will be constant (ignoring resource constraints for simplicity).15

But if technological improvements mean that robots become twice as easy to make, then the growth rate will double.

Ideally, we’d get data on how much you need to increase the stock of robots and other physical actuators before their price halves – an experience curve for robots. We don’t have trustworthy data on that unfortunately. But there are many papers estimating this quantity for related sectors:

Sector | How many times production must double to halve the cost | Source
Chemical industries | 1 - 10 | Nagy et al (2013), Supporting Information 1
Hardware industries | 1 - 2.5 | Nagy et al (2013), Supporting Information 1
Energy industries | 2 - 10 | Nagy et al (2013), Supporting Information 1
Other industries (mostly electrical) | 2 - 5 | Nagy et al (2013), Supporting Information 1
Aggregate economy | 3 | Bloom et al (2020), Table 7 (fn 16)
Moore’s law | 0.2 | Bloom et al (2020), Table 7 (fn 16)
Agricultural sectors | 2 - 10 | Bloom et al (2020), Table 7 (fn 16)
Robots | 1 | ARK Invest (don’t provide raw data)

So if robot technology improves with the same learning curve as the aggregate economy, it will take 3 doublings before the cost of robots and other physical actuators halves and (as a consequence) the robot growth rate doubles. If it’s like Moore’s law, then it will accelerate much more quickly, but that is famously an outlier.

Overall, it looks likely that the number of robots will double 1-5 times before the robot growth rate doubles.
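
As an illustrative sketch of how these experience-curve numbers translate into acceleration (this rests on a simplifying assumption not stated in the text: that the growth rate is inversely proportional to unit cost): if cost halves every k doublings of cumulative production, then after d doublings the growth rate has risen by a factor of 2^(d/k).

```python
# Sketch: how an experience curve turns scale-up into accelerating growth.
# Simplifying assumption: growth rate is inversely proportional to unit cost.

def growth_rate_multiplier(production_doublings: float,
                           doublings_to_halve_cost: float) -> float:
    """Factor by which the growth rate has risen after the given number
    of doublings of cumulative production."""
    return 2 ** (production_doublings / doublings_to_halve_cost)

# Aggregate-economy curve: 3 doublings to halve cost, so the growth
# rate doubles only after 3 doublings of production.
print(growth_rate_multiplier(3, 3))    # 2.0

# A Moore's-law-like curve (0.2 doublings to halve cost) accelerates far
# faster: one doubling of production already raises the growth rate 32x.
print(growth_rate_multiplier(1, 0.2))  # 32.0
```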

Maximum speed

We can upper bound how fast the industrial explosion could become by thinking about how fast robot doublings could become in the limit of technological feasibility (though human bottlenecks might cause us to move more slowly).

To sustain physical growth that would be valuable to humans, the self-replicating machines need to be complex enough to make the machines that make the machines… that make all machines in the modern economy. They might bootstrap by having biological instincts that, under certain conditions, cause them to stop replicating and instead start making increasingly complex machines. Alternatively, they might be configured so that they can receive instructions from AIs who would then direct their behaviour so that they build the desired machines.

How quickly could such machines replicate? One way to estimate this is to look at biological analogies. 

Some bacteria can double in hours. But these organisms are very simple and cognitively basic so may be unable to bootstrap to complex machines. 

Instead, we can look at the fastest doubling times for biological organisms which have brains, and therefore may be capable of executing sophisticated behaviour based on their sensory inputs. In optimal conditions, fruit fly populations can double in days.17 This is proof of concept that biological replicators with brains can double in days.18

Still, fruit flies are physically weak and cognitively fairly basic; perhaps they are too limited to rebuild the full physical economy. Rats are a more conservative example, and in good conditions they can double in about 6 weeks.19

One source of scepticism here is that the earth can only carry so many robots, and we might reach the limit before robot technology becomes good enough for such quick doublings. But a quick BOTEC suggests that, extrapolating the experience curves discussed above, we would get doubling times of less than a day before reaching the earth’s robot carrying capacity.

It seems reasonable to use days or weeks as an upper bound on how fast robot doublings could become, based on biological analogies. This is very fast indeed.20

Thanks to Owen Cotton-Barratt, Max Dalton, Oscar Delaney and Fin Moorhouse for helpful feedback.

Appendices

How fast could robot doubling times be initially?

Once AI can manipulate robots as well as humans can manipulate their bodies, how fast will robot doubling times be?

Here, we give a preliminary sketch of a rough order of magnitude estimate. We assume physical technology is the same as it is today, but assume that there is cheap and abundant AI cognitive labour to control robots and other types of physical capital.

We use two separate estimation approaches, though both have significant uncertainties.

Estimate | How it works | Bottom line
How fast is physical capital at making more physical capital? | Look at a factory that makes more factories. Doubling time = (value of the physical capital in the factory) / (value-added each year by the factory) | ~1 year
How long would it take a humanoid robot to pay for its own construction? | Doubling time = (cost of making a humanoid robot) / (annual wage of a productive manual worker) | ~1 year

How fast is today’s physical capital at making more physical capital?

(Thanks to Constantin Arnscheidt and Damon Binder for raising this approach to our attention.)

Self-replicating robots will involve a wide variety of physical capital – e.g. factories, machines and infrastructure – making more physical capital. So one question is, how quickly can today’s physical capital produce more physical capital?

We can estimate this by comparing the $ value of the physical capital in a particular factory to the value that factory produces in a year (in the form of new physical capital).21 For example, if a $1b factory produces $1b of value each year, then that suggests the total amount of physical capital stock could double in a year. If it only produces $0.5b of value, then a doubling would take 2 years.22

According to data from the Bureau of Economic Analysis, the US manufacturing sector produced $2.6tn of value in 2022 using $5.4tn of physical capital.

These numbers naively suggest that self-replicating robots could double in about two years.
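
A minimal sketch of this calculation (only the ratio of the two BEA figures matters, so the units cancel):

```python
# Doubling time for physical capital = capital stock / annual value added
# (both in the same units, so they cancel). Figures as quoted in the text
# for the US manufacturing sector in 2022.
capital_stock = 5.4         # physical capital used
value_added_per_year = 2.6  # value produced per year in new physical capital

doubling_time_years = capital_stock / value_added_per_year
print(round(doubling_time_years, 1))  # 2.1, i.e. about two years

# The illustrative case from the text: a $1b factory producing $0.5b of
# value per year doubles the capital stock in 2 years.
print(1.0 / 0.5)  # 2.0
```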

This estimate is a bit aggressive for a couple of reasons:

  • Complex factories often take more than a year to build. Building them faster might cost more. This is a small factor as very ambitious factories can be built in less than a year when people are really trying.
  • It ignores the human labour costs of producing physical capital. These are very roughly half of total costs, though the equivalent robot labour will likely cost less. We think this factor will be outweighed by the fact that fully autonomous factories can operate day-and-night, unlike normal factories.

On the other hand, the estimate is very conservative in ignoring productivity improvements from abundant AI cognitive labour. Having physical capital (and robot labour) be controlled by superhumanly smart and motivated AIs could significantly boost productivity. This might reduce the doubling time by a factor of 2-4X.23

So, all in all, this first approach suggests an initial robot doubling time of roughly 1 year or less.

How long would a humanoid robot take to pay for its own construction?

To begin with, we’ll think through an unrealistic but simple hypothetical scenario. Then we’ll consider how this might transfer to the real world.

Imagine a hypothetical where self-replicating humanoid robots drop from the sky tomorrow. They can perform all physical tasks as well as a human, cost the same as today’s robots, and are manipulated by a limitless supply of AI systems who direct them as well as the most productive human workers. These robots go on to self-replicate without any help from humans.

A very basic economic analysis suggests a robot doubling time of ~5 months:

  • Human manual labourers in the US earn ~$40k a year. But these robots will be 6X more productive as they work day and night (3X) and are more productive than average (2X24). So that’s ~$240k/year.
  • Humanoid robots currently cost ~$30-150k to produce commercially, so let’s say ~$100k.
  • This suggests that the robots could pay for their own construction every ~5 months.

This basic analysis suggests that in our hypothetical, robot doubling times will be on the order of months.
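
The arithmetic in the bullet points above, as a minimal sketch:

```python
# BOTEC: how long a humanoid robot takes to pay for its own construction.
human_wage = 40_000  # $/year earned by a US manual labourer
day_and_night = 3    # robots work around the clock
above_average = 2    # robots are as productive as the best workers

robot_output_per_year = human_wage * day_and_night * above_average
print(robot_output_per_year)  # 240000, i.e. ~$240k/year

robot_cost = 100_000  # $ to produce a humanoid robot commercially

payback_months = robot_cost / robot_output_per_year * 12
print(round(payback_months))  # 5
```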

But this is too simplistic. There are two strong reasons to expect that the doubling time (even in our unrealistic hypothetical) would actually be longer:

  • It’s more expensive to build factories from scratch than to rent them. The cost of making a robot (~$100k) already takes into account the costs of renting the factory space. But at scale it would be necessary to build new factories,25 and this is much more expensive. Mathematically, we can think about this as a multiplier on the amortized construction cost of the physical capital used to build the robots. The multiplier equals the time over which the physical capital construction is currently amortized, divided by the robot doubling time. If the cost of building factories is currently amortized over 10 years, then a 12 month doubling time would increase these costs by 10x.
    • A 2 year doubling time seems compatible with this factor: over 2 years the robots could produce ~5X as much value as over 5 months, while the amortised costs would only be ~5X higher (10 years / 2 years).
  • It is more expensive to make things quickly than to make them slowly. Currently it takes 1-2 years to build a new factory. But to double the robot population in 5 months, new factories would need to be built in weeks, which is a 50X speedup. We should expect very large cost penalties for this, and it probably requires significant technological progress for this to even be possible.
    • Again, the penalties for a 2 year doubling seem like they wouldn’t be large.

These factors stop biting at around 2 years, so shift our estimated doubling time up to 1 - 2 years.
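
A minimal sketch of the amortisation multiplier from the first bullet (the 10-year amortisation period is the text's illustrative assumption):

```python
# Multiplier on amortised factory-construction costs when robots double
# faster than the period construction costs are currently amortised over.

def amortisation_multiplier(amortisation_years: float,
                            doubling_time_years: float) -> float:
    return amortisation_years / doubling_time_years

# 10-year amortisation with a 12-month doubling time: costs rise 10x.
print(amortisation_multiplier(10, 1))  # 10.0

# With a 2-year doubling time the penalty is only 5x.
print(amortisation_multiplier(10, 2))  # 5.0
```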

The hypothetical doubling time might be shorter due to lower labour costs. Currently, robot construction costs include paying for human cognitive labour. In our hypothetical scenario, there is abundant AI cognitive labour, so these costs don’t need to be paid. If half the cost of robot construction is currently human cognitive labour, this would be a 2x reduction in the doubling time.

This leaves our estimated doubling time at about 1 year or less in our hypothetical scenario. We put less weight on this method than on the one above.

How does this translate to the real world?

There are a few related reasons to think that once physical labour is fully automated in the real world, initial robot doubling times might be towards the shorter end of that range:

  • In our hypothetical, humanoid robots self-replicate without any help from humans. In the real world, humans will be helping to make more robots.
  • In our hypothetical, we assumed that the robots would need to build factories from scratch. In the real world, there will be a transition period where existing physical capital is reallocated to producing more robots.26 This will help us to build robots faster – and by the time it’s necessary to build new factories, robot technology will probably already have improved a lot, which will lower the cost of building new factories.
  • There is historical precedent for in-demand physical goods doubling every ~2 years, including solar panels, smartphones and electric cars.

Overall, we can tentatively say that initial robot doubling times are likely to be on the order of a year, rather than months or decades.

How fast could robot doubling times accelerate?

If robot technology remains constant, the growth rate of robots and other physical actuators will be constant (ignoring resource constraints for simplicity).

But if technological improvements mean that robots become twice as easy to make, then the growth rate will double.

Ideally, we’d get data on how much you need to increase the stock of robots and other physical actuators before their price halves – Wright’s law for robots. We don’t have trustworthy data on that, unfortunately. But there are many papers estimating this quantity for related sectors:

Sector | How many times production must double to halve the cost | Source
Chemical industries | 1 - 10 | Nagy et al (2013), Supporting Information 1
Hardware industries | 1 - 2.5 | Nagy et al (2013), Supporting Information 1
Energy industries | 2 - 10 | Nagy et al (2013), Supporting Information 1
Other industries (mostly electrical) | 2 - 5 | Nagy et al (2013), Supporting Information 1
Aggregate economy | 3 | Bloom et al (2020), Table 7 (fn 27)
Moore’s law | 0.2 | Bloom et al (2020), Table 7 (fn 27)
Agricultural sectors | 2 - 10 | Bloom et al (2020), Table 7 (fn 27)

So if robot technology improves with the same learning curve as the aggregate economy, it will take 3 doublings before the cost of robots (and other physical actuators) halves and the robot growth rate doubles. If it’s like Moore’s law, then it will accelerate much more quickly.

Overall, it looks likely that the number of robots will double 1-5 times before the robot growth rate doubles.

How quick might robot doublings become by the time we reach the earth’s carrying capacity?

This calculation has three steps:

  1. How fast would robot doubling times be with current technology?
  2. How many orders of magnitude will we scale up robot production before reaching the earth’s carrying capacity?
  3. How much will this scale-up reduce the doubling time, based on experience curves for cost vs production?

Step 1. Above, we estimated that with current physical technology and abundant AI cognitive labour, robot doubling times might be about one year.

Step 2. Today fewer than 100,000 humanoid robots have been produced.28

We expect that the earth’s robot carrying capacity will be constrained by energy not by raw materials (see fn 15 for discussion). Solar energy hitting the earth is 2e17 W, whereas the human body uses 100W. If 5% of solar energy (1e16 W) is used to run humanoid robots with efficiency matching humans, you could run 1e16 / 100 = 1e14 humanoid robots. 

That’s a scale up of robot production of 9 orders of magnitude (1e14/1e5 = 1e9).

Step 3. Above we estimated that we might have to scale up robot production by 1-5 orders of magnitude to reduce the doubling time by one order of magnitude. 

Conservative calculation: robot doubling times fall by 9 / 5 = ~2 orders of magnitude to a few days.

Median calculation: robot doubling times fall by 9 / 3 = 3 orders of magnitude, to a few hours.

Aggressive calculation: robot doubling times fall by 9 / 1 = 9 orders of magnitude to less than a second.
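
The three steps can be chained together in a single sketch, using the figures quoted in the text:

```python
import math

# Step 1: initial doubling time with current physical technology
# and abundant AI cognitive labour.
initial_doubling_years = 1.0

# Step 2: scale-up headroom before the earth's energy-limited carrying capacity.
solar_power_w = 2e17    # solar energy hitting the earth
usable_fraction = 0.05  # 5% devoted to running robots
robot_power_w = 100     # power per robot, matching the human body

carrying_capacity = usable_fraction * solar_power_w / robot_power_w  # 1e14 robots
current_robots = 1e5    # fewer than 100,000 humanoid robots produced so far
scale_up_ooms = math.log10(carrying_capacity / current_robots)       # 9 OOMs

# Step 3: every 1-5 OOMs of scale-up cuts the doubling time by one OOM.
for label, ooms_per_tenfold_speedup in [("conservative", 5), ("median", 3), ("aggressive", 1)]:
    speedup = 10 ** (scale_up_ooms / ooms_per_tenfold_speedup)
    doubling_days = initial_doubling_years * 365 / speedup
    print(f"{label}: ~{doubling_days:.2g} days per doubling")
```

This reproduces the three bottom lines above: a few days (conservative), a fraction of a day, i.e. a few hours (median), and a tiny fraction of a second (aggressive).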

This suggests we could reach the doubling times of days or weeks suggested by the biological anchors.

Caveat: One big uncertainty in this calculation is that it does not consider the other types of physical capital (e.g. factories, machines, infrastructure). If some type of physical capital has a less favourable experience curve (and there’s no alternative with a more favourable experience curve), then this could bottleneck growth and increase the doubling time.

This research was done at Forethought. See our website for more research.
