Most people consume a fairly balanced basket of goods and services. If certain goods and services become cheaper, more of these may be consumed, but there is a limit to this. Similarly, most large productive elements in the economy (such as companies, factories, datacenters, or airports) require a fairly balanced basket of goods and services to make and run: just about everything needs concrete, and steel, and copper wire, for example. So yes, it's a graph — but most nodes are strongly dependent on a large number of other nodes, and often on rather similar lists of "staple goods", so it's a very highly-connected graph. Not a simplex, but a lot closer to a simplex than to a tree. Just about everything is within three hops of steel, or concrete, or copper wire, or a long list of other staples: even if it doesn't need them itself, it depends on something that does, or on something that depends on something that does — usually by many such paths. The very high connectivity of the economy is what makes Amdahl's Law a fairly good approximation. As you note:
> Subgraphs of the overall graph can grow, limited only by ability to buy inputs at the edges where they interact with the parent graph.
My point is: those edges are long, heavily cross-linked, and very near to every node in the subgraph. Roughly speaking, everything connects to everything else within a few hops, so separating subgraphs is hard. For example, for your hypothetical self-contained piece of economy in Australia's outback to decouple from the rest of the economy, it needs to be able to produce almost all of these staple goods at prices comparable to what the entire rest of the economy can. So it needs to be equivalent to the economy of a large country, at a minimum. By that point, it contains most sectors of the economy. So Amdahl's Law becomes a fairly good approximation internally to it.
So yes, it's a graph — but it's one of sufficiently high connectivity that modelling it as acting more like a single unitary system is a pretty good first-order approximation.
Right now, most of the graph ties back to a single node very strongly. That node is labor.
If tomorrow we found that the productive machinery of society had doubled (twice as many factories, trains, mines, etc.), we wouldn't have the workers to run them.
Efficiency improvements that increase productivity per unit of labor do grow the economy, but there's never a true decoupling of labor and productivity that would allow unlimited economic growth. That's the problem.
As I said in my post, and I'll make clearer now, this outcome relies on achieving a narrow form of STEM AGI. Heavy on the engineering. It seems likely that software will be solved sometime in the next few years. The cost of working software will trend towards zero. I expect the same to happen in the hardware space.
To put forward a crux: suppose in two years LLMs become capable of designing industrial machinery and troubleshooting automation systems the same way current LLMs seem on track to build arbitrary software and do sysadmin work. Yes, there is an enormous ecosystem of industrial processes necessary to build a truly closed economic subgraph. Some can be put off for a while (e.g., semiconductors). But if you had a million engineers at my level, I think full automation is doable. A lot of what goes wrong in the real world is really, really stupid.
I have a gears-level model of why full automation has failed: O-ring theory, basically. I can back things up with examples. If AI can raise the competence floor in manufacturing sufficiently, full automation will work. A lot of the necessary capabilities are there already: AI can already do a lot of what I did over the last few years as a manufacturing engineer. There's still a ways to go on visuospatial capabilities. Current LLMs can't design anything mechanical; that's not there yet. But I'd wager a SOTA LLM today is better than the median engineer overall.
I'll add a little detail there. Processes and automation specifically need to be well tuned to be reliable, and there are hundreds of subtle things that can go wrong. Engineers change things; bad engineers don't know if their changes made things better or worse. If a process is 99.99% stable, a good change might make it 99.995% stable, while a bad change might remove one of your nines. Enough bad changes can take a good process and ruin it slowly over time, until eventually the failures are obvious enough that even a bad engineer knows to revert.
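The erosion dynamic above can be made numeric. This is a toy model with assumed multipliers (a good change halves the failure rate, matching the 99.99% to 99.995% example; a bad change multiplies it by ten, removing a nine), not a claim about any real process:

```python
# Toy model of process-stability drift. Assumed multipliers: a good
# change ('g') halves the failure rate; a bad change ('b') multiplies
# it by 10, i.e. removes one nine.
def failure_rate_after(changes: str, start: float = 1e-4) -> float:
    """Apply a sequence of changes to a process starting at four nines."""
    rate = start
    for c in changes:
        rate = rate / 2 if c == "g" else min(rate * 10, 1.0)
    return rate

print(failure_rate_after("g"))      # ~5e-05: 99.995% stable
print(failure_rate_after("b"))      # ~1e-03: one nine gone
print(failure_rate_after("gggbb"))  # ~1.25e-03: two bad changes undo three good ones
```

The asymmetry is the point: because a bad change hurts far more than a good change helps, a team that can't tell the two apart ratchets a stable process downward over time.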
Corporations are really stupid. The number of times you find out that the person who built X doesn't work here anymore and no one has any idea how it works... it's absurd. Coding agents are currently on track to fix a lot of that on the software side. Claude Code is already good enough, but for the risk of catastrophic damage. Someone will put it together with disk imaging/reversion, package it as "AI agent based tech support", and it will work wonderfully.
I plan on going into the details in another post, but to restate my crux: suppose we get AI that can reliably build and tune industrial automation. If we turn that AI towards automating the production of all the machines necessary for a stripped-down economic core (machine tools, wire/sheet fabrication, electric motors, and too many more to count), would that economic core not be able to grow independently of labor inputs?
So yes, splitting off an independent subgraph is hard, and that graph is very, very big, but it's nowhere near as big as the current economy, nor anywhere near as big as all current manufacturing, which is very much focused on consumer goods.
Initially, yes, the 1/(1-x) maximum will hold. Eventually, once capabilities cross some threshold, it stops holding as the dependency chains start closing up and second order effects take over.
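That 1/(1-x) ceiling is Amdahl's Law applied to the economy. A quick numerical sketch (the fractions and speedup factors are illustrative, mine) shows how hard the cap binds while sectors stay coupled:

```python
# Amdahl's Law: if a fraction p of output is sped up by factor s while
# the rest stays fixed, overall gain = 1 / ((1 - p) + p / s).
def amdahl_gain(p: float, s: float) -> float:
    """Overall speedup when fraction p is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Even a near-infinite speedup on 90% of the economy caps growth near 10x:
print(round(amdahl_gain(0.9, 1e12), 3))  # ~10.0, i.e. 1/(1-0.9)
print(round(amdahl_gain(0.5, 100), 3))   # ~1.98: half the economy 100x faster
```

The second line is the striking one: making half of everything a hundred times cheaper buys less than a doubling, so long as the other half must scale in proportion. The claim in this thread is about when that proportionality assumption breaks.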
> Right now, most of the graph ties back to a single node very strongly. That node is labor.
Modern labor is highly specialized, so this isn't a single node. A science fiction writer couldn't become a programmer overnight, despite both spending their days typing at a computer. Similarly, a commercial airline pilot couldn't retrain as a pediatrician in a single day, nor could a steelworker instantly become a linguist-translator. Moreover, the ability to switch careers declines with age—I myself changed professions at 30, and it was extremely difficult.
Therefore, it would be wise to proactively identify workers in which professions will need retraining most urgently and potentially make this a central goal of social programs.
> There's still a ways to go on visuospatial capabilities.
Engineers are not the only dependency on labor. I gather that Tesla had a LOT of trouble getting industrial robots to handle sheets of very flexible material (cloth, or insulation), so still needed human workers on the assembly line. But obviously that's a solvable problem in robotics.
Agreed, there are a number of areas that are super-cursed; soft bodies are one of them. One thing you'll notice in EV manufacturing is that a lot of high-voltage components are stampings or rigid conductors, not flexible wires. Wiring harnesses require people; stacking/bolting rigid parts to create combined mechanical/electrical connections doesn't.
Product design that does away with flexible wires entirely is a big part of design for manufacture and design for automated assembly.
Anytime you're handling flexible foam sheets... why aren't you rigidifying them? Why aren't you just producing the foam in place? Refrigerators are a picture-perfect example of how to build with foam: you produce it in the cavity where it's needed rather than handling it manually.
Looks like I have to rush out that next part.
Better robotics could change a lot of those assumptions.
In my youth, I learned to use a ball-and-chain flail as a weapon. Yes, it's tricky (and hitting yourself somewhere delicate while learning it is no fun) — but it's a learnable skill. Humans learn to make beds, and that's even harder. This could even be learned in sim-to-real, if we had good enough physics models of flexible objects coupled to fluid dynamics — and that's a software problem.
I think I have a similar picture of things. Importantly, you could totally get a self-maintaining, self-replicating automated sub-economy where, say, raw materials are extracted and refined, equipment for extraction and manufacturing is built and transported, etc.
Some huge uncertainties for me:
The modern economy is quite integrated, so lots of things ultimately require quite a diverse range of raw and transformed materials. How diverse? What's the minimal in-principle fully closed manufacturing loop that's reachable starting from where we are now (not at tech maturity)? How big can it scale, and how soon do other loops get closed?
If something is in-principle self-sustaining, under what conditions does that get actualised? And how voraciously?
What are the timelines and bottlenecks to the AI components for something like this? What about the machinery and robotics components?
Optimistically, but still quite dauntingly, this points at a future where economic management is more like ecological management (with various interleaving self-sustaining, self-replicating production loops vying for resources, and some kind of successor to price/demand dynamics hopefully balancing the allocation in a good way). Pessimistically, it's the most plausible near-term self-replicating successor to human and biological life.
Summary: Analyses claiming that automating X% of the economy can boost GDP by at most a factor of 1/(1-X) assume all sectors must scale proportionally. The economy is a graph of processes, not a pipeline. Subgraphs can grow independently if they don't bottleneck on inputs from non-growing sectors. AI-driven automation of physical production could create a nearly self-contained subgraph that grows at rates bounded only by raw material availability and the speed of production equipment.
Models being challenged:
This post is a response to Thoughts (by a non-economist) on AI and economics and the broader framing it represents. Related claims appear across LessWrong discussions of AI economic impact:
These framings share an implicit assumption: the economy is a single integrated production function where unautomated sectors constrain growth of automated sectors. I argue this assumption breaks when an automatable subgraph can grow without requiring inputs from non-automated sectors.
There's no rule in economics that says if you grow a sector of your economy by 10x, farm fields, lawyers and doctors must then produce 10x more corn, lawsuits and medical care respectively. The industrial revolution was a shift from mostly growing food to mostly making industrial goods.
For small growth rates, and small changes, the model is mostly true in much the same way that linear approximations to a function are locally true. But large shifts break this assumption as the economy routes around industries that resist growth.
During the industrial revolution, factories needed more workers but were competing with agriculture. Without innovation, there could have been a hard limit on the labor available to allocate to factories without cutting into food production. Instead, innovations like combine harvesters and other agricultural machinery freed up farm workers for factory jobs. There's economic pressure to route around blockages to enable growth.
A better model
The economy is a graph
Structural changes to the economy can route around bottlenecks to enable growth.
An illustrative case
Consider automating mining and manufacturing. We build the following:
This self-contained economic core can then grow limited only by raw materials, the speed of the underlying machinery, and whatever it cannot produce and has to trade for with the broader economy, like semiconductors or rare earths.
AI and robots don't take vacations, go to the doctor, or put their kids in daycare or school. They need chips, electricity, and raw materials to grow. There will be some engineers, lawyers, and lobbyists employed, but not many relative to similar industrial production in the legacy economy. The system doesn't need much in the way of non-self-produced "vitamins" per unit of production.
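As a back-of-the-envelope sketch of why such a core escapes the 1/(1-x) bound: its capacity compounds while the external "vitamin" stream stays a thin, fixed fraction of output. Every parameter here (50%/yr self-replication, 2% external input per unit) is an assumption of mine for illustration, not a claim from the post:

```python
# Toy model, all numbers assumed: a self-replicating industrial core
# where each unit of capacity adds a fixed fraction of a new unit per
# year, and each unit consumes a small external input ("vitamins":
# chips, rare earths) the core cannot yet produce itself.
def grow_core(years: int, replication_rate: float = 0.5,
              vitamins_per_unit: float = 0.02) -> tuple[float, float]:
    """Return (capacity, external input needed per year) after `years`."""
    capacity = 1.0  # starting capacity, arbitrary units
    for _ in range(years):
        capacity *= 1.0 + replication_rate  # compounding self-replication
    return capacity, capacity * vitamins_per_unit

capacity, vitamins = grow_core(10)
print(round(capacity, 1))  # ~57.7x capacity in a decade at 50%/yr
print(round(vitamins, 2))  # external input stays ~2% of output
```

The growth is exponential rather than capped, but the external demand grows in lockstep, which is where the trade with the legacy economy (and the raw-material bound) re-enters.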
It doesn't matter that this system hasn't replaced doctors in hospitals or convinced San Francisco to build housing. It can grow independently.
A hypothetical timeline
Full automation has the potential to massively increase productivity/growth. Right now most manufacturing output is consumer goods. Only a small fraction is machinery to replace/expand production. That machinery is often badly designed.
AI raising the floor on competence and automation would usher in absurd productivity gains. Full integration happens fast once AI replaces, or convinces, non-engineering-trained MBAs of the stakes. Full automation/integration drops the capital cost to upgrade or build new production capacity. Things go insane.
Timelines and trajectories are hard to predict: ASI could compress everything, an AI wall could stop it, and AI capabilities might not generalize to optimizing manufacturing, though I think they will. Broadly, this doesn't happen, or happens much slower, if:
My own experience in industry and interactions with AI make me think 1) is wrong. 2) isn't an issue so long as there aren't political barriers in all countries. 3) is possible if some critical patents don't get licensed out, but lower tech out of patent technologies can usually be substituted albeit with some efficiency penalty.
Future posts I plan on writing supporting sub-points of this:
I've seen examples of in-house-produced equipment that cost orders of magnitude less than comparable equipment from vendors while being simpler, more reliable, etc.