AI assistance disclosure: I used large language models (e.g. Gemini / ChatGPT) for translation, phrasing and structural suggestions. I spent significant time editing, and I fully understand and endorse all of the arguments in this post; please treat it as my own work.
Epistemic Status: Speculative / first-principles heuristic. I have not run formal models yet; this argument relies on basic thermodynamic and economic constraints.
Editor's Note: I am a netizen from China. Recently, anxiety about AI has been spreading in my community: "Will AI take all our jobs? Are humans about to become useless?" Instead of answering only in terms of IQ, consciousness, or "souls", I want to approach the question from a physical angle: If we take thermodynamics and real-world costs seriously, is it actually efficient for AI to replace humans everywhere?
Introduction: The "Long Run" Is Now
We often imagine a simple story: First AI surpasses humans in intelligence. Then it surpasses us in physical capability. Then, sooner or later, it replaces us. This is usually framed in terms of power and intent: If a superintelligence wanted to wipe us out, nothing could stop it.
I want to add a different lens: cost — in the sense of thermodynamics and economics.
John Maynard Keynes famously said: "In the long run we are all dead." For human politics and economics, this is often an excuse to ignore far-future consequences. The world is big and slow; nature is vast enough to tolerate human inefficiency for a while.
But for a Superintelligence, the "long run" is much closer. Because it can plan over very long horizons and integrate vast amounts of information, the cost structure of its choices matters almost immediately. It cannot afford to ignore thermodynamic reality for long; it must "do the math" of survival and growth from the beginning.
When we do this math, I think we see two hard "walls" that AI cannot easily bypass:
- A Thermodynamic Wall: in many messy, high-entropy environments, carbon-based life is cheaper and more robust than silicon hardware.
- A Dimensionality Wall: humans occupy a high-dimensional, ever-shifting long tail of tasks that is very expensive to model and fully replace.
Crucially, this is not an argument that AI is automatically safe or benevolent. It is a conditional claim: if a powerful optimizer cares about long-term resource efficiency and continued growth, then these walls make a "full replacement" strategy unattractive.
1. The Thermodynamic Wall: The "Rainforest" Paradox
A common fear is: "AI robots will eventually replace humans even in the most extreme physical jobs." In clean, controlled environments, that might be true. But I want to highlight a different region of the world: high-entropy, hostile environments.
Consider a Tropical Rainforest: high humidity, extreme heat, swarms of insects, corrosive bacteria.
[Image: a dense tropical rainforest, a wet and high-entropy environment where biological systems are "native" and silicon systems are costly guests.]
Now compare two kinds of "agents" trying to operate there at scale:
**Silicon-based life.** To operate a high-performance robot here, you need:
- active cooling to keep chips within limits,
- sealed, corrosion-resistant casings,
- regular maintenance to clean, lubricate and recalibrate,
- reliable power and communication infrastructure in a place where trees fall, mudslides happen and roads wash out.
In other words, you must fight the environment on every front just to keep the hardware alive. The environment is constantly trying to heat your systems, corrode your contacts, clog your joints and cut your cables. The total energy and logistical cost of maintaining "silicon order" in this "biological chaos" grows very fast.
**Carbon-based life (humans).** By contrast, we:
- are self-repairing units with immune systems,
- run on organic matter found locally,
- can heal, scar over, and adapt,
- are born, grow up, and learn inside the same chaotic environment.
From a cost perspective, biology is letting the environment do most of the work: cells self-assemble, tissues self-heal, populations self-reproduce. The rainforest is not an enemy; it is the substrate.
So: It is often simpler and cheaper to leave the rainforests to humans (and other biological agents). "Rainforest" here is a stand-in for all harsh, unstructured, high-entropy physical environments: swamps, mountains, disaster zones, informal settlements, and so on.
Key Insight: In such environments, the Total Cost of Ownership (TCO) for a biological unit is orders of magnitude lower than for a silicon unit of comparable versatility.
Of course, the exact crossover point where silicon beats carbon will depend on many engineering details: cooling technologies, materials, energy prices, and so on. I am not claiming a sharp universal boundary. What I am claiming is more qualitative: There will likely remain large regions of the "state space" of physical environments where carbon-based systems enjoy a persistent cost advantage over silicon.
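To make the shape of this claim concrete, here is a minimal toy TCO model in Python. Every constant in it is a made-up placeholder, not measured data; the only structural assumption is that silicon's cooling, maintenance and logistics costs grow with environmental harshness faster than carbon's do, which is enough to produce a crossover somewhere.

```python
# Toy TCO comparison for the "rainforest" argument.
# All numbers are illustrative placeholders, not measurements; only the
# structure matters: silicon's upkeep scales with harshness, carbon's barely.

def silicon_tco(harshness: float) -> float:
    """Annual cost (arbitrary units) of keeping one robot running.
    harshness: 0.0 = climate-controlled warehouse, 1.0 = deep rainforest."""
    capex_amortized = 8.0             # hardware cost spread over its lifetime
    energy = 2.0 + 3.0 * harshness    # active cooling grows with heat/humidity
    # Assumed: corrosion, clogging and recalibration compound each other,
    # so maintenance grows super-linearly with harshness.
    maintenance = 1.0 + 8.0 * harshness ** 2
    logistics = 1.0 + 4.0 * harshness  # spare parts, roads, comms uptime
    return capex_amortized + energy + maintenance + logistics

def carbon_tco(harshness: float) -> float:
    """Annual cost of supporting one human worker in the same environment."""
    food_and_shelter = 12.0
    # Assumed: self-repair means costs grow only mildly with harshness.
    healthcare = 2.0 + 2.0 * harshness
    return food_and_shelter + healthcare

for h in (0.0, 0.25, 0.5, 0.75, 1.0):
    s, c = silicon_tco(h), carbon_tco(h)
    print(f"harshness={h:.2f}  silicon={s:5.1f}  carbon={c:5.1f}  "
          f"-> {'silicon' if s < c else 'carbon'} is cheaper")
```

With these particular numbers the crossover sits somewhere between 0.25 and 0.5 on the harshness scale; changing the constants moves the crossover but not the qualitative picture, which is all the argument needs.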
Any optimizer that cares about efficient resource use and long-term growth will find it unattractive to replace humans with robots everywhere in these regions. It is cheaper to let biological agents continue to do what they are already so good at.
2. The Dimensionality Wall: The "Long Tail" Strategy
Even if we leave the rainforest aside, what about "the rest" of human work? Here I think about a second wall: dimensionality.
When people say "AI will be better at everything than humans", they often imagine a one-dimensional ladder: a single "capability" axis. If AI climbs higher, we lose. But the real world is not one-dimensional. The space of tasks is more like a huge, evolving long tail:
- new tools, norms and technologies keep appearing,
- regulations and institutions change,
- local cultures create idiosyncratic roles,
- micro-niches are constantly created and destroyed.
**AI in standardized domains.** AI dominates in standardized, low-entropy domains: well-defined exams, benchmark tasks, protocol-driven workflows, environments with lots of clean data and stable rules. In these domains, scaling laws favor silicon: more parameters and more data translate into smoother, more predictable improvements.
**Humans in the long tail.** Humans, however, are very good at living in the weird part of the distribution: edge cases, messy social situations, one-off combinations of skills and constraints ("that job only exists in this particular town / company / family"). We don't just move along one axis ("be smarter at X"). We combine partial skills across domains, improvise with whatever tools and social capital we currently have, and pivot when a niche dries up.
This brings me to a small heuristic.
The Mathematics of "Being a Champion"
A toy rule for the long tail:
"As long as there are enough qualifiers, anyone can be a champion."
If you only optimize along one axis, like "running the 100m fastest", then the top spot is brutally competitive. Improving from the 90th to the 99th percentile on a single metric faces rapidly diminishing returns: each extra percentile costs more energy and effort than the last.
But if there are many axes — skill A + skill B + local context C + weird personality D — then:
- almost nobody sits at the absolute top in any single dimension;
- but many people can be locally unique combinations.
You don't need to be the best coder, or the best therapist, or the best plumber. You can be a "good enough" coder + "good enough" therapist + "good enough" plumber in a particular town with its own peculiar problems. In that small, shifting niche, you might be the best (or only) person who fits.
Formally modeling this is hard. Intuitively, it looks less like a single race and more like comparative advantage in a very high-dimensional space.
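One way to see the intuition numerically: under a normal skill distribution, reaching the top 1% on a single axis requires roughly +2.3 standard deviations, while being merely top-30% on each of eight independent axes already places you in a combination shared by only about 0.3^8 ≈ 0.007% of people. The Monte Carlo sketch below (plain NumPy; skills drawn i.i.d. uniform, an assumption chosen purely for simplicity) makes the same point via Pareto dominance: as the number of skill dimensions grows, the fraction of agents that nobody strictly dominates climbs toward 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def undominated_fraction(n: int, d: int) -> float:
    """Fraction of n agents, each with d i.i.d. uniform skill scores,
    that no other agent strictly dominates on every dimension."""
    skills = rng.random((n, d))
    count = 0
    for i in range(n):
        # Agent j dominates agent i iff skills[j] > skills[i] on all d axes.
        dominated = np.any(np.all(skills > skills[i], axis=1))
        count += not dominated
    return count / n

for d in [1, 2, 4, 8, 16]:
    print(f"d={d:2d}  undominated fraction ≈ {undominated_fraction(1000, d):.3f}")
```

With one axis there is exactly one champion per thousand agents; with sixteen axes, almost everyone is undominated, i.e. a locally unbeatable combination. That is the "enough qualifiers" rule in distributional form.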
From the perspective of an AI that wants to exploit opportunities in this space, there is an important consequence: It may be computationally cheaper to outsource exploration of the long tail to humans than to centrally model and replace every niche itself. Humans are already running a gigantic, decentralized search process over possible roles and strategies. Replicating this centrally, in detail, is expensive.
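A back-of-envelope sketch of that asymmetry, with every parameter an invented placeholder: full replacement means re-modeling each niche as fast as niches churn, while outsourcing means paying only a per-query interface cost, because the human "sensors" maintain and update themselves as a side effect of ordinary economic life.

```python
# Hypothetical cost structure for covering the long tail; the constants
# are arbitrary, only the *shape* of the comparison is the point.

def central_modeling_cost(niches: int, churn_rate: float,
                          samples_per_niche: int, cost_per_sample: float) -> float:
    """Steady-state yearly cost: every churned niche must be re-observed
    and re-modeled from scratch."""
    return niches * churn_rate * samples_per_niche * cost_per_sample

def outsourcing_cost(queries_per_year: int, cost_per_query: float) -> float:
    """Yearly cost of just asking the humans who already live in the niches;
    their upkeep is externalized to the ordinary economy."""
    return queries_per_year * cost_per_query

N = 10_000_000  # hypothetical number of long-tail niches
print(f"central:    {central_modeling_cost(N, 0.2, 10_000, 0.01):>15,.0f}")
print(f"outsourced: {outsourcing_cost(1_000_000, 1.0):>15,.0f}")
```

Under these made-up numbers the central route costs two hundred times more per year, and the gap widens as churn or the data needed per niche grows. The constants are arbitrary; the structural point is that replacement internalizes upkeep costs that symbiosis leaves externalized.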
Conclusion: A Niche That Is Hard to Pave Over
Putting it together:
- Thermodynamic Wall: In many harsh, high-entropy environments, carbon-based life remains cheaper and more robust than silicon hardware. Replacing humans everywhere in these environments is thermodynamically and economically inefficient.
- Dimensionality Wall: The human ecological niche is high-dimensional and fast-moving. Humans naturally spread into the long tail of weird, local, context-heavy tasks. Fully modeling and replacing every niche is information- and compute-intensive.
This brings me to the "Biological Tentacle" theory.
If AI wants to understand the physical world—especially the chaotic, long-tail parts of it—it faces a choice:
1. Simulation: Guess what is happening from existing data (prone to hallucinations/errors).
2. Robotics: Build expensive robots to go look (this runs straight into Moravec's paradox: perception and sensorimotor work are exactly where machines are weakest, and the hardware is expensive).
3. Outsourcing: Use existing, self-replicating, self-repairing sensors that are already everywhere. Humans.
Humans are the AI's natural "Biological Tentacles." We possess embodied intelligence. We are excellent "Out-of-Distribution" (OOD) detectors. Under any objective that values accurate world models, economic growth, or long-term survival, an AI has little reason to "cut off" these tentacles: they are its cheapest way to interact with reality.
In this future symbiosis, humans verify the novelty that AI cannot predict from its data centers. Just as photography didn't kill painting but forced it to move into the "Impressionist" era (exploring subjectivity and abstraction), AI might force human intelligence to move deeper into the "Rainforest" — exploring the messy, high-dimensional reality that is too expensive to simulate.
(I have further thoughts on the Paperclip Maximizer problem and a theory of "Low-Entropy/Diversity/Growth" (LDG) alignment, which I hope to discuss in future posts.)