Hmm, I have a few points to add regarding the self-replication cycle. There's actually existing research on Von Neumann probes and self-replicating factories that you might find relevant (e.g., "Replicating systems concepts: Self-replicating lunar factory and demonstration," and also this paper: https://arxiv.org/abs/2110.15198).
(My native language is not English, so this comment was translated by Gemini 2.5 Pro. Please forgive me if it sounds a bit like an LLM.)
Based on this, I'd like to offer a few supplements to your article:
1. The doubling time for a macro-industrial system is indeed on the order of several months to a year, but this is based on very pessimistic technological assumptions. From an engineering perspective, it wouldn't be difficult to shorten it further.
2. The optimism about a short doubling time for nanomachines is somewhat unrealistic. The rapid replication of microorganisms has other causes.
3. Nanomachines (or any small-scale self-replicating structures) have many disadvantages.
First, on the estimation of the doubling time.
Simply put, this order-of-magnitude estimate can be derived from first-principles engineering and physics: every component has a specific power (W/kg) and a specific energy cost of production (J/kg). From these we can calculate the Own Mass Production Time (OMPT) for each component, then take a mass-weighted sum of the OMPTs over all components. (Strictly speaking, this needs to be calculated twice, separating the power-generation and power-consumption parts, but that doesn't change the order of magnitude.)
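To make the bookkeeping concrete, here is a minimal Python sketch of that weighted sum, with the generation/consumption split folded in; every number is an illustrative assumption of mine, not a value from the cited papers:

```python
# Sketch of the OMPT estimate. Each component i contributes a mass fraction
# f_i and a specific production energy e_i (J/kg); generators supply p_gen
# watts per kg of generator mass. Reproducing 1 kg of system then costs
# sum(f_i * e_i) joules, funded by f_gen * p_gen watts per system kg.
components = [
    # (name, mass fraction f_i, production energy e_i in J/kg) -- assumed
    ("pv_generation", 0.25, 1.5e8),
    ("mining",        0.25, 0.8e8),
    ("smelting",      0.25, 1.0e8),
    ("fabrication",   0.25, 1.2e8),
]
p_gen, f_gen = 20.0, 0.25  # W/kg of generator mass; generator mass fraction

energy_per_kg = sum(f * e for _, f, e in components)  # ~1.1e8 J per system kg
power_per_kg = f_gen * p_gen                          # 5 W per system kg
print(f"doubling time ≈ {energy_per_kg / power_per_kg / 86400:.0f} days")
# ≈ 260 days: the "several months to a year" order of magnitude.
```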
This estimation holds true under two premises:
① An ASI-driven, fully automated industrial system can achieve near-perfect JIT production. For example, a smelter runs at full power to process exactly the amount of ore extracted by excavators—no more, no less—with virtually no idle time or surplus production. In contrast, human market economies pay a heavy price for demand volatility: a capital good like an excavator, despite its high manufacturing cost, spends most of its lifecycle idle by the roadside, waiting for the next job. This is a colossal waste.
② The automated industrial system allocates only a small fraction of its productivity to manufacturing products for humans, while dedicating the vast majority of its capacity to its own reproduction and expansion.
For an ASI, neither of these premises would be difficult to satisfy. Therefore, this estimation method is viable.
For the entire industrial system under current technology, the average production cost across all components is on the order of 100 MJ/kg, and their specific power is in the range of 1-10 W/kg. This indeed yields a doubling time of several months to a year, and the energy-consumption figures of various components corroborate it.
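Plugging in just those two headline numbers (a quick sanity check, nothing more):

```python
# ~100 MJ/kg average production cost, 1-10 W/kg specific power (from above)
e = 100e6  # J/kg
for p in (1.0, 3.0, 10.0):  # W/kg
    print(f"p = {p:>4} W/kg -> doubling ≈ {e / p / 86400:.0f} days")
# 1 W/kg -> ~3 years; 3 W/kg -> ~1 year; 10 W/kg -> ~4 months
```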
However, there is significant room for improvement in this specific power, because existing devices are optimized for profit, not specifically for minimizing OMPT.
For instance, the energy payback period of current photovoltaics (PV) is a good proxy for the doubling time of an automated industrial system, and it is on the order of one year. (The generation power P_{gen} must equal the summed power consumption of all other components ∑P_{dev}, and PV's specific energy cost and specific power are similar to those of most machinery, so the total doubling time comes out to roughly twice the PV's own energy payback period.) However, a large portion of PV's energy cost goes into the glass; with thin-film photovoltaics the specific power would increase dramatically, and the payback period would be much shorter.
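A sketch of that factor-of-two argument, with assumed round numbers for the PV figures:

```python
# If PV and the rest of the machinery have similar specific energy cost and
# specific power, the generators must amortize their own production energy
# plus that of a comparable mass of consumers, so the system doubling time
# lands near twice the PV energy payback period.
e_pv, p_pv = 1.0e8, 3.0   # J/kg and W/kg, assumed PV figures
t_pv = e_pv / p_pv        # PV's own energy payback: about a year
print(f"PV payback ≈ {t_pv / 86400 / 365:.1f} yr; "
      f"system doubling ≈ {2 * t_pv / 86400 / 365:.1f} yr")
```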
Next, on the matter of "doubling times of weeks or even hours for biological organisms."
From a more first-principles perspective, it's not just microorganisms that replicate quickly; most carbon-based life on Earth does.
The reason is simple: with the exception of producers, most organisms can directly utilize organic compounds from their environment or food that are already synthesized to a certain stage, such as various amino acids. The analogy for machines would be like finding components scattered everywhere by an ancient civilization, ready to be picked up and used without needing to smelt ore. This makes the specific energy cost for organisms extremely low—on the order of just 1 MJ/kg.
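The order-of-magnitude consequence, holding the machine-level specific power fixed (both feedstock costs are the rough figures from the text; the power is my assumption):

```python
# Same specific power, two feedstocks: preassembled organics vs raw rock
p = 5.0  # W/kg, metabolic-scale specific power (assumed)
for name, e in (("biomass feedstock", 1e6), ("smelted rock", 1e8)):  # J/kg
    print(f"{name}: doubling ≈ {e / p / 3600:.0f} hours")
# ~56 hours vs ~5600 hours: the feedstock alone buys two orders of magnitude
```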
As for why large organisms reproduce more slowly than microorganisms, it's mainly because most of their metabolism is wasted: the majority of energy goes into self-maintenance rather than into producing offspring at full capacity. If we consider only the mass doubling of their cells through metabolism, even large organisms gain mass very quickly.
Of course, scale does have an effect; larger animals have slower metabolisms (smaller specific surface area, difficulty with heat dissipation). However, you can see that there is no order-of-magnitude difference in metabolic rate between a hummingbird and an elephant, so this is not the primary contributing factor.
In other words, we could indeed create nanomachines with very short doubling times, but they would have to use "pre-processed" materials on Earth—biomass. This advantage is finite. Once we exhaust the biosphere, we would have to return to the high-energy (~100 MJ/kg, determined by bond energies and hard to reduce) work of smelting rocks and minerals. Furthermore, the "nano" scale isn't advantageous because "smaller means faster replication," but because those carbon-based raw materials are best processed by carbon-based nanomachines (various enzymes).
Nanomachines offer no magical acceleration. A large specific surface area for fast heat dissipation does help increase specific power and shorten the doubling time, but we must remember we are not in the vacuum of space: once your nanomachines pile up to the point of significantly obstructing one another, their total heat flux (whether through convection, conduction, or radiation) converges with that of a single macroscopic structure.
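A toy version of that pile-up limit, with assumed flux and density values:

```python
# Once nanomachines fill a region, waste heat must leave through the
# region's outer envelope, so the sustainable specific power is set by
# envelope area over enclosed mass, the same constraint a single
# macroscopic machine faces.
import math

q_max = 1000.0   # W/m^2, assumed sustainable flux through the envelope
rho_eff = 500.0  # kg/m^3, assumed packing density of the swarm

for R in (0.001, 0.1, 10.0):  # radius of the pile in meters
    area = 4 * math.pi * R**2
    mass = rho_eff * (4 / 3) * math.pi * R**3
    print(f"R = {R:>6} m -> max specific power ≈ {q_max * area / mass:.3g} W/kg")
# Falls as 1/R: a millimeter cluster can run hot, a 10 m pile cannot.
```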
Finally, regarding the function of nanomachines.
Small-scale structures are by no means a panacea. On the contrary, compared to traditional macroscopic structures, the small scale introduces a host of disadvantages.
First, the large-scale "factory model" is characterized by highly specialized components, separating the "delicate factory" from the "operational structure." The operational structure is like a single organelle within a cell, designed for a specific task: most of its mass is dedicated to that task, not to unrelated work. For example, an excavator doesn't need to carry an oil-extraction-and-fractionation "digestive system" to produce its own fuel. In contrast, the self-replicating unit of a nanomachine (like a microbe) is minuscule, effectively concentrating an entire manufacturing plant within itself.
This has both advantages and disadvantages.
Advantages:
- High versatility. A very small individual can perform many different tasks, from hunting to bricklaying to problem-solving.
- Self-repair and self-replication are possible without relying on extensive external structures. A wound can heal without needing a factory for repairs or replacement parts.
Disadvantages:
- The environmental requirements for every component in the system must be met simultaneously, leading to poor environmental adaptability and extreme fragility. For instance, nucleic acids and proteins are not heat-resistant, so carbon-based life is confined to a very narrow temperature range. Because of this, carbon-based organisms cannot perform certain processes, like manufacturing metal in a steel furnace.
- Due to their high versatility, often only a few systems are active while the rest of the mass is idle. For example, when you're in a fight, you don't really need your digestive or reproductive systems. Converting that mass into muscle would clearly be more advantageous.
The pros and cons are reversed for machines. Specialized engineering machinery doesn't carry its own production equipment and thus has difficulty with self-maintenance (at best you get "self-healing materials" that can only patch minor structural damage). But the point is that we live in a technological era, no longer surviving alone on a savanna surrounded by predators: if a machine breaks down, it can simply go to a nearby factory for repairs.
With the mass production of a technological civilization, the need for "self-maintenance" is almost non-existent. In a mainstream production environment, this disadvantage of machines ceases to be a disadvantage.
Second, many functions can only be achieved at a certain scale. Take intelligence, for example. When you need 1e14 parameters to achieve AGI, your central processing unit cannot be too small. Or consider radiation shielding; the half-value layer is several centimeters thick, so a single nanomachine cannot block radiation. As for "a large number of nanomachines collaborating to form a macroscopic structure," as mentioned before, this is a very fragile model because it requires each nanomachine to carry extra functionalities, making it more versatile but also more delicate, just like us biological beings.
By the way, I should point out that "AI-directed human labor" and "fully autonomous production" are not necessarily sequential stages; they could proceed in parallel.
The reason is that although, from a neoclassical economics perspective, ASI (as a factor of production that can almost perfectly substitute for human labor at near-zero marginal cost) should penetrate the market rapidly and disruptively, several factors still hinder its large-scale application:
① The "last mile" problem: Real-world production environments are full of tacit knowledge—"unwritten rules," "tricks of the trade," and "industry experience," like knowing a particular machine has a quirk and you need to tighten a screw before using it. An ASI must first collect this information and learn it before it can interface with actual production. This "on-the-job training" takes time.
② If deployed without restraint, ASI would cause extremely severe structural unemployment. To protect jobs, governments might adopt cautious and conservative measures to directly or indirectly slow down AI proliferation, including but not limited to high AI taxes and stringent regulations.
③ Companies have sensitive information or trade secrets that they would be unwilling to process with third-party cloud AI services, yet deploying AI on-premises means high capital expenditure and maintenance costs.
Therefore, if policy allows, instead of patching the existing system it might be better to directly establish a fully automated industrial chain, from upstream to downstream, governed by a (well-aligned) ASI. Specifically, a state could take the lead in designating a physical or policy-based special zone (CDZ) governed by the ASI, which would bypass the resistance of transforming legacy assets. Its initial resources could be acquired by outsourcing orders to traditional companies; later, the CDZ's output would be returned as a dividend, thereby gaining support and reducing animosity.
Regarding the first point,
I apologize for the error the LLM made during translation. The phrase I originally wrote was "the difference is not too significant," but it seems some words were omitted. That said, I admit my example was weak and poorly chosen. First, birds have higher body temperatures, allowing faster heat dissipation (so the hummingbird, compared to the elephant, is actually closer to the 2/3-power scaling law and deviates more from Kleiber's law). Second, evolution has favored lower resting metabolic rates to conserve energy, whereas our goal is maximum growth speed, so comparing basal metabolic rates is beside the point. We don't need to reduce specific surface area to conserve energy and stave off hypothermia; on the contrary, we should design structures to be flat.
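For scale, here is the gap between the two exponents across the hummingbird-to-elephant mass range (the masses are round values I assumed):

```python
# Mass-specific metabolic rate scales as M^(-1/4) under Kleiber's law and
# as M^(-1/3) under a pure surface (2/3-power) law.
m_hb, m_el = 0.004, 4000.0   # kg: ~4 g hummingbird, ~4 t elephant
ratio = m_el / m_hb          # 1e6 mass range

kleiber = ratio ** (1 - 3/4)   # ~32x higher specific rate for the hummingbird
surface = ratio ** (1 - 2/3)   # ~100x under the surface law
print(f"specific-rate gap: ~{kleiber:.0f}x (Kleiber) vs ~{surface:.0f}x (2/3 law)")
```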
So, I'd like to rephrase my argument with a few clearer points:
1. Specific surface area (for heat dissipation and material intake) is, of course, very important.
2. However, the size of the replication unit is not the most critical factor affecting the specific surface area. The geometry of the entire system is.
Even if the replication units are small, once they pile up and obstruct each other, then unless you have a massive heat sink (like an ocean) to temporarily absorb the heat, the effective surface area for reaching true thermal equilibrium converges to that of a macroscopic structure. Multicellular organisms are essentially large collections of micro-machines, but they are forced to slow down because their cells are not individually dispersed in water or air.
For a planetary industrial system, the mass at which this mutual obstruction becomes a necessary consideration is not very large. The total mass of the human technosphere is currently about 1.1 Tt; spread evenly over the Earth's surface, that is already a layer on the order of millimeters to centimeters.
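Checking that areal density with round numbers (the bulk density is my assumption):

```python
# Anthropogenic mass spread over the planet, as a uniform layer
m_tech = 1.1e15  # kg, ~1.1 Tt from the text
rho = 1500.0     # kg/m^3, assumed bulk density of the layer
for name, area_m2 in (("whole surface", 5.1e14), ("land only", 1.5e14)):
    sigma = m_tech / area_m2  # kg/m^2
    print(f"{name}: {sigma:.1f} kg/m^2 ≈ {sigma / rho * 1000:.1f} mm thick")
# ~1-5 mm today; a handful of further doublings makes self-shadowing and
# heat rejection first-order design constraints.
```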
Conversely, even macroscopic replication units can be made flat. In fact, our industrial systems are indeed spread out across the planet's surface.
Regarding the second point, this is worth a more in-depth discussion.
First, concerning the replication cycle of duckweed: the "in hours" figure might be a bit off. The results in DOI: 10.1111/plb.12184 show a range of 1.34 to 4.54 days. There are a few records of biomass doubling times under 24 hours, but not far under.
Second, duckweed has a very simple structure, so it doesn't need to spend energy maintaining stems, branches, root systems, or lignified support structures, or on searching for salt ions in soil. It consists of one or a few fronds, nearly all of its cells are involved in photosynthesis or direct nutrient absorption, and it doesn't require sexual reproduction. Single-celled algae are even more extreme in this regard. Producers like these have an advantage in those respects, so comparing them with consumers, which get food at low energy cost but waste much of their energy on things other than doubling, is unfair. Still, I believe we cannot deny that the energy cost of raw materials remains a factor that influences the outcome by an order of magnitude.
Third, and this is something I missed before, we should actually categorize raw material sources into three tiers.
Tier 1: Sourcing from pre-processed biomass. These materials can even carry their own energy.
Tier 2: Sourcing from liquid or gaseous phases, mainly C, N, O, H. Other elements are also dissolved in a dispersed medium, allowing for direct extraction by molecular machines with specific affinities, which is low in energy cost.
Tier 3: Sourcing from solid phases. The key here is purification, which is considerably more energy-intensive. Gaseous CO₂, abundant water, and dissolved salts mean organisms never need to perform this purification (otherwise they'd have to gnaw on carbonate rocks). An industrial system, in contrast, has to process large amounts of rock (spending tens of MJ for every kilogram processed) to extract a small fraction of useful elements. (For modern human industry there is a further negative factor: current production is not optimized for energy efficiency, so the production process itself is inefficient.)
So, we would first exhaust the biosphere (~a few MJ/kg, total mass ~1 Tt), then all the carbon sources in the dispersed media (~tens of MJ/kg; water is abundant, but CO₂ is the limiting step: atmospheric carbon is less than terrestrial biomass, while dissolved ocean carbon is ~100 Tt).
To purify solid materials, we always need to break the surrounding chemical bonds to extract the desired elements, for example, by melting or pulverizing, which increases the cost by another order of magnitude.
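Putting rough numbers on the tiers (masses and costs are the orders of magnitude from the text):

```python
# Total processing energy per tier, order of magnitude only
tiers = [
    ("biosphere (tier 1)",        1e15, 3e6),  # ~1 Tt at ~a few MJ/kg
    ("dissolved carbon (tier 2)", 1e17, 3e7),  # ~100 Tt at ~tens of MJ/kg
]
for name, mass_kg, cost_J_per_kg in tiers:
    print(f"{name}: ~{mass_kg * cost_J_per_kg:.0e} J to process")
# ~3e21 J and ~3e24 J; tier 3 (rock) has effectively unlimited mass but
# ~1e8 J/kg once dilution is included, so energy, not feedstock, becomes
# the binding constraint.
```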
Regarding the third point, I'd be happy to provide sources. https://ntrs.nasa.gov/citations/19830007081, https://arxiv.org/abs/2110.15198 and https://doi.org/10.2514/6.2023-4636 all contain rough calculations for the average specific energy and specific power of Von Neumann probes, although the first is set in a lunar environment and the latter two in asteroid environments. The first two also describe primary seed factories that largely neglect the electronics industry, but given its small mass fraction, I think this can be overlooked.
As for a complete industrial system on Earth, perhaps because of its complexity, I've rarely seen estimates. However, a rough order-of-magnitude method is to consider only the power-generation system: the power generated must equal the total power consumed by all other facilities, so unless the generation system is specifically optimized to values far exceeding other equipment (like thin-film photovoltaics), we can assume its parameters fall within the same order of magnitude as those of the entire industrial system. And there is a large body of research on the energy payback periods of power-generation equipment.