Getting to the point where mechanical engineering is "easy to verify" seems extremely challenging to me. I used to work in manufacturing. Basically everyone I know in the field has completely valid complaints about mechanical engineers who mostly know CAD, simulations, and textbook formulas, because those engineers design parts that ignore real-world manufacturing constraints. AI that designs with simulations seems likely to produce the same result.
Additionally, I would guess that today's humanoid robots are already good enough on the mechanical side, and that they could become self replicating if they were just more intelligent and dexterous.
One example of the sort of problem that could be difficult to simulate: I was working on a process where a robot automatically loaded parts into a CNC machine. The CNC machine produced metal chips as it removed material from the part. The chips would typically be cleared away by a stream of coolant from a mounted hose. At certain hose angles, chips would accumulate in the wrong locations over the course of multiple hours, interfering with the robot's placement of the part. Even if the hoses were initially positioned correctly, they could move, either because someone bumped them while inspecting something or because of vibration.
Simulating how chips come off the part, how coolant flow moves them around the machine, etc., requires an incredible level of fidelity and may simply be intractable. And this is a very constrained manufacturing task that doesn't really have to interact with the wider world at all.
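To put a rough number on that fidelity problem, here is a back-of-the-envelope sketch. Every figure below is an assumption I'm supplying for illustration, not a number from the actual setup: simulating particle-level chip and coolant behavior over the hours it takes chips to accumulate implies an enormous number of timesteps.

```python
# Back-of-the-envelope cost of a particle-based chip/coolant
# simulation. All figures are assumed round numbers for illustration.

process_hours = 4              # chips accumulate over multiple hours
dt = 1e-5                      # s, plausible explicit timestep for
                               # chip/coolant contact dynamics
particles = 5_000_000          # coolant droplets + chips (assumed)
flops_per_particle_step = 500  # neighbor search + force evaluation (assumed)

steps = process_hours * 3600 / dt
total_flops = steps * particles * flops_per_particle_step
print(f"{steps:.1e} timesteps, {total_flops:.1e} FLOPs")
# ~1.4e9 timesteps and ~3.6e18 FLOPs: roughly days of wall-clock
# time on a single GPU at realistic efficiency, for one hose
# configuration.
```

Even if these assumed numbers are off by an order of magnitude in either direction, each candidate configuration costs one long, expensive run, and that's before asking whether the chip-formation physics is modeled accurately at all.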
In general, prototyping something that works is just pretty easy. The challenge is more:
I had some discussion on AI and the physical world here: https://www.lesswrong.com/posts/r3NeiHAEWyToers4F/frontier-ai-models-still-fail-at-basic-physical-tasks-a
Thank you for the real world feedback!
It does feel like current models are much better at software than hardware (especially after reading your post).
Do you think the difficulty (of simulating machines in the real world) is due to a lack of compute, or a lack of data?
E.g. if someone wanted to make a simulation of the CNC machine that includes material accumulating due to bad hose angles, would the main difficulty be a lack of computing power, or the tediousness of uploading the entire machine into the simulation and adding random forces/damage to account for people bumping into hoses?
Assuming AI is aligned, I don’t see how self-replicating machines would be useful. I doubt there’ll be much pressure for that. If AI is malicious, self-replication hardly matters.
Real-world data is likely the main obstacle. With enough of it, I expect they'll be able to compensate for manufacturing defects and have regular maintenance. I assume they'll be able to self-improve with more experience (and likely pooling experiences).
It’s pure speculation though.
My speculation is that one day the AI might be able to engineer self replicating machines in a simulation, while it is still too shortsighted to realize that the machines will help humanity pause AI and thwart its chances of taking over the world.
This way self replicating machines may be useful even if the AI is misaligned.
I think uploading a lot of machines into the simulation, and performing manufacturing inside the simulation (in order to get data about manufacturing defects and wear and tear) will be very expensive. But it might be relatively far less ...
I don't necessarily anticipate that AI will become superhuman in mechanical engineering before other things, although it's an interesting idea and worth considering. If it did, I'm not sure self-replication abilities in particular would be all that crucial in the near term.
The general idea that "AI could become superhuman at verifiable tasks before fuzzy tasks" could be important though. I'm planning on writing a post about this soon.
I think one theoretical advantage AI has for engineering self replicating machines is that self replicating machines require a large "quantity" of engineering work, and AI is good at doing a very large quantity of work: a large quantity of generate-and-test. It's almost like evolution in this regard: evolution created self replicating life because, although it isn't that smart, it does an extreme quantity of work.
The reason I think self replicating machines require a large quantity of engineering work is this: the supply chain of machines in the world today is already partially self replicating, but it still needs human input at various points.
If you take humans out of the loop and replace them with robots, the supply chain actually becomes less efficient (since now you also have to build those robots, and current robots don't move very fast), but it might be barely manageable if the robots are sufficiently trained in simulations.
However, I speculate that one major advantage of taking humans out of the loop is that you can re-scale all the machines to be much smaller. Smaller machines move faster (relative to their body length) and lift greater weights relative to their own mass. You can observe this in how quickly ants move.
An object 1,000,000 times smaller is 1,000,000 times quicker to move a body length at the same speed (and kinetic energy density), 10,000 times quicker at the same power density, or 1,000 times quicker at the same acceleration. It can endure 1,000,000 times more acceleration with the same damage. (Bending/cutting is still 1x the speed at the same power density, but our supply chain would be many times faster if that became the only bottleneck.)
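These scaling factors can be sanity-checked mechanically; the exponents below just restate the dimensional analysis, with k as the linear shrink factor:

```python
# Scaling of the time to traverse one body length when an object is
# shrunk by a linear factor k (here k = 1e6), under three assumptions.

k = 1e6  # linear shrink factor: new body length L' = L / k

# 1. Same speed v: t = L / v, so t scales with L.
speedup_same_speed = k  # 1,000,000x quicker

# 2. Same power density P/m: accelerating from rest over distance L,
#    v ~ (P/m * t)**0.5 and L ~ v * t  =>  t ~ L**(2/3).
speedup_same_power_density = k ** (2 / 3)  # ~10,000x quicker

# 3. Same acceleration a: L = a * t**2 / 2  =>  t ~ L**0.5.
speedup_same_acceleration = k ** 0.5  # 1,000x quicker

# Internal stress from acceleration scales as rho * a * L, so at the
# same stress (same damage), tolerable acceleration scales as 1/L.
acceleration_tolerance = k  # 1,000,000x more acceleration

print(speedup_same_speed, speedup_same_power_density,
      speedup_same_acceleration, acceleration_tolerance)
```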
You have to re-engineer the entire supply chain, but the rewards are great.
Random
The reason biological cells can self replicate in only ~20 minutes is their tiny scale. Biological processes are efficient from the point of view of energy and entropy, but very inefficient from the point of view of speed. A cell manufactures a protein by simply waiting for the next amino acid (building block) to "bump into" the growing chain, with the mRNA and tRNA machinery ensuring that only the correct amino acid (out of 20) is added. This sounds ludicrously slow, but because everything bumps into everything zillions of times a second at that scale, roughly 15-20 amino acids are added each second in a fast-growing bacterium.
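As a rough sanity check on those numbers (the ribosome count and proteome size below are order-of-magnitude textbook figures for fast-growing E. coli, not exact values): each individual protein is still slow to make, and the trick is massive parallelism.

```python
# Order-of-magnitude check: slow per-protein synthesis plus massive
# parallelism is compatible with a ~20 minute doubling time.
# All figures are rough estimates for fast-growing E. coli.

rate_aa_per_s = 20       # amino acids added per second per ribosome
protein_len = 300        # typical protein length (amino acids)
ribosomes = 50_000       # ribosomes per cell (rough)
proteome = 3_000_000     # protein molecules per cell (rough)

time_per_protein = protein_len / rate_aa_per_s           # seconds
total_aa = proteome * protein_len                        # amino acids
minutes_to_remake = total_aa / (ribosomes * rate_aa_per_s) / 60
print(time_per_protein, minutes_to_remake)  # 15.0 s, 15.0 min
```

So one protein takes ~15 seconds, yet tens of thousands of ribosomes working in parallel can rebuild the whole proteome within a ~20 minute doubling time.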
Google's AlphaEvolve has recently started to make real-world scientific discoveries, but only in domains where it's very cheap to verify the correct answer (e.g. matrix multiplication algorithms).
But if we can design sufficiently powerful physics simulations, mechanical engineering may one day become "cheap to verify." AI which uses similar strategies as AlphaEvolve might be able to engineer far better machines than humans, if the machines are verified by simulations.
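To make the "cheap to verify" point concrete, here is a toy generate-and-test loop in the same spirit (this is not AlphaEvolve's actual algorithm, just an illustration of the pattern): because the verifier is exact and nearly free, even blind mutation makes steady progress through sheer quantity of attempts.

```python
import random

# Toy generate-and-verify loop: candidates are cheap to score
# exactly, so a dumb mutate-and-test strategy succeeds through
# quantity of attempts rather than intelligence.

random.seed(0)
TARGET_LEN = 64

def verify(candidate):
    """Cheap, exact fitness: number of 1-bits (the 'OneMax' toy)."""
    return sum(candidate)

def mutate(candidate):
    """Flip one random bit."""
    i = random.randrange(len(candidate))
    child = list(candidate)
    child[i] ^= 1
    return child

best = [0] * TARGET_LEN
for _ in range(20_000):                # many cheap generate-and-test steps
    child = mutate(best)
    if verify(child) >= verify(best):  # keep only verifier-approved changes
        best = child

print(verify(best))  # reaches the optimum of 64 given enough steps
```

The analogy to simulation-verified engineering: swap `verify` for a physics simulation and the loop shape stays the same, but each verification call becomes enormously more expensive, which is why simulation cost and fidelity matter so much.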
These machines may be able to self replicate and grow exponentially. This could bring near-unlimited abundance.
There might then be no economic pressure to build even smarter AI, because resources will already be effectively unlimited.
Unfortunately, countries might still have military pressure to build even smarter AI, since the self replicating machines can still be defeated by even better engineered versions.
My hope is that once self replicating machines start to grow exponentially, the world will wake up to the dangerous power of AI, and negotiate a treaty banning other countries from developing their own AI (but promising them a share of the benefits). I'm not sure if this can succeed.
The first country to build self replicating machines technically has the military power to take over the world and enforce a ban unilaterally, but such brute-force solutions never lead to good outcomes. However, the mere fact that it has this power makes other countries more likely to trust its promise to share the benefits of AI, since they will think: "if this country really were malicious and didn't intend to keep its promise, it could simply take over the world right here and now; it wouldn't need to negotiate with us."
Do you think humanity will survive if AI solves mechanical engineering before it solves deception and scheming? Do you think the chances of survival would be higher in that scenario? Would better physics simulations be a net positive?