The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer.

Eliezer Yudkowsky

To control these atoms you need some sort of molecular chaperone that can also serve as a catalyst. You need a fairly large group of other atoms arranged in a complex, articulated, three-dimensional way to activate the substrate and bring in the reactant, and massage the two until they react in just the desired way. You need something very much like an enzyme.

Richard Smalley

My understanding is that anyone who can grasp what "orthos wildly attacking the heterodox without reading their stuff and making up positions to attack" looks like, considers that this is what Smalley did with Drexler - made up an unworkable approach and argued against it.

Eliezer Yudkowsky


In this post, I use "nanobots" to mean "self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior". Various specific differences from biological cells have been proposed. I've organized this post by those proposed differences.


1. localized melting

Most 3d printers melt material to extrude it through a nozzle. Large temperature differences can't be maintained on a small scale; heat conducts away from a micrometer-sized hot zone far too quickly.
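To put a rough number on that, here's a back-of-the-envelope sketch (the parameter values are order-of-magnitude assumptions, not measurements): steady-state conduction loss from a micrometer-scale hot zone in water, compared to the entire metabolic budget of a bacterium.

```python
import math

# Rough scaling estimate (my own, not from the post): power needed to hold a
# micrometer-scale "hot nozzle" ~200 K above ambient water, vs. a commonly
# cited ballpark for the total metabolic power of an E. coli cell.
k_water = 0.6        # W/(m*K), thermal conductivity of water (approximate)
radius = 0.5e-6      # m, a 1-micrometer-diameter hot zone
delta_T = 200.0      # K, temperature difference to maintain

# Steady-state conduction loss from a sphere held at delta_T above an
# infinite bath: P = 4*pi*k*r*delta_T
power = 4 * math.pi * k_water * radius * delta_T
ecoli_power = 1e-12  # W, rough metabolic power of one E. coli cell

print(f"Conduction loss: {power:.2e} W")                           # ~8e-4 W
print(f"Ratio to E. coli metabolism: {power / ecoli_power:.1e}")   # ~8e8
```

Even if these numbers are off by an order of magnitude, a localized melt zone would bleed heat away roughly a billion times faster than a cell's whole energy budget.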

2. rare materials

If a nanobot consists largely of something rare, getting more of that material to replicate is difficult outside controlled environments.

Growth of algae and bacteria is often limited by the availability of iron, which is more common than most elements. Iron forms the active catalytic site of many enzymes and is needed by all known life. The growth of something made mostly of iron would be far more limited, and other metals are even less available than that.

3. metal surfaces

Melting material isn't feasible per (1), so material must be built up by adding to a surface. That means the interior of a structure must be chemically the same as what its surface was when it was deposited.

Metal objects have a protective oxide layer. In an air or water environment, there's no way to add individual (eg) aluminum atoms to a metal surface and end up with metallic aluminum inside; the whole thing will typically be aluminum oxide or hydroxide.

Corrosion is also a proportionately bigger problem for smaller objects. A micrometer-scale metal structure will rapidly corrode, perhaps doing some Ostwald ripening.
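As a rough illustration of that scaling (the corrosion rate here is an assumed ballpark, not a measured value): at a fixed corrosion rate, lifetime is simply proportional to size.

```python
# Rough illustration (my numbers, not the post's): time for uniform corrosion
# to consume a metal part, as a function of its size, assuming a fixed
# corrosion rate. 10 um/year is a ballpark for unprotected steel in water.
corrosion_rate_m_per_year = 10e-6

for size_m, label in [(1e-2, "1 cm machine part"),
                      (1e-4, "100 um part"),
                      (1e-6, "1 um nanobot-scale part")]:
    # Time to corrode halfway through (attacked from both sides), in years
    lifetime_years = (size_m / 2) / corrosion_rate_m_per_year
    print(f"{label}: ~{lifetime_years:.3g} years")
# 1 cm part:   ~500 years
# 100 um part: ~5 years
# 1 um part:   ~0.05 years (a few weeks)
```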

4. electric motors

Normal "electric motors" are all electromagnetic motors, typically using ferromagnetic cores for windings. Bigger is better for those, up to at least the point where you can saturate cores.

On a very small scale, it's better to use electrostatic motors, and you can make MEMS electrostatic motors with lithography. (Not just theoretically; people actually do that.) But, per (2) & (3), bulk metals are a problem for a self-replicating system. If you need to have compounds floating around, electrical insulation is also difficult. You also need some way to switch current, and while small semiconductor switches are possible, per (3) building them is difficult.
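Here's the standard scaling comparison behind that claim, sketched with field limits I'm assuming (core saturation around 2 T, and a sub-micron-gap breakdown field of a few hundred MV/m):

```python
import math

# Why electrostatic beats electromagnetic at small scales (standard MEMS
# scaling argument; the field limits below are rough assumptions of mine).
mu0 = 4e-7 * math.pi             # vacuum permeability, H/m
eps0 = 8.854e-12                 # vacuum permittivity, F/m

# Magnetic energy density is capped by core saturation (~2 T) no matter how
# small the motor gets.
B_sat = 2.0                      # T
u_magnetic = B_sat**2 / (2 * mu0)             # ~1.6e6 J/m^3

# Usable electric fields get larger at micron and sub-micron gaps (Paschen
# curve / field-emission limits); a few hundred MV/m is a plausible ballpark.
E_small_gap = 5e8                # V/m
u_electrostatic = eps0 * E_small_gap**2 / 2   # ~1.1e6 J/m^3, and it keeps
                                              # rising as gaps shrink further
print(f"magnetic energy density:      {u_magnetic:.2e} J/m^3")
print(f"electrostatic energy density: {u_electrostatic:.2e} J/m^3")
```

Shrink further and the achievable electric field keeps climbing while the saturation-limited magnetic energy density stays put, which is why MEMS actuators are electrostatic.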

Instead of electrostatic charge of metal objects, it's better to use ions. Ions could bind to some molecule, and electrostatic forces could cause that to rotate relative to another molecule. Hmm, this is starting to sound rather familiar.

5. inorganic catalysts

Lab chemistry and drug synthesis often use metal catalysts in solution, perhaps with a small ligand. Palladium acetate is used for making drugs, but it's very toxic to humans, because it...catalyzes reactions.

Life requires control over what happens, which means selective catalysis of reactions. Selective catalysis means molecules must be selectively bound, which requires specific arrangements of hydrogen bond donors and acceptors and so on - and that requires organic compounds. Controlled catalysis requires organic compounds.

6. no liquid

Any self-replicating nanobot must have many internal components. If the interior is not filled with water, those components will clump together and be unable to move around, because electrostatic & dispersion interactions are proportionately much stronger on a small scale. The same is true to a lesser extent for the nanobots themselves.
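For a rough sense of scale, using assumed textbook-ish values for the Hamaker constant and particle density: van der Waals adhesion between micrometer-scale parts dwarfs their weight.

```python
import math

# Rough comparison (assumed generic values, not from the post): van der Waals
# adhesion between two 1-um spheres vs. the weight of one sphere.
A_hamaker = 1e-20    # J, order-of-magnitude Hamaker constant across water
R = 0.5e-6           # m, sphere radius
D = 0.4e-9           # m, closest-approach separation (~atomic contact)

# Sphere-sphere van der Waals force (Derjaguin approximation, equal radii):
# F = A * R_eff / (6 * D^2) with R_eff = R/2, i.e. F = A*R/(12*D^2)
F_vdw = A_hamaker * R / (12 * D**2)

rho = 1500.0         # kg/m^3, assumed particle density
weight = rho * (4/3) * math.pi * R**3 * 9.81

print(f"adhesion ~ {F_vdw:.1e} N, weight ~ {weight:.1e} N, "
      f"ratio ~ {F_vdw/weight:.1e}")   # adhesion wins by roughly 10^5
```

In water, hydration layers and charged surfaces cut that adhesion down by orders of magnitude, which is part of why the wet, crowded interior of a cell still works.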

Vacuum is even worse. Any self-replicating nanobot must move material between the outside and multiple internal compartments. Gas leakage through the transporters would be inevitable. Cellular-scale vacuum pumps would require too much energy and may be impossible. Also, strongly binding the compounds used (eg CO2) to carriers at every step would require too much energy. ("Too much energy" means too much to be competitive with normal biological processes.)
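To see why leakage is such a problem, here's a kinetic-theory estimate (the hole size and compartment size are my assumptions; the constants are standard): how long a bacterium-sized vacuum compartment with a single nanometer-scale defect stays evacuated at ambient pressure.

```python
import math

# Rough kinetic-theory estimate (my own numbers): how fast air leaks through a
# single ~1 nm^2 defect into a 1 um^3 evacuated compartment at 1 atm outside.
kB = 1.381e-23       # J/K
T = 300.0            # K
P = 1.013e5          # Pa, ambient pressure outside
m_N2 = 4.65e-26      # kg, mass of an N2 molecule

n = P / (kB * T)                                    # number density, ~2.4e25 m^-3
v_mean = math.sqrt(8 * kB * T / (math.pi * m_N2))   # mean speed, ~470 m/s
flux = n * v_mean / 4                               # molecules per m^2 per s

hole_area = 1e-18    # m^2 (1 nm^2)
volume = 1e-18       # m^3 (1 um^3, about the size of a bacterium)

molecules_to_fill = n * volume                      # to reach ambient density
time_to_fill = molecules_to_fill / (flux * hole_area)
print(f"~{time_to_fill*1e3:.1f} ms to fill the compartment")   # on the order of 10 ms
```

Milliseconds. Any pumping would have to run continuously against that, which is where the energy cost comes from.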

7. no water

Most enzymes maintain their shape because the interior is hydrophobic and the exterior is hydrophilic. If some polar solvent is used instead of water, then this stability is weakened; most organic solvents will denature most proteins. If you use a hydrophobic solvent, it can't dissolve ions or facilitate many reactions.

Ester and amide bonds are the best ways to reversibly connect organic molecules. Forming or breaking either one releases or consumes water (or an alcohol). And alcohols have no advantage over water in terms of the conditions where they're stable.

Water is by far the best choice of liquid. Its effectiveness at dissolving ions is unique. It can help catalyze reactions by donating and accepting hydrogen. And it's common on Earth, easy to obtain, and easy to keep at the right level.

8. high temperatures

Per (5) you need organic molecules to selectively catalyze reactions.

Enzymes need to be able to change shape somewhat. Without conformational changes, enzymes can't grab their substrate well enough. Without conformational changes, there's no way to drive an unfavorable reaction with a favorable reaction, and that's necessary.

Because enzymes must be able to undergo conformational changes, they need some strong interactions and some weaker interactions that can be broken or shifted. Those weaker interactions can't hold molecules together at high temperatures. Some life can grow at 100 °C, but 200 °C isn't possible.
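A toy two-state folding model shows how sharp that limit is. The enthalpy and entropy values below are assumptions I picked so the toy melting point lands near 110 °C, i.e. hyperthermophile territory; real proteins also have temperature-dependent values, which this ignores.

```python
import math

# Toy two-state folding model (illustrative parameters assumed by me):
# fraction of protein folded vs. temperature.
dH = 400e3       # J/mol, enthalpy cost of unfolding
dS = 1.04e3      # J/(mol*K), entropy gain of unfolding (Tm = dH/dS ~ 385 K)
R = 8.314        # J/(mol*K)

for T_C in (25, 100, 150, 200):
    T = T_C + 273.15
    dG_unfold = dH - T * dS                  # >0 means folding is favored
    K = math.exp(-dG_unfold / (R * T))       # unfolding equilibrium constant
    frac_folded = 1 / (1 + K)
    print(f"{T_C:>3} C: fraction folded ~ {frac_folded:.3g}")
# roughly: folded at 25 C and 100 C, essentially all unfolded by 150-200 C
```

A protein tuned to barely survive 100 °C is essentially all unfolded well before 200 °C; you can push the melting point up, but mainly by adding stronger interactions, which trades away the flexibility catalysis needs.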

This means that the reactions you can do are limited to what organic compounds can do at relatively low temperatures - and existing life can pretty much do anything useful in that category already.

9. diamond

It's possible to make molecules containing diamond structures at ambient temperature. The syntheses involve carbocations, carbanions, or carbon radicals, which are all very unstable. The yields are mediocre, and the intermediates involved are reactive enough to destroy any conceivable enzyme.

Some people have simulated structures that could theoretically place carbon atoms on diamond in specific positions at ambient temperature. Here's a paper on that. Because diamond is so kinetically stable, the synthesis must be exothermic, with high-energy intermediates. So, high vacuum is required, which per (6) doesn't work.

Also, the chemicals consumed to make those high-energy intermediates are too reactive to plausibly be made by any enzyme-like system. And per (1) & (8) you can't use high temperatures to make them on a small scale.

Also, there is no way to later remove carbon atoms from the diamond at low temperature. How, then, would a nanobot with a diamond shell replicate?

10. other rigid materials

CaCO3, silica, and apatite are much easier to manipulate than diamond. They're used in (respectively) mollusk shells, diatom frustules, and bone.

If it were advantageous to use structures of those materials inside cells for reactions somehow, then some organisms would already do that. Enzymes generally must undergo conformational changes to catalyze reactions; a completely rigid diamond shape with functional groups would not make a particularly good enzyme.

And of course, just a small solid shape with nothing attached to it - even if you could make arbitrary shapes - isn't useful for much besides cell scaffolding. Even then, building diatom frustules out of linked diamond pieces seems worse than what diatoms do now with silica. Sure, diamond is stronger than silica, but that doesn't matter. And that's assuming you can make interlocking diamond pieces, which you can't.

11. 3d structures

Unlike cells, nanobots could make 3d structures, instead of being limited to a soup of folded linear structures.

Yes, believe it or not, I've seen people say that. But cells do build 3d structures - eg microfilaments.

Again, enzymes must be able to do conformational changes to work. At ambient temperature, that means they're shaking violently, and if proteins are flopping around constantly, you can't have a rigid positioner move to a fixed position and assume you're placing something correctly.

What you can do is hold onto the end of a linear chain as you extrude it, then fold up that chain into a 3d structure. What you can do is use an enzyme that binds to 2 folded proteins and connects them together. And those are methods that are used by all known life.

12. active transport

Life relies on diffusion and random collisions; nanobots could intentionally move things around.

Yes, I've actually seen people say that, but cells do use motor proteins like myosin to actively transport cargo sometimes. That uses a lot of energy, so it's only done for large cargoes.

13. combining reaction steps

Nanobots could put all the sequential reaction steps next to each other, making them much more efficient than cells.

Cells have compartments with proteins that do related reactions. Some proteins form complexes that do multiple reaction steps. Existing life already does this to the extent that it makes sense to.


14. positional nanoassembly

The above sections should be enough background to finally cover what's perhaps the most central concept of the genre of proposals called "nanobots".

Some people see 3d printers and CNC routers, don't understand enzymes or what changes on a molecular scale very well, and think that cells that work more like 3d printers or gantry cranes would be better. Now, an FDM 3d printer has several components:

  • sensors that detect the current position
  • drivers that control motors based on sensors
  • 3 motors that do 3-axis movement
  • a rigid bed and rigid drive system
  • a good connection between the bed and material being printed
  • a nozzle that melts material

Protein-sized position sensors don't exist.

Molecular linear motors do exist, but 1 ATP (or other energy carrier) is needed for every step taken.

If you want to catalyze reactions, you need floppy enzymes. Even if you attach them to a rigid bed, they'll flop all over the place. (On a microscopic scale, normal temperatures are like a macroscopic 3d printer being shaken violently.)
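Here's what "shaken violently" means quantitatively - a minimal equipartition estimate, with positioner stiffnesses assumed to span roughly the range of AFM cantilevers:

```python
import math

# Equipartition estimate (assumed stiffness values, not from the post): RMS
# thermal jitter of a positioner tip modeled as a spring of stiffness k,
# compared to a ~0.15 nm carbon-carbon bond length.
kB_T = 1.381e-23 * 300           # J, thermal energy at ~300 K
bond_length = 0.15e-9            # m

for k in (0.01, 0.1, 1.0, 10.0):     # N/m
    x_rms = math.sqrt(kB_T / k)      # <x^2> = kB*T / k  (equipartition)
    print(f"k = {k:>5} N/m -> jitter ~ {x_rms*1e12:.0f} pm "
          f"({x_rms/bond_length:.2f} bond lengths)")
```

Even a fairly stiff positioner jitters by a tenth to half a bond length, and anything as compliant as a typical protein domain (roughly 0.01-0.1 N/m) jitters by multiple bond lengths.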

Suppose you're printing diamond somehow. You need a seed that's rigidly connected to the printing mechanism. The connection would need to be removable in order to detach the product from the printer. In a large 3d printer, you can peel plastic off a metal surface, but that won't work for covalently bonded diamond. You would need a diamond seed with functional groups that allow it to be grabbed, and since you're not starting with a sheet, you'd need a 5-axis printer arm.


Drexler wrote a book that proposed mechanical computers which control positioner arms by lever assemblies. An obvious problem there is mechanical wear - yes, some MEMS devices have adequate lifetimes, but those just vibrate; their sliding friction is negligible. But suppose you can solve this by making everything out of diamond or using something like lubricin.

So, suppose you have a mechanical computer that moves arms that control placement of something. Diamond is impractical, so let's say silica is being placed. Whatever you're placing, you need chemical intermediates that go on the arms, and you need energy to power everything. Making energy from fuel or photosynthesis requires more specific chemicals, not just specific arrangements of some solid. To do the reactions needed for energy and intermediate production, you need things that can do conformational changes - enzymes.

Without conformational changes, enzymes can't grab their substrate well enough. Without conformational changes, there's no way to drive an unfavorable reaction with a favorable reaction, and that's necessary. You can't just use rigid positioners to drive reactions that way, because they have no way to sense that the reaction has happened or not...except through conformational changes of a flexible enzyme-like tooltip on the positioner, which would have the same issues here.

At ambient temperature, enzymes that can undergo conformational changes are shaking violently, and if proteins are flopping around constantly, you can't have a rigid positioner move to a fixed position and assume you're placing something correctly. Since you need enzymes, you need a ribosome, and production of monomers - and amino acids are the best choice; the available chemical elements are limited, and there is no superior alternative.

Since all that is still needed, what are the positioners actually accomplishing? They'd only be needed to build positioners. The whole thing would be a redundant side system to enzymatic life.

OK, maybe you want to build some kind of mechanical computers too. Clearly, life doesn't require that for operation, but does it even work? Consider a mechanical computer indicating a position as a binary number. The high bit corresponds to a large positional difference, which means you need a long lever - and a long lever trades force for displacement, so the output force becomes too weak, and you'd need some kind of mechanical amplifier. So that's a problem.
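To make the lever problem concrete, here's a deliberately crude toy model; all the parameters are assumptions of mine (1 N/m input stiffness, ~1 nN of available force, a 0.2 nm throw). An ideal lever of ratio r multiplies displacement by r, divides force by r, and makes the output a factor of r^2 more compliant, so thermal jitter at the output grows linearly with r.

```python
import math

# Toy model of the "high bit needs a long lever" problem (my own framing, with
# assumed numbers): force and thermal jitter at the output of an ideal lever.
kB_T = 1.381e-23 * 300   # J
k_in = 1.0               # N/m, assumed stiffness at the lever input
F_in = 1e-9              # N, assumed force available at the input (~1 nN)
step = 0.2e-9            # m, input throw per actuation

n_bits = 8
for bit in range(n_bits):
    r = 2 ** bit                      # lever ratio needed for this bit
    F_out = F_in / r                  # force delivered at the output
    k_out = k_in / r**2               # effective output stiffness
    jitter = math.sqrt(kB_T / k_out)  # RMS thermal displacement at output
    print(f"bit {bit}: stroke {r*step*1e9:5.1f} nm, force {F_out*1e12:7.2f} pN, "
          f"thermal jitter {jitter*1e12:6.0f} pm")
```

By the high bits, the thermal jitter at the output is comparable to the stroke itself - the "force is too weak, you need an amplifier" problem in numbers. A cleverer design might dodge this particular toy model, but the scaling pressure is real.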

Consider also that, since vacuum is impractical per (6) and enzymes and chemical intermediates are needed, you'd have stuff floating around. So you have all these moving parts, they need to interface with the enzymes - so they can't just be separated by a solid barrier - and stuff could get in there and jam the system.

The problems are myriad, and I'd be well-positioned to see solutions if any existed. But suppose you solve them and make tiny mechanical computers in cells - what's the hypothetical advantage of that? The ability to "do computation"? Brains are more energy-efficient than semiconductor computers for many tasks, and the total embodied computation in cells is far greater than that of neurons' occasional spikes.

15. everything else

When someone has an idea about something cells could do, it's often reasonable to presume that it's either impossible, useless, or already used by some organism - but there are obviously cases where improvement is possible. It's certainly physically possible to correct harmful mutations with genetic engineering. There are also ongoing arms races between pathogens and hosts where each step is an informational problem.

But what about more basic mechanisms? Have basic mechanisms for typical Earth conditions been optimized to the point that no improvement is possible? That depends on their complexity. For example, glycolysis and the citric acid cycle are optimal, but here's a more-efficient CO2 fixation pathway I designed. (Yes, you'd want to assimilate the glycolaldehyde synthons by (erythrose 4-phosphate -> glucose 6-phosphate -> 2x erythrose 4-phosphate). I left that as a way for people to show they understood my blog.) See also my post on the origin of life for some reasons life works the way it does. (You can see I'm a big blogger - that's a good career plan, right?)


I wrote this post now as a sort of side note to my post on AI risks. But...what if a superintelligence finds something I didn't think of?

I know, right? What if it finds a way to travel faster than light and sets up in Alpha Centauri, then comes back? What if it finds a way to make unlimited free energy? What if it finds a friendly unicorn that grants it 3 wishes?

There's a gap between seeing that something is conceivably possible and seeing how to do it, and that gap is the only reason that things like research and planning and prediction about the future are possible. I understand Eliezer Yudkowsky thinks that someone a little smarter than von Neumann (who didn't invent the "von Neumann architecture" or half the other stuff he took credit for, but that's off topic) would be able to invent "grey goo" type nanobots. If that were the case, even I would at least be able to see how it would be done.

To be clear, I'm not trying to imply that a superintelligent AI wouldn't have any plausible route to taking over societies or killing most of humanity or causing various other undesirable outcomes. I'm only saying that worrying about "grey goo" is a waste of time. On the other hand, Smalley was mad at Drexler for scaring people away from research into carbon nanotubes, but carbon nanotubes would be a health hazard if they were used widely, and the applications Smalley hoped for weren't practical. Perhaps I would thank Drexler if he had actually pushed people away from working on carbon nanotubes, but he didn't.

Comments (109)

Not an expert in chemistry or biochemistry, but this post seems to basically not engage with the feasibility studies Drexler has made in Nanosystems, and makes a bunch of assertions without justification, including where Nanosystems has counterarguments. I wish more commenters would engage on the object level because I really don't have the background to, and even I see a bunch of objections. Nevertheless I'll make an attempt. I encourage OP and others to correct me where I am ignorant of some established science.

Points 1, 2, 3, 4 are not relevant to Drexlerian nanotech and seem like reasonable points for other paradigms.

Regarding 5, my understanding is that mechanosynthesis involves precise placement of individual atoms according to blueprints, thus making catalysts that selectively bind to particular molecules unnecessary.

6. no liquid
Any self-replicating nanobot must have many internal components. If the interior is not filled with water, those components will clump together and be unable to move around, because electrostatic & dispersion interactions are proportionately much stronger on a small scale. The same is true to a lesser extent for the nanobots themselves.

Vacuum is ...
bhauth:
No, that does not follow. ...for one thing, that's not airtight. No, the steps happen by diffusion so they become slower. That's why slower muscles are more efficient. see this reply
Thomas Kwa:
I don't know how to engage with the first two comments. As for diffusion being slow, you need to argue that it's so slow as to be uncompetitive with replication times of biological life, and that no other mechanism for placing individual atoms / small molecules could achieve better speed and energy efficiency, e.g. this one. I don't have the expertise to evaluate the comment by Muireall, so I made a Manifold market.
bhauth:
Such actuator design specifics aren't relevant to my point. If you want to move a large distance, powered by energy from a chemical reaction, you have to diffuse to the target point, then use the chemical energy to ratchet the position. That's how kinesin works. A chemical reaction doesn't smoothly provide force along a range of movement. Thus, larger movements per reaction take longer.
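To put rough numbers on the quadratic cost (the diffusion coefficient is an assumed ballpark for a tethered protein-sized domain, not a measured value):

```python
# Rough numbers for the point that a diffuse-then-ratchet motor pays
# quadratically for longer steps: typical 1-D diffusion time over a
# distance x is t ~ x^2 / (2*D).
D = 5e-12   # m^2/s, assumed ballpark for a tethered protein-sized domain

for step_nm in (8, 80, 800):
    x = step_nm * 1e-9
    t = x**2 / (2 * D)
    print(f"{step_nm:>4} nm step: ~{t:.1e} s of diffusive search per step")
# quadratic scaling: 10x the step length -> 100x the search time
```

Ten times the step length costs a hundred times the diffusive search time, on top of still needing the one chemical-reaction ratchet at the end.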

I want to remind everybody how efficient molecular machinery is in terms of thermodynamics:

this molecule [RNA] operates quite near the limit of thermodynamic efficiency [7 kcal/mol] set by the way it is assembled [~10 kcal/mol].

and

these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep / 42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability.

From the article "Statistical Physics of Self-Replication" by Jeremy England:

deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.

https://aip.scitation.org/doi/10.1063/1.4818538

That said I think that there may be many sweet spots for a combination of ma...

In the context of AI x-risk, I’m mainly interested in

  • (1) can an AI use nanotech as a central ingredient of a plan to wipe out humanity, and
  • (2) can an AI use nanotech as a central ingredient of a plan to operate perpetually in a world without humans?

[(2) is obviously possible once you have a few billion human-level-intelligent robots, but the question is “can nanotech dramatically reduce the amount of time that the AI is relying on human help, compared to that baseline?”. Presumably “being able to make arbitrarily more chips or chip-equivalents” would be the most difficult ingredient.]

In both cases it seems to me that the answer is “obviously yes”: 

  • super-plagues / crop diseases / etc. are an existence proof for (1),
  • human brains are an existence proof for (2).

Therefore grey goo as defined in this post doesn’t seem too relevant for my AI-related questions. Like, if the AI doesn’t have a plan to make nanotech things that can exterminate / outcompete microbes living in rocks deep under the seafloor—man, I just don’t care.

None of this is meant to be a criticism of this post, which I’m glad exists, even if I’m not in a position to evaluate it. Indeed, I’m not even sure OP would disagree with my comment here (based on their main AI post).

The merit of this post is to taboo nanotech. Practical bottom-up nanotech is simply synthetic biology, and practical top-down nanotech is simply modern chip lithography. So:

1.) can an AI use synthetic bio as a central ingredient of a plan to wipe out humanity?

Sure.

2.) can an AI use synthetic bio or chip litho as a central ingredient of a plan to operate perpetually in a world without humans?

Sure.

But doesn't sound as exciting? Good.

Another merit of the OP might be in pointing out bullshit by Eliezer Yudkowsky/Eric Drexler?

It's kind of unfortunate if key early figures in the rationalist community introduce some bullshit to the memespace and we never get around to purging it and end up tanking our reputation by regularly appealing to it. Having this sort of post around helps get rid of it.

I'd also be interested in:

  • (3) could an AI that is developing nanotech without paying attention to the full range of consequences accidentally develop a form of nanotech that is devastating to humanity

(Imagine if e.g. there is some nanotech that does something useful but also creates long-lasting poisonous pollution as a side-effect, for instance.)

I.e. is it sufficient safety that the AI isn't trying to kill us with nanotech? Or must it also be trying to not kill us?

Fergus Fettes:
Also worth noting w.r.t. this that an AI that is leaning on bio-like nano is not one that can reliably maintain control over its own goals -- it will have to gamble a lot more with evolutionary dynamics than many scenarios seem to imply, meaning:
- instrumental goal convergence more likely
- paperclippers more unlikely
So again, tabooing magical nano has a big impact on a lot of scenarios widely discussed.
Steven Byrnes:
I don’t understand why evolution has anything to do with what I wrote. Evolution designed a genome, and then the genome (plus womb etc.) builds a brain. By the same token, it’s possible that a future AI could design a genome (or genome-like thing), and then that genome could build a brain. Right? Hmm, I guess a related point is that an AI wanting to take over the world probably needs to be able to either make lots of (ideally exact) copies of itself or solve the alignment problem w.r.t. its successors. And the former is maybe infeasible for a bio-like brain-ish thing in a vat. But not necessarily. And anyway, it might be also infeasible for a non-bio-like computer made from self-assembling nanobots or whatever. So I still don’t really care.
Fergus Fettes:
In the 'magical nano exists' universe, the AI can do this with well-behaved nanofactories. In the 'bio-like nano' universe, 'evolutionary dynamics' (aka game theory among replicators under high brownian noise) will make 'operate perpetually' a shaky proposal for any entity that values its goals and identity. No-one 'operates perpetually' under high noise, goals and identity are constantly evolving. So the answer to the question is likely 'no'-- you need to drop some constraints on 'an AI' or 'operate perpetually'. Before you say 'I don't care, we all die anyway'-- maybe you don't, but many people (myself included) do care rather a lot about who kills us and why and what they do afterwards.
Steven Byrnes:
I’m imagining an exchange like this.

ME: Imagine a world with chips similar to today’s chips, and robots similar to humans, and no other nano magic. With enough chips and enough robots, such a system could operate perpetually, right? Just as human society does.

THEM: OK sure that could happen but not until there are millions or even billions of human-level robots, because chips are very hard to fabricate, like you need to staff all these high-purity chemical factories and mines and thousands of companies manufacturing precision equipment for the fab etc.

ME: I don’t agree with “millions or even billions”, but I’ll concede that claim for the sake of argument. OK fine, let’s replace the “chips” (top-down nano) with “brains-in-vats” (self-assembling nano). The vats are in a big warehouse with robots supplying nutrients. Each brain-in-vat is grown via a carefully controlled process that starts with a genome (or genome-like thing) that is synthesized in a DNA-synthesis machine and quadruple-checked for errors. Now the infrastructure requirements are much smaller.

OK, so now in this story, do you agree that evolution is not particularly relevant? Like, I guess a brain-in-a-vat might get cancer, if the AI can’t get DNA replication error rates dramatically lower than it is in humans (I imagine it could, because its tradeoffs are different), but I don’t think that’s what you were talking about. A brain-in-a-vat with cancer is not a risk to the AI itself, it could just dump the vat and start over. (This story does require that the AI solves the alignment problem with respect to the brains-in-vats.)
Fergus Fettes:
If you construct a hypothetical wherein there is obviously no space for evolutionary dynamics, then yes, evolutionary dynamics are unlikely to play a big role. The case I was thinking of (which would likely be part of the research process towards 'brains in vats' -- essentially a prerequisite) is larger and larger collectives of designed organisms, forming tissues etc. It may be possible to design a functioning brain in a vat from the ground up with no evolution, but I imagine that
a) you would get there faster verifying hypotheses with in vitro experiments
b) by the time you got to brains-in-vats, you would be able to make lots of other, smaller scale designed organisms that could do interesting, useful things as large assemblies
And since you have to pay a high price for error correction, the group that is more willing to gamble with evolutionary dynamics will likely have MVOs ready to deploy sooner than the one that insists on stripping all the evolutionary dynamics out of their setup.
Going Durden:
* (1) can an AI use nanotech as a central ingredient of a plan to wipe out humanity, and
* (2) can an AI use nanotech as a central ingredient of a plan to operate perpetually in a world without humans?
Given the hard limitations on dry nanotech, and the pretty underwhelming power of wet nanotech/biotech, both answers should be "...Eh." We have no plausible evidence that any kind of efficient nanotech that could be used for a Gray Goo scenario is possible, and this post is one of the many arguments against it. If we focus only on completely plausible versions of nanotech, the worst case scenario is the AI creating a "blight" that could very, very, very slowly damage our agriculture, cause disease in humans, and expand the AI's influence, on the scale of decades or centuries. There is no plausible way to make an exponentially growing nanite cloud that would wipe us out and assemble into an AI God; the worst case scenario is an upjumped artificial slime mold that slowly creeps over everything, and can be fended off with a dustpan.
Steven Byrnes:
If an AI arranged to release a highly-contagious deadly engineered pathogen in an international airport, it would not take "decades or centuries" to spread. Right????
Going Durden:
a pathogen is not grey nanotech, but biotech. And while it would be very, very dangerous, there is no plausible way for it to wipe out humanity. We already have highly-contagious deadly pathogens all over the planet, and they are sluggish to spread, and their deadliness is inverse to their contagiousness for obvious reasons (dead men don't travel very well).
Steven Byrnes:
I’m finding this conversation frustrating. It seems to me that your grandparent comment was specifically talking about biotech & pandemics. For example, you said “wet nanotech/biotech”. And then in that context you said “"blight" that could very, very, very slowly damage our agriculture, cause disease in humans, and expand the AI's influence, on the scale of decades or centuries”. This sure sounds to me like a claim that a novel pandemic would spread over the course of decades or centuries. Right? And such a claim is patently absurd. It did not take decades or centuries for COVID to spread around the world. (Even before mass air travel, it did not take decades or centuries for Spanish Flu to spread around the world.) Instead of acknowledging that mistake, your response is “a pathogen is not grey nanotech, but biotech”, which is missing the point—I was disputing a claim that you made about biotech. Famously, when you catch COVID, you can become infectious a day or two before you become symptomatic. (That’s why it was so hard to contain.) And COVID also could cause nerve-damage that presumably had nothing to do with its ability to spread. More generally, it seems perfectly possible for a disease to have a highly-contagious-but-not-too-damaging early phase and then a few days later it turns lethal, perhaps by spreading into a totally different part of the body. So I strongly disbelieve the claim that deadliness and contagiousness of engineered pathogens are inevitably inverse, let alone that this is “obvious”. I also suggest reading this article.
Going Durden:
Sorry if the thread of my comment got messy; I did mention somewhere that a COVID-like pathogen would likely be the worst case scenario, for the reasons you mentioned above (long incubation). However, I believe that the COVID pandemic actually proves that humanity is robust against such threats. Quarantine worked. Masks worked. Vaccines worked. Soap and disinfectant worked. As human response would scale up with the danger inherent in any pandemic, I think that anything significantly more deadly than COVID would be stopped even faster, due to far more draconian quarantine responses. With those in place, I do not see how a pathogen could be used to "wipe out humanity". Decimate, yes. Annihilate? No. But as I agreed in another thread, we should cut that conversation now. Discussing this online is literally feeding ideas to our potential enemy (be it AI or misaligned humans).
the gears to ascension:
did we live through the same pandemic?
Going Durden:
we very likely did not, given the span of it, and various national responses.
the gears to ascension:
fair enough. most countries' responses left a lot to be desired. a few countries that are generally known for having their act together overall did for covid too, but it didn't include some critical large-population countries.

This might be the most erudite chemistry post ever on Less Wrong. @Eric Drexler actually comments here on occasion; I wonder what he would have to say. 

I have been trying to sum up my own thoughts without getting too deeply into it. I think I would emphasize first that the capabilities of plain old DNA-based bacteria are already pretty amazing - bacteria already live everywhere from the clouds to our bloodstreams - and if one is worried about what malevolent intent can accomplish on the nanoscale, they already provide reason to be worried. And I think @bhauth (author of this post) would agree with that. 

Now, regarding the feasibility of an alternative kind of nanobot, with a hard solid exterior, a vacuum interior, and mechanical components... All the physical challenges are real enough, but I'm very wary of supposing that they can't be surmounted. For example, synthesis of diamondoid parts might sound impossibly laborious; then one reads about "direct conversion of CO2 to multi-layer graphene", and thinks, could you have a little nano "sandwich maker" that fills with CO2 (purified by filter), has just the right shape and charge distribution on its inner surfaces to be a s...

Metacelsus:
> to our bloodstreams

Nitpick: https://www.nature.com/articles/s41564-023-01350-w "No evidence for a common blood microbiome based on a population study of 9,770 healthy humans" Of course, skin, digestive tract, reproductive tract, etc. all have lots of bacteria.
gilch:
I almost replied with the same point, but thought, "Nah, bacteria do sometimes end up in blood." Bacteremia is an unnatural condition for humans. Either the immune system clears it, or it progresses to sepsis and you die.

I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it. That said, let me now go through my thoughts on various points:

  1. Rare materials: Yep, this is a real design constraint, but probably not that hard to design around? I'm not expecting nanobots to be made mostly out of iron.

  2. Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at the end? The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more precisely, a minimal curvature.) Corrosion could definitely be a problem, but cathodic protection might help in some cases.

  3. Agree that electrostatic motors are the way to go here. I'm not sure the power supply necessarily has to be an ion gradient, nor that the motor would otherwise need to be made from metal. Metal might be actively bad, because it allows the electrons to slosh around a lot. What about this general scheme for a motor?: Wheel-shaped molecules, with sites that can hold electrons. A high voltage coming from

...

I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it.

Thanks, glad you liked it. You made quite the comment here, but I'll try to respond to most of it.

Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at then end?

  1. To build up metal, you need to carry metal atoms somehow. That requires moving ions, because otherwise there's no motive force for the transfer, plus your carrier would probably be stuck to the metal.

Without proteins carrying ions in water, this is difficult. The best version of what you're proposing is probably directed electrochemical deposition in some solvent that has a wide electrochemical window and can dissolve some metal ions. Such solvents would denature proteins.

  2. Inputs and outputs need to be transferred between compartments. Cells do use "airlock" type structures for transferring material, but some leakage would be inevitable.

The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more p

...
DaemonicSigil:
Thanks for the detailed reply. Jumping right in: Yep, I totally concede that the size and level of detail of exposed metal parts is going to be limited, the discussion would mostly be interesting in terms of whether or not nanomachines would be able to assemble large metal parts as an external product or metal parts that are fully embedded in another material (eg. copper wires embedded in diamond). The discussion about surface coatings and cathodic protection is just "haggling over the price", so to speak. The thing where if you stick a piece of metal in an electric field, charges build up on the surface of the metal to oppose the field. The original drawback I had in mind for ionic motors is that you need to drag a membrane everywhere that you want to use a motor, which is very inconvenient. Tubes are membranes, but they're rolled up, which makes them a lot more convenient. Diffusion of ions is very fast on these scales, so I'd guess that ions and electrons are about equally good as power sources, unless the motor is going to be correspondingly very fast at using up lots of ions. On the other hand, I think you're misunderstanding what I'm saying about the pure electrostatic motor. It doesn't need any external switching electronics to power the motor, and in particular not silicon electronics. It should just spin given DC power of the correct voltage. The switching would work via proximity, and would happen on the wheel molecules themselves. It's easiest for electrons to tunnel between two sites when they're physically close in space. Depending on how the wheels are rotated relative to each other, various sites will be closer to or farther from each other in space, and this changes as the wheel spins. Aren't there lots of proteins that undergo conformational changes in ways that don't look like "having arms"? Alternatively, I can make my arm negatively charged and put it on a tower-thingy made of lots of covalently bonded carbon so that it doesn't bend. That tow
bhauth:
Condensation reactions are only possible in certain circumstances. Maybe read about the mechanism of aldol condensation and get back to me. Also, methanediol is in equilibrium with formaldehyde in water. I realize you don't know my background, but if you want to say I'm wrong about something chemistry-related, you'll have to put in a little more effort than that.

I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it. That said, let me now go through my thoughts on various points...

+1. I'd add that, besides the specific objections to points here, the overall argument of the post has a major conjunction problem: it only takes one or maybe two of the points to be wrong in order for the end-to-end argument to fall apart. And a lot of these points do not have the sort of watertight argument which establishes anywhere near 90% confidence, and 90% per-step would already be on the low side for a chain with 10+ mostly-conjunctive steps (0.9^10 is only about 35%).

On top of that, the end-to-end argument mostly seems to argue against some rather specific pictures (e.g. diamondoids, nano-3d printing), which are a lot narrower than "grey goo" in general.

So I think the actual headline argument is pretty weak. But even so, I strong-upvoted the post, because I love the object-level analysis of the individual points on their own.

One of the contentions of this post is that life has thoroughly explored the space of nanotech possibilities. This hypothesis makes the failures of novel nanotech proposals non independent. That said, I don’t think the post offers enough evidence to be highly confident in this proposition (the author might privately know enough to be more confident, but if so it’s not all in the post).

Separately, I can see myself thinking, when all is said and done, that Yudkowsky and Drexler are less reliable about nanotech than I previously thought (which was a modest level of reliability to begin with), even if there are some possibilities for novel nanotech missed or dismissed by this post. Though I think not everything has been said yet.

jacob_cannell:
Regardless of the specific argument here, biological cells are already near pareto optimal robots in terms of thermodynamic efficiency. There is essentially no potential improvement for designs that are better at converting energy into replication of code, or just converting energy into carefully arranged nanostructures in general. This is a much stronger airtight argument not against the possibility of nanotech, but against the promise of nanotech.

This is nice to see, I’ve been generally kind of unimpressed by what have felt like overly generous handwaves re: gray gooey nanobots, and I do think biological cells are probably our best comparison point for how nanobots might work in practice.

That said, I see some of the discussion here veering in the direction of brainstorming novel ways to do harm with biology, which we have a general norm against in the biosecurity community – just wanted to offer a nudge to y’all to consider the cost vs. benefit of sharing takes in that direction. Feel free to follow up with me over DM!

Vladimir_Nesov:
I don't see specifically gray gooey nanobots having a visible presence on LW. When people gesture at nanotech, it's mostly in the sense of molecular manufacturing, local self-contained infrastructure for producing advanced things like computers, a macroscale activity. This is important for quickly instantiating designs that can't be constructed on existing infrastructure, bootstrapping molecular manufacturing capability starting from things like existing RNA printers. This way, bringing new things into physical existence only requires having their designs, given a sufficiently versatile manufacturing toolset. If there is no extended delay with incrementally upgrading production facilities all over the world, ability to design machines thousands of times faster than human civilization directly translates into ability to quickly manufacture them. (The diamondoid bacterium things Yudkowsky keeps mentioning don't particularly need self-replication capabilities to make the same point, they could just as well be pumped out by Zerg queens foraging underground. The details of this don't matter for the point being made, there are many independent ways of eating the world that don't overall become less effective because some of them are on further reflection infeasible.)
cwbakerlee:
Strong +1 to this. I'm also happy to discuss stuff about norms further 1-on-1 -- the best way to contact me, anonymously or non-anonymously, is through this short form.
Davidmanheim:
I assume the strong +1 was specifically on the infohazards angle? (Which I also strongly agree with.) 
cwbakerlee:
Yep, that's right -- thanks for clarifying!
gilch:
It's a fair point that this topic touches on potential infohazards. I don't think anything I've said so far is particularly novel, although in the saying I'm perhaps making the ideas less obscure. I also haven't really gone into much depth of detail (mostly because of my relative lack of expertise). My main aim has been to nudge others into taking the threats more seriously, even after seeing a related strawman cut down.

edit: (link) green goo is plausible

The AI can kill us and then take over with better optimized biotech very easily.

  • Doubling time for
    • Plants (IE:solar powered wet nanotech) > single digit days
    • Algae in ideal conditions 1.5 days
    • E. Coli 20 minutes
  • There are piles of yummy carbohydrates lying around (Trees, plants, houses)
    • The AI can go full Tyranid
  • The AI can re-use existing cellular machinery. No need to rebuild the photosynthesis or protein building machinery, full digestion and rebuilding at the amino acid level is wasteful.
    • Sub 2 minute doubling times are plausible for a system whose rate limiting step is mechanically infecting plants with a fast acting subversive virus. Spreading flying things are self replicators that steal energy+cellular machinery from plants during infection (IE:mosquito like). Onset time could be a few hours till construction of shoggoth like things. Full biosphere assimilation could be limited by flight speed.

Nature can't do these things since they require substantial non-incremental design changes. Mosquitoes won't simultaneously get plant adapted needles + biological machinery to sort incoming proteins and cellular contents + continuous gr...

Ponder Stibbons:
“Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can't flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.” But directed evolution of a polymeric macromolecule (E.g. repurposing an existing enzyme to process a new substrate) is so much easier practically speaking than designing and making a bespoke macromolecule  to do the same job. Synthesis and testing of many evolutionary candidates is quick and easy, so many design/make/test cycles can be run quickly. This is what is happening at the forefront of the artificial enzyme field. 
anithite:
Yes, designing proteins or RNAzymes or whatever is hard. Immense solution space and difficult physics. Trial and error or physically implemented genetic algorithms work well and may be optimal. (EG: provide fitness incentive to bacteria that succeed (EG: can you metabolize lactose?))

Major flaw in evolution:
* nature does not assign credit for instrumental value
* assume an enzymatic pathway is needed to perform N steps
* all steps must be performed for benefit to occur
* difficulty of solving each step is "C" constant
* evolution has to do O(C^N) work to solve problem
* with additional small constant factor improvement for horizontal genetic transfer and cooperative solution finding (EG: bacterial symbiosis)
* intelligent agent can solve for each step individually for O(C*N) (linear) work
* this applies also to any combination of structural and biochemical changes.

Also, nature's design language may not be optimal for expressing useful design changes concisely. Biological state machines are hard to change in ways that carry through neatly to the final organism. This shows in various small ways in organism design. Larger changes don't happen even though they're very favorable (EG: retina flip would substantially improve low light eye capabilities (it very much did in image sensors)) and less valuable changes not happening and not varying almost at all over evolutionary time implies there's something in the way there. If nature could easily make plumbing changes, organisms wouldn't all have similar topology (IE: not just be warped copies of something else). New part introduction and old part elimination can happen but it's not quick or clean. Nature has no mechanisms for making changes at higher levels of abstraction. It can change one part of the DNA string but not "all the start codons at once and the ribosome start codon recognition domain". Each individual genetic change is an independent discovery. Working in these domains of abstraction

I'm puzzled that this post is being upvoted. The author does not sound familiar with Drexler's arguments in NanoSystems.

I don't think we should worry much about how nanotech might affect an AI's abilities, but this post does not seem helpful.

Thomas Kwa:
I agree and expanded on this in a comment.
JenniferRM:
Voting is, of necessity, pleiotropically optimized. It loops into reward structures for author motivation, but it also regulates position within default reading suggestion hierarchies for readers seeking educational material, and it also potentially connects to a sense that the content is "agreed to" in some sort of tribal sense. If someone says something very "important if true and maybe true" that's one possible reason to push the content "UP into attention" rather than DOWN. Another "attentional" reason might be if some content says "the first wrong idea that occurs to nearly everyone, which also has a high quality rebuttal cleanly and saliently attached to it". That is, upvotes can and maybe should flow certain places for reasons of active value-of-information and/or pedagogy. Probably there are other reasons, as well! 😉

A) As high-quality highly-upvoted rebuttals like Mr Kwa's have arrived, I've personally been thinking that maybe I should reverse my initial downvote, which would make this jump even higher. I'm a very unusual voter, but I've explained my (tentative) theories of upvoting once or twice, and some people might have started to copy me.

B) I could imagine some voters were hoping (as I might if I thought about it some more and changed my mind on what my voting policy should be in very small ways) to somehow inspire some good rebuttals, by pre-emptively upvoting things in high VoI areas where LW simply hasn't had much discussion lately?

C) An alternative explanation is of course that a lot of LW voters haven't actually looked at nanotech very much, and don't have good independent object level takes, and just agreed with the OP because they don't know any better and it seemed plausible and well written. (This seems the most likely to me, fwiw.)

D) Another possibility is, of course, that there are a lot of object level agreement voters on LW and also all three of us are wrong about how nano could or would "really just work" if the best research
Roko:

Organic Life is Unlikely

(list of reasons why any kind of organic life ought to be impossible, which must to some extent actually be correct because the Fermi Observation shows that it is extremely rare)

I don't really think this approach of listing a bunch of problems is a way to get a high level of certainty about this. In a certain sense, you should treat this like a math problem and insist on a formal proof that nanotech is impossible starting from the Schrodinger Equation. And of course, such a proof would have the very difficult task of ruling out nanotech without ruling out actual bacteria.

self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior

I think Wet Nanotech might qualify then.

Consider a minor modification to a natural microbe: a different genetic code. I.e., a codon still codes for an amino acid, but which corresponds to which could differ. (This correspondence is universal in natural life, with a few small exceptions.) Such an organism would effectively be immune to all of the viruses that would affect its natural counterpart, and no horizontal gene transfer to natural life would be possible.

One could also imagine further modifications. Greater resistance to mutations, perhaps using a more stable XNA and more repair genes. More types of amino acids. Reversed chirality of various biomolecules as compared to natural life, etc. Such an organism (with the appropriate enzymes) could digest natural life, but not the reverse.

There's nothing here that seems fundamentally incompatible with our understanding of biochemistry, but with enough of these changes, such an organism might then become an invasive species with a massive competitive advantage over natural life, ultimately resulting in an ecophagy scenario.

That has already happened naturally and also already been done artificially.

See this paper for reasons why codons are almost universal.

That third link seems to be full of woo.

Where was the optimization pressure for better designs supposed to have arisen in the "communal" phase?

Thus, we may speculate that the emergence of life should best be viewed in three phases, distinguished by the nature of their evolutionary dynamics. In the first phase, treated in the present article, life was very robust to ambiguity, but there was no fully unified innovation-sharing protocol. The ambiguity in this stage led inexorably to a dynamic from which a universal and optimized innovation-sharing protocol emerged, through a cooperative mechanism. In the second phase, the community rapidly developed complexity through the frictionless exchange of novelty enabled by the genetic code, a dynamic we recognize to be patently Lamarckian (19). With the increasing level of complexity there arose necessarily a lower tolerance of ambiguity, leading finally to a transition to a state wherein communal dynamics had to be suppressed and refinement superseded innovation. This Darwinian transition led to the third phase, which was dominated by vertical descent and characterized by the slow and tempered accumulation of complexity.

They claim that unive...

bhauth:
You're misunderstanding the point of those proposed amino acids. They're proposals for things to be made by (at least partly) non-enzymatic lab-style chemical processes, processed into proteins by ribosomes, and then used for non-cell purposes. Trying to use azides (!) or photocrosslinkers (?) in amino acids isn't going to make cells work better. There really isn't much improvement to be had by using different amino acids.

The new amino acids might be "essential" (not manufacturable internally) and have to come in as "vitamins" potentially. This is another possible way to prevent gray goo on purpose, but hypothetically it might be possible to find ways to move that synthesis into the genome of neolife itself, if that was cheap and safe. These seem like engineering considerations that could change from project to project.

Mostly I have two fundamental points:

1) Existing life is not necessarily bio-chemically optimal because it currently exists within circumscribed bounds that can be transgressed. Those amino acids are weird and cool and might be helpful for something. Only one amino acid (and not even any of those... just anything) has to work to give "neo-life" some kind of durable competitive advantage over normal life.

2) All designs have to come from somewhere, with the optimization pressure supplied by some source, and it is not safe or wise to rely on random "naturally given" limits in the powers of systems that contain an internal open-ended optimization engine. When trying to do safety engineering, and trying to reconcile inherent safety with the design of something involving autonomous (potentia...

avturchin:
I am also for Wet Nanotech. But a different genetic code is not needed, or at least it is not the important thing. The main thing is to put a Turing computer inside a living cell similar to E. coli and to create a way for two-way communication with an external computer. Such a computer should be genetically encoded, so if the cell replicates, the computer also replicates. The computer has to have the ability to get data from sensors inside the cell and to output some proteins. Building such Wet Nanotech is orders of magnitude simpler than real nanotech. The main obstacle for AI is the need to perform real-world experiments. In the classical EY paradigm, the first AI is so superintelligent that it does not need to perform any experiments, as it could guess everything about the real world and will do everything right on the first attempt. But if the first AI is still limited by the amount of available compute, or by its intelligence or some critical data, it has to run tests. Running experiments takes longer, and the AI is more likely to be caught in its first attempts. This will slow its ascent and may force it to choose ways where it cooperates with humans for longer periods of time.
bhauth:
You want to grow brains that work more like CPUs. The computational paradigm used by CPUs is used because it's conceptually easy to program, but it has some problems. Error tolerance is very poor; CPUs can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like...neural networks; perhaps that's where the name came from.
avturchin:
No, I didn't mean brains. I mean digital computers inside the cell; but they can use all the ways of error-correction including parallelism.
mako yass:
Have you heard of the Arc protein? It's conceivable that it's responsible for transmitting digital information in the brain, like, if that were useful, it would be doing that, so I'd expect to see computation too. I sometimes wonder if this is openworm's missing piece. But it's not my field.
2JenniferRM1y
That is so freakin' cool. Thank you for this link. Hadn't heard about this yet...  ...and yes, memory consolidation is on my list as "very important" for uploading people to get a result where the ems are still definitely "full people" (with all the features that give presumptive confidence of being "sufficient for personhood", because the list has been constructed in a minimalist way such that if the absence of one of those features didn't "break personhood", then not even normal healthy humans would be "people").
2Gunnar_Zarncke1y
There is a new paper by Jeremy England that seems relevant: Self-organized computation in the far-from-equilibrium cell https://aip.scitation.org/doi/full/10.1063/5.0103151 

Just to be clear, a point which the post seems to take for granted, but which people not familiar with the topic might not think about, is:

Life is already selected for inclusive genetic fitness, so if nanobots do not unlock powerful capacities that life does not already have, then you cannot have a gray goo scenario because ordinary life will outcompete your nanobots for resources.

9Charlie Steiner1y
I dunno, I agree with the post but disagree that this is much of a safety factor for nanotech. There are things that are easy for design but impossible for evolution. For example, if you make a cyanobacterium with an alternate genetic code so that it's immune to all current viruses, this would outcompete unmodified cyanobacteria. But evolution is never going to change the entire genome all at once to capture this advantage. Artificial life can probably do a lot of weird and powerful stuff even if the "diamondoid nanobot" picture is wrong.
2bhauth1y
replied here
2tailcalled1y
I might be wrong, but I think the idea you have here of something with immunity to all current viruses would constitute a genuine counterargument to the OP? Possibly I'm misunderstanding the scope of what OP is arguing about.

OP said:

I use "nanobots" to mean "self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior".

I think that there are lots of plausible “invasive species from hell” scenarios where an organism is sufficiently edited so as to have no natural viruses (because its genome is weird) and no natural predators (because its sugars are weird or it has an exotic new toxin) and so on. They would still have ecological niches where they wouldn’t be able to thrive, and they would still presumably get predators and diseases eventually. But a lot of destruction could happen in the meantime, including collapsing critical ecosystems etc., and it could happen fast (years not decades, but also not weeks) if the organism is introduced in lots of places at once, I would assume.

Those scenarios are important, but they’re not “nanobots” by OP’s definition.

7[anonymous]1y
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3133615/ Here's likely what Steiner is referencing. Take a genome and add 1 base to each codon - something you can do with a Python script ChatGPT can write in 2 minutes, but effectively impossible for nature to ever do. Evolution will likely hit "time limit exceeded" before doing it even once - we only have about 1 billion years left on this star, and it took 3 billion to reach this point. The reason is that the computational mechanism to do this is complex and one-time-use, with no evolutionary pressure vector pointing towards it; it will never be found by evolution. Life built on 4-base codons could be superior to all existing life because it can access a 4 times larger library of possible protein components, and it automatically becomes immune to all viruses (until new viruses evolve). Life modified in a way that lets it outcompete existing life is not grey goo, it's "green goo" - a totally different scenario. Green goo will also be limited by energy and by barriers protecting existing life - for example, cellulose is hard to break, and this may not be solvable. So the green goo might grow and outcompete life slowly, taking centuries to cover the planet.
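To make the "trivial for a script, unreachable for evolution" point concrete, here is a minimal Python sketch (not from the linked paper; the 3-to-4-base codon table and the padding rule are invented purely for illustration):

```python
# Hypothetical sketch: rewrite a coding sequence from 3-base codons to made-up
# 4-base codons using a lookup table. The substance of the argument is only that
# this is a trivial string transformation for a designer, even though evolution
# cannot make such a coordinated whole-genome change in one step.

RECODE = {
    "ATG": "ATGC",  # start / Met (assignment is illustrative only)
    "TGG": "TGGA",  # Trp
    "TAA": "TAAT",  # stop
}

def recode_gene(seq: str, table: dict[str, str]) -> str:
    """Rewrite a coding sequence codon-by-codon using a 3-to-4 base table."""
    assert len(seq) % 3 == 0, "expected whole codons"
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    # Codons missing from the table are padded arbitrarily, just to keep the sketch short.
    return "".join(table.get(c, c + "A") for c in codons)

print(recode_gene("ATGTGGTAA", RECODE))  # -> ATGCTGGATAAT
```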
2bhauth1y
replied here

OK, maybe you want to build some kind of mechanical computer too. Clearly, life doesn't require that for operation, but does that even work? Consider a mechanical computer indicating a position. It has some number, and the high bit corresponds to a large positional difference, which means you need a long lever, and then the force is too weak, so you'd need some mechanical amplifier. So that's a problem.

Drexler absolutely considered thermal noise. Rod logic uses rods at right angles whose positions allow or prevent movement of other rods. That's the amplif... (read more)

Drexler's calculations concern the thermal excitation of vibrations in logic rods, not the thermal excitation of their translational motion. Plugging his own numbers for dissipation into the fluctuation-dissipation relation, a typical thermal displacement of a rod during a cycle is going to be on the order of the 0.7nm error threshold for his proposed design in Nanosystems.

That dissipation is already at the limit (from Akhiezer damping) of what defect-free bulk diamond could theoretically achieve at the proposed frequency of operation even if somehow all thermoelastic damping, friction, and acoustic radiation could be engineered away. An assembly of non-bonded rods sliding against and colliding with one another ought to have something like 3 orders of magnitude worse noise and dissipation from fundamental processes alone, irrespective of clever engineering, as a lower bound. Assemblies like this in general, not just the nanomechanical computer, aren't going to operate with nanometer precision at room temperature.
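For readers who want the relation being invoked spelled out: the classical fluctuation-dissipation theorem ties the drag on a mechanical degree of freedom to a thermal force noise, and for a mode of stiffness k driven well below resonance that force noise translates into displacement noise. A generic statement (not Drexler's or the commenter's specific numbers) is:

```latex
% One-sided force noise power spectral density for drag coefficient \gamma at temperature T:
S_F(f) = 4 k_B T \gamma
% For a mode of stiffness k, well below resonance, integrated over a bandwidth B:
x_{\mathrm{rms}} \approx \frac{\sqrt{4 k_B T \gamma B}}{k}
```

Plugging a dissipation figure (which fixes γ), an operating bandwidth, and a drive stiffness into this expression is what yields the displacement estimate above.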

6anithite1y
edit: This was uncharitable. Sorry about that. This comment suggested not leaving rods to flop around if they were vibrating. The real concern was that positive control of the rods to the needed precision was impossible as described below.
9Muireall1y
I've given it some thought, yes. Nanosystems proposes something like what you describe. During its motion, the rod is supposed to be confined to its trajectory by the drive mechanism, which, in response to deviations from the desired trajectory, rapidly applies forces much stronger than the net force accelerating the rod. But the drive mechanism is also vibrating. That's why I mentioned the fluctuation-dissipation theorem—very informally, it doesn't matter what the drive mechanism looks like. You can calculate the noise forces based on the dissipation associated with the positional degree of freedom. There's a second fundamental problem in positional uncertainty due to backaction from the drive mechanism. Very informally, if you want your confining potential to put your rod inside a range Δx with some response speed (bandwidth), then the fluctuations in the force obey ΔxΔF≥ℏ/2×bandwidth, from standard uncertainty principle arguments. But those fluctuations themselves impart positional noise. Getting the imprecision safely below the error threshold in the presence of thermal noise puts backaction in the range of thermal forces.
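The backaction inequality quoted above can be read as the bandwidth-integrated form of the standard imprecision-backaction product for continuous position monitoring; a sketch of that reading (standard quantum-measurement bookkeeping, not anything specific to Nanosystems):

```latex
% Imprecision-backaction product for the spectral densities of position
% imprecision and backaction force:
S_x^{\mathrm{imp}}(f)\, S_F^{\mathrm{ba}}(f) \ge \frac{\hbar^2}{4}
% Integrating both over a control bandwidth B, with (\Delta x)^2 \approx S_x B
% and (\Delta F)^2 \approx S_F B:
\Delta x\, \Delta F \gtrsim \frac{\hbar}{2}\, B
```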
4anithite1y
Sorry for the previous comment. I misunderstood your original point. My original understanding was that the fluctuation-dissipation relation connects lossy dynamic things (e.g. electrical resistance, viscous drag) to related thermal noise (Johnson–Nyquist noise, Brownian force). So Drexler has some figure for viscous damping (essentially) of a rod inside a guide channel, and this predicts some thermal W/Hz/(meter of rod) spectral noise power density. That was what I thought initially and led to my first comment: if the rods are moving around, then just hold them in position, right? This is true but incomplete. You pointed out that a similar phenomenon exists in *whatever* controls linear position. Springs have associated damping coefficients, so the damping coefficient in the spring-extension DOF has associated thermal noise. In theory this can be zero, but some practical minimum exists, represented by e.g. "defect-free bulk diamond", which gives some minimum practical noise power per unit force. Concretely, take a block of diamond and apply the max allowable compressive force. This is the lowest-dissipation spring that can provide that much force. Real structures will be much worse. Going back to the rod logic system: if I "drive" the rod by covalently bonding one end to the structure, will it actually move 0.7 nm? (C-C bond length is ~0.15 nm; a linear spring model says the bond should break at +0.17 nm extension, given 350 kJ/mol and 40 N/m stiffness.) That *is* a way to control position ... so if you're right, the rod should break the covalent bond. My intuition is thermal energy doesn't usually do that. What are the numbers you're using (bandwidth, stiffness, etc.)? Does your math suggest that in the static case rods will vibrate out of position? Maybe I'm misunderstanding things. (Nanosystems pp. 344 (fig 12.2): Having the text in front of me now, the rods supposedly have "alignment knobs" which limit range of motion. The drive springs don't have to define rod position to wi
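For what it's worth, the spring-model numbers quoted in that parenthetical check out; a quick arithmetic sketch using the comment's approximate 350 kJ/mol and 40 N/m figures:

```python
# Quick check of the linear-spring estimate above: a C-C bond with ~350 kJ/mol
# bond energy and ~40 N/m stiffness breaks (in this crude model) at an
# extension of x = sqrt(2 * E_bond / k).
import math

E_PER_MOL = 350e3      # J/mol, approximate C-C bond energy (from the comment)
AVOGADRO = 6.022e23    # 1/mol
STIFFNESS = 40.0       # N/m, approximate C-C bond stiffness (from the comment)

e_bond = E_PER_MOL / AVOGADRO             # ~5.8e-19 J per bond
x_break = math.sqrt(2 * e_bond / STIFFNESS)
print(f"{x_break * 1e9:.2f} nm")          # ~0.17 nm, matching the comment
```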
2Muireall1y
No worries, my comment didn't give much to go on. I did say "a typical thermal displacement of a rod during a cycle is going to be on the order of the 0.7nm error threshold for his proposed design", which isn't true if the mechanism works as described. It might have been better to frame it as: you're in a bad situation when your thermal kinetic energy is on the order of the kinetic energy of the switching motion. There's no clean win to be had. That's correct, although it increases power requirements and introduces low-frequency resonances to the logic elements. In this design, the bandwidth requirement is set by how quickly a blocked rod will pass if the blocker fluctuates out of the way. If slowing the clock rate 10x includes reducing all forces by a factor of 100 to slow everything down proportionally, then yes, this lets you average away backaction noise like √10 while permitting more thermal motion. If you keep making everything both larger and slower, it will eventually work, yes. Will it be competitive with field-effect transistors? Practically, I doubt it, but it's harder to find in-principle arguments at that level. That noted, in this design, (I think) a blocked rod is tensioned with ~10x the switching drive force, so you'd want the response time of the restoring force to be ~10 ps. If your Δx is the same as the error threshold, then you're admitting error rates of 10^−1. Using (100 GHz, 0.07 nm [Drexler seems to claim 0.02 nm in 12.3.7b]), the quantum-limited force noise spectral density is a few times less than the thermal force noise related to the claimed drag on the 1 GHz cycle. What I'm saying isn't that the numbers in Nanosystems don't keep the rod in place. These noise forces are connected with displacement noise by the stiffness of the mechanism, as you observe. What I'm saying is that these numbers are so close to quantum limits that they can't be right, or even within a couple of orders of magnitude of right. As you say, quantum effects shouldn'
2anithite1y
Yeah, transistor-based designs also look promising. Insulation on the order of 2-3 nm suffices to prevent tunneling leakage, and speeds are faster. Promises of quasi-reversibility, low power and the absurdly low element size made rod logic appealing if feasible. I'll settle for clock speeds a factor of 100 higher even if you can't fit a microcontroller in a microbe. My instinct is to look for low-hanging design optimizations to salvage performance (e.g. drive system changes to make the forces on rods at end of travel and on blocked rods equal, reducing the speed of errors and removing most of that 10x penalty). Maybe enough of those can cut the required scale-up to the point where it's competitive in some areas with transistors. But we won't know any of this for sure unless it's built. If thermal noise is 3 OOM worse than Drexler's figures it's all pointless anyway. I remain skeptical that the system will move significant fractions of a bond length if a rod is held by a potential well formed by inter-atomic repulsion on one of the "alignment knobs" plus a mostly constant drive spring force. Stiffness and max force should be perhaps half that of a C-C bond, and the energy required to move the rod out of position would be 2-3x that required to break a C-C bond, since the spring can keep applying force over the error threshold distance. Alternatively, the system *is* built that aggressively, such that thermal noise is enough to break things in normal operation, which is a big point against.
2Muireall11mo
Just to follow up, I spell out an argument for a lower bound on dissipation that's 2-3 OOM higher in Appendix C here.
3Thomas Kwa1y
I'm not sure how to evaluate this, so I made a Manifold market for it. I'd be excited for you to help me edit the market if you endorse slightly different wording. https://manifold.markets/ThomasKwa/does-thermal-noise-make-drexlerian
3bhauth1y
Yes, you need some kind of switch for any mechanical computer. My point was that you need multiple mechanical "amplifiers" for each single positioner arm, the energy usage of that would be substantial, and if you have a binary mechanical switch controlling a relatively large movement, then the thermal noise will put it in an intermediate state a lot of the time so the arm position will be off.
3anithite1y
That's not how computers work (neither the ones we have today nor the proposed rod logic ones). Each rod or wire represents a single on/off bit. Yes, doing mechanosynthesis is more complicated, and precise sub-nm control of a tooltip may not be competitive with biology for self-replication. But if the AI wants a substrate to think on that can implement lots of FLOPs, then molecular rod logic will work. For that matter, protein-based mechanical or hybrid electromechanical computers are plausible, likely with lower energy consumption per erased bit than neurons and certainly with more density. Human-made computers have nm-sized transistors. There's no reason to think that neurons and synapses are the most efficient sort of biological computer.
7jacob_cannell1y
Bio-neuron based brains are extremely efficient, and close to Pareto-optimal. We are near the end of Moore's law, and the viable open routes for forward progress in energy efficiency are essentially neuromorphic.
2anithite1y
edit: continued partially in the original article. That post makes a fundamental error about wiring energy efficiency by ignoring the 8 OOM difference in electrical conductivity between neuron saltwater and copper (0.5 S/m vs 50 MS/m). There's almost certainly a factor of 100 in energy efficiency gains to be had by switching from saltwater to copper in the brain and reducing capacitance by thinning the wires. I'll be leaving a comment soon but that had to be said. The agreement in energy/bit/(linear distance) points to an underlying principle of "if you've thinned the wires, why haven't you packed everything in tighter", leading to similar capacitance and therefore similar energy values per linear distance. Face-to-face die stacking results suggest that computers could be much more efficient if they weren't limited to 2D packing of logic elements. A second logic layer more than halved power consumption at the same performance, and that's with limited interconnect density between the two logic dies. The Cu<-->saltwater conductivity difference leads to better utilisation of wiring capacitance to reduce thermal noise voltage at transistor gates. Concretely, there are more electrons able to effectively vote on the output voltage. For very short interconnects this matters less, but long-distance or high-fanout nodes have lots of capacitance, and low-resistance wires make the voltage much more stable.
7jacob_cannell1y
Electrical conduction through "neuron saltwater" is not how neuronal interconnect works; it's electrochemical. You are simply mistaken, as copper interconnect wire energy limits and neuron wire energy efficiency limits are essentially the same, and both approach the theoretical Landauer minimum, as explained in the article.
0DaemonicSigil1y
Mandatory footnote for this comment: The Landauer limit puts the energy cost to erase a bit at about 0.02 eV at room temperature. For comparison, the energy in a single photon of visible light is about 1 eV. Already we can see that the brain is not going to get anywhere close to this. 1 eV is a molecular energy scale, not a cellular one. The brain requires about 20 Watts of power. Running this directly through the Landauer limit, we get 10^21 bits erased per second. For comparison, the number of synapses is about 2*10^14 (pulled from jacob_cannell's post linked above) and this gives about 600 kB of data erased per synapse per second. This is not a reasonable number! It's justified in the post by assuming that we're banned from using regular digital logic to implement binary arithmetic and are instead forced into using heaps of "counters" where the size of the heap is the number you're representing, and this comes along with shot noise, of course. The section on "interconnect" similarly assumes that we're forced to dissipate a certain amount of energy per bit transferred per unit length of interconnection. We're banned from using superconducting interconnect, or any other creative solution here. Also, if we could shrink everything, the required length of interconnect would be shorter, but the post just does the calculation for things being normal brain size. I'd further argue that, even if interconnect requirements are, as a matter of engineering practicality, close to the limits of what we can build, we should not confuse that with being "close to the thermodynamic limits". Moving a bit from here to there should have no thermodynamic cost, and if we can't manage it except by dissipating a huge amount of energy, then that's a fact about our engineering skills, not a fact about the amount of computation the brain is doing. In short, if you assume that you have to do things the way the brain does them, then the brain is somewhat close to "thermodynamic limits", but wit
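A rough version of the arithmetic in that comment, as a sketch (the comment rounds the erasure rate down to 10^21 per second; the unrounded figure is several times larger, which only strengthens the "not a reasonable number" point):

```python
# 20 W pushed through the room-temperature Landauer bound, spread over ~2e14 synapses.
import math

K_B = 1.380649e-23    # J/K
T = 300.0             # K, room temperature
BRAIN_POWER = 20.0    # W
SYNAPSES = 2e14       # from the comment

landauer_j = K_B * T * math.log(2)         # ~2.9e-21 J per bit, ~0.018 eV
erasures_per_s = BRAIN_POWER / landauer_j  # ~7e21 bit erasures per second at the limit
kb_per_synapse = erasures_per_s / SYNAPSES / 8 / 1000
print(f"{landauer_j:.1e} J/bit, {erasures_per_s:.1e} erasures/s, "
      f"~{kb_per_synapse:.0f} kB/s per synapse")
```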
5jacob_cannell1y
No it does not - that is one of many common layman misunderstandings, which the article corrects. The practical Landauer limit (for fast reliable erasures) is closer to 1eV. Digital multipliers use similar or more energy for low precision multiply but are far larger, as discussed in the article with numerous links to research literature. (And most upcoming advanced designs for approaching brain energy efficiency use analog multipliers - as in memristor crossbar designs). That is indeed how conventional computing works. You obviously didn't read the post as indeed it discusses this - see the section on size and temperature. As discussed in the post - you absolutely can move bits without dissipating much energy using reversible interconnect (ie optics), but this does not come without enormous fundamental disadvantages in size.
8DaemonicSigil1y
So this is how the 1 eV value is derived, right? Start with a bit that we want to erase. Set things up so there's an energy gap of ΔE between the 0 state and the 1 state. Then couple to the environment, and wait for some length of time, so the probability that the bit has a value of 0 becomes: 1/(1 + e^(−βΔE)). This is the probability of successful erasure, and if we want to get a really high probability, we need to set ΔE = 50kT or something like that. But instead imagine that we're trying to erase 100 bits all at once. Now we set things up so that the 2^100 − 1 bit strings that aren't all zeros have an energy of ΔE and the all-zeros bit string has an energy of 0. Now if we couple to the environment, we get the following probability of successful erasure of all the bits: 1/(1 + (2^100 − 1)e^(−βΔE)). This is approximately equal to: 1/(1 + 2^100 e^(−βΔE)) = 1/(1 + e^(100 log 2 − βΔE)). Now, to make the probability of successful erasure really high, we can pick: ΔE = 50kT + 100(kT log 2). The 100(kT log 2) is there to cancel the 100 log 2 in the exponent. This is just the familiar Landauer limit. And the 50kT is there to make sure that we get the same level of reliability as before. But now that 50kT is amortized over 100 bits, so the extra reliability cost per bit is much less. So if I'm not wrong, the theoretical limit per bit should still be kT log 2.
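A numeric sanity check of that amortization argument, as a sketch using the comment's own setup (joint erasure of N bits with gap ΔE = 50kT + N·kT·log 2):

```python
# With gap dE = 50 kT + N kT ln 2, the failure probability of a joint N-bit
# erasure stays pinned near exp(-50) while the energy cost per bit falls
# toward kT ln 2.
import math

def failure_prob(n_bits: int, dE_over_kT: float) -> float:
    # P(failure) ~= 2^n * exp(-dE/kT); computed in log space to avoid overflow.
    return math.exp(n_bits * math.log(2) - dE_over_kT)

for n in (1, 100, 10_000):
    dE = 50 + n * math.log(2)                    # in units of kT
    print(n, f"{failure_prob(n, dE):.1e}", f"{dE / n:.3f} kT per bit")
    # failure stays ~2e-22; cost per bit drops from ~50.7 kT toward ln 2 ~= 0.693 kT
```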
5jacob_cannell1y
The article has links to the 3 good sources (Landauer, Zhirnov, Frank) for this derivation. I don't have time to analyze your math in detail, but I suspect you are starting with the wrong setup - you need a minimal energy well to represent a bit stably against noise at all, and you pay that price for each bit; otherwise it isn't actually a bit. My prior that you've found an error in the physics lit here is extremely low - this is pretty well established at this point.
3DaemonicSigil1y
I've taken a look at Michael P. Frank's paper and it doesn't seem like I've found an error in the physics lit. Also, I still 100% endorse my comment above: the physics is correct. So your priors check out, but how can both be true? To use the terminology in Frank, it is E_sig you're talking about. My analysis above applies to E_diss. The formula kT log r shows up in section 2 of Frank's paper, before Frank moves on to talking about reversible computing. In section 3, he gives adiabatic switching as an example of a case where E_diss can be made much smaller than E_sig. (Though other mechanisms are also possible.) About midway through section 4, Frank uses the standard kT log 2 value, since he's no longer discussing the restricted case where E_diss = E_sig.
2jacob_cannell1y
Adiabatic computing is a form of partial reversible computing.
2Muireall1y
If you can only erase bits 100 at a time, you don't really have 100 bits, do you? Now your thermal state just equalizes probabilities across those nonzero bit strings.
1anithite1y
That point (compute energy/system surface area) assumes we can't drop clock speed. If cooling were the binding constraint, drop clock speed and now we can reap gains in efficiency from miniaturization. Heat dissipation scales linearly with size for a constant ΔT. Shrink a device by a factor of ten and the driving thermal gradient increases in steepness by ten while the cross-sectional area of the material conducting that heat goes down by 100x. So if thermals are the constraint, then scaling linear dimensions down by 10x requires reducing power by 10x or switching to some exotic cooling solution (which may be limited in the improvement OOMs achievable). But if we assume constant energy per bit per unit of linear distance, reducing wire length by 10x cuts power consumption by 10x. Only if you want to increase clock speed by 10x (since propagation velocity is unchanged and signals travel less distance) does power go back up. In fact, wire thinning to reduce propagation speed gets you a small amount of added power savings. All that assumes the logic will shrink, which is not a given. Added points regarding cooling improvements:
* brain power density of 20 mW/cc is quite low
* ΔT is pretty small (single-digit °C)
* switching to temperature-tolerant materials for higher ΔT gives 1-1.5 OOM
* phase-change cooling gives another 1 OOM
* increasing pump power/coolant volume is the biggie, since even a few MPa is doable without being counterproductive or increasing the power budget much (2-3 OOM)
* even if cooling is hard binding, if interconnect density increases, one can downsize a bit less and devote more volume to cooling
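A toy sketch of the scaling claims in that comment (relative quantities only; it assumes conduction-limited cooling at fixed ΔT and fixed energy per bit per unit wire length, as the comment does):

```python
# All quantities are relative to the unshrunk system; constants cancel.
def relative_figures(shrink: float, clock_scale: float = 1.0) -> dict:
    """shrink = linear-dimension reduction factor (10 means 10x smaller)."""
    cooling_capacity = 1 / shrink   # conduction ~ area * gradient ~ L^2 * (1/L) ~ L
    power = clock_scale / shrink    # bits/s * wire length at fixed energy/bit/length
    return {
        "cooling_capacity": cooling_capacity,
        "power": power,
        "thermal_margin": cooling_capacity / power,  # unchanged unless the clock rises
    }

print(relative_figures(10))                   # shrinking alone is thermally neutral here
print(relative_figures(10, clock_scale=10))   # raising the clock 10x eats the margin
```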
3jacob_cannell1y
The brain is already at minimal viable clock rate. Your comment now seems largely in agreement: reducing wire length 10x cuts interconnect power consumption by 10x but surface area decreases 100x so surface power density increases 10x. That would result in a 3x increase in temp/cooling demands which is completely unviable for a bio brain constrained to room temp and already using active liquid cooling and the entire surface of the skin as a radiator. Digital computers of course can - and do - go much denser/hotter, but that ends up ultimately costing more energy for cooling. So anyway the conclusion of that section was:
1anithite1y
What sets the minimal clock rate? Increasing wire resistance and reducing the number of ion channels and pumps proportionally should just work (ignoring leakage). It is certainly tempting to run at higher clock speeds (serial thinking speed is a nice feature), but if miniaturization can be done and clock speeds must then be limited for thermal reasons, why can't we just do that? That aside, is miniaturization out of the question (i.e. the logic won't shrink)? Is there a lower limit on the number of charge carriers for synapses to work? Synapses are around 1 µm³, which seems big enough to shrink down a bit without weird quantum effects ruining everything. Humans have certainly made smaller transistors, or memristors for that matter. Perhaps some of the learning functionality needs to be stripped, but we do inference on models all the time without any continuous learning and that's still quite useful.
1bhauth1y
Signal propagation is faster in larger axons.
2jacob_cannell1y
Evolutionary arms races: i.e. the need to think quickly to avoid becoming prey, think fast enough to catch prey, etc. The prime overall size constraint seems to be surface/volume ratios and temperature, as we already discussed, but yes, synapses are already pretty minimal for what they do (they are analog multipliers and storage devices). Synapses are equivalent to entire multipliers + storage devices + some extra functions, far more than transistors.
1bhauth1y
you might find this post interesting

I don't think this is the right mind frame, thinking about how something specific appears too hard or even infeasible. A better frame is "say, you are given $100B/year, can hire the best people in the world, and have 10 years to come up with viable self-replicating nanobots, or else we all die, how would you go about it?" 

9bhauth1y
Is that a question? If I'm given an impossible task, I try to find a way around it. The details would depend on the specifics of your hypothetical situation. Or are you saying that the flaw in my argument is that...I didn't have the right emotional state while writing it? I'm not sure I understand your point.
5shminux1y
I guess the latter? But maybe also the former. Trying to solve the problem rather than enumerating all the ways in which it is unsolvable.
[-]bhauth1y2523

That framing is unnatural to me. I see "solving a problem" as being more like solving several mazes simultaneously. Finding or seeing dead ends in a maze is both a type of progress towards solving the maze and a type of progress towards knowing if the maze is solvable.

3JenniferRM1y
I'd like to say up front that I respect you both, but I think shminux is right that bhauth's article (1) doesn't make the point it needs to make to change the belief about whether a set of "mazes" exists whose collective solution gives nano, for many people working on nano, and (2) this is logically connected to the issue of "motivational stuff". A key question is the "amount of work" necessary to make intellectual progress on nano (which is probably inherently cross-disciplinary), and thus it is implicitly connected to motivating the amount of work a human would have to put in. This could be shortened to just talking about "motivation", which is a complex thing to do for many reasons that the reader can surely imagine for themselves. And yet... I shall step into the puddle, and see how deep it might be!🙃
I. Close To Object Level Nano Stuff
For people who are hunting, intellectually, "among the countably infinite number of mazes whose solution and joining with other solved mazes would constitute a win condition by offering nano-enabling capacities", they have already solved many of the problems raised in the OP, as explained in Thomas Kwa's excellent top-level nano-solutions comment. One of Kwa's broad overall points is "nano isn't actually going to be a biological system operating on purely aqueous chemistry", and this helps dodge a huge number of these objections, and has been a recognized part of the plan for "real nanotech" (i.e. molecular manufacturing, rather than the nanoparticle bullshit that sucked up all the nano grant money) since the 1980s. If bhauth wants to write an object-level follow-up post, I think it might be interesting to read an attempt to defend the claim: "Nanotechnology requires aqueous biological methods... which are incapable of meeting the demand". However, I don't think this is something bhauth actually agrees with, so maybe that point is moot?
II. What Kinds Of Psychologizing Might Even Be Helpful And Why??
I really respect your engagemen
7tailcalled1y
The hard part isn't self-replicating nanobots. The hard part is self-replicating nanobots that are efficient enough to outcompete life.

This means that the reactions you can do are limited to what organic compounds can do at relatively low temperatures - and existing life can pretty much do anything useful in that category already.

We find that bacteria sometimes do manage to work at higher temperatures as well. Thermus aquaticus, which gave us Taq polymerase, for example works at higher temperatures than most other bacteria.

Generally, it's very hard for eukaryotes or prokaryotes to evolve the usage of new amino acids. It's unclear what we could do with artificially designed proteins when ... (read more)

None of this argues that creating gray goo is an unlikely outcome, just that it's a hard problem. And we have an existence proof of at least one example of a way to make gray goo that covers a planet, namely life-as-we-know-it, which did exactly that.

But solving hard problems is a thing that happens, and unlike the speed of light, this limit isn't fundamental. It's more like the "proofs" that heavier-than-air flight is impossible which existed in the 1800s, or the current "proofs" that LLMs won't become AGIs - convincing until the counterexample exists, but not at all indicative that no counterexample does or could exist.

7Steven Byrnes1y
OP said: (And I believe they’re using “grey goo” the same way.) So I think you’re using a different definition of “grey goo” from OP, and that under OP’s definition, biological life is not an existence proof. I think the question of “whether grey-goo-as-defined-by-OP is possible” is an interesting question and I’d be curious to know the answer for various reasons, even if it’s not super-central in the context of AI risk.
5Davidmanheim1y
He excludes the only examples we have, which is fine for his purposes, though I'm skeptical it's useful as a definition, especially since "some difference" is an unclear and easily moved bar. However, it doesn't change the way we want to do prediction about whether something different is possible. That is, even if the example is excluded, it is very relevant for the question "is something in the class possible to specify." 

Thanks so much for this post, I've been wishing for something like this for a long time. I kept hearing people grumbling about how EY & Drexler were way too bullish about nanotech, but no one had any actual arguments. Now we have arguments & a comment section. :)

I object to the implication that Eliezer and Drexler have similar positions. Eliezer seems to seriously underestimate how hard nanotech is. Drexler has been pretty cautious about predicting how much research it would require.

Huh, interesting. I am skeptical. Drexler seems to have thought that ordinary human scientists could get to nanotech in his lifetime, if they made a great effort. Unless he's changed his mind about that, that means he agrees with Yudkowsky about nanotech, I think. (As I interpret him, Yudkowsky takes that claim and then adds the additional hypothesis that, in general, superintelligences will be able to do research several OOMs faster than human science, and thus e.g. "thirty years" becomes "a few days." If Drexler disagrees with this, fine, but it's not a disagreement about nanotech, it's a disagreement about superintelligence.)

Can you say more about what you mean?

2PeterMcCluskey1y
I can't point to anything concrete from Drexler, beyond him being much more cautious than Eliezer about predicting the speed of engineering projects. Speaking more for myself than for Drexler, it seems unlikely that AI would speed up nanotech development more than 10x. Engineering new arrangements of matter normally has many steps that don't get sped up by more intelligence. The initial nanotech systems we could realistically build with current technology are likely dependent on unusually pure feedstocks, and still likely to break down frequently. So I expect multiple generations of design before nanotech becomes general-purpose enough to matter. I expect that developing nanotech via human research would require something like $1 billion in thoughtfully spent resources. Significant fractions of that would involve experiments that would be done serially. Sometimes that's because noise makes interactions hard to predict. Sometimes it's due to an experiment needing a product from a prior experiment. Observing whether an experiment worked is slow, because the tools for nanoscale imaging are extremely sensitive to vibration. Headaches like this seem likely to add up.
3jacob_cannell1y
Chip litho (practical top-down nanotech) is already approaching the practical physical limits for non-exotic computers (and practical exotic computers seem harder/farther than cold fusion). Biology is already at the key physical limits (thermodynamic efficiency) for nanoscale robotics. It doesn't matter what materials you use to construct nanobots, they can't have large advantages over bio cells, because bio cells are already near optimal in terms of the primary constraints (which are thermodynamic efficiency for copying and spatially arranging bits).
1Noosphere891y
I basically agree with this take, assuming relatively conventional computers and no gigantic computers like planet-scale ones. And yeah, I think Eliezer's biggest issue on ideas like nanotechnology, and his general approach of assuming most limitations away via future technology, isn't that they can't happen, but that he ignores that getting to that abstracted future state takes a lot more time than he thinks, that time matters more than he thinks (especially in AI safety), and that it generally requires more contentious assumptions than he thinks.

Thanks. So helpful to have this comprehensive argument.

Low commitment here, but I've previously used nanotech as an example (rather than a probable outcome) of a class of somewhat-known unknowns - to portray possible future risks that we can imagine as possible while not being fully conceived. So while grey goo might be unlikely, it seems that the precursor to grey goo - a pretty intelligent system trying to mess us up - is the thing to be focused on, and this is just one of the many possibilities that we can even imagine.

>what if a superintelligence finds something I didn't think of?

 

I'm not a superintelligence, and I know of at least one plausible "green goo" scenario involving rogue microbes.