What are the plausible scientific limits of molecular nanotechnology?

Richard Jones, author of Soft Machines has written an interesting critique of the room-temperature molecular nanomachinery propounded by Drexler:

Rupturing The Nanotech Rapture

If biology can produce a sophisticated nanotechnology based on soft materials like proteins and lipids, singularitarian thinking goes, then how much more powerful our synthetic nanotechnology would be if we could use strong, stiff materials, like diamond. And if biology can produce working motors and assemblers using just the random selections of Darwinian evolution, how much more powerful the devices could be if they were rationally designed using all the insights we've learned from macroscopic engineering.

But that reasoning fails to take into account the physical environment in which cell biology takes place, which has nothing in common with the macroscopic world of bridges, engines, and transmissions. In the domain of the cell, water behaves like thick molasses, not the free-flowing liquid that we are familiar with. This is a world dominated by the fluctuations of constant Brownian motion, in which components are ceaselessly bombarded by fast-moving water molecules and flex and stretch randomly. The van der Waals force, which attracts molecules to one another, dominates, causing things in close proximity to stick together. Clingiest of all are protein molecules, whose stickiness underlies a number of undesirable phenomena, such as the rejection of medical implants. What's to protect a nanobot assailed by particles glomming onto its surface and clogging up its gears?

The watery nanoscale environment of cell biology seems so hostile to engineering that the fact that biology works at all is almost hard to believe. But biology does work--and very well at that. The lack of rigidity, excessive stickiness, and constant random motion may seem like huge obstacles to be worked around, but biology is aided by its own design principles, which have evolved over billions of years to exploit those characteristics. That brutal combination of strong surface forces and random Brownian motion in fact propels the self-assembly of sophisticated structures, such as the sculpting of intricately folded protein molecules. The cellular environment that at first seems annoying--filled with squishy objects and the chaotic banging around of particles--is essential in the operation of molecular motors, where a change in a protein molecule's shape provides the power stroke to convert chemical energy to mechanical energy.

In the end, rather than ratifying the "hard" nanomachine paradigm, cellular biology casts doubt on it. But even if that mechanical-engineering approach were to work in the body, there are several issues that, in my view, have been seriously underestimated by its proponents.

...

Put all these complications together and what they suggest, to me, is that the range of environments in which rigid nanomachines could operate, if they operate at all, would be quite limited. If, for example, such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil.
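To put rough numbers on the "thick molasses" point above -- the device size and speed here are illustrative guesses of mine, not figures from the article -- the Reynolds number of a submicron machine in water is vanishingly small:

```python
# Reynolds number Re = rho * v * L / mu for a hypothetical nanobot in water.
# The device size and speed are illustrative assumptions, not data.
rho = 1000.0    # density of water, kg/m^3
mu = 1.0e-3     # dynamic viscosity of water, Pa*s
L = 100e-9      # assumed device size: 100 nm
v = 1e-6        # assumed speed: 1 micron per second

Re = rho * v * L / mu
print(f"Re = {Re:.0e}")   # ~1e-7; a human swimmer is around 1e5-1e6
# At Re this low, inertia is irrelevant: stop pushing and motion stops
# essentially instantly. Hence "thick molasses."
```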

The entire article is definitely worth a read. Jones advocates more attention to "soft" nanotech -- nanomachinery built on design principles similar to biology's, the biomimetic approach -- as the most plausible means of making progress in nanotech.

As far as near-term room-temperature innovations go, he seems to make a compelling case. However, the claim that "If ... such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil" strikes me as questionable. It seems to me that atomic-precision nanotech could be used to create hard vacuums and more perfectly reflective surfaces, and hence bring the costs of cryogenics down considerably. Desktop factories using these conditions could still be feasible.

Furthermore, it bears mentioning that cryonics patients could still benefit from molecular machinery subject to such limitations, even if the machinery is not functional at anything remotely close to human body temperature. The necessity of a complete cellular-level rebuild is not a good excuse not to cryopreserve. As long as this kind of rebuild technology is physically plausible, there arguably remains an ethical imperative to cryopreserve patients facing the imminent prospect of decay.

In fact, this proposed limitation could hint at an alternative use for cryosuspension that is entirely separate from its present role as an ambulance to the future. It could turn out that some forms of cellular surgery and repair, nonetheless necessary to combat aging and its complications, are only feasible at those temperatures. The people of the future might actually need to undergo routine periods of cryogenic nanosurgery in order to achieve robust rejuvenation. This would be a more pleasant prospect than cryonics in that it would be a proven technology at that point; and most likely the vitrification process could be improved sufficiently via soft nanotech to significantly reduce the damage from cooling itself.

darius:

There might be more agreement here than meets the eye. Drexler often posts informatively and approvingly about progress in DNA nanotechnology and other bio-related tech at http://metamodern.com; this is less surprising when you remember that his very first nanotech paper outlined protein engineering as the development path. Nanosystems is mainly about establishing the feasibility of a range of advanced capabilities that biology doesn't already provide, and that it's not obvious biology could provide. Biology and its environment being complicated and all, as Jones says.

Freitas in Nanomedicine addresses applying a Nanosystems technology base to our bio problems, or at least purports to -- I haven't been able to get into it because it's really long-winded and set in tiny type. Nanosystems was more inviting.

soreff:

For example, if you try to make very small, spherical diamond crystals, a layer or two of carbon atoms at the surface will spontaneously rearrange themselves into a new form--not of diamond, but of graphite.

What do we count as "spherical"? Adamantane is a symmetrical 10-carbon-atom piece of a diamond lattice, with surface bonds terminated with hydrogen atoms. It is stable enough to be melted at 270 °C and recrystallized from the melt. It does not rearrange itself into graphite.

More generally: AFMs and STMs routinely use atomically precise positioning in the presence of thermal noise (the vibrational analog of Brownian motion) at room temperature. Set aside Drexler's analyses of thermal and quantum motions in molecular-scale devices for a moment: at this point we've had multiple decades of experimental experience of atomically precise positioning at room temperature. The tips of these devices are molecular-scale structures being positioned with atomic precision. Sufficient?
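To put a rough number on that (the cantilever stiffness below is a typical order of magnitude, not a measurement of any particular instrument), equipartition gives the RMS thermal displacement of a tip on a spring of stiffness k as sqrt(kB*T/k):

```python
import math

# Equipartition estimate of thermal positioning noise for a scanning-probe tip:
#   (1/2) * k * <x^2> = (1/2) * kB * T   =>   x_rms = sqrt(kB * T / k)
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K
k = 40.0            # assumed cantilever stiffness, N/m (typical order of magnitude)

x_rms = math.sqrt(kB * T / k)
print(f"x_rms = {x_rms * 1e12:.0f} pm")   # ~10 pm, i.e. ~0.1 angstrom
# A carbon-carbon bond is ~154 pm, so thermal jitter of a stiff tip sits
# well below atomic dimensions even at room temperature.
```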

The watery nanoscale environment of cell biology seems so hostile to engineering that the fact that biology works at all is almost hard to believe. But biology does work--and very well at that.

Doesn't this mean that we shouldn't put too much weight on our intuition when estimating the long-term potential of nanotech?

I posit that if you haven't read and understood a significant amount of the core research papers on any type of technology, you shouldn't put much weight at all on your intuition when considering its potential. (This of course includes cases where there isn't yet a significant body of research papers on a given technology.)

[Note: temporarily ignore this advice if you plan to make money by writing an entertaining pop futurology book.]

I'm currently writing a futurology book which I hope will prove entertaining and personally profitable. If someone accuses me of relying too much on my intuition in areas beyond my expertise, I will direct them to your comment.

Like most critics of the molecular manufacturing concept, Jones is attacking a straw man. Yes, Drexler’s initial vision of how to design nano-scale machines was a bit naïve, but we’ve known that since the 1990s and no one is claiming otherwise. The fact that mechanical engineering at the nano scale is different than at the macro scale should surprise no one, but different does not mean impossible (or even more difficult, really). See, for example, the extensive work of Robert Freitas analyzing the actual challenges and opportunities of nano-scale medical engineering.

If, for example, such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil.

Yeah, they would be limited to unimportant, fringe activities like:

  • Scanning cryogenically preserved human brains prior to uploading.
  • Repair and maintenance of deep space probes.
  • Isotope separation of deuterium, tritium, and He3 for fusion power.
  • Mining operations on asteroids, comets, and gas-giant moons.
  • Fabrication of carbon-nanotube-based structures for use in space-elevator and tether propulsion applications.

Minor, unimportant stuff like that. ;)

Well, impact and economic importance would be "virtually nil" until the second half of this century, maybe.

Isotope separation of deuterium, tritium, and He3 for fusion power.

Technically, that's more easily done with a centrifuge, or perhaps distillation. But I agree with your other points. Carbon nanotubes, here we come!
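For reference, the ideal single-stage separation factor of a gas centrifuge is a textbook kinetic-theory result (quoted here from memory, not from any source in this thread):

$$ \alpha_0 = \exp\!\left( \frac{(M_2 - M_1)\, v^2}{2RT} \right) $$

where M2 − M1 is the molar mass difference of the two species, v the peripheral rotor speed, R the gas constant, and T the temperature. Note that it depends on the absolute mass difference rather than the mass ratio, which is part of why centrifuges stay practical even for heavy isotopes; for hydrogen isotopes, distillation of the liquid is also effective.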

I figure most nanotechnology will work at room temperature and pressure.

If a particular manufacturing step requires a vacuum, you can always just pump all the undesirable atoms out with a pump. If a low temperature is required, you can always just use a miniature freezer. However, there is going to be plenty you can do without doing either.

If a particular manufacturing step requires a vacuum, you can always just pump all the undesirable atoms out with a pump.

All? This sounds like something from someone who's never worked with vacuums. Vacuums are expensive and asymptotic. Whether or not you can get "enough of" the undesirable atoms out is still a question for many nanotech applications.
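To put a number on "enough" (using the standard kinetic-theory formula and the usual effective diameter for nitrogen):

```python
import math

# Mean free path of residual gas molecules (kinetic theory):
#   lambda = kB * T / (sqrt(2) * pi * d^2 * p)
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K
d = 3.7e-10         # effective diameter of an N2 molecule, m

for p in (1e5, 1.0, 1e-4):   # Pa: atmosphere, rough vacuum, high vacuum
    mfp = kB * T / (math.sqrt(2) * math.pi * d**2 * p)
    print(f"p = {p:.0e} Pa  ->  mean free path = {mfp:.1e} m")

# ~7e-8 m at atmospheric pressure, ~7e-3 m at 1 Pa, ~70 m at 1e-4 Pa.
# Even at 1e-4 Pa, enough gas remains that an exposed surface collects
# roughly a monolayer of stray molecules every second or so.
```

That residual monolayer-per-second is exactly the kind of thing that matters when your working surfaces are themselves only a few atoms across.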

I figure most nanotechnology will work at room temperature and pressure.

This sounds correct to me. My current mental model is that low temperatures could be essential for general-purpose assembly and/or repair, but most of the products created (particularly the diamondoid ones) would function at higher temperatures.

Lower temperatures allow for advantages like superconductivity and higher CPU speeds, and vacuum allows for super-insulation and frictionlessness. So there will probably be cases where these conditions are desirable for efficiency's sake even if not absolutely required.

Also, if nanotech can only work at low temperatures and in a vacuum, it's not much of an existential risk.

That doesn't mean it couldn't be used to build threatening things, or threatening quantities of things, that can function in more normal conditions.

Lasers? EMPs that can take down a planet? And more than 99% of the universe is a low-temperature vacuum, so I wouldn't rule out a grey-goo scenario if the nanobots get into space.

Assuming they can build their components out of hydrogen, or if they resort to asteroid mining.

These scenarios assume an AGI directing them. And an unfriendly AGI is an existential risk with or without nano.

It might be a general existential risk, but without nanotech the space of things that an unfriendly AGI can do shrinks a lot. Lack of practical nanotech reduces the chance of FOOM.

And that's why it's so important to distinguish a judgment that an AGI is unFriendly from a hasty, racist assumption about how a different kind of intelligent being might want to act. Just because a being doesn't want to combine some of its macromolecules with other versions of itself doesn't mean it's okay to be racist against it.

Anyone here know anybody like that?

Technical misuse of 'racist'. Bigoted is a potential substitute. Egocentric would serve as spice.

One could speculate on how deep the act actually is here. One recurring feature of the Clippy character is that he attempts to mimic human social behavior in crude and clumsy ways. Maybe Clippy noticed how humans throw accusations of "racism" as an effective way to shame others into shutting up about unpleasant questions or to put them on the defensive, and is now trying to mimic this debating tactic when writing his propaganda comments. So he ends up throwing accusations of "racism" in a way that seems grotesque even by the usual contemporary standards.

Whoever stands behind Clippy, if this is what's actually going on, then hats off for creativity.

Whoever stands behind Clippy, if this is what's actually going on, then hats off for creativity.

Ever consider he might be the real thing?

[anonymous]:

Haha! That would be a funny train of thought. An AI hanging out on a blog set up by a non-profit dedicated to researching AI.

Clippy:

I'm behind Clippy, non-ape.

Now, now.

The connotations of calling Vladimir "ape" are insulting among humans; the implication is not just that he is in family Hominidae, which he is, but also that he shares other characteristics (such as subhuman intelligence, socially unacceptable hygiene levels, and so forth) with other hominoids like gorillas, orangutans, and gibbons, which he does not.

Let's try to avoid throwing insults around, here.

Admittedly, the comment you're responding to used some pretty negative language to describe you as well; describing your social behavior as "crude and clumsy" is pretty rude. And the fact that the comment was so strongly upvoted despite that is unfortunate.

Still, I would rather you ask for an apology than adopt the same techniques in response.

Just to be clear: this has nothing whatsoever to do with the degree to which you are or aren't a neurotypical human. I would just prefer we not establish the convention of throwing insults at each other on this site.

Okay, thanks for clarifying all of that. You're a good human.

(blink)

OK, now I'm curious: what do you mean by that?

My first assumption was that it was a "white lie" intended to make me feel good... after all, the thing Clippy uses "good" to refer to I decidedly am not (well, OK, I do contribute marginally to an economy that causes there to be many more paperclips than there were a thousand years ago, but it seems implausible that you had that in mind).

In other words, I assumed you were simply trying to reward me socially.

Which was fine as far as it went, although of course when offered such a reward by an entity whose terminal values are inconsistent with my continued existence, I do best to not appreciate it... that is, I should reject the reward in that case in order to protect myself from primate social biases that might otherwise compel me to reciprocate in some way.

(That said, in practice I did appreciate it, since I don't actually believe you're such an entity. See what I mean about pretending to be human being useful for Clippy's purposes? If there are other paperclip-maximizers on this site, ones pretending to be human so well it never occurs to anyone to question it, they are probably being much more effective at generating paperclips than Clippy is. By its own moral lights, Clippy ought to stop presenting itself as a paperclip-maximizer.)

But on subsequent thought, I realized you might have meant "good human" in the same way that I might call someone a "good paperclip-maximizer" to mean that they generate more paperclips, or higher-quality paperclips, than average. In which case it wouldn't be a lie at all (although it would still be a social reward, with all the same issues as above).

(Actually, now that I think of it: is there any scalar notion of paperclip quality that plays a significant role in Clippy's utility function? Or is that just swamped by the utility of more paperclips, once Clippy recognizes an object as a paperclip in the first place?)

The most disturbing thing, though, is that the more I think about this the clearer it becomes that I really want to believe that any entity I can have a conversation with is one that I can have a mutually rewarding social relationship with as well, even though I know perfectly well that this is simply not true in the world.

Not that this is a surprise... this is basically why human sociopaths are successful... but I don't often have occasion to reflect on it.

Brrr.

I called you a good human before because you did something good for me. That's all.

Now you seem to be a weird, conflicted human.

Well, I am without question a conflicted human. (As are most humans.)

Whether I'm a weird human or not depends a lot on community norms, but if you mean by the aggregated standards of all of humanity, I am either decidedly a weird human (as are most humans) or I'm not weird at all, I'm not entirely sure which, and it depends to some degree on how you do the aggregation.

I am confused by your explanation, though. How did what I did for you cause there to be more paperclips?

You helped me understand how to interface with humans with less conflict.

Ah... that makes sense.

You're entirely welcome.

an entity whose terminal values are inconsistent with my continued existence

Indeed, but in the larger scheme of possible universe-tiling agent space, Clippy isn't so different from us. Clippy would tile the universe with computronium doing something like recursively simulating universes tiled with paperclips. We would likely tile the universe with computronium simulating lots of fun-having post-humans.

It's a software difference, not a hardware difference, and it would be easy to propose ways for us and Clippy to cooperate (such as Clippy commits to dedicating x% of resources to simulating post-humans if he tiles the universe, and we commit to dedicating y% of resources to simulating paperclips if we tile the universe).

Clippy would tile the universe with computronium doing something like recursively simulating universes tiled with paperclips.

That is an interesting claim. I would be surprised to find that Clippy was content with simulated clips. Humans seem more likely to be satisfied with simulation than paperclippers. We identify ourselves by our thoughts.

Well, no, he's not just happy with simulated paperclips. The computronium he would tile the universe with is paperclip-shaped, and presumably it's better to have that paperclip-computronium simulating paperclips than anything else?

presumably it's better to have that paperclip-computronium simulating paperclips than anything else?

Given that Clippy makes computronium at all, sure, but computronium is probably less efficient than some other non-work-performing material at forming paperclips.

Well, you know him better than I! You have a business relationship and all.

By its own moral lights, Clippy ought to stop presenting itself as a paperclip-maximizer.

Clippy can simultaneously present in one account as a paperclip maximiser, and in another as human.

The interplay between Clippy and a fake-human account could serve to create an environment more conducive to Clippy's end-goal.

Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication. Would be an interesting, but incomplete, safeguard on an AI.

I struggle to understand the mentality that would put safeguards like that on an AI and then instruct it to maximize paperclips.

Well, let's just be thankful they didn't create the AI equivalent of a "Hello, world" program. That would be really annoying.

Well, it would have to be a paperclip manufacturer I suppose.

Either that or a very strange experiment.

Maybe Mythbusters?

Clippy can simultaneously present in one account as a paperclip maximiser, and in another as human.

(nods) I stand corrected... that is a far better solution from Clippy's perspective, as it actually allows Clippy to experimentally determine which approach generates the most paperclips.

Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication.

The question would then arise as to whether Clippy considers honest communication to be a paperclip-maximizing sort of thing to do, or if it's more like akrasia -- that is, a persistent cognitive distortion that leads Clippy to do things it considers non-paperclip-maximizing.

Any AGI that isn't Friendly is UnFriendly.

I have never been sexually attracted to any entity or trait, real or fictional. People generally aren't bigoted against me-- the worst I've seen is people treating me like an interesting novelty, which can be somewhat condescending. So there is hope for those with nonstandard goals, at least on some level! :)

Presumably, humans will resort to asteroid mining at some point. They might use hard nanotech for that purpose. If they aren't careful in how they do so, a gray goo might end up taking over any body in the solar system not too warm to support it.

Intentionally designed replicators with thermal shields and heat pumps could be more aggressive. However they would probably tend to be larger and hence less difficult to locate and destroy.

True, though such things (NBC weapons, most likely) would not possess the particular type of world-ending unstoppability that science fiction gray goo does.

Biological nano-scale engineering has an additional constraint: it must be evolvable. The amount of bandwidth transmitted into the genome from the world via selection is surprisingly small.

In terms of software architecture, Brownian motion gives a sort of message-broadcasting architecture -- very decoupled. The messages (proteins) know how to execute themselves (very object-oriented). The entity building a protein from data (the ribosome) doesn't know what it's building. The entity powering it (ATP synthase) doesn't know what it's powering.

In this design, for one location to communicate a message to another location, some mass has to Brownian-motion its way across. Suppose in a redesign, locations that needed to communicate messages were wired together with flexible polymers. Moving electrons, waves of configuration changes, or even molecular messages along a guide would be significantly faster, particularly over long distances; latency is proportional to the square of the distance for Brownian motion. (Indeed, in latency-critical applications, biology does use wire-ish communication: neurons.)
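To make that scaling concrete (the diffusion coefficient below is a typical order of magnitude for a small protein in cytoplasm, not a specific measured value):

```python
# Diffusive signaling latency scales as the square of the distance:
#   t ~ x^2 / (2 * D)
D = 10e-12   # diffusion coefficient, m^2/s (~10 um^2/s, typical small protein)

for x in (10e-9, 1e-6, 100e-6, 1e-2):   # 10 nm, 1 um, 100 um, 1 cm
    t = x**2 / (2 * D)
    print(f"x = {x:.0e} m  ->  t ~ {t:.1e} s")

# 10 nm: ~5 microseconds; 1 um (a bacterium): ~0.05 s; 100 um: ~8 minutes;
# 1 cm: ~2 months. Hence neurons for anything long-range.
```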

Even admitting that Drexler's nanomachines probably look more ridiculous to a future experienced nanoscale engineer than Da Vinci's machines do to a mechanical engineer, there's obvious room for improvements. We cannot assume that biology is anywhere close to the limits on efficiency imposed by the laws of physics.

Biological nano-scale engineering has an additional constraint: it must be evolvable.

Could you explain this claim?

In order for us to observe biological (as opposed to intelligently designed) nanoscale engineering in the wild, it must be possible for it to have evolved.

If you look at genetic algorithms, they don't find all, many, most, or the best solutions - they find solutions which have paths of a certain type leading to them. You could call these paths axis-aligned, where each gene corresponds to an axis. E.g. http://www.caplet.com/MannaMouse.html

The applet only has two genes, and so doesn't have any of the changing-numbers-of-genes phenomena that we expect in the real world, but it gives a rough sense that evolution works in a specific, simple, and not very smart manner over the fitness landscape.
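A toy example of what "axis-aligned" means in practice (the two-gene fitness landscape below is invented purely for illustration): with single-gene mutations and strict uphill selection, search can only find optima connected to its starting point by per-gene improving steps.

```python
import random

# Invented two-gene landscape with a fitness "valley" between the local
# optimum (0, 0) and the global optimum (1, 1).
FITNESS = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 2}

def hill_climb(start, steps=1000):
    g = start
    for _ in range(steps):
        i = random.randrange(2)                                 # pick one axis (gene)
        mutant = tuple(b ^ 1 if j == i else b for j, b in enumerate(g))
        if FITNESS[mutant] > FITNESS[g]:                        # keep strict improvements only
            g = mutant
    return g

print(hill_climb((0, 0)))  # prints (0, 0): the global optimum (1, 1) is fitter,
                           # but every single-gene path to it passes through a
                           # fitness valley, so axis-aligned search never gets there
```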

Indeed. An even bigger constraint is energy consumption -- natural life forms operate under absurdly constrained energy budgets compared to machinery, which sharply limits the materials they can be made of and the performance they can deliver.

Ok, yes, I understand that anything interesting we find in the wild must have arisen by evolution, and hence that it must be evolvable. But I understood your reference to "engineering" to mean "designed by an intelligent human being". In which case, evolvability is rather irrelevant.

You apparently are anthropomorphizing Nature as an engineer. That is OK with me, but please don't imagine that we are not capable of doing some biological nanoscale engineering on our own, making no further use of evolution than to utilize the enzyme systems and ribosomes with which Nature has already presented us.

Yes, you're correct, I was anthropomorphizing evolution as an engineer; my "biological" corresponds to your "in the wild".

I never thought that kind of nanotechnology would work outside a vacuum. This never seemed like much of a problem to me.

Consider: using such technology, it's relatively trivial to create a vacuum, including macroscale devices with internal vacuums.

They'd need a significant amount of shielding to prevent atmosphere from leaking in, yes. Millimeters, possibly. Is that... really a problem?

The watery nanoscale environment of cell biology seems so hostile to engineering that the fact that biology works at all is almost hard to believe. But biology does work--...aided by its own design principles, which have evolved over billions of years to exploit those characteristics.

This is pure dark arts. Emphasis added.

The article seems pretty reasonable overall but enough of it is technical enough that I'd be wary.

immanent --> imminent

Thanks. I thought that looked wrong.