In 1986, Drexler predicted (in Engines of Creation) that we'd have molecular assemblers in 30 years. They would roughly act as fast, atomically precise 3-d printers. That was the standard meaning of nanotech for the next decade, until more mainstream authorities co-opted the term.

What went wrong with that forecast?

In my review of Where Is My Flying Car? I wrote:

Josh describes the mainstream reaction to nanotech fairly well, but that's not the whole story.

Why didn't the military fund nanotech? Nanotech would likely exist today if we had credible fears of Al Qaeda researching it in 2001.

I recently changed my mind about that last sentence, partly because of what I recently read about the Manhattan Project, and partly due to the world's response to COVID.

Drexler's vision, in Engines of Creation, was based on the assumption that at least one government was able to do projects such as Manhattan and Apollo.

I've now decided that there was something quite unusual about the US ability to put the best and the brightest in charge of that many resources. The US had it from something like 1940 through 1969.

At some point in the 1970s, people stopped worrying that the Soviet Union had such an ability (I'm guessing the Soviets never had that ability, although they occasionally came close). Without an enemy that people believed was capable of conquering us, politics as usual crept back into all large projects.

I'm suggesting that Drexlerian nanotech needs the kind of competence that the Manhattan Project demonstrated. That likely overestimates the difficulty of nanotech today, unless we impose a requirement that it be done on a 3-year schedule. I'm unsure whether that's enough of an overestimate to alter the analysis.

Nanotech requires a significant amount of basic research on tools, and on finding an affordable path that's feasible given the tools we can produce.

That research could in principle be rewarded by academic fame. So far, academia seems uninterested in the kind of vision that would produce such fame. Given the military risks of nanotech, that disinterest might be an appropriate default. And even if academia were promoting nanotech research, I doubt it would be sufficiently results-oriented to fulfill Drexler's forecast.

That research is unlikely to be financed by VCs or big companies. They're deterred by a combination of long time horizons (maybe 10-20 years on a typical budget?) and the difficulty of capturing profits from the research. The first general-purpose assemblers will be used in part to produce better assemblers (easier to program, less likely to break down, less demanding of pure feedstocks, etc.). I expect it will resemble how early work on operating systems didn't give IBM, Bell Labs, and DEC much of an advantage over Microsoft and Apple.

Al Qaeda could in principle have been the kind of enemy that caused the US to become more united. It takes a good deal of imagination to generate a scenario under which Al Qaeda could plausibly have conquered the US. But I'll assume such a scenario for the sake of my argument.

In 2001, the US was further than it had been in 1940 from being the kind of nation that could support a Manhattan Project.

The 1930s had some key political divisions between traditional libertarianism, technocratic nationalism, and technocratic Marxism. But I get the impression that those divisions, more than today's, were focused on disputes about which strategies would best achieve commonly agreed-on goals.

Christian culture in the 1930s provided a unifying force that sometimes enabled the US to transcend those political divisions.

What happened to that unifying force?

My main guess: Science grabbed a bit more prestige than it had earned. Science used that prestige to weaken support for religion. It seems theoretically possible to weaken belief in the supernatural without weakening the culture associated with belief in the supernatural. But few influential people had enough foresight to aim for that.

What we got instead was reduced support for Christianity, without reducing the basic desires that led people to become religious. So Christianity was partly replaced with new religions (such as Green fundamentalism) that were optimized more for looking scientific than for improving society.

Christianity produced a high-trust society in part by ensuring a good deal of overlap between the goals of most people in that society. So weakening Christianity reduced trust, for reasons that were mostly independent of what replaced it.

I'm unsure how much that explains the changes since the 1940s. There's less expectation of progress today. That might be a separate contributor.

The US still has some heroes. A number of competent people quickly prepared to handle COVID-19. Those heroes were unambiguously defeated, by forces other than the virus.

That's rather different from the project that eradicated smallpox. Those heroes were able to route around bureaucracy and break rules when that was needed.

What projects might we have seen if the 2010s were like the 1940s? If climate change were as much of an emergency as WWII was, I'd guess we'd see a major effort for fusion power. We might also have AGI, a cure for cancer, the eradication of more infectious diseases, etc.

I don't know whether it's good or bad that nanotech has been delayed. Nanotech offers plenty of improvements to normal life, but also some risk of destabilizing military changes.

The inability to accomplish such major projects has happened for lousy reasons, but won't necessarily cause much harm. Maybe nearly all of the competent people have gone to startups. I think I've seen a somewhat steady increase in competent companies that are 1% as ambitious as the Manhattan Project. Maybe we're going to get fusion soon. We're probably a bit safer for not having an arms race attitude toward AGI. But I wish medicine had something a bit closer to a Manhattan Project.

28 comments

It's my opinion that Drexler simply underestimated the basic scientific problems that still needed to be solved. The discrete nature of atoms and the limited range of geometries that can be utilised for structures at the nanoscale alone make complex molecular machine design extraordinarily challenging. Drug design is challenging enough, and all we usually need to do there is create the right-shaped static block to fit the working part of the target protein and stop it functioning (OK, I over-simplify, but a drug is a very long way from a molecular machine). Additionally, the simulation tools needed to design molecular machines are only now becoming accurate enough, largely because it is only now that we have cheap enough and fast enough compute power to run them in reasonable real time.

It will happen, in time, but there is still a large amount of basic science to be done first IMO. My best guess is that self-assembling biomimetic molecular machines, based on polypeptides, will be the first off the blocks. New tools such as AlphaFold will play an important role in their design.

This took a weird turn in an article that I thought would explore the basic scientific challenges of nanotech and why we don't have it yet. I do think that the erosion of religion has some negative externalities in modern society (i.e. lack of easy meaning and direction in life, and the decline of local community kinship), but no, I don't think that is the main reason why we don't have nanotech specifically. I don't even think it's the primary reason we are more polarized politically now (my current thoughts lean towards changes in information consumption, communication, and trust in institutions).

But specifically nano-printers? Of course people want that, as much as people want quantum computing, fusion energy, brain-computer interfaces, and life extension. The benefits are obvious, from the money-making opportunities alone. Maybe reality is just disappointing: it's a harder problem than people originally expected, without any economically viable intermediate steps (the kind that bolster AI research now), so progress is stuck in a quagmire.

Nope. Fusion is the tech you want if you want to make arguments like this. Nanotech has actual serious problems, and is going to have to look a lot more like life (using diffusion for transport, using reactions driven by entropy and small energy differences rather than large energy differences, probably some other compromises like modularity), because the world is simply too dirty and noisy; machinery driven by large energy differences will find something unintended to react with, and break.

"because the world is simply too dirty"

What stops the nanoassembly from taking place in a box that keeps the dirt out? Perhaps a whole manufacturing environment that doesn't contain a single extraneous atom, taking place at liquid nitrogen temperatures to minimize thermal noise (if need be). The solution evolution found is not the only or best one.

I did my PhD doing STM. So on the one hand, yes, you're right. I know first-hand how to prepare an environment where gas molecules are hitting any given point on your sample less than once per week. But if you're trying to work with stuff that's highly, highly reactive (particularly to hydrogen gas), it will get dirty anyway, and after 1% of a week it will be ~1% covered in dirt.

I also know how sticky, erratic, and overall annoying to work with atoms are. Low temperatures will usually not save you, because there will be multiple metastable configurations of most non-crystalline blobs of atoms, and anything interesting is locally energetic enough to excite some of those transitions. If you try to grab atoms from feedstock along pre-planned trajectories, it will fail, because you will non-deterministically pick up clusters of atoms of different sizes. If you try to replace a bulk feedstock with a molecular ratchet to dispense single atoms, it will fail, because the ratchet will run backwards sometimes and not dispense an atom when you want it. And then your atom-grabber will stick itself to something important once a day, because why not.

I feel like this is a mechanical engineer complaining about friction and bearings seizing up. Those are real phenomena, but we have ways of keeping them in control, and they make engineering a bit harder, not impossible.

If you want a nano-mechanical ratchet that will never thermally reverse, the obvious trick is to drive it with lots of energy. The probability of reversal goes down exponentially with energy, so expend 20x the typical thermal energy, and the chance of Brownian motion knocking it backwards is basically 0.
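A rough back-of-the-envelope for that claim, assuming the reversal probability follows a simple Boltzmann factor (this ignores attempt rates and the real energy landscape):

```python
# Rough Boltzmann estimate of how often a driven ratchet step reverses
# thermally. Assumes reversal probability ~ exp(-E_drive / kT) and ignores
# attempt frequency and the real energy landscape.
import math

drive_in_kT = 20  # forward step driven with ~20x the typical thermal energy
p_reverse = math.exp(-drive_in_kT)
print(f"per-step reversal probability ~ {p_reverse:.1e}")  # ~ 2.1e-09
```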

There are more subtle things you can do. For example, there is the unreliable train model: it goes forwards more often than it goes backwards, and the tracks only lead one place, so eventually it gets to its destination. If the whole nanomachine sometimes runs backwards in sync, that's not a problem; it's only parts running backwards out of sync that's a problem. I suspect there is something really fancy involving uncomputing compression algorithms on thermal noise.

These are engineering problems, with engineering solutions.  Not fundamental physical rules prohibiting nanotech.

Well, sometimes when mechanical engineers warn you about friction (or about your building materials not being suitable for the structure you've dreamed up), they're right. I think the whole "print single atoms into structures of more than 200 atoms" paradigm is a dead end for reasons that can basically be described as "warnings about atoms as a building material."

An analogy would be a robot that can pick up a deck of cards and shuffle them, without sensors. And the robot is built out of Jell-O.

What is noticeably different between these two worlds: the world where that kind of nanotech is just a fairly tricky engineering problem, and the world where nanotech is impossible? In both worlds, you struggled to get a handful of atoms to behave; it's just that in one world you are battling the limitations of crude current tools, and in the other world, what you are battling is a mixture of current technical limits and fundamental limits.

"Iron is an inherently lumpy material. When a bloom comes out a bloomery its made of loads of lumps all stuck together.  And each batch is a bit different. You can try to hammer it into shape, but the more you hammer it, the more grit comes off your hammer and the more the iron forms an ore-like layer on its surface. The only thing that can hammer iron is rocks, and most rocks can't be made into a sharp point. Flint can, but if you hit it too hard it will flake off. These gear things are impossible, you can never make them precise enough to actually fit together. Do you have any idea how hard it is to make iron into any usable tool at all. These aren't just engineering details, these are fundamental limitations of the building material."

Sure. But us humans are never guaranteed an "aha" moment that lets us distinguish the two. If you have no physics-level guarantee that your technology idea will be useful, and no physics-level argument for why it won't, then you may for a long time occupy the epistemic state of "I'm sure that almost this exact idea is good, it just has all these inconvenient engineering-type problems that make our current designs fail to work. But surely we'll figure out how to bypass all these mere engineering problems without reevaluating the basic design."

In this case, we face a situation of uncertainty. Two biases here dominate our thinking on tech:

  1. Optimism bias. We are unduly optimistic about when certain things will happen, especially in the short term. Pessimists are usually right that technologies take longer to arrive than you think. At this point, the evidence is telling us that nanotechnology is not a simple trick and will not happen easily, but that doesn't mean it's outright impossible. The most we can say is that it's difficult to make progress.

  2. Conjunction fallacy. People imagine routes to technology as conjunctive, when they are usually disjunctive. This is where pessimists about possibilities are usually wrong. In order to prove Drexler wrong, you'd have to show why every path to nanotechnology has fundamental problems, and you didn't do this. At best you've shown STM has massive problems. (And maybe not even that.)

So my prior is that nanotechnology is possible, but it will take much longer than people think.

[anonymous]

Charlie, you obviously have expert-level knowledge on this.

Are you saying you ultimately concluded that:

(1) nanoassembly machinery won't work, in the problem space of "low temperatures, clean vacuum, feedstock bonding to target".  Obviously nanoassembly machinery works fine at liquid water temperatures, in solvent, for specific molecules.

(2) it would be too difficult for you or a reasonable collection of research labs working together to solve the problems or make any meaningful progress. That a single piece of actually working nano-machinery would be so complex and fragile to build with STMs that you basically couldn't do it; it would be like your robot-with-jello-hands example.

I will note that you could shuffle cards blind most of the time if you're allowed to harden up the robot and get a really accurate model of how jello physics work.

I have expert-level knowledge on something, but probably not this precise thing. As for 1 or 2, it's a bit complicated.

Let me start by defending 1. The paradigm of "atom by atom 3D printing of covalently-bonded materials" just has too many problems to actually work in the way Drexler envisioned, AFAICT. Humans might be able to build nanomachinery that 3D prints covalent bonds for a while before it breaks. But that is the jello robot, and no matter how optimized you make a jello robot, it's not really going to work.

But even though I say that, maybe that's a bit hyperbolic. A superintelligent AI (or maybe even humans with plenty of time and too many resources) could probably solve a lot of those problems in a way that keeps the same general aesthetic. The easiest problem is vacuum. We could do better than 10^-13 Torr if we really wanted to; it's just that every step is expensive, slow, and makes it harder to do experimentation. If you make the vacuum five orders of magnitude better, and cool everything down to a few microkelvin, you can let all of your steps take a lot longer, which loosens a lot of the constraints based around hysteresis or energy getting dumped into the system by your actuators.

Could a superintelligence solve all the problems? Well, there are likely problems with state transitions that are still an issue even at low temperature due to quantum tunneling. I suspect a superintelligent AI (or a human with a sufficiently powerful computer) could solve these problems, but I'm not confident that they would do so in ways that keep the "3D printing aesthetic" intact.

So I subscribe to 1 in the strict sense of "the exact things Drexler is talking about will have problems." I subscribe to 2 in the sense of "Trying to fix this without changing the design philosophy will be really hard, if it's possible." And I want to point at some third thing like "A superintelligent AI trying to produce the same output would probably do it in a way that looks different than this."

[anonymous]

What 3D printing aesthetic? I understand the core step of Drexlerian nanoassembly is that a target molecule is physically held in basically a mechanical jig. Feedstock gas - obviously filtered enough that it's pure element-wise, though nanotechnology, like all chemistry, only operates on electron-cloud identity - is introduced to the target mechanically. The feedstock molecules are chosen so that bonding is energetically favorable. A chemical bond happens, and the new molecule is sent somewhere else in the factory.

The key note is that the proposal is to use machinery to completely enclose and limit the chemistry to the one reaction you wanted. And the machinery doing this is specialized - it immediately starts working on the exact same bonding step again. It's similar to how nature does it, except that biological enzymes are floppy, which lets errors happen, and they rely on the properties of water to "cage" molecules and otherwise act as part of the chemistry, whereas the Drexler way would require an actual physical tool to be robotically pressed into place, forcing there to be exactly one possible bond.

Did you read his books? I skimmed them and recall no discussion of direct printing; chemistry can't do that.

So a nanoforge at a higher level is all these assembly lines, each producing exactly one product. The larger molecules being worked on can be sent down variant paths, and at the larger subsystem and robotic-machinery assembly levels there are choices. At the point where there are choices, these are big subassemblies of hundreds of daltons, just as nature strings peptides out of fairly bulky amino acids.

Primarily, though, you should realize that while a nanoforge would be a colossal machine made of robotics, it can only make this limited "menu" of molecules and robotic parts, and in turn almost all of these parts are used in itself. When it isn't copying itself, it can make products, but those products are all just remixes from this limited menu.

It's not an entirely blind process: robotic assembly stations can sense if a large molecule is there, and they are shaped to fit only one molecule, so factory logic, including knowing if a line is "dead", is possible. (Dead lines can't be repaired, so you have to be able to route copies of what they were producing from other lines, and this slows the whole nanoforge down as it "ages" - it has to construct another complete nanoforge before something critical fails and it ceases to function.)

Similarly other forms of errors may be reportable.  

What I like about the nanoforge hypothesis is that we can actually construct fairly simple programmatic goals for a superintelligent narrow AI to follow to describe what this machine is, as well as a whole tree of subgoals. For every working nanoforge there is this immense combinatorial space of designs that won't work, and this is recursively true down to the smallest discrete parts; as an optimization problem there is a lot of coupling. The small-molecule-level robotic assembly stations need to reuse as many parts as possible between them, because this shrinks the size and complexity of the overall machine, for instance.

This doesn't subdivide well between design teams of humans.

Another coupling example: suppose you discover a way to construct an electric motor at the nanoscale and it scores the best on a goal heuristic, after years of work.  

You then find it can't be integrated into the housing another team was working on.

For an AI this is not a major problem - you simply need to remember the 1 million other motor and housing candidates you designed in simulation and begin combinatorially checking how they match up. In fact you never really commit to a possibility but always just maintain lists of possibilities as you work towards the end goal.  

I have seen human teams at Intel do this but they would have a list length of 2. "If this doesn't work here's the backup".
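A minimal sketch of that kind of candidate-list matching; the `compatible` and `score` functions here are invented placeholders, not any real design tool:

```python
# Hedged sketch of "keep every candidate, then check combinations".
# `compatible` and `score` are stand-ins supplied by the caller.
from itertools import product
from typing import Callable, List, Optional, Tuple

def best_compatible_pair(motors: List[object],
                         housings: List[object],
                         compatible: Callable[[object, object], bool],
                         score: Callable[[object, object], float]
                         ) -> Optional[Tuple[object, object]]:
    """Check every motor/housing combination, keep the ones that fit
    together, and return the highest-scoring pair (or None if none fit)."""
    viable = [(m, h) for m, h in product(motors, housings) if compatible(m, h)]
    return max(viable, key=lambda pair: score(*pair), default=None)
```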

Right, by 3d printing I mean the individual steps of adding atoms at precise locations.

Like in the video you linked elsewhere - acetylene is going to leak through the seal, or it's going to dissociate from where it's supposed to sit, and then it's going to at best get adsorbed onto your machinery before getting very slowly pumped out. But even adsorbed gas changes the local electron density, which changes how atoms bond.

The machinery may sense when it's totally gummed up, but it can't sense if unluckily adsorbed gas has changed the location of the carbon atoms it's holding by 10 pm, introducing a small but unacceptable probability of failing to bond, or bonding to the wrong site. And downstream, atoms in the wrong place means higher chance of the machinery bonding to the product, then ripping atoms off of both when the machinery keeps moving.

[anonymous]

Adding atoms in individual steps is what you do in organic synthesis with catalysts all the time. This is just trying to make side reactions very, very rare, and one step toward that is to not use solvents, because they are chaotic. Bigger enclosing catalysts instead.

Countless rare-probability events will cause a failure. Machinery has to do sufficient work during its lifetime to contribute enough new parts to compensate for the failures. It does not need to be error-free, just have a low enough error rate to be usable.
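A toy way to frame that threshold (the numbers below are made up, just to show the condition):

```python
# A station is viable if, on average, it produces its share of a full copy
# of the machine before it fails: (production rate / failure rate) >= share.
parts_per_hour = 3600.0          # assumed production rate (illustrative)
failures_per_hour = 0.01         # assumed failure rate (illustrative)
share_of_one_copy = 100_000      # parts this station must contribute per copy

expected_parts_before_failure = parts_per_hour / failures_per_hour
print(expected_parts_before_failure >= share_of_one_copy)  # True: 360,000 >= 100,000
```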

The current hypothesis for life is that very poor quality replicators - basically naked RNA - evolved in a suitable host environment and were able to do exactly this, copying themselves slightly faster than they degrade.  This is laboratory verified.  

So far I haven't really heard any objections other than that we are really far from the infrastructure needed to build something like this. Tentatively I assume the order of dependent technology nodes is:

Human level narrow AI -> general purpose robotics -> general purpose robotic assembly at macroscale -> self replicating macroscale robotics -> narrow AI research systems -> very large scale research complexes operated by narrow AI.

The fundamental research algorithm is this:

The AI needs a simulation to determine if a candidate design is likely to work or not.  So the pipeline is

(sim frame) -> engine stage 1 -> neural network engine -> predicted frames, uncertainty

This is recursive of course, you predict n frames in advance by using the prior predicted frames.

 The way an AI can do science is the following:

   (1) identify simulation-environment frames of interest to its end goal that have high uncertainty

   (2) propose experiments to reduce that uncertainty

   (3) sort the experiments by a heuristic of cost and information gain

Perform the top 1000 or so experiments in parallel, update the model, and go back to the beginning.

All experiments are obviously robotic, ideally with heterogeneous equipment (different brand of robot, different apparatus, different facility, different funding source, different software stack).
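A minimal sketch of that loop; everything passed in (the uncertainty model, the proposal generator, the cost and information-gain heuristics, the robotic executor) is a stand-in for components that don't exist yet:

```python
# Hedged sketch of the uncertainty-driven research loop described above.
# Only the control flow is real; all the callables are placeholders.
import heapq
from typing import Callable, List

def research_step(frames: List[object],
                  uncertainty: Callable[[object], float],
                  propose_experiments: Callable[[object], List[object]],
                  info_gain: Callable[[object], float],
                  cost: Callable[[object], float],
                  run_experiment: Callable[[object], object],
                  update_model: Callable[[List[object]], None],
                  threshold: float = 0.5,
                  budget: int = 1000) -> None:
    # (1) find simulation states relevant to the end goal with high uncertainty
    uncertain = [f for f in frames if uncertainty(f) > threshold]
    # (2) propose experiments that would reduce that uncertainty
    proposals = [e for f in uncertain for e in propose_experiments(f)]
    # (3) rank by information gain per unit cost and keep the top `budget`
    best = heapq.nlargest(budget, proposals, key=lambda e: info_gain(e) / cost(e))
    # run them (robotically, in parallel in the proposal) and update the model
    update_model([run_experiment(e) for e in best])
```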

Anyway, that's how you unlock nanoforges - build thousands or millions of STMs and investigate this in parallel. Likely not achievable without the dependent tech nodes above.

The current model is that individual research groups have, what, 1-10 STMs? A small team of a few grad students? And they encrypt their results in a "research paper" deliberately designed to be difficult for humans to read even if they are well educated? So even if there were a million labs investigating nanotechnology, nearly all the papers they write are read by only a few of the others. Negative results and raw data are seldom published, so each lab repeats the same mistakes others have already made thousands of times.

This doesn't work. It only worked for lower-hanging fruit. It's the model you discover radioactivity or the transistor with, not the model you use to build an industrial complex that crams most of the complexity of Earth's industrial chain into a small box.

This. Any real nanoassembly will take place in a small controlled environment. That's still revolutionary, but it does cut off grey goo concerns. Also, I think you might not need the liquid nitrogen for products that work at room temperature.

I don't think "controlled environment" cuts off all grey goo concerns. Imagine a diamond box, one micrometer across. Inside the box is atomically precise nanomanufacturing environment. On the outside, little robot arms grab ambient atoms and feed them into input ports. The input ports are designed to only let specific molecular groups through. One port only accepts water and nothing else. 

Inside the box, a flat-pack version of the whole machine is assembled. Once manufacture is complete, an exit port opens up, and the flat pack pops out without any atoms being allowed in. The flat pack unfolds, and replication is complete.

The small controlled environment could be very small indeed, and this allows grey goo.

Current tools do not seem adequate for directly building Drexler's designs. That doesn't mean Drexler's designs are impossible.

The first few generations of molecular assemblers will likely use diffusion and small energy differences. They'll be hard to use, and will break frequently. They'll be used to build tools to roughly atomic precision. Those tools will bootstrap numerous generations of systems which get closer to Drexler's designs. With enough trial and error, we'll eventually find ways to handle the problems you describe.

I won't bet much money that it's easier than the Manhattan Project. I'm moderately confident that nanotech needs less total resources, but it likely needs a larger number of smart people doing lots of exploration.

I think this is a somewhat reasonable take. But I am absolutely saying that AFAICT, Drexler's designs will not work. What you might think are merely problems with building them are, from my view, problems with the entire picture of atom-by-atom 3D printing (especially using reactive species in non-crystalline arrangements) that will persist even if you have nano-infrastructure.

[anonymous]

Did Drexler have a mechanism for his 30 year projection?

Let me give an example of the mechanism.  AGI is likely available within 10 years, where AGI is defined as a "machine that can perform, to objective standards, at least as well as the average human on a set of tasks large enough to be about the size of the task-space the average human can perform".

So the mechanism is: (1) large institutions create a large benchmark set (see Big Bench) of automatically gradable tasks that are about as difficult as tasks humans can perform (2) large institutions test out AGI candidate architectures, ideally designed by prior AGI candidates, on this benchmark.  (3) score on the benchmark > average human?  You have AGI.

This is clearly doable and just a matter of scale.
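A toy illustration of that gate; the task list, grading function, and human baselines are hypothetical placeholders, not any real benchmark's API:

```python
# Toy version of "score on the benchmark > average human? You have AGI."
from typing import Callable, Dict, List

def passes_agi_benchmark(candidate: object,
                         tasks: List[str],
                         grade: Callable[[object, str], float],
                         human_baseline: Dict[str, float]) -> bool:
    """True if the candidate matches or beats the average human score
    on every automatically gradable task in the suite."""
    return all(grade(candidate, task) >= human_baseline[task] for task in tasks)
```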

Contrast this to nanotechnology.

See here: 

A nanoassembler is a vast parallel set of assembly lines run by nanomachinery robotics. So you have to develop, at a minimum, gears and dies and sensors and motors and logic and conveyor lines and bearings and...

Note that reliability needs to be very high: a side reaction that causes an unwanted bond will in many cases "kill" that assembly line. Large-scale "practical" nano-assemblers will need redundancy.
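A toy yield estimate of why that matters (the per-bond error rate and bond count are made-up numbers):

```python
# If each bonding step avoids side reactions with probability (1 - p),
# a product needing n bonds comes out intact with probability (1 - p) ** n.
p_side_reaction = 1e-7   # assumed probability of an unwanted bond per step
n_bonds = 1_000_000      # assumed bonds per product

survival = (1 - p_side_reaction) ** n_bonds
print(f"fraction of products intact ~ {survival:.2f}")  # ~ 0.90
```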

And at a minimum, you need to get at least one functioning subsystem of the nano-machinery working and producing things before you can begin to bootstrap. Also, you are simply burning money until you get a mostly or fully functioning nanoassembler - the cost of manipulating atoms with conventional methods is very high, using things like STMs that are each very expensive and can usually only move a single head around.

The problem statement is "build a machinery set of complex parts able to produce every part used in itself".

So it's this enormous upfront investment, it looks nothing like anything people have made before, we can't even "see" what we are doing, and the machinery needed is very very complicated and made using methods people have never used before.  

Note that we don't have macroscale assemblers. 3D printers can't copy themselves; nothing can. We fill in the gaps in our industrial manufacturing lines with humans, who do the steps robots aren't doing. Humans can't be used to gap-fill at the nanoscale.

I don't see a basis for the "30 year" estimate. The Manhattan Project was "let's purify an isotope we know exists so it will chain-react, and let's make another fissionable element by exposing uranium to a neutron flux from the first reactor". There were a lot of difficulties, but it was a straightforward, reasonably simple thing to do: "purify this isotope", "let nature make this other element (plutonium)", "purify that".

There were several methods tried for each step and as it so happened almost everything eventually worked.  

Arguably, if you think seriously about what a nanoassembler will require, the answer's obvious. You need superintelligence - a form of trees of narrow AI that can learn from a million experiments in parallel, have subgoals to find routes to a nanoassembler, run millions of robotic STM stations in parallel, and systematically find ways around the problems.

The sheer amount of data and the number of decisions that would have to be made to produce a working nanoassembler are likely simply beyond human intelligence, and always were. You might need millions or billions of people or more to do it with meatware.

The "mechanism" you describe for AGI does not at all sound like something that will produce results within any predictable time.

[anonymous]

?  Did you not read https://www.deepmind.com/publications/a-generalist-agent or https://github.com/google/BIG-bench or https://cloud.google.com/automl or any of the others?

The "mechanism" as I describe it is succinctly, 'what Google is already doing, but 1-3 orders of magnitdue higher".  Gato solves ~200 tasks to human level.  How many tasks does the average human learn to do competently in their lifetime?  2000?  20k?  200k?

It simply doesn't matter which it is; all are within the space of "could plausibly be solved within 10 years".

Whatever it is, it's bounded, and likely the same architecture can be extended to handle all the tasks.  I mention bootstrapping (because 'writing software to solve a prompt' is a task, and 'designing an AI model to do well on an AGI task' is a task) because it's the obvious way to get a huge boost in performance to solve this problem quickly.  

I notice that of the Manhattan and Apollo programs, one is actively destructive, while the other is kind of pointless. So while the ability to work together on big projects might have been lost, arguably we were so bad at choosing projects that this is no bad thing.

Have you noticed that an offshoot of the company that makes Canada's bank notes holds a number of recent patents concerning "mechanosynthesis"?

Adding to my first comment: another basic problem, one that at least applies to organic chemical assemblies, is that easily constructed useful engineering shapes such as straight lines (acetylenes, polyenes), planes (graphene), or spherical/ellipsoidal curves (buckminsterfullerene-like structures) are always replete with free electrons. This makes them somewhat reactive in oxidative atmospheres. Everybody looked at the spherical buckminsterfullerene molecule and said "wow, a super-lubricant!" Nope, it is too darn reactive to have a useful lifetime. This is actually rather reassuring in the context of grey goo scenarios.

Excessive reactivity in oxidative atmospheres may perhaps be overcome if we use metal-organic frameworks to create useful engineering shapes (I am no expert on these so don’t know for sure). But much basic research is still required.

To me, bacteria are also nanotech. There is already "Grey Goo" trying to dissolve us and make us into copies of itself...

We are the danger.

We are the evil AI we warn ourselves about.