Confidence level: I’m a computational physicist working on nanoscale simulations, so I have some understanding of most of the things discussed here, but I am not specifically an expert on the topics covered, so I can’t promise perfect accuracy.

I want to give a huge thanks to Professor Phillip Moriarty of the University of Nottingham for answering my questions about the experimental side of mechanosynthesis research.

Introduction:

A lot of people are highly concerned that a malevolent AI or insane human will, in the near future, set out to destroy humanity. If such an entity wanted to be absolutely sure they would succeed, what method would they use? Nuclear war? Pandemics?

According to some in the x-risk community, the answer is this: The AI will invent molecular nanotechnology, and then kill us all with diamondoid bacteria nanobots.

This is the “lower bound” scenario posited by Yudkowsky in his post AGI ruin:

The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. 

The phrase “diamondoid bacteria” really stuck out to me, and I’m not the only one. In this post (which I found very interesting), Carlsmith refers to diamondoid bacteria as an example of future tech that feels unreal, but may still happen:

Whirling knives? Diamondoid bacteria? Relentless references to paper-clips, or “tiny molecular squiggles”? I’ve written, elsewhere, about the “unreality” of futurism. AI risk had a lot of that for me.

Meanwhile, the controversial anti-EA crusader Émile Torres cites the term “diamondoid bacteria” as a reason to dismiss AI risk, calling it “patently ridiculous”.

I was interested to know more. What is diamondoid bacteria? How far along is molecular nanotech research? What are the challenges that we (or an AI) will need to overcome to create this technology?

If you want, you can stop here and try and guess the answers to these questions.

It is my hope that by trying to answer these questions, I can give you a taste of what nanoscale research actually looks like. It ended up being the tale of a group of scientists who had a dream of revolutionary nanotechnology, and tried to answer the difficult question: How do I actually build that?

What is “diamondoid bacteria”?

The literal phrase “diamondoid bacteria” appears to have been invented by Eliezer Yudkowsky about two years ago. If you search the exact phrase in Google Scholar there are no matches:

If you search the phrase in regular Google, you will get a very small number of matches, all of which are from Yudkowsky or directly/indirectly quoting Yudkowsky. The very first use of the phrase on the internet appears to be this Twitter post from September 15, 2021. (I suppose there’s a chance someone else used the phrase in person.)

I speculate that Eliezer invented the term as a bit of poetic licence, to make nanobots seem more viscerally real. It does not seem likely that the hypothetical nanobots would fit the scientific definition of bacteria, unless you really stretched the definition of terms like “single-celled” and “binary fission”. That said, bacteria are very impressive micro-machines, so I wouldn’t be surprised if future nanotech bore at least some resemblance.

Frankly, I think inventing new terms like this is an extremely unwise move (I think that Eliezer has stopped using the term since I started writing this, but others still are). “Diamondoid bacteria” sounds science-ey enough that a lot of people would assume it was already a scientific term invented by an actual nanotech expert (even in a speculative sense). If they then google it and find nothing, they are going to assume that you’re just making shit up.

But diamondoid nanomachinery has been a subject of inquiry, by actual scientific experts, in a research topic called “diamondoid mechanosynthesis”.

What is “diamondoid mechanosynthesis”?

Molecular nanotech (MNT) is an idea first championed by Eric Drexler: that the same principles of mass manufacturing used in today’s factories could one day be miniaturized to the nanoscale, assembling complex materials molecule by molecule from the ground up, with nanoscale belts, gears, and manipulators. You can read the thesis here. It’s an impressive first theoretical pass at the nanotech problem, considering the limited computational tools available in 1991, and it helped inspire many in the current field of nanotechnology (which mostly does not focus on molecular assembly).

However, Drexler’s actual designs for how a molecular assembler would be built have been looked on with extreme skepticism by the wider scientific community. And while some of the criticisms have been unfair (such as accusations of pseudoscience), there are undeniably extreme engineering challenges. The laws of physics are felt very differently at different scales, presenting obstacles that have never been encountered before in the history of manufacturing, and which may turn out to be entirely insurmountable in practice. How would you actually make such a device?

Well, a few teams were brave enough to try and tackle the problem head on. The Nanofactory Collaboration, with a website here, was an attempt to directly build a molecular assembler. It was started in the early 2000s, with the chief players being Freitas and Merkle, two theoretical/computational physicists following on from the work of Drexler. The method they were researching to make this a reality was diamondoid mechanosynthesis (DMS).

So, what is DMS? Let’s start with mechanosynthesis. Right now, if you want to produce molecules from constituent molecules or elements, you place reactive elements in a liquid or gas and jumble them around so they bump into each other randomly. If the reaction is thermodynamically favorable under the conditions you’ve set up (temperature, pressure, etc.), then mass quantities of the desired products are created.

This is all a little chaotic. What if we wanted to do something more controlled? The goal of mechanosynthesis is to use mechanical force to precisely position the reactive elements we wish to combine. In this way, the hope is that extremely complex structures could be assembled atom by atom or molecule by molecule.

The dream, as expressed in the molecular assembler project, was that mechanosynthesis can be mastered to such a degree that “nano-factories” could be built, capable of building many different things from the ground up, including another nanofactory. If this could be achieved, then as soon as one nanofactory is built, a vast army of them would immediately follow through the power of exponential growth. These could then build nanomachines that move around, manipulate objects, and build pretty much anything from the ground up, like a real life version of the Star Trek matter replicator.
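To get a feel for the exponential growth involved, here’s a toy calculation. The one-copy-per-day replication time is purely my own assumption for illustration; nothing in the nanofactory literature commits to a specific rate:

```python
# Toy model of self-replicating nanofactory growth.
# Assumption (hypothetical): each nanofactory builds one copy of itself per day.
factories = 1
days = 0
while factories < 1e12:  # stop once we pass a trillion factories
    factories *= 2       # every existing factory duplicates itself
    days += 1

print(days, factories)  # 40 days, ~1.1 trillion factories
```

This is the whole appeal (and the whole worry) of self-replication: the first working nanofactory is the hard part, and under these toy assumptions the next trillion follow in about a month.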

If you want to convert a dream into a reality, you have to start thinking about engineering. If you could make such a nano-factory, what would it be made out of? There are a truly gargantuan number of materials out there we could try, but almost all of them are not strong enough to support the kind of mechanical structures envisaged by the nanofactory researchers. The most promising candidate was “diamondoid”.

Now, what is “diamondoid”? You’d expect this to be an easy question to answer, but it’s actually a little thorny. The more common definition, the one used on Wikipedia and in most journal papers, is that diamondoid refers to a specific family of hydrocarbons like the ones shown below. The simplest is “adamantane”, with its strong, cage-like structure; the others are formed by joining together multiple cages.

Image taken from here

These cages are incredibly strong and stable, which makes them a promising candidate material for building up large structures, and keeping them stable for assembly purposes.

The other definition, which seems to be mainly used by the small community of molecular nanotech (MNT) proponents, is that “diamondoid” just means “any sufficiently strong and stiff nanoscale material”. See this passage from the “molecular assembler” website:

Diamondoid materials also may include any stiff covalent solid that is similar to diamond in strength, chemical inertness, or other important material properties, and possesses a dense three-dimensional network of bonds. Examples of such materials are carbon nanotubes (illustrated at right) or fullerenes, several strong covalent ceramics such as silicon carbide, silicon nitride, and boron nitride, and a few very stiff ionic ceramics such as sapphire (monocrystalline aluminum oxide) that can be covalently bonded to pure covalent structures such as diamond.

This passage is very out of line with mainstream definitions. I couldn’t find a mention of “diamondoid” in any top carbon nanotube article. I’ve done a little research on aluminium oxide, and I have never in my life heard it called “diamondoid”: it neither contains the same elements as diamond, nor does it take the same structure as diamond or diamondoid hydrocarbons. This kind of feels like the “radical sandwich anarchy” section of this chart.

I really don’t want to get sidetracked into semantic debates here. But just know that the MNT definition is non-standard, might annoy material scientists, and could easily be used against you by someone with a dictionary.

In any case, it’s not a huge deal, because the molecular assembler team was focused on carbon-based diamond and diamondoid structures anyway.  

The plan was to engage in both theoretical and experimental research to develop nanotech in several stages. Step 1 was to achieve working prototypes of diamond mechanosynthesis. Step 2 was to build on this to actually assemble complex molecular structures in a programmable mechanical manner. Step 3 was to find  a way to parallelize the process, so that huge amounts of assembly could be done at once. Step 4 was to use that assembly to build a nanofactory, capable of building a huge number of things, including a copy of itself. The proposed timeline for this project is shown below:

They thought they would have the first three steps finished by 2023, and working commercialized nanofactories by 2030. Obviously, this is not on track. I’m not holding this against them, as extremely ambitious projects rarely finish on schedule. They were also underfunded compared to what they wanted, further hampering progress.

How far did the project go, in the end?

DMS research: The theoretical side

The nanofactory collaboration put forward a list of publications, and as far as I can tell, every single one is theoretical or computational in nature. There are a few book chapters and patent applications, as well as about a dozen peer-reviewed scientific articles, mostly in non-prestigious journals1.

Skimming through the papers, they seem fine. A lot of time and effort has gone into them, I don’t see any obvious problems with their methodology, and the reasoning and conclusions seem reasonable. Going over all of them would take way too long, so I’ll just pick one that is representative and relatively easy to explain: “Theoretical Analysis of Diamond Mechanosynthesis. Part II. C2 Mediated Growth of Diamond C(110) Surface via Si/Ge-Triadamantane Dimer Placement Tools”.

Please don’t leave, I promise you this is interesting!

The goal of this paper is simple: we want to use a tooltip to pick up a pair of carbon atoms (referred to as a “dimer”), place the dimer on a carbon surface (diamond), and remove the tooltip, leaving the dimer on the surface.

In our large world, this type of task is pretty easy: you pick up a brick, you place it where you want, and then you let it go. But the forces at play are radically different at the nanoscale. For example, we used friction to pick the brick up, but “friction” does not really exist at the single-atom scale. Instead, we have to bond the cargo element to our tool, and then break that bond at the right moment. It’s like if the only way to lay bricks was to glue your hand to a brick, glue the brick to the foundation, and then rip your hand away.

Below is the tooltip design they were investigating. We have our diamondoid cages from earlier, but a pair of corner atoms is replaced with germanium (or Si) atoms, and the cargo dimer is bonded to these corners, in the hope that it will be easier to detach:

The first computational result is a check of this structure using DFT simulations. I have described DFT and its strengths and shortcomings in this previous post. They find that the structure is stable in isolation.

Okay, great, it’s stable on its own, but the eventual plan is to have a whole ton of these working in parallel. So the next question they ask is this: if I have a whole bunch of these together, are they going to react with each other and ruin the tooltip? The answer, they find, is yes, in two different ways. Firstly, if two of these meet dimer-to-dimer, it’s thermodynamically favorable for them to fuse together into one big, useless tooltip. Secondly, if one encounters the hydrogen atoms on the surface of the other, it would tear them out to sit on the end of the cargo dimer, again rendering it useless. They don’t mention it explicitly, but I assume the same thing would happen if it encountered stray hydrogen in the air.

This is a blow to the design, and would mean great difficulty in actually using the thing large scale. In theory you could still pull it off by keeping the tools isolated from each other.

They check the stability of the tooltip position itself using a molecular dynamics calculation, and find that it’s stable enough for purpose, with positional fluctuations smaller than the chemical bond distances involved.

And now for the big question: can it actually deposit the dimer on the surface? The following graph summarizes the DFT results:

On the left side, we have the initial state. The tooltip is carrying the cargo dimer. At this step, and at every other, a DFT calculation is carried out to calculate the total energy of the simulation.

In the middle, we have the intermediate state. The tooltip has been lowered, carrying the dimer to the surface, where the carbon dimer is now bonded both to the tooltip and to the diamond surface.

On the right, we have the desired final state. The tooltip has been retracted and raised, but the carbon is left behind on the surface.

All three states have been simulated using DFT to predict their energy, and so have a number of intermediate steps in between. From this, we can see that the middle step is predicted to be 3 eV more energetically favorable than the left state, meaning that there will be no problem progressing from left to middle.

The real problem they find is in going from the middle state to the right state. There is about a 5 eV energy barrier to climb to remove the tooltip. This is not a game ender, as we can apply such energy mechanically by pulling on the tooltip (I did a back of the envelope calculation and the energy cost didn’t seem prohibitive2).
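Here’s a rough version of that kind of back-of-envelope check. Both numbers are my own illustrative assumptions, not from the paper: I assume the ~5 eV must be supplied over roughly one bond length of retraction, ~0.2 nm:

```python
# Back-of-envelope: force needed to supply a ~5 eV barrier mechanically
# by pulling the tooltip through roughly one bond length.
# Both the barrier height and the distance are illustrative assumptions.
EV_TO_JOULE = 1.602e-19        # 1 eV in joules
barrier_J = 5.0 * EV_TO_JOULE  # retraction energy barrier
distance_m = 0.2e-9            # ~ one C-C bond length

force_nN = barrier_J / distance_m * 1e9  # convert N to nN
print(f"{force_nN:.1f} nN")  # ≈ 4.0 nN
```

A few nanonewtons is within the range of forces that scanning probe tips routinely exert, which is consistent with the conclusion that the energy cost itself isn’t the showstopper.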

No, the real problem is that when you pull on the tooltip, there’s no way to tell it to leave the dimer behind on the surface. In fact, it’s lower energy to rip up the carbon dimer as well, going right back to the left state, where you started.

They attempted a molecular dynamics simulation, and found that with the germanium tip, deposition failed 4 out of 5 times (for silicon, it failed every time). They state this makes sense because the extra 1 eV barrier is small enough to be overcome, at least some of the time, by 17 eV of internal (potential + kinetic) energy. If I were reviewing this paper I would definitely ask for more elaboration on these simulations, and where exactly the 17 eV figure comes from. They conclude that while this would not be good enough for actual manufacturing, it’s good enough for a proof of concept.

In a later paper, it is claimed that the analysis above was too simplistic, and that a more advanced molecular dynamics simulation shows the Ge tool reliably deposits the dimer on the surface every time. It seems very weird and unlikely to me that the system would go to the higher energy state 100% of the time, but I don’t know enough about how mechanical force is treated in molecular dynamics to properly assess the claim.

I hope that this analysis has given you a taste of the type of problem that is tackled in computational physics, and how it is tackled. From here, they looked at a few other challenges, such as investigating more tip designs, looking at the stability of large diamondoid structures, and proposing a tool to remove hydrogen from a surface in order to make it reactive, a necessary step in the process.

Experimental diamondoid research

Recall that the goal of this theoretical research was to set the stage for experimental results, with the eventual aim of actually building diamondoid structures. But if you look at the collaborators of the project, almost everyone was working on theory. Exactly one experimentalist team worked on the project.

The experimentalist in question was University of Nottingham professor Phillip Moriarty, of Sixty Symbols fame (he has a blog too). Interestingly enough, the collaboration was prompted by a debate with an MNT proponent in 2004, in which Moriarty presented a detailed skeptical critique of DMS proposals and Drexler-style nanotech in general. A sample of his concerns:

While I am open to the idea of attempting to consider routes towards the development of an implementation pathway for Mann et al.’s Si/Ge-triadamantane dimer placement reaction, even this most basic reaction in mechanochemistry is practically near-impossible. For example, how does one locate one tool with the other to carry out the dehydrogenation step which is so fundamental to Mann et al.’s reaction sequence?

….

Achieving a tip that is capable of both good atomic resolution and reliable single molecule positioning (note that the Nottingham group works with buckyballs on surfaces of covalently bound materials (Si(111) and Si(100)) at room temperature) requires a lot of time and patience. Even when a good tip is achieved, I’ve lost count of the number of experiments which went ‘down the pan’ because instead of a molecule being pushed/pulled across a surface it “decided” to irreversibly stick to the tip.

Despite the overall skepticism, he approved of the research efforts by Freitas et al, and the correspondence between them led to Moriarty signing on to the nanofactory project. Details on what happened next are scarce on the website.

Rather than try and guess what happened, I emailed Moriarty directly. The full transcripts are shown here.

Describing what happened, Moriarty explained that the work on diamond mechanosynthesis was abandoned after ten months:

Diamond is a very, very difficult surface to work with. We spent ten months and got no more than a few, poorly resolved atomic force microscopy (AFM) images. We’re not alone. This paper -- https://journals.aps.org/prb/cited-by/10.1103/PhysRevB.81.201403 (also attached)-- was the first to show atomic resolution AFM of the diamond surface. (There’d previously been scanning tunnelling microscopy (STM) images and spectroscopy of the diamond (100) surface but given that the focus was on mechanical force-driven chemistry (mechanosynthesis), AFM is a prerequisite.) So we switched after about a year of that project (which started in 2008) to mechanochemistry on silicon surfaces – this was much more successful, as described in the attached review chapter.

Inquiring as to why diamond was so hard to work with, he replied:

A key issue with diamond is that tip preparation is tricky. On silicon, it’s possible to recover atomic resolution relatively straight-forwardly via the application of voltage pulses or by pushing the tip gently (or not so gently!) into the surface – the tip becomes silicon terminated. Diamond is rather harder than silicon and so once the atomistic structure at the end is lost, it needs to be moved to a metal sample, recovered, and then moved back to the diamond sample. This can be a frustratingly slow process.

Moreover, it takes quite a bit of work to prepare high quality diamond surfaces. With silicon, it’s much easier: pass a DC current through the sample, heat it up to ~ 1200 C, and cool it down to room temperature again. This process routinely produces large atomically flat terraces.

So it turns out that mechanosynthesis experiments on diamond are hard. Like, ridiculously hard. Apparently only one group ever has managed to successfully image the atomic surface in question. This renders attempts to do mechanosynthesis on diamond impractical, as you can’t tell whether or not you’ve pulled it off.

This is a great example of the type of low-level practical problem that is easy to miss if you are a theoretician (and pretty much impossible to predict if you aren’t a domain expert).

So all of those calculations about the best tooltip design for depositing carbon on diamond ended up being completely useless for the problem of actually building a nanofactory, at least until imaging technology or techniques improve.

But there wasn’t zero output. The experimental team switched materials, and was able to achieve some form of mechanosynthesis. It wasn’t on diamond but on silicon, which is much easier to work with. And it wasn’t deposition of atoms; it was a mechanical switch operated with a tooltip, summarized in this YouTube video. Not a direct step toward molecular assembly, but still pretty cool.

As far as I can tell, that’s the end of the story when it comes to DMS. The collaboration appears to have ended in the early 2010s, and I can barely find any mention of the topic in the literature past 2013. They didn’t reach the dream of a personal nanofactory: they didn’t even reach the dream of depositing a few carbon atoms on a diamond surface.

A brief defense of dead research directions

I would say that DMS research is fairly dead at the moment. But I really want to stress that that doesn’t mean it was bad research, or pseudoscience, or a waste of money.

They had a research plan, some theoretical underpinnings, and explored a possible path to converting theory into experimental results. I can quibble with their definitions, and some of their conclusions seem overly optimistic, but overall they appear to be good-faith researchers making a genuine attempt to expand knowledge and tackle a devilishly difficult problem with the aim of making the world a better place. That they apparently failed to do so is not an indictment; it’s just a fact of science that even great ideas mostly don’t pan out into practical applications.

Most research topics that sound good in theory don’t work in practice, when tested and confronted with real world conditions. This is completely fine, as the rare times when something works, a real advancement is made that improves the lives of everyone. The plan for diamondoid nanofactories realistically had a fairly small chance of working out, but if it had, the potential societal benefits could have been extraordinary. And the research, expertise, and knowledge that comes out of failed attempts are not necessarily wasted, as they provide lessons and techniques that help with the next attempt.

And while DMS research is somewhat dead now, that doesn’t mean it won’t get revived. Perhaps a new technique will be invented that allows for reliable imaging of diamondoid, and DMS ends up being successful eventually. Or perhaps after a new burst of research, it will prove impractical again, and the research will go to sleep again. Such is life, in the uncertain realms of advanced science.

Don’t worry, nanotech is still cool as hell

At this point in my research, I was doubting whether even basic nanomachines or rudimentary mechanosynthesis was possible. But this was an overcorrection. Nanoscience is still chugging along fine. Here is a non-exhaustive list of some cool shit we have been able to do experimentally. (Most of these examples were taken from Nanotechnology: A Very Short Introduction, written by Phillip Moriarty, the same one as before.)

First, I’ll note that traditional chemistry can achieve some incredible feats of engineering, without the need for mechanochemistry at all. For example, in 2003 the Nanoputian project successfully built a nanoscale model of a person out of organic molecules. They used cleverly chosen reaction pathways to produce the upper body and the lower body separately, and then managed to pick the exact right conditions to bond the two parts together.

Similarly, traditional chemistry has been used to build “nanocars”, nanoscale structures that contain four buckyball wheels connected to a molecular “axle”, allowing them to roll across a surface. Initially, these had to be pushed directly by a tooltip. In later versions, such as the nanocar race, the cars are driven by electron injection or electric fields from the tooltip, reaching top speeds of 300 nm per hour. Of course, at this speed a nanocar would take about 8 years to cross the width of a human finger, but it’s the principle that counts.
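That 8-year figure checks out, assuming a finger width of about 2 cm (my own round-number assumption):

```python
# Sanity check of the nanocar travel time across a human finger.
# The 2 cm finger width is an assumed round number.
speed_nm_per_hour = 300
finger_width_nm = 0.02 / 1e-9  # 2 cm expressed in nanometres = 2e7 nm

hours = finger_width_nm / speed_nm_per_hour
years = hours / (24 * 365)
print(f"{years:.1f} years")  # ≈ 7.6 years
```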

The 2016 Nobel Prize in Chemistry was awarded for work on molecular machines, including molecular lifts, muscles, and axles.

I’ll note that using a tooltip to slide atoms around has been a thing since 1990, when IBM wrote its initials using xenon atoms. A team achieved a similar feat for selected silicon atoms on silicon surfaces in 2003, using purely mechanical force.

As for the dream of molecular assembly, the goal of picking atoms up and placing them down has been achieved by a UK team, which was able to use a chemical arm to pick up a cargo molecule bonded on one side, transfer it to the other side, and drop it and leave it in place:

This is not mechanosynthesis as it is not powered by direct mechanical force, but from chemical inputs, such as varying the acidity of the solution. It is also based on more complex organic molecules, rather than diamondoid structures.

This brings us to what seems like the most interesting and promising area: DNA-based nanotech. This makes sense: over billions of years, evolution already figured out a way to build extremely complex self-replicating machines, which can also build little bots as small as 20 nm across. Actual bacteria are larger-scale and more fragile than hypothetical nanofactories, but have the distinct advantage of actually existing. Why reinvent the wheel?

I have very little background in biology, so I won’t venture too deeply into the topic (which deserves a whole post of its own), but there have been a number of highly impressive achievements in DNA-based nanotech. The techniques of DNA origami allow DNA strands to fold up into a variety of structures, such as spheres, cubes, and nanoflasks. One team used one such DNA nanorobot to target tumour growth in mice. The research is still some ways from practical human applications (and many such promising medical technologies end up being impractical anyway). Nonetheless, I’m impressed, and will be watching this space closely.

So are diamondoid bots a threat?

It’s very hard to prove that a technology won’t pan out, if it doesn’t inherently break the laws of physics. But a tech being “not proven to be impossible” does not mean the tech is “inevitable”.

With regards to diamondoid specifically, materials that are not diamondoid outnumber those that are by a truly ridiculous margin. And although diamondoid has a lot going for it in terms of stiffness and strength, we saw that it also has shortcomings that make it difficult to work with, and potential minefields like the tooltips theoretically sticking to each other. So my guess is that if Drexler-style nanofactories are possible, they will not be built out of diamondoid.

How about nanofactories made of other materials? Well, again, there are a truly gargantuan number of materials available, which does give some hope. But then, this is also a ridiculously hard problem. We haven’t even scratched the surface of the difficulties awaiting such a project. Depositing one measly dimer on a surface turned out to be too hard, and even once that’s achieved, you have to figure out how to place the next one, and the next one, and build a proper complex structure without getting your tooltip stuck. You need a way to harvest your sources of carbon to build things up with. If you want to be truly self-sufficient and self-replicating, you need a source of energy for the mechanical force needed to rip atoms away, and a means of propulsion to move your entire nanofactory around.

Designs have been proposed for a lot of these problems (like in Drexler’s thesis), but each step is going to be beset with endless issues and engineering challenges that would have to be trudged through, one step at a time. We’ve barely gotten to step 1.

Fusion power is often accurately mocked for having been “20 years away” for over three decades. It had proofs of concept and was theoretically understood; it seemed all that was left was the engineering, which ended up being ridiculously hard. To me, molecular nanotech looks about 20 years away from being “20 years away”. At the current rate of research, I would guess it won’t happen for at least 60 years, if it happens at all. I would be happy to be proven wrong.

I asked Professor Moriarty whether he thought the scenario proposed by Yudkowsky was plausible:

We are a long, long, loooong way from the scenario Yudkowsky describes. For example, despite it being 33 years since the first example of single atom manipulation with the STM (the classic Eigler and Schweizer Nature paper where they wrote the IBM logo in xenon atoms), there’s yet to be a demonstration of the assembly of even a rudimentary 3D structure with scanning probes: the focus is on the assembly of structures by pushing, pulling, and/or sliding atoms/molecules across a surface. Being able to routinely pick up, and then drop, an atom from a tip is a much more complicated problem.

Marauding swarms of nanobots won’t be with us anytime soon.

This seems like a good place to note that MNT proponents have a record of extremely over-optimistic predictions. See this estimation of MNT arrival from Yudkowsky in 1999:

As of '95, Drexler was giving the ballpark figure of 2015 (11).  I suspect the timetable has been accelerated a bit since then.  My own guess would be no later than 2010.

Could the rate of research accelerate?

Now, I can’t leave without addressing the most likely objection. I said greater than 60 years at the current rate of research. But what if the rate of research speeds up?

One idea is that DNA- or bio-based robots will be used to build a Drexler-style nanofactory. This is the “first stage nanofactory” that Yudkowsky mentions in his list of lethalities, and it was the first step proposed by Drexler as well. I see how this could enable better tools and more progress, but I’m not sure how this would affect the fundamental chemistry issues that need to be overcome to build a non-protein-based machine. How will the biobot stop two tooltips from sticking together? If you want to molecularly assemble something, would it really be better for a tooltip to be held by a wiggly, biologically based bot, instead of a precisely computer-controlled one?

The more common objection is that artificial intelligence will speed this research up. Well, now we’re working with two high-uncertainty, speculative technologies. To keep this simple I’ll restrict this analysis to the short term (the next decade or so), and assume no intelligence explosion occurs. I might revisit the subject in more depth later on.

First, forget the dream of advances in theory rendering experiment unnecessary. As I explained in a previous post, the quantum equations are just too hard to solve exactly, so approximations are necessary, and those approximations themselves do not scale particularly well.
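To get a feel for how badly this bites, here is a rough, illustrative back-of-envelope. The scaling exponents are the standard textbook ones (conventional DFT scales roughly as N³, the "gold standard" CCSD(T) coupled-cluster method as N⁷); the one-second baseline for a 10-electron system is a made-up assumption purely for illustration:

```python
# Rough illustration of how quantum chemistry cost grows with system size.
# Assumption (hypothetical): a 10-electron system takes 1 second at each level.
baseline_electrons = 10
baseline_seconds = 1.0

methods = {
    "DFT (~N^3)": 3,      # workhorse approximation
    "CCSD(T) (~N^7)": 7,  # high-accuracy coupled cluster
}

for electrons in [10, 100, 1000]:
    for name, power in methods.items():
        t = baseline_seconds * (electrons / baseline_electrons) ** power
        print(f"{electrons:5d} electrons, {name}: {t:.3g} s (~{t / 86400:.3g} days)")
```

Under these toy assumptions, going from 10 to 1000 electrons takes the N³ method from one second to about twelve days, and the N⁷ method to around 10¹⁴ seconds, i.e. millions of years. That is the sense in which "more accuracy" and "bigger systems" trade off brutally.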

Machine learning in quantum chemistry has been investigated for some time now, and there are promising techniques that could somewhat speed up a subset of calculations, and make feasible some larger-scale calculations that were not before. For my own research, the primary speedups from AI currently come from using ChatGPT to speed up coding a bit and to help write bureaucratic applications.

I think if the DMS project were run today, faster codes would allow for slightly more accurate results and more calculations per paper, letting more materials be investigated, and the time saved on writing and coding could potentially allow another few papers to be squeezed out. For example, if they had used the extra time to look at silicon DMS as well as carbon DMS, they might have gotten something that could actually be experimentally useful.

I’m not super familiar with the experimental side of things. In his book, Moriarty suggests that machine learning could be applied to:

image and spectral classification in various forms of microscopy, automation of time-consuming tasks such as optimization of the tip of an STM or AFM, and the positioning of individual atoms and molecules.

So I think this could definitely speed up parts of the experimental process. However, there are still going to be a lot of human-scale bottlenecks keeping a damper on things, such as sample preparation. And as always with practical engineering, a large part of the process will be figuring out what the hell went wrong with your last experiment. There is still no AI capable of figuring out that your idiot labmate Bob has been accidentally contaminating your samples.

What about super-advanced AGI? Well, now we’re guessing about two different speculative technologies at once, so take my words (and everyone else’s) with a double grain of salt. Obviously, an AGI would speed up research, but I’m not sure the results would be as spectacular as singularity theorists expect.

An AI learning, say, Go, can play a hundred thousand games a minute with little difficulty. In science, there are likely to be human-scale bottlenecks that render experimentation glacial in comparison. High-quality quantum chemistry simulations can take days or weeks to run, even on supercomputing clusters. On the experimental side, humans have to order parts, deliver them, prepare the samples, maintain the physical equipment, etc. It's possible that this could be overcome with some sort of massively automated robotic experimentation system… but then you have to build that system, which is a massive undertaking in itself. Remember, the AI would not be able to use MNT to build any of this. And of course, this all assumes the AI is actually competent, and that MNT is even practically possible.
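To put some loose numbers on the feedback-loop gap: all of the figures below are my own rough assumptions (one week per high-quality quantum chemistry simulation, one month per physical experiment cycle), intended only to show the orders of magnitude involved:

```python
# Toy comparison of feedback-loop speeds (all numbers are rough assumptions).
SECONDS_PER_YEAR = 365 * 24 * 3600

go_games_per_minute = 100_000        # cheap, fully simulated feedback
sim_seconds = 7 * 24 * 3600          # assumed: one week per high-quality QM simulation
experiment_seconds = 30 * 24 * 3600  # assumed: one month per physical experiment cycle

print("Go games per year:        ", go_games_per_minute * 60 * 24 * 365)
print("QM simulations per year:  ", SECONDS_PER_YEAR // sim_seconds)
print("Physical experiments/year:", SECONDS_PER_YEAR // experiment_seconds)
```

Even granting lots of parallelism on the software side, a single physical rig runs maybe a dozen serial iterations a year, against tens of billions of Go games: a gap of roughly nine to ten orders of magnitude in iteration speed.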

Overall, I do not think trying to build Drexler-style nanomachinery would be an effective plan for the adversaries of humanity, at least as things currently stand. If they try, I think we stand a very good chance of detecting and stopping them, if we bother to look instead of admitting premature defeat.

Summary

  1. “Diamondoid bacteria” is a phrase that was invented 2 years ago, referring obliquely to diamondoid mechanosynthesis (DMS).
  2. DMS refers to a proposed technology where small cage-like structures are used to position reactive molecules together to assemble complex structures.
  3. DMS research was pursued by a nanofactory collaboration of scientists starting in the early 2000s.
  4. The theoretical side of the research found some potentially promising designs for the very initial stages of carbon deposition but also identified some major challenges and shortcomings.
  5. The experimental side was unsuccessful due to the inability to reliably image the diamond surface.
  6. Dead/stalled projects are not particularly unusual and should not reflect badly on the researchers.
  7. There are still many interesting advances and technologies in the field of nanoscience, including promising advances in DNA based robotics.
  8. At the current rate of research, DMS style nanofactories still seem many, many decades away at the minimum.
  9. This research could be sped up somewhat by emerging AI tech, but it is unclear to what extent human-scale bottlenecks can be overcome in the near future.

Comments (81)
Veedrac · 7mo

I'm not sure how to put this, but while this post is framed as a response to AI risk concerns, those concerns are almost entirely ignored in favor of looking at how plausible it is for near-term human research to achieve it, and only at the end is it connected back to AI risk via a brief aside whose crux is basically that you don't think Yudkowsky-style ASI will exist.

I like a lot of the discussion if I frame it in my head to be about what it is actually arguing for. Taking it as given, it seems instead broadly a non sequitur, as the evidence given basically doesn't relate to resolving the disagreement.

titotal · 7mo

At no point did I ever claim that this was a conclusive debunking of AI risk as a whole, only an investigation into one specific method proposed by Yudkowsky as an AI death dealer.

In my post I have explained what DMS is, why it was proposed as a technology, how far along the research went, the technical challenges faced in its construction, some observations of how nanotech research works, the current state of nanotech research, what near-term speedups can be expected from machine learning, and given my own best guess on whether an AGI could pull off inventing MNT in a short timeframe, based on what was learned.

This is only "broadly a non sequitur" if you think that none of that information is relevant for assessing the feasibility of diamondoid bacteria AI weapons, which strikes me as somewhat ridiculous.

Veedrac · 7mo
Rather than focusing on where I disagree with this, I want to emphasize the part where I said that I liked a lot of the discussion if I frame it in my head differently. I think if you opened the Introduction section with the second paragraph of this reply (“In my post I have explained”), rather than first quoting Yudkowsky, you'd set the right expectations going into it. The points you raise are genuinely interesting, and tons of people have worldviews that this would be much more convincing to than Yudkowsky's.
1a3orn · 7mo

What would qualify as evidence against an ASI being able to do a thing, apart from pointing out the actual physical difficulties of doing the thing?

Veedrac · 7mo
This excludes most of the potential good arguments! If you can show that large areas of the solution space seem physically unrealizable, that's an argument that potentially generalizes to ASI. For example, I think people can suggest good limits on how ASI could and couldn't traverse the galaxy, and trivially rule out threats like ‘the AI crashes the moon into Earth’, because of physical argument. To hypothesize an argument of this sort that might be persuasive, at least to people able to verify such claims: ‘Synthesis of these chemicals is not energetically feasible at these scales because these bonds take $X energy to form, but it's only feasible to store $Y energy in available bonds. This limits you to a very narrow set of reactions which seems unable to produce the desired state. Thus larger devices are required, absent construction under an external power source.’ I think a similar argument could plausibly exist around object stickiness, though I don't have the chemistry knowledge to give a good framing for how that might look.

There aren't as many arguments once we exclude physical arguments. If you wanted to argue that it was plausibly physically realizable but that strong ASI wouldn't figure it out, I suppose some in-principle argument that it involves solving a computationally intractable challenge in lieu of experiment might work, though that seems hard to argue in reality.

It's generally hard to use weaker claims to limit far ASI, because, being by definition qualitatively and quantitatively smarter than us, it can reason about things in ways that we can't. I'm happy to think there might exist important, practically-solvable-in-principle tasks that an ASI fails to solve, but it seems implausible for me to know ahead of time which tasks those are.
Portia · 7mo
I think the text is mostly focussed on the problems humans have run into when building this stuff, because these are known and hence our only solid empirical detailed basis, while the problems AI would run into when building this stuff are entirely hypothetical. It then makes a reasonable argument that AI probably won't be able to circumvent these problems, because higher intelligence and speed alone would not plausibly fix them, and in fact, a plausible fix might have to be slow, human-mediated, and practical. One can disagree with that conclusion, but as for the approach, what alternative would you propose when trying to judge AI risk? 
Veedrac · 7mo
I think I implicitly answered you elsewhere, though I'll add a more literal response to your question here. On a personal level, none of this is relevant to AI risk. Yudkowsky's interest in it seems like more of a byproduct of his reading choices when he was young and impressionable than anything else, which is not reading I shared. Neither he nor I think this is necessary for xrisk scenarios, with me probably being on the more skeptical side, and me believing more in practical impediments that strongly encourage doing the simple things that work, eg. conventional biotech. Due to this not being a crux and not having the same personal draw towards discussing it, I basically don't think about this when I think about modelling AI risk scenarios. I think about it when it comes up because it's technically interesting. If someone is reasoning about this because they do think it's a crux for their AI risk scenarios, and they came to me for advice, I'd suggest testing that crux before I suggested being more clever about de novo nanotech arguments.
faul_sname · 7mo
My impression is that the nanotech is a load bearing part of the "AI might kill all humans with no warning and no signs of preparation" story - specifically, "a sufficiently smart ASI could quickly build itself more computing substrate without having to deal with building and maintaining global supply chains, and doing so would be the optimal course of action" seems like the sort of thing that's probably true if it's possible to build self-replicating nanofactories without requiring a bunch of slow, expensive, serial real-world operations to get the first one built and debugged, and unlikely to be true if not. That's not to say human civilisation is invulnerable, just that "easy" nanotech is a central part of the "everyone dies with no warning and the AI takes over the light cone uncontested" story.
Veedrac · 7mo
I was claiming that titotal's post doesn't appear to give arguments that directly address whether or not Yudkowsky-style ASI can invent diamondoid nanotech. I don't understand the relevance to my comment. I agree that if you find titotal's argument persuasive then whether it is load bearing is relevant to AI risk concerns, but that's not what my comment is about. FWIW Yudkowsky frequently says that this is not load bearing, and that much seems obviously true to me also.
adastra22 · 2mo
Both this and titotal's earlier article on computational chemistry make the argument that development of molecular nanotechnology can't be a one-shot process because of intrinsic limitations on simulation capabilities. Yudkowsky claims that one-shot nanotechnology is not load bearing, yet literally every example he gives when pressed involves one-shot nanotech.
Veedrac · 2mo
Could you quote or else clearly reference a specific argument from the post you found convincing on that topic?
adastra22 · 2mo
OP's post regarding the topic is here (it is also linked to in the body of this article): https://titotal.substack.com/p/bandgaps-brains-and-bioweapons-the I'm running a molecular nanotechnology company, so I've had to get quite familiar with the inner workings and limitations of existing computational chemistry packages. This article does a reasonable job of touching on the major issues. I can tell you that experimentally these codes don't even come close to the precision required to find trajectories that reliably drive diamondoid mechanosynthesis. It's really difficult to get experiments in the lab to match the (over simplified, reduced accuracy) simulations which take days or weeks to run. Having any hope of adequately simulating mechanosynthetic reactions requires codes that scale with O(n^7). No one even writes these codes, because what would be the point? The biggest thing you could simulate is helium. There are approximations with O(n^3) and O(n^4) running time that are sometimes useful guides, but can often have error bars larger than the relevant energy gaps. The acid test here is still experiment: try to do the thing and see what happens. AI has potential to greatly improve productivity with respect to nano-mechanical engineering designs because you can build parts rigid and large enough, and operating at slow enough speeds that mechanical forces are unlikely to have chemical effects (bond breaking or forming), and therefore faster O(n^2) molecular mechanics codes can be used. But all the intelligence in the world isn't going to make bootstrapping molecular nanotechnology a one-shot process. It's going to be years of hard-won lab work, whether it's done by a human or a thinking machine.
Veedrac · 2mo
Well yes, nobody thinks that existing techniques suffice to build de-novo self-replicating nano machines, but that means it's not very informative to comment on the fallibility of this or that package or the time complexity of some currently known best approach without grounding in the necessity of that approach. One has to argue instead based on the fundamental underlying shape of the problem, and saying accurate simulation is O(n⁷) is not particularly more informative to that than saying accurate protein folding is NP. I think if the claim is that you can't make directionally informative predictions via simulation for things meaningfully larger than helium then one is taking the argument beyond where it can be validly applied. If the claim is not that, it would be good to hear it clearly stated.
adastra22 · 2mo
I don't understand your objection. People who actually use computational chemistry--like me, like titotal--are familiar with its warts and limitations. These limitations are intrinsic to the problem and not expressions of our ignorance. Diamondoid mechanosynthetic reactions depend on properties which only show up in higher levels of theory, the ones which are too expensive to run for any reasonably sized system. To believe that this limitation won't apply to AI, just because, is magical thinking. I wasn't saying anything with respect to de-novo self-replicating nano machines, a field which has barely been studied by anyone and which we cannot adequately say much about at all.
Veedrac · 2mo
And what reason do you have for thinking it can't be usefully approximated in some sufficiently productive domain, that wouldn't also invalidly apply to protein folding? I think it's not useful to just restate that there exist reasons you know of, I'm aiming to actually elicit those arguments here.
adastra22 · 2mo
The fact that this has been an extremely active area of research for over 80 years with massive real-world implications, and we're no closer to finding such a simplified heuristic. This isn't like protein folding at all--the math is intrinsically more complicated, with shared interaction terms that require high-order polynomial running times. Indeed it's provable that you can't have a faster algorithm than those O(n^3) and O(n^4) approximations which cover all relevant edge cases accurately.

Of course there could be some heuristic-like approximation which works in the relevant cases for this specific domain, e.g. gemstone crystal lattice structures under ultra-high vacuum conditions and cryogenic temperatures. Just like there are reasonably accurate molecular mechanics models that have been developed for various specific materials science use cases. But here's the critical point: these heuristics are developed by evolving models based on experiment.

In other domains like protein folding you can have a source of truth for generative AI training that uses simulation. That's how AlphaFold was done. But for chemical reactions of the sort we'd be interested in here, the ab-initio methods that give correct results are simply too slow to use for this. Not "buy a bigger GPU" slow, but "run the biggest supercomputer for a quintillion years" slow.

So the solution is to do the process manually, running actual experiments to refine the heuristic models at each step, much as machine learning training would do. These are called semi-empirical methods, and they work. But we don't have a good semi-empirical model for the type of reactions concerned here, and it would take years of painstaking lab work to develop, which no amount of superintelligence can avoid. It's magical thinking to assume that an AI will just one-shot this into existence.
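[The calibrate-against-experiment loop described in this comment can be sketched in toy form. Everything below is invented for illustration (the bond lengths, the energy expressions, the 1/r^6 correction term); it only shows the shape of the workflow, fitting a cheap model's correction term to a handful of expensive reference data points, not any real semi-empirical method:]

```python
import numpy as np

# Toy illustration of semi-empirical refinement: a cheap model is
# systematically off, and a correction coefficient is fitted to a few
# (expensive) reference measurements. All numbers are invented.

bond_lengths = np.array([1.2, 1.4, 1.6, 1.8, 2.0])            # angstroms (toy)
cheap_model = 10.0 / bond_lengths**2                           # cheap estimate
experiment = 10.0 / bond_lengths**2 + 3.0 / bond_lengths**6    # "measured" values

# Fit a single 1/r^6 correction coefficient by least squares.
basis = 1.0 / bond_lengths**6
coeff = np.linalg.lstsq(basis[:, None], experiment - cheap_model, rcond=None)[0][0]

print(f"fitted correction coefficient: {coeff:.3f}")  # recovers ~3.0
corrected = cheap_model + coeff / bond_lengths**6
print("max error after correction:", np.abs(corrected - experiment).max())
```

The catch, as the comment says, is that each entry in the "experiment" column costs real lab time; without it, there is nothing to fit against.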
Veedrac · 2mo
Thanks, I appreciate the attempt to clarify. I do though think there's some fundamental disagreement about what we're arguing over here that's making it less productive than it could be. For example, I think both:

1. Lack of human progress doesn't necessarily mean the problem is intrinsically unsolvable by advanced AI. Humans often take a bunch of time before proving things.
2. It seems not at all the case that algorithmic progress isn't happening, so it's hardly a given that we're no closer to a solution unless you first circularly assume that there's no solution to arrive at.

If you're starting out with an argument that we're not there yet, this makes me think more that there's some fundamental disagreement about how we should reason about ASI, more than your belief being backed by a justification that would be convincing to me had only I succeeded at eliciting it. Claiming that a thing is hard is at most a reason not to rule out that it's impossible. It's not a reason on its own to believe that it is impossible.

With regard to complexity,

* I failed to understand the specific difference with protein folding. Protein folding is NP-hard, which is significantly harder than O(n³).
* I failed to find the source for the claim that O(n³) or O(n⁴) are optimal. Actually I'm pretty confused how this is even a likely concept; surely if the O(n³) algorithm is widely useful then the O(n⁴) proof can't be that strong of a bound on practical usefulness? So why is this not true of the O(n³) proof as well?

It's maybe true that protein folding is easier to computationally verify solutions to, but first, can you prove this, and second, on what basis are you claiming that existing knowledge is necessarily insufficient to develop better heuristics than the ones we already have? The claim doesn't seem to complete to me.

Please note that I've not been making the claim that ASI could necessarily solve this problem. I have been making the claim that the arguments in this post d
adastra22 · 2mo
Do you have a maths or computer science background? I ask because some of the questions you ask are typical of maths or CS, but don't really make sense in this context.

Take protein folding. Existing techniques for estimating how a protein folds are exponential in complexity (making the problem NP-hard), so running time is bounded by O(2^n). But what's the 'n'? In these algorithms it is the number of amino acids because that's the level at which they are operating. For quantum chemistry the polynomial running times have 'n' being the number of electrons in the system, which can be orders of magnitude higher. That's why O(n^4) can be much, much worse than O(2^n) for relevant problem sizes--there's no hope of doing full protein folding simulations by ab initio methods.

The second reason they're not comparable is time steps. Ab initio molecular mechanics methods can't have time steps larger than 1 femtosecond. Proteins fold on the timescale of milliseconds. This by itself is what makes all ab initio molecular mechanics methods infeasible for protein folding, even though it's a constant factor ignored by big-O notation.

Staying on the issue of complexity, from a technical / mathematical foundation perspective every quantum chemistry code should also have exponential running time, and this is trivially proved by the fact that every particle technically does interact with every other particle. But all of these interaction terms fall off with at least 1/r^2, and the negative power increases with each corrective term. For 1/r^2 and 1/r^3, there are field density tricks that can be used for accurate sub-exponential modeling. For 1/r^4 or higher it is perfectly acceptable to just have distance cutoffs which cut out the exponential term. How accurate a simulation you will get is a question of how many of these corrective terms you include. The more you include, the more higher-order effects you capture, but you also get hit with higher-order polynomial running times.

To bring a
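[A rough numerical instance of the comparison in this comment. The sizes are illustrative, and the ~65 electrons per average amino-acid residue is my own ballpark assumption:]

```python
# Sanity-checking the scaling comparison above with illustrative sizes:
# a small 30-residue peptide, assuming ~65 electrons per residue.
residues = 30
electrons = residues * 65             # ~2000 electrons

folding_units = 2.0 ** residues       # exponential, but in residue count
ab_initio_units = float(electrons) ** 4  # polynomial, but in electron count

print(f"2^{residues}     ≈ {folding_units:.2e}")    # ~1e9
print(f"{electrons}^4  ≈ {ab_initio_units:.2e}")    # ~1e13

# And each ab initio step covers ~1 fs, while folding takes ~1 ms:
steps_needed = 1e-3 / 1e-15
print(f"1 fs steps per 1 ms event: {steps_needed:.0e}")
```

Even for this small peptide, the polynomial-in-electrons cost is some four orders of magnitude above the exponential-in-residues cost, and that per-configuration cost would then be multiplied by the ~10^12 femtosecond timesteps needed to cover a millisecond folding event.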
Veedrac · 2mo
If you say “Indeed it's provable that you can't have a faster algorithm than those O(n^3) and O(n^4) approximations which cover all relevant edge cases accurately” I am quite likely to go on a digression where I try to figure out what proof you're pointing at and why you think it's a fundamental barrier, and it seems now that per a couple of your comments you don't believe it's a fundamental barrier, but at the same time it doesn't feel like any position has been moved, so I'm left rather foggy about where progress has been made.

I think it's very useful that you say since this seems like a narrower place to scope our conversation. I read this to mean:

1. You don't know of any in principle barrier to solving this problem,
2. You believe the solution is underconstrained by available evidence.

I find the second point hard to believe, and don't really see anywhere you have evidenced it. As a maybe-relevant aside to that, wrt. I think you're talking of ‘mere application of thought’ like it's not the distinguishing feature humanity has. I don't care what's ‘in line with the actual history’ of AI, I care what a literal superintelligence could do, and this includes a bunch of possibilities like:

* Making inhumanly close observation of all existing data,
* Noticing new, inhumanly-complex regularities in said data,
* Proving new simplifying regularities from theory,
* Inventing new algorithms for heuristic simulation,
* Finding restricted domains where easier regularities hold,
* Bifurcating problem space and operating over each plausible set,
* Sending an interesting email to a research lab to get choice high-ROI data.

We can ignore the last one for this conversation. I still don't understand why the others are deemed unreasonable ways of making progress on this task. I appreciated the comments on time complexity but am skipping it because I don't expect at this point that it lies at the crux.
adastra22 · 2mo
By "proof" I meant proof by contradiction. DFT is a great O(n^3) method for energy-minimizing structures and exploring electron band structure, and it is routinely used for exactly that purpose. So much so that many people conflate "DFT" with more accurate ab initio methods, which it is not. However DFT utterly ignores exchange correlation terms and so it doesn't model van der Waals interactions at all. Every design for efficient and performant molecular nanotechnology--the ones that get you order-of-magnitude performance increases and therefore any benefit over existing biology or materials science--involves vdW forces almost exclusively in their manufacture and operation. It's the dominant non-covalent bonded interaction at that scale.

That's the most obvious example, but also a lot of the simulations performed by Merkle and Freitas in their minimal toolset paper give incorrect reaction sequences at these lower levels of theory, as they found out when they got money to attempt it in the lab. Without pointing to their specific failure, you can get a hint of this in surface science. Silicon, gold, and other surfaces tend to have rather interesting surface clustering and reorganization effects, which are observable by scanning probe microscopy. These are NOT predicted by the cheaper / computationally tractable codes, and they are an emergent property of higher-order exchange correlations in the crystal structure. These nevertheless have enough of an effect to drastically reshape the surface of these materials, making calculation of those forces absolutely required for any attempt to build off the surface.

Attempting to do cheaper simulations for diamondoid synthesis reactions gave very precise predictions that didn't work as expected in the lab. How would your superintelligent AI know that uncalculated terms dominate in the simulation, and make corrective factors without having access to those incomputable factors? I think you vastly overestimate how much knowledge
ryan_greenblatt · 2mo
One quick intuition pump: do you think a team of 10,000 of the smartest human engineers and scientists could do this if they had perfect photographic memory, were immortal, and could think for a billion years? To keep the situation analogous to an AI needing to do this quickly, we'll suppose this team of humans is subject to the same restrictions on compute for simulations (e.g. over the entire billion years they only get 10^30 FLOP) and also can only run whatever experiments the AI would get to run (effectively no interesting experiments).

I feel uncertain about whether this team would succeed, but it does seem pretty plausible that they would be able to succeed. Perhaps I think they're 40% likely to succeed? Now, suppose the superintelligence is like this, but even more capable. See also That Alien Message.

Separately, I don't think it's very important to know what an extremely powerful superintelligence could do, because prior to the point where you have an extremely powerful superintelligence, humanity will already be obsoleted by weaker AIs. So, I think Yudkowsky's arguments about nanotech are mostly unimportant for other reasons. But, if you did think "well, sure the AI might be arbitrarily smart, but if we don't give it access to the nukes what can it even do to us?" then I think that there are many sources of concern and nanotech is certainly one of them.
adastra22 · 2mo
By merely thinking about it, and not running any experiments? No, absolutely not. I don't think you understood my post if you assume I'd think otherwise.

Try this: I'm holding a certain number of fingers behind my back. You and a team of 10,000 of the smartest human engineers and scientists have a billion years to decide, without looking, what your guess will be as to how many fingers I'm holding behind my back. But you only get one chance to guess at the end of that billion years. That's a more comparable example.

Please don't use That Alien Message as an intuition pump. There's a tremendous amount wrong with the sci-fi story. Not least of which is that it completely violates the same constraint you put into your own post about constraining computation. I suggest doing your own analysis of how many thought-seconds the AI would have in between frames of video, especially if you assume it to be running as a large inference model.

The best thing you can do is rid yourself of the notion that superhuman AI would have arbitrary capabilities. That is where EY went wrong, and a lot of the LW crowd too. If you permit dividing by zero or multiplying by infinity, then you can easily convince yourself of anything. AI isn't magic, and AGI isn't a free pass to believe anything.

PS: We've had AGI since 2017. That'd better be compatible with your world view if you want accurate predictions.
ryan_greenblatt · 2mo
I don't understand where your confidence is coming from here, but fair enough. It wasn't clear to me if your take was more like "wildly, wildly superintelligent AI will be considerably weaker than a team of humans thinking for a billion years" or more like "literally impossible without either experiments or >>10^30 FLOP".

I generally have an intuition like "it's really, really hard to rule out physically possible things without very strong evidence; by default things have a reasonable chance of being possible (e.g. 50%) when sufficient intelligence is applied if they are physically possible". It seems you don't share this intuition, fair enough.

(I feel like this applies for nearly all human inventions? Like if you had access to a huge amount of video of the world from 1900 and all written books that existed at this point, and had the affordances I described with a team of 10,000 people, 10^30 FLOP, and a billion years, it seems to me like there is a good chance you'd be able to one-shot reinvent ~all inventions of modern humanity (not doing everything in the same way; in many cases you'd massively over-engineer to handle one-shot). Planes seem pretty easy? Rockets seem doable?)

I think this is an ok, but not amazing intuition pump for what wildly, wildly superintelligent AI could be like. I separately think it's not very important to think about the abilities of wildly, wildly superintelligent AI for most purposes (as I noted in my comment). So I agree that imagining arbitrary capabilities is problematic. (For some evidence that this isn't post-hoc justification, see this post on which I'm an author.)

Uhhhh, I'm not sure I agree with this as it doesn't seem like nearly all jobs are easily fully automatable by AI. Perhaps you use a definition of AGI which is much weaker like "able to speak slightly coherent English (GPT-1?) and classify images"?
adastra22 · 2mo
By the way, where's this number coming from? You keep repeating it. That amount of calculation is equivalent to running the largest supercomputer in existence for 30k years. Your hypothetical AI scheming breakout is not going to have access to that much compute. Be reasonable.

Ok, let's try a different tack. You want to come up with a molecular mechanics model that can efficiently predict the outcome of reactions, so that you can go about designing one-shot nanotechnology bootstrapping. What would success look like? You can't actually do a full simulation to get ground truth for training a better molecular mechanics model. So how would you know the model you came up with will work as intended?

You can back-test against published results in the literature, but surprise surprise, a big chunk of scientific papers don't replicate. Shoddy lab technique, publication pressure, and a niche domain combine to create conditions where papers are rushed and sometimes not 100% truthful. Even without deliberate fraud (which also happens), you run into problems such as synthesis steps not working as advertised, images used from different experimental runs than the one described in the paper, etc.

Except you don't know that. You're not allowed to do experiments! Maybe you guess that replication will be an issue, although why that would be a hypothesis in the first place without first seeing failures in the lab isn't clear to me. But let's say you do. Which experiments should you discount? Which should you assume to be correct? If you're allowed to start choosing which reported results you believe and which you don't, you've lost the plot. There could be millions of possible heuristics which partially match the literature and there's no way to tell the difference.

So what would success look like? How would you know you have the right molecular mechanics model that gives accurate predictions? You can't. Not any more than Descartes could think his way to absolute truth.

Also for
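[Checking the 30k-years figure in this comment. The machine speed is an assumption on my part: roughly 10^18 FLOP/s, i.e. an exascale-class supercomputer:]

```python
# Checking the arithmetic on 10^30 FLOP. Assumed machine speed:
# ~1e18 FLOP/s, roughly an exascale supercomputer.
total_flop = 1e30
machine_flops = 1e18                  # FLOP per second (assumption)
seconds = total_flop / machine_flops  # 1e12 seconds
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")          # on the order of 30k years
```

So 10^30 FLOP works out to about 3×10^4 exascale-machine-years, consistent with the "30k years" in the comment.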
ryan_greenblatt (2mo)
FWIW, I do taboo this term and thus didn't use it in this conversation until you introduced it.
adastra22 (2mo)
You highlighted "disagree" on the part about AGI's definition. I don't know how to respond to that directly, so I'll do so here. Here's the story of how the term "AGI" was coined, from the guy who literally published the book on AGI and ran the AGI conference series for the past two decades: https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/ LW seems to have adopted some other vague, ill-defined, threatening meaning for the acronym "AGI" that is never specified. My assumption is that when people say AGI here they mean Bostrom's ASI, and the two got linked because Eliezer believed (and believes still?) that AGI will FOOM into ASI almost immediately, which it has not. Anyway, it irks me that the term has been co-opted here. AGI is a term of art from the pre-ML era of AI research with a clearly defined meaning.
ryan_greenblatt (2mo)
Definition in the OpenAI Charter; a post on the topic by Richard (AGI = beats most human experts).
ryan_greenblatt (2mo)
In case this wasn't clear from the earlier discussion, I disagree with Eliezer on a number of topics, including takeoff speeds. In particular I disagree about the time from AI that is economically transformative to AI that is much, much more powerful. I think you'll probably find it healthier and more productive to not think of LW as an amorphous collective and instead note that there are a variety of different people who post on the forum with a variety of different views. (I have sometimes made this mistake in the past and I find it healthy to clarify at least internally.) E.g. instead of saying "LW has bad views about X" say "a high fraction of people who comment on LW have bad views about X" or "a high fraction of karma votes seem to be from people with bad views about X". Then, you should maybe double-check the extent to which a given claim is actually right : ). For instance, I don't think almost-immediate FOOM is the typical view on LW by most metrics; a somewhat longer takeoff is now the more common view, I think.
ryan_greenblatt (2mo)
Also, I'm going to peace out of this discussion FYI.
ryan_greenblatt (2mo)
Extremely rough and slightly conservative ballpark number for how many FLOP will be used to create powerful AIs, the idea being that this represents roughly how many FLOP could plausibly be available at the time. GPT-4 is ~10^26 FLOP; I expect GPT-7 is maybe 10^30 FLOP. Perhaps this is a bit too much, because the scheming AI will have access to far fewer FLOP than exist at the time, but I don't expect this is cruxy, so I just made a vague guess.
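For intuition on those orders of magnitude (treating the FLOP figures above, plus an assumed ~10^18 FLOP/s exascale machine, as rough inputs), the wall-clock equivalents are easy to compute:

```python
# Wall-clock equivalents of the FLOP budgets above, on an assumed
# ~1 exaFLOP/s (1e18 FLOP/s) supercomputer.
SECONDS_PER_YEAR = 3.15e7
EXAFLOP_RATE = 1e18  # FLOP/s, roughly a top supercomputer (assumed)

for label, flop in [("~GPT-4 (1e26 FLOP)", 1e26),
                    ("hypothetical GPT-7 (1e30 FLOP)", 1e30)]:
    years = flop / (EXAFLOP_RATE * SECONDS_PER_YEAR)
    print(f"{label}: ~{years:,.0f} machine-years")
```

The 10^30 figure works out to roughly 3 × 10^4 machine-years, which matches the "30k years of the largest supercomputer" figure quoted earlier in the thread.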
ryan_greenblatt (2mo)
I wasn't trying to justify anything, just noting my stance.
Veedrac (2mo)
I greatly appreciate the effort in this reply, but I think it's increasingly unclear to me how to make efficient progress on our disagreements, so I'm going to hop.
Max H (7mo)

Evolution managed to spit out some impressively complex technology at the cellular level (e.g. mitochondria), the chemical level (e.g. venom, hormones), and the macro level (e.g. birds) via random iterative mutations of DNA.

Human designers have managed to come up with completely different kinds of complex machinery (internal combustion engines, airplanes, integrated circuits) and chemicals which aren't found anywhere in nature, using intelligent top-down design and industrial processes.

I read "diamondoid bacteria" as synecdoche for the obvious possible synergy between these two design spaces, e.g. modifying natural DNA sequences or writing new ones from scratch using an intelligent design process, resulting in "biological" organisms at points in the design space that evolution could never reach. For example, cells that can make use of chemicals that can (currently) only be synthesized at scale in human-built industrial factories, e.g. diamond or carbon nanotubes.

I think such synergy is pretty likely to allow humans to climb far higher in the tech tree than our current level, with or without the help of AI. And if humans can climb this tech tree at all, then (by definition) human-level AGIs can also climb it, perhaps much more rapidly so.

I'm open to better terminology though, if anyone has suggestions or if there's already something more standard. I think "diamondoid mechanosynthesis" is overly-specific and not really what the term is referring to.

Offtopic: I find it hilarious that Professor Moriarty is telling us about the technology for world domination.

Said Achmiz (7mo)
Pictured: the explanation received by OP.

As a historical note and for further context, the diamondoid scenario is at least ~10 years old, outlined here by Eliezer, just not with the term "diamondoid bacteria":

The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the protein structure prediction problem, emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome which it can use to make second-stage nanotechnology which manufactures third-stage nanotechnology which manufactures diamondoid molecular nanotechnology and then... well, it doesn't really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill.  A superintelligence with molecular nanotech does not wait for you to buy things from it in order for it to acquire money.  It just moves atoms around into whatever molecular struct

... (read more)

This is just a version of "grey goo", a concept which has been around since 1986 and which was discussed here in April

DMS research is fairly dead at the moment

I have learned that there are at least two, maybe three private enterprises pursuing it. The "maybe" is the biggest, Atomic Machines

>For example, in 2003 the Nanoputian project successfully built a nanoscale model of a person out of organic molecules. They used cleverly chosen reaction pathways to produce the upper body, and cleverly chosen reaction pathways to produce the lower body, and then managed to pick the exact right conditions to mix them together in that would bond the two parts together

As a chemist by training, I don't think this is actually that impressive. They basically did a few Sonogashira couplings, which are rather easy reactions (I did them regularly as an undergrad).

If you want something impressive, look at the synthesis of vitamin B12: https://en.wikipedia.org/wiki/Vitamin_B12_total_synthesis

chemslug (7mo)
Another way to think about diamondoids is to consider what kind of organic chemistry you need to put them together the "traditional" way.  That'll give you some insight into the processes you're going to be competing with as you try to assemble these structures, no matter which technique you use.  The syntheses tend to go by rearrangements of other scaffolds that are easier to assemble but somewhat less thermodynamically stable (https://en.wikipedia.org/wiki/Diamantane#Production for example).  However, this technique gets arduous beyond 4 or 5 adamantane units: https://en.wikipedia.org/wiki/Diamondoid Agreed that the Nanoputians aren't impressive.  Lots of drugs are comparably complex, and they're actually designed to elicit a biological effect.   The B12 synthesis is sweet, but I'll put in a vote for the Woodward synthesis of strychnine (done using 1954 technology, no less!): https://en.wikipedia.org/wiki/Strychnine_total_synthesis#Woodward_synthesis
Metacelsus (7mo)
Yeah, Woodward was a real trailblazer (interestingly, my undergrad PI was one of his last students)

It's good to hear from an actual expert on this subject. I've also been quite skeptical of the diamondoid nanobot apocalypse on feasibility grounds (though I am still generally worried about AI, this specific foom scenario seems very implausible to me).

Maybe you could also help answer some other questions I have about the scalability of nanomanufacturing.  Specifically, wouldn't the energy involved in assembling nanostructures be much much greater than snapping together ready made proteins/nucleic acids to build proteins/cells?  I am not convince... (read more)

PeterMcCluskey (7mo)
Freitas' paper on ecophagy has a good analysis of these issues.
Mikola Lysenko (7mo)
That's a great link, thanks! Though it doesn't really address the point I made, they do briefly mention it: > Interestingly, diamond has the highest known oxidative chemical storage density because it has the highest atom number (and bond) density per unit volume. Organic materials store less energy per unit volume, from ~3 times less than diamond for cholesterol, to ~5 times less for vegetable protein, to ~10–12 times less for amino acids and wood ... >  > Since replibots must build energy-rich product structures (e.g. diamondoid) by consuming relatively energy-poor feedstock structures (e.g., biomass), it may not be possible for biosphere conversion to proceed entirely to completion (e.g., all carbon atoms incorporated into nanorobots) using chemical energy alone, even taking into account the possible energy value of the decarbonified sludge byproduct, though such unused carbon may enter the atmosphere as CO2 and will still be lost to the biosphere. Unfortunately they never follow up on this with the rest of their calculations, and instead base their estimate for replication times on how long it takes the nanobots to eat up all the available atoms.  However, in my estimation the bottleneck on nanobot replication is not getting materials, but probably storing up enough joules to overcome the Gibbs free energy of assembling another diamondoid nanobot from spare parts.  I would love to have a better picture of this estimate, since it seems like the determining factor in whether this stuff can actually proceed exponentially or not.
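For a rough sense of the energy bottleneck being described, here is a back-of-envelope comparison (every input is an assumption for illustration, not a number from Freitas): the free energy needed to fix the carbon for one bacterium-sized bot from CO2, versus the solar power its own cross-section collects.

```python
# Back-of-envelope: energy cost vs. solar income for one replication.
# All inputs are illustrative assumptions, not measured values.
DG_CO2_TO_C = 3.94e5      # J/mol: approx. free energy to pull C out of CO2 (reverse of combustion)
BOT_MASS = 1e-15          # kg, roughly a small bacterium, assumed pure carbon
SOLAR_FLUX = 1000.0       # W/m^2 at the surface
CROSS_SECTION = 1e-12     # m^2 (~1 um x 1 um)
EFFICIENCY = 0.05         # assumed light-to-chemistry conversion efficiency

mol_carbon = BOT_MASS / 0.012                 # mol of C per bot
e_build = mol_carbon * DG_CO2_TO_C            # J to fix that much carbon from CO2
power_in = SOLAR_FLUX * CROSS_SECTION * EFFICIENCY
doubling_time_s = e_build / power_in
print(f"energy per bot: {e_build:.1e} J, energy-limited doubling: {doubling_time_s/60:.0f} min")
```

On these made-up numbers the energy floor per replication is minutes, not years, so whether energy is actually rate-limiting hinges on the achievable collection area, conversion efficiency, and feedstock (biomass is far less energy-poor than CO2).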
MichaelStJules (7mo)
Is 1% of the atmosphere way more than necessary to kill everything near the surface by attacking it?
avturchin (7mo)
Based on your numbers, it would require around 10^26 J to convert all CO2, and this would take 10^9 seconds = 30 years.  I like your argument anyway, as it makes clear that a quick solar-powered diamond apocalypse is infeasible. But if the AI kills people first and moves its computations into diamonds, it will have plenty of time.
dr_s (7mo)
Also unsure why you would go for CO2 in the atmosphere as a source of carbon rather than more low entropy, easily harvested ones (like fossil fuels, plastics, or, well, biomass).
MichaelStJules (7mo)
Eliezer's scenario uses atmospheric CHON. Also, I guess Eliezer used atmospheric CHON to allow the nanomachines to spread much more freely and aggressively.

No, but . . . you don't need "diamondoid" technology to make nano-replicators that kill everything. Highly engineered bacteria could do the trick.

1a3orn (7mo)

I think it's good epistemic hygiene to notice when the mechanism underlying a high-level claim switches because the initially-proposed mechanism for the high-level claim turns out to be infeasible, and downgrade the credence you accord the high level claim at least somewhat. Particularly when the former mechanism has been proposed many times.

Alice: This ship is going to sink. I've looked at the boilers, they're going to explode!

Alice: [Repeats claim ten times]

Bob: Yo, I'm an expert in thermodynamics and steel, the boilers are fine for X, Y, Z reason.

Alice: Oh. Well, the ship is still going to sink, it's going to hit a sandbar.

Alice could still be right! But you should try to notice the shift and adjust credence downwards by some amount. Particularly if Alice is the founder of a group talking about why the ship is going to sink.

The original theory is sabotage, not specifically boiler explosion. People keep saying "How could you possibly sabotage a ship?", and a boiler explosion is one possible answer, but it's not the reason the ship was predicted to sink. Boiler explosion theory and sabotage theory both predict sinking, but it's a false superficial agreement, these theories are moved by different arguments.

1a3orn (7mo)

If someone had said "Yo, this one lonely saboteur is going to sink the ship" and consistently responded to requests for how by saying "By exploding the boiler" -- then finding out that it was infeasible for a lone saboteur to sink the ship by exploding the boiler would again be some level of evidence against danger of the lone saboteur, so I don't see how that changes it?

Or maybe I'm misunderstanding you.

To make the analogy more concrete, suppose that Alice posts a 43-point thesis on MacGyver Ruin: A List Of Lethalities, similar to AGI Ruin, that explains that MacGyver is planning to sink our ship and this is likely to lead to the ship sinking. In point 2 of 43, Alice claims that:

MacGyver will not find it difficult to bootstrap to overpowering capabilities independent of our infrastructure. The concrete example I usually use here is exploding the boilers, because there's been pretty detailed analysis of how what definitely look like physically attainable lower bounds on what should be possible with exploding the boilers, and those lower bounds are sufficient to carry the point. My lower-bound model of "how MacGyver would sink the ship, if he didn't want to not do that" is that he gets access to the boilers, reverses the polarity of the induction coils, overloads the thermostat, and then the boilers blow up.

(Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even MacGyver could gain access to the boilers, if he didn't already have a gun?" but one hears less of this after the advent of MacGyver: Lost Treasure of Atlantis, fo

... (read more)
1a3orn (7mo)

It doesn't measurably update my probability of the ship sinking

When you say it doesn't update "measurably," do you mean that it doesn't update at all, or that it doesn't update much? I'm not saying you should update much. I'm just saying you should update some. Like, I'm nodding along at your example, but my conclusion is instead simply the opposite.

Like suppose we've been worried about the imminent unaligned MacGyver threat. Some people say there's no way he can sink the ship; other people say he can. So the people who say he can confer and try to offer 10 different plausible ways he could sink the ship.

If we found out all ten didn't work, then -- considering that these examples were selected for being the clearest ways he can destroy this ship -- it's hard for me to think this shouldn't move you down at all. And so presumably finding out that just one didn't work should move you down by some lesser amount, if finding out 10 didn't work would also do so.

Imagine a counterfactual world where people had asked, "how can he sink the ship?" and people had responded "You don't need to know how, that was just a concrete example, concrete examples are irrelevant to the principle which is simply that... (read more)

Shankar Sivarajan (7mo)
I think the chess analogy is better: if I predict that, from some specific position, MacGyver will play some sequence of ten moves that will leave him winning, and then try to demonstrate that by playing from that position and losing, would you update at all?
Martin Randall (7mo)
I meant "measurably" in a literal sense: nobody can measure the change in my probability estimate, including myself. If my reported probability of MacGyver Ruin after reading Alice's post was 56.4%, after reading Bob's post it remains 56.4%. The size of a measurable update will vary based on the hypothetical, but it sounds like we don't have a detailed model that we trust, so a measurable update would need to be at least 0.1%, possibly larger. You're saying I should update "some" and "somewhat". How much do you mean by that?
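The size of a "measurable" update here is computable if we grant some likelihoods (both likelihood values below are invented for illustration). Starting from the 56.4% figure above:

```python
# Toy Bayesian update for "ship sinks" after one proposed mechanism is debunked.
# The two likelihoods are invented for illustration.
prior = 0.564                 # reported prior from the comment above
p_debunk_given_sink = 0.5     # a proffered mechanism fails even in sinking-worlds half the time
p_debunk_given_safe = 0.7     # mechanisms proposed for a safe ship fail somewhat more often

num = prior * p_debunk_given_sink
posterior = num / (num + (1 - prior) * p_debunk_given_safe)
print(f"posterior: {posterior:.3f} (shift of {prior - posterior:.3f})")
```

With these invented numbers the debunking is worth about 8 percentage points; the real disagreement is over the likelihood ratio, i.e. how much more often mechanisms get debunked in worlds where the ship is actually safe.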
MichaelStJules (7mo)
This is the more interesting and important claim to check to me. I think the barriers to engineering bacteria are much lower, but it’s not obvious that this will avoid detection and humans responding to the threat, or that timing and/or triggers in bacteria can be reliable enough.
Metacelsus (7mo)
Unfortunately, explaining exactly what kind of engineered bacteria could be dangerous is a rather serious infohazard.
dr_s (7mo)
We do at least have one example of something like this happening already from natural causes: the Great Oxygenation Event. How long did that take? Had we been anaerobic organisms at the time, could we have stopped it?
Shankar Sivarajan (7mo)
Don't worry, I know of a way to stop any engineered bacteria before they can do any harm. No, I'm not going to tell you what it is. Infohazard.
MichaelStJules (7mo)
Possibly, but by limiting access to the arguments, you also limit the public case for it and engagement by skeptics. The views within the area will also probably further reflect self-selection for credulousness and deference over skepticism. There must be less infohazardous arguments we can engage with. Or, maybe zero-knowledge proofs are somehow applicable. Or, we can select a mutually trusted skeptic (or set of skeptics) with relevant expertise to engage privately. Or, legally binding contracts to prevent sharing.

High quality quantum chemistry simulations can take days or weeks to run, even on supercomputing clusters.

This doesn't seem very long for an AGI if they're patient and can do this undetected. Even months could be tolerable? And if the AGI keeps up with other AGI by self-improving to avoid being replaced, maybe even years. However, at years, there could be a race between the AGIs to take over, and we could see a bunch of them make attempts that are unlikely to succeed.

jacopo (7mo)
That's one simulation though. If you have to screen hundreds of candidate structures, and simulate every step of the process because you cannot run experiments, it becomes years of supercomputer time.
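The multiplication is stark even with optimistic, assumed numbers:

```python
# Screening cost, with illustrative (assumed) numbers.
candidates = 200          # structures to screen
steps_per_candidate = 20  # reaction steps, each needing its own simulation
days_per_simulation = 4   # optimistic for a high-accuracy run on a large cluster

total_days = candidates * steps_per_candidate * days_per_simulation
print(f"~{total_days/365:.0f} cluster-years of serial compute")
```

Independent candidates parallelize, but the total compute bill is unchanged.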
jacopo (7mo)
Although they never take the whole supercomputer, so if you have the whole supercomputer for yourself and the calculations do not depend on each other you can run many in parallel
adastra22 (2mo)
They do in fact routinely take up the entire supercomputer. I don't think you are comprehending the absolute mind-bogglingly huge scale of compute required to get realistic simulations of mechanosynthesis reactions. It's utterly infeasible without radical advances beyond the present state of the art classical compute capabilities.
jacopo (2mo)
My job was doing quantum chemistry simulations for a few years, so I think I can comprehend the scale actually. I had access to one of the top-50 supercomputers and codes just do not scale to that number of processors for one simulation independently of system size (even if they had let me launch a job that big, which was not possible)
adastra22 (2mo)
Yeah, scaling problems alone are an issue. These are not embarrassingly parallel problems that can be easily sharded onto different CPUs. A lot of interconnect is needed, and as you scale up that becomes the limiting factor. What codes did you use? The issue here is that DFT gives bad results because it doesn't model the exchange-correlation energy functional very well, which ends up being load-bearing for diamondoid construction reaction pathways performed in UHV using voltage-driven scanning probe microscopy to perform mechanochemistry. You end up having to do some form of perturbation theory, of which there are many variants all giving significantly different results, yet all so slow that running ab initio molecular dynamics simulations is utterly out of the question; you end up spending days of supercomputer time just to evaluate energy levels of a static system and then use your intuition to guess whether the reaction would proceed. Moving to an even more accurate level of theory would indisputably give good results for these systems, but prohibitive scaling [O(n^5) to O(n^7)] prevents its use for anything other than toy examples. I am at a molecular nanotechnology startup trying to bootstrap diamondoid mechanosynthesis build tools, so these sorts of concerns are my job as well.
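To make the scaling exponents concrete, here is a toy extrapolation (the 50-atom, 1-hour reference timing is assumed purely for illustration):

```python
# Cost extrapolation for quantum chemistry methods of different formal scaling.
# Reference timing is an assumption: a 50-atom system taking 1 hour at each level.
ref_atoms, ref_hours = 50, 1.0

def hours(n_atoms, exponent):
    return ref_hours * (n_atoms / ref_atoms) ** exponent

for n in (100, 500, 2000):
    dft = hours(n, 3)        # DFT: roughly O(N^3)
    high = hours(n, 7)       # CCSD(T)-like: O(N^7)
    print(f"{n:5d} atoms: O(N^3) ~{dft:,.0f} h, O(N^7) ~{high:,.2e} h (~{high/8766:,.1e} years)")
```

At O(N^7), a 10x larger system costs 10^7 times more, which is why such methods stay confined to toy systems regardless of hardware generation.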
jacopo (2mo)
Ahh, for MD I mostly used DFT with VASP or CP2K, but then I was not working on the same problems. For thorny issues (biggish systems where plain DFT fails, but no MD) I had good results using hybrid functionals and tuning the parameters to match some result of higher-level methods. Did you try meta-GGAs like SCAN? Sometimes they are surprisingly decent where PBE fails catastrophically...
adastra22 (2mo)
For the most part we're avoiding/designing around compute constraints: build up our experimental validation and characterization capabilities so that we can "see" what happens in the mechsyn lab ex post facto. Design reaction sequences with large thermodynamic gradients, program the probe to avoid side-reaction configurations as much as possible, and then characterize the result to see if it was what you were hoping for. Use the lab as a substitute for simulation. It honestly feels like we've got a better chance of just trying to build the thing, repeating what works and modifying what doesn't, than even a theoretically optimal machine learning algorithm would have using simulation-first design. Our competition went with the AI/ML approach and it killed them. That's part of why the whole AGI-will-design-nanotech thing bugs me so much. An immense amount of money and time has been wasted on that approach, which if better invested could have gotten us working nano-machinery by now. It really is an idea that eats smart people.
jacopo (2mo)
You could also try to fit an ML potential to some expensive method, but it's very easy to produce very wrong things if you don't know what you're doing (I wouldn't be able to, for one).
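That failure mode is easy to demonstrate on a toy 1D problem: below, a kernel ridge fit to a Lennard-Jones curve (standing in for "some expensive method") interpolates well inside its training window and is wildly wrong just outside it. All parameters are invented for illustration.

```python
import numpy as np

# Toy "ML potential": kernel ridge regression fit to a Lennard-Jones curve,
# standing in for fitting a cheap model to an expensive reference method.
def lj(r, eps=1.0, sigma=1.0):            # the "expensive" reference method
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_train = np.linspace(1.0, 2.5, 40)       # training data only covers this window
y_train = lj(r_train)

def krr_predict(r_query, gamma=50.0, lam=1e-6):
    # Gaussian-kernel ridge regression fit to the training window
    K = np.exp(-gamma * (r_train[:, None] - r_train[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(r_train)), y_train)
    k = np.exp(-gamma * (r_query[:, None] - r_train[None, :]) ** 2)
    return k @ alpha

err_in = abs(krr_predict(np.array([1.3]))[0] - lj(1.3))    # interpolation: fine
err_out = abs(krr_predict(np.array([0.85]))[0] - lj(0.85)) # repulsive wall: disaster
print(f"error inside training window: {err_in:.2e}, outside: {err_out:.2e}")
```

Real ML potentials fail the same way when a trajectory visits configurations absent from the training data.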
adastra22 (2mo)
It's coming along quite fast. Here's the latest on ML-trained molecular dynamics force fields that (supposedly) approach ab initio quality: https://www.nature.com/articles/s41524-024-01205-w These are potentially tremendously helpful. But in the context of AI x-risk it's still not enough to be concerning. A force field that gives accurate results 90% of the time would tremendously accelerate experimental efforts. But it wouldn't be reliable enough to one-shot nanotech as part of a deceptive turn.
MichaelStJules (7mo)
Also, maybe we design scalable and efficient quantum computers with AI first, and an AGI uses those to simulate quantum chemistry more efficiently, e.g. Lloyd, 1996 and Zalka, 1996. But large quantum computers may still not be easily accessible. Hard to say.
dr_s (7mo)

I think you're somewhat downplaying the major impacts even just human-level (say, as good as a talented PhD student) AGI could have. The key difference is just by how much the ceiling on specialised intellectual labour would be lifted. Anything theoretical or computational could have almost infinite labour thrown at it; you could try more and risk more. And I'd be really surprised if you couldn't, for example, achieve decent DFT, or at least a good fast XC functional, using a diffusion model or such, given enough attempts. AGI coupled with robotic chemical s... (read more)

Very interesting. A few comments.

I think you mentioned something like this, but Drexler expected a first generation of nanotechnology based on engineered enzymes. For example, in "Engines of Creation", he imagines using enzymes to synthesize airplane parts. Of course the real use of enzymes is much more restricted: cleaning products such as dishwasher detergent, additives in food, pharmaceutical synthesis. It has always seemed to me that someone who really believed Drexler and wanted to bring his imagined future about would actually not be working on anyth... (read more)

pmcarlton (7mo)
Yes, this exactly. I can't envision what kind of informationally-sensitive chemistry is supposed to happen at standard temperature and pressure in an aqueous environment, using "diamondoid". Proteins are so capable precisely because they are free to jiggle around, assume different configurations and charge states, etc. Without a huge amount of further clarification, I think this "nanotech doom" idea has to go. (And I'm not aware of any other instant, undetectable AI takeover scheme suggestions that don't rely on new physics.)


I found this very interesting, and I appreciated the way you approached this in a spirit of curiosity, given the way the topic has become polarized. I firmly believe that, if you want any hope of predicting the future, you must at minimum do your best to understand the present and past.

It was particularly interesting to learn that the idea has been attempted experimentally.

One puzzling point I've seen made (though I forget where) about self-replicating nanobots: if it's possible to make nano-sized self-replicating machines, wouldn't it be easier to create larger-sized self-replicating machines first? Is there a reason that making them smaller would make the design problem easier instead of harder?

It seems very weird and unlikely to me that the system would go to the higher energy state 100% of the time

I think vibrational energy is neglected in the first paper; it would implicitly be accounted for in AIMD. Also, the higher energy state could be the lower free energy state: if the difference is big enough, it could go there nearly 100% of the time.
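That last point is just ΔG = ΔH − TΔS. With invented but plausible numbers, a state that is higher in energy can be occupied almost exclusively at equilibrium:

```python
import math

# Illustrative (invented) numbers: the product state is HIGHER in enthalpy
# but has much higher entropy, so it is lower in free energy at 300 K.
R = 8.314      # J/(mol K)
T = 300.0      # K
dH = +10e3     # J/mol: product is 10 kJ/mol higher in energy
dS = +100.0    # J/(mol K): but entropically favored

dG = dH - T * dS                       # = -20 kJ/mol
ratio = math.exp(-dG / (R * T))        # equilibrium population ratio product/reactant
frac = ratio / (1 + ratio)
print(f"dG = {dG/1e3:+.0f} kJ/mol -> {frac:.2%} in the higher-energy state at equilibrium")
```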

For significant speedup of computations, super-advanced AI needs new computational medium, and nanotech could be such medium. 

But this creates a chicken-and-egg problem: to invent nanotech, the AI has to be able to perform significantly more computations than are available now. But it can't do this without nanotech.

This creates an obstacle to the idea that the first AI will be able to rush to create nanotech.

MichaelStJules (7mo)
Are you thinking quantum computers specifically? IIRC, quantum computers can simulate quantum phenomena much more efficiently at scale than classical computers. EDIT: For early proofs of efficient quantum simulation with quantum computers, see: 1. Lloyd, 1996 https://fab.cba.mit.edu/classes/862.22/notes/computation/Lloyd-1996.pdf 2. Zalka, 1996 https://arxiv.org/abs/quant-ph/9603026v2
avturchin (7mo)
I didn't think about QC. But the idea still holds: if a runaway AI needs to hack or build advanced QC to solve the diamondoid problem, that will make it more vulnerable and observable.