I don't know much about nanotech/biotech, but what little I know suggests that this will be the earliest failure point where AI can cause doom for humans. Because of this, I thought I should start learning more about nanotech/biotech, and that asking LessWrong for direction might be a good place to start.

My heuristic for why nanotech/biotech is critical, and for why I am lumping them together:

  • AI doom due to misalignment is arguably about the "attack surface"; that is, if there is a high-dimensional way to have big effects on humans, then various standard arguments about the importance of alignment will apply. This seems to be the case for nanotech/biotech, in the sense that there are many kinds of germs, nanobots, chemicals, and so on that one could imagine being made given the right technological development, and these would exist in ~the same environment as people (e.g. they could get released into the atmosphere). Human civilization essentially assumes that people can safely breathe the atmosphere, but that assumption could be broken by nanotech/biotech.
  • Nanotech/biotech is not the only thing with an exponentially high-dimensional attack surface; there are also social networks, computer networks, and probably more than that. However, nanotech/biotech seems to have the "advantage" of being small-scale; it can equilibrate in milliseconds to minutes, and can exist in cubic millimeters to cubic meters, which makes it much more feasible to model and collect data on than grand societal things. This suggests that you would not need an especially advanced AI to amplify nanotech/biotech. It doesn't even need to be a general intelligence; it just needs to come up with more powerful ways of doing nanotech/biotech. So AI-powered nanotech/biotech seems likely to arrive years if not decades before AGI. (Similar to how people see GPT-3 as the precursor to Prosaic AGI, think of AlphaFold 2 as the precursor to AI-powered biotech.)
  • Incidentally, this also makes it much harder to align. A common counterargument to AI x-risk is "wouldn't a superintelligence understand that we don't want it to do bad stuff?", after which a common counterreply is "yeah, but we need some way of specifying that it should care about what we want". However, an AI that doesn't understand large-scale things such as humans and our wants might still understand small-scale stuff like nanotech/biotech. It might literally destroy humanity not because it doesn't realize that it should care about us, but instead because it doesn't even realize we exist.

Now, this was all a thought I came up with yesterday based on very little knowledge about nanotech/biotech, so this might be totally wrong and naive. But it seems very different from the common AI risk models, so I thought it would be strategically super important to consider if it's true.

Hard nanotech (the kind usually envisioned in sci-fi) may be physically impossible, and at the very least is extremely difficult. The types of nanotech that are more feasible are 1) top-down lithography (i.e. chips) and 2) bottom-up cellular biology, or some combination thereof.

Biological cells are already near-optimal nanotech robots in both practical storage density and computational energy efficiency (the Landauer limit). Even a superintelligence, no matter how clever, will not be able to design nanobots that are vastly more generally capable than biological cells. Robots are fundamentally limited by energy efficiency and storage density, and biology is already operating at the physical limits for those key constraints. So plausible bottom-up nanotech just looks like more 'boring' advanced biotech.
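
As a rough illustration of the energy-efficiency constraint here: the Landauer limit is k_B·T·ln 2 of dissipation per bit erased. A back-of-the-envelope sketch (the ATP comparison is an approximate figure) of what that floor comes out to at body temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact by the 2019 SI definition)

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy, in joules, dissipated to erase one bit of information."""
    return K_B * temperature_kelvin * math.log(2)

if __name__ == "__main__":
    e_bit = landauer_limit_joules(310.0)  # roughly body temperature (37 C)
    print(f"Landauer limit at 310 K: {e_bit:.2e} J per bit erased")
    # Prints roughly 3.0e-21 J. For comparison, hydrolyzing one ATP molecule
    # releases on the order of 5e-20 J, i.e. within an order of magnitude or
    # two of the bound, which is the sense in which cellular computation is
    # described as operating near the physical limit.
```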

It would make evolutionary sense for current cells to be near optimal, and therefore for there to not be much opportunity for biotech/nanotech to do big powerful stuff. However, I notice that this leaves me confused about two things.

First, common rhetoric in the rationalist community is that this is a big risk. E.g. Robin Hanson advocated banning mirror cells, and I regularly hear people suggest working on preventing pandemics as an x-risk, or talk about how gain-of-function research is dangerous.

Secondly, there's the personal experience that we just had the COVID-19 pandemic.

Mirror cells and novel viruses are well within 'boring' advanced biotech, which can be quite dangerous. My argument about implausibility was directed at sci-fi hard nanotech, like grey goo. The risk that an unaligned AGI could kill or weaken humanity through advanced biotech does seem plausible, but I don't think it is the most likely outcome of unaligned AGI; more likely it would just soft-takeover the world without killing us. If it did kill humanity, that would come later, but it probably wouldn't need to.
As an aside, mirror cells aren't actually a problem: non-mirror digestive systems and immune systems can break them down, albeit with less efficiency. Church's early speculation that these cells would not be digestible by non-mirror life forms doesn't actually hold up, per several molecular biologists I have spoken to since then.
My answer fundamentally comes down to what I think is Jacob Cannell's world model for how AGI is going to go:

  • It's built solely or mostly out of classical computing, so no exotica is there. This also disallows practical superconductors.
  • Thus, the Landauer limit bounds it, including its bio/nano capabilities, which are already very close to what biology has.
  • Thus, no major improvements can be made.

I largely agree with this as an average or modal path; I'd assign 95-99% credence to it playing out for the first AGI. My major reservation with his model of nanotechnology is that I would be much more careful in assuming this means AGI won't be able to do powerful nanotech, primarily because of the tail risk posed by exotic computers, and I would place a much lower credence on the statement "Even a superintelligence, no matter how clever, will not be able to design nanobots that are vastly more generally capable than biological cells", more in the realm of 20% than his seeming certainty. Don't get me wrong, it still matters, since it points out limitations that are relevant to the discussion, and even if the assumptions end up wrong, it gives a way to control the capabilities of AGI while we're aligning it.
I guess maybe I'm misunderstanding/overreading things. I read jacob_cannell as implying that the biotech/nanotech route would not be more dangerous than natural biology, but maybe the point was just to provide a tangentially relevant piece of information without commenting on the relative danger?
I guess my point is that it doesn't seem like you can get from "any biotech/nanotech built via support from an AI will still be bounded by the Landauer limit" to "no biotech/nanotech built via support from an AI will lead to something like a worldwide pandemic or atmospheric change that kills everyone". I might be missing something, but the Landauer limit doesn't seem that relevant to killing everyone. It's a limit on computation, but I'm not suggesting the biotech/nanotech is computing some dangerous function; I'm suggesting it might produce something dangerous, such as more copies of itself. Of course, due to evolution, it seems like in theory there should be "efficient markets" in producing copies of oneself, so maybe there is a blocker there. But as I said, that blocker doesn't really seem to hold, because we just had a pandemic.
Yeah, basically this - there already is an efficient market for nanotech replicators. The most recent pandemic was only a minor blip in the grand scheme of things; it would take far more to kill humanity or seriously derail progress, and an unaligned AGI would not want to do that anyway versus just a soft covert takeover.