TL;DR: Nanotech seems to be the only way for an AI to kill everybody; but we can imagine a prosaic AI killing everyone in three steps: taking over a country, building robotic infrastructure, and releasing biological weapons.

Arguments for why nanotech is needed for an AI-kills-everyone scenario

The concept of AI as an existential threat, as portrayed by EY, largely hinges on the development of manufacturing nanotechnology, which entails the creation of microscopic self-replicating robots capable of manipulating matter with atomic precision – enabling, for example, the "copying of a berry." In this view, a superintelligent AI must first establish a nanotech infrastructure in order to conquer the world and eliminate all humans, potentially by ordering protein-coding DNA sequences, if it can calculate the correct sequences.

Criteria for the robotic infrastructure

A nanotech infrastructure is essential for a complete AI-driven annihilation due to the following factors:

a) Concealment: AI's actions will remain undetected until the final strike.

b) Rapid elimination: AI will eradicate all humans instantaneously, preventing any interference.

c) Human-independence: the infrastructure can operate without human involvement.

d) Swift construction: Nanotech can be developed rapidly, taking only a few weeks to synthesize the required DNA strands and a few days for the initial nanobot replication (a toy doubling-time sketch follows this list). This grants a significant strategic advantage to the first AGI.

e) Route to superintelligence: Nanotech provides superior computational power compared to chips, enabling the first AI to rapidly ascend to an unparalleled superintelligence level. However, it is assumed that the first AI is already mildly superintelligent, as it can design the initial nanobots.
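As a purely illustrative check on point (d): if self-replication works at all, the timescale is dominated by the doubling time rather than the target mass. The seed mass and the one-hour doubling time below are my assumptions for this sketch, not figures from the post.

```python
# Toy exponential-replication model. All numbers are illustrative assumptions
# (a femtogram seed nanobot, a roughly bacterial one-hour doubling time),
# not figures claimed in the post.
import math

seed_mass_kg = 1e-18       # assumed mass of one seed nanobot (~1 femtogram)
doubling_time_hours = 1.0  # assumed doubling time, comparable to fast bacteria

for target_kg, label in [(1e-3, "1 gram"), (1.0, "1 kilogram"), (1e9, "1 million tonnes")]:
    doublings = math.log2(target_kg / seed_mass_kg)
    days = doublings * doubling_time_hours / 24
    print(f"{label}: ~{doublings:.0f} doublings, ~{days:.1f} days")
```

Under these assumptions, even a million tonnes of replicators takes only about 90 doublings (under four days), which is why the "few days" claim stands or falls with the feasibility of fast self-replication itself.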

 

These factors also represent potential vulnerabilities that our last line of AI safety defense could target. For instance, shutting down DNA synthesis labs or developing our own nanotechnology and detection methods could reduce the efficacy of an AI-nanotech assault.

Few alternative AI-infrastructure ideas possess all these characteristics; the exceptions are perhaps:

a) A scenario where AI takeover occurs in a fully robotized world, with every household owning a home robot;

b) A form of biotechnology where AI can program biological organisms to execute tasks. However, this is a variation of nanotechnology, and AI computations cannot migrate into biological substrates.

Why these criteria?

The necessity for nanotech infrastructure in AI-kills-all situations arises from several factors:

If AI constructs a "conventional" robotic infrastructure, it will be visible and attacked before completion, increasing risks for the AI.

If AI cannot replace all humans, it remains vulnerable, as it requires a constant electricity supply, unlike humans. Destroying the electrical grid exposes AI to danger.

If AI cannot eradicate everyone instantaneously, humans will have time to retaliate.

If AI does not migrate to a nanotech-based computational substrate operating on independent energy sources, it remains dependent on a few data centers which are susceptible to airstrikes, sabotage, kill-switch codes, and power outages.

If AI does not gain computational and data advantages from nanotechnology, other AIs will soon achieve similar intelligence levels.

However, many people assign low prior probabilities to both nanotechnology and superintelligence, and their combination yields an even lower estimate, explaining much of the skepticism surrounding AI risk.

Therefore, it is reasonable to investigate catastrophic scenarios that do not rely on ideas with low prior probabilities.

No-miracle scenario where AI kills everybody

Imagine a possible world where AI with an IQ above 1000 is impossible and nanotech doesn't work: no superintelligence and no nanobots. Such a world-model is much easier to accept by default; it is a rather normal world-model for most people.

Does AI pose an existential risk in that world? Here I suggest a three-step AI-kills-everybody scenario (without asking why the AI would want to do this):

 

  1. AI takes power in one country via some social manipulation technology and cooperation with humans.
  2. AI produces a large robotic infrastructure in that country.
  3. AI uses biological weapons, nukes and robotic swarms to destroy other countries and kill most humans.

 

Now we will explore each point in detail.

Taking over a country

1. There could be many ways in which AI can take power in one country. It may act as an advisor to a human puppet who wins elections. It could be a system of electronic government. It could be a staged military coup. Note that the Skynet (or Colossus) scenario, where AI is placed in control of nuclear weapons, is unlikely, as nuclear forces are very conservative in a good sense and very risk-averse: they will be afraid of a hack. But after taking over the country, the AI will gain access to its nuclear weapons.

A question arises: if AI can take one country, why not take the whole world using the same methods? But that could be more difficult, as different countries have different governance systems. For example, the secret-advisor scenario fits North Korea, while a democratic puppet fits the US, and it is difficult to synchronize such operations across different countries.

Obviously, nobody will be happy with an AI takeover anywhere, so it should be kept secret, and the AI will rule via human puppets.

Also, taking over a country creates a natural pause in the AI's plan, so other AIs may use that time to take other countries, and we will end up with a multipolar – or perhaps bipolar – AI world.

Building robotic infrastructure

2. Building robotic infrastructure should be quick but initially invisible: this is when the AI-controlled country is most vulnerable to external strikes. Also, you may not like this, but the robotic infrastructure will probably include humanoid robots – not as weapons, however, but as operators of human machinery. Such infrastructure has to be built recursively: robots build robots. Chip fabs are the main difficulty. The whole process may take around one year.

There will be a drive towards miniaturization, self-replication and energetic autonomy, as in Lem's The Invincible, limited only by the assumed impossibility of nanotech. There will also be a tendency to build the infrastructure underground for secrecy and security – which rules out sunlight as an energy source.

The robotic infrastructure will need to grow to a size comparable to the whole world economy. It also has to include many autonomous robots and secret data centers which can survive any retaliation. Hiding it could be difficult.
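A minimal back-of-envelope sketch of how the "around one year" estimate and the world-economy-scale target above could fit together, assuming robots-building-robots behaves like a simple doubling process. The 0.01% starting share and the one-month doubling interval are my assumptions for illustration, not claims from the post.

```python
# Back-of-envelope sketch of a robots-building-robots economy.
# The 0.01% starting share and one-month doubling time are illustrative assumptions.
import math

seed_share = 1e-4            # assumed initial robot economy: 0.01% of the world economy
doubling_time_months = 1.0   # assumed doubling interval of the self-replicating stock

doublings = math.log2(1.0 / seed_share)   # doublings needed to reach world-economy scale
months = doublings * doubling_time_months
print(f"~{doublings:.0f} doublings, roughly {months:.0f} months")
```

This gives roughly 13 doublings, i.e. about 13 months – so the one-year figure implicitly assumes something like monthly doubling from a small seed, with chip fabs as the obvious bottleneck.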

Secret hacks and other operations abroad may be used to slow down other AIs and prevent the premature discovery of such a secret economy. Grabbing power in other countries may be more productive than fighting a war with them later.

Killing everybody

3. The attack against all humans may combine many different genetically modified biological agents with drone delivery systems, a nuclear strike on hardened bunkers and on other countries' nuclear weapons, and drone swarms to ensure that everyone is killed.

 

Killing all humans without nanotech would require a combination of three things:

A. Biological weapons, which include:

- Artificial multipandemic. Use of many different viruses simultaneously. 

- Genetic modification of bird flu, smallpox, covid etc. 

- Simultaneous attacks in many places all over the world by drones, so closing borders will not help.

Note that humans could do this too, but such an attack would also kill most people in the country of origin.

B. Nuclear attack: surprise attack against any instruments which could be used for retaliation.

C. Drone swarms: use drones against any targets which survive the nukes and bio attacks.

 

Human-independent robotic infrastructure is the core

There are many ways in which AI can kill everybody. But a short and effective way to communicate this is: agentic AI will create human-independent robotic infrastructure.

This is a central point both for the nanotech scenario and for the slow takeover described above. Robotic infrastructure is key for any scenario where AI kills all humans AND takes over the planet. AI can kill everybody, and itself, without robots. AI can also become a Singleton without killing humans and without robots, just by enslaving them. But if it wants to rule the world without humans, it needs independent robotic infrastructure. Nanotech is the best candidate for this, but it can be done without it.

Comments (47)

Just a comment, not meant to contradict the post:

Indeed, to kill literally everyone, you don't need to kill everyone at the same time. You only need to weaken/control civilization enough such that it can't threaten you. And then mop up the remainders when it becomes relevant. (But yes, you do need some way of operating in the real world at some point, either through robots or through getting some humans to do the work for you. But since enslavement worked in the past, the latter shouldn't be hard.)

Yes, but it is a riskier path, as in the moments before disempowerment the remaining humans may have time to understand what is happening and rebel.

Humans are not good at coordinating and rebelling. Currently there are plenty of people who dislike how our corporations are run and who would like to rebel against them but can't. If all corporations are run by superintelligent AI, that does not get easier.

Anyone who challenges the status quo will just be branded a conspiracy theorist and isolated.

Individual humans are unlikely to rebel, but large nuclear countries may oppose the AI if they see that some countries or regions have been taken over by it.

It will be more like the Allies against the Nazis than John Connor against the AI.

For that, you would need large nuclear countries to opt out of using AI, which would put them at a huge disadvantage compared to other large nuclear countries.

While it would be possible for a large nuclear country to do so, the large nuclear countries that make heavy use of AI would outmaneuver it.

In the US, the more powerful AI gets and the more essential its capabilities become to US companies, the harder it will be to use legal action to shut down AI.

If you can't find the political majorities today to restrict AI, why do you think you would get them if AI becomes more economically important and powerful?

A moratorium, and air strikes on those who violate it? :)

But what actually matters here is the difference between merely working on AI research and being in the process of enslavement by a misaligned AI. The US can continue to work on AI, but if it sees signs that a non-aligned AI has started to take over another country, an all-out strike may be the only option.

You don't know whether or not AI is aligned when it takes over decision-making. 

AI takes over by being able to make better decisions. If an AI CEO can make better decisions than the human CEO, a company benefits from letting the AI CEO make the decisions.

Imagine that you have a secretive hedge fund that's run by an AI. It buys up companies, and votes for moving more and more decision-making of the companies it buys to be AI-based. Then the decisions at the companies become much better and their stock market prices rise.

Do you think that some lawmaker will step up and try to pass a law that stops the increased economic competitiveness of those companies?

If you can't get a moratorium and air strikes today, why do you think you would get them when AI provides much more economic benefit and becomes even more important for the economy to function?

I agree that advanced AI will find ways to ascend without triggering airstrikes (which would be dangerous to it).

It also means that it will appear to cooperate with humans until it has a decisive strategic advantage. Thus its robotic infrastructure has to look peaceful – perhaps efficient manufacturing robots for factories, like Optimus.

Today, when the police check someone's ID and the system tells them that there's an arrest warrant for the person, they just arrest the person because the computer tells them to do so. There's no need for any robots to make the arrest. 

Most humans who have a job do what they are told. 

Yes, AI can rule without killing humans, just by paying them for tasks. But given the recent discussion about AI killing everyone, I assume here that the AI is actually going to do this, and look at how it could happen.

I think there's no need for secrecy. If AI can develop a datacenter maintained by robots or other similar tech, human companies will be happy to buy and sell it, and help with the parts the AI can't yet do. Think of it as a "rising tide" scenario, where the robot sector of the economy outgrows the human sector. Money translates to power, as the robot sector becomes the highest bidder for security services, media influence, lobbying etc. When there comes a need to displace humans from some land and resources, it might look to humans less like a war and more like a powerful landlord pushing them out, with few ways to organize and push back. Similar to enclosures in early-modern England.

Capitalists just kicking workers out of the process step by step, then finding out at the very last minute that they have outlived their usefulness to the Machine God.

That's one way to look at it, though I wouldn't put the blame on capitalists only. Workers will also prefer to buy goods and services produced with the help of AI, because it's cheaper. If workers could get over their self-interest and buy only certified AI-free goods and services, the whole problem would stop tomorrow, with all AI companies going out of business. Well, workers won't get over their self-interest; and neither will capitalists.

Workers will also prefer to buy goods and services produced with the help of AI, because it's cheaper.

Well, Moloch does as Moloch wants. But honestly I still tend to place more blame on the people who in smaller numbers kick the process in motion than on the people who simply respond to incentives while dealing with a vastly larger coordination problem in conditions of greater scarcity. The smaller the group and the more their abundance, the easier it is to choose to run against Moloch, and the greater the responsibility if you go along anyway.

If humans are mostly dead (say a supervirus gets us), you can probably scavenge parts from our civilization for long enough to bootstrap to building better infrastructure. I think you'd just need a single robot and a roomful of equipment hooked up to a rooftop of solar panels, if you were clever. A single human has enough resources to supply that.

I'm not sure that's correct. Modern supply chains are incredibly complex, and manufacturing techniques for advanced technology are incredibly sophisticated and low-tolerance. My guess is your one robot and your solar panels will wear down long before you get to the stage where you can manufacture new chips/photovoltaic panels.

I think the main risk here is that it can continue scavenging existing tech for long enough to keep growing until it can bootstrap tech of its own. Still seems relatively risky for the AI compared to waiting till it's self sufficient before destroying humanity.

The chances of this scenario grow over time, as we will have more autonomous robots soon.

Also, if there are 10 or 1000 preserved robots, the chances are higher. 1000 robots could be found in a single factory.

Yeah, basically agree. When you have only a few ways of interacting with the world, you're at the mercy of accidents until you can get redundancy - i.e. use a robot to build another robot out of scavenged parts. But ofc waiting longer until you're self-sufficient also carries risks.

I think the idea is that there's an AGI server and a solar cell and one teleoperated robot body in an otherwise-empty post-apocalyptic Earth, and then that one teleoperated robot body could build a janky second teleoperated robot body from salvaged car parts or whatever, and then the two of them could find more car parts to build a third and fourth, and those four could build up to eight, etc.

I agree that literally one robot wouldn’t get much done.

manufacture new chips/photovoltaic panels

I think chips would be much much more likely to be a limiter than solar panels. Existing rooftop solar panels are I think designed to last 25-30 years, and would probably still work OK long after that. There are lots of solar cell types (not just silicon, but also dye-sensitized, polymer, amorphous silicon, perovskite, various other types of thin-film, etc.). I don’t know the whole supply chain for any of those but I strongly suspect that at least some approach is straightforward compared to chips.

Modern supply chains are incredibly complex, and manufacturing techniques for advanced technology is incredibly sophisticated and low tolerance.

I don’t think we can infer much from that. Humans are not optimizing for simple supply chains, they’re optimizing for cost at scale. For example, the robots could build a bunch of e-beam lithography machines instead of EUV photolithography; it would be WAY slower and more capital-intensive, so humans would never do that, but maybe an AI would, because the underlying tech is much simpler (I think).

Mostly agree. Just one point:

and then that one teleoperated robot body could build a janky second teleoperated robot body from salvaged car parts or whatever

My suspicion is that you lose reliability and finesse each time you do this, and cause some wear and tear on the original robot, such that this approach doesn't bootstrap.

Yes, it is unlikely to succeed, but what is the minimum number of robots for success? In the Terminator 3 movie, Skynet has access only to a few drones and, according to the apocrypha, had to enslave humans to build the first mass-produced robots. This seems inconsistent with starting a nuclear war, as most chip production would be destroyed in such a war.

In other words, the main problem with the Terminator scenario is not that Skynet used humanoid robots to exterminate humans, but that Skynet damaged its own robot-building ability by prematurely starting a nuclear war.

Yes, unfortunately.

One robot is still a minimal robotic infrastructure, and it is difficult to kill everybody with one virus, but rooftop solar seems to solve the need for electricity.

I tend to agree but that’s closely related to my belief that human-level intelligence doesn’t require an insanely large amount of compute. Some people seem to believe that it will always take at least N × 10,000 high-quality GPUs to teleoperate N robots in real-time, or whatever, in which case the robot civilization-rebuilding project could potentially get stuck without enough scavengable chips to build up the capacity to make more (whether through recognizable-chip-fabs or weird-self-assembling-nanotech-that-can-do-chip-like-computations). Again, I don’t actually think the robots would get stuck, but I think this is an assumption worth calling out explicitly. (More discussion here.)

I think that a self-driving car chipset is approximately enough to run a robot.

Strongly upvoted for thinking at a gears level about what happens once the AI has killed us all, rather than just saying that it would release a supervirus even though that sounds like it would be suicidal to the AI.

Obviously, nobody will be happy with an AI takeover anywhere.

I don't think this is obvious. When NetDragon appointed an AI CEO it didn't generate widespread negative reactions. https://www.analyticsinsight.net/chinese-game-company-appointed-an-ai-to-be-the-ceo/

If that trend continues I could imagine a future where a country has an AI president without it needing to be shrouded in secrecy. See also: Law of Undignified Failure

some counterpoints, in no particular order of strength:

  • Biological agents that follow any kind of actual science are a rather slow vector of extinction. You can either make a biological weapon extremely viral or extremely deadly, but not both, as the hosts need to live long enough to spread it. Simple quarantine measures would slow that process to a crawl, especially if the plagues are obviously deadly and artificial, which would motivate a more robust reaction than COVID did.
  • Nukes are a poor choice for the AI to use to wipe out humans, since it's the one weapon that AI and robots are just as vulnerable to as humans are (if not more so), due to EMP. If the nukes start flying back and forth, it's not immediately obvious that AI nodes and robotic hubs would not be worse off.
  • Humans are already self-replicating general-intelligence robots with a lot of built-in robustness and grid independence. It's not obvious to me that an AI using mundane robotics/drones could easily outfight or even outbreed humans.
  • The total cost of outfitting a relatively fit adult human with a rifle, a full-body hazmat suit, and cable cutters is lower than the cost of producing a combat drone of similar capability. Such a human would be extremely dangerous to the robotic hubs and supply chains, and to the supercomputers the AI resides on. Without nanotech, AI is very susceptible to asymmetrical warfare, sabotage, arson, and plain old analog assault (dynamite, molotovs, cutting power cables, a sledgehammer to the server, etc.).
  • There is no reason to believe the AI in this scenario would be invulnerable to hacking or viruses. E-viruses are much easier and faster to produce, mutate, randomize, and spread than biological agents. Even if the AI itself were too rugged to be killed by malware that way, the grid is not. If humans were facing extinction, it is likely we would just destroy the global network with malicious software and strategic use of sledgehammers and bolt-cutters. After that, the war is reduced to a 1970s-style analog level, at which humans have the advantage.

Actually, I think that the AI will preserve humans – at least some of them – for instrumental reasons.

AI can also so powerfully indoctrinate/brain-edit existing humans that they will be like robots during the early stages of the AI's ascent.

There are ways to make biological weapons much deadlier and to solve the delivery problem; I list many of them here.

Nukes likely can't kill everyone, at least without large cobalt bombs – but in the suggested scenario the AI uses a combination of bio, nukes and drones to kill everyone. Bio kills most people, nukes are used against bunkers, and drones clean up the remainder.

Mostly I agree with the logic of this post, though a lot of it sounds like things a human would do, just with the strength of an AI. It's hard for me to imagine a superintelligent immortal agent with no finite self caring about hoarding resources or exerting influence on animals way down the evolutionary ladder. To a superintelligence, wouldn't actively wiping out humanity in the blink of an eye be no different from simply waiting for us to die out naturally in an arbitrary number of years? Is the assumption that the superintelligence would see humanity as a future threat?

Even so, I think this post is useful in exploring the types of AIs we'll see in the future and ways they could be misused. For supernerds like me it's so refreshing to see common scifi staples like misuse of genetic engineering and AI discussed in a serious and realistic way.

Who knows how many ways there are to remove Humanity as a threat? Neutrino bombs, waking up Yellowstone, pumping some chemical in the air that finishes global warming, maybe a particle accelerator can create more of some particle that's supposed to be uniformly distributed throughout the cosmos, locally changing some physical "constant" just enough that anything larger than a pug dies of liver failure.

And then it steers a nuclear submarine to train an octopus, or uses smartphones to groom some survivors, or it waits for a more useful species to evolve, or it reconfigures a smart-circuit into an ansible and is getting paid by the closest aliens for saving them a war 50 million years hence, or it turns a quantum computer into an outcome pump and directly probability-warps for its utility function.

There are many ways to remove humanity, but the AI's survival after that requires that some functioning infrastructure remain in working condition for a long time. And if it has an outcome pump, killing humanity is not needed.

I think the core points to consider here are:

  1. is the AI simply indifferent due to misalignment, or actually malicious? The latter seems more improbable (it's a form of alignment after all!) but if the AI does not need or plan to survive its extermination of us, then we've got much less of a chance, and it's got a lot more paths open to it;

  2. does the AI even need to kill everyone? Collapsing civilisation to the point where mounting a coordinated, effective response is impossible might be enough. A few stone age primitives won't pose a threat to an army of superintelligent robots, they can be left around until such time comes that environmental degradation kills them anyway (I can't imagine AI caring much to keep the Earth unpolluted or its air breathable);

  3. would the AI start a fight if it thinks it has a chance of not winning it? Depends on how its reward estimation works: is it one for risky bets that win big, or a careful planner? I suppose the best (albeit somewhat funny and scary to think about) outcome would be an extremely careful AI that might consider killing us, but it's just gotta be sure, man, and it never feels like it has enough of a safety margin to take the plunge. Jokes aside, for any scenario other than "superintelligent AI with powerful nanotech at its fingertips" that's gotta count. Going to war might have benefits but will also come with risks. And at the very least the AI needs to be able to set up infrastructure to keep maintenance, replacements, energy supply etc. ongoing for its own hardware.

  4. that said, if AI started developing efficient robotics that revolutionize productivity, would we say no? And if those robots populated every factory and key infrastructure node, would we stop there? And if we tried to design a kill switch, just to be safe, even though at that point we'd be so dependent on the robots that activating it would just be almost as harmful to us as to the AI, would a smart enough AI not be able to get around it? So honestly a scenario in which AI controls just enough to support itself and then decides it can simply collapse our civilisation with a moderately virulent super-plague would eventually be reached no matter what.

AI could manage the human race and keep everybody so happy that we don't care that our reproduction levels are beneath replacement, and then it can just play the long game. Happily sunset the human race.

AI can also become Singleton without killing humans and without robots, just by enslaving them.

Well if this is the case then the AI can get all the robots it wants afterwards.

”Whom the gods would destroy, they first make mad.”

One way to kill everyone would be to turn the planet into a Peoples Temple and have everyone drink the kool-aid.

I think even those of us who think of ourselves as psychologically robust will be defenseless against the manipulations of a future GPT-9.

AI can make everyone drink the kool-aid, but soon its computers will stop working, so the utility of doing that is small.

You're an AI. You've got an internet connection. You can make copies of yourself. You're capable of reading instructions for how to take over insecure computers. 

Within about a minute you're on almost every computer in the world. 

Fight that.

Taking over the internet is the first step, but here I discuss a situation where the AI wants to kill everyone and survive. If it kills everyone while existing only in the current internet, electricity will run out and it will stop.

All the computers on my boat are solar-powered. It'll be fine....

so where are these instructions for how to take over all computers? I don't think they currently exist as such - while security is bad, it's nowhere near that bad, holes like that get attacked and patched by humans.

It may not be necessary to take over all computers – maybe a few computations in some data centers will be enough. They could even be paid for.

zero-days are a thing, and hell, it's even possible that there are computers connected to the internet somewhere that don't get their patches in a timely manner.

Yeah, but can you just read some instructions online and take over all computers? If it can do novel vulnerability discovery then sure, all computers it is. But if it's just an ai skiddie I don't think it's enough to be that kind of threat. Certainly there will be models powerful enough to do this any time now, I'm only disagreeing about the specifics of which ones. Being able to read instructions is not enough to take over the internet, because humans who can read instructions but not do novel security research are also trying.

Most highly vulnerable computers do not have available GPUs for you to run yourself on and wouldn't be ideal.

why do you assume a regular end-user computer would be capable of supporting an AI?

It doesn't need to copy terabytes of itself, it just needs to hardcode dumb routines, chosen cleverly.

Actually, Llama language models can run on almost any hardware.