This is a special post for quick takes by Kene David Nwosu. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Small thing I'm not sure about: under the more common AI doom models (a la MIRI, perhaps), do AI systems take over just the human niche, or do they also drive most other species to extinction or enslave them?

A pretty common MIRI-ish model is "the AI probably disassembles the Earth to create machines that go on to take over the galaxy".

The other species have more of a chance than the human species does because (at least on timescales shorter than millions of years) the other species are no threat to the AI. (Humanity is a threat to the AI because, having created one super-intelligent AI, humanity might create a second one, which is unlikely to want the same thing the first one wants.)

Still, the AI might choose a course of action that unintentionally kills off all life on Earth. For example, the AI might choose to remove all the oxygen from the atmosphere (either because the oxygen interferes with machinery the AI plans to build and maintain on Earth or because it wants to liquefy the oxygen and use it as rocket fuel). It might use the Earth for some industrial process that raises the temperature of the entire surface above what any species bigger than a microbe can survive. It might choose to use the Earth as a (slow) interstellar spaceship, launching it out of the solar system and cooling it so much that no life could survive. It might choose to mine the Earth's core, and the most efficient ways of doing that probably involve tearing the Earth into chunks (e.g., by smashing the Moon into it, then smashing together the two largest chunks that result from the first smash).

You know how bright, healthy young people who live in cities and have exciting careers are almost always in a hurry to accomplish one thing or another? Well, the consensus opinion is that almost all sufficiently intelligent things are like that: since time is limited, sufficiently intelligent agents do not waste it. I mention that to explain why a super-intelligent AI that comes into existence on Earth would choose the Earth for the geophysical indignities I just described when there is all this other matter in the universe. (Namely, choosing the Earth eliminates the time needed to travel to any other location it might choose.)

I claim to know MIRI leadership's thinking well enough to say that that is probably roughly how they would answer your question (and I got the oxygen example from Max Tegmark).

Note: other species can eventually evolve to be intelligent and then create an AI, so they're still a threat on longer timescales.