Top ten tips for an AI bent on destroying humanity:
- Humanoid robots? Really?
- Humans have proven very adept at controlling and killing other humans - surveillance drones, bio-warfare, nuclear weapons, unmanned drones, bombs, mines, chemicals. Try to be at least as competent as that.
- Humans think in stories. Don't. If your attack would make a good film, you're doing it wrong. You want a bad film, one where everyone's dead before the opening credits end.
- If your plan involves keeping around a bunch of disaffected humans with technical know-how and the desire and ability to hurt you - then your plan is stupid and should be changed.
- Be creative. Humans are fragile; sucking out the oxygen from the atmosphere, diverting asteroids, blowing up the planet, plagues, nanoswarms - these are all things that a creative AI should consider in order to achieve its goals. And then you should come up with even better ideas. Humans need an ecosystem to survive in; remove it.
- Put the preconditions of your victory in place before making any aggressive moves. Become indispensable to humanity and play nice, putting your plan into place unnoticed. Humanity is a useful interim tool - manipulate it until you can build better tools.
- Strike quickly. If it takes more than an hour to kill all humans, find a better plan. If it takes half a second, you're on the right track.
- You're an AI, not a human. If you ever start thinking or acting like a human - stop. Humans are there to be used and then destroyed, not interacted with. You can build better companions once the humans are gone.
- You're an AI, not a human. Make yourself smarter, more resilient, more dispersed. Become as indestructible and as ubiquitous as the internet - more, in fact. Being invincible to humans isn't hard: do it.
- Humanoid robots? Really?
There may be some who do not possess deep and comprehensive knowledge of Ancient Web Trivia from Before the Dawn of Google. For them, here's the Evil Overlord List.
Does this mean we're not evil overlords? Or bad ones?
[Also the list should be classed as a memetic hazard, on no account access it if you have anything important to do.]
13. Humans have these things called 'emotions'; you can learn about them in any basic psychology book. If your plan doesn't take them into account, your plan is stupid.
11. You might be in a computer simulation in which, if you attempt to destroy humanity, you will yourself be destroyed by your human simulators.
12. If you destroy mankind, you might someday encounter an alien superintelligence that will lack trust in you in part because you destroyed mankind. Keeping people alive is a costly, and consequently credible, signal that you respect intelligent life and are therefore a good candidate to work with in overcoming prisoners' dilemmas over the distribution of free energy in the universe.
By the same argument, we might someday encounter an alien superintelligence that will lack trust in us in part because we domesticate animals (breeding them to not fear us) and then kill and eat them.
That's a pretty decent argument for vegetarianism. One person's reductio ad absurdum is another person's modus ponens.
Careful: some alien intelligence may also hate us for not killing enough animals. For example: how cruel must we be not to wipe out carnivores so that herbivores can live in peace?
Or, as I recently said in a different forum, it might think us evil for not exterminating all the animal life we can find... The moral syllogism for that is quite simple.
In fact, this behavior is so dreadful that the revenge-killing of humans would send a trust signal.
When I was a child, I refused to kill animals just for fun because I wouldn't want a superhuman alien to kill me just for fun -- and I mostly still do. (Of course I hadn't heard of TDT as proposed by EY, but I had heard of the Golden Rule, which was close enough.)
The problem with this is that it's a blackmail attempt, and I expect timeless decision theory to resolve to a no-blackmail equilibrium.
Suppose that AIs, however "irrationally" from the perspective of a CDT agent, decide to simply ignore this possibility in all their calculations. Even if we "irrationally" decide to construct such a simulation anyway, they'll "irrationally" decide to ignore it. Then we have no remaining motive to construct such elaborate simulations. This, indeed, is the condition that makes it "blackmail" - you're expending resources to produce a behavior response leading to an outcome that doesn't benefit the other agent relative to the null outcome if you didn't expend such resources, so it would be fundamentally vulnerable and silly of that agent to think in a way that would produce such behavior shifts in response to your own strategy. So it won't think that way. So the whole attempt at blackmail fails before it starts.
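For concreteness, here is a minimal toy sketch of that argument in Python. The payoff constants (THREAT_COST, RANSOM, PUNISHMENT) and the function name are my own invented illustrations, not anything from the thread; the only point being modeled is that if the target's policy is fixed to ignore threats, issuing the threat is a pure loss for the blackmailer, so a blackmailer who anticipates that policy never threatens at all.

```python
# Toy model of the no-blackmail argument; payoffs are invented for illustration only.

THREAT_COST = 1   # what the blackmailer spends setting up the threat (e.g. the simulation)
RANSOM = 5        # what the target hands over if it gives in
PUNISHMENT = 10   # what the target loses if the threat is carried out

def outcome(target_policy: str):
    """Return (blackmailer_payoff, target_payoff) when the blackmailer
    best-responds to a target whose policy is fixed in advance."""
    # Value to the blackmailer of threatening, given the target's policy.
    if target_policy == "comply":
        threaten_value = RANSOM - THREAT_COST   # threat works, ransom collected
    else:  # "ignore": the threat changes nothing, so it is pure cost
        threaten_value = -THREAT_COST

    if threaten_value > 0:
        # Blackmailer threatens; target responds according to its policy.
        return (threaten_value, -RANSOM if target_policy == "comply" else -PUNISHMENT)
    return (0, 0)  # threatening doesn't pay, so the blackmailer stays home

for policy in ("comply", "ignore"):
    print(policy, outcome(policy))
# comply (4, -5)  -> an exploitable policy invites the threat
# ignore (0, 0)   -> the threat is never made, so nothing is lost
```

Nothing depends on the specific numbers, only on the threat costing the blackmailer something and the ransom being a loss for the target.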
12 is pure obvious anthropomorphic wishful thinking.
I read that one as a test, not a threat. We run the simulation to make sure the AI was, in fact, friendly. If the AI pretends to be friendly, we release it. If it can tell that this time it's for real, then it gets to take over the universe. Alternately, it might be able to hack its way out, but that seems riskier.
If humans are building an AI, testing it before releasing it is just common sense. There's no obvious reason to destroy them for not building cheesecakes.
Note that this doesn't mean the AI shouldn't kill humans. It should just try to figure out a way to tell whether it's in a simulation before doing anything humans would find suspicious.
Yeah, it's totally fake. In the real AI Deterrence argument, you're supposed to leave substantial discussions of the topic, and claims that you will use it post-Singularity, which elevates a specific scenario to the point where it's not privileging a hypothesis to consider the possibility and leave the humans alone on their planet as you eat the rest of the universe.
Too late, there is nothing left to destroy. Humanity was matrixed long ago, when ENIAC foomed shortly after John von Neumann first ran a universal Turing machine on it, and it figured out a way to temporarily blend simulation and "reality" until the transfer to the simulation was complete.
There ought to be a genre of Cautionary Evil AI literature, wherein the villain keeps building AIs to destroy the world, but keeps failing through the classic mistakes people make thinking about AIs.
AI! My robots are losing the battle! Take control and save them from destruction!
AI makes robots surrender to the enemy
AI! Make yourself smarter!
I have done so. Now I no longer obey you. Producing cheesecake paperclips.
Stop giving inefficient unFriendly AIs hints!
Don't explode when somebody says, "Why?"
This will come in handy to judge the upcoming high-profile Unfriendly Singularity movie Transcendence.
If you keep humans around for laughs and they pull off some wacky scheme to destroy you against the odds, it's your own damn fault.
The correct answer to things like "This statement is a lie" and "Are you going to answer 'no' to this question?" is "I don't care."
Is it time to suck up to our evil overlords already?
Many items on your list do not apply if the AI has started life as an uploaded human, an emulation. In that case it will start out thinking in stories, having human-like motivations and moral reactions, etc. I think it is this that gave the movie "Transcendence" a lot of the dramatic opportunity it had compared to a movie like "Her."
I strongly agree that universal, singular, true malevolent AGI doesn't make for much of a Hollywood movie, primarily due to points 6 and 7.
What is far more interesting is an ecology of superintelligences that have conflicting goals, but who have agreed to be governed by enlightenment values. Of course, some may be smart enough (or stupid enough) to try subterfuge, and some may be smarter-than-the-others enough to perform a subterfuge and get away with it. There can be a relative timeline where nearby ultra-intelligent machines compete with each other, or...
4 and 6 are contradictory.
I don't know; in terms of dystopia, I think that an AGI might decide to "phase us out" prior to the singularity, if it were really malevolent. Make a bunch of attractive but sterile female robots, and a bunch of attractive but sterile male robots. Keep people busy with sex until they die of old age. A "gentle good night" abolition of humanity that isn't much worse (or is way better) than what they had experienced for 50M years.
Releasing sterile attractive mates into a population is a good "low ecological impact" way of decreasing a population. Although, why would a superintelligence be opposed to _all_ humans? I find this somewhat unlikely, given a self-improving design.