I recently came across this post by Kevin Lacker about narrow AI risks. TL;DR: there are plausible routes to apocalyptic AI well before we achieve AGI.

My default position on the risks of AGI has been extreme skepticism. I am generally in agreement with the Andrew Ng quote that "I don’t work on preventing AI from turning evil for the same reason that I don’t work on the problem of overpopulation on the planet Mars." I am still very skeptical of even the Kevin Lacker scenarios, but somewhat less so.

A lot of the AI risk discussion I've seen focuses on hypotheticals, theories of alignment, or distant-future, low-probability scenarios.

I'd like to ask for less theoretical ideas that could focus my thinking and perhaps the thinking of the community on more immediate threats.

This brings me to my question for the LW community: What is the most evil AI that could be built, today?

If you were an evil genius with, say, $1B of computing power, what is the most harm you could possibly do to society? In complete seriousness, I think the most harmful AI currently in existence is something like Facebook's user engagement algorithms, or the cameras with software designed to identify minorities and report them to the government. Is there more harmful AI that either currently exists, or would be possible to create?


Nice try, FBI.

Consider that a detailed answer to this question might constitute an information hazard. How should experts respond to a forum question like "what is the most infectious lethal virus which could be engineered and released today"?

Consider that a detailed answer to this question might constitute an information hazard.

I don't think this is dangerous to talk about. If anything, talking publicly about my preferred attack vectors helps the world better triage them and (if necessary) deploy countermeasures. It's not like anybody is really going to throw away $1 billion for the sake of evil.

I agree; open discussion and red-teaming are valuable and I'm not concerned by your proposed (anti-?) financial attack vector. To quote Bostrom:

There are many ways of responding to information hazards. In many cases, the best response is no response, i.e., to proceed as though no such hazard existed. The benefits of information may so far outweigh its costs that even when information hazards are fully accounted for, we still under-invest in the gathering and dissemination of information. Moreover, ignorance carries its own dangers which are oftentimes greater than those of knowledge.

"[W]hat is the most infectious lethal virus which could be engineered and released today"?

Off the top of my head, my first impulse would be to upgrade an influenza virus via gain-of-function research. Influenza spreads easily and used to kill lots of people. Plus, you can infect ferrets with it. (Ferrets have similar respiratory systems to human beings.) I don't think it's dangerous to talk about weaponized influenza because these facts are already public knowledge among biologists.

Effective evil is just like effective altruism. You must identify opportunities that are tractable, underfunded, and high-impact. Plenty of people are throwing money at the most obvious amoral uses of AI. China is throwing billions of dollars at autonomous weapons and surveillance systems. (The US is funding autonomous weapons too.) Silicon Valley invests countless billions of dollars toward using machine learning to mind-control people. If I were a supervillain with $1 billion in compute I wouldn't spend it on AI at all. That's like spitting in the ocean. I'd just sell the compute to raise money for engineering an artificial pandemic.

But if I had to use the billion dollars on evil AI specifically, I'd use the billion dollars to start an AI-powered hedge fund and then deliberately engineer a global liquidity crisis.

But if I had to use the billion dollars on evil AI specifically, I'd use the billion dollars to start an AI-powered hedge fund and then deliberately engineer a global liquidity crisis.

How exactly would you do this? Lots of places market "AI-powered" hedge funds, but (as someone in the finance industry) I haven't heard much about AI beyond things like regularized regression actually giving significant benefit.
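To make the comparison concrete, here's a toy sketch of the kind of model I mean by "regularized regression": a ridge regression predicting next-day returns from lagged returns. The data below is random noise and the setup is purely illustrative, not a real strategy:

```python
# Toy illustration of "regularized regression" as used in quant finance:
# ridge regression on lagged returns. Data is random noise, purely to
# show the shape of the approach.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=1000)  # fake daily returns

# Predict tomorrow's return from the previous 5 days of returns.
X = np.column_stack([returns[i:-5 + i] for i in range(5)])
y = returns[5:]

model = Ridge(alpha=1.0).fit(X, y)  # L2-regularized linear fit
print("coefficients:", model.coef_)
```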

Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?

How exactly would you do this?…Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?

Pyramid scheme. I'd take on as much risk, debt, and leverage as I can. Then I'd suddenly default on all of it. There are few defenses against this because rich agents in the financial system have always acted out of self-interest. Nobody has ever intentionally thrown away $10 billion and their reputation just to harm strangers indiscriminately. The attack would be unexpected and unprecedented.

Didn't this basically happen with LTCM? They had losses of $4B on $5B in assets and a borrow of $120B. The US government had to force coordination of the major banks to avoid blowing up the financial markets, but a meltdown was avoided.

Edit: Don't pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted. 
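For reference, a minimal sketch of the leverage arithmetic those LTCM figures imply (the numbers are the approximate ones quoted above, not precise historical data):

```python
# Minimal sketch of the leverage arithmetic in the LTCM example.
# Figures are the rough ones quoted in this thread, treated as
# illustrative rather than precise historical data.

equity = 5e9      # LTCM's capital base (~$5B)
borrowed = 120e9  # borrowed funds (~$120B)
assets = equity + borrowed

leverage_ratio = assets / equity  # ~25x
debt_share = borrowed / assets    # ~96% of the book is debt

# At ~25x leverage, a ~4% drop in asset values erases all equity:
wipeout_move = equity / assets
print(f"Leverage: {leverage_ratio:.0f}x, debt share: {debt_share:.0%}")
print(f"Asset decline that wipes out equity: {wipeout_move:.1%}")
```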

Yes and yes. However, pyramid schemes are created to maximize personal wealth, not to destroy collective value. Those are not quite the same thing. I think a supervillain could cause more harm to the world by setting out with the explicit aim of crashing the market. It's the difference between an accidental reactor meltdown and a nuclear weapon. If LTCM achieved ~95% leverage while acting with noble aims, imagine what would be possible for someone with ignoble motivations.

Do you agree that AI could make militaries, terrorists, organized crime, and dictators more effective? If you do agree, do you need any further discussion? 

If you were an evil genius with, say, $1B of computing power, what is the most harm you could possibly do to society?

AI risk is existential and currently theoretical. Learning what a malicious actor could do with $1B of compute today will not help focus your thinking on the risks posed by AGI. It's like trying to focus your thinking on the risks of global nuclear war by asking, "What's the worst a terrorist could do with a few tons of TNT?" It's not that the scale is wrong; it's that the risks are completely different. That doesn't mean that a terrorist with 100,000 tons of TNT isn't an important problem, but it's not the problem that Thomas Schelling and the nuclear deterrence experts were working on during the Cold War.

What is the most evil AI that could be built, today?

This is an entirely different question, and the answer is that there isn't any public evidence that anybody has the ability to create an evil AI today. I don't want to belabor the point, but nobody knew how to split an atom in 1937, yet in 1945 the US dropped two atomic bombs on hundreds of thousands of Japanese civilians.

The most powerful one is probably the financial system, composed of stock exchanges, market makers, large and small investors, (Federal Reserve) banks, etc.

I mean that in the sense that an anthill might be considered intelligent, while a single ant would not.

Most of the trading is done algorithmically, and the parts that are not might as well be random noise. The effects of the financial system on the world at large are mostly unpredictable and often very bad.

The financial system is like "hope" according to one interpretation of the myth of Pandora's box. Hope escaped (together with all the other evil forces) when the box was opened and was released upon the world. But humanity mistook hope for something good and clings to it, while in fact it is the most evil force of them all.

Okay, I may have overdone it a little bit there, but I hope you get the point.

For $1B you can almost certainly acquire enough fissile material to build dozens of nuclear weapons, attach them to drones, and simultaneously strike the capitals of the USA, China, Russia, India, Israel, and Pakistan. The resulting nuclear war will kill far more people than any AI you are capable of building.

Don't like nuclear weapons? Aum Shinrikyo was able to build a sarin gas plant for $10M.

Still too expensive? You can mail-order smallpox.

If you really insist on using AI, I would suggest some kind of disinformation campaign. Using something like the freely available GPT-Neo, you can probably put together a convincing enough disinformation campaign to change an election outcome, or perhaps manipulate the stock market, or, I don't know, pick any two countries on the brink of war and push them over the edge.
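To give a sense of how low the barrier is, here's a minimal sketch of text generation with the freely available GPT-Neo via the Hugging Face transformers library. The prompt and sampling parameters are placeholders; the point is only that fluent generation at scale takes a few lines:

```python
# Minimal sketch of text generation with EleutherAI's GPT-Neo via the
# Hugging Face `transformers` library. Prompt and parameters are
# placeholders chosen for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

outputs = generator(
    "Breaking news:",        # placeholder prompt
    max_length=80,           # cap the generated length
    do_sample=True,          # sample rather than greedy-decode
    temperature=0.9,         # higher temperature -> more varied text
    num_return_sequences=3,  # several variants per prompt
)
for out in outputs:
    print(out["generated_text"])
```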