Something that I don't think I've seen discussed here is the threat posed by an AI that is smarter than we are at computer security without being generally intelligent.

Suppose there were a computer virus that could read code: examine the programs on a machine, see how they process input from the internet, and work out how they could be exploited to run arbitrary code. Historically, viruses have been annoyances. How much smarter would a virus have to be to pose a threat on the scale of, say, a planetary EMP burst?
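A crude, unintelligent version of this capability already exists in the form of fuzzing: mechanically mutating inputs to a program and watching for crashes that hint at exploitable memory-safety bugs. The virus in the question would do the same thing with understanding instead of randomness. A minimal sketch, where the target binary and seed input are hypothetical placeholders:

```python
import random
import subprocess

TARGET = "./target_server"          # hypothetical binary under test
SEED = b"GET / HTTP/1.0\r\n\r\n"    # hypothetical well-formed input

def mutate(data: bytes) -> bytes:
    """Flip a few random bytes in an otherwise valid input."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

for i in range(10_000):
    case = mutate(SEED)
    proc = subprocess.run([TARGET], input=case, capture_output=True)
    # On POSIX, a negative return code means the process died on a
    # signal (e.g. SIGSEGV): a candidate memory-safety bug worth
    # triaging for exploitability.
    if proc.returncode < 0:
        print(f"crash on iteration {i}: {case!r}")
        break
```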


I'm not sure this would be a bad thing. It could be used to test code for vulnerabilities. Any code resistant to it won't be hurt by it, and will be less likely to be hurt by human hackers.
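The defensive version of this idea is already standard practice as fuzz or property-based testing: throw adversarial inputs at your own code before anyone else does. A sketch using the Python hypothesis library, where parse_message is a hypothetical function under test:

```python
from hypothesis import given, strategies as st

from myproject.parser import parse_message  # hypothetical code under test

@given(st.binary(max_size=1024))
def test_parser_never_crashes(data):
    # Rejecting malformed input cleanly is fine; an unhandled crash on
    # attacker-controlled bytes is exactly what an exploit needs.
    try:
        parse_message(data)
    except ValueError:
        pass
```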

Surely it's a bad thing compared to something that tests our code without destroying or reappropriating whatever it infects.

It only destroys that copy. It shouldn't do much damage in the wild, since everything will have already been tested against it.

We're talking about risk scenarios. Of course there are positive uses for several dangerous technologies, but those who develop them should put thought into what would happen if they got loose in various ways.

This is a question worth thinking about. Note, however, that many of the people who think intelligent AI is likely consider your scenario less likely than either a hard takeoff or no takeoff at all. I'm somewhat inclined to agree with them, for the simple reason that a minimally intelligent entity that gained that level of computing power and had coherent goals would effectively be a hard-takeoff situation already, simply because of its sheer computing power and its access to electronic devices that act in the real world.

I don't think the planetary-nuisance situation is likely to come from a "smarter" virus. The most plausible version is someone making a virus or worm with a large set of exploits built in, which then hits all of them harder than expected. For example, if something like Stuxnet were made more broadly damaging and more infectious, and it held off doing damage until a pre-set signal, it could do a lot of damage to many different kinds of infrastructure.

I'm not sure how to compare that situation to the "planetary EMP burst," because that is itself a pretty vague level of damage: different types of EMP affect different things to different extents. It might make more sense to compare it to specific historical solar storms, like the 1859 superstorm (the Carrington Event), which was bad enough to disrupt even telegraphs. Modern electronics are in general more sensitive, but the way telegraph lines were strung out in long cables may also have made them more vulnerable to some electromagnetic effects. Note that the 1972 storm was very bad and resulted in blackouts, but it was probably smaller overall than 1859, judging by how far the auroras extended.


A plausibility note on automated exploit writing.
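For readers who don't follow the link: the idea in that line of work is to model a program path as logical constraints and ask a solver for an input that reaches an unsafe state. A toy illustration using the z3 solver, with constraints invented for illustration rather than taken from the paper:

```python
from z3 import BitVec, Solver, ULT, UGT, sat

n = BitVec("input_length", 32)  # attacker-controlled length field
s = Solver()
# The program's (buggy) bounds check: "n + 2 < 256" in 32-bit
# unsigned arithmetic, which wraps around on overflow...
s.add(ULT(n + 2, 256))
# ...while the subsequent copy of n bytes overruns a 128-byte buffer.
s.add(UGT(n, 128))

if s.check() == sat:
    print("length that slips past the check:", s.model()[n])
```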

Seems a lot easier to constrain its goals, though, even if we can't predict what it will break into or break.

Wow, that's a scary paper.

I sometimes wonder if the threat we'll face won't be a superintelligent AI but a corporation smarter than an amoeba. Right now corporations feed on money: they multiply when money is abundant, and only the really strong ones survive when it is scarce. Call it 'corporation space': the virtual space of money, contracts, supply, and demand in which corporations act.

('money' here could mean resources, credit, or cash flow)

We might have 'multicellular' corporations, but I don't think we have any that are smarter than that. What happens when a corporation can move through 'corporation space' as a real predator? A corporation with intelligence in 'corporation space' as high as an octopus's would be scarily powerful, and even one as smart as a shark would mean massive trouble.

Edit - please disregard this post

This post explains why that's taking an analogy way too far.

That does make sense. Thanks.
