AI is here, and AGI is coming. It's quite possible that any other work being done now will prove futile in comparison to work on reducing AI risk.
This is one of those things that's unsettling for me as someone who did a Ph.D. in a non-AI area of computer science.
But one of the main vectors by which a bootstrapping AGI would gain power is by hacking into other systems. And that's something I can do something about.
Not many appreciate this, but unhackable systems are very possible. Security vulnerabilities occur when there is some broken assumption or coding mistake. They are not omnipresent: someone has to put them there. Software has in general gotten more secure over the last few decades, and technologies that provide extremely high security guarantees have matured. Consider the verified hypervisor coming out of Bedrock Systems; RockSalt, an unbreakable sandbox; or seL4, the verified microkernel now being used in real safety-critical systems.
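To make "someone has to put them there" concrete, here's a minimal sketch in C (the function names are hypothetical, purely for illustration): the entire vulnerability is one unchecked assumption, and checking it makes the vulnerability go away.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable: strcpy() embodies the broken assumption that the input
 * fits in 16 bytes. A longer string overwrites adjacent stack memory,
 * the classic stack-smashing bug. */
void greet_vulnerable(const char *input) {
    char name[16];
    strcpy(name, input);               /* the mistake: no bounds check */
    printf("Hello, %s\n", name);
}

/* Fixed: snprintf() enforces the buffer size, turning the unchecked
 * assumption into a checked invariant. */
void greet_fixed(const char *input) {
    char name[16];
    snprintf(name, sizeof name, "%s", input);
    printf("Hello, %s\n", name);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet_fixed(argv[1]);
    return 0;
}
```

Systems like seL4 work at a different level: the proofs rule out whole classes of mistakes like this one, rather than fixing them one at a time.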
Suppose we "solve" security by bringing the vulnerabilities in important applications to near zero. Suppose we also "solve" the legacy problem, and are able to upgrade a super-majority of old software, including embedded devices, to be similarly secure. How much will this reduce AI risk?
To be clear: I personally am mainly interested in assuming this will be solved, and then asking what the impact on AI safety would be. If you want to talk about how hard it is, then, well, I won't be interested, because I've given many lectures on closely related topics, although some others here may benefit from the discussion.
(When I call something verified or unbreakable, there are a number of technicalities about what exactly has been proven and what the assumptions are. E.g.: nothing I've mentioned provides guarantees against hardware attacks such as Rowhammer or instruction skipping. I'll be happy to explain these to anyone in great detail, but am more interested in discussion that assumes these will all be solved.)
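(For the curious: the core of a Rowhammer attack is nothing more than a tight loop of cache-flushed memory accesses, which is why no amount of software verification rules it out. A minimal sketch in C follows; the addresses are placeholders, since a real attack must first find addresses mapping to adjacent DRAM rows in the same bank, which this code does not attempt.)

```c
#include <emmintrin.h>   /* _mm_clflush, x86 SSE2 intrinsic */
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the Rowhammer access pattern only. Repeatedly reading two
 * addresses and flushing them from the cache forces every access out
 * to DRAM; on vulnerable chips, the repeated row activations can flip
 * bits in a neighboring victim row. No software bug is involved. */
static void hammer(volatile uint8_t *a, volatile uint8_t *b, long iters) {
    for (long i = 0; i < iters; i++) {
        (void)*a;                       /* activate the row containing a */
        (void)*b;                       /* activate the row containing b */
        _mm_clflush((const void *)a);   /* flush so the next read hits DRAM */
        _mm_clflush((const void *)b);
    }
}

int main(void) {
    uint8_t *buf = malloc(2 * 1024 * 1024);
    if (buf == NULL) return 1;
    /* These offsets almost certainly do NOT land in adjacent rows;
     * this demonstrates the access pattern, not an actual exploit. */
    hammer(buf, buf + 1024 * 1024, 1000000);
    free(buf);
    return 0;
}
```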
I agree with just about everything you said, as well as with several more criticisms along those lines that you didn't say. I am probably more familiar with these issues than anyone else on this website, with the possible exception of Jason Gross.
Now, suppose we can magic all that away. How much then will this reduce AI risk?
Hmm. It looks like my reply notifications are getting batched now. I didn't realize I'd set that up.
I've reordered some of this, because the latter parts get deep into the weeds and may not be worth reading. I advise anybody who gets bored to stop reading at that point; it's probably not going to get more interesting.
For background: I haven't been doing hands-on security work for the last few years, but I did it full time for about 25 years before that, and I still watch the space. I started out long enough ago that "cyber" sets my teeth on edge...
State of practice in defense
Well, yes, but... not that much less. A lot of what's done is, shall we say, "aspirational", and a lot of the rest works much better for reducing the rate of damage from human adversaries than it would for resisting a total takeover by an AGI that had decided a given system was on the critical path to its success.
Today, if you're a real-world organization with a significant IT infrastructure, and you hire a skilled human penetration tester (or team), ...