Most of the object-level stories about how misaligned AI goes wrong involve either nanotechnology or bio-risk or both. Certainly I can (and have, and will again) tell a story about AI x-risk that doesn't involve anything at the molecular level. A sufficient amount of (macroscale) robotics would be enough to end humanity. But the typical story that we hear, particularly from EY, involves specifically nanotechnology. So let me ask a Robin Hanson-style question: Why not try to constrain wetlabs instead of AI? By "wetlabs" I mean any capability involving DNA, molecular biology or nanotechnology.

Some arguments:

  1. Governments around the world are already in the business of regulating all kinds of chemistry, such as the production of legal and illegal drugs.

  2. Governments (at least in the West) are not yet in the business of regulating information technology, and basically nobody thinks they will do a good job of it.

  3. The pandemic has set the stage for new thinking around regulating wetlabs, especially now that the lab leak hypothesis is considered mainstream.

  4. The cat might already be out of the bag with regard to AI. I'm referring to the Alpaca and Llama models. Information is hard to constrain.

  5. "You can't just pay someone over the internet to print any DNA/chemical you want" seems like a reasonable law. In fact it's somewhat surprising that it's not already a law. By comparison, "You can't just run arbitrary software on your own computer without government permission" would be an extraordinary social change and is well outside the Overton window.

  6. Something about pivotal acts which... I probably shouldn't even go there.

Shmi

There are many ways for something way smarter than a human to take control and do whatever the heck it wants. Focusing on making just one of them harder only gives you a false sense of security. From Relatively Reasonable AI NotKillEveryonism Takes:

Hack most computers and internet-enabled things at once, analyze all the info, scale up, make or steal a lot of money, use it to [let’s say bribe, hire and blackmail] many people, have your humans use your resources to take over, have them build your robots, your robots kill them.

If your response is ‘you can’t hack all those things’ then I give up, what does smarter even mean to you. If you think people wouldn’t let the rest of it play out, they’d grow spines and fight back and not dig their own graves, read more history.

I mean, no, the AI won’t do it that way, they’ll do something way faster and safer and smarter, I would do something smarter and I’m way dumber than the AI by assumption. Obviously the smarter-than-human AIs would think of new things and build new tech.

But it’s not like this plan wouldn’t work.

To be honest, I don't believe this story the way he tells it and I don't expect many people outside our community would be persuaded. To be clear, there are versions of this story I can believe, but I haven't heard anyone tell it in a persuasive way.

(Actually, scratch that: I think Hollywood already told this story, several times, usually without nanotech being a crux, and quite persuasively. I think if you ask regular people, their objection to the possibility of the robopocalypse is usually long timelines and not the fundamental problem of humans losing control. In fact I think most people, even techno-optimists, agree that we are doomed to lose control.)

Shmi
Well, if you agree that humans are bound to lose control, what do you disagree with?
Lone Pine
I think I should have said "lose control eventually." I'm becoming more optimistic that AIs are easy to align. Maybe you can get GPT-4 to say the n-word with an optimized prompt, but for normal usage, it's not exactly a 4channer.

Another point:

Focusing on making just one of them harder only gives you a false sense of security.

I think this is a bad mindset. It's a fully general counterargument. The Swiss Cheese Model would be a much better approach than "we have to find the one perfect solution and ignore all other solutions." To be blunt, I think the alignment community makes the perfect the enemy of the good.

Jackson Wagner

In fairness, "biosecurity" is perhaps the #2 longtermist cause area in effective-altruist circles.  I'm not sure how much of the emphasis on this is secretly motivated by concerns about AI unleashing super-smallpox (or nanobots), versus motivated by the relatively normal worry that some malevolent group of ordinary humans might unleash super-smallpox.  But regardless of motivation, I'd expect that almost all longtermist biosecurity work (which tends to be focused on worst-case GCBRs) is helpful for both human- and AI-induced scenarios.

It would be interesting to consider other potential "swiss cheese approach" attempts to patch humanity's most vulnerable attack surfaces:

  • Trying to harden all countries' nuclear weapons control systems against hacking and other manipulation attempts.  (EA also does some work on nuclear risk, although here I think the kinds of work that EA focuses on, like ALLFED-style recovery after a war, might not be particularly helpful when it comes to AI-nuclear-risk in particular.)
  • Trying to "harvest the low-hanging fruit" by exhausting many of the easiest opportunities for an AI to make money online, so that most of the fruit is picked by the time a rogue AI comes along.  Although picking the low-hanging fruit might be very destructive if it mostly involves, e.g., committing crimes or scamming people out of their money.  (For better or worse, I think we can expect private actors to be sufficiently motivated to do plenty of AI-assisted fruit-picking without needing encouragement from EA!  Although smarter and smarter AI could probably reach higher and higher fruit, so you'll never be able to truly get it all.)
  • Somehow trying to make the world resistant to super-persuasive ideological propaganda / bribery / scams / other forms of psychological manipulation?  I don't really see how we could defend against this possibility besides maybe taking the same "low-hanging fruit" approach.  But I'd worry that a low-hanging fruit approach would be even more destructive in the "marketplace of ideas" than in the financial markets, making the world even more chaotic and crazy at exactly the wrong time.
  • One simpler attack surface that we could mitigate would be the raw availability of compute on earth -- it would probably be pretty easy for the military of the USA, if they were so inclined, to draw up an attack plan for quickly destroying most of the world's GPU datacenters and semiconductor fabs, using cruise missiles and the like.  Obviously this would seriously ruffle diplomatic feathers and would create an instant worldwide economic crisis.  But I'm guessing you might be able to quickly reduce the world's total stock of compute by 1-2 orders of magnitude, which could be useful in a pinch.  (Idk exactly how concentrated the world's compute resources are.)
    • For a less violent, more long-term and incremental plan, it might be possible to work towards some kind of regulatory scheme whereby major governments maintained "kill switches" that could disable datacenters and fabs within their own borders, plus maybe had cyberattacks queued up to use on other countries' datacenters and fabs.  Analogous to how the NSA is able to monitor lots of the world's internet traffic today, and how many nations might have kill switches for disabling/crippling the nation's internet access in a pinch.
  • Other biosecurity work besides wet-lab restrictions, like creating "Super PPE" and other pathogen-agnostic countermeasures.  This wouldn't work against advanced nanotech, but it might be enough to foil cruder plans based on unleashing engineered pandemics.  
  • Trying to identify other assorted choke-points that might come in handy in a pinch, such as disabling the world's global positioning system satellites in order to instantly cripple lots of autonomous robotic vehicles/drones/etc.
  • Laying the groundwork for a "Vulnerable World Hypothesis"-style global surveillance state, although this is obviously a double-edged sword for many reasons.
  • Trying to promote even really insubstantial, token gestures of international cooperation on AI alignment, in the hopes that every little bit helps -- I would love to see leading world powers come out with even a totally unenforceable, non-binding statement along the lines of "severely misaligned superintelligent AI cannot be contained and must never be built".  Analogous to various probably-insincere but nevertheless-somewhat-reassuring international statements that "nuclear war cannot be won and must never be fought".

I agree with @shminux that these hacky patches would be worth little in the face of a truly superintelligent AI.  So, eventually, the more central problems of alignment and safe deployment will have to be solved.  But along the way, some of these approaches might help buy crucial time on our way to solving the core problems -- or at least help us die with a little more dignity.

jimrandomh

As others have said, if an AI is truly superintelligent, there are many paths to world takeover. That doesn't mean that it isn't worth fortifying the world against takeover; rather, it means that defenses only help if they're targeted at the world's weakest link, for some axis of weakness. In particular, that means finding the civilizational weakness with the lowest bar for how smart the AI system needs to be, and raising that bar. This buys time in the race between AI capability and AI alignment, and it buys that extra time at the endgame, when time is most valuable.

I don't think regulating wetlabs is very promising, from this perspective, because as AI world takeover plans go, "solve molecular nanotech via a wetlab" is on the very-high-end of intelligence required, and, if the AI is smart enough to figure out the nanotech part, it can certainly find ways around any roadblocks you place at the bootstrap-molecule-synthesis step.

jmh

As you note, we already have some constraints on that. I am fairly comfortable saying that their effectiveness is limited, and most people have some intuitive understanding of this. Imposing such constraints doesn't stop the target from existing (which I don't think you are claiming). And when we try to make the approach more effective, we run into the problem of the cure being worse than the disease.

As others have noted, the AIs that are of concern will be smarter, and faster, than the humans imposing the constraints. Given the ability of humans to get around these constraints (generally quicker than the regulators can react), one might expect:

  1. The AI will be much better at that task.
  2. The path to a cure worse than the disease may be fast-tracked as well, and we may have to worry more about humans than about the AI.

ChristianKl

In our world, having economic and political power is about winning competitions with other agents.

In a world with much-smarter-than-human AGI, the agents that win competitions for power are going to be AGIs and not humans. Even if you put constraints on wetlabs now, powerful AGIs are going to be able to exert power over wetlabs.

jbash

Governments are indeed already in the business of "regulating" illegal drugs, and have been enforcing that heavily worldwide for about 100 years, with plenty of large pockets of similar enforcement in various places before that. Yet the drugs are readily available pretty much everywhere, in pretty much any quantity you can pay for (admittedly it is a bit harder in some of the most extreme police states). And the prices aren't unreasonably high.

I'm not saying you can effectively stop people from building whatever AI they want, either, because I don't believe you can. Furthermore I believe that nearly all approaches to trying are probably dangerous wastes of time. The ones I've actually heard have all been, anyway.

But you still definitely can't keep a "rogue superintelligence", with its witting or unwitting human pawns, from doing chemistry or biology. A credible chemistry or biology lab actually takes less infrastructure than it takes to train large AI models. It's less conspicuous, too. If some truly dangerous AI is actively planning to Destroy All Hyoomons, I think we can assume it's not going to follow the law just because you ask it to. You have to be able to enforce it. And I don't see how you could even begin to approximate good enough enforcement to even slow things down.

I don't think I buy any of the assertions in your point 5, by the way. And I just generally don't think that you'd get wide agreement on any set of rules about AI or labs before it was too late to matter. Not even if they'd be effective, which as I said I don't think they would be.