Most of the object-level stories about how misaligned AI goes wrong involve either nanotechnology or bio-risk or both. Certainly I can (and have, and will again) tell a story about AI x-risk that doesn't involve anything at the molecular level. A sufficient amount of (macroscale) robotics would be enough to end humanity. But the typical story that we hear, particularly from EY, involves specifically nanotechnology. So let me ask a Robin Hanson-style question: Why not try to constrain wetlabs instead of AI? By "wetlabs" I mean any capability involving DNA, molecular biology or nanotechnology.
Some arguments:
- Governments around the world are already in the business of regulating all kinds of chemistry, such as the production of legal and illegal drugs.
- Governments (at least in the West) are not yet in the business of regulating information technology, and basically nobody thinks they would do a good job of it.
- The pandemic has set the stage for new thinking around regulating wetlabs, especially now that the lab-leak hypothesis is considered mainstream.
- The cat might already be out of the bag with regard to AI; I'm referring to the Alpaca and Llama models. Information is hard to constrain.
- "You can't just pay someone over the internet to print any DNA/chemical you want" seems like a reasonable law. In fact, it's somewhat surprising that it isn't already a law. By comparison, "You can't run arbitrary software on your own computer without government permission" would be an extraordinary social change and is well outside the Overton window.
- Something about pivotal acts, which... I probably shouldn't even go there.
To be honest, I don't believe the nanotech story the way EY tells it, and I don't expect many people outside our community would be persuaded by it. To be clear, there are versions of the story I can believe, but I haven't heard anyone tell them persuasively.
(Actually, scratch that: I think Hollywood has already told this story, several times, usually without nanotech being a crux, and quite persuasively. If you ask regular people, their objection to the possibility of the robopocalypse is usually long timelines, not the fundamental problem of humans losing control. In fact, I think most people, even techno-optimists, agree that we are doomed to lose control.)