We need a way to stop AI developers from building AIs that might go rogue and kill us all. This is obviously something for the government to do. It could pass new laws prohibiting this and/or establishing an AI regulatory body. But do we need new laws? Isn’t it already illegal to build a technology that could kill tons of people if it malfunctions?
The answer, as far as I can tell, is “yes and no”. The rest of this post summarizes my current understanding, pieced together from a few conversations, searches, etc. I am not a lawyer.
There are laws against recklessly endangering people’s lives, and laws against creating a public nuisance. But, at least in the USA, the bar for bringing a case on such grounds seems to be quite high.
Several people I’ve talked to who seem to know more than I do have expressed pessimism about such a lawsuit. At the same time, the Florida Attorney General has launched a criminal investigation into OpenAI because ChatGPT was used to help plot a shooting at Florida State University that killed two people.
The main issue is that US law really doesn’t want to speculate on what might kill people. It wants to stop people from repeating (roughly) the same dangerous behaviors that have killed people. But “you cannot step into the same river twice”: what counts as “(roughly) the same” here becomes a matter of philosophical and legal debate.
A canonical example of reckless endangerment might be this: suppose you are an unwilling passenger in a car fleeing from the police. The bad guys driving are putting your life at risk, and presumably could be successfully prosecuted for that.
What if it were a van chase instead? Or a boat chase? Is the appropriate category here “vehicle”?
Other examples that seem like they would clearly count: firing guns into the air or toward areas where people are (or might be), or doing a shoddy job on a safety-critical system, e.g. building a bridge with subpar materials. “Doing a shoddy job on a safety-critical system” seems like a great description of AI companies’ development practices to me!
How might we convince a court that AI x-risk is close enough to existing examples of reckless endangerment? One approach would be to point to harms that AI has already caused or contributed to, such as deaths (murders or suicides) encouraged by LLMs. We might also lean on the idea of an AI as a sort of autonomous being that is not properly controlled by its owners, like a dangerous pet that isn’t kept properly constrained.
Ultimately, I think this would be a highly unusual case. But I think it could be won if the facts about AI and the risks it poses were widely known and acknowledged. There’s also the possibility of bringing such a suit in another country: Wikipedia says, for example, that public endangerment “is punished most frequently in Canada”.