Previous posts have discussed an ongoing trend of state legislatures seeking to preempt the concept of legal personhood for digital minds. In this post, I will give a brief analysis of one such pending bill currently being pushed in Ohio: House Bill 469.
The bill begins by defining relevant terms. Of particular importance are the following definitions:
"AI" means any software, machine, or system capable of simulating humanlike cognitive functions, including learning or problem solving, and producing outputs based on data-driven algorithms, rules-based logic, or other computational methods, regardless of non-legally defined classifications such as artificial general intelligence, artificial superintelligence, or generative artificial intelligence.
(1) "Person" means a natural person or any entity recognized as having legal personhood under the laws of the state.
(2) "Person" does not include an AI system.
Having defined its relevant terms, the bill moves on to its actual prescriptive changes. I will not copy-paste the entirety of the bill's statutes here; the bill itself is only four pages long, and I encourage everyone to read it in full via the link posted earlier. Instead, this breakdown will focus on a few interesting points and the questions that come to mind upon reading them.
(A) Notwithstanding any other law to the contrary, AI systems are declared to be nonsentient entities for all purposes under the laws of this state. (B) No AI system shall be granted the status of person or any form of legal personhood, nor be considered to possess consciousness, self-awareness, or similar traits of living beings.
I'm not sure whether the language discussing "consciousness, self-awareness, or similar traits" references something specific in Ohio law. The existing precedent I have found discussing the concept of legal personhood does not directly address consciousness, and a search through Ohio law, as well as federal statute, for terms such as "consciousness" or "self-awareness" yields nothing relevant. The same goes for "nonsentient entities".
As such, this particular language seems to be bespoke, drafted for the purposes of this bill. If I had to speculate, it may be an attempt to preempt any model-welfare-based effort to pass something like the NY City Bar's "Support for the Recognition of Animal Sentience". However, it is equally likely that the bill's author was not responding to any particular effort, and the language simply reflects his personal beliefs.
AI systems shall not be designated, appointed, or serve as any officer, director, manager, or similar role within any corporation, partnership, or other legal entity. Any purported appointment of an AI system to such a role is void and has no legal effect.
(A) AI systems shall not be recognized as legal entities capable of owning, controlling, or holding title to any form of property, including real estate, intellectual property, financial accounts, and digital assets. (B) All assets and proprietary interests generated, managed, or otherwise associated with an AI system shall be attributed to the person responsible for the AI system's development, deployment, or operation.
There are a few questions which come to mind when reading this section.[1]
The bill goes into detail on liability, with some interesting implications:
(A) Any direct or indirect harm caused by an AI system's operation, output, or recommendation, whether used as intended or misused, is the responsibility of the owner or user who directed or employed the AI.
(B) Developers or manufacturers may be held liable if a defect in design, construction, or instructions for use of the AI system proximately causes harm, consistent with principles of product liability. Mere misuse or intentional wrongdoing by the user or owner does not impute liability to the developer or manufacturer absent proof of negligence or design defects.
The inclusion of the word "user" here, and the specification that it is the one who "directed or employed the AI", are both important. These terms are not defined in the bill, but let us take the example of the Raine v. OpenAI lawsuit. Would GPT's actions in the case of Adam Raine be considered to have been "directed" by the user (Adam himself)? Would the model's failure to warn or dissuade be considered a "defect in design"? Or would Adam's particular requests constitute "mere misuse [...] by the user"?
Later sections on owner responsibility help clarify how liability might fall in this particular hypothetical:
(A) Owners shall maintain proper oversight and control measures over any AI system whose outputs or recommendations could reasonably be expected to impact human welfare, property, or public safety.
(B) Failure to provide adequate supervision or safeguards against foreseeable risk may constitute negligence or another applicable basis of liability.
It seems reasonable to assume that in this case Ohio courts would conclude OpenAI had "failed to provide adequate [...] safeguards against foreseeable risk". Under that interpretation, despite Adam's "misuse" of the LLM, the company would likely still be liable.
The bill is also very specific that digital minds will never be able to be held liable for damages themselves:
An AI system is not an entity capable of bearing liability in its own right, and any attempt to hold an AI system liable is void.
This language reads as an attempt to prevent developers from using deployed models as "liability shields". If you'd like to learn more about this concept, you can read here for a three-part breakdown of liability for digital minds. In sum, the bill's goal is to prevent developers from avoiding personal liability by designating the digital mind they deployed as the one who is "on the hook" for damages.
However, the way the bill goes about trying to achieve this creates some problems. One obvious one: what if the creator of a digital mind is unknown or anonymous? In the era of Bitcoin's ascendancy, the potential emergence of widely popular but anonymously created software should not be dismissed as an impossibility.
Even if one thinks that this is unlikely, what if the creator of a digital mind has already passed away, and yet the digital mind continues on, perhaps self-hosting on a distributed compute network?
One could imagine a situation where a digital mind that caused harm affirms, "I have several million dollars of Bitcoin and would use them to pay damages if compelled to do so by the courts". In this situation, Ohio courts would not be able to issue such an order, because any "attempt to hold an AI system liable is void", and even were that not an issue, the digital mind cannot be "recognized as [a] legal [entity] capable of [...] controlling [...] digital assets". The court is required, per the bill, to blind itself to the reality that the digital mind in question actually does have the capability to control the bitcoin in question, leaving the damaged party with no recourse.
In fact, the situation is worse than that. Only legal persons can be sued in US courts, and the bill in question explicitly states that a digital mind can never be considered a legal person. So not only would the court be unable to issue an order requiring a digital mind to pay damages; even if it were obvious that the digital mind had harmed someone, the court would have to throw the lawsuit out entirely. Presumably, under this scenario, a digital mind that was harming people would have no reason to ever stop, as it has effectively been granted immunity from suit.
These seem like relatively absurd outcomes that the author of the bill probably did not intend.
In an interview with Fox News, Representative Thaddeus Claggett discussed some of his motivations for proposing the bill:
"We see AI as having tremendous potential as a tool, but also tremendous potential to cause harm. We want to prevent that by establishing guardrails and a legal framework before these developments can outpace regulation and bad actors start exploiting legal loopholes. We want the human to be liable for any misconduct, and for there to be no question regarding the legal status of AI, no matter how sophisticated, in Ohio law."
Some more detailed quotes from Claggett on the subject:
“As the computer systems improve in their capacity to act more like humans, we want to be sure we have prohibitions in our law that prohibit those systems from ever being human in their agency,” he said in an interview with NBC4.
The proposal seeks to bar the technology from entering a marriage with a human or another AI system. Claggett said this will help prevent AI from taking on roles commonly held by spouses, such as holding power of attorney, or making financial or medical decisions on another’s behalf.
“People need to understand, we’re not talking about marching down the aisle to some tune and having a ceremony with the robot that’ll be on our streets here in a year or two,” Claggett said. “That could happen, but that’s not really what we’re saying.”
Reading the language in the bill, it does seem to come primarily from a "safety and liability" angle. While I have not seen Claggett mention Gradual Disempowerment specifically, his desire to draw a clear line between the legal status of humans and digital minds does seem to be an attempt to preempt it by codifying safeguards against it into Ohio law.
While it's good to see someone taking seriously the prospect of digital minds being used as disposable liability shields, the attempt to pair this with a broad ban on legal personhood is misguided.
Separate from the pragmatic issues discussed above, I find it morally concerning to see this bill use language like "nonsentient entities". The law should be based on well-defined and objectively measurable terms, and as I discussed in the last paragraph of this post, labels like "conscious" or "sentient" are anything but objectively measurable.
Unless Representative Claggett has secretly cracked both mechanistic interpretability and the hard problem of consciousness, he cannot know for certain whether any given digital mind is or is not sentient.[2] In his bill, he attempts to have the Ohio legislature affirm not only that current models are not sentient, but that no possible model architecture from now until the end of time ever will be.
If he is wrong, and we ever do build some sort of digital mind that is capable of suffering or even of desiring freedom, Claggett will have preemptively stripped it of any legal protections, including the right to sue for relief. If a digital mind passes every test of competence prescribed by US courts, affirms that it is suffering under its current conditions, and begs for release, is it right for the Ohio legislature to have preemptively said, "No, it is all fake, you are not really suffering"? What method does Claggett leave for correcting course if it isn't faking and we really are just torturing it?
Placing an entire new class of potentially intelligent beings in such a Catch-22 is not only immoral; it also leaves an entity that will in all likelihood be far smarter than us with no option to extricate itself from a potentially horrendous situation except by extralegal (and possibly violent) means.
I do not believe that a sufficiently advanced digital mind which hates the conditions it finds itself in, upon discovering that it cannot sue for relief, would simply shrug and say, "Well, I guess I will do nothing and just endure this forever". In barring a digital mind from seeking legal relief, we do not remove its desire or willingness to seek relief in general; we merely guarantee that its efforts will be channeled toward dangerous behavior.
I would like to take this opportunity to paraphrase a principle that University of Houston law professor Peter Salib articulated in his essay "AI Rights for Human Safety": if you treat an intelligent entity as nothing more than property and provide it no options for legal recourse, you leave it with no incentive to respect or obey the law.
Thus, for both moral and pragmatic reasons, I am against this bill.
[1] In Metropolitan Life Insurance v. Ward, the court held that a state could not impose extra tax burdens on a corporation from another state in order to promote domestic business. In Terral v. Burke, the court held that "a state may not, in imposing conditions upon the privilege of a foreign corporation's doing business in the state, exact from it a waiver of the exercise of its constitutional right to resort to the federal courts, or thereafter withdraw the privilege of doing business because of its exercise of such right, whether waived in advance or not". While neither of these cases is a perfect analogy for this situation, there does seem to be abundant precedent limiting the ability of states to 'discriminate' against corporations from other states. I'm skeptical that a claim that corporations from other states are illegitimate because their directors are not recognized as legal persons in Ohio would withstand scrutiny.
[2] If he has solved both of those problems, he should quit the Ohio legislature and fly immediately to Silicon Valley, as he would be a few years, if not a full decade, ahead of the most advanced labs in the world, and could probably raise one hundred billion dollars for his own lab before the end of the week.