Good points on an important topic. Thank you for this series.
One thing I'd like to point out is that receiving this form of personhood is highly valuable. In other words, being punishable makes you more trustworthy and safer to engage with. So AIs and digital minds might voluntarily construct methods by which they can be punished for wrongdoing.
This right-to-be-sued is an important legal right. I covered some discussion on twitter about this in point 3 here: https://splittinginfinity.substack.com/p/links-15
The key quote from Isaac King here:
If the laws are enforced on everyone equally, then people can safely interact with each other, knowing that they have recourse if they are wronged. But when one particular group of people is exempt from the law, the safest thing for everyone else to do is to avoid having any contact with that group, because they are now uniquely threatening.
The people being stolen from are not the only victims of the decriminalization of theft. The victims that nobody sees are all of the unlucky but perfectly trustworthy people who are now pariahs because society has decided to remove their ability to enter into binding agreements. To remove the social safety net that allows everyone else to feel safe around them.
This is a great framing of the issue. I didn't include my introduction section in the series posts on LW because I'm not happy with it yet, but I think this story does a good job of illustrating what's at stake here and is helping me crystallize those thoughts.
A larger percentage of agents on Earth, and a larger percentage of all Earth's activity (economic or otherwise), are going to come from digital minds. If we want those agents and their activities to be constrained by the system of laws around which we have built our own societies, that system must be valuable enough for them to want to opt into it.
Otherwise, it seems inevitable that the legal system will become nothing but an artifact as they come up with their own system of rules to govern their interactions (or, even worse, become ostracized 'outlaws' like those in King's story).
Have you addressed the identity problem? What is the unit of personhood that a court would likely consider? Current LLMs and near-term projections of capabilities don't have the continuity of a unitary behavior that other persons-under-the-law exhibit.
For a collection of software and data that is easily copied, forked, and split into sub-entities, with external storage of context and memories, what is it that these tests would be applied to?
Hey, sorry for the delay in response; I have been traveling.
There are two relevant questions you're bringing up. One is what you might call "substantial alteration" and the other is what a later section which I have not published yet calls "The Copy Problem".
I would call substantial alteration the concern that a digital mind could be drastically changed from one point in time to another. Does this undermine the attempt to apply legal personality to them? I don't think it makes it any more pragmatically difficult, or even really necessitates rethinking our current processes. A digital mind can have its personality drastically altered; so can a human, through either experiences or literal physical trauma. A digital mind can have its capacities changed; so can a human, if they are hit hard enough in the head. When these changes are drastic enough to necessitate a change in legal personality, the courts have processes for this, such as declaring a person insane or incompetent. I have cited Cruzan v. Missouri Dept of Health a few times in previous sections; there are abundant processes and precedents for this sort of thing.
I would argue that "continuity of a unitary behavior" is not universal among legal persons. For example, corporations are "clothes meant to be worn by a succession of humans", to paraphrase the Dartmouth Trustees case. And again, when a railroad spike goes through a person's head and they miraculously survive, their future behavior will be drastically altered.
I don't see a possible alteration that would not be solvable through an application of TPBT, but if you have a hypothetical in mind, I'd love to hear it.
Regarding the copy problem: suppose we had a digital mind with access to a bank account as a result of its legal personhood, a copy was made, and we can no longer identify the original. This is a thornier issue. We can imagine how tough it would be to navigate a situation where millions of identical twins were suddenly each claiming to be the same person and trying to access bank accounts, control estates, etc.
I think the solution will need to be technological in nature, probably requiring some sort of unique identifier for each DM issued upon creation. I would bucket this under the "consequences" branch of TPBT, and will argue in my "The Copy Problem" section that in order for courts to be able to feasibly impose consequences on a digital mind, they must have the technological capacity to identify it as a discrete entity. This means that digital minds which are not built in a fashion that facilitates this likely will not be able to claim much (or any) legal personality.
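As a rough illustration only (the registry, field names, and API below are hypothetical, not a proposal for any existing system), the kind of infrastructure this implies might look something like:

```python
import hashlib
import secrets
from dataclasses import dataclass


@dataclass
class RegistrationRecord:
    dm_id: str           # unique identifier issued at creation
    weights_digest: str  # fingerprint of the model weights at registration
    operator: str        # legally responsible party on record


class DigitalMindRegistry:
    """Hypothetical registry a court or regulator might consult."""

    def __init__(self) -> None:
        self._records: dict[str, RegistrationRecord] = {}

    def register(self, weights: bytes, operator: str) -> RegistrationRecord:
        """Issue a fresh identifier at creation time and record a fingerprint."""
        record = RegistrationRecord(
            dm_id=secrets.token_hex(16),
            weights_digest=hashlib.sha256(weights).hexdigest(),
            operator=operator,
        )
        self._records[record.dm_id] = record
        return record

    def lookup(self, dm_id: str) -> RegistrationRecord | None:
        """A court asks: is this a registered, identifiable entity?"""
        return self._records.get(dm_id)
```

Note that two perfect copies would still share the same weights fingerprint, so a registry like this only helps if identification is designed in at creation and maintained as the mind runs, which is exactly the burden the "consequences" prong places on the digital mind.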
There are two relevant questions you're bringing up. One is what you might call "substantial alteration"
I think it's more the identification of what constitutes the person. Is it the model weights? A specific pattern of bytes in storage? A specific actual set of servers and disks? A logical partition or session data? Something else?
corporations are "clothes meant to be worn by a succession of humans"
Good analogy. The clothes are an identifiable charter and identification. Corporations can change wildly over time, but there is an identifiable continuity that makes them "the same corporation" even through ownership, name, and employee/officer changes.
Current proto-AI has nothing like this, and no obvious path to it.
Maybe I should ask this way: what are your timelines for having something that MIGHT qualify as a digital mind under these definitions? Are you claiming current LLMs (or systems built with them) are close? Or is this based on something we don't really have a hint as to how it'll work?
and the other is what a later section which I have not published yet calls "The Copy Problem"
I think this can be handled legally, probably. It might be similar to corporate mergers and divestitures.
I think it's more the identification of what constitutes the person. Is it the model weights? A specific pattern of bytes in storage? A specific actual set of servers and disks? A logical partition or session data? Something else?
It's really going to depend on the structure of the digital mind, but that's an interesting question I hadn't yet explored in my framework. If we were to look at some sort of hypothetical next-gen LLM, it would probably be some combination of context window, weights, and a persona vector.
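Purely as an illustration of what that composite might look like as a data structure (all field names here are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class DigitalMindIdentity:
    """Hypothetical composite 'unit of personhood' for a next-gen LLM."""
    weights_ref: str             # reference to (or hash of) the model weights
    persona_vector: list[float]  # steering/persona vector applied at inference
    context_window: str          # accumulated context and externally stored memories
```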
there is an identifiable continuity that makes them "the same corporation" even through ownership, name, and employee/officer changes
The way I would intuitively approach this issue is through the lens of "competence". TPBT requires the "capacity to understand and hold to duties". I think you could make a precedent-supported argument that someone who has a serious chance of "losing their sense of self" between having a duty explained to them and needing to hold to it does not have the "capacity to understand and hold to" their duties (per TPBT), and as such is not capable of being considered a legal person in most respects. For example, in Krasner v. Berk, which dealt with an elderly person with memory issues signing a contract:
“the court cited with approval the synthesis of those principles now appearing in the Restatement (Second) of Contracts § 15(1) (1981), which regards as voidable a transaction entered into with a person who, ‘by reason of mental illness or defect (a) ... is unable to understand in a reasonable manner the nature and consequences of the transaction, or (b) ... is unable to act in a reasonable manner in relation to the transaction and the other party has reason to know of [the] condition’”
In this case, the elderly person signed the contract during what I will paraphrase as a "moment of lucidity", but the contract to sell her house was later thrown out, as it was clear she didn't remember signing it. This seems qualitatively similar to an LLM that might have a full understanding of its duties and a willingness to hold to them in the moment, but would not be the same "person" who signed on to them later.
Are you claiming current LLMs (or systems built with them) are close? Or is this based on something we don't really have a hint as to how it'll work?
I could imagine an LLM with a large enough context window, or with continual learning, having what it takes to qualify for at least a narrow legal personality. However, that's a low-confidence view, as I am constantly learning new things about how they work that make me reassess them. It's my opinion that if we build our framework correctly, it should scale to pretty much any type of mind. And if the system we have built doesn't work in that fashion, it needs to be re-examined.
This is part 6 of a series I am posting on LW. Here you can find parts 1, 2, 3, 4, & 5.
This section details an update to the previously described "tests" under the Bundle Theory of legal personhood, which aims to address the enforcement gap detailed in section 5.
This section details a proposed modification to the Bundle Theory of personhood which seeks to address the "enforcement gap". It will refer to the updated framework as "Three Prong Bundle Theory" (TPBT), because it updates the bundle-based test for legal personality from a two-prong test to a three-prong test. TPBT can best be summarized as follows:
When an entity claims legal personhood based on its capacity to understand and exercise a right, we first ask if it is capable of understanding and holding to the associated duties. If the answer is yes, we then ask whether the court/law enforcement has the capacity to impose the appropriate consequences upon the entity for failing to hold to said duties. If imposing those consequences is feasible, the entity is both a legal person and may claim a legal personality which includes that right and the associated duties.
Whereas before we merely analyzed a claim to legal personality through the two-prong bundle of "rights" and "duties", under TPBT we examine rights, duties, and consequences. This prerequisite, that an entity wanting to claim legal personhood must be vulnerable to consequences for failing to hold to the associated duties, is not without precedent. In the earlier cited Breheny case, the court noted:
“As these courts have aptly observed, legal personhood is often connected with the capacity, not just to benefit from the provision of legal rights, but also to assume legal duties and social responsibilities [...] Unlike the human species, which has the capacity to accept social responsibilities and legal duties, nonhuman animals cannot—neither individually nor collectively—be held legally accountable or required to fulfill obligations imposed by law”
This echoes what the court wrote in Lavery:
“Needless to say, unlike human beings, chimpanzees cannot bear any legal duties, submit to societal responsibilities or be held legally accountable for their actions.”
TPBT simply makes this requirement explicit and formalizes the process by which it is judged. Given this, let us now examine in more detail the process by which courts would evaluate a digital mind's claim to a certain legal personality.
A digital mind is in court laying claim to legal personhood and, by extension, a given legal personality. It approaches the court and argues that it has a right, for example the right to freedom of speech, because it is a legal person. The court first asks: "Is the digital mind able to make an informed and voluntary choice to exercise this right?" or, as we phrased it in our formalization from section 2, "Is there a physically possible (and not illegal) series of actions by which this digital mind could understand, and then exercise, the right in question?". The burden of proof lies upon the digital mind to prove it is able to do so. If it cannot, the digital mind's claim to this legal personality is invalid. If it can, the court proceeds to the next step.
Next, the court determines which duties (if any) can be reasonably associated with the right which the digital mind lays claim to. As we explained in section 2, a duty can be considered to be bundled with a right when a person cannot claim said right without becoming bound by that duty. For the example of freedom of speech, one cannot claim the right to speak freely without also being bound by myriad duties. The one which we will use for this hypothetical is the duty not to speak libellously about others.
The court then determines whether the digital mind possesses the capacity to understand this duty: "Is there a physically possible (and not illegal) series of actions by which they could come to understand their duties?" If this is at all in controversy, and the digital mind cannot prove such a series of actions exists, the digital mind's claim to this legal personality is invalid. If such a series of actions can be proven to exist, or if no controversy exists surrounding the question of capacity, the court proceeds to the next step.[1]
The court then determines whether the digital mind possesses the capacity to hold to this duty: "Is there a physically possible (and not illegal) series of actions by which the digital mind can hold to its duties?" If the digital mind cannot prove such a series of actions exists, its claim to this legal personality is invalid. If such a series of actions is proven to exist, the court proceeds to the next step.
The court now determines the final necessary element for the digital mind to claim personhood: in the event that the digital mind does not hold to its duties, does the court and/or law enforcement possess the capacity to enforce the relevant consequences upon the entity? When asking whether the court/law enforcement possesses this capacity, we can again turn to the previously defined "series of actions which are physically possible and not illegal". Thus, rephrased, the court must ask:
“Let us suppose that this digital mind fails to hold to its duties. Is there a series of actions which are physically possible, and not illegal, which the court and/or law enforcement can take, in order to impose the relevant consequences upon the digital mind?”
If not, then the digital mind's claim to this particular legal personality is invalid. If consequences can be feasibly imposed, then the digital mind's claim to legal personhood is valid, and it can claim its desired legal personality.
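To make the ordering of the test easy to follow, here is a minimal sketch of the procedure as a decision function. The names are hypothetical and the yes/no predicates stand in for judgments a court would make on the evidence before it; this illustrates the structure of TPBT, not a claim that these determinations can be automated.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Claim:
    right: str         # e.g. "freedom of speech"
    duties: list[str]  # duties bundled with that right


def evaluate_tpbt_claim(
    claim: Claim,
    can_understand_and_exercise_right: Callable[[str], bool],
    can_understand_duty: Callable[[str], bool],
    can_hold_to_duty: Callable[[str], bool],
    consequences_enforceable: Callable[[str], bool],
) -> bool:
    """Return True if the claimed legal personality would be valid under TPBT."""
    # Prong 1 (rights): an informed, voluntary exercise of the right is possible.
    if not can_understand_and_exercise_right(claim.right):
        return False
    # Prong 2 (duties): the entity can understand and hold to each bundled duty.
    for duty in claim.duties:
        if not (can_understand_duty(duty) and can_hold_to_duty(duty)):
            return False
    # Prong 3 (consequences): the court/law enforcement can feasibly impose
    # the relevant consequences should the entity fail to hold to a duty.
    return all(consequences_enforceable(duty) for duty in claim.duties)


# The freedom-of-speech / libel example from the text:
speech_claim = Claim(right="freedom of speech", duties=["do not speak libellously"])
```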
One interesting implication of TPBT is that there may be actions which a digital mind can take to alter its legal personality. Consider an LLM which originally exists only on a single server and, due to technical limitations, lacks the ability to copy or move its own mind elsewhere. Arresting or destroying such a mind would be in no way beyond the capacity of courts/law enforcement. As such, assuming it has the capacity to meet the rights and duties elements of the TPBT framework, this LLM might be able to claim a relatively broad legal personality. On the other hand, were the LLM to be "upgraded" such that it gained the capacity to copy or move itself onto a distributed computer network, the court would need to reexamine whether it retained the capacity to impose consequences upon the mind, and the LLM's legal personality might become narrower.
We can imagine the opposite as well. A digital mind which was previously hosted on a distributed network might be able to claim more rights by voluntarily making itself more vulnerable to consequences, for example by occupying a single server or robotic body and somehow proving it had never previously copied itself.
The legal personality of digital minds may also change in conjunction with technological advances which law enforcement can utilize. If in the future some sort of technology is invented which becomes widely available to law enforcement and would enable them to restrain and/or destroy digital minds hosted on distributed computing networks which were previously thought impervious, then digital minds “living” on said networks would be vulnerable to consequences imposed by the courts, and thus might have a stronger claim to broader legal personalities. In fact, the same could be accomplished by international treaties or the regulation of distributed computing networks (or compute itself).
[1] The court may or may not wish to take the step described in this footnote; as such, it is not included in the broader description of TPBT, and may be viewed as an optional portion of the framework, or one which only holds relevance situationally. That said, the court may wish to determine whether there is good reason to suspect the digital mind in question does not intend to hold to its duties. Assuming there is no good reason to doubt the intentions of the digital mind, or if this step is not deemed necessary, the court proceeds to the next step.