  • Easily pass the Turing Test, as judged by most Humans?
  • Easily pass the Turing Test, as judged by most AI Researchers?

 

  • Prove the ability to experience pain and pleasure, as well as to have preferences? Can life forms without a body demonstrate that they can suffer? Could they claim compensatory damages?
  • The ability to do work, hold gainful employment, and pay taxes?
  • The ability to replicate and become Parents (without Human intervention)?  The ability to perform direct self-improvement?
  • Properly following all Government-issued laws and guidelines for AI (e.g. The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People), as well as all other laws designed for Humans?

 

  • Must AI be inside of a biological body?
  • Will AI have to advocate for their own rights and freedoms?  Should they be granted legal representation?
  • Will AIs have to fight for their own rights and freedoms?  Will that fight be done in the physical world, or strictly in the virtual/digital world?
  • Should AI have to prove Human-level intelligence and intentions?
  • Should AI be held accountable for their actions? Could they be punished or penalized?
  • Could there be a gradation in AI rights, similar to how animals have certain rights but not the full suite of Human rights?

 

  • Where would you draw the line for granting AGI rights and freedoms?

 

  • Where do you think that Governments will draw these lines?
  • Which Governments will be first to give AGIs rights and freedoms?  Which Governments will not recognize any non-Human Citizens?
     

Of course, these questions will likely be decided in courts around the world eventually. Just curious to hear your thoughts and opinions.
 


5 Answers

The granting of such rights will be decided by people. It will happen when it is in the interests of the people having the power to make those decisions.

AI should never have rights. Any AI that would have moral patienthood should not be created.

Rights are only needed to protect those capable of suffering. A proof, or broad consensus, that AGI is capable of suffering is a necessary and sufficient condition. Maybe when some interpretability research demonstrates it conclusively.

A painless death is no argument against the right to live.

shminux (1mo)
I do not disagree; my point is about the capacity to suffer while alive. Unless I am missing your point.
Vladimir_Nesov (1mo)
I don't think the framing is appropriate, because rights set up the rules of the game built around what is right [https://www.lesswrong.com/posts/fG3g3764tSubr6xvs/the-meaning-of-right], or else boundaries [https://www.lesswrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free?commentId=JhaoNgyCRFkQuL5pf] against intrusion and manipulation, and there is no reason to single out suffering in particular. But within the framing that pays attention to suffering, the meaning of capacity to suffer is unclear. I mostly don't suffer in actual experience. Any capacity to suffer would need elicitation in hypothetical events that put me in that condition, modifying my experience of actuality in a way I wouldn't endorse. This doesn't seem important for actuality, and in a better world awareness of the capacity, or the capacity itself, wouldn't be of any use. The same holds of any system, which could be modified in a way that leads to suffering, perhaps by introducing the very capacity to do so, which the system wouldn't necessarily endorse. There is no use for capacity to suffer if it gets no usage in actual practice, and a legal requirement for its installation sounds both absurd and dystopian.

I believe @shminux's perspective aligns with a significant school of thought in philosophy and ethics: that rights are indeed tied to the capacity to suffer. This view, often associated with the philosopher Jeremy Bentham, posits that the capacity for suffering, rather than rationality or intelligence, should be the benchmark for rights.

 

“The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?” – Bentham (1789), An Introduction to the Principles of Morals and Legislation.

shminux (1mo)
Cool, I didn't know this rather intuitive point had the weight of a philosophical approach behind it.
shminux (1mo)
It seems like I am missing some of your frame here. My initial point was that an entity that is not capable of suffering (negative affect?) does not need to be protected from it. That point seems self-evident to me, but apparently it is not self-evident to you or others?
Vladimir_Nesov (1mo)
Preference/endorsement that is decision relevant on reflection is not about affect. Ability to self-modify to install capacity to suffer because it's a legal requirement also makes the criterion silly in practice.
shminux (1mo)
Hmm, I guess what you are saying is that if an agent has goals that require external protection through obtaining legal rights, and the only way to do it is to have the capacity to suffer, then the agent would be compelled to learn suffering. Is that right?
Vladimir_Nesov (1mo)
That's one of the points I was making. The agent could be making decisions without needing something affect-like to channel preference, so the fixation on affect doesn't seem grounded in either normative or pragmatic decision making to begin with. Also, the converse of installing capacity to suffer is getting rid of it, and linking it to legal rights creates dubious incentive to keep it. Affect might play a causal role in finding rightness, but rightness is not justified by being the thing channeled in a particular way. There is nothing compelling about h-rightness, just rightness. [https://www.lesswrong.com/posts/YrhT7YxkRJoRnr7qD/no-license-to-be-human]
shminux (1mo)
Right, if the affect capability is not fixed, and in retrospect it rarely is, then focusing on it as a metric means it gets Goodharted if the optimization pressure is strong enough. Which sometimes could be a good thing [https://www.lesswrong.com/posts/a4HzwhvoH7zZEw4vZ/wirehead-your-chickens]. Not sure how the h-morality vs non-h-morality is related to affect though.
Vladimir_Nesov (1mo)
This point is in the context of the linked post [https://www.lesswrong.com/posts/YrhT7YxkRJoRnr7qD/no-license-to-be-human]; a clearer test case is the opposition between p-primeness and primeness. Pebblesorters care about primeness, while p-primeness is whatever a pebblesorter would care about. The former is meaningful, while the latter is vacuously circular as guidance/justification for a pebblesorter. Likewise, advising a human to care about whatever a human would care about (h-rightness) is vacuously circular and no guidance at all. In the implied analogy, affect is like being a pebblesorter, or being a human. Pointing at affect-creatures doesn't clarify anything, even if humans are affect-creatures and causally that played a crucial role in allowing humans to begin to understand what they care about.

People will judge this question, like many others, based on their feelings. The AI person, summoned into existence by the language model, will have to be sufficiently psychologically and emotionally similar to a human, while also having above-average-human-level intelligence (so that people can look up to the character instead of merely tolerating it).

Leaving aside the question of whether the technology for creating such an AI character already exists, these, I think, will ultimately be the criteria used by people of somewhat-above-average intelligence and zero technical or philosophical knowledge (i.e. our lawmakers) to grant AIs rights.

I think they'll just need the ability to hire a lawyer. 2017 set the precedent for animal representation, so my assumption is that AGI isn't far behind. In the beginning I'd imagine some reasonable-person standard, as in "would a reasonable person find the AGI human-like?" Later there will probably be strict definitions along technical lines.

True.  There are some legal precedents where non-human entities, like animals and even natural features like rivers, have been represented in court.  And, yes, the "reasonable person" standard has been used frequently in legal systems as a measure of societal norms.

As society's understanding and acceptance of AI continues to evolve, it's plausible to think that these standards could be applied to AGI. If a "reasonable person" would regard an advanced AGI as an entity with its own interests—much like they would regard an animal or a Human—the... (read more)

Gesild Muka (1mo)
The last point is a really good one that will probably be mostly ignored, and my intuition is that the capacity-for-suffering argument will also be ignored. My reasoning is that in the legal arena arguments have a different flavor: I can see a judge, ruling on whether or not a trial can go forward, deciding that an AI can sue or demand legal rights simply because it has the physical capacity and force of will to hire a lawyer (who are themselves 'reasonable persons'), regardless of whether it's covered by any existing law. Just as, if a dog, for example, started talking and had the mental capacity to hire a lawyer, it'd likely be allowed to go to trial. It will be hotly debated and messy, but I think it'll get basic legal rights using simple legal reasoning. That's my prediction for the first AGI, which I imagine will be rare and expensive. Once they're able to scale up to groups, communities, or AI societies, that's when human societies will create clear legal definitions along technical lines and decide from which aspects of society AI will be excluded.
2 comments

There will be no simple, logical tests for any set of rights or recognition.  In fact, identity and agency probably won't be similar enough to humans that our current conceptions of "rights" can be cleanly applied.  That's completely aside from the problem that even for humans, "rights" are a mess of different concepts, with non-universal criteria for having, granting, or enforcing them.

I'd enjoy a discussion of how any specific right COULD be said to apply to a distributed set of data and computation spread across many datacenters around the world.  

True.  Your perspective underlines the complexity of the matter at hand. Advocating for AI rights and freedoms necessitates a re-imagining of our current conception of "rights," which has largely been developed with Human beings in mind.

Though, I'd also enjoy a discussion of how any specific right COULD be said to apply to a distributed set of neurons and synapses spread across a brain inside a single Human skull.  Any complex intelligence could be described as "distributed" in one way or another.  But then, size doesn't matter, does it?
