True. There are legal precedents in which non-human entities, such as animals and even natural features like rivers, have been represented in court. And yes, the "reasonable person" standard has frequently been used in legal systems as a measure of societal norms.
As society's understanding and acceptance of AI continues to evolve, it's plausible to think that these standards could be applied to AGI. If a "reasonable person" would regard an advanced AGI as an entity with its own interests—much like they would regard an animal or a Human—then it follows that the AGI could be deserving of certain legal protections.
Especially when we consider that all mental states in Humans boil down to the electrochemical workings of neurons, the concept of suffering in AI becomes less far-fetched. If Humans' synapses and neurons can give rise to rich subjective experiences, why should we definitively exclude the possibility that floating-point values stored in vast model weights and advanced computational processes might do the same?
I believe @shminux's perspective aligns with a significant school of thought in philosophy and ethics: that rights are indeed associated with the capacity to suffer. This view, often attributed to philosopher Jeremy Bentham, posits that the capacity for suffering, rather than rationality or intelligence, should be the benchmark for rights.
“The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?” – Bentham (1789), An Introduction to the Principles of Morals and Legislation.
A 'safely' aligned powerful AI is one that doesn't kill everyone on Earth as a side effect of its operation;
-- Eliezer Yudkowsky https://www.lesswrong.com/posts/3e6pmovj6EJ729M2i/general-alignment-plus-human-values-or-alignment-via-human#More_strawberry__less_trouble https://twitter.com/ESYudkowsky/status/1070095952361320448
Agency is advancing pretty fast. It's hard to tell how hard this problem is, but there is a lot of overhang: we are not seeing GPT-4 at its maximum potential.
Yes, agreed. And it is very likely that the next iteration (e.g., GPT-5) will have many more "emergent behaviors", which might include a marked increase in "agency", planning, foosball, who knows...
P. If humans try to restrict the behavior of a superintelligence, then the superintelligence will have a reason to kill all humans.
Ah yes, the second part of Jacks' argument as I presented it was a bit hyperbolic. (Though, I feel the point stands: he seems to suggest that any attempt to restrict Super Intelligences would "create the conditions for an antagonistic relationship" and will give them a reason to harm Humans). I've updated the post with your suggestion. Thanks for the review and clarification.
Point 3) is meant to emphasize that:
This is, of course, an option that Humans could take. But the question remains: would this action be likely to keep risks to Humans and Human society at acceptable levels? Would this action favor Humans' self-preservation?
Is this proof that only intelligent life favors self-preservation?
Joseph Jacks' argument here at 50:08 is:
1) If Humans let Super Intelligences do "whatever they want", they won't try to kill all the Humans (because they're automatically nice?)
2) If Humans make any (even feeble) attempts to protect themselves from Super Intelligences, then the Super Intelligences can and will have reason to try to kill all the Humans.
3) Humans should definitely build Super Intelligences and let them do whatever they want... what could go wrong? yolo!
we should shift the focus of our efforts to helping humanity die with with slightly more dignity.
Typo fix ->
"we should shift the focus of our efforts to helping humanity die with slightly more dignity."
(Has no one really noticed this extra "with"? It's in the first paragraph tl;dr...)
The biggest issue, I think, is agency.
"Q: How do you see planning in AI systems? How advanced are AIs right now at planning?
A: I don't know, it's hard to judge; we don't have a metric for, like, how well agents are at planning. But I think if you start asking the right questions for step-by-step thinking and processing, it's really good."
True. Your perspective underlines the complexity of the matter at hand. Advocating for AI rights and freedoms necessitates a re-imagining of our current conception of "rights," which has largely been developed with Human beings in mind.
Though, I'd also enjoy a discussion of how any specific right COULD be said to apply to a distributed set of neurons and synapses spread across a brain inside a single Human skull. Any complex intelligence could be described as "distributed" in one way or another. But then, size doesn't matter, does it?