We think humans are sentient because of two factors: first, we each have internal experience, which tells us that we ourselves are sentient; and second, we rely on testimony from others who say they are sentient. We can rely on the latter because people are similar to one another. I feel sentient and say I am. You are similar to me and say you are. Probably you are sentient.
With AIs, this breaks down because they aren't very similar to us in terms of cognition, brain architecture, or "life" "experience". So unfortunately an AI saying it is sentient does not provide the same kind of evidence as a person saying so.
This suggests that any test should try to establish relevant similarity between AIs and humans, or else use an objective definition of what it means to experience something. Given that the latter does not exist, perhaps the former will be more useful.
Do you mean similarity at the outer level (e.g. the Turing test) or at the inner level (e.g. the neural network's structure should resemble brain structure)?
If the former, would that mean that an AI that passes the Turing test is sentient?
If the latter, what are the criteria for similarity? Full brain emulation, or something less complicated?
I don't think behavioral similarity is enough -- and I think LLMs have basically passed the Turing test anyway.
But I don't see why it would need to have our specific brain structure either. Surely experiences are possible in things besides the mammalian brain. However, if something did have a brain structure similar to ours, that would probably be sufficient. (It certainly is for other people, and I think most of us believe that e.g. higher mammals have experiences.)
What I think we need is some kind of story about why what we have gives rise to experience; then we could check whether AIs have some similar pathway. Unfortunately this is very hard, because (as far as I know) we have no idea why what we have gives rise to experience.
Until we have that I think we just have to be very uncertain about what is going on.
I don't think they've passed it in the full sense. Before LLMs, there was a 5-minute Turing test, and some chatbots were passing it; I don't think 5 minutes is enough. I bet that if you gave me 10 hours with any currently existing LLM and a human, communicating only via text, I would be able to figure out which is which (assuming both try hard to convince me of their humanity). I don't think an LLM can yet come up with a consistent, non-contradictory life story. It would be an interesting experiment :)
TLDR: AI researchers may have a different intuitive definition of sentience than neuroscientists; if you are an AI researcher (or a policymaker, whose perspective is also important), please consider suggesting your definition in the poll here.
The question of whether AI is sentient, and what criteria could establish this, has been getting more and more attention lately. People who discuss it usually do so either from a general philosophical perspective or from a neuroscience perspective. However, philosophers and neuroscientists will not be the ones deciding how AI is developed, how it can be trained, which experiments can be conducted, and so on. Those decisions will mostly be made inside the AI labs, and policymakers will potentially also have a say. Thus, it is important to know what AI researchers think about AI sentience, since the decision will be theirs.
It is reasonable to assume that AI researchers do not completely dismiss the possibility of AI sentience. For example, some Anthropic researchers even estimate the probability that the current version of Claude is sentient at 15%.
Should AI researchers' definition differ from that of neuroscientists or philosophers? After all, won't AI researchers who worry about this question just study the current neuroscience agenda? That is a valid assumption, but studying does not mean agreeing. As an example, one of the common theories of consciousness in neuroscience is Global Workspace Theory. Inspired by this model, a group of researchers built a Perceiver architecture that satisfies the model's minimal criteria of consciousness, yet nobody seems to treat it as a sentient being. This implies that, from the point of view of most AI researchers, Global Workspace Theory alone is not enough for an AI to be sentient. (A rough sketch of the workspace pattern is below.)
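For readers unfamiliar with the "global workspace" idea, here is a minimal, purely illustrative sketch of the pattern, not the cited Perceiver implementation or anyone's actual architecture: many module outputs are compressed into a small shared latent via cross-attention (modules effectively compete for limited workspace capacity), and that latent is then broadcast back to the modules. The module names, dimensions, and NumPy implementation below are my own assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # queries: (Q, d), keys_values: (K, d); values reuse the keys for brevity
    scores = queries @ keys_values.T / np.sqrt(queries.shape[-1])
    return softmax(scores, axis=-1) @ keys_values

rng = np.random.default_rng(0)
d = 16
workspace = rng.normal(size=(4, d))   # small shared latent: the "workspace"
vision = rng.normal(size=(32, d))     # hypothetical module outputs
language = rng.normal(size=(20, d))   # hypothetical module outputs

# Write phase: the workspace attends over all module outputs. Because the
# workspace is much smaller than the inputs, it acts as a bottleneck.
inputs = np.concatenate([vision, language], axis=0)
workspace = cross_attention(workspace, inputs)

# Broadcast phase: each module reads the updated workspace back.
vision_out = cross_attention(vision, workspace)
language_out = cross_attention(language, workspace)

print(workspace.shape, vision_out.shape, language_out.shape)  # (4, 16) (32, 16) (20, 16)
```

The point of the example is only that this bottleneck-plus-broadcast structure is easy to build, and building it clearly does not settle whether the resulting system is sentient.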
I think it would be very interesting to see what AI researchers consider the actual minimal criteria of consciousness/sentience (i.e., the criteria such that, if you saw them satisfied in your model, you would treat it as a sentient being). So if you are an AI researcher or policymaker, please take the poll, and I will summarize the results in a later post.