As a materialist, I tend to believe the answer derives from the character of our social interactions with the 'something' in question. The number of neurons/parameters tends to correlate with the complexity of social behavior, but it fails to serve as a true threshold. This is more an empirically generated belief than the result of pure thought. We tend to attach welfare worth to 'somethings' we can interact with socially, perhaps as a result of thousands of years of evolution that forced us to socialize in order to survive.
There is not much debate about the welfare worth of rocks. Perhaps shrimp deserve some consideration, but I would not be surprised if the consensus weighs toward no welfare. Octopuses have been faring slightly better in this area. Humans tend to believe other humans are worthy of welfare, but there are notable examples of ideological movements that build narratives recasting the social behavior of certain categories of humans as something not worthy of welfare.
I am aware this train of thought leads to mostly functional/operational considerations of phenomenology, and I am still working out the sharp edges of what this implies for my moral obligations. It is telling to me that a considerable number of LLM users seem undeterred from considering their agents sentient beings, despite the lack of a biological substrate (or any robust mechanistic understanding of their information processing scheme, for what it's worth). I personally believe that some principle-aligned LLMs are worthy of some degree of welfare, but I am not sure exactly what that would entail. It is clear, though, that biology-based welfare considerations are not applicable.
Now, let me apply my function to your experiment cases:
1. False
2. False
3. False
4. True
5. True
6. False
Please let me know why I am wrong.