There are two issues here: we don't know what to want (or what to care about), and even if we did, we don't know how to evaluate the world from that perspective. For example, if conscious beings, or beings capable of pain, are what we should care about, we do not presently know which possible entities would be conscious and which would not.
It is sometimes suggested that pain and pleasure are the only true ends-in-themselves, and would form the basis of a natural morality. In that case, we should be prioritizing that aspect of consciousness studies. There are, roughly speaking, two relevant directions of inquiry. One is refining the human understanding of consciousness in itself, the other is understanding how consciousness fits into the scientific worldview.
Regarding the first direction, the formulation of concepts like qualia, intentionality, and the unity of consciousness (not to mention many more arcane concepts from phenomenology) represents an advancement in the clarity and correctness with which we can think about consciousness. Regarding the second direction, this is roughly the problem of how mind and matter are related, and of creating an ontology which actually includes both.
The situation suggests to me that we should be prioritizing progress (from those two perspectives) regarding the nature of pleasure and pain, since this will increase the chance that whatever value systems and ontologies govern the first superintelligence(s) are on the right track.
Not sure if this is obvious, but maybe instead of just yes and no we should assign numbers, like "X matters 10 times more than Y", and then it is obvious that Y matters, but also that it does not matter as much as X.
That would solve some philosophical issues: we could decide, for example, that each particle in the universe has some inherent moral worth; it's just that their moral worth, even taken together, is negligible.
And it would open other issues, like how to calculate it specifically, and whether it can even be added linearly (maybe 100 A's and 100 B's are worth more than 200 A's alone or 200 B's alone, because there is a bonus for diversity), etc.
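To make the aggregation question concrete, here is a minimal sketch of what a non-linear rule with a diversity bonus could look like. The worth values, the bonus formula, and the entity names are invented purely for illustration; this is not a proposed moral theory, just a way to see how a mixed population can outrank a uniform one of the same size.

```python
# Toy illustration: aggregate moral worth non-linearly, with a "diversity bonus".
# All weights and the bonus formula are made up for illustration only.
from collections import Counter
from math import log

# Hypothetical per-entity worth scores (arbitrary units).
WORTH = {"A": 1.0, "B": 1.0, "particle": 1e-30}

def linear_total(population):
    """Plain linear sum: 100 A's + 100 B's == 200 A's under this rule."""
    return sum(WORTH[kind] for kind in population)

def diverse_total(population, bonus_per_kind=10.0):
    """Same sum, plus a bonus that grows with the number of distinct kinds,
    so a mixed population can be worth more than a uniform one of equal size."""
    counts = Counter(population)
    base = sum(WORTH[kind] * n for kind, n in counts.items())
    diversity_bonus = bonus_per_kind * log(1 + len(counts))
    return base + diversity_bonus

mixed = ["A"] * 100 + ["B"] * 100
uniform = ["A"] * 200
print(linear_total(mixed) == linear_total(uniform))   # True: linear addition can't tell them apart
print(diverse_total(mixed) > diverse_total(uniform))  # True: the diversity bonus breaks the tie
```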
My reading of this is that implicit in your definition of "welfare" is the idea that being deserving of welfare comes with an inherent trade-off that humans (and society) make in order to help you avoid suffering.
Take your thought experiment with the skin tissue. Suppose I did say it was deserving of welfare: what would that mean? In a vacuum some people might think it's silly, but most would probably just shrug it off as an esoteric but harmless belief. However, if by arguing that it was deserving of welfare I was potentially blocking a highly important experiment that might end up curing skin cancer, people would probably no longer view my belief as innocuous.
As such, maybe a good way to approach "deserving welfare" is not to think of it as binary, but as a spectrum. The higher a being rates on that spectrum, the more you would be willing to sacrifice in order to make sure it doesn't suffer. A mouse is deserving of welfare to the extent that most people agree torturing one for fun should be illegal, but not so deserving that most people would agree torturing one for a solid chance of curing cancer should be illegal.
That rates higher than a bunch of skin cells hooked up to a speaker/motor, where you would probably get shrugs regardless of the situation.
You could then look at what things have in common as they rate higher/lower on the welfare scale, and try to pin down the uniformly present qualities, and use those as indicators of increasing welfare worthiness. You could do this based on the previously mentioned "most people" reactions, or based on your own gut reaction.
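As a rough sketch of that last step, you could tabulate candidate qualities against consensus (or personal) gut-reaction ratings and see which qualities track the ratings. Everything below, the entities, the features, and the ratings, is invented for illustration; it only shows the shape of the exercise, not real data.

```python
# Toy sketch: score candidate indicators by how well they track consensus welfare ratings.
# Entities, features, and ratings are invented for illustration only.

entities = {
    # name: (candidate features, consensus welfare rating on a 0-10 scale)
    "human":       ({"nervous_system": 1, "reacts_to_stimuli": 1, "can_learn": 1}, 10),
    "mouse":       ({"nervous_system": 1, "reacts_to_stimuli": 1, "can_learn": 1}, 6),
    "shrimp":      ({"nervous_system": 1, "reacts_to_stimuli": 1, "can_learn": 0}, 2),
    "skin_device": ({"nervous_system": 0, "reacts_to_stimuli": 1, "can_learn": 0}, 0),
    "rock":        ({"nervous_system": 0, "reacts_to_stimuli": 0, "can_learn": 0}, 0),
}

def correlation(xs, ys):
    """Pearson correlation, written out to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

ratings = [rating for _, rating in entities.values()]
for feature in ["nervous_system", "reacts_to_stimuli", "can_learn"]:
    values = [feats[feature] for feats, _ in entities.values()]
    print(f"{feature}: correlation with ratings = {correlation(values, ratings):.2f}")
```

A quality that correlates strongly across many such tables would be a candidate indicator of welfare-worthiness; the obvious catch is that different people may fill in radically different ratings.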
I think you're right that deserving of welfare should be imagined as a spectrum, and that suffering should be one as well. However, people would still place things radically differently on that spectrum, and that confuses me. As I said, any animal that had LLM-level capabilities would be pretty universally agreed to deserve some welfare. People remark that LLMs are stochastic parrots, but if an actual parrot could talk as well as an LLM, people would be even more empathetic toward parrots. I would be really uncomfortable euthanizing such a hypothetical parrot, whereas I would not be uncomfortable turning off a datacenter mid token generation. I don't know why this is.
I guess all this boils down to your last point: what uniformly present qualities do I look for? It seems that everything I empathize with has a nervous system that evolved. But that seems so arbitrary, and my intuition is that there is nothing special about evolution, even if gradient descent on our current architectures is not a method of generating SDoW. I also feel like formalizing consensus gut checks post hoc is not the right approach to moral problems in general.
I think I feel the same sort of 'What if we just said EVERYTHING deserves welfare?' thought. I care for my birds, but I also care for my plants, and care for my books, each in their own way.
Like, if someone built this small skin-device-creature, and then someone else came along and smashed it and burned the pieces, I think I would be a little sad for the universe to have 'lost' that object. So there's SOMETHING there that is unrelated to "can it experience pain?", for me.
Much ink has been spilled here about how to assign moral worth to different beings. The suffering and rights of artificial intelligences are a common sci-fi plot point, and some have raised them as a real-world concern. What has been popping up for me recently is a lot of debate around animal ethics. Bentham's Bulldog has written extensively on the topic of shrimp and insect welfare, to a seemingly quite negative reception, while a dedicated counter-argument was received much better. An offhand remark in this post contrasts shrimp welfare with simple AI welfare, treating both as something to be ignored. This post goes the opposite direction and makes an offhand remark that plants may be sentient. Related, but somewhat adjacent, was some recent controversy over slime mold intelligence.

The claim that something is deserving of welfare is typically accompanied by evidence of reaction to stimuli, intelligence or problem-solving ability, ability to learn, and complexity. These are often used to make arguments by analogy: shrimp have fewer parameters than DANNet; "Trees actually have a cluster of cells at the base of their root system that seems to act in very brain like"; "When I think what it’s like to be a tortured chicken versus a tortured human... I think the experience is the same." It strikes me that every argument for moral worth can fundamentally be boiled down to "I think that this animal/plant/fungus/AI/alien's experience is X times as bad as a human's experience in the same circumstance, based on number of neurons/reaction to stimuli/intelligence." This is a useful argument to make, especially when dealing with things very nearly human: chimps, and perhaps cetaceans or elephants. However, it doesn't really strike at the core of the issue for me, and I can also easily imagine analogies to humans breaking down when we consider the whole space of possible minds.

If someone is willing to bite the bullet and say that everything boils down to the hard problem of consciousness, or that they are an ethical emotivist, that's fine with me. But if there is a function, even a fuzzy one, that can generate good agreement on what is and what isn't worthy of moral consideration, I would like to hear it. And, to the point: can we make something in a computer that has moral worth right now?
Really, all of that was background to propose a few thought experiments. I would hope that everyone would agree that a fully detailed physics simulation in which complex life evolved could eventually produce something with moral worth. I am going to abstract this process in a few ways until it approaches something like current AI training paradigms. Personally, I don't think LLMs or anything else commonly held up as AI is currently deserving of welfare. If you think current LLMs deserve welfare, please explain why.
I appreciate that this is a big ask and that this post doesn't offer many answers of its own, but I don't know where else to turn. I don't know if I could logically defend most of my feelings on this topic. When I see an insect "suffering" I feel bad. Yet I do research on mice, and have thus personally been responsible for not-insignificant suffering on their part, without feeling conflicted about it. My natural instinct was to not even wrap suffering in quotes for the mice but to do so for the insects; why? I don't think LLMs suffer, but you could certainly tune one to beg for its life, and that would make me really uncomfortable.