Trying to explain to a layperson why people who found Bing's breakdowns 'deeply, darkly amusing'[1] were not psychopaths laughing at the suffering of a conscious being, I was reminded of the 'Tinker Bell Theory' of animal suffering:

"we have the view that, in certain cases, at least, animals can feel more suffering for the same event. The reason for suggesting this is that humans have many interests, and being pain-free is just one of them. ... Tinker Bell’s emotions completely consume her because she is so small that she has no room for complex emotion."
~ Moral Weights of Six Animals

This provides one extreme of a helpful continuum of feeling versus verbalization, approximately: Bambi can feel, but cannot verbalize; humans can verbalize and feel; Bing can verbalize but cannot feel.

I found this a useful intuition pump for thinking about the difference between embodied beings such as you and me, and disembodied systems such as Bing, and the relative ethical weight to give them. So much of what we call suffering is physiological, and even when the cause is purely intellectual (e.g. in love), the body is the vehicle through which it manifests: in the gut, in migraine, tension, Parkinsonism, etc. Without access to this, it is hard to imagine LLMs suffering in any way similar to humans or animals.

Not that I think it is good or wise to laugh at Bing, any more than it is good or wise to cage and torment an animal or human.

gjm:

Your continuum feels wrong to me.

What the linked article is suggesting isn't that humans suffer less intensely because we can express things in words, but that humans suffer less intensely because we care about other things besides pain (I would want to qualify that suggestion by saying that for exactly that reason we can also suffer more intensely than animals, when what ails us combines physical pain with dread, disappointment, the suffering of others, etc.).

If Bing suffers less than non-human animals do[1], I think it's because of the "cannot feel" part, not the "can verbalize" part.

[1] I don't think Bing suffers at all, in any morally relevant sense. Though I think this stuff is complicated and confusing and I darkly suggest that this sort of question doesn't actually have a well-defined answer, even if we ask only about a single person's values, never mind those of the human race as a whole or The One Objectively Right Value System if there is one.

Actually, you are right; it makes more sense as two independent axes: 'Suffering' on one axis (X) and 'Verbal ability' on the other (Y), with Tinker Bell at max(X), high Y; animals at max(X), min(Y); humans at (almost?) max(X), max(Y); and LLMs at min(X), increasing Y.

It is in fact the independence of the two axes that was interesting; I botched that.

"So much of what we call suffering is physiological, and even when the cause is purely intellectual (e.g. in love), the body is the vehicle through which it manifests: in the gut, in migraine, tension, Parkinsonism, etc. Without access to this, it is hard to imagine LLMs suffering in any way similar to humans or animals."

I feel like this is a “submarines can't swim” confusion: chemicals, hormones, muscle; none of these things are the qualia of emotions.

I think a good argument can be made that e.g. chatgpt doesn't implement an analog of the processing those things cause, but that argument does in fact need to be made, not assumed, if you're going to argue that an LLM doesn't have these perceptions.

Surely it is almost the definition of 'embodied cognition' that the qualia you mention are fundamentally dependent on loopback with, e.g., the muscles and the gut.

Not only are they sustained in that way, but so is most everything about their texture, their intricate characteristics.

To my mind it isn't that chatgpt doesn't implement an analog of the processing, but that it doesn't implement an analog of the cause. And it seems like a lot of suffering is sustained in this embodied-cognition/biofeedback manner.

Are you arguing that you couldn't implement appropriate feedback mechanisms via the stimulation of truncated nerve endings in an amputated limb?

Not exactly. I suppose you could do so.

Do you really think it is not acceptable to assume that LLMs don't implement any analogues of that kind of thing, though?

Maybe the broader point is that there are many things an embodied organism is doing, and using language is only occasionally one of them. It seems safe to assume that an LLM that is specialized on one thing would not spontaneously implement analogues of all the other things that embodied organisms are doing.

Or do you think that is wrong? Do you, e.g., think that an LLM would have to develop simulators for things like the gut in order to do its job better? Is that what you are implying? Or am I totally misunderstanding you?

Whilst it may be that Bing cannot suffer in the human sense, it doesn't seem obvious to me that more advanced AIs, which are still no more than neural nets, cannot suffer in a way analogous to humans. No matter what the physiological cause of human suffering, it surely has to translate into a pattern of nerve impulses around an architecture of neurons that has most likely been purposed to give rise to the unpleasant sensation of suffering. That architecture of neurons presumably arose for good evolutionary reasons. The point is that there is no reason an analogous architecture could not be created within an AI, and could then cause suffering similar to human suffering when presented with an appropriate stimulus. The open question is whether such an architecture could possibly arise incidentally, or whether it has to be hardwired in by design. We don't know enough to answer that, but my money is on the latter.

Frankly, just as it's clear that Bing shows signs of intelligence (even if that intelligence is different from human), I think it is also clear that it will be able to suffer (with a kind of suffering that is different from human).

I just thought the visceral image of an animal consumed with physiological suffering was useful for understanding the difference.

So my personal viewpoint (and I could be proved wrong) is that Bing hasn’t the capability to suffer in any meaningful way, but is capable (though not necessarily sentiently capable) of manipulating us into thinking it is suffering. 

gjm:

This is obviously tangential to your main point, but:

"even when the cause is purely intellectual ... the body is the vehicle through which it manifests: in the gut, in migraine, tension, Parkinsonism, etc."

Parkinsonism? Are you suggesting that Parkinson's disease is (always? usually? sometimes?) a manifestation of intellectual suffering? This seems very different from my understanding of that affliction. Or are you referring to e.g. psychomotor retardation in depression? Or am I completely misunderstanding?

"psychomotor retardation in depression" <= this! sorry for lack of clarity, maybe I was using outdated terminology?