In this recent paper, the authors study whether octopi can suffer. Well, how would you answer that question? Ask them?
It turns out there are two ways to decide whether an animal can suffer. The first is simple: you just define that vertebrates can suffer (and therefore experiments on them must follow ethical regulations) and that all other animals cannot. Simple, but rather arbitrary. And yes, by this definition octopi obviously can't suffer.
The other approach is functional. If an animal can learn to avoid pain (not just escape it in the moment, but remember what happened under given conditions and avoid those conditions later), then the animal can suffer. By this definition, and according to the experiments in the abovementioned paper, octopi can suffer, and so should be treated like vertebrates.
The second definition looks more logical to me. However, there is a problem with it as well. If we make it purely functional, we have to agree that an artificial neural network undergoing reinforcement learning suffers too. Or, instead of the arbitrary "vertebrate vs. invertebrate" threshold, we introduce a no less arbitrary one: "biological neurons vs. artificial neurons".
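To make the worry concrete, here is a toy sketch of my own (not from the paper, and deliberately minimal): a tabular learner in a two-chamber box, where chamber 1 delivers a negative reward ("pain") and chamber 0 is safe. Under the purely functional definition, this handful of lines "suffers", because it memorizes which chamber was painful and avoids it in the future rather than merely fleeing it in the moment.

```python
import random

random.seed(0)
ALPHA, EPSILON = 0.5, 0.1   # learning rate, exploration rate
q = {0: 0.0, 1: 0.0}        # remembered value of entering each chamber

def reward(chamber):
    # chamber 1 is the "painful" one
    return -1.0 if chamber == 1 else 0.0

# one forced exposure to each chamber, like the initial trial in a
# conditioned-avoidance experiment
for chamber in (0, 1):
    q[chamber] += ALPHA * (reward(chamber) - q[chamber])

for episode in range(200):
    # epsilon-greedy: mostly follow memory, occasionally explore
    if random.random() < EPSILON:
        chamber = random.choice([0, 1])
    else:
        chamber = max(q, key=q.get)
    # update the remembered value of the visited chamber
    q[chamber] += ALPHA * (reward(chamber) - q[chamber])

print(q)                    # chamber 1 is remembered as bad (negative value)
print(max(q, key=q.get))    # → 0: the agent now prefers the safe chamber
```

Nothing here is more than bookkeeping over two numbers, which is exactly the point: the functional criterion, taken literally, cannot tell this apart from an octopus avoiding the chamber where it was shocked.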
Does this mean we should worry about the suffering of our models? And that is not the whole story. From a mathematical point of view, many things can be mapped onto a learning process: even plain gradient descent minimizing a loss function is, formally, a system adjusting itself to avoid a penalty. Does all of that involve suffering?
So far, I see the following possible answers. Maybe there are more that I am unaware of.
- All of the abovementioned indeed suffer.
- There is a magical biological threshold.
- The ability to suffer is ascribed in an anthropocentric way. I think a cat suffers because I see it, I feel really sorry for it, I can relate to it. To an octopus I can kind of relate too, but less than to a cat. To code running on my laptop, I can't relate at all. An interesting corollary would be that a teddy bear suffers.