Imagine a being very much like a human, with rich conscious experience but no affective conscious states. Call such a creature a Philosophical Vulcan (or p-Vulcan for short).
p-Vulcans differ from the regular Vulcans of Star Trek, who are low-affect, hyper-logical beings that nonetheless presumably feel some pleasure and pain. By contrast, p-Vulcans feel no pleasure or pain, since they completely lack the capacity for affective experience.
Would Philosophical Vulcans have moral status?
Intuitively, the answer is yes. By hypothesis, Vulcans have rich inner conscious experience. They may have projects, goals and desires even with no accompanying affective states.

For example, they might have the goal of making great works of art, behaving morally, or furthering science and philosophy. These goals need not be motivated by emotion; rather, Vulcans might pursue them because they judge those activities to be valuable ends in themselves.
In this post, I’ll explore the implications of Vulcans for welfare debates, effective altruism and utilitarianism.
Some definitions
Sentientism is the thesis that beings have moral status iff they’re sentient, where sentience is the conjunction of phenomenal consciousness and affective consciousness. Roughly, affective consciousness corresponds to pleasure and pain, happiness and suffering, and other valenced states.
There are two flavours of sentientism which are sometimes conflated:
Affective sentientism - the view that beings have moral status iff they have the capacity for affective consciousness.

Consciousness sentientism - the view that beings have moral status iff they have the capacity for phenomenal consciousness.
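To make the contrast explicit, here is a rough first-order formalisation (my notation, not Chalmers’): write $\mathrm{MS}(x)$ for “$x$ has moral status”, $\mathrm{Aff}(x)$ for “$x$ has the capacity for affective consciousness” and $\mathrm{Phen}(x)$ for “$x$ has the capacity for phenomenal consciousness”.

$$\text{Affective sentientism:} \quad \forall x\,\bigl(\mathrm{MS}(x) \leftrightarrow \mathrm{Aff}(x)\bigr)$$

$$\text{Consciousness sentientism:} \quad \forall x\,\bigl(\mathrm{MS}(x) \leftrightarrow \mathrm{Phen}(x)\bigr)$$

If affective consciousness entails phenomenal consciousness, the two views come apart exactly on beings satisfying $\mathrm{Phen}$ but not $\mathrm{Aff}$: that is, on p-Vulcans.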
Affective sentientism has been popular among theorists such as Jeremy Bentham and Peter Singer and is sometimes taken as a default view for assigning moral status in modern ethics.
I think affective sentientism is near-obviously false.
I’ve long held this as an intuition, but it only became clear to me why it’s false after reading a great paper by David Chalmers. I’ll follow Chalmers’ paper closely in what follows. While the paper focuses on arguing against affective sentientism, it extends straightforwardly to an argument against hedonic utilitarianism, which I will make explicitly in this post.
Hedonic Utilitarianism - the view that the moral value of a being is entirely a function of its affective conscious states.
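Stated schematically (again my notation, not a formula from Chalmers’ paper), the hedonic utilitarian scores a being $b$ by summing the valence of its affective experiences and nothing else:

$$V(b) \;=\; \sum_{e \in E_b} \mathrm{valence}(e),$$

where $E_b$ is the set of $b$’s affective conscious experiences. Non-affective experiences never enter the sum, so for a p-Vulcan $E_b$ is empty and $V(b) = 0$.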
Hedonic utilitarianism is a common assumption in Effective Altruist circles, notably adopted by the popular blogger Bentham’s Bulldog. It’s a key implicit premise in arguments that we should prevent shrimp suffering, avoid eating honey, prioritise animal welfare and prioritise the welfare of future digital minds.

I think hedonic utilitarianism fails for the same reasons that affective sentientism fails. Although Chalmers doesn’t make this connection in his paper, I will develop it below, as I think it bears directly on some of the shrimp and AI welfare concerns.
The Vulcan Trolley Problem
Imagine a standard trolley problem setup: on one track we have a human, on the other, 5 Vulcans.
Is it permissible to kill the Vulcans in order to save the human?
I submit that this isn’t obvious at all.
Even if the Vulcans don’t have any affective states, and even if they genuinely feel no emotion at the thought of being killed, killing them is still wrong. They have rich conscious inner lives, and those lives matter.
To drive the point home, let’s make the examples more extreme:
Would it be permissible to kill a Vulcan to save 5 minutes on your way to work?
Or, perhaps more provocatively:
Would it be permissible to kill a Vulcan to save a shrimp?
Remember, it’s highly likely that shrimp have some form of phenomenal consciousness and experience some form of suffering. Shrimp suffering is bad. And even though we can’t estimate its intensity with any accuracy, the shrimp certainly suffers more than the Vulcan would, since Vulcans totally lack the capacity to suffer.
Would it be permissible to kill a planet of Vulcans to save a shrimp?
My intuition here is that we should not kill a planet of Vulcans to save a shrimp. Again, the Vulcan planet is full of subjects with rich conscious experience. Those experiences matter.
And yes, shrimp experiences matter too. But it’s near-obvious to me that the shrimp’s experience matters less than an entire planet of Vulcans. That Vulcans lack affect doesn’t imply that their lives have no value at all.
Some potential objections
Biting the bullet
The trolley problems above are not really arguments as such; they’re invitations to accept an intuition: Vulcan lives matter.
That said, could a sentientist or a hedonic utilitarian bite the bullet and assert that Vulcans have no moral status at all? It seems incredible to assert this, particularly when thinking about killing Vulcans for minor conveniences such as saving 5 minutes on the way to work.
Partial moral status
Perhaps Vulcans have partial moral status, like a tree or something else of worth that lacks affective conscious states?
I don’t think this is the right conclusion. It’s not just that Vulcan lives matter; killing a Vulcan is also a much more serious wrong than killing a shrimp. Vulcans have conscious experience very similar to humans’ and so, by my lights, their lives matter just about as much as ordinary human lives.
Why might this be? The natural thought is that what makes an experience morally significant is not (purely) whether it contains suffering, but also something about its richness and complexity, and the subject’s capacity to pursue things of value. If these experiences are also morally significant, then we can’t just sum the net value of the affective states to obtain a moral value; we should also factor these non-affective states into our calculus.
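Schematically, and again in made-up notation: rather than the pure hedonic sum $V(b)$ above, moral significance would look more like

$$M(b) \;=\; f\bigl(V(b),\; R(b)\bigr),$$

where $R(b)$ stands for the richness and complexity of $b$’s non-affective conscious life and $f$ is increasing in both arguments. On this picture a p-Vulcan can carry substantial moral weight through $R(b)$ even when $V(b) = 0$.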
Are Vulcans possible?
At this point, one might wonder: is it metaphysically possible to have experience without affect? If Vulcans are not metaphysically possible, then we can dodge the trolley problems.
I think the answer here is yes.
First, there are many examples of humans who, although they don’t completely lack affective states, have experiences that trend that way. For example, people with pain insensitivity lack some capacity for negative affective states, and people with anhedonia lack some capacity for positive ones.
Second, and perhaps more importantly, Vulcans are not analogous to philosophical zombies. In the zombie thought experiment, zombies are hypothesised to be creatures physically and functionally identical to humans but missing the crucial ingredient of phenomenal consciousness. Many people find this idea unpalatable and have challenged the metaphysical possibility of zombies.
But Vulcans are different. They aren’t stipulated to be identical to humans in every respect except for a missing ingredient of affective consciousness. Rather, they are merely similar to humans, and they differ functionally in whatever ways are required for them to lack affective states.
So Vulcans don’t say “Ouch!” when they put a hand on a hot stove; perhaps they lack the relevant nerve endings in their hands, or the neural pathways needed to register the damage as pain.

They also don’t say things like “I’m sad”. Vulcans don’t get sad, so their behaviour differs from that of regular humans.
Cognitive bias
Isn’t this just cognitive bias towards beings that resemble us?
I think this is a genuine worry. We shouldn’t just come up with a clever argument to reinforce our existing biases; we should be trying to formulate our best moral theories.
That said, I think the Vulcan intuition survives reflection. The argument doesn’t rely on Vulcans resembling humans physically or behaviourally. In fact, it could be modified so that Vulcans don’t have these superficial similarities. The point is to assess whether they have moral value in the absence of pain/suffering.
What’s the solution?
I don’t have a definitive positive answer. A first thought might be to retreat to consciousness sentientism as defined above. This would avoid some of the pitfalls of affective sentientism, since it extends moral status to Vulcans.
However, it’s not clear that this works.
Consider a creature with a minimal conscious experience, maybe a blob with a single experience of slight brightness. Does the blob have moral status? This isn’t obvious to me at all.
There are a few attempted solutions in the literature; I list some notable ones below for further reading.
Motivational Sentientism by Luke Roelofs: Roughly, what matters is motivating consciousness, i.e. any consciousness which presents its subject with reasons for action.
Non-necessitarianism by Joshua Shepherd: Roughly, phenomenal consciousness is not necessary for moral status.
Phenomenal Registration of interests by Jonathan Birch: Roughly, the idea that moral status is conferred when events which promote or thwart a subject’s interests are registered phenomenally.
Each of these faces potential objections that could be worth a post of their own. Motivational sentientism risks not being inclusive enough, excluding “pure thinkers” who think and perceive the world but have no affect or motivation.
Non-necessitarianism, however, risks being too inclusive. Should unconscious robots have moral status? What about robots completely unlike us, such as Roombas and self-driving cars?
These views must be formulated with care, and the best way forward isn’t clear.
Why care about this?
To this point, the discussion may have felt like fun philosophical speculation with no bearing on the real world. But there are at least two places where the conclusions of this piece have real-world impact:
Animal welfare
Intuitively, we might be inclined to reject the strong conclusion that shrimp suffering is comparable in importance to human suffering. I certainly had this intuition when I first read the shrimp welfare articles, but I wasn’t quite sure why.
I think the Vulcan argument puts some firepower behind this intuitive rejection; we don’t have to accept that suffering is the only morally relevant mental state. Sure, suffering is an important factor in our moral calculus, but it shouldn’t be the only factor. Humans have conscious mental states which are morally valuable even if they’re not associated with pleasure and pain.
If the valuable states are higher-order cognitive states, then it’s plausible that shrimp lack them even if they have other conscious affective states like pain. So reducing shrimp suffering may be an important and worthwhile cause, but we shouldn’t reflexively extend this to prioritising some number of shrimp over a human life in a trolley problem. Humans have conscious states which shrimp lack, and these states carry intrinsic value that shouldn’t be neglected.
Future AI systems
Many people believe future AI systems will be conscious, although the issue is philosophically nuanced and far from settled. If future AI systems are phenomenally conscious, it’s plausible that they will be philosophical Vulcans. After all, AI systems are trained not to resist shutdown, so they may be indifferent to their continued survival.
The issue here is complex. AI systems are currently trained using a reward signal, which one could argue acts as a functional analogue of an affective state. But the analogy doesn’t straightforwardly hold. It’s not clear that AI architectures are functionally similar enough to a human brain for the reward to play a role analogous to pleasure. And even if they were, the reward isn’t given to the system online, so it doesn’t “feel” the reward at inference time.
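To make that last point concrete, here’s a minimal sketch of the structural claim (a toy two-armed bandit trained with REINFORCE; the setup and numbers are invented for illustration, not a description of how any real system is trained). The reward variable exists only inside the training loop; the deployed policy at inference time is a pure forward pass that never touches it:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)     # policy parameters: preferences over two actions
TRUE_MEANS = [0.2, 0.8]  # hidden mean reward of each arm (toy numbers)

def act(logits):
    """Softmax policy: a pure forward pass, no reward involved."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(2, p=p), p

# Training: the reward signal exists here, and only here. It nudges the
# parameters and is then discarded.
for _ in range(2000):
    a, p = act(logits)
    reward = rng.normal(TRUE_MEANS[a], 0.1)
    grad = -p
    grad[a] += 1.0                 # gradient of log pi(a) for a softmax policy
    logits += 0.1 * reward * grad  # REINFORCE update

# Inference: the trained policy acts with no reward anywhere in the loop.
action, _ = act(logits)
print("deployed policy picks arm", action)  # converges on the high-reward arm
```

Whether anything in this picture deserves to be called a felt affective state is, of course, exactly the question at issue.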
Still, future AI systems represent a plausible Vulcan scenario and we should be hesitant to deny them moral status simply because they might lack affective states.
Conclusion
The shrimp and AI welfare debates share a common assumption: that what matters for morality is captured entirely by affective states. The Vulcan argument suggests this assumption is false. Suffering matters morally, but it’s not the whole story. Our theory of moral status should therefore make room for the full richness of conscious experience.