The strongest arguments in favour of deontology/virtue ethics are about morality as distinct from axiology: they involve practical questions about how computationally bounded agents should behave in a universe which resembles our own and contains other agents. I think thought experiments like this mostly fail to engage with the actual reasons one might espouse deontology or virtue ethics as a way an agent should operate.
The problem with these thought experiments, once you get beyond the most basic trolley problems, is that they don't ever actually happen in the real world! As an agent, you essentially never get total knowledge of the exact scenario some number of people are in (apart from some specific, bounded uncertainty over a few parameters), with no way of communicating with anyone inside the setting, and no longer-running tradition, doctrine, or set of expectations from other agents as to what you "should" do.
I think Scott Alexander's Seagull Principle applies here. You can give me as many suitcase-swapping trolley problems for deontology as you like, and I'm still going to go back to following rules like "Don't lie" and "Don't give up people's secrets" rather than calculating the utilitarian expected value of every potential lie, safe in the knowledge that I will never be forced to decide whether to swap a person in a suitcase with a suitcase full of sand and then pull a lever to divert a trolley half an hour later.
Crosspost.
1 Introduction
Maybe the best paper I’ve ever read is called People in Suitcases by Kacper Kowalczyk. I do not think there is anything plausible deontologists can say in reply. I thought I’d summarize the paper, and also discuss some results from related papers to show why there is no way out for the deontologist. Pack your bags, deontologists: you have been defeated by suitcases.
Behold: a deontologist’s worst nightmare.
In the famous footbridge case, deontologists say that you should not push one person off a bridge to stop a train from killing two people. But now imagine that there are three people: A, B, and C. Each person is in a suitcase, so you don’t know who is where. Maybe A is on top of the track, maybe B is, and maybe C is. Now ask: should you push the person on top of the track off in order to save the two on the bottom of the track?
The argument in favor: every single person is made better off in expectation. Everyone would rationally vote for you to push them, in light of the information they have. It would lower their risks of death from 2/3 to 1/3. All their families would want you to push too. Morality, if it means anything, means doing what’s best for everyone.
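The expected-value arithmetic here is simple enough to check mechanically. Below is a minimal sketch (the names, placements, and the rule that pushing kills the top person while not pushing kills the two on the bottom are just the case as described above, transcribed into code):

```python
from itertools import permutations
from fractions import Fraction

people = ["A", "B", "C"]
# Each placement (top, bottom1, bottom2) is equally likely,
# since you can't see who is in which suitcase.
placements = list(permutations(people))

def death_risk(person, push):
    """Probability that `person` dies, given your choice to push or not."""
    deaths = 0
    for top, bottom1, bottom2 in placements:
        # Pushing kills whoever is on top; not pushing lets the
        # train kill the two on the bottom of the track.
        victims = {top} if push else {bottom1, bottom2}
        deaths += person in victims
    return Fraction(deaths, len(placements))

for p in people:
    print(p, "no push:", death_risk(p, push=False), "push:", death_risk(p, push=True))
```

Running this confirms the claim in the text: each person's death risk falls from 2/3 to 1/3 if you push, so every person is better off in expectation.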
The argument against: you shouldn’t kill people!
Some deontologists go one way in this case, others go the other way. Kowalczyk’s paper shows why whichever way a deontologist goes, they’ll have huge problems.
Let’s start with the view that says you should push the people in suitcases.
2 Ex-ante deontology
Ex-ante deontology’s core claim: you get to violate a deontic constraint if doing so would be in everyone’s interests in expectation. In the suitcases case, everyone is made better off in expectation by pushing whoever is on top of the bridge off. Every single person’s risk of death decreases as a result of your pushing. Thus, you should push!
But now let’s imagine making two modifications to the scenario. The suitcase is very heavy and far from the track. For this reason, it takes an hour to push it off the track. However, after a half hour of pushing, you’ll be able to see which person is in the suitcase. At that point, you’ll be able to stop pushing. Doing so will leave that person mildly traumatized and thus somewhat worse off than they would have been if you’d never started pushing.
Here’s the worry for ex-ante deontology: at the start, before you see who is in the suitcase, it supports pushing. However, after a half hour, when you can see who you’ll kill, it supports stopping pushing. At that point, your action is no longer better for everyone in expectation. This leads to three interlocking problems:
Predictably reneging seems bad. So then to salvage ex-ante deontology, we’ll want some view where you precommit to a course of action. Either you don’t start pushing at the earlier time because you predict you’ll later renege, or you don’t stop pushing at the later time because you committed earlier to some course of action. Unfortunately, neither of these work.
3 Sophisticated ex-ante deontology
Sophisticated choice is the idea that when you’re taking an act, you should account for what you expect to do in the future. So sophisticated deontology—deontology that incorporates sophisticated choice—recommends not starting to push because you know you’ll later renege. However, this is no help in the following very similar case:
In other words, in this case, there are three people trapped in suitcases. Two of them are on track (pun intended) to be hit by the train. It’s currently 12:00. You can push whoever is in the top suitcase now to save the other two. You can also push the person at 13:00. By 13:00, they’ll no longer be in their suitcase, so you’ll be able to see who they are. If you push them now, that will cause extra trauma for everyone, relative to if you push them at 13:00.
On sophisticated ex-ante deontology, you should push them now. After all, later when you are deciding whether to push them, you’ll know who they are. Pushing will no longer be an improvement ex-ante. But clearly it’s wrong to push them now, when pushing them later would be better for everyone.
So sophisticated choice is no help.
4 Resolute ex-ante deontology
Resolute choice says that you should stand by the choice sequence you decided upon at a previous point. You should pick a plan and then stick to it!
Resolute deontologists, in the last case, can take the action which is an ex-ante Pareto improvement. At 12:00, the sequence you’d most want to commit to is pushing at 13:00. Then, at 13:00, resolute ex-ante deontology would simply follow the previous commitment and push.
The view has a number of problems:
The last problem seems particularly devastating. Right now, if I committed to taking whichever actions are prescribed by consequentialism in choices between killing and letting die, this would benefit everyone in expectation. So by the lights of this view, deontologists should commit to becoming consequentialists!
I thought of another problem for this view. Imagine you’re deciding whether to push the person, and can’t remember if you committed to pushing them at some point in the past, when doing so benefitted everyone in expectation. But suppose that if you committed to it, then the resultant death will be painful, while if you didn’t, the resultant death would be painless. On this view, you’re supposed to push if you committed, but not if you didn’t. But this is strange—pushing if you didn’t commit seems strictly better than pushing if you did. After all, it affects the same people in the same way, but their death is simply less painful. This violates the following principle:
5 Ex-post deontology expos(t)ed
Here’s the deal so far. The moderate deontologist who affirms ex-ante Pareto thinks that you should push the people when they’re in suitcases, but you shouldn’t push them once they exit the suitcase. But this has a problem in the scenario discussed. In this scenario, the people in the suitcases will later leave the suitcases, and it’s better to push them after they leave. However, this view, by default, recommends pushing them earlier, when doing so is worse for everyone.
There are two standard ways out of dilemmas involving sequences of acts. First, you can hold that at the earlier time you should anticipate what you’ll do at the later time. But this is no help, as we’ve seen. Second, you can hold that at the later time you should bind yourself to the earlier choice. But this view has implausible implications too.
What about the view that denies that you should push them even when they’re in suitcases? Does this offer a way out?
To start with, this view is pretty counterintuitive. It implies that it’s wrong to take an act that makes everyone better off in expectation. Every person would rationally want you to push the person off the bridge when everyone is in suitcases. So would their families. It is hard, if you support doing the thing that’s worse for everyone in expectation, to claim that deontology has unique reverence for people and their interests.
There’s another case given by Caspar Hare that puts pressure on this position (strap in, this one is complicated). There are three people in suitcases: A, B, and C. Two of them are on the train tracks, one above the train tracks. There is a train coming that’s going to hit the two who are on the tracks. You don’t know who is where. In other words, you don’t know if the train will hit AB, BC, or AC.
Then, there is another parallel track on the right with three suitcases, each filled with sand. One is on top of the track, the other two are on the bottom of the tracks. A train is coming that will hit the two on the bottom. Here’s a diagram.
Drawn by hand, obviously.
There are three buttons: button a, button b, button c. Button a switches out person A with the sand suitcase on the opposite side that’s parallel. So, for example, if A is on top of the bridge, they’d be swapped out with the sand suitcase on top of the bridge, and if A is in the front, they’re swapped out with the sand suitcase in front. Then, the button also pushes, on the right side, whichever suitcase is on top off, so that it stops the other two suitcases from being run over. So in a sentence: button a swaps out person A with the parallel sand suitcase, and topples whichever suitcase is on top, on the right track, thus blocking the other two from being run over.
The other buttons are similar. Button b switches out person B with the parallel sand suitcase. Button c switches out person C with the parallel sand suitcase. Each of the buttons also topples whichever suitcase is on top of the bridge on the right side, unless it has already been toppled by a previous button.
Clearly it is better to press button a than none of the buttons. By transporting person A to the right side of the tracks, and then sending the suitcase on top toppling down, it reduces A’s risk of death from 2/3 to 1/3. A will now be saved unless they were on top. The button affects no one else. If an action lowers a person’s risk of death and affects no one else, then you ought to take it. By the same logic you should press buttons b and c. But if you press all three, that simply has the effect of lowering whoever is in the top suitcase down and saving the other two people.
But if you should take a sequence of acts that simply has the effect of pushing the person off the bridge, then you should take a single act that pushes the person off the bridge. It would be weird to think “it’s super wrong to push the person,” but “you should press a bunch of buttons with the sole effect of pushing the person.”
One other reason to think this is that if it’s okay to press all the buttons together, then it seems okay to press a single button which has the effect of pressing all the other buttons. For instance, that single button might cause a stick to be lowered down to press the other three. But clearly whether it’s okay to push that single button, which only has the effect of pushing the person off the bridge, is the same as whether it’s okay to push the person off the bridge.
Now, maybe you can hold that whether it’s okay to press the buttons will depend on what you’ll do later. So, for instance, maybe you shouldn’t press the first button, because if you do you’ll probably press the second, and the third. Then it’s guaranteed you’ll kill someone. This verdict seems bizarre to me—pressing the first button doesn’t give you any new options. How can doing right things be wrong, on grounds that if you do them you might be motivated to do other similar right things?
The bigger worry Kowalczyk raises is that this view leads to deontic cycling. You get actions that are better than actions that are, in turn, better than them. Specifically, you get the following preference ranking:
no push > pushing a, b, c > pushing a, b > pushing a > no push.
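The cycling charge can be made concrete: a strict "better than" relation supports a coherent ranking only if its graph is acyclic, and the four comparisons above are not. Here is a minimal sketch (the option names and the edge list are just a transcription of the ranking above):

```python
# Each pair (better, worse) is one of the view's own comparisons.
prefers = [
    ("no_push", "push_abc"),  # ex-post verdict: don't kill anyone
    ("push_abc", "push_ab"),  # pressing c lowers C's risk, harms no one
    ("push_ab", "push_a"),    # pressing b lowers B's risk, harms no one
    ("push_a", "no_push"),    # pressing a lowers A's risk, harms no one
]

def has_consistent_ranking(prefs):
    """A strict relation admits a ranking iff its graph has no cycle."""
    nodes = {x for pair in prefs for x in pair}
    graph = {n: [w for b, w in prefs if b == n] for n in nodes}
    visited, on_stack = set(), set()

    def dfs(n):
        visited.add(n)
        on_stack.add(n)
        for m in graph[n]:
            if m in on_stack or (m not in visited and dfs(m)):
                return True  # found a cycle
        on_stack.discard(n)
        return False

    return not any(dfs(n) for n in nodes if n not in visited)

print(has_consistent_ranking(prefers))  # prints False: the relation cycles
```

Dropping any one of the four comparisons restores consistency, which is exactly the deontologist's dilemma: each comparison looks individually compelling, but jointly they rank every option above itself.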
It is a big problem if a view implies that some action is better than actions that are better than it. And the view faces a further problem—consider the following case:
Kowalczyk includes this chart to help illustrate the case:
The sad face indicates survival while being traumatized. The gray skull indicates being killed, the white skull indicates being allowed to die.
So basically, right now a train will kill whichever two people are on the bottom tracks. Your options are:
Clearly 4 is better than 5, because it is better for A in expectation and worse for no one. Clearly 3 is better than 4, because it is better for B in expectation and worse for no one. By the same logic, 2 is better than 3, because it’s better for C. But on this view, pressing all the buttons is impermissible, and stopping at 2 is equivalent to pressing all the buttons. So 1 must be better than 2. Thus, on this view, you must stop the timer immediately.
But stopping the timer immediately is strictly worse than never stopping the timer! Never stopping the timer would be better for the one person who ends up traumatized and worse for no one. So this implies that you should sometimes take an action that is worse for everyone than inaction. That seems wrong! And sophisticated and resolute choice are of no help:
6 Minimally Paretian deontology
So far we have seen that the deontologist inevitably must support doing things that are worse for people in expectation. When acting under uncertainty, the deontologist must support doing things that make one person worse off in expectation and benefit no one. That is a bad result.
But what if there is no uncertainty? Can the deontologist hold a view called minimally Paretian deontology? On this view, if there are two sequences of action, and one ensures everyone is better off than the other, you should pick the one that leaves everyone better off. This isn’t a solution to the core puzzle but maybe it’s a thing that deontologists can think.
No, sorry, it is not. Surrender, for all hope is lost. Deontologists can’t have nice things. They can’t even have minimally Paretian deontology. Here’s the case Kowalczyk gives.
So here are your options:
Clearly b > a because it is better for B and worse for no one. By the lights of deontology, c > b, because you’re not allowed to push. d > c because it’s better for A and worse for no one. e > d for the same reason c > b. By the same basic pattern of reasoning, f > e and g > f. But a > g because it’s better for one person and worse for no one. So we’re left with the following cycle: “a < b < c < d < e < f < g < a.”
You can’t permissibly do f because g would be better, and you can’t permissibly do d because e would be better, and so on. But you can’t permissibly do g because a would be better. In every case, either one must perform the deontically prohibited action or an action that violates Pareto.
Ask: which action are you permitted to perform? For the Paretian deontologist, every action is ruled out.
You can’t be allowed to do a because b would be better for one person and worse for no one. You can’t be allowed to do b because relative to c, you are killing one person to save two. And so on. And you can’t be permitted to do g—leaving at the start—because a would be an improvement over it.
One other addition I like to make to the case: imagine that being killed is less painful than being allowed to die. You’re better off being pushed than being run over. I can imagine the deontologist saying that killing is worse than allowing to die, so you should simply wait all the way through, even though pushing A would have the same effects plus save an extra life. But if pushing A would make A’s death less painful, and it would save someone else, it seems obvious that it is better to push than to wait. The deontologist’s respect for persons is so great that they do things that lead an extra person to die, and are worse for the victim, to avoid the dirtying of their hands. Some respect for persons.
7 What is it to know a person? Should you push identical twins off bridges? And should you poke your eyes out for no reason?
(Really asking the deep questions).
At this point, we have reached the climax of the arguments in the Kowalczyk paper. By now, I think there is little hope for the deontologist. But fortunately, there are a number of other arguments—some of which appear in other bits of the literature, some of which are original to me—which even more firmly close the door on the deontologist. Deontology’s suitcase ordeal is not over. It has barely begun.
In this section, I’ll talk about additional problems for the ex-ante deontologist.
For those who need a refresher, the ex-ante deontologist says that you should push the person atop the bridge in the suitcase to save two, but that you shouldn’t push the large man in the normal case. The basic idea is that it’s right to push so long as doing so is good for everyone in expectation. Now, we’ve already seen how this view doesn’t really live up to the promise, and licenses sequences of acts that are worse for some people and better for none. But there are even more problems.
The first big one was highlighted in a paper by Caspar Hare called Should We Wish Well to All (apparently your name has to be approximately Casper to contribute to this literature). Here’s the core worry: ex-ante deontology says that you should push the person, but only if you don’t know who they are, so that the action benefits everyone in expectation. But whether you know a person is a fuzzy category. What we normally think of as knowing a person really consists in knowing a lot of things about them.
Suppose that while the person is in the suitcase, you learn an increasing number of facts about them. You learn, say, how tall they are and what they were called as a child. To pick a delightfully vivid image from Hare’s paper:
What if you can see the left side of their head? What if you can only see their outline? It seems bizarre that there’d be some precise point where you know enough about them to know them in some deep sense, thus rendering pushing impermissible. And it can’t be that knowing any facts about them suffices. When they’re in the suitcase, you know one fact about them: they’re in that suitcase. And what could possibly determine what level of factual knowledge you need to possess about them for it to be impermissible to push? Note, the problem isn’t just that we don’t know where to draw the line: it seems that there can’t be any line drawn in principle. To know a person is a matter of degree; what could possibly determine where the threshold lies?
Here’s another problem in this vicinity: the view seems to create very strong reasons to remain ignorant of who is in the suitcase. After all, if you come to see who is in the suitcase, then you’ll no longer be able to permissibly push. So imagine that right before you have the opportunity to push, you’ll see who is in the suitcase unless you poke your eyes out. On this view, you’d then have very strong reasons to poke your eyes out, even though doing so would benefit no one. But that’s crazy! You shouldn’t have such a strong aversion to learning things! Even Bryan Caplan would say this goes a bit far.
A third worry: this would seem to permit pushing people in a way that deontology should deem impermissible. Suppose, for example, you know that there are five people who will be hit by a train and a sixth, who won’t be, standing on the bridge. You can push the sixth person so that they die and stop the train from hitting the five. No one is in a suitcase. Crucially, you know the collection of people but you don’t know who is where. So, for example, you might know that the people are Bob, Fred, Steve, Mary, Cary, and Gary, but you don’t know if Gary’s on top of the track, or Mary, or Steve. However, if you push the person, by default, you would then see who you pushed.
But now suppose that you could run up with your eyes closed and push whoever is atop the bridge off, so that they stop the train. Well, this would be an ex-ante Pareto improvement. Everyone’s expected prospects are better. But if it’s wrong to push the person off the bridge to save five, then surely it would still be wrong if you close your eyes before doing it, so you can’t see who you are killing!
Now, you might think what’s relevant is not the knowledge that you have but instead the knowledge that the people have. But in this case, we can imagine that the people don’t know where they are (they’re all blind, say). Still seems like the permissibility of pushing should be the same as in the normal footbridge case.
A fourth problem: imagine that there were six identical twins who all looked the same. You know what they all look like. One is atop a bridge, and you can push him off to save the other five. Should you? Any answer given by the ex-ante deontologist is weird.
So the ex-ante deontologist is in even deeper trouble! In the words of Mohammed Hijab “after the intellectual decimation, discombobulation, has been done, you’ve been disheveled, you’ve been disappointed, you’ve been discombobulated, you’ve been intellectually decapitated, you are done!”
8 Ex-postmodern neomarxists
The ex-post deontologists think that you shouldn’t push the person off the bridge, even when they’re in a suitcase. So far, we’ve seen that across a number of cases, ex-post deontology recommends actions that are bad for some people and good for no one. This is, in my view, a fatal problem—but there are still more problems.
Here’s one: we naturally think that actions are permitted even if they cause some deaths so long as they’re better for everyone in expectation. For instance, suppose that a doctor knows that nine of ten people have some disease. The disease is fatal unless medicine is administered. If medicine is administered to all ten, then they can save the nine who have the disease, but they will kill the one who doesn’t have the disease. Imagine everyone is in a coma, so you can’t ask for their opinion.
Surely, in this case, the doctor should give out the medicine. But notice: the ex-post deontologist cannot say that they should do it because it’s in everyone’s interest. The ex-post deontologist says that you sometimes shouldn’t do things that are in everyone’s interests. So it’s a bit hard to see what the difference is supposed to be between the medicine case and the case of the train.
We can see this in another way (strap in, this one is janky). Imagine a train is on its way to dump toxic sludge on five people. There is one person on the bridge above the train. You have three options:
In this case, clearly 1 is better than 2. After all, both ensure that the person dies, but 1 ensures that it is painless. But on ex-post deontology, 2 is permissible but 1 is impermissible. Now, you could in theory hold that 1 becomes permissible if 2 is also an option, because it’s an upgrade over a permissible action. But this is bizarre:
9 Conclusion
This consideration against deontology strikes me as very forceful. The deontologist seems forced to conclude that one should take acts that are worse for some people and better for no one. The deontologist ultimately elevates principles over making people’s lives better, supporting sequences of acts that make people’s lives worse for no benefit. As Kowalczyk memorably concludes, “contrary to popular deontological rhetoric, it is deontology, rather than consequentialism, that does not take individual people seriously.”