"What's the worst that can happen?" goes the optimistic saying. It's probably a bad question to ask anyone with a creative imagination. Let's consider the problem on an individual level: it's not quite the worst thing that can happen, but it would nonetheless be very bad, to be horribly tortured for a number of years. It is one of the worse things that can realistically happen to one person in today's world.
What's the least bad, bad thing that can happen? Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.
For our next ingredient, we need a large number. Let's use 3^^^3, written in Knuth's up-arrow notation:
- 3^3 = 27.
- 3^^3 = (3^(3^3)) = 3^27 = 7625597484987.
- 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(... 7625597484987 times ...)))).
3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall. You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times. That's 3^^^3. It's the smallest simple inconceivably huge number I know.
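To make the notation concrete, here is a minimal sketch of Knuth's up-arrow operator in Python (the function name `up_arrow` is my own choice, and evaluation is only feasible for tiny inputs — 3^^^3 itself is far beyond any computation):

```python
def up_arrow(a, n, b):
    """Compute a (up-arrow^n) b in Knuth's up-arrow notation.

    One arrow (n=1) is ordinary exponentiation; each extra arrow
    iterates the previous operator: a ^^..^ b = a ^..^ (a ^^..^ (b-1)).
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # standard convention: an empty iteration yields 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# up_arrow(3, 3, 3) would be 3^^^3: an exponential tower of 3s
# 7625597484987 layers tall -- do not try to evaluate it.
```

Even the first step of 3^^^3, namely 3^^4 = 3^7625597484987, already has trillions of digits, which is why the number has to be described rather than computed.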
Now here's the moral dilemma. If neither event is going to happen to you personally, but you still had to choose one or the other:
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
I think the answer is obvious. How about you?
Strongly disagree.
Utilitarianism did not fall from a well of truth, nor was it derived from perfect rationality.
It is an attempt by humans, fallible humans, to clarify and spell out pre-existing, grounding ethical beliefs, and then turn this clarification into very simple arithmetic. All this arithmetic rests on the attempt to codify the actual ethics, and then see whether we got them right. Ideally, we would end up in a scenario that reproduces our ethical intuitions, but more precisely and quickly, where you look at the result and go “yes, that expresses what I wanted it to express, just even better than I expressed it before”. You would recognise your ethics in it. Divergences in judgement would be rare, and would dissolve upon closer inspection; if you thought and felt about them, you would conclude that a superficial thing had led your judgement astray, that the arithmetic had captured what your judgement should have been, and change to it.
E.g. in trolley problem variations (pushing a button to reroute a train, vs. dragging a person onto the train tracks and tying them down to stop the train with their body), I will encounter the fact that I feel more reluctant to personally, closely and physically bring someone to death by hand than to press a button, even if the reasons for it are equally just or unjust; and then I will decide, upon consideration, that these things are actually identical (I am doing the same amount of hurt, and violating the same ethical principles), and that I am merely hiding the full horror from myself when I envision pressing a button, and that the solution I hence want is to visualise the full horror when I press the button, too, and to treat both scenarios the same (even if that makes me less inclined to press the button). In this case, the math points out that I was irrationally biased against recognising harm when it is less viscerally close to me. The button pressing is just as bad; I am just shielded from having to see the badness, and clarity brings it forth.
But this is *not* how I feel about the classic utilitarian counterexamples.
If you apply utilitarianism to an ethical scenario, and the result runs massively counter to the ethical beliefs that spawned utilitarianism in the first place (e.g. telling me I should torture someone to avoid *a lot* of dust specks), not just to a degree that briefly feels confusing and uncomfortable until you assess it clearly, but to a degree that deeply and persistently repulses and horrifies me, that has me certain I would not wish to live in this world, and nor would my ethical mentors – then my conclusion is that my utilitarian model clearly failed to adequately capture the ethical beliefs in question, *and the utilitarian model needs to be overhauled*. The arithmetic may check out, but the far more important assumptions that spawned and legitimised it clearly were not adequate depictions of my ethical beliefs.
Most people do not want to live in Omelas. They do not want to see the organs of one person harvested to heal a dozen. They do not want themselves and an inconceivable number of humans spared a speck of dust in exchange for someone being tortured. They do not want to live in this universe, regardless of whether they are the poor sod being tortured, or the privileged supermajority spared an inconsequential inconvenience in exchange for another person being tortured. They don’t want to live in the artificial happy box that stimulates their nerves to be maximally happy all the time.
It is ludicrous to tell them that they must convince themselves that they want this horrible world and live in it, when collectively, they very clearly do not: they do not prefer it, they feel bad about it, they do not feel their happiness outweighs this suffering, they feel that it is deeply wrong, they do not think that this is what they meant when they explained that the consequences of actions have moral relevance to them, or that promoting happiness and avoiding pain has moral relevance for them; they clearly feel something important was forgotten here. *So in what meaningful way is it better, if none of the people involved judge it to be better or want it?*
And it is okay that they do not want it. There is no rationalist obligation to find that world preferable; you do not need to talk yourself into it. An equation you invented in an attempt to capture your ethics should not hold power over you when it turns out that it has not captured them. It was meant as a tool, not a guide. If this is not the world you want, the ethical code that spawned the arithmetic was clearly in error, or at least crucially incomplete.
And that is *because* utilitarianism is crucially incomplete. People have been saying that, loudly, ever since the first person came up with it. There are entire books illustrating the ways in which it breaks. In every single philosophy course you run, many of your students will immediately speak up, saying this is not their ethics, that they are opposed to it, and pointing out problems. If you taught an ethics class and claimed this was the only rational ethical system, you’d get complaints to the dean for teaching an obvious falsehood. There are numerous alternative ethical systems in which happiness or consequences do not matter at all.
Take the example of harvesting organs from live innocents to save multiple people: Even if we edited all the memories of the people who loved the person being harvested, so none of them feel grief and regret, even if we kept it secret, so society is not destabilised by the knowledge that random innocents regularly get harvested, even if there are specifically no consequences beyond the harvested person being killed and a dozen other people being saved… I wouldn’t want to live in this world, I wouldn’t want it to be. It repulses me, in a way that cannot be outweighed. Killing sentient innocents to use their bodies is wrong, even if their bodies are very useful. If your ethics say these things are fine, they have failed to capture something of supreme ethical importance to me, and we need to carry on looking in our attempt to codify ethics.
Hence, to anyone reading this story and feeling that the answer is not obvious, or that they do not want to choose the torture option: This does not mean, in any way, that you are not a rational person.