[Cross-posted from Grand, Unified, Empty.]
The CFAR Handbook has a really interesting chapter on policy-level decision-making (pages 170-173). It’s excellent, grounds much of this post, and comes with some classic Calvin & Hobbes comics; I recommend it. If you’re too lazy for that, I’ll summarize with a question: What should you do when you’ve made a plan, and then there’s a delay, and you really don’t want to do the thing you’ve planned anymore? The handbook starts with two basic perspectives:
“Look,” says the first perspective. “You’ve got to have follow-through. You’ve got to be able to keep promises to yourself. If a little thing like a few hours’ delay is enough to throw you off your game, there’s practically no point in making plans at all. Sometimes, you have to let past you have the steering wheel, even when you don’t feel like it anymore, because otherwise you’ll never finish anything that takes sustained effort or motivation or attention.”
“Look,” says the second perspective. “There’s nothing to be gained from locking yourself in boxes. Present you has the most information and context; past you was just guessing at what you would want in this moment. Forcing yourself to do stuff out of some misguided sense of consistency or guilt or whatever is how people end up halfway through a law degree they never actually wanted. You have to be able to update on new information and adapt to new circumstances.”
Policy-level decision-making is the handbook’s suggested way to thread this needle:
[W]hat policy, if I followed it every time I had to make a decision like this, would strike the right balance? How do I want to trade off between follow-through and following my feelings, or between staying safe and seizing rare opportunities?
It’s obviously more work to come up with a policy than to just make the decision in the moment, but for those cases when you feel torn between the two basic perspectives, policy-level decision-making seems like a good way to resolve the tension.
There is a peculiar manoeuvre in philosophy, as in life, called “biting the bullet”. Biting the bullet in life is to accept and then do something painful or unpleasant because you don’t think you have any better alternatives to get the thing you want. Want to swim in the ocean, but it’s the middle of winter? You’re going to have to “bite the bullet” and get in even though the water will be freezing cold.
Biting the bullet in philosophy is analogous; it means to accept weird, unpleasant, and frequently counter-intuitive implications of a theory or argument because the theory or argument is otherwise valuable or believed to be true. If you think that simple utilitarianism is the correct ethical theory, then you have to deal with the transplant problem, where you have the option to kill one random healthy person and use their organs to save five others. Really basic utilitarianism suggests this is a moral necessity, because five lives are more valuable than one life. One way to deal with this apparently appalling rule is to “bite the bullet”: accept it, and actually argue that we should kill people for their organs.
Bringing this back to policy-level decision-making: I realized recently that I don’t have a policy for biting bullets, in philosophy or in life.
In life, a policy for biting bullets is probably useful, and I’m sure there’s an important post to be written there, but at least personally I don’t feel the lack of policy too painfully. If there’s a thing I want and something in the way, then it’s a pretty standard (though frequently subconscious) cost-benefit analysis based on how much I want the thing and how much pain or work is in the way. If the analysis comes out right, I’ll “bite the bullet” and do the thing.
Philosophy, however, is a different matter. Not only have I realized that I’ve been biting bullets in philosophy somewhat inconsistently; I’ve also noticed that this inconsistency lies behind many of the times I’ve agonized at length over an argument or philosophical point. I think a policy for biting philosophical bullets would help me be more consistent in my philosophy, and also save me a bit of sanity on occasion.
So what’s a good policy for biting philosophical bullets? As a starting point, let’s copy the handbook and articulate the most basic (and extreme) perspectives:
“Look,” says the first perspective. “Philosophy is fundamentally grounded in our intuitions. You’ve got to be consistent with those, in the same way that any theory of physics has to be consistent with our empirical observations. If a philosophical theory asks you to deny an intuition, then that theory can’t be ultimately true; it might still be a useful approximation, but nothing more. And anyway it’s a slippery slope; if you accept biting bullets as a valid epistemic move, then every theory becomes equally valid because every objection can be ‘bitten’ away.”
“Look,” says the second perspective. “Our intuitions are basically garbage; you can’t expect them to be internally consistent, let alone universally correct. Humans are flawed, complicated creatures mostly built on hard-wired heuristics derived from a million years living on the savanna. A philosophical theory should be free to get rid of as many of these outdated intuitions as it needs to. After all, this is one of the ways we grow as people, by replacing our moral intuitions when persuaded by good arguments.”
Obviously both of these positions are somewhat exaggerated, but they do raise strong points. We don’t want a policy that lets us bite any old bullet, since that would significantly weaken our epistemology, but at the same time we do want to be able to bite some bullets or else we end up held captive by our often-flawed intuitions. But then how do we decide which bullets to bite?
Instinctively, there are two sides to the question of biting any particular philosophical bullet: the argument, and the intuition. In a sense, the stronger of the two wins; a strong argument countered by a weak intuition suggests biting the bullet (the argument wins), whereas a weak argument faced with a strong intuition suggests the opposite (the intuition wins). This is a nice model, but only succeeds in pushing the question down a layer: what do we mean by “strong” and “weak”, and how do we compare strengths between such disparate objects as arguments and intuitions? What I really want is Google’s unit conversion feature to be able to tell me “your intuition for atheism is worth 3.547 teleological arguments”. Alas, real life is somewhat messier than that.
“Strong” and “weak” for an intuition may be hard to precisely pin down with language, but at the very least I have a clear felt sense for what it means that an intuition is strong or weak, and I suspect this is common. Somewhat surprisingly, it is how to consider “strong” and “weak” with respect to arguments that seems to give more trouble. Assuming of course that the argument is logically valid (and that the empirical facts are well-specified), what makes a philosophical argument “stronger” seems to boil all the way down to intuitions again: a stronger philosophical argument is backed by more and/or stronger intuitions.
But if it’s true that argument strength is ultimately just intuition strength, then our policy for biting bullets can be summarized as “choose whichever side has the stronger intuitions”. This defeats the whole purpose of the exercise, since the previous times I’ve found myself agonizing over biting a bullet were precisely those when the intuitions on both sides were already well-balanced; if there had been a clear winner, I wouldn’t have had to work so hard to choose.
Perhaps this is a fundamental truth, that choosing to bite a bullet (or not) has to be a hard choice by definition. Or perhaps there is some other clever policy for biting bullets that I just haven’t managed to think of today. I’m certainly open to new suggestions.
All of this talk of biting hard things has reminded me of a poem, so I’ll leave you with these two stanzas from Lewis Carroll’s You Are Old, Father William:
“You are old,” said the youth, “and your jaws are too weak
For anything tougher than suet;
Yet you finished the goose, with the bones and the beak—
Pray, how did you manage to do it?”
“In my youth,” said his father, “I took to the law,
And argued each case with my wife;
And the muscular strength, which it gave to my jaw,
Has lasted the rest of my life.”
One thing that seems relevant to philosophical "bullets" is how often they come up. If your theory says that you should kill one living person to save five with their organs, that situation comes up all the time: not in the sense that you know who would receive the organs, but we all know that transplant lists are long and that people regularly die because no compatible organ is available.
Obviously bringing up Immanuel Kant is not exactly breaking new ground, but it's worth considering what would happen if many people bit that particular bullet. And that's a major difference between the transplant problem and the triage problem: few people are triage doctors or nurses, and even for them that particular situation is rare. For the transplant problem, by contrast, almost every adult human being has the tools to physically murder a healthy person in a way that preserves their organs.
So I would think that a policy for "biting" philosophical "bullets" should include something about how often the hypothetical would actually come up in everyday life. If it comes up frequently, the intuition that you shouldn't do the thing is probably more valuable than the clever argument. Intuitions should be considered "weaker" when applied to situations that are rare or nigh-impossible, since our intuitions were developed for common situations.
Compare the transplant problem to a similar triage problem: you have six life-or-death patients, one of whom has a condition that takes a lot of resources to cure, while the other five take only a little, and you can save either the hard case or the five easy cases. From the point of view of basic utilitarianism, the transplant problem and the triage problem seem almost identical. But it seems far easier to let the hard case die than to disassemble a living person. That an intuition puts a dent in the general pattern of a rule doesn't mean that the rule stops applying.
Also, if you bite the bullet that it's okay to repurpose healthy lives for transplants, you could think of an "improvement": instead of taking all five organs from one person, take one organ from each of five different healthy people, which could give each of them an increased chance of staying alive (even if you can't live without a liver, a life of dialysis might be preferable to death). The step from agreeing that the rule should be followed in that case to actively advocating killing people seems hasty and unnecessary.
For the most part, philosophy doesn't have a deadline; we can ponder these things to our heart's content, and then a little more. So the contradictions should puzzle us, but we never need to commit to an answer that we know is partially wrong.
I have also found that understanding how to apply a rule to a particular situation can override how the intuition initially seems, and that theorizing about an intuition gives it more legitimacy. For example, if people knew that doctors might actively harm patients, people would be reluctant to seek medical attention. This kind of rule would strongly differentiate between the transplant problem and the triage problem. Understanding that such a rule could conflict with "help as much as you can" requires intricate and detailed application rather than a vague general case. If you would let the hard case die in triage but would keep the healthy person alive in the transplant case, does that mean you don't follow, or don't believe in, utilitarianism? If you would kill to save millions but wouldn't kill to save thousands, do you believe in the Hippocratic oath?