A few years ago, I tried convincing some commenters that hypotheticals were important even when they weren't realistic. That failed, but I think I've spent enough time reflecting to give this another go. This time, my focus will be on challenging the following common assumption:
The Direct Application Fallacy: If a hypothetical situation can't conceivably occur, then the hypothetical situation doesn't matter
I chose this name because the fallacy assumes that the only purpose of discussing a hypothetical is to know what would happen, or what we should do, in such a situation. It ignores the other lessons that such a discussion may teach us, and the logical consequences it might have for situations that actually do occur.
(Note: This post was renamed from: Unrealistic Hypotheticals Still Contain Lessons)
Exploiting Opportunities for Learning
In The Least Convenient Possible World, Scott Alexander considers the classic objection to utilitarianism that it implies a surgeon should be prepared to harvest the organs of a random traveller if it would allow them to save five other patients. Scott argues that pointing out that the random traveller's organs would probably be genetic mismatches, while "technically correct", also "completely misses the point and loses a valuable opportunity to examine the nature of morality". He also notes that responding in this manner leaves too much "wiggle room". Even if we aren't consciously aware of it, we often construct arguments to avoid believing things that we don't want to, so we can improve our rationality by limiting our ability to avoid understanding the other person's perspective. While Scott is referring to people who completely miss the point of the hypothetical, I think that dismissing a hypothetical as unrealistic often also sacrifices opportunities for learning, as we'll see below.
Practice Exercises Don't Need to be Real
Imagine that you are an instructor setting problems for your students so that they can learn an area like economics, physics or applied maths. How strongly do you care about these exercises being realistic? I would argue that this isn't very important and that this further applies to philosophy:
1. Simplification: Students may be at a point where a realistic exercise would be quite beyond their abilities. For example, you may want to ignore friction or air resistance because the students haven't been taught that yet, even if this makes the situation completely unrealistic. Similarly, philosophical problems often assume "no-one will ever know" so that you can discuss moral principles without requiring a detailed understanding of human psychology and sociology.
2. Testing for Understanding: Students are often assigned questions as a way to gauge their understanding of a concept. Maybe no object has zero mass, but if someone can't tell you that this should create no gravitational force, they must have a misunderstanding somewhere. Similarly, even if utility monsters don't exist, they provide a useful tool for clarifying utilitarianism, as they explain why "the greatest good for the greatest number" isn't a completely accurate characterisation.
3. Realism Trades off Against Other Factors: Perhaps you could find simple exercises that are realistic, or test for understanding with more realistic scenarios. However, your goal is for your students to learn, and this depends on a whole host of factors. If you insist on questions always being realistic, then this trades off against other dimensions, such as engagement, memorability and the time required to construct a situation. This last dimension is particularly important for conversations, where people have to be able to construct these situations on the fly.
This is taken for granted when talking about maths and physics, but if you want to learn to deeply understand philosophy, you'll have to accept unrealistic practice questions as well.
Applying the Unrealistic to the Real
In maths, it is very common to take the limit of a formula as some variable, such as x, approaches infinity. This technique is very useful for approximations. For example, it's easier to consider the limit as x approaches infinity of (2x^2-x+10)/(x^2+79) than to substitute in a specific value like a million. This is applied constantly throughout programming with Big-O Notation. Even though an infinite dataset is completely unrealistic, this heuristic is still incredibly useful for designing algorithms.
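To make the approximation point concrete, here's a minimal sketch (the function name `f` is just my label for the ratio from the text) showing that the exact value at a large input is, for practical purposes, the limit:

```python
def f(x):
    """The ratio from the text: (2x^2 - x + 10) / (x^2 + 79)."""
    return (2 * x**2 - x + 10) / (x**2 + 79)

# As x grows, the lower-order terms stop mattering, so the ratio
# approaches the ratio of the leading coefficients: 2/1 = 2.
for x in [10, 1_000, 1_000_000]:
    print(x, f(x))
```

Substituting a million gives a value within about one millionth of 2, so the "unrealistic" limit at infinity is an excellent stand-in for the realistic computation, which is exactly how Big-O reasoning works.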
Similarly, when a utilitarian points out that strict versions of deontology will always allow us to construct situations where following the rules costs us infinite utility, the unrealism of the situation doesn't make it irrelevant. Just as with Big-O Notation, step 2 of the argument could very well be to scale it down to a more realistic situation. Unfortunately, many people will assume that step 2 isn't coming and judge the argument as flawed at this stage. They may even interrupt the speaker with the objection that the argument isn't realistic. This often negatively affects the conversation, as it pushes the speaker to address step 2 before they've had the opportunity to ensure that everyone has understood step 1.
Being Aware of Limitations
Consider the formula y=10/(x[x-5]). This has two discontinuities: at x=0 and x=5. (I'd really like a more practical example, so if you have one, please share it in the comments.) Let's pretend x represents the number of people and we know there'll always be at least one person in practice. Someone could wave away the discontinuity at x=0 on those grounds, while if they instead looked into it, they'd realise that the more general issue is division by zero, which would make them aware that there is also a discontinuity at x=5.
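The point can be sketched in a few lines (the names `y` and `roots` are just my labels): instead of checking the one "unrealistic" input, probe the general cause of the breakdown, the zero denominator, and both problem points fall out:

```python
def y(x):
    # y = 10 / (x * (x - 5)); undefined wherever the denominator is zero
    return 10 / (x * (x - 5))

# Waving away x = 0 ("there's always at least one person") misses that
# the real issue is the zero denominator, which also bites at x = 5.
roots = [x for x in range(-10, 11) if x * (x - 5) == 0]
print(roots)  # → [0, 5]
```

Checking only whether x=0 can occur answers the narrow question; asking "when is the denominator zero?" reveals the second discontinuity the narrow question hides.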
Let's suppose that someone is promoting deontology and they aren't worried about theoretical situations. They just want a practical model or heuristic to help them act morally. If they are proposing a heuristic, they should fully expect it to have limitations and situations where it just completely breaks. And it would probably be useful to know what these limitations are. Some of these limitations mightn't be obvious, and the heuristic may even be broken if some of them occur more often than expected. Discussions of how the model behaves as the utility cost of a principle approaches infinity shouldn't be met by dismissal, but by either biting the bullet or acknowledging that the model seems to break down in those circumstances. It can still be defended as a heuristic, or you can assert that this kind of situation tends to break our intuitions (see epistemic learned helplessness), but either way it needs to be acknowledged as a limitation that can be weighed up against other limitations. After all, there could be a better model that has a solution to these issues.
One of the key threads of this post has been to not assume that you know where an argument is going. Just because someone is talking about an unrealistic situation, it doesn't follow that they aren't going to tie it back to reality. Further, you shouldn't assume that there's only a single path for this to occur. At the very least, I would suggest replacing "This is unrealistic" with "How are you going to tie it back to reality?". The question is far superior to the objection, as it doesn't make the unwarranted assumption that the only purpose of constructing a model is to attempt to directly apply it to reality.