People frequently describe hypothetical situations on LW. Often, other people make responses that suggest they don't understand the purpose of hypotheticals.
- When someone puts forth the hypothetical A, it doesn't mean they believe it is true. They may be trying to show not(A).
- When someone posits A => B (A implies B), it doesn't mean that they believe A is true. The proposition A => B is commonly used to prove that B is true, or that A is false.
- A solution to a hypothetical scenario is useful only if, when you map it back into the original domain, it solves the original problem.
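The second point is just modus ponens and modus tollens. As inference rules (standard logic, added here for illustration):

```latex
% From A => B you can argue in either direction, depending on what else you know:
\[
\frac{A \qquad A \Rightarrow B}{B}\ \text{(modus ponens)}
\qquad\qquad
\frac{\neg B \qquad A \Rightarrow B}{\neg A}\ \text{(modus tollens)}
\]
```

Asserting A => B by itself commits the speaker to neither A nor B.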
I'll expand on the last point. Sorry for being vague. I'm trying not to name names.
When a hypothetical is put forward to test a theory, ignore aspects of the hypothetical scenario that don't correspond to parts of the theory. Don't get emotionally involved. Don't think of the hypothetical as a narrative. A hypothetical about Omega sounds a lot like a story about a genie from a lamp, but you should approach it in a completely different way. Don't try to outsmart Omega (unless you're making a point about the impossibility of an Omega who can, e.g., decide undecidable problems). When you find a loophole in the way the hypothetical is posed, one that doesn't exist in the original domain, point it out only if you are doing so to improve the phrasing of the hypothetical.
John Searle's Chinese Room is an example of a hypothetical in which it is important not to get emotionally involved. Searle's conclusion is that the man in the Chinese room doesn't understand Chinese; therefore, a computer doesn't understand Chinese. His model maps the running software onto the complete system of room plus man plus cards; but when he interprets it, he empathizes with the human on each half of the mapping, and so maps the locus of consciousness from the running software onto just the man.[1]
Sometimes it's difficult to know whether your solution to a hypothetical is exploiting a loophole in the hypothetical, or finding a solution to the original problem. But when the original problem is testing a mathematical model, it's usually obvious. There are a few general situations where it's not obvious.
Consciousness often makes it hard to tell whether you are looking at a solution to the original problem or a loophole in the hypothetical. Sometimes the original problem is a paradox because of consciousness, so you can't map it away. In the Newcomb paradox, if you replace the person with a computer program, people would be much quicker to say: you should write a computer program that will one-box. But you can phrase it that way only if you're sure that the Newcomb paradox isn't really a question about free will. The "paradox" might be regarded as the assertion that there is no such thing as free will.
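To make the "write a program that one-boxes" reframing concrete, here is a minimal Python sketch. Everything in it is my illustrative assumption, not part of the original problem statement: the standard $1,000,000 / $1,000 payoffs, and the idea that Omega "predicts" a deterministic program simply by running its code. Note how, for programs, the free-will puzzle evaporates.

```python
# Newcomb's problem with programs instead of people (illustrative sketch).
# Omega's perfect prediction of a deterministic program is trivial:
# just run the program.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def payoff(agent):
    prediction = agent()  # Omega simulates the agent to fill the boxes
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = agent()      # then the agent actually chooses
    if choice == "one-box":
        return opaque_box
    return opaque_box + 1_000

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Asked "which program should you submit?", almost everyone answers the one-boxer; the controversy only reappears when a conscious chooser is put back in.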
Another tricky case involves infinities. A paradox of infinities typically involves taking two different infinities, but treating them as a single infinity, so that they don't cancel out the way they should, or do cancel out when they shouldn't. Zeno's paradox is an example: The hypothetical doesn't notice that the infinity of intervals is cancelled out by their infinitesimal sizes. Eliezer discusses some other cases here.
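The cancellation in Zeno's paradox can be seen by just adding up the intervals. This sketch (my own illustration, with the total distance and the runner's speed normalized to 1) shows that infinitely many intervals can still have a finite total length and a finite total time:

```python
# Zeno's runner covers 1/2 the distance, then 1/4, then 1/8, ...
# Infinitely many intervals, but their infinitesimal sizes cancel the
# infinity: the partial sums of lengths (and times) converge.

def totals_after(n_steps, speed=1.0):
    distance = sum(0.5 ** k for k in range(1, n_steps + 1))
    time = distance / speed  # each interval's time is its length / speed
    return distance, time

for n in (10, 20, 50):
    d, t = totals_after(n)
    print(n, d, t)  # both totals approach 1.0, never exceeding it
```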
Another category of tricky cases is when the hypothetical involves impossibilities. It's possible to accidentally construct a hypothetical that makes an assumption that isn't valid in our universe. (I think these paradoxes were unknown before the 20th century, but there may be a math example.) These crop up frequently in modern physics. The ultraviolet catastrophe may be the first such paradox discovered. The hypothetical in which a massive black hole suddenly appears one light-minute away from you, and you want to know how you can be influenced by its gravity before gravitational waves have time to reach you, might be an example. The aspect of the Newcomb paradox that allows Omega to predict what you will do without fail may be such a flawed hypothetical.
If you are giving a solution to a hypothetical scenario that tests a mathematical model, and your response doesn't use math, and doesn't hinge on a consciousness, infinity, or impossibility from the original problem domain, your response is likely irrelevant.
[1] He makes other errors as well. It's a curious case in which amassing a large number of errors in a model makes it harder to rebut, because it's harder to figure out what the model is. This is a clever way of exploiting the scientific process. Searle takes on challengers one at a time. Each challenger, being a scientist, singles out one error in Searle's reasoning. Searle uses other errors in his reasoning to construct a workaround; and may immediately move on to another challenger, and use the error that the original challenger focused on to work around the error that the second challenger focused on. This sort of trickery can be detected by looking at all of someone's counterarguments en masse, and checking that they all define the same terms the same way, and agree on which statements are assumptions and which statements are conclusions.
Right. I'll give a few more examples from math. Say you're arguing that calculus is a lie because deriving dy/dx clearly involves division by zero. In this case, you're getting 'emotionally involved'. You're focusing on the notation dy/dx and on questions about the existence of infinitesimals and division by zero. But that impossibility doesn't exist in the original theory, because (standard) calculus is founded on limits, not on division by zero or infinitesimals. The infinities and infinitesimals aren't part of the original model you're arguing against.
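Concretely, the standard limit definition never divides by zero: h stays nonzero at every step, and only afterward is the limit taken. For f(x) = x² (standard material, added for illustration):

```latex
\[
\frac{d}{dx}\,x^2
= \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
= \lim_{h \to 0} \frac{2xh + h^2}{h}
= \lim_{h \to 0}\,(2x + h)
= 2x
\]
```

Cancelling the h is legitimate precisely because h is never zero before the limit is taken.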
Likewise, if you're arguing that ZFC is inconsistent by Russell's paradox, because you can construct peculiar but plausible sounding sets which imply contradictions, you're making the same mistake. You're being emotionally involved with your naive/primitive concept of a 'set', whereas the theory in question (ZFC) doesn't even allow you to construct such sets.
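In symbols (standard ZFC material, added for illustration): naive comprehension would let you form the Russell set, while ZFC's Separation schema only lets you carve subsets out of a set you already have:

```latex
% Naive comprehension: { x : phi(x) } exists for any property phi.
% Taking phi(x) = (x \notin x) yields Russell's set and a contradiction:
\[
R = \{\, x : x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \leftrightarrow R \notin R
\]
% ZFC's Axiom Schema of Separation only permits
\[
\{\, x \in A : \varphi(x) \,\}
\]
% for a set A already known to exist, so R cannot be constructed.
```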
The above arguments are less common, but I have heard them. A more common argument concerns the Axiom of Choice, and goes a little something like this:
I pulled that from the math subreddit where it was posted a few days ago, and it's a fairly common argument. But the commenter has become emotionally involved with day-to-day sets and Cartesian products. What would the product of an uncountable collection of uncountable sets even look like? Once one refers to the formal, very abstract definition, it should be clear that we have absolutely no right to expect anything about its emptiness or nonemptiness, because the intuition and emotional involvement are replaced by formal abstraction. The things which one assumes exist aren't actually there in the original theory (ZF).
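For reference, the formal definition in question (standard, not from the comment itself): the product of a family of sets is its set of choice functions,

```latex
\[
\prod_{i \in I} X_i
= \Bigl\{\, f : I \to \textstyle\bigcup_{i \in I} X_i
  \;\Bigm|\; f(i) \in X_i \ \text{for all}\ i \in I \,\Bigr\}
\]
```

and AC is exactly the assertion that this set is nonempty whenever every X_i is. Stated at this level of abstraction, the "obviousness" is much harder to locate.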
The paradoxes falling out of the geocentric model, maybe?
Not necessarily. At least, not necessarily more so than anyone becomes "emotionally involved" when deciding on axioms to use in a mathematical theory.
AC is after all independent of ZF. So of course no argument in favor of it can be constructed literally on the basis of the "original theory" (ZF). Saying that the Cartesian product of nonempty sets ought to be nonempty is an aesthetic statement about what the rules of the game should be, not a mistaken inference from the axioms of ZF.
Because of the independence, we logically have a "right to expect" anything we want (AC or not-AC). The choice is a matter of taste, and the tastes of the mathematical community evidently incline toward AC.
Note how different this is from your two previous examples regarding derivatives and Russell's paradox. Those involved outright logical errors; whereas in the case of AC, the commenter is making a legitimate aesthetic argument.
By "paradox" I mean something stronger than "the model is inaccurate". I mean a hypothetical where all possible answers seem to be wrong.
Would Olbers' paradox qualify?
Olbers' paradox: if the universe is infinite, an infinite number of concentric shells can be constructed centred on the Earth. More distant shells contain more stars, but the intensity of light we receive from each star is reduced in accordance with the inverse square law. These two effects cancel out, so each shell should be equally bright; and since there are an infinite number of shells, the sky should be full of light. If the universe is not infinite, it should collapse due to gravity.
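The shell argument can be checked numerically. In this toy sketch (my own illustration; the density and luminosity values are arbitrary assumptions with units normalized to 1), every thin shell contributes the same flux at the centre, so infinitely many shells would give infinite total brightness:

```python
# Olbers' paradox: uniform star density means each thin shell of
# thickness dr at radius r contributes the same flux at the centre.

import math

def shell_flux(r, dr=1.0, density=1.0, luminosity=1.0):
    n_stars = density * 4 * math.pi * r**2 * dr        # stars in the shell
    flux_per_star = luminosity / (4 * math.pi * r**2)  # inverse-square law
    return n_stars * flux_per_star                     # = density*luminosity*dr

print(shell_flux(10))    # ~1.0
print(shell_flux(1000))  # ~1.0, the same for every shell
```

The r² factors cancel algebraically, which is why the shell radius drops out entirely.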
The wrong assumption is that the universe is static (along with ignorance of relativity).
Sounds like it might. Wikipedia says this goes back to the 16th century.
The reason I'm not sure is that you seem to be describing cases where a hypothetical designed for some other purpose is flawed, unbeknownst to its creator, whereas Olbers' paradox was a hypothetical deliberately framed to show a contradiction in our existing knowledge of the universe.
Another vague, nameless example:
If someone says A and (A => B => C), and someone else disputes A, it does not mean this second person wishes to reenact glorious battles over
"The model used in this hypothetical does not meaningfully correspond to reality" seems relevant and not to fall under those categories, though it may count as impossibility. A lot of objections to hypotheticals, from what I've seen, stem from this conceptual problem but people rarely come out and say this bluntly.
While that can be a valid response, the map from the hypothetical to reality is often not obvious at first. This can be due to plain old inferential distance problems, or it can be a deliberate 'trick'.
If you're trying to convince someone of the truth of X, and they are emotionally involved in the truth of X, it can help to get them to agree with you on the truth of Y (of which they are emotionally neutral), and then only after they're on the same page, give them the map.
For this reason, "That is true, but I don't see any relevance" is usually a better response than "but real life isn't a hypothetical!".
IAWY, and this also applies to hypotheticals testing non-mathematical models. For instance, there isn't much isomorphism between Newcomblike problems, which involve perfectly honest game players who can predict your every move, and any gamelike interaction you're ever likely to have.
Oh, yes; that's another valid response.
On some level, ALL hypothetical situations are impossible. The universe does not contain that configuration. Some are conceptually further from reality than others, but every single one of them is about a model. Something that's too far removed from a real model of the universe is probably not interesting to discuss (cf. pinhead angels, unstoppable force vs immovable object, etc).
I've gotten into the habit of using the word "counterfactual" rather than "hypothetical". It makes it slightly more clear that the discussion is not about the universe, but about a model of an alternate universe that may share some (conceptual) features with this one.
It has proven useful in short-cutting unproductive and frustrating discussions to ask the question "what's the minimal difference between this universe and that one?" In many cases, this question is isomorphic to the question that the thought experiment is intended to illuminate, which is good: it's nice to have multiple frames from which to approach it. In other cases, this question exposes massive flaws in the hypothetical, and saves everybody time.
Hypotheticals are not necessarily untrue, much less impossible. Nor are all hypotheticals, even the ones that are impossible, counterfactuals - see his brief mention of implication.
I think delving into the difference between untrue and impossible would help here. In a model which contains rules distinct from state, "untrue" means "same rules, different state" (usually a state that's not obtainable from the current state and rules). "impossible" means "unsustainable under the rules".
That distinction between rules and state is only in our minds/models, though. In the actual universe, if there is such a distinction to an outside observer, it's lost to those of us stuck in it, because we can affect neither portion of reality.
note: I'm saying this more confidently than I feel. I would deeply appreciate pointers to any evidence that the universe has rules and state which are somehow alterable separately.
As to "hypothetical" vs "counterfactual", you're right that this isn't a blanket synonym. There are hypotheticals that have unknown truth value rather than being known falsehoods. For purposes of this discussion, and for most interesting thought experiments, the hypothetical situation given is simply false - it does not exist as described in the universe.
I don't know if I agree - it seems to me that our ability to effect changes to one, but not the other, is precisely what defines the difference!
For example, my state is not "standing in the front yard", though it could be. I could easily make it so. However, there's a rule against "floating 10 feet up in the front yard without the aid of platforms or balloons, etc"... and I know this is a rule, not a state, precisely because I cannot float!
I think the most interesting hypotheticals are those for which we do not yet know whether they hold.
Untrue means it is not factual, though it could have been in the past or the future or in another location. Impossible means it could not occur in our universe (or at least, we do not think it could, given our current understanding of our universe).
I try to respond to fantasy scenarios in the spirit of the least convenient possible world, but I appreciate it when the author of a scenario spells out the intended dilemma explicitly, or else proofs it against tempting escapes.
An analogy or hypothetical will subtract value if it causes enough distraction. They are a dangerously powerful form of communication, and practically necessary, even though ideally we could just create notation for the actual problem and reason using that.
That's a clever point. Maybe there are two uses for an analogy, then: (1) to reason informally (performing a logical argument using natural, more familiar language); this could (and, I think, should) be done formally whenever possible; and (2) as an "intuition pump" (Dennett's term, I think), where the point is to provide a tangible analogy to your model, enabling someone to "understand" you, but not necessarily proving anything.