Albert: "Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds. I don't believe the world changes around when I'm not looking."
Barry: "Wait a minute. If no one hears it, how can it be a sound?"
While writing the dialogue of Albert and Barry in their dispute over whether a falling tree in a deserted forest makes a sound, I sometimes found myself losing empathy with my characters. I would start to lose the gut feel of why anyone would ever argue like that, even though I'd seen it happen many times.
On these occasions, I would repeat to myself, "Either the falling tree makes a sound, or it does not!" to restore my borrowed sense of indignation.
(P or ~P) is not always a reliable heuristic, if you substitute arbitrary English sentences for P. "This sentence is false" cannot be consistently viewed as true or false. And then there's the old classic, "Have you stopped beating your wife?"
Now if you are a mathematician, and one who believes in classical (rather than intuitionistic) logic, there are ways to continue insisting that (P or ~P) is a theorem: for example, saying that "This sentence is false" is not a sentence.
But such resolutions are subtle, which suffices to demonstrate a need for subtlety. You cannot just bull ahead on every occasion with "Either it does or it doesn't!"
So does the falling tree make a sound, or not, or...?
Surely, 2 + 2 = X or it does not? Well, maybe, if it's really the same X, the same 2, and the same + and =. If X evaluates to 5 on some occasions and 4 on others, your indignation may be misplaced.
To even begin claiming that (P or ~P) ought to be a necessary truth, the symbol P must stand for exactly the same thing in both halves of the dilemma. "Either the fall makes a sound, or not!"—but if Albert::sound is not the same as Barry::sound, there is nothing paradoxical about the tree making an Albert::sound but not a Barry::sound.
(The :: idiom is something I picked up in my C++ days for avoiding namespace collisions. If you've got two different packages that define a class Sound, you can write Package1::Sound to specify which Sound you mean. The idiom is not widely known, I think; which is a pity, because I often wish I could use it in writing.)
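The collision can be sketched in a few lines. This is a minimal illustration, not anything from the original dialogue: the two namespace classes and the `meaning` strings are invented stand-ins for Albert's and Barry's concepts, with Python classes playing the role of C++ namespaces.

```python
# Two "packages" each define a class named Sound. In Python, a class
# used as a namespace plays the role of the C++ :: qualifier: the
# shared name "Sound" labels two distinct concepts.

class Albert:
    class Sound:
        meaning = "acoustic vibrations in the air"

class Barry:
    class Sound:
        meaning = "an auditory experience in a brain"

# Same word, two different concepts -- so there is no paradox in a
# falling tree producing an Albert.Sound but not a Barry.Sound:
print(Albert.Sound.meaning)
print(Barry.Sound.meaning)
print(Albert.Sound is Barry.Sound)  # False: distinct classes
```

Qualified with its namespace, each `Sound` is unambiguous; the apparent contradiction only exists at the level of the shared name.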
The variability may be subtle: Albert and Barry may carefully verify that it is the same tree, in the same forest, and the same occasion of falling, just to ensure that they really do have a substantive disagreement about exactly the same event. And then forget to check that they are matching this event against exactly the same concept.
Think about the grocery store that you visit most often: Is it on the left side of the street, or the right? But of course there is no "the left side" of the street, only your left side, as you travel along it from some particular direction. Many of the words we use are really functions of implicit variables supplied by context.
It's actually one heck of a pain, requiring one heck of a lot of work, to handle this kind of problem in an Artificial Intelligence program intended to parse language—the phenomenon going by the name of "speaker deixis".
"Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable invisibly grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
The variables in a variable question fallacy often aren't neatly labeled—it's not as simple as "Say, do you think Z + 2 equals 6?"
If a namespace collision introduces two different concepts that look like "the same concept" because they have the same name—or a map compression introduces two different events that look like the same event because they don't have separate mental files—or the same function evaluates in different contexts—then reality itself becomes protean, changeable. At least that's what the algorithm feels like from inside. Your mind's eye sees the map, not the territory directly.
If you have a question with a hidden variable, that evaluates to different expressions in different contexts, it feels like reality itself is unstable—what your mind's eye sees, shifts around depending on where it looks.
This often confuses undergraduates (and postmodernist professors) who discover a sentence with more than one interpretation; they think they have discovered an unstable portion of reality.
"Oh my gosh! 'The Sun goes around the Earth' is true for Hunga Huntergatherer, but for Amara Astronomer, 'The Sun goes around the Earth' is false! There is no fixed truth!" The deconstruction of this sophomoric nitwittery is left as an exercise to the reader.
And yet, even I initially found myself writing "If X is 5 on some occasions and 4 on others, the sentence '2 + 2 = X' may have no fixed truth-value." There is not one sentence with a variable truth-value. "2 + 2 = X" has no truth-value. It is not a proposition, not yet, not as mathematicians define proposition-ness, any more than "2 + 2 =" is a proposition, or "Fred jumped over the" is a grammatical sentence.
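The distinction can be phrased in code. This is a minimal sketch, with an invented function name: the open formula "2 + 2 = X" corresponds to a function of X, which is neither true nor false; only applying it to a particular value yields a proposition with a truth-value.

```python
# An open formula vs. a proposition: "2 + 2 = X" is a function awaiting
# its variable, not a claim that is true or false.

def two_plus_two_equals(x: int) -> bool:
    """The closed proposition obtained by binding X to a value."""
    return 2 + 2 == x

# The bare function object is the open formula -- no truth-value:
open_formula = two_plus_two_equals

# Each binding of X gives a *different* proposition, each with its own
# fixed truth-value:
print(open_formula(4))  # True
print(open_formula(5))  # False
```

Nothing about reality shifted between the two calls; we simply evaluated two different propositions that happened to share a surface form.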
But this fallacy tends to sneak in, even when you allegedly know better, because, well, that's how the algorithm feels from inside.
That is a great quote from The Simple Truth. And what is more, it is perfectly responsive to what I was trying to say. Thank you.
As you may already know, Eliezer quoted that passage in Quantum Non-realism, because QM makes it necessary to modify that argument slightly. The trouble is that in QM, your experimental results are no longer "determined", or at least not in the same sense. Oh, I agree with the basic message of that Quantum Non-realism post: QM creates no problems for realism that MWI and a little fine print can't fix. But I think the fact that QM forced a change to the argument suggests that there may be even more changes needed down the road.
If you want to call the whatever-it-is 'reality', that is fine with me. The whatever-it-is is definitely different from the best map that you know of. But it is possible, is it not, that the whatever-it-is is the whole tower of maps, including the maps you know of and the maps you haven't even imagined yet.
A map doesn't determine observations. A whole tower of maps determines observations (modulo the necessary QM/MWI fine print). In much the same way that map-towers determine theoretical predictions. Maps, predictions, and observations are all made out of the same kind of 'stuff'. There is nothing mysterious about it. You only get into trouble if you somehow begin to imagine that experimental observations are somehow built out of some kind of 'reality stuff' which is ontologically different from map-tower stuff. They are not. Observations are very theory-laden.
Logical positivism had all this stuff covered fairly satisfactorily by 1970 or so (IMHO), but then the Zeitgeist somehow changed and everyone agreed that positivism is dead. I am a contrarian who thinks something like it can be revived, as do a number of more academically serious anti-realist philosophers working in the philosophy of science.
I'm not sure what you mean by this.
How does one make maps into a tower? What would such a tower of maps look like? How is this different from a "territory" containing a tower of maps?