Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you're the first to defect - making you a bad, bad person. To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn't run, and this is equally true whether anyone else breaks the rules or not.
Consider the problem of Occam's Razor, as confronted by Traditional philosophers. If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?
You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam's Razor. "Occam's Razor works up to October 8th, 2007 and then stops working thereafter" is more complex, but it fits the observed evidence equally well.
You could argue that Occam's Razor is a reasonable distribution on prior probabilities. But what is a "reasonable" distribution? Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?
Indeed, it seems there is no way to *justify* Occam's Razor except by *appealing* to Occam's Razor, making this *argument* unlikely to *convince* any judge who does not already *accept* Occam's Razor. (What's special about the words I italicized?)
If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".
But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying "a priori" doesn't explain why the brain-engine runs. If the brain has an amazing "a priori truth factory" that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can't use the "a priori truth factory" to locate drinkable water. It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.
James R. Newman said: "The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2." The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe." You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.
But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains. Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.
When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns. In principle, we could observe, experimentally, the exact same material events as they occurred within someone else's brain. It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done. You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.
If this seems counterintuitive, try to see minds/brains as engines - an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2. If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern. In other words, for every form of a priori knowledge obtained by "pure thought", you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation. The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.
There is nothing you can know "a priori", which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain. What do you think you are, dear reader?
This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch "3 x 4" into a calculator to predict the result of imagining 4 rows with 3 apples per row. You and the apple exist within a boundary-less unified physical process, and one part may echo another.
Are the sorts of neural flashes that philosophers label "a priori beliefs" arbitrary? Many AI algorithms function better with "regularization" that biases the solution space toward simpler solutions. But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1000 extra lines) compared to unregularized algorithms. The human brain is biased toward simplicity, and we think more efficiently thereby. If you press the Ignore button at this point, you're left with a complex brain that exists for no reason and works for no reason. So don't try to tell me that "a priori" beliefs are arbitrary, because they sure aren't generated by rolling random numbers. (What does the adjective "arbitrary" mean, anyway?)
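The regularization point can be made concrete with a toy sketch (my illustration, not anything from the post): ridge regression is ordinary least squares plus literally one extra term, and that small increase in the algorithm's complexity buys a bias toward simpler fitted solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 * x + rng.normal(0, 0.1, size=x.size)   # the true rule is simple: y is about 2x

# A deliberately over-complex model: degree-9 polynomial features.
X = np.vander(x, 10, increasing=True)

def fit(X, y, lam):
    # Ridge regression: ordinary least squares plus ONE extra term,
    # lam * I, which penalizes large (complicated) coefficient vectors.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_plain = fit(X, y, 0.0)    # unregularized: free to overfit the noise
w_ridge = fit(X, y, 1e-2)   # regularized: biased toward simpler solutions

# For any lam > 0, the penalty shrinks the solution norm.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_plain))  # True
```

The regularizer is the "extra line of code": the fitted machine is simpler, but the machine-builder is more complex.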
You can't excuse calling a proposition "a priori" by pointing out that other philosophers are having trouble justifying their propositions. If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs. There's no truce, no white flag, until you understand why the engine works.
If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. "But," you cry, "why is the universe itself orderly?" This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"
Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam's Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn't implement Modus Ponens, it can accept "A" and "A->B" all day long without ever producing "B". How do you justify Modus Ponens to a mind that hasn't accepted it? How do you argue a rock into becoming a mind?
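A toy sketch of the point (my illustration): a mind that merely stores sentences never produces "B"; what produces "B" is the engine's dynamic structure, here a single wired-in inference rule.

```python
def modus_ponens_closure(beliefs):
    """Repeatedly apply modus ponens until no new sentences appear."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for sentence in list(derived):
            if "->" in sentence:
                antecedent, consequent = sentence.split("->", 1)
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

store = {"A", "A->B"}   # a mind that accepts sentences but implements no rule
print("B" in store)                        # False: storage alone never yields B
print("B" in modus_ponens_closure(store))  # True: the engine's structure does the work
```

The rock, in this picture, is the bare `set`: it can hold "A" and "A->B" forever without ever containing "B".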
Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness. This does not make our judgments meaningless. A brain-engine can work correctly, producing accurate beliefs, even if it was merely built - by human hands or cumulative stochastic selection pressures - rather than argued into existence. But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.
My posts for the next two days will be on related topics.
Something feels off about this to me. Now I have to figure out if it's because fiction feels stranger than reality or because I am not confronting a weak point in my existing beliefs. How do we tell the difference between the two before figuring out which is happening? Obviously afterward it will be clear, but post-hoc isn't actually helpful. It may be enough that I get to the point where I consider the question.
On further reflection I think it may be that I identify a priori truths with propositions that any conceivable entity would assign a high plausibi... (read more)
Generalizing from past observations to future expectations is often referred to in philosophy as the "problem of induction". It has the same structure: you have to accept that induction worked in the past in order to expect it to work in the future, and if Bertrand Russell is right that you might have been created five minutes ago with false memories, you can't know it worked in the past either. Against that kind of skepticism I can only fall back on a David Stove-style "common sense" position, but fortunately I am not interested in persuading others, only in understanding the world well enough to attain my goals.
You left the italics tag on.
Greedy, all you're doing is specifying properties into the definition of what you mean by "entity" or "knows enough". I can always build a tape recorder that plays back "Two and two make five!" forever.
TGGP, fixed the tag. And remember, it's not about persuading an ideal philosophy student of perfect emptiness, it's about understanding why the engine works.
I can rigorously model a universe with different contents, and even one with different laws of physics, but I can't think of how I could rigorously model (as opposed to vaguely imagine) one where 2+2=3. It just breaks everything. This suggests there's still some difference in epistemic status between math and everything else. Are "necessary" and "contingent" no more than semantic stopsigns? How about "logical possibility" as distinct from physical possibility?
I don't really understand what Eliezer is arguing against. Clearly he understands the value of mathematics, and clearly he understands the difference between induction and deduction. He seems to be arguing that deduction is a kind of induction, but that doesn't make much sense to me.
Nick: you can construct a model where there is a notion of 'natural number' and a notion of 'plus' except this plus happens to act 'oddly' when applied to 2 and 2. I don't think this model would be particularly interesting, but it could be made.
Nick, I'm honestly not sure if there's a difference between logical possibility and physical possibility - it involves questions I haven't answered yet, though I'm still diligently hitting Explain instead of Worship or Ignore. But I do know that everything we know about logic comes from "observing" neurons firing, and it shouldn't matter if those neurons fire inside or outside our own skulls.
Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences, should all be considered as equivalent-to-observation.
Eliezer: Good answer. I take the same view, although I think the "can you model it" question suggests there is a difference. Do you think a rigorous, consistent (or not provably inconsistent) model of arithmetic or physics is possible where 2+2=3? (or the 3rd decimal place of pi is 2, or Fermat's last theorem is false, or ...)
It seems like you could justify Occam's Razor by looking at the past history of discarded explanations. An explanation that is ridiculously complex, yet fits all the observations so far, will probably be broken by the next observation; a simple explanation is less likely to fail in the future. A hypothesis that says "Occam's Razor will work until October 8th, 2007" falls into the general category of "hypotheses with seemingly random exceptions", which should have a history of lesser accuracy than hypotheses with justified exceptions or ... (read more)
Really? I'm aware that physical outputs are totally determined by physical inputs. Neurology can tell us what sorts of physical causes give rise to what sorts of physical effects. We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they alwa... (read more)
"I'm aware that physical outputs are totally determined by physical inputs."
Even this is far from a settled matter, since I think this implies both determinism and causal closure.
logicnazi, if we can talk about our experiences, our experiences have a causal effect on the physical world. Assuming, as you do, causal closure (which is not known, but the most parsimonious hypothesis), this means that the idea of different experiences with the same physical state is indeed incoherent.
"We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they always go together in the actual world."
Look at airplanes: they all have a bunch of common characteristics like an engine, wings, rudders, etc. If you argued that an airplane was not really "identical" to the pile of parts, but that they just "always went together", people would look at you like you had three heads. Yet, when applied to brains, people think this argument makes sense. A brain is made up of the frontal cortex, visual cortex, auditory cortex, amygdala, pituitary gland, cerebellum, etc.; that's just what it is.
Tom: I agree with your analogy. Yudkowsky said: "Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences, should all be considered as equivalent-to-observation."
This is only convincing to someone who believes logic is only possible when there is some physical structure that directly corresponds to logical output. Yet even the evidence indicating this is true uses logic.
I recently started (and then backed out of) a debate with a Christian presuppositionalist. I had no idea how to show how logic itself works except by examp... (read more)
I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.
I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”
This seems to fit Occam’s Razor if I take it to be a guide, not a prediction or a law. It does not say that the theory with the fewest parts is more likely to be correct. It just reminds us to take out anything that is unnecessary.
If scientists have often found that theories with more parts are less often correct, that may further encourage us to... (read more)
I think a discussion of what people mean exactly when they invoke Occam's Razor would be great, though it's probably a large enough topic to deserve its own thread.
The notion of hypothesis parsimony is, I think, a very subtle one. For example, Nick Tarleton above claimed that 'causal closure' is 'the most parsimonious hypothesis.' At some other point, Eliezer claimed the multi-world interpretation of quantum mechanics as the most parsimonious. This isn't obvious! How is parsimony measured? Would some version of Chalmers' dualism really be less parsimonious? How will we agree on a procedure to compare 'hypothesis size?' How much should we value 'God' vs 'the anthropic landscape' favored at Stanford?
"Anyone agree or disagree with the futility of debating someone who believes the universe is around 6,000 years old (and is also above age 25)?"
Agree 100%. The Universe is slightly over 10,000 years old. The 6000-ers got their math badly wrong. Crackpots, the lot of them.
Constant, the obviousness felt by both disagreeing parties almost never changes. How many formal debates actually end with the other person changing their mind? I would take it further and say formal debate is usually worthless too.
In the meantime where are your error bars? I bet somewhere there is a fundy who includes error bars.
Error bars: give or take about 14 billion years. My calculations are quite precise. I am still working out the ramifications of the universe being 10,000 minus 14 billion years old.
I knew you would come through Constant simply by reading your name.
"But what is a "reasonable" distribution? Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?"
Occam's Razor is only relevant to model selection problems. A complicated prior distribution does not matter. What does matter is how much the prior distribution volume in parameter space decreases as the model becomes more complex (more parameters). Each additional parameter in the model spreads the prior distributio... (read more)
Eliezer: It sure does seem to me that when you say that "a mind needs a certain amount of dynamic structure to be an argument acceptor" you are saying that it does in fact know certain things prior to any "learning" taking place, e.g. that there are "priors". I would argue that 2+2=4 is part of this set, but as the punchline argues, we have already established the basics; now we are just haggling.
By considering models in the first place, one is already using Occam's razor. With no preference for simplicity in the priors at all, one would start with uniform priors for all possible data sequences, not finite-parameter models of data sequences. If you formalize models as being programs for Turing machines which have a separate tape for inputting the program, and your prior is a uniform distribution over possible inputs on that tape, you exactly recover the 2^-k Occam's razor law, where k is the number of program bits that the Turing machine re... (read more)
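The 2^-k law described above can be sketched with a toy prefix-free code (illustrative only; the real construction uses a universal Turing machine):

```python
def prior(program_bits):
    # Under a uniform measure over infinite binary tapes, fixing the
    # first k bits (the program the machine actually reads) has
    # probability 2^-k: exactly the Occam penalty for longer programs.
    return 2.0 ** -len(program_bits)

# A prefix-free set: no program is an initial segment of another,
# so the corresponding tape-events are disjoint events.
programs = ["0", "10", "110", "111"]
print([prior(p) for p in programs])     # [0.5, 0.25, 0.125, 0.125]
print(sum(prior(p) for p in programs))  # 1.0 (a complete prefix-free code)
```

The Kraft inequality guarantees that any prefix-free program set gets total prior mass at most 1, so the 2^-k weights form a legitimate (sub)probability distribution.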
If you make the assumption that what you observe is the result of a computational process, the prior probability of a lossless description/explanation/theory of length l becomes inversely proportional to the size of the space of halting programs of length l. You're free to dismiss the assumption, of course.
One reason among many may be the KAM-Theorem.
Occam's Razor has two aspects. One is model fitting. If the model with more free parameters fits better that could merely be because it has more free parameters. It would take a thorough Bayesian analysis to work out if it was really better. A model that fits just as well but with fewer parameters is obviously better.
Occam's Razor goes blunt when you already know that the situation is complicated and messy. In neurology, in sociology, in economics, you can observe the underlying mechanisms. It is obvious enough that there are not going to be simple laws. I... (read more)
Alan: Does a scientist likewise have no reason to pay attention to any model of the universe but fundamental physics? High level descriptions of the world very frequently can account for most of the variance in high level phenomena without containing the known complexity of the substrate.
Do high level descriptions of the world frequently account for most of the variance in high level phenomena without containing the known complexity of the substrate?
I think you can contrast thermodynamics and sociology by noticing that there is no Princess Diana molecule. All the molecules are on the same footing. None of them get to spoil the statistics by setting a trend and getting in all the newspapers. So perhaps Occam's Razor grabs credit not due to it, as researchers favour simple theories when they have specific reasons to do so.
An example ... (read more)
"I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.
I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”
Think of Kolmogorov complexity: the most parsimonious hypothesis is the one that can generate the data using the least number of bits when fed into a Turing machine.
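Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough, computable upper bound on description length. A toy Python sketch (my example, not from the comment):

```python
import os
import zlib

def description_length(data):
    # Upper bound on the shortest description of `data`, in bytes:
    # whatever a general-purpose compressor achieves at maximum effort.
    return len(zlib.compress(data, 9))

regular = b"ab" * 500         # 1000 bytes generated by an obvious short rule
irregular = os.urandom(1000)  # 1000 bytes with (almost surely) no short rule

print(description_length(regular) < description_length(irregular))  # True
```

Data with a short generating rule compresses to a few dozen bytes; incompressible data costs about its own length, which is the sense in which the "simpler" hypothesis is the one with the shorter program.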
"One way is to appeal to Occam's Razor. Let us prefer the simpler hypothesis that increases to the minimum wage are random. That is bogus."
Why is it bogus? An ideal st... (read more)
Let's see. What else would I have to believe in order to accept a statement like "~(p&~p) is not a theorem in propositional logic?"
A statement of the form "X is a theorem in this particular formal mathematical system" means that I can use the operations allowed within that system to construct a "proof" of the sentence X. In theory, I can make a machine that takes a "proof" as input and returns "true" if the proof is indeed a correct proof and "false" if there is a step in the proof that is not... (read more)
A person not capable of correct deductive reasoning is insane. The people usually deemed insane are those with deviant behavior, or what Caplan calls "the extreme tails of a preference distribution with high variance".
And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".
I should note that the most famous paper in 20th Century analytic philosophy, Quine's "Two Dogmas of Empiricism", is an attack on the idea of the a priori. The paper was written in 1951 and built on papers written in the previous two decades. A large proportion of contemporary philosophers agree with Quine's basic position. This doesn't stop them from doing theoretical work, just as Eliezer's disavowal of the a priori need not prevent him theorizing... (read more)
Eliezer - It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence". For one thing, no amount of mere observation will suffice to bring us to a conclusion, as Lewis Carroll's tortoise taught us. Further, it mistakes content and vehicle. When I judge that p, and subsequently infer q, the basis for my inference is simply p - the proposition itself - and not the psychological fact that I judge that p. I could infer some things from the latter fact too, of course, but that's a very different matter. (And in tu... (read more)
It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence".
If you view it as an argument, yes. The engines yield the same outputs.
Minds are a rather different matter. They are not conceptually reducible to neurons firing.
Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust.
Eliezer Yudkowsky said: "Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust."
I agree, yet this won't convince a sophisticated right-wing Christian (or Jew, or Muslim, etc.).
Who said anything about 'magic pixie dust'? I agree that the brain gives rise to (or 'powers') the mind, thanks to the laws of nature that happen to govern our universe. I may even agree with all the causal claims you want to make. But if you're going to start talking about identity, then you need to do some real philosophy.
"If you view it as an argument, yes. The engines yield the same outputs."
What does the latter have to do with rationality?
Does Eliezer really need to do some "real philosophy"? If he does not, will he miss out on the Singularity? Will AI be insufficiently friendly? I don't see any reason to think so. I say be content in utter philosophical wrongness. Shout to the heavens that our actual world is a zombie world with XYZ rather than H2O flowing in the creeks that tastes grue, all provided it has no impact on your expectations.
What's the difference between the brain giving rise to a mind by the laws of nature and the brain giving rise to a mind without identity by the laws of nature?
But if you're going to start talking about identity, then you need to do some real philosophy.
"Identity" is not magic. There is no abiding personal essence, just continuity of memory. A real philosopher said that, by the way.
I do think there are important unanswered questions in the philosophy of mind, but this isn't one of them. (Although one of them is "where is our thinking still contaminated by the idea of magic personal identity?", which I suspect is at the root of several apparent paradoxes.)
Tom, I think we are actually agreeing. I'm arguing that if you already know the situation is complicated you cannot just appeal to Occam's Razor, you need some reason specific to the situation about why the simple hypothesis should win.
You are proposing a reason, specific to economics, about why the complications might be washed away, making it reasonable to prefer the simpler hypothesis. My claim is that those extra reasons are essential. Occam's Razor, on its own, is useless in situations known to be complicated.
Tom McCabe, Thank you for the comment. You have started me thinking about the differences between Occam's Razor and Einstein's "Everything should be made as simple as possible, but not simpler." John
"--" should have been "Shakespeare's Fool" John
TGGP - You seem to have missed the conditional nature of my claim. I'm not forcing philosophy on anyone; just saying if you're going to do it at all, best do it well.
Nick - I never suggested there was an "abiding personal essence". (Contemporary philosophers like Derek Parfit and David Velleman have done a stellar job in revealing the conceptual confusions underlying such an idea.) In any case, it's hardly relevant. The issue here is individuation (how to count the distinct things in the world), not personal identity and persistence through time.... (read more)
Eliezer: "You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence."
Richard: "It's just fundamentally mistaken to conflate reason... (read more)
Richard: oops, I thought you meant personal identity. Ach, homonyms.
Do you think that the human bodies in a physics-only zombie world would behave identically to ours? ( = Do you think physics is causally closed?)
Nick Hay: great explanation.
Richard: sure, minds and brains can "come apart" in possible worlds other than ours (or indeed in this one, if and when someone teaches a computer to think), but I have never understood why some people seem to think that this suggests that there's anything weird about the relationship between actual minds and actual brains in the actual world.
Consider those airplanes again, but let's use a more general term like "flying machine" that isn't so tightly tied to the details of their construction. You can imagine (yes?) a world in which a Bo... (read more)
g - there's no possible world that's physically identical to ours but where the Boeings don't fly. There is a possible world that's physically identical to ours that lacks consciousness. That's the difference. It shows that physics suffices for flight but not fully-fledged mentality. (N.B. the interesting case here is not minds without brains, but brains without minds.)
Nick Hay - Thanks for bringing this back to the key issue. In fact I do not "consider having successfully determined a conclusion from pure thought evidence that that thought is correc... (read more)
Sorry, my second sentence to NH is unclear. The psychological fact could be taken as a kind of indirect evidence, as noted in my postscript. But it is not what I take my evidence to be, when I am reasoning according to a #1-style argument. We could say the evidence of my thought [vehicle] is not the evidence in my thought [content].
Nick T. - yes, I accept the causal closure of the physical. (And thus epiphenomenalism. I discuss the epistemic consequences in my post 'Why do you think you're conscious?')
On the broader issue - to expand on my response to James above - see my post on the explanatory power of dualism.
Richard, are you saying that if in this world I attempted to move around some material to produce an artificial brain, it would not work unless I also did some psycho-manipulation of some sort? Or is the psycho-stuff bound so tightly with the material that the materially-sufficient is psycho-sufficient?
I neglected to link to this before when I mentioned anticipated experiences, which is one of my favorite posts here. I am so fond of linking to it I assumed I already had.
Richard, you have presented absolutely no evidence that there is a possible world physically identical to ours but in which we are not conscious, beyond saying that it's "conceptually possible" for minds and brains to "come apart", if we imagine a world with different laws of nature.
But it's equally conceptually possible for flying machines and aerofoils to come apart, if we imagine a world with different laws of nature, and (it appears) you don't see that as any reason to think that flying machines fly by aerofoils plus some extra brid... (read more)
I agree with Richard that we should respect the fact that philosophers have spilled a lot of ink on the consciousness question; we should read them and respond to their arguments. We should have at least one post devoted to this topic. But after doing so, I'm betting I'll still mainly agree with Eliezer.
Richard, I don't think Eliezer conflated reasoning with observing your own brain - he just suggested that simple Bayesian reasoning based on observing your own brain gets you pretty much all the conclusions you need from most other "reasoning."
Robin and Richard - I think it is possible that Eliezer did not word his statement as cleanly as he might. However, if his wording conflated categories, I am confident that with some care the exact same point can be re-worded without such conflation. There is something real and significant here that he's pointing out, and it's not going to go away simply because he was (if he was) a bit too loose in his presentation.
I think this contains one of the main points:
The philosop... (read more)
The evolutionary formation of the mind is, as Eliezer points out, based on the truth, and not on justification. Mutation throws one brain after another at the problems of life, and the brains that generate true beliefs are the ones that tend to survive. No justification is involved at this stage. For example, suppose that Occam's razor is true (to understand this, translate the method called "Occam's razor" into the appropriate assertion about the world so that it can be assigned a truth value). Then brains that apply Occam's razor will tend to s... (read more)
Constant - Sure, there's something to be said for epistemic externalism. But I thought Eliezer had higher ambitions than merely distinguishing rationality and reliability? He seems to be attacking the very notion of the a priori, claiming that philosophers lazily treat it as a semantic stopsign or 'truce' (a curious claim, since many philosophers take themselves to be more or less exclusively concerned with the a priori domain, and yet have been known to disagree with one another on occasion), and dismissively joking "it makes you wonder why a thirsty h... (read more)
Richard, I would like to know what you mean by "conceptually possible" and why you think conceptual possibility has anything to do with actual possibility. I think you mean something like "I can/can't imagine X without any obvious inconsistencies". So, e.g., you can imagine, or think you can imagine, a world physically identical to ours in which people have no experiences; but you can't imagine, or think you can't imagine, a world physically identical to ours in which jumbo jets don't fly.
But whether something is "conceptually poss... (read more)
Oh, gosh, that was rather long. Sorry.
I liked it. I really don't get Robin's desire for short comments. This is the only blog where I've seen that restriction. Is he worried about the high cost of bandwidth? For text?
I think I can forgive it this once.
Zombies and similar creatures are "conceptually possible" when someone doesn't understand the connection between lower and higher levels of organization, so that the stored propositions about the lower and higher levels of organization are mentally unconnected and can be switched on or off independently. This is a fact about the person's state of mind, not a fact about the phenomenon in question.
g, beautifully said.
Hopefully Anonymous had a post about zombies here, in which I made fun of him.
Anticipating experience may be a useful constraint for science, but that is not all there is to know.
If I was going to dispute this I would have to specify what it means to "know" and get into one of those goofy epistemology discussions I derided here. Philosophy is the required method to argue against philosophy, oh bother. Good thing reality doesn't revolve around dispute.
g - No, by 'conceptually possible' I mean ideally conceptually possible, i.e. a priori coherent, or free of internal contradiction. (Feel free to substitute 'logical possibility' if you are more familiar with that term.) Contingent failures of imagination on our part don't count. So it's open to you to argue that zombies aren't conceptually possible after all, i.e. that further reflection would reveal a hidden contradiction in the concept. But there seems little reason, besides a dogmatic prior commitment to materialism, to think such a thing. Most (but ad... (read more)
I have future posts planned that will shed light on this topic, but not today.
Richard, I'm unconvinced that you have any way of telling whether the existence of zombies is ideally conceptually possible; the fact that you seem to be able to imagine a zombie world certainly isn't good evidence that it's "free of internal contradiction". (Consider, again, the Riemann hypothesis.)
I don't have anything like a proof that the idea of zombies is in fact incoherent. But if you're right that its coherence would entail the existence of these mysterious psychophysical bridging laws and all the rest of your epiphenomenal apparatus, the... (read more)
Before we discovered that water was H2O, our concept of water did not include that it was H2O. Since our concept did not include that, then surely it would not have been incoherent, at the time, to say that water is not H2O (imagine that this occurs during the period after the discovery of H and O and before the discovery of the composition of water - imagine that there was such a period), since there was nothing in our concept of water at the time that logically contradicted that statement. However, today it is incoherent to say that water is not H2O, because our concept of water includes that it is H2O - water is regularly defined as H2O.
Let us think about the period when, because the concept did not include that it was H2O, it was free of internal contradiction to say that water is not H2O, and therefore logically possible, ideally conceptually possible, and a priori coherent, to say that water is not H2O. Given their concept of water and perhaps even given everyth... (read more)
Well said, Constant.
The logic for why zombies can't exist, very briefly, goes like this:
You see a bright red light.
The mysterious redness-quality of the red light seems inexplicable in merely material terms.
You think, within your stream of consciousness, "The redness of this light seems inexplicable in merely material terms."
You say out loud, "The redness of this light seems inexplicable in merely material terms."
Your lips moved.
Whatever caused your lips to move must lie within the realm of physics because it had a physical effect.
If we sum up all the forces acting on your lips - gravity, electromagnetism, etc. - we will necessarily include the proximal cause of your lips moving. This is because when we sum up the forces acting on your lips, we can tell where your lips will go. In particular, we can tell that they'll move. So if we delete anything that isn't on the force list, your lips go to exactly the same place.
As it so happens, the proximal cause of your lips moving is nervous instructions sent from your motor cortex and cerebellum. It is not possible to imagine a world in which your lips move and all the laws of physics are the same, but there are no n... (read more)
(Let me just add that the first chapter of my thesis addresses Constant's concerns, and my previously linked post 'why do you think you're conscious?' speaks to Eliezer's worries about epiphenomenalism -- what is sometimes called 'the paradox of phenomenal judgment.' Some general advice: philosophers aren't idiots, so it's rarely warranted to attribute their disagreement to a mere "failure to realize" some obvious fact.)
I don't know what you mean about "idiots". My arguments are not intended as insults. In fact I fully expect you to have already dealt with them. However, I have little choice but to answer the particular point you raised at a particular time, because if I try to jump ahead and anticipate all your answers and then your answers to my answers to your answers, the result will probably be an incredibly confusing monologue that is more likely than not to simply have mis-anticipated your actual responses. Aside from that there is the matter of comment length to consider.
Richard, I don't actually believe philosophers are idiots because I've seen their standardized test scores. I do think they could more productively use their intellects though. If I were to ignore IQ/general intelligence and simply try to judge whether one philosopher does better philosophizing than another, would I be able to do it without becoming a philosopher myself and judging their arguments? I can determine that rocket physicists are good at what they do because they successfully send rockets into the air, I know brain surgeons are because the brains ... (read more)
Richard is quite right to point out that philosophers of mind are well aware of the counter arguments that Constant and Eliezer offer. And he is right to insist this is a subtle question to which a few quick comments do not do justice. There are however many philosophers who agree with Constant and Eliezer. See for example the October Philosophical Quarterly article on Anti-Zombies.
Not to be too uncharitable, but I'd say the arguments of us material monists are simple, and it's only the flaws in the complex dualist arguments that are subtle.
PS: I've got some back copies of the Journal of Consciousness Studies on my bookshelf, so don't necessarily assume that I'm unaware of the big philosophical mess here.
TGGP, your question illustrates nicely my explanation for why more history than futurism.
This book review claims that the majority position in philosophy rejects the dualism Constant and Eliezer object to - this is most certainly not a dispute between philosophers and scientists.
Sorry, have you argued someplace else for either reduction or eliminative materialism?
I have a series of posts planned on that in due time.
In a comment on "How to convince me that 2+2=3", I pointed out that the study of necessary truths is not the same as the possession of necessary truths (credit to David Deutsch for that important insight). Unfortunately, the discussion here seems to have gotten hung up on a philosophical formulation that blurs that important distinction, a priori. Eliezer's quotative paragraph illustrates the problem:... (read more)
We use Occam's Razor because it has tended to work better than Occam's Butterknife.
What's so complicated about that?
I read no comments
"You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam's Razor."
It seems to me like it is more of an appeal to induction. (Granted the problem Hume raised about induction, but also granted Hume's [and my own] defection to practicality.)
To distinguish the word "arbitrary" from "random", I think of an arbitrator, i.e., an outside judge chooses something. (Maybe this results in a uniform prior for me, if'n I don't know what she'll do. Or maybe I'm a mathematician and I choose to be ready for any choice that arbitrator might make.)
When I'm teaching linear algebra and explain arbitrary parameters to my students, I use exactly this metaphor. How many times does someone else have to come in and arbitrate the value of other variables, before you can tell the questioner what the answer is?
Could you not argue Occam's Razor from the conjunction fallacy? The more components that are required to be true, the less likely they are all simultaneously true. Propositions with fewer components are therefore more likely, or does that not follow?
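It does follow for independent components: the arithmetic is just multiplication of probabilities. A minimal sketch (the 0.9 figure and the independence assumption are mine, purely for illustration):

```python
# Probability that a conjunction of n independent components,
# each individually true with probability 0.9, is entirely true.
p = 0.9
for n in (1, 2, 5, 10):
    print(n, round(p ** n, 4))
```

Even with each component fairly likely on its own, ten conjoined components are more likely false than true.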
2+2=4 is part of the definition of +. The question isn't why we think 2+2=4. The question is why we're so obsessed with addition. 2 << 2 = 8, but you don't hear people talking about how 2 and 2 makes eight.
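The shift example can be checked in any language with bitwise operators; a trivial sketch:

```python
# '+' is one convention for combining numbers; '<<' (left shift) is another.
print(2 + 2)   # 4
print(2 << 2)  # 8: shift 2 left by two bits, i.e. 2 * 2**2
```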
You simply can't do anything without something being a priori. Is the universe orderly? Maybe it looks orderly by coincidence. The probability that it looks this way given that it's random is simple enough, but we also need to know the probability that it looks this way and it's not orderly. We need some a priori probability that it isn't orderly, ... (read more)
A statement can be true a priori in the sense that no sensory evidence is needed to infer it- the principle of non-contradiction, for example.
A priori, translated very roughly but with respect to the spirit of the phrase, means "before experience". It is used with a posteriori which means "after experience". Something known a priori is equivalent to your prior probability; something known a posteriori is equivalent to the posterior probability. That is, when you are concerned with an event, before any experience of the event, your knowledge is a priori.
This is, of course, a slippery slope: that prior is simply the posterior of something else.
Some philosophers have tried to us... (read more)
I thought Occam's Razor was justified by the fact that every new proposition involved necessarily increased the number of ways in which the entire explanation could fail. Then you require evidence for yet another belief, and since you cannot be 100% accurate in any of your propositions, your accuracy continually decreases as well.
A Priori has always just seemed to me like another way to describe what we call "assumption" in classical logic. You can't deduce anything in classical logic without starting from certain assumptions and seeing what you can deduce from them, and one of the strengths of classical logic is that it forces you to actually list your assumptions up front, so someone else can say "I agree with your reasoning, but I think your assumption "B" is invalid".
Trying to take assumptions apart, see if they are valid, see if they can either ... (read more)
No no no. The difference between a priori and a posteriori is where the justification lies. You may be counting your fingers when you count 1 + 1. It may be that you won't be able to figure out the answer if someone cut off your fingers. In fa... (read more)
Or the word "intuition"... (read more)
But of course there's such a thing as an a priori statement! Running a computation forwards without any uncertainty in it yields a result: this is "a priori" in the sense that, since it operates only on abstract mental data with no reference to empirical reality, it requires no experience to "get right" (rather, experience is required to locate a useful computation, out of all possible computations). 2+2 really does equal 4, every time, all the time, because any computation isomorphic to 2+2 must always yield an answer isomorphic to 4.
"the exact same material events"?
He has made a prediction of observable events that I predict is very very very wrong.
Your brain is not my brain. Not the exact same size, shape, or connectivity. Not performing the exact same ocean of processing at any one time that one might also be adding 1+1 to equal 2. For ... (read more)
I must respectfully disagree with your interpretation of Kant's use of the term "a priori knowledge." Kant says "While all knowledge begins with experience, it does not all arise out of experience." Hence, Kant never himself says that we can have knowledge without ever having had experience, that is to say, the shorthand explanation of Kant's philosophy, namely that "a priori truths are truths that can be attained without experience" does a poor job of representing the nuance of his epistemological system. Again, he says "... (read more)
I mean, yeah? You can still do that in your armchair, without looking at anything outside of yourself. Mathematical facts are indeed "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe," if you modify the statement a little to say "anywhere else existent" in order to acknowledge that the operation of tho... (read more)
Modus Ponens can be justified by truth tables:

A B | A→B | A∧(A→B)
T T |  T  |    T
T F |  F  |    F
F T |  T  |    F
F F |  T  |    F

The conjunction A∧(A→B) is only true in the row where both A and B are true, so whenever the premises hold, B holds.
Of course, one can always reject the notion of truth tables and then we are back to square one...
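Short of rejecting truth tables altogether, the check itself can be mechanized; a sketch that simply enumerates the four rows:

```python
# Verify modus ponens by brute force: in every row where both
# A and (A -> B) are true, B must also be true.
rows_where_premises_hold = []
for A in (True, False):
    for B in (True, False):
        implies = (not A) or B  # material conditional A -> B
        if A and implies:
            rows_where_premises_hold.append(B)
print(rows_where_premises_hold)  # every entry is True
```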
As for Occam's Razor - I used to think of it in terms of avoiding overfitting. More complex explanations have more degrees of freedom which makes it easier for them to explain all the datapoin... (read more)
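The overfitting picture can be made concrete with a toy example (the data, noise values, and straight-line "true" model here are all my own invention): an exact interpolating polynomial has one free parameter per data point, so it nails every observation, then swings away from the underlying trend as soon as you leave the data.

```python
# Toy overfitting sketch. True trend: y = x. Observations carry
# small noise (|e| <= 0.2).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
noise = [0.1, -0.2, 0.15, -0.1, 0.05]
ys = [x + e for x, e in zip(xs, noise)]

def lagrange(x, xs, ys):
    """Evaluate the degree len(xs)-1 polynomial through every point."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# One free parameter per data point: training error is exactly zero...
print(abs(lagrange(2.0, xs, ys) - ys[2]))  # 0.0
# ...but just past the data the curve is off by more than 1.0,
# several times the size of the noise it was "explaining".
print(abs(lagrange(4.5, xs, ys) - 4.5))    # ~1.2
```

The simpler model (the line y = x) has larger training error but far smaller error off the grid, which is the overfitting reading of the Razor.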
That's very much not proven. There are multiple arguments for Occam's Razor (see the Wikipedia page), most or all of which aren't circular.