Something feels off about this to me. Now I have to figure out if it's because fiction feels stranger than reality or because I am not confronting a weak point in my existing beliefs. How do we tell the difference between the two before figuring out which is happening? Obviously afterward it will be clear, but post-hoc isn't actually helpful. It may be enough that I get to the point where I consider the question.
On further reflection I think it may be that I identify a priori truths with propositions that any conceivable entity would assign a high plausibi...
Generalizing from past observations to future expectations is often referred to in philosophy as the "problem of induction". It suffers the same circularity: you have to accept that induction worked in the past in order to expect it to work in the future, and if Bertrand Russell's skeptic is right that you could have been created five minutes ago with false memories, you can't know it worked in the past either. Against that kind of skepticism I can only fall back on a David Stove-type "common sense" position, but fortunately I am not interested in persuading others, only in understanding the world well enough to attain my goals.
Greedy, all you're doing is specifying properties into the definition of what you mean by "entity" or "knows enough". I can always build a tape recorder that plays back "Two and two make five!" forever.
TGGP, fixed the tag. And remember, it's not about persuading an ideal philosophy student of perfect emptiness, it's about understanding why the engine works.
I can rigorously model a universe with different contents, and even one with different laws of physics, but I can't think of how I could rigorously model (as opposed to vaguely imagine) one where 2+2=3. It just breaks everything. This suggests there's still some difference in epistemic status between math and everything else. Are "necessary" and "contingent" no more than semantic stopsigns? How about "logical possibility" as distinct from physical possibility?
I don't really understand what Eliezer is arguing against. Clearly he understands the value of mathematics, and clearly he understands the difference between induction and deduction. He seems to be arguing that deduction is a kind of induction, but that doesn't make much sense to me.
Nick: you can construct a model where there is a notion of 'natural number' and a notion of 'plus' except this plus happens to act 'oddly' when applied to 2 and 2. I don't think this model would be particularly interesting, but it could be made.
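Such an "odd plus" can be sketched directly (a toy function, purely illustrative; it is not a model of the Peano axioms, since it breaks them, which is exactly why it is uninteresting):

```python
# A toy "nonstandard plus": agrees with ordinary addition everywhere
# except on the single pair (2, 2). Perfectly definable, just not the
# operation we care about - note it violates the usual successor axiom,
# since odd_plus(2, 2) != odd_plus(2, 1) + 1.
def odd_plus(a, b):
    if (a, b) == (2, 2):
        return 3  # the deliberate "oddity"
    return a + b

assert odd_plus(2, 2) == 3
assert odd_plus(2, 3) == 5
```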
Nick, I'm honestly not sure if there's a difference between logical possibility and physical possibility - it involves questions I haven't answered yet, though I'm still diligently hitting Explain instead of Worship or Ignore. But I do know that everything we know about logic comes from "observing" neurons firing, and it shouldn't matter if those neurons fire inside or outside our own skulls.
Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences, should all be considered as equivalent-to-observation.
Eliezer: Good answer. I take the same view, although I think the "can you model it" question suggests there is a difference. Do you think a rigorous, consistent (or not provably inconsistent) model of arithmetic or physics is possible where 2+2=3? (or the 3rd decimal place of pi is 2, or Fermat's last theorem is false, or ...)
It seems like you could justify Occam's Razor by looking at the past history of discarded explanations. An explanation that is ridiculously complex, yet fits all the observations so far, will probably be broken by the next observation; a simple explanation is less likely to fail in the future. A hypothesis that says "Occam's Razor will work until October 8th, 2007" falls into the general category of "hypotheses with seemingly random exceptions", which should have a history of lesser accuracy than hypotheses with justified exceptions or ...
But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains.
Really? I'm aware that physical outputs are totally determined by physical inputs. Neurology can tell us what sorts of physical causes give rise to what sorts of physical effects. We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they alwa...
"I'm aware that physical outputs are totally determined by physical inputs."
Even this is far from a settled matter, since I think this implies both determinism and causal closure.
logicnazi, if we can talk about our experiences, our experiences have a causal effect on the physical world. Assuming, as you do, causal closure (which is not known, but the most parsimonious hypothesis), this means that the idea of different experiences with the same physical state is indeed incoherent.
"We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they always go together in the actual world."
Look at airplanes: they all have a bunch of common characteristics like an engine, wings, rudders, etc. If you argued that an airplane was not really "identical" to the pile of parts, but that they just "always went together", people would look at you like you had three heads. Yet, when applied to brains, people think this argument makes sense. A brain is made up of the frontal cortex, visual cortex, auditory cortex, amygdala, pituitary gland, cerebellum, etc.; that's just what it is.
Tom: I agree with your analogy. Yudkowsky said: "Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences, should all be considered as equivalent-to-observation."
This is only convincing to someone who already believes logic is possible only when there is some physical structure that directly corresponds to the logical output. Yet even the evidence indicating this is true uses logic.
I recently started (and then backed out of) a debate with a Christian presuppositionalist. I had no idea how to show how logic itself works except by examp...
I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.
I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”
This seems to fit Occam’s Razor if I take it to be a guide, not a prediction or a law. It does not say that the theory with the fewest parts is more likely to be correct. It just reminds us to take out anything that is unnecessary.
If scientists have often found that theories with more parts are less often correct, that may further encourage us to...
I think a discussion of what people mean exactly when they invoke Occam's Razor would be great, though it's probably a large enough topic to deserve its own thread.
The notion of hypothesis parsimony is, I think, a very subtle one. For example, Nick Tarleton above claimed that 'causal closure' is 'the most parsimonious hypothesis.' At some other point, Eliezer claimed the many-worlds interpretation of quantum mechanics as the most parsimonious. This isn't obvious! How is parsimony measured? Would some version of Chalmers' dualism really be less parsimonious? How will we agree on a procedure to compare 'hypothesis size?' How much should we value 'God' vs 'the anthropic landscape' favored at Stanford?
"Anyone agree or disagree with the futility of debating someone who believes the universe is around 6,000 years old (and is also above age 25)?"
Agree 100%. The Universe is slightly over 10,000 years old. The 6000-ers got their math badly wrong. Crackpots, the lot of them.
Constant, the obviousness felt by both disagreeing parties almost never changes. How many formal debates actually end with the other person changing their mind? I would take it further and say formal debate is usually worthless too.
In the meantime where are your error bars? I bet somewhere there is a fundy who includes error bars.
Error bars: give or take about 14 billion years. My calculations are quite precise. I am still working out the ramifications of the universe being 10,000 minus 14 billion years old.
"But what is a "reasonable" distribution? Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?"
Occam's Razor is only relevant to model selection problems. A complicated prior distribution does not matter. What does matter is how much the prior distribution volume in parameter space decreases as the model becomes more complex (more parameters). Each additional parameter in the model spreads the prior distributio...
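The prior-volume point can be made concrete with a toy Bayesian model comparison (a minimal sketch; the data and numbers are illustrative):

```python
from math import factorial

# Data: a fixed sequence of 5 heads and 3 tails in 8 flips.
heads, tails = 5, 3
n = heads + tails

# Model A (zero free parameters): the coin is fair.
evidence_A = 0.5 ** n

# Model B (one free parameter): unknown bias theta, uniform prior.
# Marginal likelihood = integral of theta^5 * (1 - theta)^3 dtheta
#                     = Beta(6, 4) = 5! * 3! / 9!
evidence_B = factorial(heads) * factorial(tails) / factorial(n + 1)

# The flexible model spreads its prior mass over many values of theta,
# so the simpler model ends up with the higher evidence.
assert evidence_A > evidence_B
```

This is the automatic "Occam factor": adding a parameter dilutes the prior over a larger parameter space, and the model is penalized unless the extra flexibility buys enough fit.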
Eliezer: It sure does seem to me that when you say that "a mind needs a certain amount of dynamic structure to be an argument acceptor" you are saying that it does in fact know certain things prior to any "learning" taking place, e.g. that there are "priors". I would argue that 2+2=4 is part of this set, but as the punchline argues, we have already established the basics, now we are just haggling.
William,
By considering models in the first place, one is already using Occam's razor. With no preference for simplicity in the priors at all, one would start with uniform priors for all possible data sequences, not finite-parameter models of data sequences. If you formalize models as being programs for Turing machines which have a separate tape for inputting the program, and your prior is a uniform distribution over possible inputs on that tape, you exactly recover the 2^-k Occam's razor law, where k is the number of program bits that the Turing machine re...
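The 2^-k weighting itself can be verified by a brute-force count over finite tapes (a toy sketch, not a real Turing machine; the function name is illustrative):

```python
from itertools import product

# Under a uniform prior over binary program tapes, a program that only
# reads its first k bits captures a 2^-k fraction of the prior mass:
# every one of the 2^(n-k) length-n extensions of the k-bit prefix
# is counted as the same program.
def prior_mass(program_bits, tape_length):
    prefix = tuple(program_bits)
    k = len(prefix)
    matches = sum(
        1 for tape in product((0, 1), repeat=tape_length)
        if tape[:k] == prefix
    )
    return matches / 2 ** tape_length

assert prior_mass([1, 0, 1], 8) == 2 ** -3  # 3 bits read -> weight 1/8
```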
You could argue that Occam's Razor is a reasonable distribution on prior probabilities. But what is a "reasonable" distribution?
If you make the assumption that what you observe is the result of a computational process, the prior probability of a lossless description/explanation/theory of length l becomes inversely proportional to the size of the space of halting programs of length l. You're free to dismiss the assumption, of course.
"But," you cry, "why is the universe itself orderly?"
One reason among many may be the KAM-Theorem.
Occam's Razor has two aspects. One is model fitting. If the model with more free parameters fits better, that could merely be because it has more free parameters; it would take a thorough Bayesian analysis to work out whether it is really better. A model that fits just as well but with fewer parameters is obviously better.
Occam's Razor goes blunt when you already know that the situation is complicated and messy. In neurology, in sociology, in economics, you can observe the underlying mechanisms. It is obvious enough that there are not going to be simple laws. I...
Alan: Does a scientist likewise have no reason to pay attention to any model of the universe but fundamental physics? High level descriptions of the world very frequently can account for most of the variance in high level phenomena without containing the known complexity of the substrate.
Do high level descriptions of the world frequently account for most of the variance in high level phenomena without containing the known complexity of the substrate?
I think you can contrast thermodynamics and sociology by noticing that there is no Princess Diana molecule. All the molecules are on the same footing. None of them get to spoil the statistics by setting a trend and getting in all the newspapers. So perhaps Occam's Razor grabs credit not due to it, as researchers favour simple theories when they have specific reasons to do so.
An example ...
"I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.
I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”
Think of Kolmogorov complexity: the most parsimonious hypothesis is the one that can generate the data using the least number of bits when fed into a Turing machine.
"One way is to appeal to Occam's Razor. Let us prefer the simpler hypothesis that increases to the minimum wage are random. That is bogus."
Why is it bogus? An ideal st...
Let's see. What else would I have to believe in order to accept a statement like "~(p&~p) is not a theorem in propositional logic?"
A statement of the form "X is a theorem in this particular formal mathematical system" means that I can use the operations allowed within that system to construct a "proof" of the sentence X. In theory, I can make a machine that takes a "proof" as input and returns "true" if the proof is indeed a correct proof and "false" if there is a step in the proof that is not...
A person not capable of correct deductive reasoning is insane. The people usually deemed insane are those with deviant behavior, or what Caplan calls "the extreme tails of a preference distribution with high variance".
And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".
I should note that the most famous paper in 20th Century analytic philosophy, Quine's "Two Dogmas of Empiricism", is an attack on the idea of the a priori. The paper was written in 1951 and built on papers written in the previous two decades. A large proportion of contemporary philosophers agree with Quine's basic position. This doesn't stop them from doing theoretical work, just as Eliezer's disavowal of the a priori need not prevent him theorizing...
Eliezer - It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence". For one thing, no amount of mere observation will suffice to bring us to a conclusion, as Lewis Carroll's tortoise taught us. Further, it mistakes content and vehicle. When I judge that p, and subsequently infer q, the basis for my inference is simply p - the proposition itself - and not the psychological fact that I judge that p. I could infer some things from the latter fact too, of course, but that's a very different matter. (And in tu...
It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence".
If you view it as an argument, yes. The engines yield the same outputs.
Minds are a rather different matter. They are not conceptually reducible to neurons firing.
Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust.
Eliezer Yudkowsky said: "Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust."
I agree, yet this won't convince a sophisticated right-wing Christian (or Jew, or Muslim, etc.).
Who said anything about 'magic pixie dust'? I agree that the brain gives rise to (or 'powers') the mind, thanks to the laws of nature that happen to govern our universe. I may even agree with all the causal claims you want to make. But if you're going to start talking about identity, then you need to do some real philosophy.
"If you view it as an argument, yes. The engines yield the same outputs."
What does the latter have to do with rationality?
Does Eliezer really need to do some "real philosophy"? If he does not, will he miss out on the Singularity? Will A.I. be insufficiently friendly? I don't see any reason to think so. I say be content in utter philosophical wrongness. Shout to the heavens that our actual world is a zombie world with XYZ rather than H2O flowing in the creeks that tastes grue, all provided it has no impact on your expectations.
But if you're going to start talking about identity, then you need to do some real philosophy.
What's the difference between the brain giving rise to a mind by the laws of nature and the brain giving rise to a mind without identity by the laws of nature?
But if you're going to start talking about identity, then you need to do some real philosophy.
"Identity" is not magic. There is no abiding personal essence, just continuity of memory. A real philosopher said that, by the way.
I do think there are important unanswered questions in the philosophy of mind, but this isn't one of them. (Although one of them is "where is our thinking still contaminated by the idea of magic personal identity?", which I suspect is at the root of several apparent paradoxes.)
Tom, I think we are actually agreeing. I'm arguing that if you already know the situation is complicated you cannot just appeal to Occam's Razor, you need some reason specific to the situation about why the simple hypothesis should win.
You are proposing a reason, specific to economics, about why the complications might be washed away, making it reasonable to prefer the simpler hypothesis. My claim is that those extra reasons are essential. Occam's Razor, on its own, is useless in situations known to be complicated.
Tom McCabe, Thank you for the comment. You have started me thinking about the differences between Occam's Razor and Einstein's "Everything should be made as simple as possible, but not simpler." John
TGGP - You seem to have missed the conditional nature of my claim. I'm not forcing philosophy on anyone; just saying if you're going to do it at all, best do it well.
Nick - I never suggested there was an "abiding personal essence". (Contemporary philosophers like Derek Parfit and David Velleman have done a stellar job in revealing the conceptual confusions underlying such an idea.) In any case, it's hardly relevant. The issue here is individuation (how to count the distinct things in the world), not personal identity and persistence through time....
Eliezer: "You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence."
Richard: "It's just fundamentally mistaken to conflate reason...
Richard: oops, I thought you meant personal identity. Ach, homonyms.
Do you think that the human bodies in a physics-only zombie world would behave identically to ours? ( = Do you think physics is causally closed?)
Nick Hay: great explanation.
Richard: sure, minds and brains can "come apart" in possible worlds other than ours (or indeed in this one, if and when someone teaches a computer to think), but I have never understood why some people seem to think that this suggests that there's anything weird about the relationship between actual minds and actual brains in the actual world.
Consider those airplanes again, but let's use a more general term like "flying machine" that isn't so tightly tied to the details of their construction. You can imagine (yes?) a world in which a Bo...
g - there's no possible world that's physically identical to ours but where the Boeings don't fly. There is a possible world that's physically identical to ours that lacks consciousness. That's the difference. It shows that physics suffices for flight but not fully-fledged mentality. (N.B. the interesting case here is not minds without brains, but brains without minds.)
Nick Hay - Thanks for bringing this back to the key issue. In fact I do not "consider having successfully determined a conclusion from pure thought evidence that that thought is correc...
Sorry, my second sentence to NH is unclear. The psychological fact could be taken as a kind of indirect evidence, as noted in my postscript. But it is not what I take my evidence to be, when I am reasoning according to a #1-style argument. We could say the evidence of my thought [vehicle] is not the evidence in my thought [content].
Nick T. - yes, I accept the causal closure of the physical. (And thus epiphenomenalism. I discuss the epistemic consequences in my post 'Why do you think you're conscious?')
On the broader issue - to expand on my response to James above - see my post on the explanatory power of dualism.
Richard, are you saying that if in this world I attempted to move around some material to produce an artificial brain, it would not work unless I also did some psycho-manipulation of some sort? Or is the psycho-stuff bound so tightly with the material that the materially-sufficient is psycho-sufficient?
I neglected to link to this before when I mentioned anticipated experiences, which is one of my favorite posts here. I am so fond of linking to it I assumed I already had.
Richard, you have presented absolutely no evidence that there is a possible world physically identical to ours but in which we are not conscious, beyond saying that it's "conceptually possible" for minds and brains to "come apart", if we imagine a world with different laws of nature.
But it's equally conceptually possible for flying machines and aerofoils to come apart, if we imagine a world with different laws of nature, and (it appears) you don't see that as any reason to think that flying machines fly by aerofoils plus some extra brid...
I agree with Richard that we should respect the fact that philosophers have spilled a lot of ink on the consciousness question; we should read them and respond to their arguments. We should have at least one post devoted to this topic. But after doing so, I'm betting I'll still mainly agree with Eliezer.
Richard, I don't think Eliezer conflated reasoning with observing your own brain - he just suggested that simple Bayesian reasoning based on observing your own brain gets you pretty much all the conclusions you need from most other "reasoning."
Robin and Richard - I think it is possible that Eliezer did not word his statement as cleanly as he might have. However, if his wording conflated categories, I am confident that with some care the exact same point can be re-worded without such conflation. There is something real and significant here that he's pointing out, and it's not going to go away simply because he was (if he was) a bit too loose in his presentation.
I think this contains one of the main points:
If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. "But," you cry, "why is the universe itself orderly?" This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"
The philosop...
Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you're the first to defect - making you a bad, bad person. To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn't run, and this is equally true whether anyone else breaks the rules or not.
Consider the problem of Occam's Razor, as confronted by Traditional philosophers. If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?
You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam's Razor. "Occam's Razor works up to October 8th, 2007 and then stops working thereafter" is more complex, but it fits the observed evidence equally well.
You could argue that Occam's Razor is a reasonable distribution on prior probabilities. But what is a "reasonable" distribution? Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?
Indeed, it seems there is no way to justify Occam's Razor except by appealing to Occam's Razor, making this argument unlikely to convince any judge who does not already accept Occam's Razor. (What's special about the words I italicized?)
If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".
But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying "a priori" doesn't explain why the brain-engine runs. If the brain has an amazing "a priori truth factory" that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can't use the "a priori truth factory" to locate drinkable water. It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.
James R. Newman said: "The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2." The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe." You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.
But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains. Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.
When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns. In principle, we could observe, experientially, the exact same material events as they occurred within someone else's brain. It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done. You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.
If this seems counterintuitive, try to see minds/brains as engines - an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2. If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern. In other words, for every form of a priori knowledge obtained by "pure thought", you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation. The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.
There is nothing you can know "a priori", which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain. What do you think you are, dear reader?
This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch "3 x 4" into a calculator to predict the result of imagining 4 rows with 3 apples per row. You and the apple exist within a boundary-less unified physical process, and one part may echo another.
Are the sort of neural flashes that philosophers label "a priori beliefs", arbitrary? Many AI algorithms function better with "regularization" that biases the solution space toward simpler solutions. But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1000 extra lines) compared to unregularized algorithms. The human brain is biased toward simplicity, and we think more efficiently thereby. If you press the Ignore button at this point, you're left with a complex brain that exists for no reason and works for no reason. So don't try to tell me that "a priori" beliefs are arbitrary, because they sure aren't generated by rolling random numbers. (What does the adjective "arbitrary" mean, anyway?)
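The regularization point can be illustrated with a minimal one-dimensional ridge-regression sketch (all numbers illustrative): the regularizer is a single extra term in the loss, yet it biases the solution toward the simpler, smaller-coefficient fit.

```python
# Fit y ~ w * x by minimizing sum((y - w*x)^2) + lam * w^2.
# The closed-form solution shows how the penalty shrinks w.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x plus noise

def ridge_weight(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

w_plain = ridge_weight(xs, ys, lam=0.0)   # unregularized fit
w_reg = ridge_weight(xs, ys, lam=10.0)    # biased toward simplicity
assert abs(w_reg) < abs(w_plain)          # regularization shrinks w
```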
You can't excuse calling a proposition "a priori" by pointing out that other philosophers are having trouble justifying their propositions. If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs. There's no truce, no white flag, until you understand why the engine works.
If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. "But," you cry, "why is the universe itself orderly?" This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"
Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam's Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn't implement Modus Ponens, it can accept "A" and "A->B" all day long without ever producing "B". How do you justify Modus Ponens to a mind that hasn't accepted it? How do you argue a rock into becoming a mind?
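The Modus Ponens requirement can be made concrete with a toy forward-chaining sketch (illustrative names only): a "mind" that does implement the rule.

```python
# A minimal forward-chaining engine. Facts are strings; rules are
# (premise, conclusion) pairs representing "premise -> conclusion".
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            # This inner step IS Modus Ponens; delete it and the
            # engine accepts "A" and "A -> B" forever without
            # ever producing "B".
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

assert "B" in forward_chain({"A"}, [("A", "B")])
```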
Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness. This does not make our judgments meaningless. A brain-engine can work correctly, producing accurate beliefs, even if it was merely built - by human hands or cumulative stochastic selection pressures - rather than argued into existence. But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.