Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you're the first to defect - making you a bad, bad person.  To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn't run, and this is equally true whether anyone else breaks the rules or not.

Consider the problem of Occam's Razor, as confronted by Traditional philosophers.  If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?

You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future.  But this, itself, appeals to a prediction from Occam's Razor.  "Occam's Razor works up to October 8th, 2007 and then stops working thereafter" is more complex, but it fits the observed evidence equally well.
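The two hypotheses can be made concrete. In this toy sketch (my own illustration; the function names are invented), both fit every observation made before the cutoff, so nothing in the past data separates them except a simplicity prior:

```python
from datetime import date

def occam_always(d):
    """Hypothesis A: simplicity-biased inference works on every date."""
    return True

def occam_until_cutoff(d):
    """Hypothesis B: works only up to 2007-10-08, then stops."""
    return d <= date(2007, 10, 8)

# Every observation collected so far predates the cutoff...
observed = [date(2007, 1, 1), date(2007, 6, 15), date(2007, 10, 8)]
assert all(occam_always(d) == occam_until_cutoff(d) for d in observed)
# ...so past performance cannot separate the hypotheses; only B's extra
# date constant and comparison (its longer description) counts against it.
```

The only thing that disfavors hypothesis B is the extra machinery it carries, which is exactly the preference the argument was trying to justify.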

You could argue that Occam's Razor is a reasonable distribution on prior probabilities.  But what is a "reasonable" distribution?  Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?

Indeed, it seems there is no way to *justify* Occam's Razor except by appealing to Occam's Razor, making this *argument* unlikely to *convince* any *judge* who does not already *accept* Occam's Razor.  (What's special about the words I italicized?)

If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug.  Here is an end to justifying, arguing and convincing.  You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs.  And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".

But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying "a priori" doesn't explain why the brain-engine runs.  If the brain has an amazing "a priori truth factory" that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can't use the "a priori truth factory" to locate drinkable water.  It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.

James R. Newman said:  "The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2."  The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience.  Wikipedia quotes Hume:  Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe."  You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.

But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains.  Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.

When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns.  In principle, we could observe, experimentally, the exact same material events as they occurred within someone else's brain.  It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done.  You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2.  How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing?  When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

If this seems counterintuitive, try to see minds/brains as engines - an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2.  If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern.  In other words, for every form of a priori knowledge obtained by "pure thought", you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation.  The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.

There is nothing you can know "a priori", which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain.  What do you think you are, dear reader?

This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch "3 x 4" into a calculator to predict the result of imagining 4 rows with 3 apples per row.  You and the apple exist within a boundary-less unified physical process, and one part may echo another.

Are the sort of neural flashes that philosophers label "a priori beliefs", arbitrary?  Many AI algorithms function better with "regularization" that biases the solution space toward simpler solutions.  But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1000 extra lines) compared to unregularized algorithms.  The human brain is biased toward simplicity, and we think more efficiently thereby.  If you press the Ignore button at this point, you're left with a complex brain that exists for no reason and works for no reason.  So don't try to tell me that "a priori" beliefs are arbitrary, because they sure aren't generated by rolling random numbers.  (What does the adjective "arbitrary" mean, anyway?)
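To make the regularization point concrete, here is a minimal sketch of my own (not from the post): the regularized learner literally contains one extra term of code, and that extra complexity in the engine is what biases its outputs toward simpler hypotheses.

```python
def loss(weights, data):
    """Sum of squared errors of a polynomial with the given coefficients."""
    return sum((y - sum(w * x**i for i, w in enumerate(weights))) ** 2
               for x, y in data)

def regularized_loss(weights, data, lam=1.0):
    """Same engine plus one extra (L2 penalty) term: more code, simpler fits."""
    return loss(weights, data) + lam * sum(w * w for w in weights)

# A single observation leaves the line y = w0 + w1*x underdetermined:
data = [(1.0, 2.0)]
w_modest = [1.0, 1.0]    # y = 1 + x
w_wild   = [-8.0, 10.0]  # y = -8 + 10x
assert loss(w_modest, data) == loss(w_wild, data) == 0.0
# The raw data cannot choose between them; the penalty term can:
assert regularized_loss(w_modest, data) < regularized_loss(w_wild, data)
```

Both hypotheses fit the observation perfectly; the extra line of code is what makes the engine prefer the tamer one.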

You can't excuse calling a proposition "a priori" by pointing out that other philosophers are having trouble justifying their propositions.  If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs.  There's no truce, no white flag, until you understand why the engine works.

If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found.  "But," you cry, "why is the universe itself orderly?"  This I do not know, but it is what I see as the next mystery to be explained.  This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"

Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam's Razor, just as you cannot argue anything to a rock.  A mind needs a certain amount of dynamic structure to be an argument-acceptor.  If a mind doesn't implement Modus Ponens, it can accept "A" and "A->B" all day long without ever producing "B".  How do you justify Modus Ponens to a mind that hasn't accepted it?  How do you argue a rock into becoming a mind?
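The point about dynamic structure can be sketched in code (my own toy example, nothing from the post). A bare set of sentences is the "rock": it holds "A" and "A->B" forever. Only an engine that implements the inference rule ever produces "B":

```python
def close_under_modus_ponens(beliefs):
    """Repeatedly apply modus ponens until no new conclusions appear."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for sentence in list(beliefs):
            if "->" in sentence:
                antecedent, consequent = sentence.split("->", 1)
                if antecedent in beliefs and consequent not in beliefs:
                    beliefs.add(consequent)
                    changed = True
    return beliefs

rock = {"A", "A->B"}                   # no inference rule: inert
mind = close_under_modus_ponens(rock)  # dynamic structure: derives "B"
assert "B" not in rock and "B" in mind
```

The difference between the two is not a premise either of them holds; it is whether the machinery that turns premises into conclusions exists at all.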

Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness.  This does not make our judgments meaningless.  A brain-engine can work correctly, producing accurate beliefs, even if it was merely built - by human hands or cumulative stochastic selection pressures - rather than argued into existence.  But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.


133 comments

My posts for the next two days will be on related topics.

Something feels off about this to me. Now I have to figure out if it's because fiction feels stranger than reality or because I am not confronting a weak point in my existing beliefs. How do we tell the difference between the two before figuring out which is happening? Obviously afterward it will be clear, but post-hoc isn't actually helpful. It may be enough that I get to the point where I consider the question.

On further reflection I think it may be that I identify a priori truths with propositions that any conceivable entity would assign a high plausibi... (read more)

Generalizing from past observations to future expectations is often referred to in philosophy as the "problem of induction". It has the same problem: you have to accept that induction worked in the past in order to expect it to work in the future, and if Bertrand Russell is right that you could have been created five minutes ago with false memories, you can't know it worked in the past either. Against that kind of skepticism I can only fall back on a David Stove-style "common sense" position, but fortunately I am not interested in persuading others, only in understanding the world well enough to attain my goals.

You left the italics tag on.

Greedy, all you're doing is specifying properties into the definition of what you mean by "entity" or "knows enough". I can always build a tape recorder that plays back "Two and two make five!" forever.

TGGP, fixed the tag. And remember, it's not about persuading an ideal philosophy student of perfect emptiness, it's about understanding why the engine works.

I can rigorously model a universe with different contents, and even one with different laws of physics, but I can't think of how I could rigorously model (as opposed to vaguely imagine) one where 2+2=3. It just breaks everything. This suggests there's still some difference in epistemic status between math and everything else. Are "necessary" and "contingent" no more than semantic stopsigns? How about "logical possibility" as distinct from physical possibility?

I don't really understand what Eliezer is arguing against. Clearly he understands the value of mathematics, and clearly he understands the difference between induction and deduction. He seems to be arguing that deduction is a kind of induction, but that doesn't make much sense to me.

Nick: you can construct a model where there is a notion of 'natural number' and a notion of 'plus' except this plus happens to act 'oddly' when applied to 2 and 2. I don't think this model would be particularly interesting, but it could be made.
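That model can be rendered in a couple of lines (my toy version of Gray Area's suggestion, not his code): an operation that is ordinary addition everywhere except at the single point (2, 2).

```python
def odd_plus(a, b):
    """Standard addition, except this 'plus' acts oddly on the pair (2, 2)."""
    if (a, b) == (2, 2):
        return 3
    return a + b

assert odd_plus(2, 2) == 3   # the deliberately "odd" case
assert odd_plus(2, 3) == 5   # agrees with ordinary + everywhere else
```

As the comment says, the model is consistent but uninteresting: `odd_plus` simply is not the addition function, which is why nothing else breaks.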

Nick, I'm honestly not sure if there's a difference between logical possibility and physical possibility - it involves questions I haven't answered yet, though I'm still diligently hitting Explain instead of Worship or Ignore. But I do know that everything we know about logic comes from "observing" neurons firing, and it shouldn't matter if those neurons fire inside or outside our own skulls.

Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences should all be considered as equivalent-to-observation.

Eliezer: Good answer. I take the same view, although I think the "can you model it" question suggests there is a difference. Do you think a rigorous, consistent (or not provably inconsistent) model of arithmetic or physics is possible where 2+2=3? (or the 3rd decimal place of pi is 2, or Fermat's last theorem is false, or ...)

It seems like you could justify Occam's Razor by looking at the past history of discarded explanations. An explanation that is ridiculously complex, yet fits all the observations so far, will probably be broken by the next observation; a simple explanation is less likely to fail in the future. A hypothesis that says "Occam's Razor will work until October 8th, 2007" falls into the general category of "hypotheses with seemingly random exceptions", which should have a history of lesser accuracy than hypotheses with justified exceptions or ... (read more)

Дмитрий Зеленский: But both the rule "have no seemingly random exceptions" and the passage in the Virtues are special cases of Occam's Razor. So the argument does become circular (or, at best, pushed one step back to the "low-entropy universe" and made circular there).
"But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains."

Really? I'm aware that physical outputs are totally determined by physical inputs. Neurology can tell us what sorts of physical causes give rise to what sorts of physical effects. We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they alwa... (read more)

"I'm aware that physical outputs are totally determined by physical inputs."

Even this is far from a settled matter, since I think this implies both determinism and causal closure.

logicnazi, if we can talk about our experiences, our experiences have a causal effect on the physical world. Assuming, as you do, causal closure (which is not known, but the most parsimonious hypothesis), this means that the idea of different experiences with the same physical state is indeed incoherent.

"We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they always go together in the actual world."

Look at airplanes: they all have a bunch of common characteristics like an engine, wings, rudders, etc. If you argued that an airplane was not really "identical" to the pile of parts, but that they just "always went together", people would look at you like you had three heads. Yet, when applied to brains, people think this argument makes sense. A brain is made up of the frontal cortex, visual cortex, auditory cortex, amygdala, pituitary gland, cerebellum, etc.; that's just what it is.

Tom: I agree with your analogy. Yudkowsky said: "Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences should all be considered as equivalent-to-observation."

This is only convincing to someone who believes logic is only possible when there is some physical structure that directly corresponds to logical output. Yet even the evidence indicating this is true uses logic.

I recently started (and then backed out of) a debate with a Christian presuppositionalist. I had no idea how to show how logic itself works except by examp... (read more)

I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.

I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”

This seems to fit Occam’s Razor if I take it to be a guide, not a prediction or a law. It does not say that the theory with the fewest parts is more likely to be correct. It just reminds us to take out anything that is unnecessary.

If scientists have often found that theories with more parts are less often correct, that may further encourage us to... (read more)

I think a discussion of what people mean exactly when they invoke Occam's Razor would be great, though it's probably a large enough topic to deserve its own thread.

The notion of hypothesis parsimony is, I think, a very subtle one. For example, Nick Tarleton above claimed that 'causal closure' is 'the most parsimonious hypothesis.' At some other point, Eliezer claimed the multi-world interpretation of quantum mechanics as the most parsimonious. This isn't obvious! How is parsimony measured? Would some version of Chalmers' dualism really be less parsimonious? How will we agree on a procedure to compare 'hypothesis size?' How much should we value 'God' vs 'the anthropic landscape' favored at Stanford?

"Anyone agree or disagree with the futility of debating someone who believes the universe is around 6,000 years old (and is also above age 25)?"

Agree 100%. The Universe is slightly over 10,000 years old. The 6000-ers got their math badly wrong. Crackpots, the lot of them.

Constant, the obviousness felt by both disagreeing parties almost never changes. How many formal debates actually end with the other person changing their mind? I would take it further and say formal debate is usually worthless too.

In the meantime where are your error bars? I bet somewhere there is a fundy who includes error bars.

PetjaY: While people often end debates without admitting defeat, if you discuss with them after a couple of days (or weeks) you can often see that their opinions have changed. This is because people need time to think before changing their mind, which they cannot do well while debating. In particular, people do not like admitting they're wrong before they're sure they are.

Error bars: give or take about 14 billion years. My calculations are quite precise. I am still working out the ramifications of the universe being 10,000 minus 14 billion years old.

I knew you would come through, Constant, simply by reading your name.

"But what is a "reasonable" distribution? Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?"

Occam's Razor is only relevant to model selection problems. A complicated prior distribution does not matter. What does matter is how much the prior distribution volume in parameter space decreases as the model becomes more complex (more parameters). Each additional parameter in the model spreads the prior distributio... (read more)

Eliezer: It sure does seem to me that when you say that "a mind needs a certain amount of dynamic structure to be an argument acceptor" you are saying that it does in fact know certain things prior to any "learning" taking place, e.g. that there are "priors". I would argue that 2+2=4 is part of this set, but as the punchline argues, we have already established the basics, now we are just haggling.

William,

By considering models in the first place, one is already using Occam's razor. With no preference for simplicity in the priors at all, one would start with uniform priors for all possible data sequences, not finite-parameter models of data sequences. If you formalize models as being programs for Turing machines which have a separate tape for inputting the program, and your prior is a uniform distribution over possible inputs on that tape, you exactly recover the 2^-k Occam's razor law, where k is the number of program bits that the Turing machine re... (read more)
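The 2^-k law the comment describes can be illustrated numerically (a sketch under the comment's assumptions, not a derivation): if the bits on the program tape are generated by fair coin flips, any particular k-bit program has probability 2^-k, so each extra bit of program complexity halves the prior.

```python
def program_prior(k):
    """Prior weight of one particular k-bit program under a uniform random tape."""
    return 2.0 ** -k

# Each extra bit of description length halves the prior weight:
assert program_prior(11) == program_prior(10) / 2
# A 10-bit program starts out 2^10 = 1024 times more probable than a
# 20-bit program, before any evidence arrives:
assert program_prior(10) / program_prior(20) == 2 ** 10
```

This is the sense in which the simplicity preference falls out of the uniform distribution over tapes rather than being bolted on separately.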

You could argue that Occam's Razor is a reasonable distribution on prior probabilities. But what is a "reasonable" distribution?

If you make the assumption that what you observe is the result of a computational process, the prior probability of a lossless description/explanation/theory of length l becomes inversely proportional to the size of the space of halting programs of length l. You're free to dismiss the assumption, of course.

"But," you cry, "why is the universe itself orderly?"

One reason among many may be the KAM-Theorem.

Occam's Razor has two aspects. One is model fitting. If the model with more free parameters fits better that could merely be because it has more free parameters. It would take a thorough Bayesian analysis to work out if it was really better. A model that fits just as well but with fewer parameters is obviously better.

Occam's Razor goes blunt when you already know that the situation is complicated and messy. In neurology, in sociology, in economics, you can observe the underlying mechanisms. It is obvious enough that there are not going to be simple laws. I... (read more)
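The "thorough Bayesian analysis" for the extra-free-parameters case can be roughed out with an information criterion (my stand-in, using BIC with Gaussian residuals; the numbers are invented for illustration): extra parameters must buy enough extra fit to pay for themselves.

```python
import math

def bic(n, k, rss):
    """Bayesian information criterion (lower is better), Gaussian residuals:
    n observations, k free parameters, residual sum of squares rss."""
    return n * math.log(rss / n) + k * math.log(n)

n = 50                       # observations
bic_small = bic(n, 2, 10.0)  # 2-parameter model
bic_big   = bic(n, 6, 9.5)   # 6-parameter model fits only slightly better
# The small gain in fit does not pay for four extra parameters:
assert bic_small < bic_big
```

A model with fewer parameters that fits just as well wins automatically under this score, which is the "obviously better" case in the comment.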

Alan: Does a scientist likewise have no reason to pay attention to any model of the universe but fundamental physics? High level descriptions of the world very frequently can account for most of the variance in high level phenomena without containing the known complexity of the substrate.

Do high level descriptions of the world frequently account for most of the variance in high level phenomena without containing the known complexity of the substrate?

I think you can contrast thermodynamics and sociology by noticing that there is no Princess Diana molecule. All the molecules are on the same footing. None of them get to spoil the statistics by setting a trend and getting in all the newspapers. So perhaps Occam's Razor grabs credit not due to it, as researchers favour simple theories only when they have specific reasons to do so.

An example ... (read more)

"I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.

I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”"

Think of Kolmogorov complexity: the most parsimonious hypothesis is the one that can generate the data using the least number of bits when fed into a Turing machine.
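Kolmogorov complexity itself is uncomputable, but compressed length gives a usable upper bound (a standard trick; this sketch is mine, not the commenter's): patterned data admits a far shorter description than noise of the same length.

```python
import random
import zlib

random.seed(0)
patterned = bytes([1, 2] * 500)                            # 1000 bytes, simple rule
noise = bytes(random.randrange(256) for _ in range(1000))  # 1000 bytes, no rule

# The patterned sequence has a much shorter description:
assert len(zlib.compress(patterned)) < len(zlib.compress(noise))
```

The hypothesis "repeat 1, 2" compresses the first sequence to a handful of bytes; no comparably short program generates the second.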

"One way is to appeal to Occam's Razor. Let us prefer the simpler hypothesis that increases to the minimum wage are random. That is bogus."

Why is it bogus? An ideal st... (read more)

Let's see. What else would I have to believe in order to accept a statement like "~(p&~p) is not a theorem in propositional logic?"

A statement of the form "X is a theorem in this particular formal mathematical system" means that I can use the operations allowed within that system to construct a "proof" of the sentence X. In theory, I can make a machine that takes a "proof" as input and returns "true" if the proof is indeed a correct proof and "false" if there is a step in the proof that is not... (read more)
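The verifying machine described here can be sketched for a small fragment of propositional logic (my toy checker, with modus ponens as the only inference rule; real proof systems have more): it returns True only when every line of the proof is licensed.

```python
def check_proof(premises, proof):
    """Return True iff each proof line is a premise, already established,
    or follows from established lines by modus ponens (the only rule here)."""
    established = list(premises)
    for line in proof:
        if line in established:
            continue
        follows = any(imp == p + "->" + line
                      for imp in established for p in established)
        if not follows:
            return False
        established.append(line)
    return True

assert check_proof(["A", "A->B", "B->C"], ["B", "C"])  # a correct proof
assert not check_proof(["A"], ["B"])                   # a step with no support
```

The point in the comment survives the toy scale: "X is a theorem" cashes out as "this machine, run on some proof, outputs True".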

A person not capable of correct deductive reasoning is insane. The people usually deemed insane are those with deviant behavior, or what Caplan calls "the extreme tails of a preference distribution with high variance".

And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".

I should note that the most famous paper in 20th Century analytic philosophy, Quine's "Two Dogmas of Empiricism", is an attack on the idea of the a priori. The paper was written in 1951 and built on papers written in the previous two decades. A large proportion of contemporary philosophers agree with Quine's basic position. This doesn't stop them from doing theoretical work, just as Eliezer's disavowal of the a priori need not prevent him theorizing... (read more)

Eliezer - It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence". For one thing, no amount of mere observation will suffice to bring us to a conclusion, as Lewis Carroll's tortoise taught us. Further, it mistakes content and vehicle. When I judge that p, and subsequently infer q, the basis for my inference is simply p - the proposition itself - and not the psychological fact that I judge that p. I could infer some things from the latter fact too, of course, but that's a very different matter. (And in tu... (read more)

It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence".

If you view it as an argument, yes. The engines yield the same outputs.

Minds are a rather different matter. They are not conceptually reducible to neurons firing.

Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust.

Eliezer Yudkowsky said: "Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust."

I agree, yet this won't convince a sophisticated right-wing Christian (or Jew, or Muslim, etc.).

Who said anything about 'magic pixie dust'? I agree that the brain gives rise to (or 'powers') the mind, thanks to the laws of nature that happen to govern our universe. I may even agree with all the causal claims you want to make. But if you're going to start talking about identity, then you need to do some real philosophy.

"If you view it as an argument, yes. The engines yield the same outputs."

What does the latter have to do with rationality?

Does Eliezer really need to do some "real philosophy"? If he does not, will he miss out on the Singularity? Will AI be insufficiently friendly? I don't see any reason to think so. I say be content in utter philosophical wrongness. Shout to the heavens that our actual world is a zombie world with XYZ rather than H2O flowing in the creeks and tasting grue, all provided it has no impact on your expectations.

But if you're going to start talking about identity, then you need to do some real philosophy.

What's the difference between the brain giving rise to a mind by the laws of nature and the brain giving rise to a mind without identity by the laws of nature?

But if you're going to start talking about identity, then you need to do some real philosophy.

"Identity" is not magic. There is no abiding personal essence, just continuity of memory. A real philosopher said that, by the way.

I do think there are important unanswered questions in the philosophy of mind, but this isn't one of them. (Although one of them is "where is our thinking still contaminated by the idea of magic personal identity?", which I suspect is at the root of several apparent paradoxes.)

Tom, I think we are actually agreeing. I'm arguing that if you already know the situation is complicated you cannot just appeal to Occam's Razor, you need some reason specific to the situation about why the simple hypothesis should win.

You are proposing a reason, specific to economics, about why the complications might be washed away, making it reasonable to prefer the simpler hypothesis. My claim is that those extra reasons are essential. Occam's Razor, on its own, is useless in situations known to be complicated.

Tom McCabe, Thank you for the comment. You have started me thinking about the differences between Occam's Razor and Einstein's "Everything should be made as simple as possible, but not simpler." John

"--" should have been "Shakespeare's Fool" John

TGGP - You seem to have missed the conditional nature of my claim. I'm not forcing philosophy on anyone; just saying if you're going to do it at all, best do it well.

Nick - I never suggested there was an "abiding personal essence". (Contemporary philosophers like Derek Parfit and David Velleman have done a stellar job in revealing the conceptual confusions underlying such an idea.) In any case, it's hardly relevant. The issue here is individuation (how to count the distinct things in the world), not personal identity and persistence through time.... (read more)

Eliezer: "You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence."

Richard: "It's just fundamentally mistaken to conflate reason... (read more)

Richard: oops, I thought you meant personal identity. Ach, homonyms.

Do you think that the human bodies in a physics-only zombie world would behave identically to ours? ( = Do you think physics is causally closed?)

Nick Hay: great explanation.

Richard: sure, minds and brains can "come apart" in possible worlds other than ours (or indeed in this one, if and when someone teaches a computer to think), but I have never understood why some people seem to think that this suggests that there's anything weird about the relationship between actual minds and actual brains in the actual world.

Consider those airplanes again, but let's use a more general term like "flying machine" that isn't so tightly tied to the details of their construction. You can imagine (yes?) a world in which a Bo... (read more)

g - there's no possible world that's physically identical to ours but where the Boeings don't fly. There is a possible world that's physically identical to ours that lacks consciousness. That's the difference. It shows that physics suffices for flight but not for fully-fledged mentality. (N.B. the interesting case here is not minds without brains, but brains without minds.)

Nick Hay - Thanks for bringing this back to the key issue. In fact I do not "consider having successfully determined a conclusion from pure thought evidence that that thought is correc... (read more)

Sorry, my second sentence to NH is unclear. The psychological fact could be taken as a kind of indirect evidence, as noted in my postscript. But it is not what I take my evidence to be, when I am reasoning according to a #1-style argument. We could say the evidence of my thought [vehicle] is not the evidence in my thought [content].

Nick T. - yes, I accept the causal closure of the physical. (And thus epiphenomenalism. I discuss the epistemic consequences in my post 'Why do you think you're conscious?')

On the broader issue - to expand on my response to James above - see my post on the explanatory power of dualism.

Richard, are you saying that if in this world I attempted to move around some material to produce an artificial brain, it would not work unless I also did some psycho-manipulation of some sort? Or is the psycho-stuff bound so tightly with the material that the materially-sufficient is psycho-sufficient?

I neglected to link to this before when I mentioned anticipated experiences, which is one of my favorite posts here. I am so fond of linking to it I assumed I already had.

Richard, you have presented absolutely no evidence that there is a possible world physically identical to ours but in which we are not conscious, beyond saying that it's "conceptually possible" for minds and brains to "come apart", if we imagine a world with different laws of nature.

But it's equally conceptually possible for flying machines and aerofoils to come apart, if we imagine a world with different laws of nature, and (it appears) you don't see that as any reason to think that flying machines fly by aerofoils plus some extra brid... (read more)

I agree with Richard that we should respect the fact that philosophers have spilled a lot of ink on the consciousness question; we should read them and respond to their arguments. We should have at least one post devoted to this topic. But after doing so, I'm betting I'll still mainly agree with Eliezer.

Richard, I don't think Eliezer conflated reasoning with observing your own brain - he just suggested that simple Bayesian reasoning based on observing your own brain gets you pretty much all the conclusions you need from most other "reasoning."

Robin and Richard - I think it is possible that Eliezer did not word his statement as cleanly as he might. However, if his wording conflated categories, I am confident that with some care the exact same point can be re-worded without such conflation. There is something real and significant here that he's pointing out, and it's not going to go away simply because he was (if he was) a bit too loose in his presentation.

I think this contains one of the main points:

If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. "But," you cry, "why is the universe itself orderly?" This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"

The philosop... (read more)

The evolutionary formation of the mind is, as Eliezer points out, based on the truth, and not on justification. Mutation throws one brain after another at the problems of life, and the brains that generate true beliefs are the ones that tend to survive. No justification is involved at this stage. For example, suppose that Occam's razor is true (to understand this, translate the method called "Occam's razor" into the appropriate assertion about the world so that it can be assigned a truth value). Then brains that apply Occam's razor will tend to s... (read more)

Constant - Sure, there's something to be said for epistemic externalism. But I thought Eliezer had higher ambitions than merely distinguishing rationality and reliability? He seems to be attacking the very notion of the a priori, claiming that philosophers lazily treat it as a semantic stopsign or 'truce' (a curious claim, since many philosophers take themselves to be more or less exclusively concerned with the a priori domain, and yet have been known to disagree with one another on occasion), and dismissively joking "it makes you wonder why a thirsty h... (read more)

Richard, I would like to know what you mean by "conceptually possible" and why you think conceptual possibility has anything to do with actual possibility. I think you mean something like "I can/can't imagine X without any obvious inconsistencies". So, e.g., you can imagine, or think you can imagine, a world physically identical to ours in which people have no experiences; but you can't imagine, or think you can't imagine, a world physically identical to ours in which jumbo jets don't fly.

But whether something is "conceptually poss... (read more)

Oh, gosh, that was rather long. Sorry.

I liked it. I really don't get Robin's desire for short comments. This is the only blog where I've seen that restriction. Is he worried about the high cost of bandwidth? For text?

I think I can forgive it this once.

Zombies and similar creatures are "conceptually possible" when someone doesn't understand the connection between lower and higher levels of organization, so that the stored propositions about the lower and higher levels of organization are mentally unconnected and can be switched on or off independently. This is a fact about the person's state of mind, not a fact about the phenomenon in question.

Juno_Watt (8y): I kind of see the point about logical possibility being what you get if you switch off your knowledge of how the world works, and just run off a minimal axiom set. But I don't know what the connection between that particular pair of lower and higher levels of organisation is, i.e. the connection between consciousness and mind. I don't think anyone else does. Zombies are logically conceivable for everybody. But conceivability is not about the world, as you say.

g, beautifully said.

Hopefully Anonymous had a post about zombies here, in which I made fun of him.

Anticipating experience may be a useful constraint for science, but that is not all there is to know.
If I was going to dispute this I would have to specify what it means to "know" and get into one of those goofy epistemology discussions I derided here. Philosophy is the required method to argue against philosophy, oh bother. Good thing reality doesn't revolve around dispute.

g - No, by 'conceptually possible' I mean ideally conceptually possible, i.e. a priori coherent, or free of internal contradiction. (Feel free to substitute 'logical possibility' if you are more familiar with that term.) Contingent failures of imagination on our part don't count. So it's open to you to argue that zombies aren't conceptually possible after all, i.e. that further reflection would reveal a hidden contradiction in the concept. But there seems little reason, besides a dogmatic prior commitment to materialism, to think such a thing. Most (but ad... (read more)

I have future posts planned that will shed light on this topic, but not today.

Richard, I'm unconvinced that you have any way of telling whether the existence of zombies is ideally conceptually possible; the fact that you seem to be able to imagine a zombie world certainly isn't good evidence that it's "free of internal contradiction". (Consider, again, the Riemann hypothesis.)

I don't have anything like a proof that the idea of zombies is in fact incoherent. But if you're right that its coherence would entail the existence of these mysterious psychophysical bridging laws and all the rest of your epiphenomenal apparatus, the... (read more)

No, by 'conceptually possible' I mean ideally conceptually possible, i.e. a priori coherent, or free of internal contradiction. (Feel free to substitute 'logical possibility' if you are more familiar with that term.)

Before we discovered that water was H2O, our concept of water did not include that it was H2O. Since our concept did not include that, then surely it would not have been incoherent, at the time, to say that water is not H2O (imagine that this occurs during the period after the discovery of H and O and before the discovery of the composition o... (read more)

Well said, Constant.

The logic for why zombies can't exist, very briefly, goes like this:

You see a bright red light.

The mysterious redness-quality of the red light seems inexplicable in merely material terms.

You think, within your stream of consciousness, "The redness of this light seems inexplicable in merely material terms."

You say out loud, "The redness of this light seems inexplicable in merely material terms."

Your lips moved.

Whatever caused your lips to move must lie within the realm of physics because it had a physical effect.

If we sum up all the forces acting on your lips - gravity, electromagnetism, etc. - we will necessarily include the proximal cause of your lips moving. This is because when we sum up the forces acting on your lips, we can tell where your lips will go. In particular, we can tell that they'll move. So if we delete anything that isn't on the force list, your lips go to exactly the same place.

As it so happens, the proximal cause of your lips moving is nervous instructions sent from your motor cortex and cerebellum. It is not possible to imagine a world in which your lips move and all the laws of physics are the same, but there are no n... (read more)

(Let me just add that the first chapter of my thesis addresses Constant's concerns, and my previously linked post 'why do you think you're conscious?' speaks to Eliezer's worries about epiphenomenalism -- what is sometimes called 'the paradox of phenomenal judgment.' Some general advice: philosophers aren't idiots, so it's rarely warranted to attribute their disagreement to a mere "failure to realize" some obvious fact.)

Richard,

I don't know what you mean about "idiots". My arguments are not intended as insults. In fact I fully expect you to have already dealt with them. However, I have little choice but to answer the particular point you raised at a particular time, because if I try to jump ahead and anticipate all your answers and then your answers to my answers to your answers, the result will probably be an incredibly confusing monologue that is more likely than not to simply have mis-anticipated your actual responses. Aside from that there is the matter of comment length to consider.

Richard, I don't actually believe philosophers are idiots because I've seen their standardized test scores. I do think they could more productively use their intellects though. If I were to ignore IQ/general intelligence and simply try to judge whether one philosopher does better philosophizing than another, would I be able to do it without becoming a philosopher myself and judging their arguments? I can determine that rocket physicists are good at what they do because they successfully send rockets into the air, I know brain surgeons are because the brains ... (read more)

SilasBarta (11y): Well put, TGGP, well put.

Occam's razor states: the explanation of any phenomenon should make use of as few assumptions as possible, eliminating those that make no difference in the observable predictions of the explanatory hypothesis.
That does not say the universe is simple or explainable by material means or any such thing.
In fact those assumptions violate Occam's razor! Saying the universe is simple or explainable by material means adds unneeded assumptions.
In order to explain the phenomena of physics we need material, material forces, chance, and freedom. The chance comes... (read more)

Richard is quite right to point out that philosophers of mind are well aware of the counter arguments that Constant and Eliezer offer. And he is right to insist this is a subtle question to which a few quick comments do not do justice. There are however many philosophers who agree with Constant and Eliezer. See for example the October Philosophical Quarterly article on Anti-Zombies.

Not to be too uncharitable, but I'd say the arguments of us material monists are simple, and it's only the flaws in the complex dualist arguments that are subtle.

PS: I've got some back copies of the Journal of Consciousness Studies on my bookshelf, so don't necessarily assume that I'm unaware of the big philosophical mess here.

TGGP, your question illustrates nicely my explanation for why more history than futurism.

This book review claims that the majority position in philosophy rejects the dualism Constant and Eliezer object to - this is most certainly not a dispute between philosophers and scientists.

Sorry, have you argued someplace else for either reduction or eliminative materialism?

I have a series of posts planned on that in due time.

In a comment on "How to Convince Me That 2 + 2 = 3", I pointed out that the study of necessary truths is not the same as the possession of necessary truths (credit to David Deutsch for that important insight). Unfortunately, the discussion here seems to have gotten hung up on a philosophical formulation that blurs that important distinction, a priori. Eliezer's quoted paragraph illustrates the problem:

The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia
... (read more)

We use Occam's Razor because it has tended to work better than Occam's Butterknife.

What's so complicated about that?

I read no comments

"You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam's Razor."

It seems to me like it is more of an appeal to induction. (Granted the problem Hume raised about induction, but also granted Hume's [and my own] defection to practicality.)

To distinguish the word "arbitrary" from "random", I think of an arbitrator—i.e., an outside judge chooses something. (Maybe this results in a uniform prior for me, if'n I don't know what she'll do. Or maybe I'm a mathematician and I choose to be ready for any choice that arbitrator might make.)

When I'm teaching linear algebra and explain arbitrary parameters to my students, I use exactly this metaphor. How many times does someone else have to come in and arbitrate the value of other variables, before you can tell the questioner what the answer is?

Could you not argue Occam's Razor from the conjunction fallacy? The more components that are required to be true, the less likely they are all simultaneously true. Propositions with fewer components are therefore more likely - or does that not follow?

Regex (5y): I was wondering this myself. I roughly knew of Solomonoff Induction as related... but apparently that is equivalent! The next thing my memory turned up was the "Minimum Description Length" principle, which as it turns out... is also a version of Occam's Razor. Funny how that works. If we look at the original question again... "If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?" If I understand the conjunction fallacy correctly, it is strictly true that adding more propositions cannot increase the probability. That is to say, P(A & B) <= P(B)... and P(A & B) <= P(A). So the argument could be made that B might have probability one and therefore would be an equally probable hypothesis with its addition. So if you start with A, and B has probability less than one, it will strictly lower the probability to include it. Thus as far as I can tell, Occam's Razor holds except where additional propositions have probability one. ...But if they have probability one, wouldn't they have to be axiomatically identical to just having proposition A? Or would it perhaps have to be probability one given A? I honestly don't know enough here, but I think the basic idea stands?
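The conjunction rule invoked here can be checked against any concrete joint distribution; a minimal Python sketch (the probability values are invented for illustration):

```python
# Invented joint distribution over two binary propositions A and B.
# Keys are (a, b) truth-value pairs; the probabilities sum to 1.
joint = {
    (True, True): 0.25,
    (True, False): 0.25,
    (False, True): 0.5,
    (False, False): 0.0,
}

p_a = sum(p for (a, _), p in joint.items() if a)   # P(A)     = 0.5
p_b = sum(p for (_, b), p in joint.items() if b)   # P(B)     = 0.75
p_ab = joint[(True, True)]                         # P(A & B) = 0.25

# The conjunction rule: a conjunction is never more probable
# than either of its conjuncts.
assert p_ab <= p_a and p_ab <= p_b
```

The inequality holds for any distribution, since the (A & B) cell is one of the cells summed to get P(A) and one of those summed to get P(B).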
gjm (5y): As Richard Kennaway has said [http://lesswrong.com/lw/k2/a_priori/crom], this only deals with cases where one hypothesis is a conjunction including another (e.g., "There is a god" and "There is a god called Bill"), but most cases in which we actually want to apply OR aren't like that; they're more like "geocentric astronomy with circular orbits plus epicycles" and "heliocentric astronomy with elliptical orbits".
Regex (5y): Ah. Yeah that does clear things up a bit. What would a solution look like, then? To show the complexity of an idea impacts its probability... but unless you use the historic argument of 'it's looked that way in the past for stuff like this' I don't see any way of even approaching that. What if we imagine the space of hypotheses? A simpler hypothesis would be a larger circle because there may be more specific rules that act in accordance with it. 'The strength of a hypothesis is not what it can explain, but what it fails to account for', so a complicated prediction should occupy a very tiny region and therefore have a tiny probability. Or... is that just another version of Solomonoff Induction, and so the same thing?
hairyfigment (5y): Near as I can tell, you're describing the same conjunction rule from your previous comment! This conjunction rule says that a claim like 'The laws of physics always hold,' has less probability than, 'The laws of physics hold up until September 25, 2015 (whether or not they continue to hold after).' Solomonoff Induction is an attempt to find a rule that says, 'OK, but the first claim accounts for nearly all of the probability assigned to the second claim.'
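The Solomonoff-style reweighting described above can be caricatured with a toy simplicity prior; a minimal sketch (the hypothesis labels and "description lengths" are invented, and this is only a cartoon of Solomonoff induction, not the real thing):

```python
# Each hypothesis gets prior weight 2 ** (-description_length).
# The lengths below are invented stand-ins for real program
# lengths under some fixed encoding.
lengths = {
    "laws always hold": 16,
    "laws hold until 2015-09-25, then anything goes": 41,
}
weights = {h: 2.0 ** -n for h, n in lengths.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

# The short hypothesis soaks up nearly all the probability mass.
assert posterior["laws always hold"] > 0.999
```

Because the weight falls off exponentially in description length, the simple hypothesis dominates by a factor of 2^25 here, regardless of the exact lengths chosen.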
Regex (5y): Hrm, yeah. I think I need more tools and experience to be able to think about this properly.
RichardKennaway (5y): Propositions with more parts are not necessarily merely the conjunction of those parts. "A or B" and "A and B" may both be the same amount of complexity, by whatever measure, more than A.

2+2=4 is part of the definition of +. The question isn't why we think 2+2=4. The question is why we're so obsessed with addition. 2 << 2 = 8, but you don't hear people talking about how 2 and 2 makes eight.

You simply can't do anything without something being a priori. Is the universe orderly? Maybe it looks orderly by coincidence. The probability that it looks this way given that it's random is simple enough, but we also need to know the probability that it looks this way and it's not orderly. We need some a priori probability that it isn't orderly, ... (read more)

A statement can be true a priori in the sense that no sensory evidence is needed to infer it: the principle of non-contradiction, for example.

A priori, translated very roughly but with respect to the spirit of the phrase, means "before experience". It is used with a posteriori which means "after experience". Something known a priori is equivalent to your prior probability; something known a posteriori is equivalent to the posterior probability. That is, when you are concerned with an event, before any experience of the event, your knowledge is a priori.

This is, of course, a slippery slope: that prior is simply the posterior of something else.

Some philosophers have tried to us... (read more)

(More necromancy!)

I thought Occam's Razor was justified by the fact that every new proposition involved necessarily increased the number of ways in which the entire explanation could fail. Then you require evidence for yet another belief, and since you cannot be 100% accurate in any of your propositions, your accuracy continually decreases as well.

A Priori has always just seemed to me like another way to describe what we call "assumption" in classical logic. You can't deduce anything in classical logic without starting from certain assumptions and seeing what you can deduce from them, and one of the strengths of classical logic is that it forces you to actually list your assumptions up front, so someone else can say "I agree with your reasoning, but I think your assumption "B" is invalid".

Trying to take assumptions apart, see if they are valid, see if they can either ... (read more)

How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

No no no. The difference between a priori and a posteriori is where the justification lies. You may be counting your fingers when you count 1 + 1. It may be that you won't be able to figure out the answer if someone cut off your fingers. In fa... (read more)

Juno_Watt (7y): If you define evidence as a system getting information from outside, then observing your own brain is not evidence. Inferential a priori truth is what you can (but don't have to) get in a closed system. A posteriori truth is what can only be obtained in an open system, one with sensors. And non-inferential, innate knowledge remains a problem.

If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".

Or the word "intuition"... (read more)

[anonymous] (6y):

But of course there's such a thing as an a priori statement! Running a computation forwards without any uncertainty in it yields a result: this is "a priori" in the sense that, since it operates only on abstract mental data with no reference to empirical reality, it requires no experience to "get right" (rather, experience is required to locate a useful computation, out of all possible computations). 2+2 really does equal 4, every time, all the time, because any computation isomorphic to 2+2 must always yield an answer isomorphic to 4.

When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns. In principle, we could observe, experientially, the exact same material events as they occurred within someone else's brain.

"the exact same material events"?

He has made a prediction of observable events that I predict is very very very wrong.

Your brain is not my brain. Not the exact same size, shape, or connectivity. Not performing the exact same ocean of processing at any one time that one might also be adding 1+1 to equal 2. For ... (read more)

I must respectfully disagree with your interpretation of Kant's use of the term "a priori knowledge." Kant says "While all knowledge begins with experience, it does not all arise out of experience." Hence, Kant never himself says that we can have knowledge without ever having had experience; that is to say, the shorthand explanation of Kant's philosophy, namely that "a priori truths are truths that can be attained without experience", does a poor job of representing the nuance of his epistemological system. Again, he says "... (read more)

When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

I mean, yeah? You can still do that in your armchair, without looking at anything outside of yourself. Mathematical facts are indeed "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe," if you modify the statement a little to say "anywhere else existent" in order to acknowledge that the operation of tho... (read more)

Modus Ponens can be justified by truth tables. We are given A, and we are given A → B. Combining these into A ∧ (A → B) yields a truth table which is only true if both A and B are true.

Of course, one can always reject the notion of truth tables and then we are back to square one...
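For concreteness, the truth-table argument can be brute-forced over all assignments; a minimal Python sketch:

```python
from itertools import product

def implies(p, q):
    # material conditional: p -> q
    return (not p) or q

rows = list(product([False, True], repeat=2))

# Modus ponens is valid iff (A and (A -> B)) -> B holds in every row.
tautology = all(implies(a and implies(a, b), b) for a, b in rows)
print(tautology)  # True

# The combined premise A and (A -> B) is true in exactly one row:
# the one where A and B are both true, as the comment says.
premise_rows = [(a, b) for a, b in rows if a and implies(a, b)]
print(premise_rows)  # [(True, True)]
```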

------

As for Occam's Razor - I used to think of it in terms of avoiding overfitting. More complex explanations have more degrees of freedom which makes it easier for them to explain all the datapoin... (read more)
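The overfitting intuition can be shown with a toy example; a minimal sketch (the data points are invented, roughly following y = x): a curve with enough degrees of freedom to pass exactly through every training point extrapolates worse than a simple straight line.

```python
train = [(0, 0.1), (1, 0.9), (2, 2.2), (3, 2.8)]  # noisy samples of y ~ x
test_x, test_y = 4, 4.05                          # held-out point

def lagrange(points, x):
    # The "complex" hypothesis: the unique degree-(n-1) polynomial
    # through all n points, i.e. maximal degrees of freedom.
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def line(points, x):
    # The "simple" hypothesis: a line through the first and last points.
    (x0, y0), (x1, y1) = points[0], points[-1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

err_complex = abs(lagrange(train, test_x) - test_y)  # ~2.55
err_simple = abs(line(train, test_x) - test_y)       # ~0.35
assert err_simple < err_complex
```

Both hypotheses "fit the observed datapoints" (the interpolating polynomial perfectly so), but the extra degrees of freedom chase the noise and fail off the training set.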

TAG (2mo): Not at all. A basically false explanation, such as a geocentric model of the solar system, can predict as accurately as a basically true model, so long as you are allowed to add endless numbers of epicycles. That's one of the basic motivations for using Occam's Razor. If predictive power and ontological content were identical, there would be no need for it.
Idan Arye (2mo): You'll need more than just epicycles to make the geocentric model yield accurate predictions. For example, what will happen if we launch a rocket straight up, and observe the Earth from that rocket? According to the geocentric model, the Earth does not spin - it is the Sun that revolves around it. So if we launch a rocket straight up, it should not observe the Earth rotating. With our modern model, or even with the heliocentric model, we would predict that the rocket sees the Earth rotating, because the ratio between the perpendicular velocity the rocket started with and its distance from the Earth gets lower and lower as the rocket gets farther away. So that's one different prediction. Now, say that we modify the geocentric model so that the Earth is still in the center, but also rotates. What is the angular velocity of its rotation? If we calculate it based on the observations from our rocket, we will come to the conclusion that the Sun's rotational velocity is extremely low. So low, in fact, that it should not be able to maintain its centrifugal force - and should have been pulled into the Earth a long time ago. So you'd have to change the rules of gravity too. And then the rules of relativity. And the description will be infinite - because you'll need to match not only the known epicycles and the existing scenarios, but any possible setting and formation that can come to mind. And because it is infinite, it can never actually be used to predict things before they are observed - because calculating these predictions will take infinite time. We can only ever use a finite sub-representation of it, which will not yield accurate predictions for all cases. But still - if we could, it would be no different from a correct model, just like the sum of an infinite Taylor series is the same as the infinitely differentiable function it derives from. Even if the Taylor series no longer represents the intuition behind that function.
Idan Arye (2mo): Actually... if you squint a bit there is a compact way to represent the fitted geocentric model:

* The Earth is at the center.
* There is a mysterious force, originating from the Earth, that pushes all objects away. Its strength is what you would expect from the Earth's centrifugal force [https://en.wikipedia.org/wiki/Centrifugal_force] according to the modern model.
* All the objects in the universe, other than the Earth, are accelerated in the direction opposite to the Earth's acceleration in the modern model, with the same magnitude.

With relativity in mind these rules may not be enough, but let's ignore that for the sake of the argument. At this point, I'll ask the neogeocentrists (pun intended): wouldn't it be simpler and easier to just use the modern model for calculating my predictions? "But then you'll get wrong results!", they'll say. How so? The centrifugal force from assuming the Earth rotates mimics your mysterious force that pushes all things away, and the acceleration of the Earth mimics the acceleration your model adds to all other celestial bodies. So the predictions for the relative position and velocity of each pair of objects should be identical in both models. "Yea, sure, but you'd still get wrong results - the Earth will not be in the center." So... what? What difference does being in the center make? If it makes a difference, we should test for that difference and support or disprove your model! "No, this is not a difference you can test for, but it makes us special!" Special... how? "There are countless planets in the universe, and infinite positions to put the center. What is the probability that we are the ones in the center? That we are the only planet that doesn't move? That these mysterious unexplainable forces make sure we are kept in the center of the universe?" Pretty damn high, I'd say, considering how you picked the origin to be our position, and decided to use our velocity for calculating the relevant velocities...
TAG (2mo): It takes more than literal epicycles, but there are any number of ways of complicating a theory to meet the facts. Of course it is different. Heliocentrism says something different about reality than geocentrism.
Idan Arye (2mo): Different... how? In what meaningful ways is it different?
Teerth Aloke (2mo): The debate is apparently about the meaning of 'different'. Someone might define different as 'predicting different observations' and another as 'different ontological content'. Suppose there is a box in front of you which contains either a $20 or a $100 note; however, you have very strong reasons to believe that the content of the box shall be unknown to you, forever. Is the question "Is there a $20 or $100 note in the box?" meaningful? Is the belief in the presence of a $20 note different from the belief in the presence of a $100 note? That is essentially similar to the problem of identical models.
Idan Arye (2mo): If the content of the box is unknown forever, that means that it doesn't matter what's inside it, because we can't get it out.
TAG (2mo): Whether something is empirically unknowable forever is itself unknowable... it's an acute form of the problem of induction. But that isn't quite the same as saying that statements about what's inside are meaningless. A statement can be meaningful without mattering. And you have to be able to interpret the meaning, in the ordinary sense, in order to be able to notice that it doesn't matter.
Idan Arye (2mo): If a universe where the statement is true is indistinguishable from a universe where the statement is false, then the statement is meaningless. And if the set of universes where statement A is true is identical to the set of universes where statement B is true, then statement A and statement B have the same meaning, whether or not you can "algebraically" convert one to the other.
TAG (2mo): They're not, because A and B assert different things.
Idan Arye (2mo): If A and B assert different things, we can test for these differences. Maybe not with current technology, but in principle. They yield different predictions and are therefore different beliefs.
TAG (2mo): You keep assuming verificationism in order to prove verificationism. They assert different things because they mean different things, because the dictionary meanings are different. In the thought experiment we are considering, the contents of the box can never be tested. Nonetheless $20 and $100 mean different things.
Idan Arye (2mo): I'm not sure you realize how strong a statement "the contents of the box can never be tested" is. It means even if we crack open the box we won't be able to read the writing on the bill. It means that even if we somehow tracked all the $20 and all the $100 bills that were ever printed, their current location, and whether or not they were destroyed, we won't be able to find one which is missing and deduce that it is inside the box. It means that even if we had a powerful atom-level scanner that can accurately map all the atoms in a given volume and put the box inside it, it won't be able to detect if the atoms are arranged like a $20 bill or like a $100 bill. It means that even if a superintelligent AI capable of time-reversal calculations tried to simulate a time reversal, it wouldn't be able to determine the bill's value. It means that the amount printed on that bill has no effect on the universe, and was never affected by the universe. Can you think of a scenario where that happens, but the value of the dollar bill is still meaningful? Because I can easily describe a scenario where it isn't: dollar bills were originally "promises" for gold. They were signed by the Treasurer and the Secretary of the Treasury because the Treasury is the one responsible for fulfilling that promise. Even after the gold standard was abandoned, the principle that the Treasury is the one casting the value into the dollar bills remains. This is why the bills are still signed by the Treasury's representatives. So, the scenario I have in mind is that the bill inside the box is a special bill - instead of a fixed amount, it says the Treasurer will decide if it is worth 20 or 100 dollars. The bill is still signed by the Treasurer and the Secretary of the Treasury, and thus has the same authority as regular bills. And, in order to fulfill the condition that the value of the bill is never known - the Treasurer is committed to never decide the worth of that bill. Is it still meaningful to ask whether it is a $20 or a $100 bill?
TAG (2mo): I can understand that your revised scenario is unverifiable, by understanding the words you wrote, i.e. by grasping their meaning. As usual, the claim that some things are unverifiable is parasitic on the existence of a kind of meaning that has nothing to do with verifiability.
Idan Arye (2mo): The Quotation is not the Referent [https://www.lesswrong.com/s/p3TndjYbdYaiWwm9x/p/np3tP49caG4uFLRbS]. Just because the text describing them is different doesn't mean the assertions themselves are different. Eliezer identified evolution with the blind idiot god Azathoth [https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god]. Does this make evolution a religious Lovecraftian concept? Scott Alexander identified the Canaanite god Moloch with the principle that forces you to sacrifice your values for the competition. Does this make that principle an actual god? Should we pray to it? I'd argue not. Even though Eliezer and Scott brought the gods in for theatrical and rhetorical impact, evolution is the same old evolution and competition is the same old competition. Describing an idea differently does not automatically make it a different idea - just like describing f(x) = (x+1)^2 as g(x) = x^2 + 2x + 1 does not make it a different function. In the case of mathematical functions we have a simple equivalence law: f ≡ g ⟺ ∀x: f(x) = g(x). I'd argue we can have a similar equivalence law for beliefs - A ≡ B ⟺ ∀X: P(X|A) = P(X|B), where A and B are beliefs and X is an observation. This condition is obviously necessary: if A ≡ B even though ∃Y: P(Y|A) ≠ P(Y|B), and we find that P(Y) = P(Y|A), that would support A and therefore also B (because they are equivalent) - which means an observation that does not match a belief's predictions would support it. Is it sufficient? My argument for its sufficiency is not as analytical as the one for its necessity, so this may be the weak point of my claim, but here it goes: if A ≢ B even though they give the same predictions, then something other than the state and laws of the universe is deciding whether a belief is true or false (actually - how accurate it is). This undermines the core idea, shared by both science and Bayesianism, that beliefs should be determined by the state and laws of the universe.
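The function-equivalence claim above can be checked mechanically. A minimal sketch, using the f and g from the comment (only a finite range can actually be spot-checked):

```python
# Two syntactically different definitions of the same function:
f = lambda x: (x + 1) ** 2
g = lambda x: x ** 2 + 2 * x + 1

# Extensional equivalence: f ≡ g iff f(x) == g(x) for all x.
# A program can only verify this over a finite sample of inputs.
assert all(f(x) == g(x) for x in range(-1000, 1000))
```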
TAG (2mo): ...because exact synonymy is possible. Exact synonymy is also rare, and it gets less probable the longer the text is. You need to be clear whether you are claiming that two theories are the same because their empirical content is the same, or because their semantic content is the same. Those are different... computationally, they would take a different amount of time to execute. Pure maths is exceptional in its lack of semantics. F = ma and P = IV are identical mathematically, but have different semantics in physics. If two theories are identical empirically and ontologically, then some mysterious third thing would be needed to explain any difference. But that is not what we are talking about. What we are discussing is your claim that empirical difference is the only possible difference - equivalently, that the empirical content of a theory is all its content. Then the answer to "what further difference could there be" is "what the theories say about reality".
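TAG's F = ma / P = IV point has a direct programming analogue. A minimal sketch (the function names are mine, added for illustration - they are not from the thread):

```python
def force(mass, acceleration):
    """Newton's second law: F = m * a."""
    return mass * acceleration

def power(current, voltage):
    """Electrical power: P = I * V."""
    return current * voltage

# Mathematically both are the same operation - a product - and are
# computationally indistinguishable; the difference lies entirely in
# what the arguments and the result are taken to mean about the world.
assert force(2, 3) == power(2, 3) == 6
```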
TAG (2mo): Semantically and ontologically. The dictionary meanings of the words "heliocentric" and "geocentric" are opposites, so they assert different things about the territory. Note that this is the default hypothesis: whatever I just called "dictionary meaning" is what is usually called "meaning" simpliciter. Attempts to resist this conclusion are based on putting forward non-standard definitions of "meaning", which need to be argued for, not just assumed.
Idan Arye (2mo): But it is not the dictionary definition of the geocentric model we are talking about - we have twisted it to have the exact same predictions as the modern astronomical model. So it no longer asserts the same things about the territory as the original geocentric model - its assertions are now identical to the modern model's. So why should it still hold the same meaning as the original geocentric model?
TAG (2mo): Dictionaries don't define complex scientific theories. Our complicated, bad, wrong, neo-geocentric theory is still a geocentric theory. Therefore it makes different assertions about the territory than heliocentrism.
Idan Arye (2mo): So if I copied the encyclopedia definition of the heliocentric model, and changed the title to "geocentric model", it would be a "bad, wrong, neo-geocentric theory [that] is still a geocentric theory"?
TAG (2mo): It would be a theory that didn't work, because you only changed one thing.
Idan Arye (2mo): I'm not sure I follow - what do you mean by "didn't work"? Shouldn't it work the same as the heliocentric theory, seeing how every detail in its description is identical to the heliocentric model?
Idan Arye (2mo): OK, I continued reading, and in Decoherence is Simple [https://www.lesswrong.com/s/Kqs6GR7F5xziuSyGZ/p/Atu4teGvob5vKvEAF] Eliezer makes a good case for Occam's Razor as more than just a useful tool. In my own words (i.e., how I understand it): more complicated explanations have a higher burden of proof and therefore require more bits of evidence. If they give the same predictions as the simpler explanations, then each bit of evidence counts for both the complicated and the simple beliefs - but the simpler beliefs had higher prior probabilities, so after adding the evidence their posterior probabilities remain higher. So, if a simple belief A started with -10 decibels [https://www.lesswrong.com/posts/GDLP8MjvyhK3wx6hc/the-quick-bayes-table] and a complicated belief B started with -20 decibels, and we get 15 decibels of evidence supporting both, the posterior credibilities of the beliefs are 5 and -5 - so we should favor A. Even if we get another 10 decibels of evidence, so that the credibility of B becomes 5, the credibility of A is now 15, so we should still favor it. The only way we can come to favor B is to get enough evidence that supports B but not A. Of course, this doesn't mean that A is true and B is false - only that we assign a higher probability to A. So, if we go back to astronomy: our neo-geocentric model has a higher burden of proof than the modern model, because it contains additional mysterious forces. We prove gravity and relativity and work out how centrifugal forces behave, and that's (more or less) enough for the modern model; the exact same evidence also supports the neo-geocentric model, but it is not enough for it, because we also need evidence for the new forces we came up with. Do note, though, that the claim that "there is no mysterious force" is simpler than "there is a mysterious force" is taken for granted here...
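The decibel arithmetic in this comment can be written out directly. A minimal sketch, assuming the convention from the linked Quick Bayes Table post that credibility in decibels is 10·log10 of the odds (the helper function is mine, for illustration):

```python
def db_to_probability(db):
    """Convert credibility in decibels (10 * log10 of the odds) to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior_A = -10   # simple belief A: lower burden of proof
prior_B = -20   # complicated belief B: higher burden of proof
evidence = 15   # decibels of evidence supporting both equally

# In log-odds, updating on shared evidence is just addition:
posterior_A = prior_A + evidence   # 5 dB: now more likely true than not
posterior_B = prior_B + evidence   # -5 dB: still more likely false
assert posterior_A > posterior_B   # A keeps its lead under shared evidence
```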
TAG (2mo): If you take a heliocentric theory and substitute "geocentric" for "heliocentric", you get a theory that doesn't work, in the sense of making correct predictions. You know this, because in previous comments you have already recognised the need for almost everything else to be changed in a geocentric theory in order to make it empirically equivalent to a heliocentric theory. What does "true" mean when you use it? A geocentric theory can match any observation, provided you complicate it endlessly. This discussion is about your claim that two theories are the same iff their empirical predictions are the same. But if that is the case, why does complexity matter? EY is a realist and a correspondence theorist. He thinks that "true" means "corresponds to reality", and he thinks that complexity matters because, all other things being equal, a more complex theory is less likely to correspond than a simpler one. So his support of Occam's Razor, his belief in correspondence-truth, and his realism all hang together. But you are arguing against realism, in that you are arguing that theories have no content beyond their empirical content, i.e. their predictive power. You are denying that they have any semantic (non-empirical) content, and, as an implication of that, that they "mean" or "say" anything about the territory. So why would you care that one theory is more complex than another, so long as its predictions are accurate?
Idan Arye (2mo): I only changed the title; I didn't change anything the theory says. So its predictions are still the same as the heliocentric model's. The semantics are still very important - as a compact representation of the predictions. The predictions are infinite: the belief has to give a prediction for every possible scenario, and scenario-space is infinite. Even if the belief is only relevant for a finite subset of scenarios, it would still have to say "I don't care about this scenario" an infinite number of times. Actually, it would make more sense to talk about belief systems than individual beliefs, where a belief system is simply the probability function P. But we can still talk about single beliefs if we remember that they need to be connected to a belief system in order to give predictions, and that when we compare two competing beliefs we are actually comparing two belief systems whose only difference is that one has belief A and the other has belief B. Human minds, being finite, cannot contain infinite representations - we need finite representations for our beliefs. And that's where the semantics come in: they are compact rules that can be used to generate a prediction for any given scenario. They are also important because the number of predictions we can test is finite. So even if we could comprehend the infinite prediction field over scenario-space, we wouldn't be able to confirm a belief based on a finite number of experiments. Also, with that kind of representation, we couldn't even come up with the full representation of the belief. Consider a limited scenario-space with just three scenarios X, Y and Z. We know what happened in X and Y, and write a belief based on that. But what would that belief say about Z? If the belief is represented as just its predictions, without connections between distinct predictions, how can we fill up the predictions table? The semantics help us with that, because they have fewer degrees of freedom.
With N degrees of freedom, a finite number of observations can pin the belief down, and the semantics then generate its predictions for every remaining scenario.
TAG (2mo): Maybe you do, but it's my thought experiment! That isn't what you need to show. You need to show that the semantics have no ontological implications - that they say nothing about the territory.
Idan Arye (2mo): Actually, what I need to show is that the semantics say nothing extra about the territory that is meaningful. My argument is that the predictions are the canonical representation of the belief, so it's fine if the semantics say things about the territory that the predictions can't say, as long as everything they say that does not affect the predictions is meaningless - at least, meaningless in the territory. The semantics of the theory of gravity say that the force that pulls objects together over long range, based on their mass, is called "gravity". If you call that force "travigy" instead, it will cause no difference in the predictions. This is because the name of the force is a property of the map, not the territory - if it were meaningful in the territory, it should have had an impact on the predictions. And I claim that the "center of the universe" is similar - it has no meaning in the territory. The universe has no "center": you can think of the "center of mass" or the "center of bounding volume" of a group of objects, but there is no single point you can naturally call "the center". There can be good or bad choices for the center, but not right or wrong choices - the center is a property of the map, not the territory. If it had any effect at all on the territory, it should have somehow affected the predictions.
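The renaming point - that a label change is a change to the map only - has a direct programming analogue. A minimal sketch (the formula is standard Newtonian gravitation; the alias name "travigy" is from the comment):

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravity(m1, m2, r):
    """Newtonian attraction between two point masses a distance r apart."""
    return G * m1 * m2 / r ** 2

# Renaming the force changes only the label on the map:
travigy = gravity

# Every "prediction" is unchanged, e.g. the Earth-Moon attraction:
assert travigy(5.97e24, 7.35e22, 3.84e8) == gravity(5.97e24, 7.35e22, 3.84e8)
```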
TAG (2mo): 1. How can you say something, yet say something meaningless? 2. What does not saying anything (meaningful) about the territory buy you? What's the advantage? Realists are realists because they place terminal value on knowing what the territory is, above and beyond making predictions. They can say what the advantage is... to them. If you don't personally value knowing what the territory is, that need not apply to others. "Travigy" means nothing, or it means gravity. Either way, it doesn't affect my argument. You don't seem to understand what semantics is. It's not just a matter of spelling changes or textual changes. A semantic change doesn't mean that two strings fail strcmp(); it means that terms have been substituted with meaningful terms that mean something different. "There is a centre of the universe" is considered false in modern cosmology. So there is no real thing corresponding to the meaning of the string "centre of the universe". Which is to say that the string "centre of the universe" has a meaning, unlike the string "flibble na dar wobble". The territory can be different ways that produce the same predictions.

TAG (2mo), quoting the post:

> Indeed, it seems there is no way to justify Occam's Razor except by appealing to Occam's Razor, making this argument unlikely to convince any judge who does not already accept Occam's Razor.

That's very much not proven. There are multiple arguments for Occam's Razor (see the Wikipedia page), most or all of which aren't circular.