Followup to: Logical Pinpointing, Causal Reference

Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

- Death, in Hogfather by Terry Pratchett

Meditation: So far we've talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical validity by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?

... 
... 
...

Suppose that I pointed at a couple of piles of apples on a table, a pile of two apples and a pile of three apples.

And lo, I said:  "If we took the number of apples in each pile, and multiplied those numbers together, we'd get six."

Nowhere in the physical universe is that 'six' written - there's nowhere in the laws of physics where you'll find a floating six. Even on the table itself there's only five apples, and apples aren't fundamental. Or to put it another way:

Take the apples and grind them down to the finest powder and sieve them through the finest sieve and then show me one atom of sixness, one molecule of multiplication.

Nor can the statement be true as a matter of pure math, comparing to some Platonic six within a mathematical model, because we could physically take one apple off the table and make the statement false, and you can't do that with math.

This question doesn't feel like it should be very hard.  And indeed the answer is not very difficult, but it is worth spelling out; because cases like "justice" or "mercy" will turn out to proceed in a similar fashion.

Navigating to the six requires a mixture of physical and logical reference.  This case begins with a physical reference, when we navigate to the physical apples on the table by talking about the cause of our apple-seeing experiences:

Next we have to call the stuff on the table 'apples'.  But how, oh how can we do this, when grinding the universe and running it through a sieve will reveal not a single particle of appleness?

This part was covered at some length in the Reductionism sequence.  Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider.  Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

We also use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC.  A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.  (Or a quantum field, really; but you get the idea.)

So is the 747 made of something other than quarks?  And is the statement "this 747 has wings" meaningless or false?  No, we're just modeling the 747 with representational elements that do not have a one-to-one correspondence with individual quarks.

Similarly with apples.  To compare a mental image of high-level apple-objects to physical reality, for it to be true under a correspondence theory of truth, doesn't require that apples be fundamental in physical law.  A single discrete element of fundamental physics is not the only thing that a statement can ever be compared-to.  We just need truth conditions that categorize the low-level states of the universe, so that different low-level physical states are inside or outside the mental image of "some apples on the table" or alternatively "a kitten on the table".

Now we can draw a correspondence from our image of discrete high-level apple objects, to reality.

Next we need to count the apple-objects in each pile, using some procedure along the lines of going from apple to apple, marking those already counted and not counting them a second time, and continuing until all the apples in each heap have been counted.  And then, having counted two numbers, we'll multiply them together.  You can imagine this as taking the physical state of the universe (or a high-level representation of it) and running it through a series of functions leading to a final output:
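As a toy sketch of that "series of functions" (the pile contents and function names here are illustrative, not anything from the post), the counting-and-multiplying procedure might look like:

```python
# Toy sketch: run a logical function over a high-level representation
# of the physical state. The "universe" here is just piles of
# apple identifiers on a table (all names are illustrative).
universe = {"table": [["apple1", "apple2"],
                      ["apple3", "apple4", "apple5"]]}

def count_pile(pile):
    """Count by visiting each apple once, marking those already counted
    so that none is counted a second time."""
    counted = set()
    for apple in pile:
        if apple not in counted:
            counted.add(apple)
    return len(counted)

def product_of_pile_counts(world):
    """Count each pile, then multiply the counts together."""
    result = 1
    for pile in world["table"]:
        result *= count_pile(pile)
    return result

print(product_of_pile_counts(universe))  # -> 6
```

Removing one apple from the input changes the output, which is the sense in which constraining the output constrains the physical input.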

And of course operations like "counting" and "multiplication" are pinned down by the number-axioms of Peano Arithmetic:
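To illustrate what "pinned down by the number-axioms" means, here is a sketch (my own toy encoding, not from the post) of multiplication defined purely from zero and successor by the Peano recursion equations, with no appeal to anything physical:

```python
# Peano-style numerals: ZERO is zero, S(n) is the successor of n.
# Addition and multiplication are defined only by the recursion axioms:
#   a + 0 = a             a * 0 = 0
#   a + S(b) = S(a + b)   a * S(b) = (a * b) + a
ZERO = ("0",)

def S(n):
    return ("S", n)

def add(a, b):
    return a if b == ZERO else S(add(a, b[1]))

def mul(a, b):
    return ZERO if b == ZERO else add(mul(a, b[1]), a)

def numeral(k):
    """Helper: build S(S(...S(0)...)) with k successors."""
    n = ZERO
    for _ in range(k):
        n = S(n)
    return n

def value(n):
    """Helper: count the successors back into a Python int."""
    k = 0
    while n != ZERO:
        k, n = k + 1, n[1]
    return k

print(value(mul(numeral(2), numeral(3))))  # -> 6
```

The point of the encoding is that 'six' appears as the unique result forced by the axioms, not as anything findable in the apples.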

And we shouldn't forget that the image of the table is being calculated from eyes which are in causal contact with the real table-made-of-particles out there in physical reality:

And then there's also the point that the Peano axioms themselves are being quoted inside your brain in order to pin down the ideal multiplicative result - after all, you can get multiplications wrong - but I'm not going to draw the image for that one.  (We tried, and it came out too crowded.)

So long as the math is pinned down, any table of two apple piles should yield a single output when we run the math over it. Constraining this output constrains the possible states of the original, physical input universe:

And thus "The product of the apple numbers is six" is meaningful, constraining the possible worlds. It has a truth-condition, fulfilled by a mixture of physical reality and logical validity; and the correspondence is nailed down by a mixture of causal reference and axiomatic pinpointing.

I usually simplify this to the idea of "running a logical function over the physical universe", but of course the small picture doesn't work unless the big picture works.


The Great Reductionist Project can be seen as figuring out how to express meaningful sentences in terms of a combination of physical references (statements whose truth-value is determined by a truth-condition directly corresponding to the real universe we're embedded in) and logical references (valid implications of premises, or elements of models pinned down by axioms); where both physical references and logical references are to be described 'effectively' or 'formally', in computable or logical form.  (I haven't had time to go into this last part but it's an already-popular idea in philosophy of computation.)

And the Great Reductionist Thesis can be seen as the proposition that everything meaningful can be expressed this way eventually.

But it sometimes takes a whole bunch of work.

And to notice when somebody has subtly violated the Great Reductionist Thesis - to see when a current solution is not decomposable to physical and logical reference - requires a fair amount of self-sensitization before the transgressions become obvious.


Example:  Counterfactuals.

Consider the following pair of sentences, widely used to introduce the idea of "counterfactual conditionals":

  • (A) If Lee Harvey Oswald didn't shoot John F. Kennedy, someone else did.
  • (B) If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would've.

The first sentence seems agreeable - John F. Kennedy definitely was shot, historically speaking, so if it wasn't Lee Harvey Oswald it was someone.  On the other hand, unless you believe the Illuminati planned it all, it doesn't seem particularly likely that if Lee Harvey Oswald had been removed from the equation, somebody else would've shot Kennedy instead.

Which is to say that sentence (A) appears true, and sentence (B) appears false.

One of the historical questions about the meaning of causal models - in fact, of causal assertions in general - is, "How does this so-called 'causal' model of yours, differ from asserting a bunch of statistical relations?  Okay, sure, these statistical dependencies have a nice neighborhood-structure, but why not just call them correlations with a nice neighborhood-structure; why use fancy terms like 'cause and effect'?"

And one of the most widely endorsed answers, including nowadays, is that causal models carry an extra meaning because they tell us about counterfactual outcomes, which ordinary statistical models don't.  For example, suppose this is our causal model of how John F. Kennedy got shot:

[Diagram: causal model in which "Kennedy elected" is a parent of "Oswald shoots Kennedy"]

Roughly this is intended to convey the idea that there are no Illuminati:  Kennedy causes Oswald to shoot him, does not cause anybody else to shoot him, and causes the Moon landing; but once you know that Kennedy was elected, there's no correlation between his probability of causing Oswald to shoot him and his probability of causing anyone else to shoot him.  In particular, there's no Illuminati who monitor Oswald and send another shooter if Oswald fails.

In any case, this diagram also implies that if Oswald hadn't shot Kennedy, nobody else would've - a counterfactual computed by 'counterfactual surgery', a.k.a. the do() operator, in which a node is severed from its former parents, set to a particular value, and its descendants then recomputed:

[Diagram: the surgically altered model, with the "Oswald shoots Kennedy" node severed from its parents and set to N]

And so it was claimed that the meaning of the first diagram is embodied in its implicit claim (as made explicit in the second diagram) that "if Oswald hadn't shot Kennedy, nobody else would've".  This statement is true, and if all the other implicit counterfactual statements are also true, the first causal model as a whole is a true causal model.
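A minimal sketch of counterfactual surgery on such a model (the node names and the model itself are my illustration, not a real causal-inference library):

```python
# Minimal causal-model sketch: each node is a function of the values
# of its parents. do() severs a node from its parents, clamps it to a
# fixed value, and recomputes the descendants.
model = {
    "kennedy_elected": lambda v: True,
    "oswald_shoots":   lambda v: v["kennedy_elected"],  # no Illuminati:
    "other_shoots":    lambda v: False,                 # nobody else is sent
    "kennedy_shot":    lambda v: v["oswald_shoots"] or v["other_shoots"],
}
order = ["kennedy_elected", "oswald_shoots", "other_shoots", "kennedy_shot"]

def run(model, interventions=None):
    """Evaluate nodes in causal order; do()-clamped nodes ignore parents."""
    interventions = interventions or {}
    values = {}
    for node in order:
        if node in interventions:
            values[node] = interventions[node]  # severed and set by do()
        else:
            values[node] = model[node](values)  # computed from parents
    return values

print(run(model)["kennedy_shot"])                            # -> True
print(run(model, {"oswald_shoots": False})["kennedy_shot"])  # -> False
```

Note that the second output is computed from the first model plus the surgery operation; at no point is a counterfactual universe consulted, which is the point made below.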

What's wrong with this picture?

Well... if you're strict about that whole combination-of-physics-and-logic business... the problem is that there are no counterfactual universes for a counterfactual statement to correspond-to.  "There's apples on the table" can be true when the particles in the universe are arranged into a configuration where there's some clumps of organic molecules on the table.  What arrangement of the particles in this universe could directly make true the statement "If Oswald hadn't shot Kennedy, nobody else would've"?  In this universe, Oswald did shoot Kennedy and Kennedy did end up shot.

But it's a subtle sort of thing, to notice when you're trying to establish the truth-condition of a sentence by comparison to counterfactual universes that are not measurable, are never observed, and do not in fact actually exist.

Because our own brains carry out the same sort of 'counterfactual surgery' automatically and natively - so natively that it's embedded in the syntax of language.  We don't say, "What if we perform counterfactual surgery on our models to set 'Oswald shoots Kennedy' to false?"  We say, "What if Oswald hadn't shot Kennedy?"  So there's this counterfactual-supposition operation which our brain does very quickly and invisibly to imagine a hypothetical non-existent universe where Oswald doesn't shoot Kennedy, and our brain very rapidly returns the supposition that Kennedy doesn't get shot, and this seems to be a fact like any other fact; and so why couldn't you just compare the causal model to this fact like any other fact?

And in one sense, "If Oswald hadn't shot Kennedy, nobody else would've" is a fact; it's a mixed reference that starts with the causal model of the actual universe where there are actually no Illuminati, and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like 'six' for the product of apples on the table, is not actually present anywhere in the universe.  But you can't say that the causal model is true because the counterfactuals are true.  The truth of the counterfactuals has to be calculated from the truth of the causal model, followed by the implications of the counterfactual-surgery axioms.  If the causal model couldn't be 'true' or 'false' on its own, by direct comparison to the actual real universe, there'd be no way for the counterfactuals to be true or false either, since no actual counterfactual universes exist.


So that business of counterfactuals may sound like a relatively obscure example (though it's going to play a large role in decision theory later on, and I expect to revisit it then) but it sets up some even larger points.

For example, the Born probabilities in quantum mechanics seem to talk about a 'degree of realness' that different parts of the configuration space have (proportional to the integral over squared modulus of that 'world').

Could the Born probabilities be basic - could there just be a basic law of physics which just says directly that to find out how likely you are to be in any quantum world, the integral over squared modulus gives you the answer?  And the same law could just as easily have said that you're likely to find yourself in a world that goes over the integral of modulus to the power 1.99999?

But then we would have 'mixed references' that mixed together three kinds of stuff - the Schrodinger Equation, a deterministic causal equation relating complex amplitudes inside a configuration space; logical validities and models; and a law which assigned fundamental-degree-of-realness a.k.a. magical-reality-fluid.  Meaningful statements would talk about some mixture of physical laws over particle fields in our own universe, logical validities, and degree-of-realness.

This is the same sort of problem you get if you say that causal models are meaningful and true relative to a mixture of three kinds of stuff: actual worlds, logical validities, and counterfactuals.  You're only supposed to have two kinds of stuff.

People who think qualia are fundamental are also trying to build references out of at least three different kinds of stuff: physical laws, logic, and experiences.

Anthropic problems similarly revolve around a mysterious degree-of-realness, since presumably when you make more copies of people, you make their experiences more anticipate-able somehow.  But this doesn't say that anthropic questions are meaningless or incoherent.  It says that since we can only talk about anthropic problems using three kinds of stuff, we haven't finished Doing Reductionism to it yet.  (I have not yet encountered a claim to have finished Reducing anthropics which (a) ends up with only two kinds of stuff and (b) does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant, given that if all this talk of 'degree of realness' is nonsense, there is no way to say that physically-lawful copies of me are more common than Boltzmann brain copies of me.)

Or to take it down a notch, naive theories of free will can be seen as obviously not-completed Reductions when you consider that they now contain physics, logic, and this third sort of thingy called 'choices'.

And - alas - modern philosophy is full of 'new sorts of stuff'; we have modal realism that makes possibility a real sort of thing, and then other philosophers appeal to the truth of statements about conceivability without any attempt to reduce conceivability into some mixture of the actually-physically-real-in-our-universe and logical axioms; and so on, and so on.

But lest you be tempted to think that the correct course is always to just envision a simpler universe without the extra stuff, consider that we do not live in the 'naive un-free universe' in which all our choices are constrained by the malevolent outside hand of physics, leaving us as slaves - reducing choices to physics is not the same as taking a naive model with three kinds of stuff, and deleting all the 'choices' from it.  This is confusing the project of getting the gnomes out of the haunted mine with trying to unmake the rainbow.

Counterfactual surgery was eventually given a formal and logical definition, but it was a lot of work to get that far - causal models had to be invented first, and before then, people could only wave their hands frantically in the air when asked what it meant for something to be a 'cause'.

The overall moral I'm trying to convey is that the Great Reductionist Project is difficult; it's not a matter of just proclaiming that there are no gnomes in the mine, or that rainbows couldn't possibly be 'supernatural'.  There are all sorts of statements that were not originally, or are presently not obviously, decomposable into physical law plus logic; but that doesn't mean you just give up immediately.  The Great Reductionist Thesis is that reduction is always possible eventually.  It is nowhere written that it is easy, or that your prior efforts were enough to find a solution if one existed.

Continued next time with justice and mercy (or rather, fairness and goodness).  Because clearly, if we end up with meaningful moral statements, they're not going to correspond to a combination of physics and logic plus morality.


Mainstream status.

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "By Which It May Be Judged"

Previous post: "Causal Universes"


357 comments

Not to be obnoxious, but...

You're only supposed to have two kinds of stuff.

Why two?

ETA: I feel like I may have distracted from the thrust of the post. I think the main point was that there really really probably shouldn't be more than two stuffs, which is legit.

Because Tegmark 4 isn't mainstream enough yet to get it down to one.

Whether there is a way to reduce it to zero is one discovery I'm much looking forward to, but there probably isn't. It certainly seems totally impossible, but that only really means "I can't think of a way to do it".

Eliezer Yudkowsky (8y): It does indeed seem possible that in the long run we'll end up with one kind of stuff, either from the reduction of logic to physics, or the reduction of physics to math. It's also worth noting that my present model does have magical-reality-fluid in it, and it's conceivable that this will end up not being reduced. But the actual argument is something along the lines of, "We got it down to two crisp things, and all the proposals for three don't have the crisp nature of the two."
MaoShan (8y): That seems to me more like an irreducible string of methods of interpretation. You have physics, whether you like it or not. If you want to understand the physics, you need math. And to use the math, you need logic. Physics itself does not require math or logic. We do, if we want to do anything useful with it. So it's not so much "reducible" as it is "interpretable" - physics is such that turning it into a bunch of numbers and wacky symbols actually makes it more understandable. But to draw from your example, you can't have a physical table with physically infinite apples sitting on it. Yet you can do math with infinities, but all the math in the world won't put more apples on that table. ...and since when is two apples sitting next to each other a pile??
MrMind (8y): Just as mental gymnastics, what if instead we would be able to reduce physics and logic to magical reality fluid? :) Anyway, for the "logic from physics" camp the work of Valentin Turchin seems interesting (above all "The cybernetic foundation of mathematics"). Also of note is the recent foundational program called "Univalent foundations".
Eugine_Nier (8y): I don't think you can reduce logic to anything else, since you would need to use logic to perform the reduction.
MrMind (8y): Well, since nobody has done that yet, we cannot be sure, but for example a reduction of logic to physics could look like this: "for a system built on top of this set of physical laws, this is the set of logical systems available to it", which would imply that all the axiomatic systems we use are only those accessible via our laws of physics. For an extreme seminal example, Turing machines with infinite time have a very different notion of "effective procedure".
Eugine_Nier (8y): How would one show the above, or even build up a system on top of physical laws without using logic?
MrMind (8y): I have (at the moment) no idea. It's clear that such a demonstration needs to use some kind of logic, but I think that doesn't undermine the (possible) reduction: if you show that the (set of) logic available to a system depends on the physical laws, you have shown that our own logic is determined by our own laws. This would entail that (possibly) different laws would have granted us different logics. I'm fascinated for example by the fact that the concept of "second order arithmetical truth" (SOAT) is inaccessible by effective finite computation, but there are space-times that allow for infinite computation (and so systems inhabiting such a world could possibly grasp SOATs effectively).
Eugine_Nier (8y): I think you're going to have better luck figuring out how to make the third thing crisp than reducing it to the first two.
Armok_GoB (8y): I only see one crisp thing and one thing borrowing some of the crispness of the first thing but mostly failing, in your model.
DanArmak (8y): What would that mean? How do you reduce something to nothing? Or, well, everything to nothing?
Peterdjones (8y): Split the universe into energy and information. Let positive and negative energy sum to nothing. That leaves information. Large ensembles contain very little overall information because it takes little information to specify them, eg: "every real number". However, they can still seem complicated from the inside. An ultimate ensemble plausibly contains no information because there is no need to pinpoint it in EverythingPossibleSpace. However, it is not clear that level IV is general enough, since the existence of non-mathematical thingies is not obviously impossible.
DanArmak (8y): That doesn't mean you don't talk about energy as a basic ontological kind. You still have to talk about it - to say that its value happens to be zero. Whether it is actually zero is an empirical matter. I haven't heard of physical theories that claim this, so what do you mean exactly?
bryjnar (8y): This. EY's made a kind of argument that you should have two kinds of stuff (although I still think the logical pinpointing stuff is a bit weak), but he seems to be proceeding as if he'd shown that that was exhaustive. For all the arguments he's given so far, this third post could have been entitled "Experiences: the Third Kind of Stuff", and it would be consistent with what he's already said. So yeah, we need an argument for: "You're only supposed to have two kinds of stuff."
MrMind (8y): I think the whole point of "the great reductionist project" is that we don't really have a sufficiency theorem, so we should treat "no more than two" as an empirical hypothesis and proceed to discover its truth by the methods of science.
Eugine_Nier (8y): He may be overreacting against a strain in philosophy that seeks to reduce everything to experience. Similar to the way behaviorism [http://lesswrong.com/lw/sr/the_comedy_of_behaviorism/] was an overreaction against Freud.
shminux (8y): Not third, first. There are only two kinds of stuff, experiences and models. Separating physical models from logical is rather artificial; both are used to explain experiences.
Rob Bensinger (8y): We only access models via experiences. If you aren't willing to reduce models to experiences, why are you willing to reduce the physical world of apples and automobiles to experiences? You're already asserting a kind of positivistic dualism; I see no reason not to posit a third domain, the physical, to correspond to our concrete experiences, just as you've posited a 'model domain' (cf. Frege's third realm) to correspond to our abstract experiences.
MixedNuts (8y): Agreed. The number two is ridiculous and can't exist. Once you allow stuff to have a physical kind and a logical kind, what's to stop you from adding other kinds like degree-of-realness and Buddha-nature? OTOH, logical abstractions steadfastly refuse to be reduced to physics. There may be hope for the other way around, a solution to "Why does stuff exist?" that makes the universe somehow necessary. (Egan's "conscious minds find themselves" is cute but implies either chaotic observations or something to get the minds started.) But we can't be very optimistic.
[anonymous] (8y): That's Tegmark's Mathematical Universe Hypothesis; the best explanation I've seen of it is Section 8.1 "Something for Nothing" in Good and Real by Gary Drescher.
TsviBT (8y): For math as mere physics, see Egan's Luminous.
Armok_GoB (8y): This problem is already solved, the answer is here: http://arxiv.org/abs/0704.0646
MixedNuts (8y): I don't get it. Okay, obviously our universe is a mathematical structure, that's why physics works. "All math is real" is seductive, but "All computable math is real, but there are no oracles" is just weird; why would you expect that without experimental evidence of Church-Turing? The idea that since there are twice as many infinite strings containing "1010" as "10100", the former must exist twice as much as the latter nicely explains why our universe is so simple. But I'm not at all convinced that universes like ours with stable observers are simpler than pseudorandom generators that pop out Boltzmann brains.
Armok_GoB (8y): That all math is "real" in some sense is something you observe directly any time you do any. The insight is not that math is MORE real than previously thought, but just that there isn't some additional kind of realness. Sort of; this is an oversimplification. Also check out: http://lesswrong.com/lw/1zt/the_mathematical_universe_the_map_that_is_the/
shminux (8y): That post is a confused jumble of multiple misinterpretations of the word "exist".
MixedNuts (8y): If all levels of the Turing hierarchy are about as real, it's extremely unlikely our universe is at level zero. Yet Church-Turing looks pretty solid.
endoself (8y): Combine this with the simulation hypothesis; a universe can only simulate less computationally expensive universes. (Of course this is handwavy and barely an argument, but it's possible something stronger could be constructed along these lines. I do think that much more work needs to be done here.)
Rob Bensinger (8y): I'm pretty sure Eliezer's approach is the opposite of Tegmark's. For Tegmark, the math is real and our physical world emerges from it, or is an image of part of it. For Eliezer, our world, in all its thick, visceral, spatiotemporal glory, is the Real, and logical, mathematical, counterfactual, moral, mentalizing, essentializing, and otherwise abstract reasoning is a human invention that happens to be useful because its rules are precisely and consistently defined. There's much less urgency to producing a reductive account of mathematical reasoning when you've never reified 'number' in the first place. Of course, that's not to deny that something like Tegmark's view (perhaps a simpler version, Game-of-Life-style or restricted to a very small subset of possibility-space that happens to be causally structured) could be true. But if such a view ends up being true, it will provide a reduction of everything we know to something else; it won't be likely to help at all in reducing high-level human concepts like number or qualia or possibility directly to Something Else. For ordinary reductive purposes, it's physics or bust.
Eliezer Yudkowsky (8y): Always two there are. No more. No less.
DaFranker (8y): My best vulgarization, which I hope not to be a rationalization (read: Looking for more evidence that it is!), is that physical kinds of stuff are about what is, while logical kinds of stuff are about "what they do". If you have one lone particle¹ in an empty universe, there's only the one kind, the physical. The particle is there. Once you have two particles, the physical kind of stuff is about how they are, their description, while the logical stuff is about the axiom "these two particles interact" - and everything that derives from there, such as "how" they interact².

I do not see any room for more kinds of stuff that is necessary in order to fully and perfectly simulate all the states of the entire universe where these two particles exist. I also don't see how adding more particles is going to change that in any manner. As per the evidence we have, it seems extremely likely that our own universe is a version of this universe with simply more particles in it.

So really, you can reduce it to "one", if you're willing to hyper-reduce the conceptual fundamental "is" to the simple logical "do" - if you posit that a single particle in a separate universe simply does not exist, because the only existence of a particle is its interaction, and therefore interactions are the only thing that do exist. Then the distinction between the physical and logical becomes merely one of levels of abstraction, AFAICT, and can theoretically be done away with. However, the physical-logical two-rule seems to be useful, and the above seems extremely easy to misinterpret or confuse with other things.

1. Defined as whatever is the most fundamentally reduced smallest possible unit of the universe, be that a point in a wave field equation, a quark, or anything else reality runs on.
2. I've read some theories (and thought some of my own) implying that there is no real "how" of interaction, and that all the interactions are simply the simplest, most primitive
Peterdjones (8y): How does EY know there are only two? Is it a priori knowledge? Is it empirical? Is it subject to falsification? How many failed reductions-to-two-kinds-of-stuff do there have to be before TKoS is falsified?

How confident are we in the Great Reductionist Thesis? Short of the Great Reductionist Project's success, what would be evidence for or against it?

Eliezer Yudkowsky (8y): After it's been right the last 300 times or so, we should assess a substantial probability that it will be wrong before the 1,000th occasion, but believe much more strongly that it will be correct on the next occasion.

Only because you're cheating by reclassifying all cases where it was wrong as cases where we haven't figured out how to properly apply it yet.

JoshuaZ (8y): That doesn't seem to answer dspeyer's questions.

Okay. I'll bet with somewhere around 50% probability that the Great Reductionist Project as I've described it works, with reduction to a single thing counting as success, and requiring magical reality-fluid counting as failure. I'll bet with 95% probability that it's right on the next occasion for anthropics and magical reality-fluid, and with 99+% probability that it's right on the next occasion for things that confuse me less; except that when it comes to e.g. free will, I don't know who I'd accept as a judge that didn't think the issue already settled.

JoshuaZ (8y): Can you expand on what you mean by this?
[anonymous] (8y): Either the Great Reductionist Thesis ("everything meaningful can be expressed by [physics+logic] eventually") is itself expressible with physics+logic (eventually) or it isn't. If it is, then it might be true. If it isn't, then the great reductionist thesis is not true, because the proposition it expresses is not meaningful. I'm worried about this possibility because the phrase 'everything meaningful' strikes me as dangerously self-referential.

This is a reply to the long conversation below between Esar and RobbBB.

Let me first say that I am grateful to Esar and RobbBB for having this discussion, and double-grateful to RobbBB for steelmanning my arguments in a very proper and reasonable fashion, especially considering that I was in fact careless in talking about "meaningful propositions" when I should've remembered that a proposition, as a term of art in philosophy, is held to be a meaning-bearer by definition.

I'm also sorry about that "is meaningless is false" phrase, which I'm certain was a typo (and a very UNFORTUNATE typo) - I'm not quite sure what I meant by it originally, but I'm guessing it was supposed to be "is meaningless or false", though in the context of the larger debate now that I've read it, I would just say "colorless green ideas sleep furiously" is "meaningless" rather than false. In a strict sense, meaningless utterances aren't propositions so they can't be false. In a looser sense, an utterance like "Maybe we're living in an inconsistent set of axioms!" might be impossible to render coherent under strict standards of meaning, while also being... (read more)

Rob Bensinger (8y): R3) "What sort of utterances can we argue about in English?" is (perhaps deliberately) vague. We can argue about colorless green ideas, if nothing else at the linguistic level. Perhaps R3 is not about meaning, but about debate etiquette: what are the minimum standards for an assertion to be taken seriously as an assertion (i.e., not as a question, interjection, imperative, glossolalia, etc.)? In that case, we may want to break R3 down into a number of sub-questions, since in different contexts there will be different standards for the admissibility of an argument. I'm not sure what exactly a sensus divinitatis is, or why it wouldn't be axiomatizable. Perhaps it would help flesh out the Great Reductionist Thesis if we evaluated which of these phenomena, if any, would violate it:

1. Objective fuzziness. I.e., there are entities that, at the ultimate level, possess properties vaguely; perhaps even some that exist vaguely, that fall in different points on a continuum from being to non-being.
2. Ineffable properties, i.e., ones that simply cannot be expressed in any language. The specific way redness feels to me, for instance, might be a candidate for logico-physical inexpressibility; I can perhaps ostend the state, but any description of that state will underdetermine the precise feeling.
3. Objective inconsistencies, i.e., dialetheism [http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.ndjfl/1039540770]. Certain forms of perspectivism, which relativize all truths to an observer, might also yield inconsistencies of this sort. Note that it is a stronger claim to assert dialetheism (an R1-type claim) than to merely allow that reasoning non-explosively with apparent contradictions can be very useful (an R2-type claim, affirming paraconsistent logics).
4. Nihilism. There isn't anything.
5. Eliminativism about logic, intentionality, or computation…
Eliezer Yudkowsky (8y): I have no objection to your description of R3 - basically it's there so that (a) we don't think that something not immediately obviously being in R2 means we have to kick it off the table, and (b) when somebody claims their imagination is giving them veridical access to something, we can describe the thing accessed as membership in R3, which in turn is (and should be) too vague for anything else to be concluded thereby; you shouldn't be able to get info about reality merely by observing that you can affirm English utterances.

Insofar as your GRT violations all seem to me to be in R3 and not R2 (i.e., I cannot yet coherently imagine a state of affairs that would make them true), I'm mostly willing to agree that reality actually being that way would falsify GRT and my proposed R2. Unless you pick one of them and describe what you mean by it more exactly - what exactly it would be like for a universe to be like that, how we could tell if it were true - in which case it's entirely possible that this new version will end up in the logic-and-physics R2, and for similar reasons wouldn't falsify GRT if true. E.g., a version of "nihilism" that is cashed out as "there is no ontologically fundamental reality-fluid", a denial of "reference" in which there is no ontologically basic descriptiveness, an eliminativism about "logic" which still corresponds to a computable causal process, "relativized" descriptions along the lines of Special Relativity, and so on. This isn't meant to sneak reductionism in sideways into universes with genuinely ineffable magic composed of irreducible fundamental mental entities with no formal effective description in logic as we know it. Rather, it reflects the idea that even in an intuitive sense, sufficiently effable magic tends toward science, and since our own brains are in fact computable, attempts to cash out the ineffable in greater detail tend to turn it effable.

The traditional First-Cause ontologically-basic R3 "God" falsifies reductionism…
Rob Bensinger (8y): Here are three different doctrines:

1. Expressibility. Everything (or anything) that is the case can in principle be fully expressed or otherwise represented. In other words, an AI is constructible-in-principle that could model every fact, everything that is so. Computational power and access-to-the-data could limit such an AI's knowledge of reality, but basic effability could not.
2. Classical Expressibility. Everything (or anything) that is the case can in principle be fully expressed in classical logic. In addition to objective ineffability, we also rule out objective fuzziness, inconsistency, or 'gaps' in the World. (Perhaps we rule them out empirically; we may not be able to imagine a world where there is objective indeterminacy, but we at least intuit that our world doesn't look like whatever such a world would look like.)
3. Logical Physicalism. The representational content of every true sentence can in principle be exhaustively expressed in terms very similar to contemporary physics and classical logic.

Originally I thought that your Great Reductionist Thesis was a conjunction of 1 and 3, or of 2 and 3. But your recent answers suggest to me that for you GRT may simply be Expressibility (1). Irreducibly unclassical truths are ruled out, not by GRT, but by the fact that we don't seem to need to give up principles like Non-Contradiction and Tertium Non Datur in order to Speak Every Truth. And mentalistic or supernatural truths are excluded only insofar as they violate Expressibility or just appear empirically unnecessary. If so, then we should be very careful to distinguish your confidence in Expressibility from your confidence in physicalism. Neither, as I formulated them above, implies the other. And there may be good reason to endorse both views, provided we can give more precise content to 'terms very similar to contemporary physics and classical logic.' Perhaps the easiest…
Eliezer Yudkowsky (8y): So... in my world, transubstantiation isn't in R2, because I can't coherently conceive of what a substance is, apart from accidents. For a similar reason, I don't yet have R2-language for talking about a universe being metaphysically made of anything. I mean, I can say in R3 that perhaps physics is made of cheese, just like I can say that the natural numbers are made of cheese, but I can't R2-imagine a coherent state of affairs like that. A similar objection applies to a logical universe which is allegedly made out of mental stuff. I don't know how to imagine a logically structured universe being made of anything. Having Latin-language phonemes carve at the joints of fundamental reality seems very hard, because in my world Latin-language phonemes are already reduced - there are already sequential sound-patterns making them up, and the obvious way to have a logic describing the physics of such a world is to have complex specifications of the phonemes which are 'carving at the joints'. It's not totally clear to me how to make this complex thing a fundamental instead, though perhaps it could be managed via a logic containing enough special symbols - but to actually figure out how to write out that logic, you would have to use your own neuron-composed brain in which phonemes are not fundamental.

I do agree that - if it were possible to rule out the Matrix, I mean, if spells not only work but the incantation is "Stupefy" then I know perfectly well someone's playing an S-day prank on me - finding that magic works would be a strong hint that the whole framework is wrong. If we actually find that prayers work, then pragmatically speaking, we've received a hint that maybe we should shut up and listen to what the most empirically powerful priests have to say about this whole "reductionism" business. (I mean, that's basically why we're listening to Science.)

But that kind of meta-level "no, you were just wrong, shut up and listen to the spiritualist" is something you'd only expect…
6Rob Bensinger8yMany mathematicians, scientists, and philosophers believe in things they call 'sets.' They believe in sets partly because of the 'unreasonable effectiveness' of set theory, partly because they help simplify some of our theories, and partly because of set theory's sheer intuitiveness. But I have yet to hear anyone explain to me what it means for one non-spatiotemporal object to 'be an element of' another. Inasmuch as set theory is not gibberish, we understand it not through causal contact or experiential acquaintance with sets, but by exploring the theoretical role these undefined 'set' thingies overall play (assisted, perhaps, by some analogical reasoning). 'Substance' and 'accident' are antiquated names for a very commonly accepted distinction: Between objects and properties. (Warning: This is an oversimplification. See The Warp and Woof of Metaphysics [http://pvspade.com/Logic/docs/WarpWoo1.pdf] for the historical account.) Just as the efficacy of mathematics tempts people into reifying the set-member distinction, the efficacy of propositional calculus (or, more generally, of human language!) tempts people into reifying the subject-predicate distinction. The objects (or 'substances') are whatever we're quantifying over, whatever individual(s) are in our domain of discourse, whatever it is that predicates are predicated of; the properties are whatever it is that's being predicated. And we don't need to grant that it's possible for there to be an object with no properties (∃x(∀P(¬P(x)))), or a completely uninstantiated property (∃P(∀x(¬P(x)))). But once we introduce the distinction, Christians are free to try to exploit it to make sense of their doctrines. If set theory had existed in the Middle Ages, you can be sure that there would have been attempts to explicate the Trinity in set-theoretic terms; but the silliness of such efforts would not necessarily have bled over into delegitimizing set theory itself. 
That said, I sympathize with your bafflement. I'm not c
[anonymous] (8y): Can I run something by you? An argument occurred to me today that seems suspect, but I don't know what I'm getting wrong. The conclusion of the argument is that GRTt entails GRTm. For the purposes of this argument, GRTt is the statement that all true statements have a physico-logical expression (meaning a physical, logical, or physical+logical expression). GRTm is the statement that all true and all false statements have a physico-logical expression.

P1) All true statements have a physico-logical expression. (GRTt)
P2) The negation of any false statement is true.
P3) If a statement has a physico-logical expression, its negation has a physico-logical expression.
P4) All false statements have a physico-logical expression.
C) All true and all false statements have a physico-logical expression. (GRTm)

So for example, suppose XYZ is false, and has no physico-logical expression. If XYZ is false, then ~XYZ is true. By GRTt, ~XYZ has a physico-logical expression. But if ~XYZ has a physico-logical expression, then ~(~XYZ), or XYZ, does. Throwing a negation in front of a statement can't change the nature of the statement qua reducible. Therefore, GRTt entails GRTm. What do you think?
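Esar's P1–P4 argument can be checked mechanically. The following is one possible formalization sketch in Lean 4; none of the names come from the thread. `Tr`, `Fa`, and `Expr` are abstract predicates standing in for 'is true', 'is false', and 'has a physico-logical expression'; `neg` is an abstract negation operator; and the assumed double-negation identity `dn` plays the role of the step from ~(~XYZ) back to XYZ.

```lean
-- Hypothetical formalization of Esar's argument that GRTt entails GRTm.
variable {Stmt : Type} (Tr Fa Expr : Stmt → Prop) (neg : Stmt → Stmt)

theorem grtt_entails_grtm
    (P1 : ∀ s, Tr s → Expr s)              -- GRTt: all truths are expressible
    (P2 : ∀ s, Fa s → Tr (neg s))          -- negation of a falsehood is true
    (P3 : ∀ s, Expr s → Expr (neg s))      -- expressibility survives negation
    (dn : ∀ s, neg (neg s) = s)            -- assumed: ~~s is s
    : ∀ s, Tr s ∨ Fa s → Expr s := by      -- GRTm: truths and falsehoods alike
  intro s h
  cases h with
  | inl ht => exact P1 s ht
  | inr hf =>
    -- ~s is true (P2), hence expressible (P1), hence ~~s = s is too (P3, dn).
    have h1 : Expr (neg (neg s)) := P3 (neg s) (P1 (neg s) (P2 s hf))
    rwa [dn] at h1
```

The sketch makes visible which extra assumption the prose argument leans on: the move from "~XYZ is expressible" to "XYZ is expressible" needs `P3` together with the double-negation identity, which is exactly the "throwing a negation in front can't change the statement qua reducible" step.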
Rob Bensinger (8y): I think your argument works. But I can't accept GRTm; so I'll have to ditch GRTt. In its place, I'll give analyzing GRT another go; call this new formulation GRTd:

* 'Every true statement can be deductively derived from the set of purely physical and logical truths combined with statements of the semantics of the non-physical and non-logical terms.'

This is quite unlike (and no longer implies) GRTm, 'Every meaningful statement is expressible in purely physical and logical terms.' The problem for GRTt was that statements like 'there are no gods' and 'there are no ghosts' seem to be true, but cast in non-physical terms; so either they are reducible to physical terms (in which case both GRTt and GRTm are true), or irreducible (in which case both GRTt and GRTm are false). For GRTd, it's OK if 'there are no ghosts' can't be analyzed into strictly physical terms, provided that 'there are no ghosts' is entailed by a statement of what 'ghost' means plus all the purely physical and logical truths. For example, if part of what 'ghost' means is 'something non-physical,' then 'there are no ghosts' will be derivable from a complete physical description of the world, provided that such a description includes a physical/logical totality fact. You list everything that exists, then add the totality fact 'nothing except the above entities exists'; since the semantics of 'ghost' ensures that 'ghost' is not identical to anything on the physicalist list, we can then derive that there are no ghosts. Note that the semantic 'bridge laws' are themselves entailed by (and, in all likelihood, analyzable into) purely physical facts about the brains of English language speakers.
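The GRTd derivation of 'there are no ghosts' can be illustrated with a toy finite model. This is a minimal sketch, not anything from the thread: the three-element domain, the category predicates, and the `is_ghost` semantics are all invented for the example.

```python
# Toy illustration of the GRTd derivation, under invented assumptions.

# (a) The purely physical facts: a finite domain and its category predicates.
domain = {"a", "b", "c"}
boson = {"a"}
fermion = {"b", "c"}

# (b) The totality fact: everything in the domain is a boson or a fermion.
totality_fact = all(x in boson or x in fermion for x in domain)

# (c) The semantics of 'ghost' (the bridge law): a ghost is something
# that is neither a boson nor a fermion, i.e. something non-physical.
def is_ghost(x):
    return x not in boson and x not in fermion

# The derivation: given (a), (b), and (c), no element of the domain is a ghost.
no_ghosts = totality_fact and not any(is_ghost(x) for x in domain)
print(no_ghosts)  # prints True: 'there are no ghosts' follows
```

Dropping the totality fact from the model is what blocks the derivation: without (b), a domain could contain an element outside every physical category, and `no_ghosts` would no longer follow from the listed facts alone.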
3[anonymous]8yWell done, I like GRTd especially in that it pulls free of reference to expressibility and meaningfulness. My only worry at the moment is the totality fact, partly because of what I take EY to want from the GRT in reference to R1. I take it we will agree right off that the totality fact can't follow from having listed all the physico-logical facts. Otherwise we could derive 'there are no ghosts' right now, just given the meaning of 'ghost'. But we need the answer to the question posed by R1 to be (in every case which doesn't involve a purely logical contradiction) an empirical answer. What we want to say about ghosts is not that they're impossible, but that their existence is extremely unlikely given the set of physico-logical facts we do have. We won't ever have opportunity to deploy a totality fact (since this requires omniscience, it seems), but it seems like an important part of the expression of the GRTd. But if we can't get the totality fact just from having listed all the physico-logical facts, and if the totality fact must itself be a physico-logical fact then I have a hard time seeing how we can deduce from physico-logical omniscience that there are no ghosts. In order to deduce the non-existence of ghosts, we'd need first to deduce the totality fact (since this is a premise in the former deduction), but if the totality fact is not deducible from all the physico-logical facts, then in order to deduce it, it looks like we need 'there are no ghosts' as a premise. But then our deduction of 'there are no ghosts' begs the question. Unless I'm missing something, it seems to me that the totality fact has to end up being deducible from all the physico-logical facts if deductions which employ it are to be valid. 
But this again makes the GRTd (specifically that part of it which describes the totality fact) an a priori claim, which we're trying to avoid especially because it means that GRTd is not an answer to R1 (which is what EY, at least, is looking for).
Rob Bensinger (8y): The totality fact could take a number of different forms. For instance, 'Everything is a set, a spacetime region, a boson, or a fermion' would suffice, if our semantics for 'ghost' made it clear that ghosts are none of those things. This is why we don't need omniscient access to every object to formulate the fact; all we need is a plausibly finished set of general physical categories. If 'physical' and 'logical' are themselves well-defined terms in our physics, we could even formulate the totality fact simply as: 'Everything is physical or logical.' Another, more modest totality-style fact would be: 'The physical is causally closed.' This weaker version won't let us derive 'there are no ghosts,' but it will let us derive 'ghosts, if real, have no causal effect on the physical,' which is presumably what we're most interested in anyway.

GRTd itself doesn't force you to accept totality facts (also known as Porky Pig facts). But if you reject these strange facts, then you'll end up needing either to affirm GRTm too, or to find some way to express negative existential facts about Spooky Things in your pristine physical/logical language. All three of these approaches have their costs, but I think GRTd is the most modest option, since it doesn't commit us to any serious speculation about the limits of semantics or translatability.

I think the totality fact is a physical (or 'mixed') fact. Intuitively, it's a fact about our world that it doesn't 'keep going' past a certain point. The totality fact can't be strictly deduced from any other fact. In all cases these totality facts are empirical inferences from the apparent ability of our physical predicates to account for everything. Inasmuch as we are confident that (category-wise) 'That's all, folks,' we are confident in there being no more categories, and hence (if only implicitly) in there being no Spooky addenda.

Notice this doesn't commit us to saying that we can meaningfully talk about Spooky nonphysical entities…
[anonymous] (8y): So, I like GRTd, insofar as it captures both what is so plausible about physicalism, and insofar as the 'totality fact' expresses an important kind of empirical inference: from even a small subset of all the physico-logical facts, we can get a good general picture of how the universe works, and what kinds of things are real. I still have questions about GRTd as a principle, however. I don't see how the following three statements are consistent with one another:

S1) GRTd: 'Every true statement can be deductively derived from the set of purely physical and logical truths combined with statements of the semantics of the non-physical and non-logical terms.'
S2) The totality fact is true.
S3) 'The totality fact can't be strictly deduced from any other fact.'

One of these three has to go, and I strongly suspect I've misunderstood S3. So my question is this: given all the physical and logical facts, combined with statements of the semantics of any non-physical and non-logical terms one might care to make use of, do you think we could deduce the totality fact?
Rob Bensinger (8y): The totality fact is one of the physical/logical facts, and can be expressed in purely physical/logical terms. For instance, in a toy universe where the only properties were P ('being a particle') and C ('being a spacetime point'), the totality fact would have the form ∀x(P(x) ∨ C(x)) to exclude other categories of entity. A more complete totality fact would exclude bonus particles and spacetime points too, by asserting ∀x(x=a ∨ x=b ∨ x=c...), where {a,b,c...} is the (perhaps transfinitely large) set of particles and points. You can also express the same idea using existential quantification.

S1, S2, and S3 are all correct, provided that the totality fact is purely physical and logical. (Obviously, any physical/logical fact follows trivially from the set of all physical/logical facts.) GRTd says nothing about which, if any, physical/logical facts are derivable from a proper subset of the physical/logical facts. (It also says nothing about whether there are non-physico-logical truths; it only denies that, if there are some, their truth or falsehood can fail to rest entirely on the physical/logical facts.)

A single giant totality fact would do the job, but you could also replace it (or introduce redundancy) by positing a large number of smaller totality facts. Suppose you want to define a simple classical universe in which a 2x2x2-inch cube exists. You can quantify over a specific 2x2x2-inch region of space, and assert that each of the points within the interval is occupied. But that only posits an object that's at least that large; we also need to define the empty space around it, to give it a definite border. A totality fact (or a small army of them) could give you the requisite border, establishing 'there's no more cube' in the same way that the Giant Totality Fact establishes 'there's no more reality.' But if you get a kick out of parsimony or concision, you don't need to do this again and again for each new bounded object you posit.

Instead, you can stick to positive…
[anonymous] (8y): Ah, I took GRTd to mean that 'every true statement (including all physical and logical truths) can be deductively derived from the set of purely physical and logical truths (excluding the one to be derived)...'. Thus, if the totality fact is true, then it should be derivable from the set of all physico-logical facts (excluding the totality fact). Is that right, or have I misunderstood GRTd?

I may, I think, just be overestimating what it takes to plausibly posit the totality fact: i.e. you may just mean that we can have a lot of confidence in the totality fact just by having as broad and coherent a view of the universe as we actually do right now. The totality fact may be false, but it's supported in general by the predictive power of our theories and an apparent lack of spooky phenomena. If we had all the physico-logical facts, we could be super duper confident in the totality fact, as confident as we are about anything. It would by no means follow deductively from the set of all physico-logical facts, but it's not that sort of claim anyway. Is that right?
Rob Bensinger (8y): The edit is fine. Let me add that 'the' totality fact may be a misleading locution. Nearly every model that can be analyzed factwise contains its own totality fact, and which model we're in will change what the 'totality' is, hence what the shape of the totality fact is. We can be confident that there is at least one fact of this sort in reality, simply because trivialism is false. But GRTd does constrain what that fact will have to look like: it will have to be purely logical and physical, and/or derivable from the purely logical and physical truths. (And the only thing we could derive a Big Totality Fact from would be other, smaller totality facts like 'there's no more square,' plus a second-order totality fact.)
[anonymous] (8y): Excellent, I think I understand. GRTd sounds good to me, and I think you should convince EY to adopt it as opposed to GRTt/m.
Rob Bensinger (8y): I didn't intend for you to read '(excluding the one to be derived)' into the statement. The GRTd I had in mind is a lot more modest, and allows for totality facts and a richer variety of causal relations. GRTd isn't a tautology (unless GRTm is true), because if there are logically underivable nonphysical and nonlogical truths, then GRTd is false. 'X can be derived from the conjunction of GRTd with X' is a tautology, but an innocuous one, since it leaves open the possibility that 'X' on its lonesome is a garden-variety contingent fact.
[anonymous] (8y): Sorry, I didn't expect you to read my post so quickly, and I edited it heavily without marking my edits (a failure of etiquette, I admit).
Peterdjones (8y): EY, please hand the SIAI keys to Rob!
Eliezer Yudkowsky (8y): What could it mean for a ghost to exist but be nonphysical? I think that what you think are counterexamples to GRTm are a large number of things which, examined carefully, would end up in R3-only, and not in R2.

I furthermore note that you just rejected GRTt, which sounds scarily like concluding that actual non-reductionist things exist, because you didn't want to accept the conclusion that talk of non-physical ghosts might fail strict qualifications of meaning. How could you possibly get there from here? How could your thoughts about what's meaningful entail that the laws of physics must be other than what we'd previously observed them to be? Shouldn't reaching that conclusion require, like, a particle accelerator or something?

Alternatively, perhaps your rejection of GRTt isn't intended to entail that non-reductionist things exist. If so, can you construe a narrower version of GRTt which just says that, y'know, non-reductionist thingies don't exist? And then would Esar's argument not go through for this version? I think Esar's argument mainly runs into trouble when you want to call R3-statements 'false', in which case their negations are colloquially true but in R3-only, because there's no strictly coherent and meaningful (R2) way to describe what doesn't exist (i.e. non-physical ghosts). If your desire to apply this language demands that you consider these R3-statements meaningful, then you should reject GRTm, I suppose - though not because you disagree with me about what stricter standards entail, but because you want the word "meaningful" to apply to looser standards. However, getting from there to rejecting R1 is a severe problem - though from the description, it's possible you don't mean by GRTt what I mean by R1.

I am a bit worried that you might want 'non-physical ghosts don't exist' to be true, hence meaningful, hence its negation to also be meaningful, hence a proposition, hence there to be some state of affairs that could correspond to non-physical ghosts…
Rob Bensinger (8y): To reject GRTt is to affirm: "Some truths are not expressible in physical-and/or-logical terms." Does that imply that irreducibly nonphysical things exist? I don't quite see why. My initial thought is this: I am much more confident that physicalism is true than that nonphysicalism is inexpressible or meaningless. But if this physicalism I have such faith in entails that nonphysicalism is inexpressible, then either I should be vastly more confident that nonphysicalism is meaningless, or vastly less confident that physicalism is true, or else GRTt does not capture the intuitively very plausible heart of physicalism. Maybe GRTt and GRTm are correct; but that would take a lot of careful argumentation to demonstrate, and I don't want to hold physicalism itself hostage to GRTm. I don't want a disproof of GRTm to overturn the entire project of reductive physicalism; the project does not hang on so thin a thread. So GRTd is just my new attempt to articulate why our broadly naturalistic, broadly scientific world-view isn't wholly predicated on our confidence in the meaninglessness of the assertions of the Other Side. This dispute is over whether, in a physical universe, we can make sense of anyone even being able to talk about anything non-physical.

Four issues complicate any quick attempts to affirm GRTm:

1) Meaning itself is presumably nonfundamental. Without a clear understanding of exactly what is neurologically involved when a brain makes what we call 'representations,' attempts to weigh in on what can and can't be meaningful will be somewhat speculative. And since meaning is nonfundamental, truth is also nonfundamental - really an anthropological and linguistic category more than a metaphysical one; so sacrificing GRTt may not be as devastating as it initially seems.

2) 'Logical pinpointing' complicates our theory of reference. Numbers are abstracted from observed regularities, but we never come into causal contact with numbers themselves; yet we seem to be able…
Alejandro1 (8y): The need for a totality fact is reminiscent of the beginning of Wittgenstein's Tractatus. It is interesting how the same (or at least analogous) problems, arguments and concerns reappear in successive iterations of the Great Reductionist Project.
Rob Bensinger (8y): I don't see anything wrong with this kind of self-reference. We can only explain what generalizations are by asserting generalizations about generalization; but that doesn't undermine generalization itself. GRT would only be an immediate problem for itself if GRT didn't encompass itself.
[anonymous] (8y): Okay, so let's assume that the generalization side of things is not a problem, though I hope you'll grant me that if a generalization about x's is meaningful, propositions expressing x's individually are meaningful. That is, if 'every meaningful proposition can be expressed by physics+logic (eventually)' is meaningful, then 'the proposition "the cat is on the mat" is meaningful' is meaningful. It's this that I'm worried about, and the generalization only indirectly. So:

1) A proposition is meaningful if and only if it is expressible by physics+logic, or merely by logic.
2) If a proposition is expressible by physics+logic, it constrains the possible worlds.
3) If the proposition "the cat is on the mat" is meaningful, and it is expressible by physics+logic, then it constrains the possible worlds.
4) If the proposition "the cat is on the mat" constrains the possible worlds, then the proposition "the proposition 'the cat is on the mat' is meaningful" does not constrain the possible worlds. Namely, no proposition of the form '"XYZ" constrains the possible worlds' itself constrains the possible worlds. So if 'XYZ' constrains the possible worlds, then for every possible world, XYZ is either true of that world or false of that world. But if the proposition '"XYZ" constrains the possible worlds' expresses simply that, namely that for every possible world XYZ is either true or false of that world, then there is no world of which '"XYZ" constrains the possible worlds' is false.
5) The proposition 'the proposition "the cat is on the mat" is meaningful' is not both meaningful and expressible by physics+logic. But it is meaningful, and therefore (as per premise 1) it is expressible by mere logic.
6) Every generalization about a purely logical claim is itself a purely logical claim. (I'm not sure about this premise.)
7) The GRT is a purely logical claim.

I'm thinking EY wants to get off the GRT boat here: I don't think he intends the GRT to be a logical axiom or derivable from logical axioms…
2Rob Bensinger8yI don't think we need this rule. It would make logical truths / tautologies meaningless, inexpressible, or magical. (We shouldn't dive into Wittgensteinian mysticism that readily.) That depends on what you mean by "proposition." The written sentence "the cat is on the mat" could have been ungrammatical or semantically null, like "colorless green ideas sleep furiously." After all, a different linguistic community could have existed in the role of the English language. So our semantic assertion could be ruling out worlds where "the cat is on the mat" is ill-formed. On the other hand, if by "proposition" you mean "the specific meaning of a sentence," then your sentence is really saying "the meaning of 'the cat is on the mat' is a meaning," which is just a special case of the tautology "meanings are meanings." So if we aren't committed to deeming tautologies meaningless in the first place, we won't be committed to deeming this particular tautology meaningless. This looks like a problem of self-reference, but it's really a problem of essence-selection. When we identify something as 'the same thing' across multiple models or possible worlds, we're stipulating an 'essence,' a set of properties providing identity-conditions for an object. Without such a stipulation, we couldn't (per Leibniz's law) identify objects as being 'the same' while they vary in temporal, spatial, or other properties. If we don't include the specific meaning of a sentence in its essence, then we can allow that the 'same' sentence could have had a different meaning, i.e., that there are models in which sentence P does not express the semantic content 'Q.' But if we instead treat the meaning of P as part of what makes a sentence in a given model P, then it is contradictory to allow the possibility that P would lack the meaning 'Q,' just as it would be contradictory to allow the possibility that P could have existed without P existing. 
What's important to keep in mind is that which of these cases ar
[anonymous] (8y): No, I didn't say that constraining possible worlds is a necessary condition on meaning. I said this: This leaves open the possibility of meaningful, non-world-constraining propositions (e.g. tautologies, such as the claims of logic), only they are not physics+logic expressible, but only logic expressible. That's not relevant to my point. I'd be happy to replace it with any proposition we can agree (for the sake of argument) to be meaningful. In fact, my argument will run with an unmeaningful proposition (if such a thing can be said to exist) as well. No, this isn't what I mean. By 'proposition' I mean a sentence, considered independently of its particular manifestation in a language. For example, 'Schnee ist weiss' and 'Snow is white' express the same proposition. Saying and writing 'Schnee ist weiss' express the same proposition. I didn't understand this. Propositions (as opposed to things which express propositions) are not "in" worlds, and nothing of my argument involved identifying anything across multiple worlds. EY's OP stated that in order for an [empirical] claim to be meaningful, it has to constrain possible worlds, e.g. distinguish those worlds in which it is true from those in which it is false. Since a statement about the meaningfulness of propositions doesn't do this (i.e. it's a priori true or false of all possible worlds), it cannot be an empirical claim. So I haven't said anything about essence, nor does any part of my argument require reference to essence. Agreed, it is not a merely logical claim. Given that it is also not an empirical (i.e. a physics+logic) claim, and given my premise (1), which I take EY to hold, we can conclude that the GRT is meaningless.
0Rob Bensinger8yMy mistake. When you said "physics+logic," I thought you were talking about expressing propositions in general with physics and/or logic (as opposed to reducing everything to logic), rather than talking about mixed-reference assertions in particular (as opposed to 'pure' logic). I think you'll need to explain what you mean by "logic"; Eliezer's notion of mixed reference allows that some statements are just physics, without any logical constructs added. What 'Schnee ist weiss' and 'Snow is white' have in common is their meaning, their sense. A proposition is the specific meaning of a declarative sentence, i.e., what it declares. Then they don't exist. By 'the world' I simply mean 'everything that is,' and by 'possible world' I just mean 'how everything-that-is could have been.' The representational content of assertions (i.e., their propositions), even if they somehow exist outside the physical world, still has to be related in particular ways to our utterances, and those relations can vary across physical worlds even if propositions (construed non-physically) cannot. The utterance 'the cat is on the mat' in our world expresses the proposition ⟨the cat is on the mat⟩. But in other worlds, 'the cat is on the mat' could have expressed a different proposition, or no proposition at all. Now let's revisit your (4): A clearer way to put this is: If the proposition p, ⟨the cat is on the mat⟩, varies in truth-value across possible worlds, then the distinct proposition q, ⟨p is meaningful⟩, does not vary in truth-value across possible worlds. But what does it mean to say that a proposition is meaningful? Propositions just are the meaning of assertions. There is no such thing as a 'meaningless proposition.' So we can rephrase q as really saying: ⟨the proposition p exists⟩. In other words, you are claiming that all propositions exist necessarily, that they exist at (or relative to) every possible world, though their truth-value may or may not vary from world to world. 
Once we analyze away the claim that propositions are 'meaningful' as really just the clai
0[anonymous]8yWe have a couple of easy issues to get out of the way. The first is the use of the term 'proposition'. That term is famously ambiguous, and so I'm not attached to using it in one way or another, if I can make myself understood. I'm just trying to use this term (and all my terms) as EY is using them. In this case, I took my cue from this: http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/ [http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/] EY does not seem to intend 'proposition' here to be identical to 'meaning'. At any rate, I'm happy to use whatever term you like, though I wish to discuss the bearers of truth value, and not meanings. I don't want to define the GRT at all. I'm using EY's definition, from the OP: You might want to disagree with EY about this, but for the purposes of my argument I just want to talk about EY's conception of the GRT. Nevertheless, I think EY's conception, and therefore mine, follows from yours, so it may not matter much as long as you accept that everything false should also be expressible by physics+logic (as EY, I believe, wants to maintain). I'd like to get these two issues out of the way before responding to the rest of your interesting post. Let me know what you think.
1Rob Bensinger8yEliezer is not very attentive to the distinction between propositions, sentences (or sentence-types), and utterances (or sentence-tokens). We need not import that ambiguity; it's already caused problems twice, above. An utterance is a specific, spatiotemporally located communication. Two different utterances may be the same sentence if they are expressed in the same way, and they intend the same proposition if they express the same meaning. So:

A) 'Schnee ist weiss.'
B) 'Snow is white.'
C) 'Snow is white.'

There are three utterances above, two distinct sentences (or sentence-types), and only one distinct proposition/meaning. Clearer? EY misspoke. As with the proposition/utterance confusion, my interest is in evaluating the substantive merits or demerits of an Eliezer steel man, not in fixating on his overly lax word choice. Reductionism is falsified if there are true sentences that cannot be reduced, not just if there are meaningful but false ones that cannot be so reduced. It's obvious that EY isn't concerned with the reducibility of false sentences because he doesn't consider it a grave threat, for example, that the sentence "Some properties are not reducible to physics or logic." is meaningful.
1[anonymous]8yWhich one is the proper object of truth-evaluation, and which one is subject to the question 'is it meaningful'? EY's position throughout this sequence, I think, has been that whichever is the proper object of truth-evaluation is also the one about which we can ask 'is it meaningful?' If you don't think these can be the same, then your view differs from EY's substantially, and not just in terminology. How about this? I'll use the term 'gax' for the thing that is a) properly truth-evaluable, and b) subject to the question 'is this meaningful'. Maybe, but the entire sequence is about the question of a criterion for the meaningfulness of gaxes. His motivation may well be to avert the disaster of considering a true gax to be meaningless, but his stated goal throughout the sequence is establishing a criterion for meaningfulness. So I guess I have to ask at this point: other than the fact that you think his argument stands stronger with your version of the GRT, do you have any evidence (stronger than his explicit statement otherwise) that this is EY's actual view?
0Rob Bensinger8yThe proposition/meaning is what we evaluate for truth. Thus utterances sharing the same proposition cannot differ in truth-value. Utterances or utterance-types can be evaluated for meaningfulness. To ask 'Is that utterance meaningful?' is equivalent to asking, for apparent declarative sentences, 'Does that utterance correspond to a proposition/meaning?' You could ask whether sentence-types or -tokens intend propositions (i.e., 'are they meaningful?'), and, if they do intend propositions, whether they are true (i.e., whether the propositions correspond to an obtaining fact). But, judging by how Eliezer uses the word 'proposition,' he doesn't have a specific stance on what we should be evaluating for truth or meaningfulness. He's speaking loosely. I think the sequence is about truth, not meaning. He takes meaning largely for granted, in order to discuss truth-conditions for different classes of sentence. He gave a couple of hints at ways to determine that some utterance is meaningless, but he hasn't at all gone into the meta-semantic project of establishing how utterances acquire their content or how content in the brain gets 'glued' (reference magnetism) to propositions with well-defined truth-conditions. He hasn't said anything about what sorts of objects can and can't be meaningful, or about the meaning of non-assertive utterances, or about how we could design an A.I. with intentionality (cf. the Chinese room), or about what in the world non-empirical statements denote. So I take it that he's mostly interested in truth here, and meaning is just one of the stepping stones in that direction. Hence I don't take his talk of 'propositions' too seriously. It would be a waste of effort to dig other evidence up. Ascribing your version of GRT to Eliezer requires us to theorize that he didn't spend 30 seconds thinking about GRT, since 30 seconds is all it would take to determine its falsehood. 
If that version of GRT is his view, then his view can be dismissed immediatel
0[anonymous]8yOkay, it doesn't look like we can make any progress here, since we cannot agree on what EY's stance is supposed to be. I think you're wrong that EY hasn't said much about the problem of meaning in this sequence. That's been its explicit and continuous subject. The question throughout has been ...and this seems to have been discussed throughout, e.g.: But if you've been reading the same sequence I have, and we still don't agree on that, then we should probably move on. That said... I'd be interested to know what you have in mind here. Why would the 'meaningfulness' version of the GRT be so easy to dismiss? I want, first, to be clear that I've found this conversation very helpful and interesting (as all my conversations with you have been). Second, the above is unfair: understanding EY in terms of what he explicitly and literally says is not 'the most absurd possible interpretation'. It may be the wrong interpretation, but to take him at face value cannot be called absurd.
7Rob Bensinger8yThe colloquial meaning of "proposition" is "an assertion or proposal". The simplest explanation for EY's use of the term is that he was oscillating somewhat between this colloquial sense and its stricter philosophical meaning, "the truth-functional aspect of an assertion". A statement's philosophical proposition is (or is isomorphic to) its meaning, especially inasmuch as its meaning bears on its truth-conditions. Confusion arose because EY spoke of 'meaningless' propositions in the colloquial sense, i.e., meaningless linguistic utterances of a seemingly assertive form. If we misinterpret this as asserting the existence of meaningless propositions in the philosophical sense, then we suddenly lose track of what a 'proposition' even is. The intuitive idea of a proposition is that it's what different sentences that share a meaning have in common; treating propositions as the locus of truth-evaluation allows us to rule out any doubt as to whether "Schnee ist weiss." and "Snow is white." could have different truth-values while having identical meanings. But if we assert that there are also propositions corresponding to meaningless locutions, or that some propositions are non-truth-functional, then it ceases to be clear what is or isn't a 'proposition,' and the term entirely loses its theoretical value. Since Eliezer has made no unequivocal assertion about there being meaningless propositions in the philosophical sense, the simpler and more charitable interpretation is that he was just speaking loosely and informally. My sense is that he's spent a little too much time immersed in positivistic culture, and has borrowed their way of speaking to an extent, even though he rejects and complicates most of their doctrines (e.g., allowing that empirically untestable doctrines can be meaningful). This makes it a little harder to grasp his meaning and purpose at times, but it doesn't weaken his doctrines, charitably construed. 
I just have higher standards than you do for what
1Peterdjones8yRob, you are better at being EY than EY is.
0[anonymous]8ySo we're assuming for the purposes of your argument here that the GRT is about meaningfulness, and we should distinguish this from your (and perhaps EY's) considered view of the GRT. So let's call the 'meaningfulness' version I attributed to EY GRTm, and the one you attribute to him GRTt. We can gloss the difference thusly: the GRTt states that anything true must be expressible in physical+logical, or merely logical terms (tautologies, etc.). The GRTm states that anything true or false must be expressible in physical+logical, or merely logical terms. Your argument appears to be that on the GRTm view, the sentence "some properties are not reducible to physics or logic" would be meaningless rather than false. You take this to be a reductio, because that sentence is clearly meaningful and false. Why do you think that, on the GRTm, this sentence would be meaningless? The GRTm view, along with the GRTt view, allows that false statements can be meaningful. And I see no reason to think that the above sentence couldn't be expressed in physics+logic, or merely logical terms. So I'm not seeing the force of the reductio. You don't argue for the claim that "some properties are not reducible to physics or logic" would be meaningless on the GRTm view, so could you go into some more detail there?
2Rob Bensinger8yOne way to get at what I was saying above is that GRTt asserts that all true statements are analyzable into truth-conditions that are purely physical/logical, while GRTm asserts that all meaningful statements are analyzable into truth-conditions that are purely physical/logical. If we analyze "Some properties are not reducible to physics or logic." into physical/logical truth-conditions, we find that there is no state we can describe on which it is true; so it becomes a logical falsehood, a statement that is false given the empty set of assumptions. Equally, GRTm, if meaningful, is a tautology if we analyze its meaning in terms of its logico-physically expressible truth-conditions; there is no particular state of affairs we can describe in logico-physical terms in which GRTm is false. But perhaps focusing on analysis into truth-conditions isn't the right approach. Shifting to your conception of GRTm and GRTt, can you find any points where Eliezer argues for GRTm? An argument for GRTm might have the following structure:

1. Some sentences seem to assert non-physical, non-logical things.
2. But the non-physicologicality of those things makes those sentences meaningless.
3. So non-physicologicality in general probably makes statements meaningless.

On the other hand, if Eliezer is really trying to endorse GRTt, his arguments will instead look like this:

1. Some sentences seem to be true but non-physicological.
2. But those sentences are either false or analyzable/reducible to purely physicological truths.
3. So non-physicological truths in general are probably expressible purely physicologically.

Notice that the latter argumentative approach is the one he takes in this very article, where he introduces 'The Great Reductionist Project.' This gives us strong reason to favor GRTt as an interpretation over GRTm, even though viewed in isolation some of his language does suggest GRTm. Is there any dialectical evidence in favor of the alternative interpr
1[anonymous]8yHere's my exchange with EY: EY replied: So I replied: And he said: So I'm actually not much less confused. His first reply seems to support GRTt. His second reply (the first word of it anyway) seems to support GRTm. Thoughts?
0Rob Bensinger8yThanks for taking the time to hunt down the facts! I think "Everything true and most meaningful false statements can be expressed this way." is almost completely clear. Unless a person is being deliberately ambiguous, saying "most P are Q" in ordinary English conversation has the implicature "some P aren't Q." I'm not even clear on what the grammar of "That statement is meaningless is false." is, much less the meaning, so I can't comment on that statement. I'm also not clear on how broad "the terms you describe in Logical Pinpointing, Causal Reference, and Mixed Reference" are; he may think that he's sketched meaningfulness criteria somewhere in those articles that are more inclusive than "The Great Reductionist Project" itself allows.
0[anonymous]8yI think that was fairly clear. Each of those articles is explicitly about a form of reference sentences can have: logical, physical, or logicophysical, and his statement of the GRT was just that all meaningful (or in your reading, true) things can be expressed in these ways. But it occurs to me that we can file something away, and tomorrow I'm going to read over your last three or four replies and think about the GRTt whether or not it's EY's view. That is, we can agree that the GRTm view is not a tenable thesis as we understand it.
0Rob Bensinger8yOne possible source of confusion: What is the meaning of the qualifier "physical"? "Physical," "causal," "verifiable," and "taboo-able/analyzable" all have different senses, and it's possible that for some of them Eliezer is more willing to allow meaningful falsehoods than for others.
0Rob Bensinger8yYeah. I'll re-read his posts, too. In all likelihood I didn't even think about the ambiguity of some of his statements, because I was interpreting everything in light of my pet theory that he subscribes to GRTt. I think he does subscribe to GRTt, but I may have missed some important positivistic views of his if I was only focusing on the project of his he likes. Some of the statements you cited where he discusses 'meaning' do create a tension with GRTt.
0Eliezer Yudkowsky8yMy reply to this conversation so far is at: * http://lesswrong.com/lw/frz/mixed_reference_the_great_reductionist_project/8067 [http://lesswrong.com/lw/frz/mixed_reference_the_great_reductionist_project/8067]
1[anonymous]8yYou'd just about convinced me, until I reread the OP and found it consistently and unequivocally discussing the question of meaningfulness. So before we go on, I'm just going to PM Eliezer and ask him what he meant. I'll let you know what he says if he replies.

From the logic point of view, counterfactuals are unproblematic, in that I can prove consistency of my favorite counterfactual logic by exhibiting a model. Then as far as a logician is concerned, we are done: our counterfactual worlds live in the mathematical structure of the exhibited model.


From the computer science point of view a little more is required, but as luck would have it, we can implement counterfactuals in some causal models. If your causal model is an actual circuit, then not only is it perfectly meaningful to ask "the output of the circuit is 1, what would be the output if I changed gate_0212 from OR to AND?" but it is possible to implement the counterfactual directly, and check. This is because we know enough about the causal model to ensure counterfactual invariance (e.g. other gates do not change). People use this kind of counterfactual reasoning to debug programs and circuits all the time! So from the "comp. sci" point of view, counterfactuals are unproblematic. The counterfactual universe "exists" in the operational sense of us having an effective procedure to get us there.
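The "implement the counterfactual directly, and check" procedure can be sketched in a few lines. Everything here (the gate name `gate_0212`, the wiring, the input values) is invented for illustration; the point is only that the surgery is an effective procedure:

```python
# Toy two-gate circuit; "counterfactual surgery" is just re-running it
# with one gate swapped. The gate name and wiring are made up.

def run_circuit(gate_0212):
    """Run the circuit with gate_0212 set to either 'OR' or 'AND'."""
    a, b, c = 1, 0, 1                         # fixed input wires
    g1 = (a | b) if gate_0212 == 'OR' else (a & b)
    return g1 & c                             # final output gate

factual = run_circuit('OR')          # the circuit as actually built
counterfactual = run_circuit('AND')  # swap the gate and re-run

print(factual, counterfactual)  # 1 0
```

Because we control every gate, counterfactual invariance (nothing else changes) holds by construction, which is exactly why debugging works this way.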


The problem arises when you are trying to deal wit...

Further to my other comment, how would one define a counterfactual in the Game of Life? Surely we should be able to analyze this simple case first if we want to talk about counterfactuals in the "real world"?

6faul_sname8ySay we have a blank grid. It would be reasonable to say "if this blank grid had a glider, the glider would move up and left" even if there is no actual glider on the grid. You can still make a mental model of what would happen in a changed grid, even if that grid isn't instantiated. I chose the example of a glider to show that you don't actually have to run a step-by-step simulation of the grid to predict behavior and thus emphasize that a counterfactual is a mental model, not an actual universe. Counterfactuals require a universe and a model that is isomorphic to that universe in some way, but the isomorphism doesn't have to be perfect.
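The glider claim above can be checked against an actual step-by-step simulation (even though, as the comment notes, you don't need one to predict the behavior). The step function and the coordinate conventions below are my own sketch, not anything from the comment:

```python
# Minimal Game of Life over an unbounded grid of live (row, col) cells.
from collections import Counter

def step(live):
    """One generation: count live neighbors, apply B3/S23."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0))
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# One phase of the standard glider:
#   . O .
#   . . O
#   O O O
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears one cell down and to the right:
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

The shortcut the comment describes is exactly the final assertion: you can state the 4-generation displacement without ever running the loop.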
1shminux8yI like this example, and it counts as a counterfactual in our universe, where there is no actual glider drawn on an actual blank grid, but I am not sure it would count as a counterfactual in a GoL universe, unless you define such a universe to contain only a single blank canvas and nothing else.
1faul_sname8ySo what you're saying is that if we did define such a universe to contain only a single blank canvas and nothing else, our internal model of a grid with a glider would be a good example of a counterfactual? (thus demonstrating that counterfactuals can, themselves, contain counterfactuals).
0shminux8yNice one. I am trying to nail the definition of a counterfactual in a GoL universe. Clearly, if you define this universe as a blank canvas, every game is a counterfactual. However, if the GoL universe is a collection of all possible games (hello, Tegmark!!), then there are no counterfactuals of the type you describe in it. However, what army1987 suggested [http://lesswrong.com/lw/frz/mixed_reference_the_great_reductionist_project/7zlx] would probably still count as a counterfactual: given a realization of a game and a certain position in it, find whether another realization, with an extra glider, converges to the same position. The counterfactualness there comes from privileging one game from the lot, not from mapping it to our universe.
3[anonymous]8yYou go back to an earlier state of the grid, erase a glider, and resume the simulation from there.
3shminux8yA few thoughts on the matter. What you suggest is one type of a counterfactual: change the state. Erasing a glider is, of course, illegal under the rules of the game, so to make it a legal game, you have to trace it backwards from the new state, or else you are not talking about the GoL anymore. This creates an interesting aside. Like real life, the Game of Life is not well-posed [http://en.wikipedia.org/wiki/Well-posed_problem] when run backwards: infinitely many configurations are legal just one simulation step back from a given one. This is because objects in the Game can die without a trace, and so can appear without a cause when run backward. This is similar to the way the world appears to us macroscopically: there is no way to tell the original shape of a drop of ink after it is dissolved in a bucket of water. This situation is known as the reversibility problem [http://en.wikipedia.org/wiki/Reversible_cellular_automaton] in cellular automata. This freedom to create life out of nothing when simulating GoL backwards does not help us, however, in constructing the same starting configuration as the one with the glider not erased, because GoL is deterministic in the forward direction, and you cannot arrive at two different configurations when starting from the same one. But it does let us answer the following hypothetical: would adding a glider have made a difference in the future? I.e. would the glider in question collide with another object and disintegrate without a trace after several turns? This "butterfly effect" investigation is trivial in the GoL and similar irreversible automata with simple rules, but it is quite suggestive if we consider the original question: We can liken Oswald to your glider and see if removing it from the simulation ("counterfactual surgery") still results in the same final configuration (JFK shot). 
If so, we can declare the above statement to be "true", though not in the same sense as "Oswald shot JFK" is true, but in the
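The counterfactual surgery just described can be run as a toy experiment: delete a pattern from the initial state, run both worlds forward, and compare final configurations. The block/glider layout here is invented, and in this particular setup the removed glider does leave a trace (it is still flying around), while the far-away block (our stand-in for the rest of the world) is unaffected either way:

```python
# "Counterfactual surgery" on a Game of Life toy universe.
from collections import Counter

def step(live):
    """One generation over a set of live (row, col) cells (B3/S23)."""
    counts = Counter((r + dr, c + dc) for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def run(live, generations):
    for _ in range(generations):
        live = step(live)
    return live

block  = {(10, 10), (10, 11), (11, 10), (11, 11)}  # still life, far away
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}  # the pattern we erase

factual        = run(block | glider, 8)
counterfactual = run(block, 8)   # surgery: the glider never existed

assert block <= factual and block <= counterfactual  # block survives in both
assert factual != counterfactual  # the erased glider's influence persists
```

Whether the two runs converge is exactly the question the comment poses; here they do not, because nothing ever destroys the glider.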

I am finding the same problem with all articles in this sequence that I find with the explanation of Bayes' Theorem on Yudkowsky's main site. There are parts that seem so blindingly obvious they don't bear mentioning.

Yet soon thereafter, all of a sudden, I find myself completely lost. I can understand parts of the text separately, but can't link them together. I don't see where it comes from, where it's going, what problems it's addressing. I find it especially difficult to relate the illustrations to what's going on in the text.

I seldom have had this problem with the blog posts from the classical sequences (with some exceptions, such as his quantum physics sequence, which left me similarly confused).

Am I the only one who feels this way?

EDIT: upon reflection, this phenomenon, of feeling like there was a sudden, imperceptible jump from the boringly obvious to the utterly confusing, I've already experienced it before: in college, many lessons would follow this pattern, and it would take intensive study to figure out the steps the professor merrily skipped between what are, to them, two categories of the set of blindingly obvious things they already know and see no need to explain again. Maybe there's some sort of pattern there?

This is a problem known as "bad writing" which I continue to struggle with, even after many years. Can you list the first part where you felt lost? Somewhere between there and the previous part, I must have skipped something.

I do hope people appreciate that all the "blindingly obvious" parts are parts where (at least in my guesstimation, and often in my actual experience) somebody else would otherwise get lost. The "obvious" is not the same for all people.

3Ritalin8yI would tell you about it, but now I'm afraid I'm distracting you from the latest chapter in Methods, which is kind of overdue and eagerly expected (and half of a NaNoWriMo novel's wordcount? what exactly have you been up to?). I swear I'll take the time to go through the sequence and identify and point out the points at which I got lost, but first I'll wait for you to publish that chapter. And yes, I know that one person's obvious is another's opaque; after all, that is the very root of this very problem. @Downvoters: I am genuinely sorry; I'm just being honest here. This is like being addicted to a drug and, after months of waiting, hearing that the next batch is imminent and huge. I'm sort of fretting right now, and I'm probably not the only one.
2NancyLebovitz8yDid you get back to Eliezer about what you found difficult in Mixed Reference?
0Ritalin8yI had forgotten. Thanks for reminding me.
1lukeprog8yI'll be linking to this comment pretty often, I think, to reply to commenters on my own posts.
0Ritalin8ySo, first of all, I'm going to complain that doing this was a pain in the neck, and that commenting/editing would be much easier on Gdocs or on some similar application. In fact, I used Gdocs to write this, because doing so on the LW interface would have been intolerable. Still, there you are; I suppose you mean an “elementary particle”? Took me a second to get it; it’s not the standard expression. I found this frankly misleading. When you say “mental image”, I think of an actual visualization, which is not a category a “low-level physical state” can belong to (or be “inside of”). “Mental configuration” or “mental arrangement” might be more appropriate, and “corresponding” or “not corresponding” sound more acceptable. However, I’d rephrase the entire thing differently, as “different low-level physical states whose observation would result in a mental image of some apples on the table or a kitten on the table”. The picture underneath is confusing because the previous paragraph makes us expect a “brain” or a “head” “visualizing” the “high states”, not the “high states” being somehow (one is a function of the other, a correspondence? identification? belonging?) linked to the “this actual universe in all its low-level glory” picture. I also find the choice of fuzziness around the edges of picture fragments, and the use of dotted lines, to be rather jarring. Is it supposed to be cute? Because what it conveys to me is “we’re not sure” and “the concept is unclear” and “the correspondence is distant or uncertain”, and that contrasts strongly with the actual text, which is much more rigorous. At the very least, you may want the line from “the Universe” to “all possible worlds” to end in a thicker dot, and to distort the shape of “all the possible worlds that would result in “a bunch of apples on the table” (that’s what the dotted circle means, right?) 
to be bigger and more potato-shaped or something, as is traditional to denote “abstract set of stuff whose shape doesn’t matt
0BerryPick68yWhen reading your work, I often share the feeling that Ritalin just described. In this particular instance, I was with you up until you started talking about the Born probabilities and then I just felt totally lost.
3gwern8yAh yes. Have you read about 'inferential distance' yet? :)
5Ritalin8yYes, I knew about them. I try to shorten them in everything I do, from my vocabulary register [http://home.comcast.net/~garbl/stylemanual/words.htm] to the concepts I use, which I try to make as rent-paying and empirical as possible. It's heavier work than I foresaw. This has moved me from "impossible-to-understand nerd who talks down to you from an impenetrable ivory tower" to "that creepy guy who talks in punches and has strange ideas that make sense". Or, if you will, from a Sheldon Cooper to a coolness-impaired Tyler Durden. Socially, it wasn't a big gain.
2[anonymous]8yThat's more or less how I felt about Penrose's The Road to Reality. The great thing about talking with someone in person (or at least, in real-time one-to-one conversations) is that you can first assess how large the inferential distance is, e.g. “What are you working on?” “Cosmic rays. Do you know what cosmic rays are?” “No.” “Do you know what subatomic particles are?” “No.” “Do you know what an atom is?” “Yes.”
1Ritalin8yYou just have to hope they won't Wheatley their way around your questions and try to feign understanding things they don't, treating knowledge like a status game. That can really put a damper on meaningful communication.
1[anonymous]8yI don't think that ever happened to me -- at worst, they incorrectly believed that the understanding they had got from popularizations was accurate. But pretty much everybody at some point admits “I wish I could understand everything of that, but that sounds cool”, except people who actually understand (as evidenced by the fact that they ask questions too relevant for them to be just parroting stuff to hide ignorance). (I guess the kind of people who treat everything like a status game would consider knowledge about sciency topics to be nerdy and therefore uncool.)
3Qiaochu_Yuan8yOne way to treat knowledge like a status game is to be a "science fan." This is a game you play with other "science fans," and you win by knowing more "mind-blowing facts" about science than other people. It is popular on Quora.
1MrMind8yAbsolutely not, it's quite a common feeling among mathematicians :)
3Ritalin8yAh, yes, the mathematician's double take [http://tvtropes.org/pmwiki/pmwiki.php/Main/DoubleTake]. One should be wary of those, especially at a high level; when an elder mathematician wants to skip inferential steps for the sake of expediency, there's a chance that "then a miracle occurs [http://blog.stackoverflow.com/wp-content/uploads/then-a-miracle-occurs-cartoon.png] " is somewhere in that mess of a blackboard. In fact, the whole point of having a younger chevruta [http://lesswrong.com/lw/6j1/find_yourself_a_worthy_opponent_a_chavruta/] is so that they can point out that kind of details the bigger, more inferentially-distant minds might accidentally gloss over. They're like the great writer's spell-checker. Or like the comment section for Yudkowsky's blog posts. Joking aside, I was actually wondering if others here felt the same way as I about EY's latest sequence of posts.

Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of time, one molecule of velocity. Oh wait...

9cousin_it8yYeah, with "atoms of threeness" Eliezer seems to have narrowly missed an interesting point. Multiplying apples to get square apples makes no sense, but if we'd divided them instead, we'd notice that the universe contains dimensionless constants [http://en.wikipedia.org/wiki/Dimensionless_physical_constant] - if the universe can be said to "contain" anything at all, like atoms or velocity.
2Chris_Leong2yThis is a really good point, I'm disappointed that he didn't respond to it.

Incidentally, I'd give a probability of about 0.1 to the statement "If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would have" - there have been many people who have tried to assassinate Presidents.

I was going to challenge you to a wager, but then I realized that (1) I agree with your estimate, and (2) I don't know how we'd settle a wager about a counterfactual.

2shminux8yI guess this is my main issue with the whole sequence. No way to settle a wager means in my mind that there is no way to ascertain the truth of a statement, no matter how much physics, math and logic you throw at it. EDIT: Trying to steel-man the game of counterfactuals: One way to settle the wager would be to run a simulation of the world as is, watch the assassination happen in every run, then do a tiny change which leads to no measurable large-scale effects (no-butterflies condition), except "Lee Harvey Oswald hadn't shot John F. Kennedy". But what does "Lee Harvey Oswald hadn't shot John F. Kennedy" mean, exactly? He missed? Kennedy took a different route? Oswald grew up to be an upstanding citizen? One can imagine a whole spectrum of possible counterfactual Kennedy-lives (KL) worlds, some of which are very similar to ours up to the day of the shooting, and others not so much. What properties of this spectrum would constitute a winning wager? Would you go for "every KL world has to be otherwise indistinguishable (by what criteria? Media headlines?) from ours"? Or "there is at least one KL world like that"? Or something in between? Or something totally different? Until one drills down and settles the definition of a counterfactual, probably in a way similar to the above, I see no way to meaningfully discuss the issue.
Eliezer Yudkowsky (9 points, 8y): That's the point of this post. Only causal models can be settled. Counterfactuals cannot be observed, and can only be derived as logical constructs via axiomatic specification from the causal models which can be observed.
Bugmaster (0 points, 8y): As faul_sname said below, one way to settle the wager -- and I mean an actual wager in our current world, where we don't have access to Oracle AIs -- would be to aggregate historical data about presidential assassinations in general, and assassination attempts on Kennedys in particular, and build a model out of them. We could then say, "Ok, there's an 82% chance that, in the absence of Oswald, someone would've tried to assassinate Kennedy, and there's a 63% chance that this attempt would've succeeded, so there's about a 52% chance that someone would've killed Kennedy after all, and thus you owe me about half of the prize money".
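Bugmaster's back-of-the-envelope arithmetic checks out; here is a short Python sketch of it. The percentages are the comment's illustrative figures, not real historical estimates, and the two events are assumed independent:

```python
# Illustrative figures from the comment above, not real historical data.
p_attempt = 0.82  # P(someone else attempts an assassination | no Oswald)
p_success = 0.63  # P(the attempt succeeds | an attempt is made)

# Assuming the attempt and its success are independent events:
p_killed = p_attempt * p_success
print(f"{p_killed:.0%}")  # prints "52%"
```

This is just the product rule for independent events; a serious model would, as the surrounding thread notes, have to justify both numbers and the independence assumption.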
TsviBT (2 points, 8y): ...which would be settling a wager about the causal model that you built. The closer your causal model comes to accurately reflecting the "counterfactual world" that it is supposed to refer or correspond to, the more it actually instantiates that world. (Except that by performing counterfactual surgery, you have inserted yourself into the causal mini-universe that you've built.) The "counterfactual" stops being counter, and starts being factual.
BerryPick6 (1 point, 8y): Thanks to this comment something in my brain just made an audible 'click', and I understand this current sequence much better. Thank you.
shminux (0 points, 8y): How do you know how close it is? And what's the difference between a counterfactual world and a model of it?
TsviBT (0 points, 8y): TL;DR: skip to the last sentence.

A counterfactual world doesn't exist (I think?), whereas your model does. If your model is a full-blown Planck-scale-detailed simulation of a universe, then it is a physical thing which fits very well your logical description of a counterfactual world. E.g., if you make a perfect simulation of a universe with the same laws of physics as ours, but where you surgically alter it so that Oswald misses, then you have built an "accurate" model of that counterfactual - that is, one of the many models that satisfy the (quasi-)logical description, "Everything is the same except Oswald didn't kill Kennedy". A model is closer to the counterfactual when the model better satisfies the conditions of the counterfactual.

A statistical model of the sort we use today can be very effective in limited domains, but it is a million miles away from actually satisfying the conditions of a counterfactual universe. For example, consider Eliezer's diagram for the "Oswald didn't kill Kennedy" model. It uses the impressive, modern math of conditional probability - but it has five nodes. I would venture to guess that our universe has more than five nodes, so the model does not fit the description "a great big causal universe in all its glory, but where Oswald didn't kill Kennedy".

More realistically: Our model might have millions of “neurons” in a net, or millions of nodes in a PGM, or millions of feature parameters for regression... but that is nowhere near the complexity contained in .1% of one millionth of the pinky toe of the person we are supposedly modelling. It works out nicely for us because we only want to ask our model a few high-level questions, and because we snuck in a whole bunch of computation, e.g. when we used our visual cortex to read the instrument that measures the patient’s blood pressure. But our model is not accurate in an absolute sense. This last example is a model of another physical system. The Oswald example is supposed to model a ...
Bugmaster (0 points, 8y): Sorry, I still don't think I understand your objection. Let's say that, instead of cancer insurance, our imaginary insurance company was selling assassination insurance. A politician would come to us; we'd feed what we know about him into our model; and we'd quote him a price based on the probability that he'd be assassinated. Are you saying that such a feat cannot realistically be accomplished? If so, what's the difference between this and cancer insurance? After all, "how likely is this guy to get killed" is also a "high-level question", just as "how likely is this guy to get cancer" -- isn't it?
TsviBT (0 points, 8y): Yeah, we are definitely talking past each other.

1. Someone could realistically predict whether or not you will be assassinated, with high confidence, using (perhaps much larger) versions of modern statistical computations.
2. To do so, they would not need to construct anything so elaborate as a computation that constitutes a chunk of a full-blown causal universe. They could ignore quarks and such, and still be pretty accurate.
3. Such a model would not refer to a real thing, called a “counterfactual world”, which is a causal universe like ours but with some changes. Such a thing doesn't exist anywhere.
4. ...unless we make it exist by performing a computation with all the causality-structure of our universe, but which has tweaks according to what we are testing. This is what I meant by a more accurate model.
Bugmaster (-1 points, 8y): All right, that was much clearer, thanks! But then, why do we care about a "counterfactual world" at all? My impression was that Eliezer claimed that we need a counterfactual world in order to evaluate counterfactuals. But I argue that this is not true; for example, we could ask our model "what are my chances of getting cancer?" just as easily as "what are my chances of getting cancer if I stop smoking right now?", and get useful answers back -- without constructing any alternate realities. So why do we need to worry about a fully-realized counterfactual universe?
TsviBT (3 points, 8y): Exactly. We don't. There are only real models, and logical descriptions of models. Some of those descriptions are of the form "our universe, but with tweak X", which are "counterfactuals". The problem is that when our brains do counterfactual modeling, it feels very similar to when we are just doing actual-world modeling. Hence the sensation that there is some actual world which is like the counterfactual-type model we are using.
Bugmaster (1 point, 8y): My impression was that Eliezer went much farther than that, and claimed that in order to do counterfactual modeling at all, we'd have to create an entire counterfactual world, or else our models won't make sense. This is different from saying, "our brains don't work right, so we've got to watch out for that".
TsviBT (2 points, 8y): I definitely didn't understand him to be saying that. If that's what he meant then I'd disagree.
Bugmaster (0 points, 8y): I'm not sure I understand this statement. Forget Oswald for a moment, and let's imagine we're working at an insurance company. A person comes to us, and says, "sell me some cancer insurance". This person currently does not have cancer, but there's a chance that he could develop cancer in the future (let's pretend there's only one type of cancer in the world, just for simplicity). We collect some medical data from the person, feed it into our statistical model (which has been trained on a large number of past cases), and it tells us, "there's a 52% chance this person will develop cancer in the next 20 years". Now we can quote him a reasonable price. How is this situation different from the "killing Kennedy" scenario? We are still talking about a counterfactual, since Kennedy is alive and our applicant is cancer-free.
TsviBT (0 points, 8y): See my reply above, specifically the last paragraph.
faul_sname (3 points, 8y): You don't have to construct the model at that level of detail to meaningfully discuss the issue. Just look at the base rate of presidential assassinations and update that to cover the large differences with the Kennedy case. If you're trying to simulate a universe without Lee Harvey Oswald, you're probably overfitting, particularly if you're a human. Your internal model of how Kennedy was actually shot doesn't contain a high-fidelity copy of the world in which Oswald grew up and went through a series of mental states that culminated with him shooting Kennedy (or at least, you're not simulating each mental state to come to the outcome). Instead, you have a model of the world in which Lee Harvey Oswald shoots JFK, and otherwise doesn't really factor into your model. While removing Oswald from the real world would have large effects, removing him from your model doesn't.

I think that when you ask "what are the chances that Kennedy would have been shot if Oswald hadn't done it?" you're probably asking something along the lines of "If I build the best model I can of the world surrounding that event, and remove Oswald, does the model show Kennedy getting shot, and if so, with what confidence?" So in order to settle the wager, you would have to construct a model of the world that both of you agreed made good enough predictions (probably by giving it information about the state of society at various times and seeing how often it predicts a presidential assassination) and seeing what answer it spits out.

There might be a problem of insufficient data, but it seems pretty clear to me that when we talk about counterfactuals, we're talking about models of the world that we alter, not actual, existing worlds. If many worlds was false and there was only one, fully deterministic universe (that contained humans), we would still talk about counterfactuals. Unless I'm missing something obvious.
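The base-rate approach faul_sname describes can be sketched in a couple of lines of Python. The counts below are rough illustrative numbers (four US presidents assassinated out of roughly forty-five who have served), not a serious model, and the "update on the specifics of the Kennedy case" step is deliberately omitted:

```python
# Rough, illustrative counts -- not a serious statistical model.
presidents = 45    # approximate number of US presidents to date
assassinated = 4   # Lincoln, Garfield, McKinley, Kennedy

# Naive per-president base rate, before updating on anything
# specific to the Kennedy case (era, security, motive, etc.).
base_rate = assassinated / presidents
print(round(base_rate, 2))  # prints 0.09
```

Notably, this crude base rate lands near the 0.1 probability offered earlier in the thread, which is the point: most of the work is done by the reference class, not by simulating any counterfactual world.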
fubarobfusco (2 points, 8y): Well, my model has Oswald in the Marines with Kerry Thornley (aka Lord Omar, of Discordian legend) and a counterfactual in which a slightly more tripped-out conversation between the two would have led to Oswald becoming an anarchist instead of a Marxist; thus preventing his defection to the Soviet Union ....
TraderJoe (1 point, 8y): And many people who have tried to assassinate Kennedys...

three kinds of stuff, actual worlds, logical validities, and counterfactuals, and logical validities.

This list contains duplicate elements.

(I have not yet encountered a claim to have finished Reducing anthropics which (a) ends up with only two kinds of stuff and (b) does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant, given that if all this talk of 'degree of realness' is nonsense, there is no way to say that physically-lawful copies of me are more common than Boltzmann brain copies of me.)

I think it was Vladimir Nesov who said something like the following: Anticipation is just what it feels like when your brain has decided...

bryjnar (0 points, 8y): You've just redefined "expect" so that the problem goes away. For sure, there's no point practically worrying about outcomes that you can't do anything about, but that doesn't mean you shouldn't expect them. If you want to argue that we should use a different notion than "expect", or that the practical considerations show that the Boltzmann-brain argument isn't a problem, that's fine, but this has all the benefits of theft over honest toil.
Tyrrell_McAllister (1 point, 8y): I don't believe that there is any redefinition going on here. I intend to use "expect" in exactly the usual sense, which I take also to be the sense that Eliezer was using when he wrote "I have not yet encountered a claim to have finished Reducing anthropics which ... does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant". Both he and I are referring to a particular mental activity, namely the activity that is normally called "expecting". With regard to this very same activity, I am addressing the question of whether one "should expect [one's] experiences to dissolve into Boltzmann-brain chaos in the next instant". (Emphasis added.)

The potentially controversial claim in my argument is not the definition of "expect". That definition is supposed to be utterly standard. The controversial claim is about when one ought to expect. The "standard view" is that one ought to expect an event just when that event has a probability of happening that is greater than some threshold. To argue against this view, I am pointing to the fact that expecting an event is a certain mental act. Since it is an act, a proper justification for doing it should take into account utilities as well as probabilities. My claim is that, once one takes the relevant utilities into account, one easily sees that one shouldn't expect oneself to dissolve into Boltzmann-brain chaos, even if that dissolution is overwhelmingly likely to happen.
bryjnar (0 points, 8y): Ah, okay. You're quite right then, I misdiagnosed what you were trying to do. I still think it's wrong, though. In particular, I don't think the "should" in that sentence works the way you're claiming that it does. In context, "Should I expect X?" seems equivalent to "Would I be correct in expecting X?" or somesuch, rather than "Ought I (practically/morally) to expect X?". English is not so well-behaved as that. I guess it kind of looks like perhaps it's an epistemic-rationality "should", but I'm not sure it's even that.
Tyrrell_McAllister (0 points, 8y): Then my answer would be: maybe you would be correct. But why would this imply that anthropics needs any additional "reducing", or that something more than logic + physics is needed? It all still adds up to normality. You still make all the same decisions about what you should work to protect or prevent, what you should think about and try to bring about, etc. All the same things need to be done with exactly the same urgency. Your allegedly impending dissolution doesn't change any of this.
bryjnar (0 points, 8y): Right. So, as I said, you are counselling that "anthropics" is practically not a problem, as even if there is a sense of "expect" in which it would be correct to expect the Boltzmann-brain scenario, this is not worth worrying about because it will not affect our decisions. That's a perfectly reasonable thing to say, but it's not actually addressing the question of getting anthropics right, and it's misleading to present it as such. You're just saying that we shouldn't care about this particular bit of anthropics. Doesn't mean that I wouldn't be correct (or not) to expect my impending dissolution.
Tyrrell_McAllister (1 point, 8y): I would have been "addressing the question of getting anthropics right" if I had talked about what the "I" in "I will dissolve" means, or about how I should go about assigning a probability to that indexical-laden proposition. I don't think that I presented myself as doing that. I'm also not saying that I've solved these problems, or that we shouldn't work towards a general theory of anthropics that answers them.

The uselessness of anticipating that you will be a Boltzmann brain is particular to Boltzmann-brain scenarios. It is not a feature of anthropic problems in general. The Boltzmann brain is, by hypothesis, powerless to do anything to change its circumstances. That is what makes anticipating the scenario pointless. Most anthropic scenarios aren't like this, and so it is much more reasonable to wonder how you should allocate "anticipation" to them. The question of whether indexicals like "I" should play a role in how we allocate our anticipation is open as far as I know.

My point was this. Eliezer seemed to be saying something like, "If a theory of anthropics reduces anthropics to physics+logic, then great. But if the theory does that at the cost of saying that I am probably a Boltzmann brain, then I consider that to be too high a price to pay. You're going to have to work harder than that to convince me that I'm really and truly probably a Boltzmann brain." I am saying that, even if a theory of anthropics says that "I am probably a Boltzmann brain" (where the theory explains what that "I" means), that is not a problem for the theory. If the theory is otherwise unproblematic, then I see no problem at all.
Vaniver (-1 points, 8y): That... sounds like success to me. Did you want him to redefine it so the problem stuck around?
bryjnar (1 point, 8y): It sounds like solving a different problem. Like I said, it's fine to claim that we should use a different notion than the one that we do, but changing it by fiat and then claiming there's no problem is not doing that.

I realize this is a small thing, but this essay appears to use "fact" to mean "a statement sufficiently well-formed to be either true or false" rather than "a statement which is true" and that kept distracting me from its actual point. Can some other word be found?

Tyrrell_McAllister (0 points, 8y): Has the post been edited since you made this comment? I couldn't find any examples of this.
dspeyer (0 points, 8y):
Tyrrell_McAllister (0 points, 8y): He is saying that that is a fact, but not merely because it is "a statement sufficiently well-formed to be either true or false". For example, he would say that "If Oswald hadn't shot Kennedy, somebody else would've" is not a fact, even though it is equally well formed. The point of the article is to explain how some counterfactuals can be facts while others are not.

I like how you frame this discussion. At this stage, I'd like to see more LessWrongers spending sleepless nights pondering how we want to renegotiate our correspondence theory to keep our theory and jargon as clean and useful as possible. Calling ordinary assertions 'true/false' and logical ones 'valid/invalid' isn't satisfactory. Not only does it problematize mixed-reference cases, but it also confusingly conflates a property of structured groups of assertions (arguments, proofs, etc.) with a property of individual assertions.

Our prototype for 'truth' is ...

khafra (1 point, 8y): What's missing from this part, to keep it from adequately addressing the question (combined with the earlier post on the nature of logic, http://lesswrong.com/lw/f4e/logical_pinpointing/)?
Rob Bensinger (5 points, 8y):

* I. 'Valid' is a bad word for what Eliezer's talking about, because validity is a property of arguments, proofs, inferences, not of individual assertions. For now, I'll call Eliezer's validity 'derivability' or 'provability.'
* II. Strictly speaking, is logical derivability a kind of truth, or is it an alternative to truth that sometimes gets confused with it? Eliezer seems to alternate between these two views.
* III. Are some statements simply 'valid' / 'derivable'? Or is validity/derivability always relative to a set of inference rules (and, in some cases, axioms or assumptions)?
* IV. If derivability is always relativized in this way, then what does it mean to say that "The product of the apple numbers is six" is true in virtue of a mixture of physical reality and logical derivability? A different set of logical or mathematical rules would have yielded a different result. 'Logical pinpointing' is meant to solve this: there is a unique imaginary, fictional, mathematical, etc. image that we're reasoning with in every case, and 'intuitionistic real numbers' simply aren't the same objects as 'conventional real numbers,' and there simply is no such thing as 'the real numbers' absent the aforementioned specifications. Should we say, then, that truth is bivalent, whereas derivability/validity is trivalent? Here's an example of where this sort of reasoning will lead us: First, there simply isn't any such thing as a 'continuum hypothesis;' we must exhaustively specify a set of inference rules and axioms/assumptions before we can even entertain a discrete logical claim, much less evaluate that claim's derivability. Once we have fully pinpointed the expression, say as the 'conventional continuum hypothesis' or the 'consistent Zermelo-Frankel continuum hypothesis,' we then arrive at the conclusion that the hypothesis is not true (since it is logical and not empirical); nor is it false; nor i...
torekp (0 points, 8y): I especially like point/question V. If we abandon the correspondence theory of truth, can we duck the questions? Because answering them seems like a lot of work, and like Dilbert and his office mates, I love the sweet smell of unnecessary work.

Mainstream status:

AFAIK, the proposition that "Logical and physical reference together comprise the meaning of any meaningful statement" is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven't elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.

An important related idea I haven't gone into here is the idea that the physical and logical references should be effective or formal, which has been in the job description since, if I...

AFAIK, the proposition that "Logical and physical reference together comprise the meaning of any meaningful statement" is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven't elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.

This seems awfully similar to Hume's fork:

If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.

  • David Hume, An Enquiry Concerning Human Understanding (1748)

As Mardonius says, 20th century logical empiricism (also called logical positivism or neopositivism) is basically the same idea with "abstract reasoning" fleshed out as "tautologies in formal systems" and "experimental reasoning" fleshed out initially as "statements about sensory experiences". So the neopositivists' original plan was to analyz...

Eliezer Yudkowsky (4 points, 8y): Is there a good statement of the "mature neopositivist" / Carnap's position?
Alejandro1 (2 points, 8y): There is no article on Carnap in the SEP, and I couldn't find a clear statement in the Vienna Circle article, but there is a fairly good one in the Neurath article (http://plato.stanford.edu/entries/neurath/#NeuPlaLogEmpPhyAntFouHolNatExtPra): The mature Carnap position seems to be, then, not to reduce everything to logic + fundamental physics (electrons/wavefunctions/etc.), as perhaps you thought I had implied, but to reduce everything to logic + observational physics (statements like "Voltmeter reading = 10 volts"). Theoretical sentences about electrons and such are to be reduced (in some sense that varied with different formulations) to sentences of observational physics. This does not mean that for Carnap electrons are not "real"; as I said before, reductionism was conceived as a linguistic proposal, not an ontological thesis.
Eliezer Yudkowsky (0 points, 8y): Experience + logic != physics + logic > causality + logic
shminux (-1 points, 8y): Experience + models = reality
Rob Bensinger (1 point, 8y): Cucumbers are neither experiences nor models. Yet I'm pretty sure reality includes at least one cucumber.
shminux (1 point, 8y): Cucumbers are both experiences and models, actually. You experience its sight, texture and taste, you model this as a green vegetable with certain properties which predict and constrain your similar future experiences. Numbers, by comparison, are pure models. That's why people are often confused about whether they "exist" or not.
[anonymous] (0 points, 8y): Are experiences themselves models? If not, are you endorsing the view that qualia are fundamental?
shminux (1 point, 8y): Experiences are, of course, themselves a multi-layer combination of models and inputs, and at some point you have to stop, but qualia seem to be at too high a level, given that they appear to be reducible to physiology in most brain models.
Rob Bensinger (0 points, 8y):

1. How do you know models exist, and aren't just experiences of a certain sort?
2. How do you know that unexperienced, unmodeled cucumbers don't exist? How do you know there was no physical universe prior to the existence of experiencers and modelers?
NancyLebovitz (3 points, 8y): I've played with the idea that there is nothing but experience (Zen and the Art of Motorcycle Maintenance was rather convincing). However, it then becomes surprising that my experience generally behaves as though I'm living in a stable universe with such things as previously unexperienced cucumbers showing up at plausible times.
Rob Bensinger (1 point, 8y): I think there are three broadly principled and internally consistent epistemological stances: radical skepticism, solipsism, and realism. Radical skepticism is principled because it simply demands extremely high standards before it will assent to any proposition; solipsism is principled because it combines skepticism with the Cartesian insight that I can be certain of my own experiences; and realism is principled because it tries to argue to the best explanation for phenomena in general, appealing to unexperienced posits that could plausibly generate the data at hand. I do not tend to think so highly of idealistic and phenomenalistic views that fall somewhere in between solipsism and realism; these I think are not as pristine and principled as the above three views, and their uneven application of skepticism (e.g., doubting that mind-independent cucumbers exist but refusing to doubt that Platonic numbers or Other Minds exist) weakens their case considerably.
Eugine_Nier (1 point, 8y): Radical stances are often more "consistent and principled" in the sense that they're easier to argue for, i.e., the arguments supporting them are shorter. That doesn't mean they're correct.
shminux (0 points, 8y): This question is meaningless in the framework I have described (experience + models = reality). If you provide an argument why this framework is not suitable, i.e., it fails to be useful in a certain situation, feel free to give an example.
Rob Bensinger (3 points, 8y): If commitment to your view renders meaningless any discussion of whether your view is correct, then that counts against your view. We need to evaluate the truth of "Experience + models = reality" itself, if you think the statement in question is true. (And if it isn't true, then what is it?)

Your language just sounds like an impoverished version of my language. I can talk about models of cucumbers, and experiences of cucumbers; but I can also speak of cucumbers themselves, which are the spatiotemporally extended referent of 'cucumber,' the object modeled by cucumber models, and the object represented by my experiential cucumbers. Experiences occur in brains; models are in brains, or in an abstract Platonic realm; but cucumbers are not, as a rule, in brains. They're in gardens, refrigerators, grocery stores, etc.; and gardens and refrigerators and grocery stores are certainly not in brains either, since they are too big to fit in a brain.

Another way to motivate my concern: It is possible that we're all mistaken about the existence of cucumbers; perhaps we've all been brainwashed to think they exist, for instance. But to say that we're mistaken about the existence of cucumbers is not, in itself, to say that we're mistaken about the existence of any particular experience or model; rather, it's to say that we're mistaken about the existence of a certain physical object, a thing in the world outside our skulls. Your view either does not allow us to be mistaken about cucumbers, or gives a completely implausible analysis of what 'being mistaken about cucumbers' means in ordinary language.
Peterdjones (-1 points, 8y): There may be a certain element of cross purposes here. I'm pretty sure Carnap was only seeking to reduce sentences to epistemic components, not reduce reality to ontological components. I'm not sure what Shminux is saying.
shminux (-2 points, 8y): Define "correct".
Rob Bensinger (1 point, 8y): True. Accurate. Describing how the world is. Corresponding to an obtaining fact.

My argument is:

1. Cucumbers are real.
2. Cucumbers are not models.
3. Cucumbers are not experiences.
4. Therefore some real things are neither models nor experiences. (Reality is not just models and experiences.)

You could have objected to any of my 3 premises, on the grounds that they are simply false and that you have good evidence to the contrary. But instead you've chosen to question what 'correctness' means and whether my seemingly quite straightforward argument is even meaningful. Not a very promising start.
shminux (1 point, 8y): Sorry, EY is right, "define" is not a strong enough word. Taboo "correct" and all its synonyms, like "true" and "accurate". This is somewhat better. What is "obtaining fact" but analyzing (=modeling) an experience? Yes, given that experiences+models=reality, cucumbers are a subset of reality.
DaFranker (1 point, 8y): A personal favorite is: Achieves an optimal-minimum "Surprising Experience" / "Models" (i.e. possible predictions consistent with the model) ratio. That the same models achieve correlated / convergent such ratios across agents seems to be evidence that there is a unified something elsewhere that models can more accurately match, or less accurately match. Note: I don't understand all of this discussion, so I'm not quite sure just how relevant or adequate this particular definition/reduction is.
Rob Bensinger (1 point, 8y): That a fact obtains requires no analysis, modeling, or experiencing. For instance, if no thinking beings existed to analyze anything, then it would be a fact that there is no thinking, no analyzing, no modeling, no experiencing. Since there would still be facts of this sort, absent any analyzing or modeling by any being, facts cannot be reduced to experiences or analyses of experience.

You still aren't responding to my argument. You've conceded premise 1, but you haven't explained why you think premise 2 or 3 is even open to reasonable doubt, much less outright false.

1. ∃x(cucumber(x))
2. ∀x(cucumber(x) → ¬model(x))
3. ∀x(cucumber(x) → ¬experience(x))
4. ∴ ∃x(¬model(x) ∧ ¬experience(x))

This is a deductively valid argument (i.e., the truth of its premises renders its conclusion maximally probable). And it entails the falsehood of your assertion "Experience + models = reality" (i.e., it at a minimum entails the falsehood of ∀x(model(x) ∨ experience(x))). And all three of my premises are very plausible. So you need to give us some evidence for doubting at least one of my premises, or your view can be rejected right off the bat. (It doesn't hurt that defending your view will also help us understand what you mean by it, and why you think it better than the alternatives.)
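The deductive validity claimed above does check out mechanically; here is a sketch of the same four-step argument in Lean 4 (the type `Thing` and the predicates `cucumber`, `model`, `experience` are illustrative stand-ins, not part of any library):

```lean
-- Premises 1-3 as hypotheses; the goal is conclusion 4.
example (Thing : Type) (cucumber model experience : Thing → Prop)
    (h1 : ∃ x, cucumber x)                  -- 1. Cucumbers are real.
    (h2 : ∀ x, cucumber x → ¬ model x)      -- 2. Cucumbers are not models.
    (h3 : ∀ x, cucumber x → ¬ experience x) -- 3. Cucumbers are not experiences.
    : ∃ x, ¬ model x ∧ ¬ experience x :=    -- 4. Something is neither.
  Exists.elim h1 (fun c hc => ⟨c, h2 c hc, h3 c hc⟩)
```

The proof is just: take the witness cucumber `c` from premise 1 and apply premises 2 and 3 to it. Of course, this only certifies validity (premises entail conclusion), not the truth of the premises, which is exactly where the thread's disagreement lies.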
shminux (-1 points, 8y): This is a counterfactual. I'm happy to consider a model where this is true, as long as you concede that this is a model.
Rob Bensinger (1 point, 8y): Sure, all counterfactuals are models. But there is a distinction between counterfactuals that model experiences, counterfactuals that model models, and counterfactuals that model physical objects. Certainly not all models are models of models, just as not all words denote words, and not all thoughts are about thoughts. When we build a model in which no experiences or models exist, we find that there are still facts. In other words, a world can have facts without having experiences or models; neither experiencelessness nor modellessness forces or entails the total absence of states of affairs.

If x and y are not equivalent (i.e., if they are not true in all the same models), then x and y cannot mean the same thing. So your suggestion that "obtaining fact" is identical to "analyzing (=modeling) an experience" is provably false. Facts, circumstances, states of affairs, events: none of these can be reduced to claims about models and experiences, even though we must use models and experiences in order to probe the meanings of words like 'fact,' 'circumstance,' 'state of affairs.' (For the same reason, 'fact' is not about words, even though 'fact' is a word and we must use words to argue about what facts are.)
shminux (-1 points, 8y): Not sure who that "we" is, but I'm certainly not a part of that group. Anyway, judging by the downvotes, people seem to be getting tired of this debate, so I am disengaging.
Rob Bensinger (1 point, 8y): Are you saying that when you model what the Earth was like prior to the existence of the first sentient and reasoning beings, you find that your model is of oblivion, of a completely factless void in which there are no obtaining circumstances? You may need to get your reality-simulator repaired.

I haven't gotten any downvotes for this discussion. If you've been getting some, it's much more likely because you've refused to give any positive arguments for your assertion "experience + models = reality" than because people are 'tired of this debate.' If you started giving us reasons to accept that statement, you might see that change.
-1Peterdjones8yBut it's just an extreme case of the LW Bad Habit of employing gerrymandered definitions of "meaning".
0DaFranker8yAs opposed to...? (Just because there's a black box doesn't mean we shouldn't ever work on anything that requires using the black box.)
-1Peterdjones8yUsing definitions rooted in linguistics, semiotics, etc.
1DaFranker8yIs there any such definition of meaning that does not pile up incredibly higher power-towers of linguistic complexity and use even more mental black boxes? All the evidence I've seen so far implies not only that we've never found one, but that there might be a reason we never will.
2Peterdjones8yOK. There might not be a clean definition of meaning. However, what this subthread is about is Shminux's right to set up a personal definition and use it to reject criticism.
1DaFranker8yValid point. Any "gerrymandered" definitions should be done with the intent to clarify or simplify the solution towards a problem, and I'd only evaluate them on their predictive usefulness, not how you can use them to reject or enforce arguments in debates.
0Peterdjones8y"Gerrymandering" has the connotation of self-serving, as in the political meaning of the term. Hence I do not see it as ever being useful.
1bryjnar8yEven just take the old logical positivist doctrine about analyticity/syntheticity: all statements are either "analytic" (i.e. true by logic (near enough)) or "synthetic" (true due to experience). That's at least on the same track. And I'm pretty sure they wouldn't have had a problem with statements that were partially both.
7crazy888yI think I must be misunderstanding what you're saying here, because something very similar to this is probably the principal accusation relied upon in metaphysical debates (if not the very top, certainly top 3). So let me outline what is standard in metaphysical discussions so that I can get clear on whether you mean something different. In metaphysics, people distinguish between quantitative and qualitative parsimony. Quantitative parsimony is about the amount of stuff your theory is committed to (so a theory according to which more planets exist is less quantitatively parsimonious than an alternative). Most metaphysicians don't care about quantitative parsimony. On the other hand, qualitative parsimony is about the types of stuff that your theory is committed to. So a theory committed to causation and time would be less qualitatively parsimonious than one that was only committed to causation (just an example, not meant to be an actual case). Qualitative parsimony is seen as one of the key features of a desirable metaphysical theory. The accusation that your theory postulates extra ontological stuff but doesn't gain further explanatory power for doing so is basically the go-to standard accusation against a metaphysical theory. Fundamentality is also a major philosophical issue - the idea that some stuff you postulate is ontologically fundamental and some isn't. Fundamentality views are normally coupled with the view that what really matters is qualitative parsimony of fundamental stuff (rather than stuff generally). So how does this differ from the claim that you're saying is not mainstream?
3Eliezer Yudkowsky8yThe claim might just need correction to say, "Many philosophers say that simplicity is a good thing but the requirement is not enforced very well by philosophy journals" or something like that. I think I believe you, but do you have an example citation anyway? (SEP entries or other ungated papers are in general good; I'm looking for an example of an idea being criticized due to lack of metaphysical parsimony.) In particular, can we find e.g. anyone criticizing modal logic because possibility shouldn't be basic because metaphysical parsimony?
9crazy888yIn terms of Lewis, I don't know of someone criticising him for this off-hand but it's worth noting that Lewis himself (in his book On the Plurality of Worlds) recognises the parsimony objection and feels the need to defend himself against it. In other words, even those who introduce unparsimonious theories in philosophy are expected to at least defend the fact that they do so (of course, many people may fail to meet these standards but the expectation is there and theories regularly get dismissed and ignored if they don't give a good accounting of why we should accept their unparsimonious nature). Sensations and brain processes [http://philosophyfaculty.ucsd.edu/faculty/rarneson/Courses/SMARTJACKphil1.pdf]: one of Jack Smart's main grounds for accepting the identity theory of mind is based around considerations of parsimony Quine's paper On What There Is [http://eimoe.tu-dresden.de/die_tu_dresden/fakultaeten/philosophische_fakultaet/iph/thph/braeuer/lehre/metameta/Quine%20-%20On%20What%20There%20Is.pdf] is basically an attack on views that hold that we need to accept the existence of things like pegasus (because otherwise what are we talking about when we say "Pegasus doesn't exist"). Perhaps a ridiculous debate but it's worth noting that one of Quine's main motivations is that this view is extremely unparsimonious. From memory, some proponents of EDT support this theory because they think that we can achieve the same results as CDT (which they think is right) in a more parsimonious way by doing so (no link for that however as that's just vague recollection). I'm not actually a metaphysician so I can't give an entire roll call of examples but I'd say that the parsimony objection is the most common one I hear when I talk to metaphysicians.
1Eugine_Nier8yWhy shouldn't it? I haven't seen any reduction of it that deals with this [http://lesswrong.com/lw/eva/the_fabric_of_real_things/7mdb] objection.
0Peterdjones8yWould that be desirable? If a contributor can argue persuasively for dropping parsimony, why should that be suppressed? Surely that should be modal realism.
0NancyLebovitz8y"Make things as simple as possible, but no simpler." --Albert Einstein How do you know whether something is as simple as possible? In terms of publishing, should the standard be as simple as is absolutely possible, or should it be as simple as possible given time and mental constraints?
0DanArmak8yYou keep trying to make it simpler, but you fail to do so without losing something in return.
1crazy888yIt still may be hard to resolve when something is as simple as possible. So modal realism (the idea that possible worlds exist concretely) has been highlighted a few times in this thread as an unparsimonious theory but Lewis has two responses to this: 1.) This is (at least mostly) quantitative unparsimony not qualitative (lots of stuff, not lots of types of stuff). It's unclear how bad quantitative unparsimony is. Specifically, Lewis argues that there is no difference between possible worlds and actual worlds (actuality is indexical) so he argues that he doesn't postulate two types of stuff (actuality and possibility) he just postulates a lot more of the stuff that we're already committed to. Of course, he may be committed to unicorns as well as goats (which the non-realist isn't) but then you can ask whether he's really committed to more fundamental stuff than we are. 2.) Lewis argues that his theory can explain things that no-one else can so even if his theory is less parsimonious, it gives rewards in return for that cost. Now many people will argue that Lewis is wrong, perhaps on both counts but the point is that even with the case that's been used almost as a benchmark for unparsimonious philosophy in this thread, it's not as simple as "Lewis postulates two types of stuff when he doesn't need to, therefore, clearly his theory is not as simple as possible."
3Mardonius8yIsn't this, essentially, a mild departure from late Logical Empiricism to allow for a wider definition of Physical and a more specific definition of Logical references?
1Eliezer Yudkowsky8yI don't see anything similar to this post on a quick skim of http://plato.stanford.edu/entries/logical-empiricism/ [http://plato.stanford.edu/entries/logical-empiricism/] . Please specify.
2Mardonius8yWell, I was specifically thinking of this passage Which, to my admittedly rusty knowledge of mid 20th century philosophy, sounds extremely similar to the anti-metaphysics position of Carnap circa 1950. His work on Ramsey sentences, if I recall, was an attempt to reduce mixed statements including theoretical concepts ("appleness") to a statement consisting purely of Logical and Observational Terms. I'm fairly sure I saw something very similar to your writings in his late work regarding Modal Logic, but I'm clearly going to have to dig up the specific passage.
8Rob Bensinger8yAmusingly, this endeavor also sounds like your arch-nemesis David Chalmers' new project, Constructing the World [http://consc.net/constructing/]. Some of his moderate responses [http://www.youtube.com/watch?v=YJ2Km3GjkLQ&t=7m17s] to various philosophical puzzles may actually be quite useful to you in dismissing sundry skeptical objections to the reductive project; from what I've seen, his dualism isn't indispensable to the interesting parts of the work.
3bryjnar8yJust to say that in general, apart from the stuff about consciousness, which I disagree with but think is interesting, I think that Chalmers is one of the best philosophers alive today. Seriously, he does a lot of good work.
3Will_Newsome8yHe also reads LessWrong, I think.
4Alejandro18yI am about 90% certain that he is djc [http://lesswrong.com/user/djc/].
4gwern8yI'd agree; the link to philpapers (a Chalmers project), claiming to be a pro, having access to leading decision theorists - all consistent.
4Rob Bensinger8yIt's either Chalmers or a deliberate impersonator. 'DJC' stands for 'David John Chalmers.'
1aaronsw8yIt's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach. EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.
0Eliezer Yudkowsky8ySo... admittedly my main acquaintance with Searle is the Chinese Room argument that brains have 'special causal powers', which made me not particularly interested in investigating him any further. But the Chinese Room argument makes Searle seem like an obvious non-reductionist with respect to not only consciousness but even meaning; he denies that an account of meaning can be given in terms of the formal/effective properties of a reasoner. I've been rendering constructive accounts of how to build meaningful thoughts out of "merely" effective constituents! What part of Searle is supposed to be parallel to that?
4aaronsw8yI guess I must have misunderstood something somewhere along the way, since I don't see where in this sequence you provide "constructive accounts of how to build meaningful thoughts out of 'merely' effective constituents" . Indeed, you explicitly say "For a statement to be ... true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links." This strikes me as parallel to Searle's view that consciousness imposes meaning. But, more generally, Searle says his life's work is to explain how things like "money" and "human rights" can exist in "a world consisting entirely of physical particles in fields of force"; this strikes me as akin to your Great Reductionist Project.
5pjeby8ySomeone should tell him this has already been done: dissolving that kind of confusion is literally part of LessWrong 101, i.e. the Mind Projection Fallacy. Money and human rights and so forth are properties of minds modeling particles, not properties of the particles themselves. That this is still his (or any other philosopher's) life's work is kind of sad, actually.
2aaronsw8yI guess my phrasing was unclear. What Searle is trying to do is generate reductions for things like "money" and "human rights"; I think EY is trying to do something similar and it takes him more than just one article on the Mind Projection Fallacy. (Even once you establish that it's properties of minds, not particles, there's still a lot of work left to do.)
-3Peterdjones8yOr maybe Searle is tackling a much harder version of the problem, for instance explaining how things like human rights and ethics can be binding or obligatory on people when they are "all in the mind", explaining why one person should be beholden to another's mind projection.
-2khafra8yNote that "should be beholden" is a concept from within an ethical system; so invoking it in reference to an entire ethical system is a category error. Also, I feel that the sequences do pretty well at explaining the instrumental reasons that agents with goals have ethics; even ethics which may, in some circumstances, prohibit reaching their goals.
-2Peterdjones8yNot necessarily. Many approaches to this problem try to lever an ethical "should" off a rational "should".
1Eliezer Yudkowsky8yWhy? Did I mention consciousness somewhere? Is there some reason a non-conscious software program hooked up to a sensor, couldn't do the same thing? I don't think Searle and I agree on what constitutes a physical particle. For example, he thinks 'physical' particles are allowed to have special causal powers apart from their merely formal properties which cause their sentences to be meaningful. So far as I'm concerned, when you tell me about the structure of something's effects on the particle fields, there shouldn't be anything left after that - anything left is extraphysical.
1Peterdjones8ySearle's views [http://en.wikipedia.org/wiki/Biological_naturalism] have nothing to do with attributing novel properties to fundamental particles. They are more to do with identifying mental properties with higher-level physical properties, which are themselves irreducible in one sense (but also reducible in another sense).
-1Ritalin8yThat's confusing. What senses?
0Peterdjones8ySee the link I gave [http://en.wikipedia.org/wiki/Biological_naturalism] to start with.
-1pjeby8yPerhaps I'm confused, but isn't Searle the guy who came up with that stupid Chinese Room thing? I don't see at all how that's remotely parallel to LW philosophy, or why it would be a bad thing to be ideologically opposed to his approach to AI. (He seems to think it's impossible to have AI, after all, and argues from the bottom line for that position.)
3aaronsw8yI was talking about Searle's non-AI work, but since you brought it up, Searle's view is: 1. qualia exists (because: we experience it) 2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia) 3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not) Which part does LW disagree with and why?
2Ben Pace8yTo offer my own reasons for disagreement, I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we've done things, and occasionally we notice and can report that we've noticed that we've done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called 'qualia'. That we notice that we find experience 'ineffable' is not a surprise either - you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving). So, all we really have is the ability to notice and report that which has been advantageous for us to report in the evolutionary history of humans (these stimuli that we can notice are called 'experiences'). There is nothing mysterious here, and the word 'qualia' always seems to be used mysteriously - so I don't think the first point carries the weight it might appear to. Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: The map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn't be doing consciousness, doesn't mean that is how it is. We need to understand how it came to be that we feel what we feel, before we go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle's philosophy.
-1Peterdjones8yIf the ineffability of qualia is down to the complexity of fine-grained neural behaviour, then the question is why anything is effable -- people can communicate about all sorts of things that aren't sensations (and in many cases are abstract and "in the head").
0Ben Pace8yI'm not sure that I follow. Can anything we talk about be reduced to less than the basic stimuli we notice ourselves having? All words (that mean anything) refer to something. When I talk about 'guitars', I remember experiences I've had which I associate with the word (i.e. guitars). Most humans have similar makeups, in that we learn in similar ways, and experience in similar ways (I'm just talking about the psychological unity of humans, and how far our brain design is from, say, mice). So, we can talk about things because we've learnt to refer certain experiences (words) to others (guitars). Neither of the two can refer to anything other than the experiences we have. Anything we talk about is in relation to our experiences (or possibly even meaningless).
-1Peterdjones8yMost of the classic reductions are reductions to things beneath perceivable stimuli, e.g. heat to molecular motion. Reductionism and physicalism would be in very bad trouble if language and conceptualisation grounded out where perception does. The theory also mispredicts that we would be able to communicate our sensations, but struggle to communicate abstract (e.g. mathematical) ideas with a distant relationship, or no relationship, to sensation. In fact, the classic reductions are to the basic entities of physics, which are ultimately defined mathematically, and often hard to visualise or otherwise relate to sensation.
0Ben Pace8yYou could point out the different constituents of experience that feel fundamental, but they themselves (e.g. Red) don't feel as though they are made up of anything more than themselves. When we talk about atoms, however, that isn't a basic piece of mind that mind can talk about. My mind feels as though it is constituted of qualia, and it can refer to atoms. I don't experience an atom, I experience large groups of them, in complex arrangements. I can refer to the atom using larger, complex arrangements of neurons (atoms). Even though, when my mind asks what the basic parts of reality are, it has a chain of reference pointing to atoms, each part of that chain is a set of neural connections that don't feel reducible. Even on reflection, our experiences reduce to qualia. We deduce that qualia are made of atoms, but that doesn't mean that our experience feels like it's been reduced to atoms.
-1Peterdjones8yWhere is that heading? Is it supposed to tell me why qualia are ineffable... or rather, why qualia are more ineffable than cognition?
0Ben Pace8yI'm saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not. So, qualia aren't the problem for a turing machine they appear to be. Also, we all share these experience 'parts' with most other humans, due to the psychological unity of humankind. So, if we're all sat down at an early age, and drilled with certain patterns of mind parts (times-tables), then we should expect to be able to draw upon them at ease. My original point, however, was just that the map isn't the territory. Qualia don't get special attention just because they feel different. They have a perfectly natural explanation, and you don't get to make game-changing claims about the territory until you've made sure your map is pretty spot-on.
-1Peterdjones8yI don't see why. Saying that experience is really complex neural activity isn't enough to explain that, because thought is really complex neural activity as well, and we can communicate and unpack concepts. Can you write the code for SeeRed()? Or are you saying that TMs would have ineffable concepts? You've inverted the problem: you have created the expectation that nothing mental is effable.
0Ben Pace8yNo, I'm saying that no basic, mental part will feel effable. Using our cognition, we can make complex notions of atoms and guitars, built up in our minds, and these will explain why our mental aspects feel fundamental, but they will still feel fundamental. I'm not continuing this discussion, it's going nowhere new. I will offer Orthonormal's sequence on qualia as explanatory however: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/ [http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/]
-1Peterdjones8yYou seem to be hinting, but are not quite saying, that qualia are basic and therefore ineffable, whilst thoughts are non-basic and therefore effable. Confirming the above would be somewhere new.
1Ben Pace8yI'm saying that there are (something like) certain constructs in the brain that are used whenever the most simple conscious thought or feeling is expressed. They're even used when we don't choose to express something, like when we look at something. We immediately see its components (surfaces, legs, handles), and the ones we can't break down (lines, colours) feel like the most basic parts of those representations in our minds. Perhaps the construct that we identify as red is a set of neurons XYZ firing. If so, whenever we notice (that is, other sets of neurons observe) that XYZ go off, we just take it to be 'red'. It really appears to be red, and none of the other workings of the neurons can break it any further. It feels ineffable, because we are not privy to everything that's going on. We can simply use a very restricted portion of the brain to examine other chunks and give them different labels. However, we can use other neuronal patterns to refer to and talk about atoms. Large groups of complex neural firings can observe and reflect upon experimental results that show that the brain is made of atoms. Now, even though we can build up a model of atoms, and prove that the basic features of conscious experience (redness, lines, the hearing of a middle C) are made of atoms, the fact is, we're still using complex neuronal patterns to think about these. The atom may be fundamental, but it takes a lot of complexity for me to think about the atom. Consciousness really is reducible to atoms, but when I inspect consciousness, it still feels like a big complex set of neurons that my conscious brain can't understand. It still feels fundamental. Experientially, redness doesn't feel like atoms because our conscious minds cannot reduce it in experience, but they can prove that it is reducible.
People make the jump that, because complex patterns in one part of the brain (one conscious part) cannot reduce another (conscious) part to mere atoms, it must be a fundamental part of reality.
2nshepperd8yI can't really speak for LW as a whole, but I'd guess that among the people here who don't believe¹ "qualia doesn't exist", 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems to be some confusion between the "boring AI" proposition, that you can make computers do reasoning, and Searle's "strong AI" thing he's trying to refute, which says that AIs running on computers would have both consciousness and some magical "intentionality". "Strong AI" shouldn't actually concern us, except in talking about EMs or trying to make our FAI non-conscious. Pretty much disagree. Really disagree. And this seems really unlikely. ¹ I qualify my statement like this because there is a long-standing confusion over the use of the word "qualia" as described in my parenthetical here [http://lesswrong.com/lw/fv3/by_which_it_may_be_judged/81lg].
2aaronsw8yWell, let's be clear: the argument I laid out is trying to refute the claim that "I can create a human-level consciousness with a Turing machine". It doesn't mean you couldn't create an AI using something other than a pure Turing machine and it doesn't mean Turing machines can't do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn't going to keep you alive. So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontology the way qualia does? And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what's the physical algorithm for looking at a series of physical particles and deciding whether it's executing a particular computation or not?
1nshepperd8ySomething brains do, obviously. One way or another. I should perhaps be asking what evidence Searle has for thinking he knows things like what qualia is, or what a computation is. My statements were both negative: it is not clear that qualia is a basic fact of physics; it is not obvious that you can't describe computation in physical terms. Searle just makes these assumptions. If you must have an answer, how about this: a physical system P is a computation of a value V if adding as premises the initial and final states of P and a transition function describing the physics of P shortens a formal proof that V = whatever.
1aaronsw8yThey're not assumptions, they're the answers to questions that have the highest probability going for them given the evidence.
1MugaSofer8yThere's your problem. Why the hell should we assume that "qualia is clearly a basic fact of physics "?
1aaronsw8yBecause it's the only thing in the universe we've found with a first-person ontology. How else do you explain it?
-1MugaSofer8yWell, I probably can't explain it as eloquently as others here - you should try the search bar, there are probably posts on the topic much better than this one - but my position would be as follows: * Qualia are experienced directly by your mind. * Everything about your mind seems to reduce to your brain. * Therefore, qualia are probably part of your brain. Furthermore, I would point out two things: one, that qualia seem to be essential parts of having a mind; I certainly can't imagine a mind without qualia; and two, that we can view (very roughly) images of what people see in the thalamus, which would suggest that what we call "qualia" might simply be part of, y'know, data processing.
1TheOtherDave8yAnother not-speaking-for-LW answer: Re #1: I certainly agree that we experience things, and that therefore the causes of our experience exist. I don't really care what name we attach to those causes... what matters is the thing and how it relates to other things, not the label. That said, in general I think the label "qualia" causes more trouble due to conceptual baggage than it resolves, much like the label "soul". Re #2: This argument is oversimplistic, but I find the conclusion likely. More precisely: there are things outside my brain (like, say, my adrenal glands or my testicles) that alter certain aspects of my experience when removed, so it's possible that the causes of those aspects reside outside my brain. That said, I don't find it likely; I'm inclined to agree that the causes of my experience reside in my brain. I still don't care much what label we attach to those causes, and I still think the label "qualia" causes more confusion due to conceptual baggage than it resolves. Re #3: I see no reason at all to believe this. The causes of experience are no more "clearly a basic fact of physics" than the causes of gravity; all that makes them seem "clearly basic" to some people is the fact that we don't understand them in adequate detail yet.
0pjeby8yThe whole thing: it's the Chinese Room all over again, an intuition pump that begs the very question it's purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.) I suppose you could say that there's a grudging partial agreement with your point number two: that "the brain causes qualia". The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides "qualia", e.g.: 1. Free will exists (because: we experience it) 2. The brain causes free will (because if you cut off any part, etc.) 3. If you simulate a brain with a Turing machine, it won't have free will because clearly it's a basic fact of physics and there's no way to tell just using physics whether something is a machine simulating a brain or not. It doesn't matter what term you plug into this in place of "qualia" or "free will", it could be "love" or "charity" or "interest in death metal", and it's still not saying anything more profound than, "I don't think machines are as good as real people, so there!" Or more precisely: "When I think of people with X it makes me feel something special that I don't feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X 'just a simulation'." This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work. Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is.
1[anonymous]8yJust a nit pick: the argument Aaron presented wasn't an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn't beg the question. Aaron's argument was an argument against artificial consciousness. Also, I think Aaron's presentation of (3) was a bit unclear, but it's not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating turing-machine is entirely reducible to purely physical descriptions, brain-simulating turing-machines won't experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating turing machines won't count as conscious. If we don't have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false. You're right that you can plug in many a term to replace 'qualia', so long as those things are not reducible to purely physical descriptions. So you couldn't plug in, say, heart-attacks. Could you explain this a bit more? I don't see how it's relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle's argument.
1pjeby8yIn order for the argument to make any sense, you have to buy into several assumptions which basically are the argument. It's "qualia are special because they're special, QED". I thought about calling it circular reasoning, except that it seems closer to begging the question. If you have a better way to put it, by all means share. When I said that our mind detection circuitry was the root of the argument, I didn't mean that Searle was overtly arguing on the basis of his feelings. What I'm saying is, the only evidence for Searle-type premises are the feelings created by our mind-detection circuitry. If you assume these feelings mean something, then Searle-ish arguments will seem correct, and Searle-ish premises will seem obvious beyond question. However, if you truly grok the mind-projection fallacy, then Searle-type premises are just as obviously nonsensical, and there's no reason to pay any attention to the arguments built on top of them. Even as basic a tool as Rationalist Taboo suffices to debunk the premises before the argument can get off the ground.
-1Peterdjones8yAny valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing. I think there is a way that ripe tomatoes seem visually: how is that mind-projection?
0MugaSofer8yBut ... if you're assuming that qualia are "not reducible to purely physical descriptions", and you need qualia to be conscious, then obviously brain-simulations won't be conscious. But those assumptions seem to be the bulk of the position he's defending, aren't they?
2[anonymous]8yRight, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical conditions. Aaron didn't present an argument for that, he just presented Searle's argument against AI from that. But you're right to ask for a defense of that premise, since it's the crucial one and it's (at the moment) undefended here.
-2MugaSofer8yPresenting an obvious result of a nonobvious premise as if it was a nonobvious conclusion seems suspicious, as if he's trying to trick listeners into accepting his conclusion even when their priors differ. [Edited for terminology.]
0[anonymous]8yNot only suspicious, but impossible: if the premises are non-trivial, the conclusion is non-trivial. In every argument, the conclusion follows straight away from the premises. If you accept the premises, and the argument is valid, then you must accept the conclusion. The conclusion does not need any further support.
-1MugaSofer8yY'know, you're right. Trivial is not the right word at all.
-1Peterdjones8yTo pick a further nit, the argument is more that qualia can't be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.
0[anonymous]8yThat's a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a Turing machine is reducible to a purely physical description, then Turing machines can't simulate consciousness. That's not very neat, but I do believe it's valid. Your alternative is plausible, but it requires my 'Turing machines are reducible to purely physical descriptions' premise to be false.
0aaronsw8yHuh? This isn't an argument for the existence of qualia -- it's an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie? I do think essentially the same argument goes through for free will, so I don't find your reductio at all convincing. There's no reason, however, to believe that "love" or "charity" is a basic fact of physics, since it's fairly obvious how to reduce these. Do you think you can reduce qualia? I don't understand why you think this is a claim about my feelings.
2shminux8ySuppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?
2aaronsw8yOf course not!
0shminux8yand why not?
1aaronsw8yBecause the neuron firing pattern is presumably the cause of the quale, it's certainly not the quale itself.
1shminux8yI don't understand what else is there.
6aaronsw8yImagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight -- it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot. The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced to flashlights. By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren't made out of neurons any more than red dots are made of flashlights.
0shminux8yOk, that's where we disagree. To me the subjective experience is the process in my brain and nothing else.
0Peterdjones8yThere's no argument there. Your point about qualia is illustrated by your point about flashlights, but not entailed by it.
-1MugaSofer8yHow do you know this?
0Peterdjones8yThere's no certainty either way.
-2Peterdjones8yReduction is an explanatory process: a mere observed correlation does not qualify.
0pjeby8yI think that anyone talking seriously about "qualia" is confused, in the same way that anyone talking seriously about "free will" is. That is, they're words people use to describe experiences as if they were objects or capabilities. Free will isn't something you have, it's something you feel. Same for "qualia". Dissolving free will is considered an entry-level philosophical exercise for Lesswrong. If you haven't covered that much of the sequences homework, it's unlikely that you'll find this discussion especially enlightening. (More to the point, you're doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.) This [http://lesswrong.com/lw/pn/zombies_the_movie/] is probably a good answer to that question. Because (as with free will) the only evidence anyone has (or can have) for the concept of qualia is their own intuitive feeling that they have some.
-2Peterdjones8ySo you say. It is not standardly defined that way. Qualia are defined as feelings, sensations etc. Since we have feelings, sensations etc we have qualia. I do not see the confusion in using the word "qualia".
0hairyfigment8yWell, would that mean writing a series like this [http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/]? My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?
-1aaronsw8yWho said anything about our intuitions (except you, of course)?
1hairyfigment8yYou keep making statements like, And you seem to consider this self-evident. Well, it seemed self-evident to me that Martha's physical reaction would 'be' a quale. So where do we go from there? (Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn't connect it to anything else - no similarities, no differences, no links of any kind. Would you see anything? [http://en.wikipedia.org/wiki/File:Optical_grey_squares_orange_brown.svg])
1aaronsw8yI guess you need to do some more thinking to straighten out your views on qualia.
3Exiles8yGoodnight, Aaron Swartz.
0[anonymous]8ydownvoted posthumously.
0hairyfigment8yLet's back up for a second: * You've heard of functionalism, right? You've browsed the SEP entry [http://plato.stanford.edu/entries/functionalism/]? * Have you also read the mini-sequence I linked [http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/]? In the grandparent I said "physical reaction" instead of "functional", which seems like a mistake on my part, but I assumed you had some vague idea of where I'm coming from.
0MugaSofer8yOr you do. You claim the truth of your claims is self-evident, yet it is not evident to, say, hairyfigment, or Eliezer, or me for that matter. If I may ask, have you always held this belief, or do you recall being persuaded of it at some point? If so, what convinced you?
-1MugaSofer8yCould you expand on this point, please? It is generally agreed* that "free will vs determinism" is a dilemma that we dissolved long ago. I can't see what else you could mean by this, so ... [*EDIT: here, that is]
0aaronsw8yI guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don't see how point one holds (we experience it), and the argument obviously doesn't go through.
0Peterdjones8yBut that's not contentious. Qualia are things like the appearance of tomatoes or taste of lemon. I've seen tomatoes and tasted lemons. But Searle says that feelings, understanding, etc are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physicalism can be true and computationalism false.
-1Peterdjones8yIt isn't even clear to Searle that qualia are physically basic. He thinks consciousness is a high-level outcome of the brain's concrete causal powers. His objection to computational approaches is rooted in the abstract nature of computation, not in the physical basicness of qualia. (In fact, he doesn't use the word "qualia", although he often seems to be talking about the same thing).

Not sure how good an example apple multiplication is, given that if you multiply 2 apples by 3 apples, you are supposed to get 6 square apples.

Hence my careful specification that you're multiplying the numbers, not the piles.

9johnswentworth8yI found the use of multiplication particularly useful, since it forced the reader to pay attention to the physical/logical distinction. If, say, addition had been used, then a determined reader could try to use physical constraints alone (though they would be cheating).
0faul_sname8yIf we assume that the 5 apples are spherical, and we cut the largest square sections possible out of each of them (leaving the top and bottom alone, as that doesn't affect whether the shape is a square when viewed from the top down), it turns out that these new squared apples have a volume of about 0.77 times that of a spherical apple. That means that your 2 round apples and your 3 round apples become about 6.49 squared apples. Rounding down, that is, in fact, 6 square apples. But I do think the illegal operation was kind of the point. It shows that not all mathematical operations can be strictly reduced to physical objects (well, outside of the substrate that's doing the computing, obviously). Edit: it was [http://lesswrong.com/lw/frz/mixed_reference_the_great_reductionist_project/7z59]
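The 0.77 figure in the comment above can be checked numerically. Below is a minimal Monte Carlo sketch (the spherical-apple model and the cut described in the comment are its only assumptions): it estimates the volume left when a unit sphere is trimmed so that its top-down profile is the largest inscribed square.

```python
import math
import random

def squared_apple_ratio(samples=400_000, seed=0):
    """Estimate V(sphere cut to largest inscribed square profile) / V(sphere).

    The cut keeps the points of the unit ball whose (x, y) coordinates lie in
    the largest square inscribed in the unit circle (half-side 1/sqrt(2)),
    leaving the top and bottom of the apple alone, as described above.
    """
    rng = random.Random(seed)
    half_side = 1 / math.sqrt(2)
    in_ball = in_cut = 0
    for _ in range(samples):
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        if x * x + y * y + z * z <= 1:
            in_ball += 1
            if abs(x) <= half_side and abs(y) <= half_side:
                in_cut += 1
    return in_cut / in_ball

ratio = squared_apple_ratio()   # roughly 0.77, as the comment claims
squared_apples = 5 / ratio      # five round apples -> a bit under 6.5 squared ones
```

Under these assumptions the five round apples do come out to slightly more than six squared apples' worth of material, matching the comment's arithmetic (the exact ratio is closer to 0.767, so the 0.77 is a fair rounding).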
0[anonymous]8yYou might want to add some kind of smiley at the end of the first paragraph. (I didn't downvote, but I suspect that's the reason why someone did.)

This is an interesting post but, I have to say, kind of frustrating. I have tried to follow the discussions between Esar and RobbBB and your substantial elucidation as well as many other great comments, but I remain kind-of in the dark. Below are some questions which I had, as I read.

This question doesn't feel like it should be very hard.

What question? What exactly is the problem you are purporting to solve, here? If it is, "What is the truth condition of 'If we took the number of apples in each pile, and multiplied those numbers together, we'd ge... (read more)

2DSimon8yRegarding your point numbered 1 specifically: the causal history of matter is considered here as part of its physical properties in a block universe, so this objection doesn't apply. See the older sequence article Timeless Physics [http://lesswrong.com/lw/qp/timeless_physics/] for more on this. Regarding points 2 and 3: The OP is saying that for something to be an apple means that its low-level physical state matches some pattern, but not necessarily that the pattern matching function must return a strict True or False; there are fuzzy pattern matching functions as well. The older sequence article Similarity Clusters [http://lesswrong.com/lw/nj/similarity_clusters/] goes into this in more detail. On the other hand, your objections are totally legit within the context of this article and its examples alone, and as an introductory article that's a fine and appropriate context to be working from. Maybe the article would be improved by some footnotes and/or appropriate links? Then again, it's already pretty long. On the other other hand, as an introductory article its purpose is only to introduce what reductionism is and get people to grips with the notion of different levels of abstraction. If philosophical arguments are being made about e.g. what is "real", or more subtly about what makes for an appropriate definition of a word like "apple", then they aren't being made here, but in the articles that depend on this one. "Lying to children" and all that.

where both physical references and logical references are to be described 'effectively' or 'formally', in computable or logical form.

Can anyone say a bit more about why physical references would need to be described 'effectively'/computably? Is this based on the assumption that the physical universe must be computable?

0MrMind8yI think because if they are described by an uncomputable procedure, for example one involving oracles or infinite resources, then they (with very high probability) could not be computed by our brains.
1Eugine_Nier8ySo? Use said oracles to upgrade our brains.
1amcknight8yMrMind is talking about an "oracle" in the sense of a mathematical tool. Oracles [http://en.wikipedia.org/wiki/Oracle_machine] in this sense are well-defined things that can do stuff traditional computers can't.
0Eugine_Nier8yI'm perfectly aware what an oracle is. I was using it in the same sense.
0amcknight8yThis crossed my mind, but I thought there might be other deeper reasons.

I have had this question in my mind for ages. You say that these counterfactual universes don't actually exist. But, according to Many-Worlds, don't all lawful Universes actually really really exist? I mean, isn't there some amplitude for Mr. Oswald to not have shot Kennedy, and then you get a blob where Kennedy didn't get murdered?

I've been banging my head against a wall on this and still can't come to a conclusion. Are the decoherent blobs actually capable of creating multiple histories on the observable level, up here? It looks, to me, that they should ... (read more)

2Rob Bensinger8yAbstractions like probability and number are constructed by us; they don't strictly exist, but it's useful to act as though they do, since they help organize our reasoning. It could be that, by coincidence, some part of the Real World corresponds precisely to the structure of our modal or mathematical reasoning; for instance, the many-worlds interpretation of QM could be true, or we could live in a Tegmark ensemble. But this would still just be an interesting coincidence. It wouldn't change the fact that our abstractions are our own; and if we discovered tomorrow that a Bohmian interpretation of QM is correct, rather than an Everettian one, it would have no foundational implications for such a high-level, anthropocentric phenomenon as probability theory. Thinking in this way is useful for two reasons. First, it insulates our logical fictions from metaphysical skepticism; our uncertainty as to the existence of a Platonic realm of Number need not undermine our confidence that 2 and 2 make 4. Second, it keeps us from being tempted to slide down the slippery slope to treating all our fictions (like currency, and intentionality, and qualia, and Sherlock Holmes) as equally metaphysically committing.
0PedroCarvalho8yWell, whether probability and number exist or not is moot. The point of fact is that when you look at any quantum system there is a probability of finding it in any given (continuous set of) state(s) equal to the squared modulus of the amplitude for it to be in such state. As Mr. Yudkowsky once put it, and I paraphrase, "I still want to know the nonexistent laws that coordinate my meaningless Universe". And my point is: assuming Quantum Physics is completely correct, without us adding the additional postulates, do all combinations of universes exist, superposed with each other? That is to say: is the quantum suicide limited to 50/50 strictly quantised experiments, or does our consciousness live on in a forever branching multiverse? Sort of.
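The rule cited above — the probability of finding a system in a state equals the squared modulus of the amplitude for that state — can be sketched in a few lines. This is only an illustrative sketch; the two-state amplitudes below are made up for the example, not taken from anything in the thread:

```python
import math

def born_probabilities(amplitudes):
    """Map complex amplitudes to probabilities via |a|^2, normalizing so they sum to 1."""
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# A made-up unequal superposition: sqrt(1/3)|up> + i*sqrt(2/3)|down>.
# The complex phase (the factor of i) drops out of the probabilities entirely.
amps = [complex(math.sqrt(1 / 3), 0), complex(0, math.sqrt(2 / 3))]
probs = born_probabilities(amps)   # -> [1/3, 2/3] up to rounding
```

Note that the phases vanish under the squared modulus, which is exactly why interference lives in the amplitudes rather than in the probabilities themselves.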
1ArisKatsaris8yNitpick: All the Many-Worlds of QM still follow our particular set of physics. For "all lawful universes" to really really exist, you probably have to go to Tegmark IV or something like that....
0PedroCarvalho8yYes, I'm sorry, by "lawful" I'd meant exactly that, universes that obey our particular set of physics.
1NancyLebovitz8yMaybe the way out is that counterfactuals don't exist in their home universes.
0drnickbone8yI had the same reaction... Can this be the same Eliezer who authored the sequences, and gave such strong support for the reality of Many Worlds? I was half-expecting the other shoe to drop somewhere in the article... namely that if you are prepared to accept that the Many Worlds really exist, it makes the Great Reductionist Project a whole lot easier. Statements about causality reduce to statements about causal graphs, which in turn reduce to statements about counterfactuals, which in turn reduce to statements of actual fact about different blobs of the (real) quantum state vector. Similarly, statements about physical "possibility" and "probability" reduce to complicated statements about other blobs and their sizes as measured by the inner product on the state space. Maybe Eliezer will be leading that way later... If he isn't I share your confusion.
1PedroCarvalho8yIt was mentioned that if you were to make a continuous analog of the Bayesian Network, you'd end up with space and time, or some such. Maybe if you have a probabilistic Bayesian Network you get QM out of it? As in, any given parent node has a number of child nodes, each happening with a certain probability... and then if you make the continuous analog of such you'll get Quantum Mechanics and Many-Worlds. Mr. Yudkowsky has thoroughly convinced me of the reality of Many-Worlds (and my ongoing study of Q.M. has not yet even suggested otherwise), so... so what, then?
0Rob Bensinger8ySee SEP on Bohmian Mechanics [http://plato.stanford.edu/entries/qm-bohm/] for the main rival view.
1Peterdjones8yOr rather see relational QM [http://en.wikipedia.org/wiki/Relational_quantum_mechanics], otherwise known as MWI Done Right.
1DaFranker8yNow I want to read "If rQM had come first..." (Implied: I agree with you on this.)
0shminux8yRQM is MWI, so the story would be the same, maybe with less pathos.
1PedroCarvalho8yI have read about Bohmian Mechanics before, and it failed to convince me. This article keeps talking about 'non-determinism' inherent to Q.M. but I'm pretty sure Relative State is quite very deterministic. Also, adding the specification of a particle's position to a description doesn't sound at all to me like the simplest explanation possible. Maybe this is just me saying I prefer locality to counterfactual definiteness, but... Relative State still wins my favour.
3Rob Bensinger8yRead: traditional Q.M. Arguments for BM and for MW are both largely still responding to Copenhagenism's legacy of collapse theorists. The next stage in the dialectic should be for them to set aside the easy target of collapse and start going for each others' throats directly. Does adding Magical Reality Fluid and an infinity of invisible Worlds sound simple, at the outset? MW seems simple and elegant because it's familiar; this tempts us to forget just how much remains unresolved by the theory, and just how much it demands that we posit beyond the experimental observations. Let's be careful not to let unfamiliarity tempt us into treating BM in an asymmetric way. Bell's way of framing BM is very intuitive, I think: "Is it not clear from the smallness of the scintillation on the screen that we have to do with a particle? And is it not clear, from the diffraction and interference patterns, that the motion of the particle is directed by a wave? De Broglie showed in detail how the motion of a particle, passing through just one of two holes in screen, could be influenced by waves propagating through both holes. And so influenced that the particle does not go where the waves cancel out, but is attracted to where they cooperate. This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored." Actually, I'm somewhat grateful that it was ignored (except by de Broglie), since its intuitiveness might otherwise have become such a firm orthodoxy that we wouldn't have the rich debate between MW theorists of today. Given our human tendency to fix on our first solution, it is very useful that the weakest theory (collapse) is the one people started with. "Prefer" as in it sounds more elegant, or as in it seems more likely to be true? Untangling those two is the real problem. 
We also need to keep in mind that the MW style of locality is a rather strange one. (Cons
0PedroCarvalho8yThat's not at all what Relative State states... it just states that the Schrödinger Equation is all there is, full stop. The existence of a number of worlds is a consequence, not an assumption. Please forgive me if I misunderstand, but that sounds, to me, just a way of making wavefunctions fit into the intuitive "particle" and "wave" molds. And it also looks like it ignores the fact that people are made of particles (wavefunctions), so whatever effects of any given particle (wavefunction) are detected by us would cause us to be superposed. I don't... really see a way out of being superposed at macroscopic level. "Prefer" as in both sounds more elegant and seems, to me, more likely to be true. Also, the conservation of energy is never violated, I don't think, since we already had to multiply the total energy by the normalised amplitude squared of the different states anyway. I'm sorry, you're right. What I meant by "failed to convince me" and "wins my favour" is that I still assign a > .5 probability to MW, or any interpretation that doesn't try to sneak away from macroscopic superposition, or tries to tell me physics is non-local. As I said, I have done my share of research on alternative interpretations of Q.M. after I started studying it (I'm not nearly done studying it, though) before, and the one that sounded to me the simplest was MW. I guess I don't take it seriously because, to my untrained eyes, it looks like a theory that's trying to escape quantum effects affecting the macroscopic world by sticking macroscopic intuitions into the quantum world.
-2Rob Bensinger8ySure, but the theory with the simplest sound-bite axiomatization may not be the most parsimonious theory at the end of the day. And your confidence in that starting point will depend heavily on how confident you are in the prospects for extracting the Born probabilities from the Schrödinger equation on its lonesome. A theist will claim that his starting point is maximally simple relative to its explanatory power -- heck, one of his axioms is that his starting point is maximally simple! that's how simplicity works, right? -- but the difficulty of actually extracting normality from theism without recourse to 'deep mysteries' undermines the project in spite of its promising convergences with the data. They aren't intuitive molds, in the system-1 sense; 'particle' and 'wave' are theoretical constructs, and we understand them via (and import them from) structurally similar macro-phenomena. 'Wave' and 'particle' are sufficiently simple ideas, as macro-phenomena go, that they may recur at multiple levels of organization. I don't assume that they must do so; but it's at least an idea worth assessing, if the resultant theory recaptures the whole of normality without paradox or mystery. The wave occurs at both positions (or with both spin components); the particle does not. Being made of particles, I have a determinate brain-state, not a superposed one; and I observe a determinate particle position, though the dynamics of that particle (and of my brain-state) are guided by the wave function. Many Worlds seems to predict that I will both see a spin-up measurement result and a spin-down measurement result, when I observe the superposed state. But in fact I seem to either see spin-up or spin-down, not both. So at this simple stage, Bohm correctly predicts our observation, and Many Worlds does not. That's why the challenge for Many Worlds is to make sense of the probabilistic element of QM.
The Schrödinger dynamics leave no room for probability; they are, as you note, dete
0PedroCarvalho8yI meant not simplest as in simplest sound bite, I meant in the way mr. Yudkowsky has painfully explained elsewhere when he treated Occam's Razor. One single equation is always a simpler proposition than two; and a whole intelligent being that sparked Existence itself and is not made of parts is so far off the map it's not even worth considering as a preliminary hypothesis. If you have any system that is in a given state A and that system interacts with another one that is in a superposition of states X and Y, it no longer makes sense to talk about the first and second system: the whole system is now in a superposition of states. Same thing with observing the measurement: what you actually observe is a computer telling you "spin-up" or "spin-down". So that's a gazillion atoms and molecules and particles and whatnot that's different depending simply on the state of the electron. Now suppose you somehow isolated that computer completely from the outside, so that not a single photon left it, then you could say that the computer is in a superposition. And as soon as you looked, so would you. The fact that you don't actually see the computer accusing both "spin-up" and "spin-down" or some combination is just a consequence of the fact that, while the whole system, including you, your brain, the computer, the room you're in, the air you're breathing, etc., is in a superposition, the amplitude for the two states to interact is infinitesimal. For all intents and purposes, these two states have decohered. That's not to say superposition is gone; it's just to say that the amplitude for those two states to interact is nearly zero. Eh... I don't know about that. I mean... well, I'll come to that in a bit. I'll comment on it in a bit, too. I think that is the same problem I had with any other theories. The very idea of non-locality triggers alarm bells all over my brain. 
That > .9 probability to MW, I believe, stems, at least partially, from an implicit < .01 probability to no
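The joint-superposition point in the comment above — once a system in state A interacts with one superposed between X and Y, only the joint state makes sense — has a standard toy rendering. The sketch below is plain Python; the basis labels and the CNOT-style interaction are illustrative choices of mine, not anything stated in the thread:

```python
import math

def kron(u, v):
    """Tensor (Kronecker) product of two state vectors given as flat lists."""
    return [a * b for a in u for b in v]

s = 1 / math.sqrt(2)
first = [1.0, 0.0]    # first system definitely in |A>  (basis |A>, |B>)
second = [s, s]       # second system in an equal superposition of |X>, |Y>

# Before interaction: a product state, s|AX> + s|AY>.
joint = kron(first, second)   # basis order: |AX>, |AY>, |BX>, |BY>

# A toy CNOT-like interaction: if the second system is in |Y>, flip the first
# (swap the |AY> and |BY> amplitudes). The result, s|AX> + s|BY>, no longer
# factors into a "first system" state times a "second system" state.
entangled = [joint[0], joint[3], joint[2], joint[1]]
```

A 2x2 grid of joint amplitudes factors into a product state exactly when its determinant is zero; here the determinant is s*s - 0*0 = 1/2, so after the interaction there really is no separate state of either subsystem, only the joint superposition.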
3Rob Bensinger8yYes, I grok. My point was that some theists don't just think that God is simple partwise; they think that in some unknown (perhaps ineffable) way he's maximally conceptually simple, i.e., if we were smarter we could formulate God in something equation-like and suddenly understand why everything about him really flows forth elegantly from a profoundly simple and unitary property. (And if everything else flows forth inevitably from God, the theory as a whole is no more complex than its God-term. Of course, free-will-invoking variants will be explanatorily inelegant by design; sudden inexplicable 'choices' will function for libertarians like collapse functions for Copenhagenists.) Obviously, this promise of being able to formulate God in conceptually (and not just mereologically) simple terms is not credible. But this was the point of my (admittedly unkind) analogy; we should be wary of theories that promise an elegant, unimpeachably Simple reduction but have difficulty connecting that reduction to normality even in a sweeping, generic fashion. MW is obviously much better in this regard than theism, but one of the problems with theism (it promises a simple reduction, but leaves the 'simple' undemonstrated) is interestingly analogous to the problem with MW (it promises a simple reduction, but leaves the 'reduction' undemonstrated). I don't take this to be a distinct argument against MW; I just wanted to call it to attention. Fair enough. This perhaps is the fundamental question: The naive interpretation of data from EPR-style experiments is quite simply that nonlocal causation (albeit not of the sort that can be used to transmit information) is in effect between distant entangled states. 
If your commitment to locality is strong enough, then you can recover locality by positing that you've imperceptibly fallen into another world in interacting with one of the particles, dragging everything around you into a somehow-distinct component of a larger, quasi-dialetheist (r
1PedroCarvalho8yI guess we'll have to wait until we have interstellar travel to observe completely superposed civilisations so that we can actually see MW? That was a joke, by the way. It's not really "fallen into another world" as much as "being in a superposed state." If you assume that superposition is a real effect of wavefunctions (particles), then you have to assume that you also belong in states. The only way of escaping that is not believing superposition is an actual, real effect, which to me looks like exactly what Bohm says. Now I'm not saying that I give a > .9 probability to MW. It's > .5, but I do not trust my own ability to gauge my probability estimates the way you did. Point. I think mr. Yudkowsky mentioned something about a non-existence of worlds at that intersection? As in, the leakage from the "larger" worlds is so big that the intersection ceases existing, and then you have clearly distinct universes. Or at least that's what I understood. I don't think I like or even agree with the idea; it, too, sounds to me like trying to fit physics into intuition. But anyway, I agree with you that one of the main points in my head against MW is that intersection. That, and what I mentioned above, of completely impossible situations (like zombie Kennedy) never having happened in recorded history. Point. Which is why I agree with you that BM is the only other serious candidate. [whine]But those initial commitments are really unpleasant.[/whine] Scary indeed. Magical reality fluid actually terrifies me, and if it turns out that MW requires it... well, I think I prefer non-locality to that.
-1Peterdjones8yI think that is pretty much the wrong way round. The only way you can model a dimensionless particle in QM is as a Dirac delta function, but they are mathematically intractable (with a parallel argument applying to pure waves), so in a sense there are no particles or waves in QM, and whatever w/p dualism is, it is not a dualism of sharply defined opposites, as would be implied by Bohr's yin-yang symbol! In fact, you see macroscopic pointer readings. That is an important point, since Many Worlders think that the superposition disappears with macroscopic decoherence.
-1Rob Bensinger8yI wasn't specifically assuming dimensionless particles. Classical atoms could be modeled particulately without being points, provided each can be picked out by a fixed position and a momentum. Yes, this distinction is very important for BM too. For example, BM actually fails the empirical adequacy test if you treat 'spin-up' and 'spin-down' as measurable properties of particles.
0Peterdjones8yFor instance, David Deutsch's contribution that BM is just MW with unnecessary additional complexity. Although one can still make the case that MW is BM Done Right. :-)
0Rob Bensinger8yIf one wishes. But MW and BM give contrary answers to almost every question, in spite of their mutual empirical adequacy. They're sufficiently distinct as to almost qualify as alien physics -- incommensurate-yet-coherent in the way you might expect the theories of two independent civilizations to be. That in itself makes the act of trying to evaluate and compare the two kinds of model Bayesianly extremely useful and informative. It really gets to the heart of making some of our core priors explicit.
0Peterdjones8yYou could say, for instance, that BM is nonlocal, and MW local, but that is hardly in favour of BM.
0[anonymous]8yHasn't (some version of) that been ruled out by Bell test experiments?
0Rob Bensinger8yNo. Bell's theorem rules out local hidden-variable models.
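As an aside, the point about local hidden-variable models can be made concrete with a short sketch (a toy model written for illustration, not taken from any post): a Monte Carlo estimate of the CHSH correlation for a deterministic local model, compared against the quantum-mechanical prediction for the same detector settings.

```python
import math
import random

def chsh_local_hidden_variable(trials=100_000):
    """Estimate the CHSH quantity S for a toy local hidden-variable
    model: each particle pair carries a shared random angle (the
    hidden variable), and each detector outputs +/-1 deterministically
    from that angle and its own setting."""
    # Detector settings (radians) that maximize the quantum violation.
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, -math.pi / 4

    def outcome(setting, hidden):
        # Deterministic local response: sign of cos(angle difference).
        return 1 if math.cos(hidden - setting) >= 0 else -1

    def correlation(s1, s2):
        total = 0
        for _ in range(trials):
            hidden = random.uniform(0, 2 * math.pi)
            total += outcome(s1, hidden) * outcome(s2, hidden)
        return total / trials

    return (correlation(a, b) + correlation(a, b2)
            + correlation(a2, b) - correlation(a2, b2))

S_local = chsh_local_hidden_variable()
S_quantum = 2 * math.sqrt(2)  # QM prediction at these settings
# The local model cannot exceed 2 (up to sampling noise);
# QM predicts about 2.828 at these same settings.
print(S_local, S_quantum)
```

Any model of this shape, however the deterministic response function is chosen, stays at or below S = 2; that gap is what the Bell test experiments measure.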
0drnickbone8yOr indeed see the SEP on Modal Interpretations [http://plato.stanford.edu/entries/qm-modal/] since it is arguable that Bohmian mechanics is a special case: There is an interesting further question about whether the modal concept of "possibility" can be further reduced... I guess Eliezer would argue that it should be.
0PedroCarvalho8yI just read Mr. Yudkowsky's articles on Boltzmann Brains and the Anthropic Trilemma... and I had thought of those questions a while ago. While they're not directly related to this comment, I guess I should comment about them here, too. I have no problem thinking of myself as a Boltzmann Brain. Since most (if not all) such Brains will die an instant after existing, I guess my existence could be accurately described as a string of Boltzmann Brains in different regions of spacetime, each containing a small (not sure how small) slice of my existence. Perhaps they all exist at the same time. And the Anthropic Principle would explain the illusion of continuity, somewhat. My main thought on the Boltzmann Brain idea is that any hypothesis that has no way to be tested even in principle is equivalent to the null hypothesis. I guess what I mean is, if I found out right now, with P ~ 1, that my existence is a string of Boltzmann Brains, that would not affect my predictions. I'm not sure I should be thinking this... because this whole matter confuses the hell out of me, but that's my current mental state. As for the Anthropic Trilemma... well, I guess it pretty much means Mr. Yudkowsky has the same doubts as I do. Very, very confusing business indeed. Sometimes I think I should just quit thinking and become a stripper. That was a joke, by the way.

Take the apples and grind them down to the finest powder and sieve them through the finest sieve and then show me one atom of sixness, one molecule of multiplication.

Discworld reference FTW. I would suspect that Pratchett's Death, being the secular humanist and life enthusiast that he is, would strongly approve of our efforts here to eventually render him irrelevant.

It may not be possible to draw a sharp line between things that exist and things that do not exist. Surely there are problematic referents ("the smallest triple of numbers in lexicographic order such that a^3+b^3=c^3", "the historical Jesus", "the smallest pair of numbers in lexicographic order such that a^3+24=c^2", "Shakespeare's firstborn child") that need considerable working with before ascertaining that they exist or do not exist. Given that difficulty, it seems like we work with existence explicitly, as a... (read more)

I am not convinced -- by this article, at least -- that there could only be two kinds of stuff. It sounds like the answer to the question, "why two and not one or possibly three?" is, "because I said so", and that's not very convincing.

I am also not entirely sure what the Great Reductionist Project is, or why it's important.

Note that I'm not arguing against reductionism, but solely against this post.

Could the Born probabilities be basic - could there just be a basic law of physics which just says directly that to find out how likely you are to be in any quantum world, the integral over squared modulus gives you the answer? And the same law could've just as easily have said that you're likely to find yourself in a world that goes over the integral of modulus to the power 1.99999?

But then we would have 'mixed references' that mixed together three kinds of stuff - the Schrodinger Equation, a deterministic causal equation relating complex amplitudes ins

... (read more)
0torekp8yIn the first paragraph you quoted, EY arbitrarily and pointlessly juxtaposes two different questions. I say "pointlessly" charitably, because if there is a point, it's a bad one, to (guilt-by-)associate an affirmative answer to the first, with an affirmative answer to the second. Could the Born probabilities be basic? "Could" would seem best interpreted here as "formulable consistently with the two-factor Great Reductionist approach." "Basic" I'll take as relative to a model: if a law is derived in the model, it's not basic. Now that we know what the question is, the answer is: sure, why not? Physical laws mention "electric charge", "time", "distance"; adding "probability" doesn't seem to break anything, as long as the resulting theory is testable. That basically probabilistic theory might not be the most elegant, but that's a different argument. And there's no need to top probabilities with fundamental-degree-of-realness sauce.
0shminux8yHe is not an instrumentalist, so he finds this approach (anything that helps one make good predictions goes) aesthetically unsatisfying.
0torekp8yI'm not saying or implying that "anything that helps one make good predictions, goes". I really don't think instrumentalism is relevant here; if we take it off the table as an option, there still doesn't seem to be any reason to disprefer a theory that posits "objective probability" to one that posits "electric charge", aside from the overall elegance and explanatory power of the two theories. Which are reasons to incline to believe that a theory is true, I take it, not just to see it as useful.
[-][anonymous]8y 1

Great post as usual, Eliezer! I have to admit that I never thought of logical and causal references being mixed before, but truly that is often exactly how we use them.

I have one question, though: I read through the quantum physics sequence, and I just don't understand - why are the Born probabilities such a problem? Aren't there just blobs of amplitude decohering? Is the problem that all the decoherence is already predicted to happen, without implying the Born rule? If someone could clarify this for me, I'd greatly appreciate it.

[This comment is no longer endorsed by its author]
1PedroCarvalho8yI am not sure I am correct, but if I'm not mistaken, the problem with the Born probabilities is that no one so far has successfully (in the eyes of their peer physicists) proven that they must be true. As in, they're additional. If you go by the standard Copenhagen interpretation, since Collapse is already an arbitrary additional rule, it already sort of contains the Born probabilities: they're just the additional rules that additionally condition how Collapse happens. But any other theories that remove objective, additional Collapse from the picture have this big problem: why, oh, WHY do we get the Born probabilities? Furthermore, we have an even more interesting question: what do they even mean?! Suppose you (temporarily) accept the Born probabilities. What are they probabilities of? Meaning: if there is a 75% chance that you will observe a photon polarised in a given direction, what does that mean, in the grand scheme? Are you divided into 100 copies of you, and 75 of them observe such polarisation, while 25 of them don't? That's... pretty much it. I hope I could help.
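The rule itself is easy to state even if its meaning is not; here is a minimal sketch (the function name and the 75/25 example are illustrative, not from any post) that maps amplitudes to outcome weights, including the hypothetical |amplitude|^1.99999 alternative law mentioned in the post.

```python
import cmath

def outcome_weights(amplitudes, power=2.0):
    """Map complex amplitudes to normalized outcome weights using
    |amplitude|**power. power=2 is the Born rule; any other exponent
    (e.g. 1.99999) is the hypothetical alternative law."""
    weights = [abs(a) ** power for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# A qubit whose amplitudes give the 75%/25% split from the comment:
amps = [cmath.sqrt(0.75), cmath.sqrt(0.25)]
print(outcome_weights(amps))           # approximately [0.75, 0.25]
print(outcome_weights(amps, 1.99999))  # almost, but not exactly, the same
```

The sketch makes the puzzle crisp: nothing in the math breaks if you change the exponent, which is exactly why "why 2?" and "what is being weighted?" are separate open questions.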

I am somewhat confused about the nature of logical axioms. They are not reducible to physical laws, and physical laws are not reducible to logic. So then, in what sense are they (axioms) real? I don't think you are saying that they are "out there" in some Platonic sense, but it also seems like you are taking a realist or quasi-empirical approach to math/logic.

1shminux8yPhysical laws are no more real than logical axioms. Both are human constructs, started as models used to explain observations and grown to accommodate other interests. Just like the physical law F=ma is a model to explain why a heavier ball kicked with the same force does not speed up as much, the logical axiom of transitivity "explains" why if you can trade sheep X for sheep Y and sheep Y for sheep Z, it is OK to trade sheep X for sheep Z in many circumstances.
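The sheep example can be spelled out mechanically (a toy sketch, not from the thread): computing the transitive closure of a direct-trade relation is exactly applying the axiom "x trades for y and y trades for z implies x trades for z" until nothing new follows.

```python
def trade_closure(direct):
    """Compute the transitive closure of a direct-trade relation,
    repeatedly applying the transitivity axiom:
    (x, y) and (y, z) in the relation imply (x, z)."""
    closed = set(direct)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closed):
            for (y2, z) in list(closed):
                if y == y2 and (x, z) not in closed:
                    closed.add((x, z))
                    changed = True
    return closed

trades = {("sheep_X", "sheep_Y"), ("sheep_Y", "sheep_Z")}
print(("sheep_X", "sheep_Z") in trade_closure(trades))  # -> True
```

Whether the closure is a good model of actual trading is an empirical question about sheep and traders; the closure computation itself is pure logic.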
-3Peterdjones8ySo is there any reason past regularities will continue into the future?
0Ben Pace8yLogical Axioms are the rules that decide what can and can't happen. Then, our physical world is one application of these to some starting physical position (and that may be logically defined too, read this post [http://lesswrong.com/lw/1zt/the_mathematical_universe_the_map_that_is_the/], or Good and Real). Logic is useful when we have uncertainty. If we are unsure about a certain variable, we can extrapolate to how the future will be given the different possibilities - the different variables that are logically consistent within a causal universe that fits with everything else we know. Of course, if we had no causal knowledge whatsoever, then we'd not have anything with which to apply logic (kinda like this post [http://lesswrong.com/lw/59/spocks_dirty_little_secret/], with causal reference being emotions, and logic being logic). So, I'm saying that logic can define how everything that could be would work, which we deduce from our universe's laws. If we have uncertainty, then logic defines the possibilities. If we pretend to have only the knowledge of one law, like '1 + 1 = 2', then we can find out more using logic. And this is the study of mathematics.
-1Peterdjones8yNo, logical axioms are much too general for that. You need physical laws to project the future state of the world, and they are much more specific than logical axioms.
0Ben Pace8yCould you provide an example please? I must apologise, I'm not competent with fundamental laws of physics, but why can't the most basic laws (the 'wave function' is apparently one of them) be specified logically? Wouldn't that just be a mathematical description of the first state of a universe? Then that whole universe, specified by the simplest law(s), would be one universe, and those/us within that world would only be able to be affected by the things causally connected. (I suppose I'm talking Tegmark's stuff, although I've only read Drescher's account)
0Peterdjones8yYou could, but that is not what is usually meant by "logical axiom". The rules that decide what can and can't happen are called physical laws.
1Ben Pace8yOkay. I tried to respond here, but I'm not qualified to do so. I'll just state what I'm thinking, and then, if you could point out what I might be confused about, I'll leave it there and might go read some books. I think this is a confusion of definitions. If every universe is described in logic, then the physical laws are a subset of those. So, logic describes everything that is consistently possible and then whichever universe we're in is a subset. Logic describes how our universe works. So the Great Reductionist Project is defining which branch of logical description space we are, and showing on the way that no part of the universe is not describable within logic.
-1Peterdjones8yYes, largely. No, if you buy a book on logic, it doesn't describe the universe. To get a description of our universe in mathematical/logical terms, you have to add in empirical information. There is a convenient shorthand for that: physics. Physics describes how our universe works. Huh? How can it show that? Whether there is a part of our universe that is not describable by logic is an empirical claim. Science could encounter something irreducible at any point.
0ThrustVectoring8yI think this may have been answered earlier. They are a set of ways you think a certain class of problem works. They're very much an element of your mental model of reality. In other words, math (or logical axioms) are what adding two pebbles and three pebbles has in common with adding two apples and three apples.
0JMiller8yThank you. In that case, does math rely on at least one particular agent or computer having some [true] model that 2+3 = 5?
1ThrustVectoring8yUhm, not really. I'm not entirely sure what you mean by "math relies on things doing math". Math isn't about the thinking apparatus doing math. It's a way of systematically reducing the complexity of your mental models - it replaces adding pebbles and adding apples with just adding. If you imagine a universe with 4 particles in it, then 2+3 is still 5.
0JMiller8yI found Eliezer's post "Math is Subjectively Objective" which explains his position very clearly. Thanks for your help.
1Peterdjones8yNo it doesn't, since it ends "Damned if I know."
3JMiller8yRight, which explains his position: math is real and 2+3 really is 5, but he does not know what that means, or where that is true. You are right though, it isn't a fully fleshed out account. All I said is that it explains his position clearly, not that his position itself is perfectly clear.
0Peterdjones8yI don't think it even makes it clear that math is real, just that mathematical truth is objective and timeless.
1[anonymous]8yYes (show me one atom containing the Peano axioms, containing math, etc.). Like you already implied, though, the statement "2+3 = 5" is "true" with respect to the Peano axioms whether an agent takes the time to look or not.
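That point can be made very concrete (a toy sketch, written for illustration): define zero and a successor operation in the Peano style, define addition by its two recursion equations, and "2 + 3 = 5" falls out with no apples, pebbles, or observers anywhere in sight.

```python
ZERO = ()

def S(n):
    """Successor: S(n) represents n + 1, encoded as tuple nesting."""
    return (n,)

def add(a, b):
    # Peano-style addition: a + 0 = a ; a + S(b) = S(a + b)
    return a if b == ZERO else S(add(a, b[0]))

def numeral(k):
    """Build the k-th numeral by k applications of S to ZERO."""
    n = ZERO
    for _ in range(k):
        n = S(n)
    return n

two, three, five = numeral(2), numeral(3), numeral(5)
print(add(two, three) == five)  # -> True
```

Nothing in the derivation depends on what the numerals are "made of" - tuples here, strokes on paper elsewhere - which is one way of cashing out "true with respect to the axioms whether anyone looks or not."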
0DaFranker8yI think this question is somewhat ambiguous; you've gotten two correct answers that say "contradicting" (different) things and apparently answer different questions. When you say math, are you talking about the way apples and stones interact and the states of the universe afterwards when the universe performs "operations" on them? If so, then math is agent-independent, as the world-state of 2+3 apples will be five apples regardless of the existence of some agent performing "2+3=5" in that universe. If you're talking about the existence of the "rules of mathematics", our study of things and of counting, along with the knowledge and models that said abstract study implies, then it does rely on agents having 2+3=5 models, because otherwise there's just a worldstate with two blobs of particles somewhere, three blobs of particles elsewhere, and then a worldstate that brings the blobs together and there's a final worldstate that doesn't need "2+3=5" to exist, but requires an agent looking at the apples and performing "mathematics" on their model of those blobs of particles in order to establish the model that two and three apples will be five apples. In other words, what-we-know-as "mathematics" would not have been invented if there were no agent using a model to represent reality, as mathematics are abstract methods of description. However, the universe would continue to behave in the same manner whether we invented mathematics or not, and as such the behaviors implied by mathematics when we say "2+3 apples = 5 apples" are independent of agents.
0JMiller8ySo when an agent or computing device performs an operation on real numbers, say division of 1200 by 7, that result is real, even though the instance of this division requires the agent to do it? The answer IS the only answer, but without an agent, there would not be a question in the first place?
0DaFranker8yThat result is logically valid and consistent, but does not have any new physical real-ness that it didn't already have - that is, its correlation and systematic consistency with the rules of how the universe works. Otherwise, yes, exactly.
-2Armok_GoB8yYour assumption that physical laws are not reducible to logic is false. http://arxiv.org/abs/0704.0646 [http://arxiv.org/abs/0704.0646]
3shminux8yThis is extremely controversial, so I'd not use the word "false" here.
2JMiller8yI don't have time to read this this week, but when I do I will get back to you. Thanks for the article.
0JMiller8yThanks for the paper! I have started to read this and am admittedly overwhelmed. I think I understand the concept, but without the ability to understand the math, I feel limited in my scope to comprehend this. Would you be able to give me a brief summary of why we should accept MUH and why it is controversial?
1Armok_GoB8yWe should believe MUH because it's mathematically impossible to consistently believe in anything that's not math, because beliefs are made of math and can't refer to things that are not math. It's controversial because humans are crazy, and can't ignore things genetically hard-coded into their subconscious no matter how little sense it makes. EDIT: Appears I was stupid and interpreted your question literally instead of trying to make an actual persuasive explanation. Can't really help you with that; I absolutely suck at explaining things, especially things I see as self-evident. I literally cannot imagine what it being any other way would even mean, so I can't explain how to get from there to here.
1JMiller8yI appreciate your attempt to try though. Thanks.
1ArisKatsaris8y...to me that sounds like saying "words are made of letters and can't refer to things that are not letters, therefore e.g. trees and clouds must be made of letters." It sounds like a map-territory confusion of insane degree. The Mathematical Universe Hypothesis may be true, but this argument doesn't really work for me.
0Armok_GoB8yCorrect, I've edited my post to clarify.
0Peterdjones8yI can see no evidence for that. Or that. Or that. I also don't see how the conclusion follows even if they are all true.
0Armok_GoB8yYeah, I was stupid; edited my post.

This is just the same sort of problem if you say that causal models are meaningful and true relative to a mixture of three kinds of stuff, actual worlds, logical validities, and counterfactuals, and logical validities.

You have a typo there, I think. "Logical validities" appears twice. If it's not a typo, the sentence is very unclear.

Tangential: I keep not understanding counterfactuals intuitively, not because of the usual reason, but simply because if I take my best model of the past and rerun it towards the future I do not arrive at the present, due to stochastic and chaotic events.

Aka, trying to do the standard math: I throw a 100-sided die, it comes out 73: "If 2+2 were equal to 4, the die would with 99% certainty have come out 73".

2torekp8yThe statement is true, but because making a statement in a conversation is normally taken to have a point, nobody would ever say such a thing. If it rings false to your ears, that's your social instincts rightly warning you that making such a statement would be likely to deceive someone. Compare: my super-smart friend is studying for a test. I know he'll ace it no matter what. I wouldn't tell him "if you go to bed now and get some sleep you'll ace it tomorrow", and I wouldn't tell him "if you study all night you'll ace it", despite both of those being true. In either case he would think the first part of my statement was relevant.
1Armok_GoB8yThen how can anyone meaningfully talk about "what would have happened if X had happened instead of Y, Z years ago", when there'd be billions of changes due to randomness vastly larger than the kind of things humans tend to respond to that type of question with, completely drowning them out?
1fubarobfusco8yBut this is because the purpose of saying the above isn't merely to inform your friend of a true statement — it's to convince him to get a good night's sleep, in order to cause him to be well and happy.
0[anonymous]8yIt's not just that. See also Section 5 of this chapter [http://www.cambridge.org/assets/linguistics/cgel/chap1.pdf] of The Cambridge Grammar of the English Language.
[-][anonymous]8y 0

Technically you'd get 6 apples^2. Sorry, just making a joke. :)

EDIT: Doubly sorry, someone else already made this comment! How can this be deleted? :)

If great minds think alike, then from this evidence we can conclude that puny minds joke alike :)

[This comment is no longer endorsed by its author]

Well, you've certainly ground the 'hogfathers' argument into dust, but I've gotta point out that 2 apples times 3 apples isn't 6 apples; it's 6 SQUARE apples. Just for what it's worth.

1MugaSofer8yThe number of apples, not the apples themselves, are being multiplied.
[-][anonymous]8y -1

unless you believe the Illuminati planned it all

Er... Why? :-)