Yesterday I talked about the cubes {1, 8, 27, 64, 125, ...} and how their first differences {7, 19, 37, 61, ...} might at first seem to lack an obvious pattern, but taking the second differences {12, 18, 24, ...} takes you down to the simply related level.  Taking the third differences {6, 6, ...} brings us to the perfectly stable level, where chaos dissolves into order.
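    The difference table described above is easy to check mechanically. Here is a minimal sketch (the helper name `diff` is my own):

```python
# Difference table for the cubes: diff() takes first differences, and
# three applications reach the constant level.
def diff(seq):
    """Return the first differences of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

cubes = [n**3 for n in range(1, 8)]  # [1, 8, 27, 64, 125, 216, 343]
print(diff(cubes))              # [7, 19, 37, 61, 91, 127]
print(diff(diff(cubes)))        # [12, 18, 24, 30, 36]
print(diff(diff(diff(cubes))))  # [6, 6, 6, 6] -- the perfectly stable level
```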

    But this (as I noted) is a handpicked example.  Perhaps the "messy real world" lacks the beauty of these abstract mathematical objects?  Perhaps it would be more appropriate to talk about neuroscience or gene expression networks?

    Abstract math, being constructed solely in imagination, arises from simple foundations - a small set of initial axioms - and is a closed system; conditions that might seem unnaturally conducive to neatness.

    Which is to say:  In pure math, you don't have to worry about a tiger leaping out of the bushes and eating Pascal's Triangle.

    So is the real world uglier than mathematics?

    Strange that people ask this.  I mean, the question might have been sensible two and a half millennia ago...

    Back when the Greek philosophers were debating what this "real world" thingy might be made of, there were many positions.  Heraclitus said, "All is fire."  Thales said, "All is water."  Pythagoras said, "All is number."

    Score:    Heraclitus 0    Thales 0    Pythagoras 1

    Beneath the complex forms and shapes of the surface world, there is a simple level, an exact and stable level, whose laws we name "physics".  This discovery, the Great Surprise, has already taken place at our point in human history - but it does not do to forget that it was surprising.  Once upon a time, people went in search of underlying beauty, with no guarantee of finding it; and once upon a time, they found it; and now it is a known thing, and taken for granted.

    Then why can't we predict the location of every tiger in the bushes as easily as we predict the sixth cube?

    I count three sources of uncertainty even within worlds of pure math - two obvious sources, and one not so obvious.

    The first source of uncertainty is that even a creature of pure math, living embedded in a world of pure math, may not know the math.   Humans walked the Earth long before Galileo/Newton/Einstein discovered the law of gravity that prevents us from being flung off into space.  You can be governed by stable fundamental rules without knowing them.  There is no law of physics which says that laws of physics must be explicitly represented, as knowledge, in brains that run under them.

    We do not yet have the Theory of Everything.  Our best current theories are things of math, but they are not perfectly integrated with each other.  The most probable explanation is that - as has previously proved to be the case - we are seeing surface manifestations of deeper math.  So by far the best guess is that reality is made of math; but we do not fully know which math, yet.

    But physicists have to construct huge particle accelerators to distinguish between theories - to manifest their remaining uncertainty in any visible fashion.  That physicists must go to such lengths to be unsure, suggests that this is not the source of our uncertainty about stock prices.

    The second obvious source of uncertainty is that even when you know all the relevant laws of physics, you may not have enough computing power to extrapolate them.  We know every fundamental physical law that is relevant to a chain of amino acids folding itself into a protein.  But we still can't predict the shape of the protein from the amino acids.  Some tiny little 5-nanometer molecule that folds in a microsecond is too much information for current computers to handle (never mind tigers and stock prices).  Our frontier efforts in protein folding use clever approximations, rather than the underlying Schrödinger equation.  When it comes to describing a 5-nanometer object using really basic physics, over quarks - well, you don't even bother trying.

    We have to use instruments like X-ray crystallography and NMR to discover the shapes of proteins that are fully determined by physics we know and a DNA sequence we know.  We are not logically omniscient; we cannot see all the implications of our thoughts; we do not know what we believe.

    The third source of uncertainty is the most difficult to understand, and Nick Bostrom has written a book about it.  Suppose that the sequence {1, 8, 27, 64, 125, ...} exists; suppose that this is a fact.  And suppose that atop each cube is a little person - one person per cube - and suppose that this is also a fact.

    If you stand on the outside and take a global perspective - looking down from above at the sequence of cubes and the little people perched on top - then these two facts say everything there is to know about the sequence and the people.

    But if you are one of the little people perched atop a cube, and you know these two facts, there is still a third piece of information you need to make predictions:  "Which cube am I standing on?"

    You expect to find yourself standing on a cube; you do not expect to find yourself standing on the number 7.  Your anticipations are definitely constrained by your knowledge of the basic physics; your beliefs are falsifiable.  But you still have to look down to find out whether you're standing on 1728 or 5177717.  If you can do fast mental arithmetic, then seeing that the first two digits of a four-digit cube are 17__ will be sufficient to guess that the last digits are 2 and 8.  Otherwise you may have to look to discover the 2 and 8 as well.
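    The 17__ claim is easy to verify by brute force (a quick sketch, not part of the original argument):

```python
# Enumerate every four-digit cube and keep those whose first two digits
# are "17"; the claim is that 1728 = 12**3 is the only match.
four_digit_cubes = [n**3 for n in range(10, 22)]  # 10**3 = 1000 ... 21**3 = 9261
matches = [c for c in four_digit_cubes if str(c).startswith("17")]
print(matches)  # [1728]
```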

    To figure out what the night sky should look like, it's not enough to know the laws of physics.  It's not even enough to have logical omniscience over their consequences.  You have to know where you are in the universe.  You have to know that you're looking up at the night sky from Earth.  The information required is not just the information to locate Earth in the visible universe, but in the entire universe, including all the parts that our telescopes can't see because they are too distant, and different inflationary universes, and alternate Everett branches.

    It's a good bet that "uncertainty about initial conditions at the boundary" is really indexical uncertainty.  But if not, it's empirical uncertainty, uncertainty about how the universe is from a global perspective, which puts it in the same class as uncertainty about fundamental laws.

    Wherever our best guess is that the "real world" has an irretrievably messy component, it is because of the second and third sources of uncertainty - logical uncertainty and indexical uncertainty.

    Ignorance of fundamental laws does not tell you that a messy-looking pattern really is messy.  It might just be that you haven't figured out the order yet.

    But when it comes to messy gene expression networks, we've already found the hidden beauty - the stable level of underlying physics.  Because we've already found the master order, we can guess that we won't find any additional secret patterns that will make biology as easy as a sequence of cubes.  Knowing the rules of the game, we know that the game is hard.  We don't have enough computing power to do protein chemistry from physics (the second source of uncertainty) and evolutionary pathways may have gone different ways on different planets (the third source of uncertainty).  New discoveries in basic physics won't help us here.

    If you were an ancient Greek staring at the raw data from a biology experiment, you would be much wiser to look for some hidden structure of Pythagorean elegance, all the proteins lining up in a perfect icosahedron.  But in biology we already know where the Pythagorean elegance is, and we know it's too far down to help us overcome our indexical and logical uncertainty.

    Similarly, we can be confident that no one will ever be able to predict the results of certain quantum experiments, only because our fundamental theory tells us quite definitely that different versions of us will see different results.  If your knowledge of fundamental laws tells you that there's a sequence of cubes, and that there's one little person standing on top of each cube, and that the little people are all alike except for being on different cubes, and that you are one of these little people, then you know that you have no way of deducing which cube you're on except by looking.

    The best current knowledge says that the "real world" is a perfectly regular, deterministic, and very large mathematical object which is highly expensive to simulate.  So "real life" is less like predicting the next cube in a sequence of cubes, and more like knowing that lots of little people are standing on top of cubes, but not knowing who you personally are, and also not being very good at mental arithmetic.  Our knowledge of the rules does constrain our anticipations, quite a bit, but not perfectly.

    There, now doesn't that sound like real life?

    But uncertainty exists in the map, not in the territory.  If we are ignorant of a phenomenon, that is a fact about our state of mind, not a fact about the phenomenon itself.  Empirical uncertainty, logical uncertainty, and indexical uncertainty are just names for our own bewilderment.  The best current guess is that the world is math and the math is perfectly regular.  The messiness is only in the eye of the beholder.

    Even the huge morass of the blogosphere is embedded in this perfect physics, which is ultimately as orderly as {1, 8, 27, 64, 125, ...}.

    So the Internet is not a big truck... it's a series of cubes.


    I love the pun at the end. "The Internet is a series of cubes" - priceless!

    I thought you were going to talk about complexity. A finite set of object types and modes of combination can give rise to an infinity of possible structures, and so there will be no upper bound on the complexity that an individual structure might exhibit, even when the rules are simple. This might be conceived of as "messiness".

    And I think Heraclitus and Thales deserve more than zero points. In the three-way contest you describe, they are the advocates of substance, and Pythagoras is the platonist. What do you think of Anaximander - "All is apeiron"? It is a more sophisticated version of the Heraclitus-Thales position, one that does not identify the hypothesized universal substance with any particular observed substance, like fire or water, as if it were more fundamental than other observed substances. I suppose Anaximander's opposite would be Empedocles, who may have introduced the idea that there is more than one fundamental substance.

    The problem with saying that "all is mathematics" is that, like all platonism, it tries to short-circuit the relationship between substance and property, by focusing solely on properties.

    "The messiness is only in the eye of the beholder."

    Yes, and also the order, too-- at least the order that is conceived. That a particular idea of a particular order TRULY represents the state of reality is a matter of opinion (not necessarily arbitrary or careless opinion-- but still opinion).

    I don't know how to escape the fact that order, of some kind, is a fact. Otherwise I could not make my way in the world at all. But no good skeptic denies order in general. I don't either. I just keep asking "which order?" Is there only one possible order that can account for what we see? No conceivable argument can establish that.

    Competent skeptics, in my opinion, argue that you can't be certain what the order is. AND, they argue that you don't need certainty, anyway. Just say, "I've decided to treat this as true and here's why I think my way is better." You don't need to say "and it is impossible that a reasonable person could come to any different idea about this". Skepticism is a position that has heuristic value, just as there is heuristic value to being a dogmatist or a religious fundamentalist. I just like the heuristic of skepticism better. I harp on this because it seems that 3/4 of your writing makes a great case for openness, and the remaining stuff seems to sound a note of "Thou must bow to the Great Model that my Friends and I use to explain everything." You could instead talk about the benefits of your model and leave it at that.

    I'm combing through your text looking for an argument that justifies giving a privilege to order. I'm not finding it. You keep reasoning from coherence, but coherence is just more induction, and coherence requires that you decide a priori, what constitutes evidence and how much is needed before you stop. To echo your own form of argument, not since George Berkeley and Immanuel Kant has it been acceptable to ignore a priori categories of thought.

    Do you really not see the turtle your ideas stand on, or are you thinking that it's turtles all the way down?

    Do you really not see the turtle your ideas stand on, or are you thinking that it's turtles all the way down?

    Eliezer doesn't deign to acknowledge the existence of turtles... unless he's criticizing an idea that he doesn't like.

    What's the name of Bostrom's book, by the way?

    Also, can you really say that many worlds is "quite definitely" true? Or is that just a personal opinion of yours? I don't know of any convincing experimental data to back it up.

    Maybe I've misunderstood what you meant when you referred to different versions of ourselves getting different results when measuring some variable in a quantum system.

    Gödel's incompleteness theorem makes the notion of the "real world" as a consistent, complete mathematical system (or even modeling it as such) rather difficult, doesn't it?

    EY writes, "The best current knowledge says that the "real world" is a perfectly regular, deterministic, and very large mathematical object which is highly expensive to simulate."

    What about free will?

    That's easy, Cryonics. Free will doesn't exist, except in the compatibilist sense.

    Anon, Bostrom's book is Anthropic Bias: Observation Selection Effects in Science and Philosophy.

    Raj, Godel's Theorem does not say what people believe it does (would take a separate post). But in any case, Godel's Theorem surely does not show that natural numbers don't exist. It says you'll have trouble proving certain theorems. The observed universe is like the natural numbers, not like a theorem about them.

    James, you would seem to be committing the Mind Projection Fallacy. Whether reality is a perfectly regular mathematical object, and whether we can ever have perfect confidence of this fact even if it is true, are two entirely different questions.

    Free will (I mean real free will. "compatibilist" free will is meaningless.) might or might not exist. I think it does but there's not yet any conclusive evidence either way.

    Actually the universe is probably nondeterministic even if free will doesn't exist, just because of quantum mechanics. A hidden variable theory is a possibility but unless one is ever actually proposed I'll have to presume that yes, there really is random stuff. "Many worlds" is just ridiculous, and besides, even though it's deterministic in theory, so what? It doesn't help us make predictions in real life.

    According to a deterministic many-worlds theory, free will exists, and not only in the compatibilist sense, but also in the libertarian sense. The compatibilist account is that my action is fixed, but I would be able to do otherwise if I wanted to; it is just that I don't want to and can't want to. But in the many worlds theory, I can do either, not only if I wanted to but in reality: for according to the many worlds theory, in fact I do both, which proves that I can do each of them. So many-worlds is not compatibilist, but libertarian, despite the fact that it is deterministic. (In another way of putting it: from the point of view of the outside observer, it is deterministic; from the viewpoint of an observer who is about to split into two observers, it is libertarian.)

    You can't decide which branch you go down. That's confusing the mental level of your model with the quantum level. (In reality, the branches split based only on the fundamental state of things, not whether that state contains a pattern we call a mind.)

    That's not really free though, because you're forced to make all possible choices. I guess there are ways that free will could be shoehorned onto the many-worlds model, but both make it even less attractive than it already is. One would be to say that the free-will-thingy only goes in one path rather than splitting. This would have nasty implications; since with an astronomically high probability every free-will-bearing person would be the only one in that universe, so it would be moral to be a psychopath. Another way would be that free will works by eliminating branches, but then it's no longer all possible universes, and if you're going that way why not go all the way and have just one anyway? So as far as I'm concerned, many-worlds -> no free will. Just one of the reasons I don't like it.

    The main problem I have with it though is it posits a huge amount of unobservable information. That's way too high a price to pay just to get rid of wavefunction collapse. I really don't buy the argument that the monstrous Everett multiverse is somehow simpler than the nice compact Copenhagen universe.

    "only because our fundamental theory tells us quite definitely that different versions of us will see different results".

    EY, on what do you base your 'quite definitely' ? David Lewis ?

    I mean many-worlds.

    I wish to hell that I could just not bring up quantum physics. But there's no real way to explain how reality can be a perfect mathematical object and still look random due to indexical uncertainty, without bringing up quantum physics.

    The Everett universe is simpler by the Bayesian version of Occam's Razor, algorithmic information. I'll write a post on this eventually, I think.

    But imagine a physics that is just like our physics, except that the amazing new theory says that whenever conventional physics predicts you won't be able to see an object any more (for example, you threw it away to infinity), that object ceases to exist, because it's "no longer necessary". This amazing new theory violates conservation laws, and worse, introduces a subjective note into physics (objects stop existing when people can't see them), and even worse, produces no new testable predictions and in fact has to peek at the simpler theory to find out when objects ought to "vanish".

    Well, that's just what the Copenhagen interpretation looks like relative to many-worlds - the Copenhagen interpretation says that large clouds of amplitude vanish from configuration space, at exactly the point when the simpler theory says that decoherence prevents the local you from seeing it any more. This violates unitarity, CPT symmetry, relativity, and half a dozen other basic principles of physics; plus it introduces a note of subjectivity that confused half the planet for half a century; furthermore it adds a strictly extra law of physics that produces no new predictions; and finally, it has to peek at decoherence calculations in order to find out whether or not it's safe to declare that an amplitude cloud has "vanished".

    So the probability of the many-world interpretation relative to the Copenhagen interpretation, is essentially equivalent to the probability of the theory that your missing socks are somewhere behind your dryer, relative to the theory that your missing socks have been banished from existence by supernatural fairies that only come out when you're not allowed to look behind the dryer.

    Not all physicists agree with this, perhaps because not all physicists realize that probability theory is a technical subject, so some of them talk about "Occam's Razor" without knowing how to do calculations that involve Occam's Razor, or talk about "falsifiability" without knowing how to calculate exactly how much a piece of evidence falsifies something. However, I believe that polls have shown that a majority of physicists do know better than to believe in Copenhagen fairies, at this point. I don't really feel like waiting for the other 37% or whatever of physicists to catch up. It's not the polls that nail down many-worlds, it's the evidence as interpreted sanely.

    I know this isn't a complete explanation, but I hope it gives you some idea of what's going through my mind when I just speak as if many-worlds is true, without Copenhagen caveats. It's for the same reason I don't end all my sentences with, "Unless a magic chocolate cake is altering my mind." Wave functions don't collapse. It was a silly idea.

    As for free will: "Free will" is a name for a state of confusion, not a name for something that either does or does not exist. Tell me an experiment I can do to find out whether someone has "free will", and I'll tell you whether or not it can exist in a mathematically regular universe.

    I'll write a post on this eventually, I think.

    I find this way too amusing in retrospect.

    It occurs to me that one consequence of learning about QM from the sequence (as many people are doing), is that you then need to un-learn wavefunction realism, if you want to think about the subject for yourself. A better way to learn QM is to approach it as an incomplete classical-looking theory. E.g. a particle isn't really a wavefunction; it's a particle, with a position and momentum that we only know imprecisely, and the wavefunction is a calculating device that gives you the probabilities. Once you're clear on that picture, then you can say "this theory is manifestly incomplete; what's the actual physical reality, and why does this wavefunction thing work?" And then you're in a position to consider whether the wavefunction itself could somehow be the actual physical object. But because the sequence presupposes wavefunction realism from the beginning - even the Copenhagen interpretation is mostly portrayed as being about an objectively existing wavefunction with two modes of evolution - it would take an unusually careful reader to come to the sequence with no prior knowledge of QM, and still notice the possibility that wavefunctions aren't real.

    Probably true.
    That said, I'm not sure how many readers could approach QM as a "classical-looking theory" and notice the possibility that particles aren't real.
    I'm also not sure there's a way to approach QM -- or, indeed, anything else -- that doesn't bias the reader in favor of some ontology.


    Incidentally, New Scientist ("Ghosts in the atom: Unmasking the quantum phantom", Aug 2, 2012) are now reporting that theoretical breakthroughs have disproved non-realist interpretations of QM. It's been shown that different interpretations of QM have different empirical consequences, and the naive version of the Copenhagen interpretation contradicts empirical data.

    "Now Matthew Pusey and Terry Rudolph of Imperial College London, with Jonathan Barrett of Royal Holloway University of London, seem to have struck gold. They imagined a hypothetical theory that completely describes a single quantum system such as an atom but, crucially, without an underlying wave telling the particle what to do.

    Next they concocted a thought experiment to test their theory, which involved bringing two independent atoms together and making a particular measurement on them. What they found is that the hypothetical wave-less theory predicts an outcome that is different from standard quantum theory. "Since quantum theory is known to be correct, it follows that nothing like our hypothetical theory can be correct," says Rudolph (Nature Physics, vol 8, p 476).

    Some colleagues are impressed. "It's a fabulous piece of work," says Antony Valentini of Clemson University in South Carolina. "It shows that the wave function cannot be a mere abstract mathematical device. It must be real - as real as the magnetic field in the space around a bar magnet."

    The Pusey-Barrett-Rudolph result was published (and much discussed) last year. Matt Leifer has a nice, non-sensationalist discussion of the theorem, and he argues convincingly that the theorem does not rule out any interpretation of QM held by contemporary researchers.

    Probably better to cite a description rather than blurbs. Scott Aaronson's post was linked on LW, and is a really good description.

    Quick summary: what the paper shows is that the "wave-function as knowledge" description is incompatible with QM.


    What you describe is the hidden-variable theory of QM, which has been invalidated experimentally. Any interpretation of QM must be inherently “weirder” than observers merely being in a state of ignorance about the velocity and position of billiard-ball particles.

    Any experiment testing many worlds would be an experiment testing free will, for the reason stated.

    OK thanks, nice intuition pump.

    But if you are one of the little people perched atop a cube, and you know these two facts, there is still a third piece of information you need to make predictions: "Which cube am I standing on?"

    This nicely illustrates why discrete uniform probability distributions (as I understand them) over infinite sets don't work very well. I can't make sense of this thought experiment. I'll dump my reasoning below, and would be grateful for any clarifications about what I'm doing wrong.

    Assume I'm one of the people on one of those cubes, know about the entire series including the people on them, and haven't looked at the number below me yet. What's the probability I'm on the first cube, 1? Well, that's one possibility, and there's ... countably infinitely many ... alternatives, so if that probability isn't zero, it's as close as I can make it. The same reasoning applies to every other cube. I know all of the cubes exist, each has a person, and I'm one such person. If this is all I know, I have no particular reason to assign a non-uniform probability distribution over the possible outcomes. So, since I will assign the same probability to finding myself on each of the cubes, that leaves me with the following options:

    1) I can assign a probability of zero, which blows up in my face since I have to conclude I won't find myself on any of the cubes.
    2) I can assign a non-zero probability, which blows up in my face since by summing those probabilities I will necessarily get a total probability of greater than one (or any finite number, for that matter).
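    A quick numeric sketch of the dilemma above (the numbers are my own illustration): give each of N cubes the same probability p, and the total probability is N * p, which fails in both directions.

```python
# Uniform probability over countably many cubes: both options blow up.
def total_probability(p, N):
    """Total probability mass when each of N cubes gets probability p."""
    return p * N

# Option 1: p = 0 gives total probability 0 -- you are on no cube at all.
print(total_probability(0, 10**6))       # 0
# Option 2: any fixed p > 0 pushes the total past 1 once N > 1/p.
print(total_probability(10**-6, 10**7))  # ~10, already an impossible total
```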

    For an alternative view, see this essay by Eric Raymond.

    So, Sebastian, you must use a non-uniform probability distribution.

    Yes, if one picks a random integer, it is logically impossible to use a uniform distribution, so one must pick a distribution that on average chooses lower values with a greater probability than higher values.

    But how can this possibly apply to Sebastian's situation? If the probability is merely anthropic, and the human being on the first cube is exactly like the human being on the second cube and every other one, what on earth does it mean to say that one is more likely to turn out to be on the first cube rather than the second, when there in fact is exactly one human being on each?

    It seems to me that this is a strong argument against the possibility of the situation with the infinite number of cubes. If someone has a different response I would like to see it.

    "Mathematics is beautiful" + "Reality is not like mathematics" doesn't add up to "Reality is ugly".

    Alternatively, Sebastian, you could find that an infinite uniform prior (or rather, the limit of a series of finite priors that approach uniformity) is manageable given the evidence you already have, whose likelihood function produces a nonuniform (limit) posterior. However, it is VERY IMPORTANT that you do all your calculations on finite distributions and then calculate how such finite distributions approach a limit, rather than assuming the limit already accomplished and trying to calculate directly with infinities. Otherwise you will have paradoxes up the wazoo.

    I receive my policy on this from the teachings of the master, E. T. Jaynes, whose holy words may be found in "Probability Theory: The Logic of Science", particularly Chapter 15.
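    A toy version of the finite-then-limit policy (the evidence and numbers here are my own illustration, not from Jaynes): compute the posterior on a finite set of N cubes, and watch it stabilize as N grows, even though the unconditional 1/N prior vanishes.

```python
from fractions import Fraction

def posterior_first_cube(N, evidence_max=1000):
    """Uniform prior over cubes 1..N; evidence: 'my cube index is <= evidence_max'.
    Returns P(I am on cube 1 | evidence), computed on the finite distribution."""
    consistent = min(N, evidence_max)  # cubes compatible with the evidence
    return Fraction(1, consistent)     # uniform over the compatible cubes

# The unconditional P(cube 1) = 1/N goes to zero as N grows, but the posterior
# given the evidence is 1/1000 for every N >= 1000: the sequence of finite
# calculations approaches a well-behaved limit.
print(posterior_first_cube(10**6))  # 1/1000
print(posterior_first_cube(10**9))  # 1/1000
```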

    I wish to hell that I could just not bring up quantum physics. But there's no real way to explain how reality can be a perfect mathematical object and still look random due to indexical uncertainty, without bringing up quantum physics.

    MWI doesn't explain why the Universe has four large dimensions and three small neutrinos. In order to explain that by indexical uncertainty, you have to bring up other multiverse concepts anyway, and if you bring in "ultimate ensemble" theories, then MWI vs. non-MWI no longer matters for the rhetorical point you're making.

    I am personally unconvinced by the arguments that MWI does away with the need for a non-unitary operation, because of the inability of MWI proponents to show that MWI works in a rigorous way without one. I would bet that some combination of Objective Reduction + MWI is the correct physical theory. The point I'm trying to make is that Eliezer's conclusion about indexical uncertainty may still be correct, even if you find Everett's MWI incoherent.

    because of the inability of MWI proponents to show that MWI works in a rigorous way without one

    Do you have any specific problem in mind? Have you read some of the post-2000 papers on how MWI works, like Everett and Structure?

    MW does have at least one specific problem: there is no easy way to account for the specific probabilities in quantum experiments. If an experiment has two branches, and the reason for the probability is indexical uncertainty, then each branch should have a probability of 50%, while in fact this is not necessarily the case.

    Robin Hanson has suggested an answer to this with his "Mangled Worlds" interpretation, but this answer has yet to be confirmed. It seems to me that Eliezer as usual is overconfident: MW might very well be true, but it is nowhere near as certain as he suggests.

    Do you have any specific problem in mind? Have you read some of the post-2000 papers on how MWI works, like Everett and Structure?

    From the paper:

    Two sorts of objection can be raised against the decoherence approach to definiteness. The first is purely technical: will decoherence really lead to a preferred basis in physically realistic situations, and will that preferred basis be one in which macroscopic objects have at least approximate definiteness. Evaluating the progress made in establishing this would be beyond the scope of this paper, but there is good reason to be optimistic. The other sort of objection is more conceptual in nature: it is the claim that even if the technical success of the decoherence program is assumed, it will not be enough to solve the problem of indefiniteness...

    So David Wallace would agree that "decoherence for free", mapping QM onto macroscopic observations without postulating a new non-unitary rule, has not yet been established on that tiny little, nitpicky "purely technical" level. The difference is that Wallace presumably believes that success is Right Around the Corner, whereas I believe the 50 years of failure are strong evidence that the basic approach is entirely wrong. (And yes, I feel the same way about 20 years of failure in String Theory.) Time will tell.

    The ultimate reality of the natural material world on Earth is beautiful but cannot be described by any mathematical formula.

    Merlin Wood

    It is not clear to me that we can actually imagine a universe which could not be described mathematically. If we cannot, I'm not sure we have evidence that the universe "is" a mathematical object.

    Rolf, surely the simplicity of MWI relative to objective collapse is strong evidence that when we have a better technical understanding of decoherence it will be compatible with MWI?

    For those who liked the Wallace paper, there is more here and here.

    This looks interesting as a justification for using Bayesian probability theory (regardless of whether MWI is true).

    Rolf, surely the simplicity of MWI relative to objective collapse is strong evidence that when we have a better technical understanding of decoherence it will be compatible with MWI?

    What do you mean by "compatible"? Do you mean, the observed macroscopic world will emerge as "the most likely result" from MWI, instead of some other macroscopic world where objects decohere on alternate Thursdays, or whenever a proton passes by, or stay a homogeneous soup forever? That's a lot of algorithmic bits that I have to penalize MWI for, given that this has not been demonstrated.

    Here's the linchpin of my argument: why should I believe, a priori, that the observed macroscopic world has a decent chance of popping naturally out of MWI, any more than I should believe that the observed world might pop out from the philosophy "All Is Fire"? Should I believe this just because some people have convinced themselves that it probably does (even if they consistently fail to demonstrate it in a rigorous way)? But such post-hoc intuitive beliefs are notoriously unreliable. Extreme example: many people believe that quantum mechanics emerges naturally from Buddhist beliefs (yet, again, oddly they cannot demonstrate this in a rigorous way, and as an added coincidence, they only started saying this after quantum mechanics had already been discovered by secular experimentation).

    Aside: if MWI'ers had started in 1890, and then used their "simple MWI" theory to go backwards from macroscopic observations to infer the possible existence of quantum mechanics by asking themselves "from what sets of simple theories might the macroscopic world naturally and intuitively emerge", now that would have impressed me.

    I believe that polls have shown that a majority of physicists do know better than to believe in Copenhagen fairies

    Whereas, I believe that different polls give wildly different results. I would rather say that polls show that physicists are subject to peer pressure. Also, I doubt most physicists have beliefs on this subject, since it's irrelevant to the vast majority of them. That was the point of Copenhagen: that it largely didn't matter.

    ‘If you stand on the outside and take a global perspective - looking down from above at the sequence of cubes and the little people perched on top - then these two facts say everything there is to know about the sequence and the people.’

    It seems to me that Bostrom simply has had a question answered differently than the answer given to the cube folk. Start Bostrom in the same initial state of information as the cube folk: Suppose there are cubes, that they are numbered (1, 8, 27, 64, 125…), that there are people, Bostrom among them, and that only one of them is not standing on a cube.

    It seems to me that ‘Who is not standing on a cube?’ is the start of the search for predictability. The answer to that question seems to have been begged in the assumption that places Bostrom ‘not standing on a cube’, outside looking in.

    Being global (not standing on a cube) requires having an answer that contains a 'bit of information certainty' as a possibility for the question asked. A ‘bit of information certainty’ is that bit that, when acquired, allows prediction to occur. In ‘Who is standing on a cube?’, ‘I am’ = no ‘bic’. ‘I am not’ = ‘bic’. For everyone else, a bic is still missing even though they all now understand that they are all on cubes: ‘But if you are one of the little people perched atop a cube, and you know these two facts, there is still a third piece of information you need to make predictions: "Which cube am I standing on?"’

    Accurate representation of Indexical Uncertainty?

    If so, does it make sense that Indexical Uncertainty is not having acquired the bic?

    No pen intended.

    (I’ll confess that I do not really understand how to treat infinities, but what I have gathered is that they, and zeros, are often hidden in calculations and discussions.

    When you have two of the three, the common sense, and the consistency, information is still needed. Not just any information. The information has to be bounded, treating infinities as a finite or a series of finites. Change my facts from ‘only one is not standing on a cube’ to ‘at least one is not standing on a cube’. Neither ‘I am standing on a cube’ nor ‘I am not standing on a cube’ contains a bic that allows prediction to begin, because there is no bic available until the infinity is resolved.)
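    For what it's worth, the cube setup lends itself to a small sketch (the function and variable names here are my own, purely illustrative): the global rule fixes the whole sequence at once, but an inhabitant cannot predict the next cube over without the indexical datum of which cube they stand on.

```python
def cube(n):
    # The global rule: cube number n bears the value n**3.
    return n ** 3

# The "outside" view: the rule determines every value at once.
sequence = [cube(n) for n in range(1, 6)]
print(sequence)  # [1, 8, 27, 64, 125]

# The "inside" view: knowing the rule is not enough to predict the
# next cube over -- you also need the indexical fact of which cube
# you are standing on (the commenter's "bic").
def next_cube_over(my_index):
    return cube(my_index + 1)

print(next_cube_over(5))  # 216
```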

    Eliezer writes: "But in any case, Godel's Theorem surely does not show that natural numbers don't exist. It says you'll have trouble proving certain theorems. The observed universe is like the natural numbers, not like a theorem about them."

    I think whether Godel's Theorem applies or not depends on how we define "understanding reality". A lot of people would interpret it as not only being able to theoretically predict the state of the universe at any given time (ignoring the practical issues, of course!), but being able to determine things like what can exist. Answering these types of questions requires much more complicated logic and could quite possibly be non-computable.

    Nick Bostrom has written a book about it

    Which book would that be?

    Eliezer said in this comment that it's Anthropic Bias: Observation Selection Effects in Science and Philosophy.

    I somehow haven't read this one before. 

    This is a great post!

    I'm curious what David Chapman would make of it. My guess is that he would disagree with something in here.

    But when it comes to messy gene expression networks, we've already found the hidden beauty - the stable level of underlying physics.  Because we've already found the master order, we can guess that we won't find any additional secret patterns that will make biology as easy as a sequence of cubes.  Knowing the rules of the game, we know that the game is hard.  We don't have enough computing power to do protein chemistry from physics (the second source of uncertainty) and evolutionary pathways may have gone different ways on different planets (the third source of uncertainty).  New discoveries in basic physics won't help us here.

    If you were an ancient Greek staring at the raw data from a biology experiment, you would be much wiser to look for some hidden structure of Pythagorean elegance, all the proteins lining up in a perfect icosahedron.  But in biology we already know where the Pythagorean elegance is, and we know it's too far down to help us overcome our indexical and logical uncertainty.

    I'm a little confused about this account—in physics it seems like there are multiple levels of hidden beauty, e.g., the wave equation and Newtonian mechanics. What's the reasoning for expecting only one level of "Pythagorean elegance" for a given phenomenon? Or to put it differently: if the first physical law that humanity had discovered was the wave equation, would you have predicted the existence of Newtonian laws of motion?