The simulation argument, as I understand it:

  1. Subjectively, existing as a human in the real, physical universe is indistinguishable from existing as a simulated human in a simulated universe
  2. Anthropically, there is no reason to privilege one over the other: if there exist k real humans and l simulated humans undergoing one's subjective experience, one's odds of being a real human are k/(k+l) (a toy calculation with made-up numbers follows this list)
  3. Any civilization capable of simulating a universe is quite likely to simulate an enormous number of them
    1. Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge
  4. Our present civilization is likely to reach the point where it can simulate a universe reasonably soon
  5. By 3. and 4., there exist (at some point in history) huge numbers of simulated universes, and therefore huge numbers of simulated humans living in simulated universes
  6. By 2. and 5., our odds of being real humans are tiny (unless we reject 4, by assuming that humanity will never reach the stage of running such simulations)
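
As a toy illustration of step 2, with numbers invented purely for illustration: if 10^10 real humans and 10^15 simulated humans were undergoing one's subjective experience, the odds of being real would be about 10^-5. A one-line sketch:

```haskell
-- Toy illustration of step 2. The values of k and l below are invented
-- for illustration only; nothing here is an estimate of the real counts.
oddsReal :: Double -> Double -> Double
oddsReal k l = k / (k + l)

main :: IO ()
main = print (oddsReal 1e10 1e15)  -- prints roughly 1.0e-5
```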

When we talk about a simulation we're usually thinking of a computer; crudely, we'd represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we're just storing one big number, then performing a calculation and storing another number, and so on. In fact our program is simply another number (witness the DeCSS "illegal prime"). This is effectively the GLUT concept applied to the whole universe.
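
To make that picture concrete, here is a minimal sketch; the State encoding and the step rule are placeholders I have invented, not a claim about what a real physics engine would look like:

```haskell
-- The whole universe as one big number, advanced by one fixed rule.
-- `step` is an arbitrary placeholder standing in for "the laws of physics".
type State = Integer

step :: State -> State
step s = s * 6364136223846793005 + 1442695040888963407

-- The complete history is nothing but iterated application of that rule
-- to an initial number.
history :: State -> [State]
history initialState = iterate step initialState

main :: IO ()
main = print (take 5 (history 42))
```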

But numbers are just... numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see how running the program makes that sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule for our own universe (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms compact enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.
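
To make the Fibonacci example concrete, here is the rule written down in Haskell (a sketch using only standard Prelude functions); the definition is complete whether or not anything ever evaluates it:

```haskell
-- The rule itself, written down as a definition of the whole infinite sequence.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Nothing is computed until a value is demanded; "conceptualizing the rule"
-- and "running the program" differ only in whether this line executes.
main :: IO ()
main = print (take 10 fibs)
```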

Possible ways out that I can see:

  1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]
  2. Accept the other conclusion: either simulations are impractical even for posthuman civilizations, or posthuman civilization is unlikely. But if all that's required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe, this means humanity is unlikely to ever learn these two things, which is... disturbing, to say the least. This stance also seems to require rejecting mathematical Platonism and adopting some form of finitist/constructivist position, in which a mathematical notion does not exist until we have constructed it
  3. Argue that something important to the anthropic argument is lost in the move from a computer calculation to a mathematical expression. This seems to require rejecting the Church-Turing thesis and means most established programming theory would be useless in the programming of a simulation[4]
  4. Some other counter to the simulation argument. To me the anthropic part (i.e. step 2) seems the least certain; it appears to be false under e.g. UDASSA, though I don't know enough about anthropics to say more

Thoughts?

 

[1] As I understand it, there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose

[2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn't bother to evaluate them[5]
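
A toy version of this, using GHC's Debug.Trace to make evaluation visible (the two "regions" are of course invented stand-ins, not a real simulation):

```haskell
import Debug.Trace (trace)

-- A "region" of the simulation that nothing downstream ever depends on.
infallingSpaceship :: Integer
infallingSpaceship = trace "evaluating the infalling spaceship" (sum [1 .. 10 ^ 7])

-- The part of the state that the next step actually uses.
restOfUniverse :: Integer
restOfUniverse = trace "evaluating the rest of the universe" 42

main :: IO ()
main = print restOfUniverse
-- Only "evaluating the rest of the universe" is printed; the spaceship
-- thunk is never forced, because nothing ever demands its value.
```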

If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated - or at least, enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be more efficiently stored as their initial state plus a counter of how many times the function needs to be run to evaluate them, if anyone were to talk to them. If no-one has their contact details any more, we wouldn't even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter
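
A sketch of that "initial state plus a counter" representation; PersonState and stepPerson are hypothetical stand-ins for whatever the simulation actually stores and computes:

```haskell
-- Hypothetical stand-ins: a person's state and one tick of their evolution.
type PersonState = Integer

stepPerson :: PersonState -> PersonState
stepPerson = (+ 1)   -- placeholder dynamics

-- The loner as stored: just a seed state plus how many ticks have passed.
data StoredPerson = StoredPerson PersonState Int

-- Cheap on every tick: only the counter changes.
tick :: StoredPerson -> StoredPerson
tick (StoredPerson s0 n) = StoredPerson s0 (n + 1)

-- Paid for only if someone finally interacts with them.
evaluate :: StoredPerson -> PersonState
evaluate (StoredPerson s0 n) = iterate stepPerson s0 !! n

main :: IO ()
main = print (evaluate (tick (tick (StoredPerson 0 0))))  -- prints 2
```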

Practically every compiler and runtime performs some (more limited) form of this optimization, using dataflow analysis, instruction reordering and dead-code elimination - usually without the programmer having to explicitly request it. Thus if your theory of anthropics counts an "optimized" simulation differently from a "full" one, there is little hope of constructing a "full" simulation without developing a significant number of new tools and programming techniques[4]

[3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought

[4] This is worrying if one is in favour of uploading, particularly forcibly - it would be extremely problematic morally if uploads were in some sense "less real" than biological people

[5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can't discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly - the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don't think this affects the wider argument though

102 comments

  1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

Biting the bullet here is roughly equivalent to accepting Tegmark's Ultimate Ensemble. This was discussed on LW in ata's post from 2010, The mathematical universe: the map that is the territory.

See Tegmark (2008). In particular, Section 6, "Implications for the simulation argument". A relevant extract:

For example, since every universe simulation corresponds to a mathematical structure, and therefore already exists in the Level IV multiverse [the multiverse of all mathematical structures], does it in some meaningful sense exist “more” if it is in addition run on a computer? This question is further complicated by the fact that eternal inflation predicts an infinite space with infinitely many planets, civilizations, and computers, and that the Level IV multiverse includes an infinite number of possible simulations. The above-mentioned fact that our universe (together with the entire Level III multiverse) may be simulatable by quite a short computer program (Sect. 6.2) calls into question whether it makes any ontological difference whether simulations are “run” or not.

...

My thought is that your hypothesis is pretty similar to the Dust Theory.

http://sciencefiction.com/2011/05/23/science-feature-dust-theory/

And Greg Egan's counter-argument to the Dust Theory is pretty decent:

However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.

I think the same counter-argument applies to your hypothesis.

VincentYu:
A steelmanned version of Egan's counterargument can be found in what Tegmark calls the (cosmological) measure problem. Egan's original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest; we already do that for the many-worlds interpretation! In Tegmark (2008) (see my other comment): Tegmark makes a few remarks on using algorithmic complexity as the measure: Each of the analogous problems in eternal inflation and the string theory landscape is also called the measure problem (in eternal inflation: how to assign measure over the potentially infinite number of inflationary bubbles; in the string theory landscape: how to assign measure over the astronomical number of false vacua). In the many-worlds interpretation, the analogous measure problem is resolved by the Born probabilities.
brazil84:
I don't understand this at all. Can you give an example of such an appropriate measure?
VincentYu:
An example of a measure in this context would be the complexity measure that Tegmark mentioned, as long as we agree on a way to encode mathematical structures (the nonuniqueness of representation is one of the issues that Tegmark brought up). Whether this is an appropriate measure (i.e., whether it correctly "predicts conditional probabilities for what an observer should perceive given past observations") is unknown; if we knew how to find out, then we could directly resolve the measure problem! An example of a context where we can give the explicit measure is in the many-words interpretation, where as I mentioned, the Born probabilities resolve the analogous measure problem.
brazil84:
So you are saying that the "Born probabilities" are an example of an "appropriate measure" which, if "postulated," rebuts Egan's argument? Is that correct?
lmm:
The Born probabilities apply to a different context - the multiple Everett branches of MWI, rather than the interpretative universes available under dust theory. If we had an equivalent of the Born probabilities - a measure - for dust theory, then we'd be able to resolve Egan's argument one way or another (depending on which way the numbers came out under this measure). Since we don't yet know what the measure is, it's not clear whether Egan's argument holds - under the "Tegmark computational complexity measure" Egan would be wrong; under the "naive measure" Egan is right. But we need some external evidence to know which measure to use. (By contrast in the QM case we know the Born probabilities are the correct ones to use, because they correspond to experimental results (and also because e.g. they're preserved under a QM system's unitary evolution)).
brazil84:
I would guess you are probably correct that Egan's argument hinges on this point. In essence, Egan seems to be making an informal claim about the relative likelihood of an orderly dust universe versus a chaotic one. Boiled down to its essentials, VincentYu's argument seems to be that if Egan's informal claim is incorrect, then Egan's argument fails. Well duh.
[anonymous]:
Here's a visual representation of the dust theory by Randall Munroe: http://xkcd.com/505/
falenas108:
I'm not sure I agree with that argument. The fact that quantum mechanics exists, and there are specifically allowed states, is exactly the type of thing I'd expect from a universe driven by a computer simulation. Discrete values are much easier than continuous sets. On the other hand, superposition and entanglement seem suboptimal.
brazil84:
I'm not sure I understand your point. Are you saying that a simulation which is just a mathematical construct would probably not result in a quantized universe?
falenas108:
I was intending to say the opposite; that a quantized world would seem like it would take less computational power than a continuous one, therefore the fact that we live in a quantized world is evidence of being in a simulation.
brazil84:
That's not an unreasonable point, but I think it goes more to the issue of simulation versus non-simulation than the issue of computer-based simulation versus mathematical construct simulation.
Baughn:
Well, I suppose we could postulate something like a continuous version of quantum mechanics for a host universe if we'd like.
lmm:
Glad to see this has been thought of; that argument was where I was headed in [3] (and this whole line of thought greatly annoyed me when reading Permutation City, so I'm glad Egan's at least looked at it a bit). This gets us a contradiction, not a refutation, and one man's modus ponens is another man's modus tollens. Can we use this to argue for a flaw in the original simulation argument? I think it again comes down to anthropics: why are our subjective experiences reverse-anthropically more likely than those of dust arrangements? And into which class would simulated people fall?
brazil84:
I don't think so since it's reasonable to hypothesize that man-made simulations would, generally speaking, be more on the orderly side as opposed to being full of random nonsense. But it's still an interesting question. One can imagine a room with 2 large computers. The first computer has been carefully programmed to simulate 1950s Los Angeles. There are people in the simulation who are completely convinced that they live in Los Angeles in the 1950s. The second computer is just doing random computations. But arguably there is some cryptographic interpretation of those computations which also yields a simulation of 1950s Los Angeles.
Baughn:
I'd like to see that argument. If you can find a mapping that doesn't end up encoding the simulation in the mapping, I'd be surprised.
brazil84:
Well why should it matter if the simulation is encoded in the mapping?
Baughn:
If it is, that screens off any features of what it's mapping; you can no longer be surprised that 'random noise' produces such output.
brazil84:
Again, so what? Let me adjust the original thought experiment: the operation of the first computer is encrypted using a very large one-time pad.
V_V:

Epistemology 101: Proper beliefs are (probabilistic) constraints over anticipated observations.
How does the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god constrain what we expect to observe?

lukstafi:
Only in an objective modal sense. Beliefs are probabilistic constraints over observations anticipated given a context. So in the example with stars moving away, the stars are still observables because there is a counterfactual context where we observe them from nearby (by traveling with them etc.)
lmm:
I don't think that can be right. We believe in the continued existence of stars that have moved so far away that we can't possibly observe them (due to inflation).
V_V:
Yet, that belief constrains our observations.
lmm:
How does it? What would we observe differently if some mysterious god destroyed those stars as soon as they moved out of causal contact with humanity?
V_V:
No, but the hypothesis of a mysterious god destroying stars exactly when our best cosmological models predict we should stop seeing them is unparsimonious. And anyway, distant stars never appear to cross the cosmological event horizon from our reference frame. Their light becomes redshifted so much that we can't detect it anymore.
lmm:
Sure. But believing or not believing in it doesn't constrain what we expect to observe, just the same as "the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god". What's different from the situation in your first post?
Ishaan:
Point of order: I feel like we shouldn't be putting these two so close together. "All mathematical statements are equally real" and "We are being simulated" seem like two different claims that shouldn't be blurred together - the first is a matter of ontology and semantics, the second is a matter of fact. If all mathematical structures are equally real it might have weird moral implications, especially for simulations, but even if we successfully reject the idea that all mathematical structures are equally real it does not rule out the simulation hypothesis, and if we accept the idea that all mathematical structures are equally real it does not confirm the simulation hypothesis.
V_V:
Epistemology 101, part two: choose the simplest hypothesis among those which are observationally indistinguishable from each other.
lmm:
I think the hypothesis that human civilization will at some point derive the ultimate laws of physics, along with enough observations about the state of the early universe to construct a reasonable simulation thereof, is simpler than the alternative - to say that we won't seems to require some additional assumption that scientific progress would stop. If we accept the existence of a large number of simulated universes, then while I don't have a good theory of anthropics, rationalists should win, and blindly assuming that one is not in a simulation seems like it leads to losing a lot of the time (e.g. my example of betting a cookie with Bob elsewhere in these comments).
V_V:
It is not possible, and it never will be possible, to simulate within our universe something as complex as our own universe itself, unless we discover a way to perform infinite computations using finite time, matter and energy, which would violate many known laws of physics. We already are able to simulate "universes" simpler than our own (e.g. videogames), but this doesn't imply, even probabilistically, that our universe is itself a simulation. Analogy is not a sound argument.
lmm:
Why not? Because you assign them a low anthropic weighting, or some other reason? (I also had an argument that the Dyson computation applies, but I think that's actually beside the point) If the simplest possible explanation for our sensory observations includes a universe that contains simulations of other universes, it's a reasonable question which kind we are in, even if they don't all have the same physical laws or the same amount of matter. There's no a priori reason to privilege one hypothesis or the other.
V_V:
The hypothesis that there exists another universe, certainly much different from ours in many aspects, quite possibly with a different set of physical laws, is more complex than the hypothesis that no such universe exists. Furthermore, you could iterate the simulation argument ad infinitum, "turtles all the way down", yielding an infinitely complex hypothesis.
lmm:
A description of our own universe necessarily includes inner universes, certainly much different from ours in many aspects, quite possibly with different sets of physical laws, and many complex enough to have their own inner universes. So it's not at all obvious that the minimum message length to describe an outer universe containing ours as a simulation is greater than that to describe our universe.
V_V:
Yes, but we observe our own universe. It is. This discussion is getting boring.

Mostly, my thought is that "there probably exist real people out there somewhere, and we are probably not among them; we are probably mere simulations in their world" doesn't seem equivalent to "what it means to be a real person, or a real anything, is to be a well-defined abstract computation that need not necessarily be instantiated" (aka Dust theory, as has been said).

That said, I can't really imagine why I would ever care about the difference for longer than it takes to think about the question.

Sure, the former feels more compellin...

I actually arrived at this belief myself when I was younger, and changed my mind when a roommate beat it out of me.

I'm currently at the conclusion it's not the same, because an "artificial universe" within a simulation can still interact with the universe. The simulation can influence stuff outside the simulation, and stuff outside the simulation can influence the simulation.

Oddly, the thing that convinced me was thinking about morality. Thinking on it now, I guess framing it in terms of something to protect really is helpful. Ontological plat...

lmm:
I think this leads to unpleasant conclusions. If causality is all we care about, does that mean we shouldn't care about people who are too far away to interact with (e.g. people on an interstellar colony too far away to reach in our lifetime)? Heck, if someone dived into a rotating black hole with the intent to set up a civilization in the zone of "normal space" closer to the singularity, I think I'd care about whether they succeeded, even though it couldn't possibly affect me. Back on Earth, should we care more about people close to us and less about people further away, since we have more causal contact with the former? Should we care more about the rich and powerful than about the poor and weak, since their decisions are more likely to affect us? If you don't consider the possibility of being simulated it seems like you would make wrong decisions. Suppose that you agree with Bob to create 1000 simulations of the universe tonight, and then tomorrow you'll place a black sphere in the simulated universes. Tomorrow morning Bob offers to bet you a cookie that you're in one of the simulated universes. If you take the bet on the grounds that the model of the universe in which you're not in the simulation is simpler, then it seems like you lose most of the time (at least under naive anthropics). Now obviously in real life we don't have this indication as to whether we're a simulation. But if we're trying to make a moral decision for which it matters whether we're in a simulation, it's important to get the right answer.
Ishaan:
Didn't say that. We might be in a simulation. The question is, is that the more parsimonious hypothesis? Observation is the king of epistemology, and Parsimony is queen. If parsimony says we're simulated, then we're probably simulated. In the counter-factual world where I have a memory of agreeing with Bob to create 1000 simulations, then parsimony says I'm likely in a simulation. We might be in a universe where the most parsimonious hypothesis given current evidence is simulation, or we might not. Would that I had a parsimony calculator, but for now I'm just guessing not. There are observations that might lead a simulation hypothesis to be the most parsimonious hypothesis. I claim it as a question which is ultimately in the realm of science, although we still need philosophy to figure out a good way to judge parsimony. These two statements sum my current stance. Epistemic Rationality: Take every mathematical structure that isn't ruled out by the evidence. Rank them by parsimony. CDT (which I'll take as "instrumental rationality" for now):: If your actions have results, you can use actions to choose your favorite result. so, applying that to the points you raised... I have sufficient evidence to believe that both the poor and the rich exist. I care about them both. In the counter-factual world where I was more certain concerning the existence of the rich and less certain containing the existence of the poor, then it would make sense to direct my efforts to the rich. If I want to give people utils, and If I can give 10 utils to person R if I have 70% certainty that they exist to benefit from it, or 20 utils to person P if I have 10% certainty that they exist to benefit from it, I obviously choose person R. Back to reality: I've got incredible levels of certainty that both the rich and the poor exist. Once again, it's a question of certainty that they exist. If I told you that donating $100 to the impoverished Lannisters would be efficient altruism, wouldn't
lmm:
It seems to me the most parsimonious hypothesis is that the human race will create many simulations in the future - that seems like the natural course of progress, and I think we need to introduce an additional assumption to claim that we won't. If we accept this then the same logic as if we'd made that agreement with Bob seems to hold. Hang on. You've gone from talking about "what I can interact with" to "what I know exists". If logic leads us to believe that non-real mathematical universes exist (i.e. under available evidence the most parsimonious assumption is that they do, even though we can't causally interact with them), is that or is that not sufficient reason to weigh them in our moral decisionmaking?
Ishaan:
My mistake for using the word "interaction" then - it seems to have different connotations to you than it does to me. Receiving evidence - AKA making an observation - is an interaction. You can't know something exists unless you can causally interact with it. How can something non-real exist? I dispute the idea that what does or does not exist is a question of logic. I say that logic can tell you how parsimonious a model is, whether it contains contradiction, and stuff like that. But only observation can tell you what exists / is real. I'd argue that any simulations that humanity makes must be contained within the entire universe. So adding lower simulations doesn't make the final description of the universe any more complex than it already was. Positing higher simulations, on the other hand, does increase the total number of axioms. The story you reference contains the case where we make a simulation which is identical to the actual universe. I think that unless our universe has some really weird laws, we won't actually be able to do this. Not all universes in which humanity creates simulations are universes in which it is parsimonious for us to believe that we are someone's simulation.
lmm:
You're right, I was being sloppy. My point was: suppose the most parsimonious model that explains our observations also implies the existence of some people who we can't causally interact with. Do we consider those people in our moral calculations? I can see the logic, but doesn't the same argument apply equally well in the "agreement with Bob" case? True, but only necessary so that the participants can remember being the people they were outside the simulation; I don't think it's fundamental to any of the arguments.
Ishaan:
This is impossible. No causal interaction means no observations. A parsimonious model cannot posit any statements that have no implications for your observations. But I understand the spirit of your question: if they had causal implications for us, but we had no causal implications for them (implying that we can observe them and they can effect us, but they can't observe us and we can't effect them) then I would certainly care about what happened to them. But I still can't factor them into any moral calculations because my actions cannot effect them, so they cannot factor into any moral calculations. The laws of the universe have rendered me powerless. and I'm not sure I follow these two statements- can you elaborate what you mean?
TheOtherDave:
Wait, what? So, I go about my life observing things, and one of the things I observe is that objects don't tend to spontaneously disappear... they persist, absent some force that acts on them to disrupt their persistence. I also observe things consistent with there being a lightspeed limit to causal interactions, and with the universe expanding at such a rate that the distance between two points a certain distance apart is increasing faster than lightspeed. Then George gets into a spaceship and accelerates to near-lightspeed, such that in short order George has crossed that distance threshold. Which theory is more parsimonious: that George has ceased to exist? that George persists, but I can't causally interact with him? that he persists and I can (somehow) interact with him? other? Suppose my current actions can affect the expected state of George after he crosses that threshold (e.g., I can put a time bomb on his ship). Does the state of George-beyond-the-threshold factor into my moral calculations about the future?
Ishaan:
That George persists, but I can't causally interact with him. Yes. My rule: "A parsimonious model cannot posit any statements that have no implications for your observations" has not been contradicted by my answers. The model must explain your observation that a memory of George getting into that spaceship resides in your mind. As to whether or not George disappeared as soon as he crossed the distance threshold...it's possible, but the set of axioms necessary to describe the universe where George persists is more parsimonious than the set of axioms necessary to describe the universe where George vanishes. Therefore, you should assign a higher likelihood to the probability that George persists. This is the solution to the so called "Problem" of Induction. "Things don't generally disappear, so I'll assume they'll continue not disappearing" is just a special case of parsimony. Universes in which the future is similar to the past are more parsimonious.
TheOtherDave:
I basically agree with all of this. So, when lmm invites us to suppose that the most parsimonious model that explains our observations also implies the existence of some people who we can't causally interact with, is George an example of what lmm is inviting us to suppose? If not, why not?
Ishaan:
Semantics, perhaps. I considered things like George's memory trace as an example of an "interaction", the same way as seeing the moonlight is an "interaction" with the moon despite the fact that the light I saw is actually from a past version of the moon and not the current one. So maybe we were just using different notions of what "causal interaction" means? To me, "people we can't causally interact with" means people who don't cause any of our observations, including memory-related ones.
TheOtherDave:
So you would say that George is not an example of what lmm is inviting us to suppose, because we can causally interact with him, because he caused a memory? I don't think this is just semantics. You are eliding the difference between causal relationships that exist now and causal relationships that existed only in the past, presumably because you don't consider this difference important. But it seems like an important difference to me.
Ishaan:
You're right, it is important. But in my defense, look at the the original context: In this context, it makes sense to consider gaps of space and time as irrelevant. This idea is supposed to work no matter what your observations are, even if space and time aren't even involved. If I know that A causes B and A causes C, and I observe C, then I know that B is true. We can agree to say that A, B, and C are all part of one causal network. That's how I was thinking of it. A and B are causally interacting. A and C are causally interacting. Therefore, C and B are causally interacting. If causal lines (in any direction) connect C to B, then C and B are "causally interacting". At this level of abstraction, we can even do away with causality and just say that they are "interacting" within one system of logical statements. That's why George's memory trace causally links me to George. A = Past George. B = Present George C = my memory of George. Now that I've specified what I mean by a causal interaction, you can see why my answer to ... ...is no, since evidence for the existence of something must imply a causal interaction by my definition. It seemed like you interpreted "causal interaction" to be a synonym for "effect". And under that definition, yeah, C cannot effect B. Lesson learned: I shouldn't make up words like "causal interaction" and assume people know what is in my head when I say it. My mistake was that I thought most people would consider the phrase "A and B are causally interacting" to implicitly contain the information that causal interaction is always a bidirectional thing, and infer my meaning accordingly. edit... The whole idea I was championing is that in order to earn the label "real", something must interact with you. In other words, it must be within the same logical system as you. In other words, If my observation is "C" and "not F" then "F" cannot be real. "(E=>F)&E" cannot be real. "C" absolutely must be real. "A=>B&C" might be real. "A"
lmm:
Not just spelling fascism, I want to be sure I understand you correctly: do you mean effect or affect? So you're considering the region that's connected by any zigzag of causal events, in any direction? We care about Bob's daughter who we never met? We care about her cousin who is now so far away that not only is she causally disconnected from us, but also from Bob? I can't claim this is inconsistent, but it seems arbitrary. The category of people I can causally interact with (i.e. can affect and can be affected by) is a natural one, but I don't see why I should regard someone who's in a spacetime that used to be connected to mine but now isn't (i.e. Bob) any differently from someone who's in a parallel spacetime that's never been connected to my own. There doesn't seem to be any empirical-like distinction there.
Ishaan:
Er...I think it's "effect"? I find it confusing - I think my current use falls within the exception to the noun-verb heuristic but I'm not sure. =You interpreted "causal interaction" to be a synonym for "something which causes an alteration in another thing" =Alterations in C do not cause alterations in B." We consider them as real, yes. If the proposed parallel spacetime will one day be connected to your own, then it classifies as real but currently unknowable. Upon observing evidence of the newly connected spacetime, a rational agent would discard the most parsimonious hypothesis that it had held prior to the observation. This scenario can be summed up by the phrase "What if Russel's Teapot is real After All?" (What would happen is that we'd admit that we were wrong before, but assert that we had no way of seeing it coming) If the proposed parallel spacetime will never be connected to your own, then it isn't real.
arundelo:
It sounds to me like you want "affect". To effect something is to bring it about. (In other words: to cause it to come into being; to put it into effect.) "I effected [produced] an agreement between the disputants." "They sailed away without effecting [accomplishing] their purpose." To affect something is to influence it. (To have an effect on it.) Note that, confusingly, the verb "affect" can be defined in terms of the noun "effect".
Ishaan:
I touched on the flower. I influenced the flower. I affected the flower. I had an effect on the flower. I caused a commotion. I produced a commotion. I effected a commotion. Good? So "effect" is describing a specific cause-effect chain while Affect is describing the existence of some sort of cause-effect chain without specifying any particular one? (Overeating effects weight gain, Diet affects weight.)
arundelo:
"Affected the flower" and "effected a commotion" are right, but I think you'd be better of just banishing the verb effect from your vocabulary. It's extremely uncommon and I and other people associate it with pointy-haired bosses and bureaucrats. (There is another unrelated verb usage of effect used by musicians: to effect a signal is to process that signal with an effect.)
Ishaan:
Agreed that the words are terrible as communication tools. Is there a good substitute that I can use to talk about causality?
lmm:
Ok, I think I understand your position. I maintain that it's an unnatural distinction to draw - a universe that will be connected to ours in the future, or has been connected to ours in the past, isn't empirically different from one that is and will always be disconnected from ours. Thought experiment: suppose at some point after Bob disappeared over the horizon, two copies of the present state of the universe start running in parallel - or, better, that there have always been two copies running in parallel. Although copy A and copy B happen to have coincident histories, there's no causal connection between them and never has been, so to us in universe B, universe A isn't "real" in your terminology, right (and let's assume a quantum-mechanical collapse postulate applies, so after the "split" some random events start turning out differently in universes A and B, so you can tell whether you're in one or the other)? But I assert that there's no way for us to tell the difference between bob-in-universe-A and bob-in-universe-B. (The other example I've thought of is previous/subsequent universes in Penrose's "Conformal cyclic cosmology", but I don't think there are any important differences from the cases we've already talked about).
Ishaan:
Empirical: based on, concerned with, or verifiable by observation or experience rather than theory or pure logic. A universe that is totally disconnected is unverifiable by observation and experience. It lies in the realm of pure logic. It leaves no empirical traces. Granted, there are also some possible universes that are logically connected and yet leave no empirical traces. (One example of this is the "Heaven" hypothesis, which postulates a place which is totally unobservable at the present time. So our universe has an effect on Heaven-verse, creating a unidirectional causal link... but Heaven has no effect on us. It's the same with your example - the past has a unidirectional causal link with various possible futures.) So yes, I bite the thing you regard as a bullet. There are not necessarily any empirical differences. I still think that when the common person says "Reality", they mean something closer to my definition - something with a causal interaction with you. That's why people might say "heaven is real, despite the lack of evidence" or "Russel's Teapot might be real, though it's unlikely" but they never say "Harry Potter is real, despite the lack of evidence" or "Set theory is real, despite the lack of evidence". All of these things can be represented totally unobservable logical structures, but only the Heaven structure is proposed to interact with our universe - so only the Heaven structure is a hypothesis about reality. The rest are fantasy and mathematics. (If you want empiricism, I will say that the most parsimonious hypothesis is strictly limited to choosing the smallest logical structure which explains all observable things.) Edit: Oh cool - you've made me realized that my definition of reality implies random events create a universe for each option (so a stochastic coin flip creates a "heads" universe and a "tails" universe, both "real" | real = "causal interaction in either direction"). I hadn't explicitly recognized that yet. Thanks! I t
lmm:
I try not to say "reality" - I don't think laypeople have an intuition about the case where we disagree - that is, regions that are causally disconnected (in the sense of the relativistic term of art - whose meaning apparently doesn't align with your intuition?) from us, but can be reached by some zigzag chain of causal paths. In the Heaven case there's a one-directional causal link, and in Russell's teapot case there's a regular causal connection. Do people have an intuition about whether things that have fallen into a black hole, or over the cosmological event horizon, are "still real"? That said, on some level you're right; I do feel that Bob is "more real" than Harry Potter. I think that's just a function of Bob's universe being more similar to my own though. If Carol in another universe has a magical cross-universe teleporter and is thinking about whether to visit our universe, it seems wrong to say she's more real now if the decision she's about to make is yes than if the decision is no. (And the notion that she's already connected to our universe because she has the choice, even if she never actually visits our universe, feels equally suspect) (Feel free to stop replying if I'm getting repetitive, and thanks for the discussion so far in any case) I agree; I've never felt happy with the simulation argument in any form, and trying to chase through its more extreme implications was as much about hoping to find a contradiction as about exploring things that I thought were true. Like I've said, I'm hopeful that a good theory of anthropics will dissolve these questions.
Ishaan:
Now, that confuses me. I thought your post was largely about defining reality. Isn't the topic under discussion largely what the appropriate way to define reality is? Isn't the very premise of platonic realism that all tautologies are real?
lmm:
Hmm, you're right. Maybe I just object to "reality" because it implies a uniqueness that I don't think is justified.
Ishaan:
My philosophy on words is this: We often use words (soul, free will, etc) to define ideas that aren't well defined. Sometimes, on rigorous inspection, those ideas turn out to be nonsensical. This leaves us with two options: 1) Discard the words altogether 2) Re-define the words so as to get as close as possible to the original meaning, while maintaining self-consistency. (see Eliezer's posts on "free will" for an example of this which is carried out, I believe, successfully.). I generally opt for (2) in the cases where the underlying concept being described as some sort of value and there is no other word that quite tackles it. I maintain that "reality" is one of those words for which the underlying concept is valuable and un-described by any other word. I remain unsure of whether or not the laymen's intuitive definition of "Reality" is logically consistent. I'll continue trying to find a rigorous definition that completely captures the original intuition and nothing more. If I end up giving up I'll have to opt for (2) or (1)...If, under the closest definition, probabilistic-many-world-splitting turns out to be the only "weird-to-normal-people" consequence of changing the definition then I'm okay with picking (2), since at least the practical consequences add up to normality. I'd choose option (1) and abolish "reality" altogether, though, before I let it be turned into a synonym for "tautology". That's just too far from the original intuition to be a useful verbal label and we already have "tautology" anyhow. Plus, the practical consequences do not seem to add up to normality at all.
TheOtherDave:
(nods slowly) Yeah, OK, point accepted. I had lost track of the original context... my bad. Thanks for your patience.
lmm:
TheOtherDave's already covered this part Second one first: The only reason we need to assume the simulation is identical to the outer universe is so that our protagonists' memory is consistent with being in either. The only reason this is a difficulty at all is because the protagonists need to remember arranging a simulation in the outer universe for the sake of the story, as that's the only reason they suspect the existence of simulated universes like the one they are currently in. If the protagonists have some other (magical, for the moment) reason to believe that a large number of universes exist and most of those are simulated in one of the others, it doesn't matter if the laws of physics differ between universes - I don't think that's essential to any of the other arguments (unless you want to make an anthropic argument that a particular universe is more or less likely to be simulated than average because of its physical laws). Now for my first statement. Your argument as I understood it is: Even if the most parsimonious explanation of our observations necessitates the existence of an "outer" universe and a large number of simulated universes inside it, it is still more parsimonious to assume that we are in the "outer" universe. My response is: doesn't this same argument mean that we should accept Bob's bet in my example (and therefore lose in the vast majority of cases)?
Ishaan:
See the response to TheOtherDave Then there has been a miscommunication at some point. If you rephrase that as: "Even if the most parsimonious explanation of our observations necessitates the existence of an "outer" universe and a large number of simulated universes inside it, it is still sometimes more parsimonious to assume that we are in the "outer" universe." Then you'd be right. The fact that we have the capacity to simulate a bunch of universes ourselves doesn't in-and-of-itself count as evidence that we are being simulated. My argument is more or less identical to V_V's in the other thread. I would agree with that statement. If our universe turns out to have a ridiculously complex set of laws, it might actually be more parsimonious to posit an Outer Universe with much simpler laws which gave rise to beings which are simulating us. (In the same way that describing the initial conditions of the universe is probably a shorter message than describing a human brain)
torekp:
I agree, and I'd like to offer additional argument. Mathematical objects exist. Almost no one would deny that, for example, there is a number between 7,534,345,617 and 7,534,345,619. Or that there is a Lie group with such-and-such properties. What distinguishes Tegmark's claims from these unremarkable statements? Roughly this: Tegmark is saying that these mathematical objects are physically real. But on his own view, this just amounts to saying that mathematical objects are mathematical objects. Yeah yeah Tegmark, mathematical objects are mathematical objects, can't dispute that, but don't much care. Now I'll turn my attention back to tangible matters. Tegmark steals his own thunder.
Ishaan:
I think Tegmark's level 1-4 taxonomy is useful. Strip it of physics and put it to philosophy:

Lv 1) What we can observe directly (qualia)
Lv 2) What we can't observe, but could be (Russell's teapot)
Lv 3) What we can't observe, but we know might have happened if chance played out differently (many-worlds)
Lv 4) Mathematical universes.

These are distinct concepts. The question is, where and how do you draw a line and call it reality? (I say that we can't include 4, nor can we only include 1. We either include 1, 2 or 1, 2, 3...preferably the former.)
torekp:
I took the portion of your comment I quoted to be about level 4 only. Anyway, that is where my comment is aimed, at agreeing that we can't include 4.
FeepingCreature:
Yeah, but unmodified simulations are the same, whereas modified simulations diverge. The fact that something from the outside interacted with the simulation means that it's just one distinguishably-different one out of many. Purely statistically speaking, we'd expect not-screwed-with universes to form the biggest probability block by far.
Ishaan:
I'm not quite sure what you mean. Would you mind rephrasing or elaborating?
FeepingCreature:
The evolution of a universe that's not being influenced by its host universe is determined by its initial state. However, any interaction of a host universe with the nested universe adds bits to its description. Therefore, even if we'd numerically expect most host universes to screw with their child universes somehow (which still isn't given!), they'll all screw with them in different ways, whereas the unscrewed-with ones will all look the same. Thus, while most universes may be screwed-with (which isn't even a given!), the set of unscrewed-with universes is still the biggest subset.
Ishaan:
No, you can subtract information from things. Edge case: what if the host just replaces every bit in the hard drive with all 0's? In what? the platonic mathematical space? Or the subset of universes that a given host universe simulates? I think I do get your meaning, but it doesn't seem very well defined...
FeepingCreature:
Of course you can end up with a state that has a lower minimal description length. However, almost any interaction is gonna end up adding bits. Yes, and yes this is very ill-defined, and yes it's not clear why the set size should matter, but the simulation argument rests on the very same assumption - some kind of equal anticipation prior over causes for our universe? So if you already accept the premise that universe counting should matter for the simulation argument, you can just reuse that for the "anticipate being in the unscrewed with universe" argument. (Shouldn't you anticipate being in a screwed with universe, even if you don't know in which way it'd be screwed with? Hm. Is this evidence that most hosts end up not screwing with their sims?)
Ishaan:
If we're only talking about the platonic mathematical space, then why does it matter what hosts do or do not do to their simulations? The entire thing (host and simulation) is one interacting mathematical unit. There might also be a mathematical unit that represents the simulation, independently of the host, but we can count that separately. There are an infinite number of mathematical structures that could explain your observations. An infinite number of those involve simulations, and an infinite number of them don't involve simulations. Of the ones that involve simulations, an infinite number of them are "screwed" with and an infinite number are "unscrewed". So, if we want to choose a model where everything in the platonic mathematical space is "real" (One one level I want to condemn this as literally the most un-parsimonious model of reality, and on another level I'll just say that you have defined reality in a funny way and it's just a semantic distinction) and then we want to figure out where within this structure we are using the rule that "the likelihood of a statement concerning our location being true corresponds to the number of universes in which it is true and which also fit our other observations", then we have to find a way of comparing infinities. And that's what you're doing - comparing infinities. So ... what mechanism are you proposing for doing so?
FeepingCreature:
I don't know, but the fact that out of an infinity of possible universes we're practically in the single-digit integers has to mean something. Ask a genie for a random integer and you'd be surprised if it ever finished spitting out numbers in the lifetime of the universe; for it to stop after a few minutes of talking would be absurd. So either we're vastly wrong about the information-theoretic complexity of our universe, or the seeming simplicity of its laws is due to sampling bias, or MU is wrong and this universe really just happens to exist for no good answerable reason, or there's a ludicrous coincidence at work, or there has to be some reason why we are more likely to find ourselves in a universe at the start of the chain, whose hosts are not visibly screwing with it. The point is to add up to normality, after all.
V_V:

Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels)

Why this fascination with Haskell?
It seems more like a toy, or an educational tool, or at the very best a tool for highly specialized research, but almost surely not suitable for any large-scale programming.

CronoDAS:
http://xkcd.com/224/
V_V:
LoL!
CronoDAS:
See also.

Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

I don't know about that; it seems unlikely to me. A future civilization simulating us requires a) tons of information about us, which is likely to be irreversibly lost in the meantime, and b) enough computing power to simulate at a sufficiently fine level of detail (i.e. if it's a crude approximation, it will diverge from what actually happened pretty fast). Either of those alone looks like it makes simulating the current Earth infeasible.

But my main reaction t...

Baughn:
A future civilization simulating their own ancestors would require a lot of information about them, possibly impossibly-hard-to-get amounts. You're right about that. So what? They could still simulate some arbitrary, fictional pre-singularity civ. There is no guarantee whatsoever, if we're part of a simulation, that we were ever anything else.
lmm:
Possible ethical position: I care about the continued survival of humanity in some form. I also care about human happiness in some way that avoids the repugnant conclusion (that is, I'm willing to sacrifice some proportion of unhappy lives in exchange for making the rest of them much happier). I am offered the option of releasing an AI that we believe with 99% probability to be Friendly; this has an expectation of greatly increasing human happiness, but carries a small risk of eliminating humanity in this universe. If I believe I am not simulated, I do not release it, because the small risk of eliminating all humanity in existence is not worth taking. If I believe I am simulated, I release it, because it is almost surely impossible for this to eliminate all humanity in existence, and the expected happiness gain is worth it.

Modern philosophy is just a set of notes on the margins of Descartes' "Meditations".

wedrifid:
That is the most damning criticism of philosophy I have ever seen.
lukstafi:
(1) It's totally tongue-in-cheek. (2) By "modern" I don't mean "contemporary", I mean "since Descartes onwards". (3) By "notes" I mean criticisms. (4) The point is that I see responses to the simulation aka. Daemon argument recurring in philosophy.
wedrifid:
Ahh, that one makes a difference in connotation. There certainly seems to be more of that than I would judge worthwhile.

The Numerical Platonist's construct is just the universe itself again. No problem there.

If you're not a numerical platonist, I don't see how unexecuted computations could be experienced.

And that leaves us with regular simulation.

(Incidentally, point 6 has a hidden assumption about the distribution of simulated universes)

lmm:
Why? If it's just because the computations come out the same, doesn't that mean any simulation of the universe is also just the universe itself again?

Technically we are already running a perfect simulation of a universe literally indistinguishable from our own.

The fact that such a simulation is indistinguishable means that we should be ambivalent about whether it is simulated or not - however, simulations which we run ARE distinguishable from our reality, in the same sense that a Gödel statement is true, even if the difference is not apparent from within the simulation.

lmm:
Does that necessarily follow? Should we necessarily be ambivalent about e.g. events in any other inflationary bubble (i.e. in star systems that have become causally disconnected from our own)?
Decius:
To your first question: Yes. If something has one of two characteristics, but no information that we can (even theoretically) acquire allows us to determine which of those is true, then it is not meaningful to care about which one is true. Dropping to the object-level, it would be contradictory to have a simulation which accepted as input ONLY a set of initial conditions, but developed sentient life that was aware of you.

To your second question: "star systems that have become causally disconnected from our own" are distinguishable from our own. I'll answer the question "Should we necessarily be ambivalent about things which we cannot even theoretically interact with" as a general case.

Utilitarian: Yes. (It has no effect on us)
Consequentialist: Yes. (We have no effect on them)
Social Contract: Only if we don't have a deal with them.
Deist: Only if God says so.
Naive: Yes; I can't know what they are, so I can't change my decisions based on them.

What theory of ethics or decision has a non-trivial answer?
lmm:
It seems like we could reasonably have a utility function that assigns more or less value to certain actions depending on things we can't causally interact with. E.g. a small risk of wiping out all humanity within our future light cone would, I think, be less of a negative if I knew there was a human colony in a causally disconnected region of the universe.
Decius:
How much less? What's the asymptote (of the ratio) as the number of human colony ships that have exited the light cone approach infinity? ETA: Also, that scenario moved the goalposts again. The question was "Should we consider those hypothetical colonists opinions when deciding to risk destroying everything we can?"
lmm:
I don't have a ratio; it's more that I attach an additional (fixed) premium to killing off the entire human race, on top of the ordinary level of disutility I assign to killing each individual human. (nb I'm trying to phrase this in utilitarian terms but I don't actually consider myself a utilitarian; my true position is more what seems to be described as deontological?)
Decius:
So you attach some measure of utility to the statement 'Humanity still exists', and then attach a probability to humanity existing outside of your light cone based on the information available; if humanity is 99% likely to exist outside of the cone, then the additional disutility of wiping out the last human in your light cone is reduced by 99%? And the disutility of genocide and mass slaughters short of extinction remain unchanged?
lmm:
Yeah, that sounds like what I meant.

The problem with mathematical realism (which, btw, see also), is that it's challenging to justify the simplicity of our initial state - Occam is not a fundamental law of physics, and almost all possible universe-generating laws are unfathomably large. You can sort of justify that by saying "even universes with complicated initial states will tend to simulate simple universes first", but that just leaves you asking why the number of simulations should matter at all. (I don't have a good answer to that; if you find one, I'd love if you could tell me)

lmm:
Like I say, I think a good theory of anthropics is the best hope for this. Under UDASSA it's "obvious" that one would be most likely to find oneself in a simple universe - though that may just be begging the question, as I'm not aware of a justification for using a complexity measure in UDASSA.

Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used.

In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?

If on the other hand whole_universe_from_time_immemorial() needs to execute every time, which of course assumes a loophole gets found to infinitely add information to the host universe, ...

lmm:
Indeed we would. If you believe we are such a simulation, that implies the simulator is interested in some event that causally depends on today's history. I don't think this matters though. Causality is preserved under relativity, AIUI. You may not necessarily be able to say absolutely whether one event happened before or after another, but you can say what the causal relation between them is (whether one could have caused the other, or they are spatially separated such that neither could have caused the other). So there is no problem with using naive time in one's simulations. Are you arguing that a simulatable universe must have a time dimension? I don't think that's entirely true; all it means is that a simulatable universe must have a non-cyclic chain of causality. It would be exceedingly difficult to simulate e.g. the Godel rotating universe. But a universe like our own is no problem.

If just the conceptual possibility of the universe is enough to experience it, as some have suspected to be the case, you still have to consider the possibility that the part of the universe you're conceptually in is a simulation inside of another conceptual universe.

Looking at it from another angle, I'm pretty sure we all accept that our minds are running on computers known as human brains, and we don't just experience the conceptual possibility of that brain. Mind you, the entire universe might just be some kind of conceptual possibility, but there is a ...

lmm:
Sure, but if anything it seems like they both apply - we are overwhelmingly likely to be simulated humans in a mathematical-construct universe. I was trying to make it clear where the tradeoff with mathematical Platonism is. If you believe mathematical things exist eternally, or exist when defined, or exist when explicitly calculated, that affects what limit you have to place on human civilization's achievements (and if you're a straight-up Platonist then you can't make this objection at all, because as you say, the idea of the universe already exists).

I think Can You Prove Two Particles Are Identical? explains the difference between the possibilities here very well: What is the difference? We cannot assume there is a difference simply for the sake of asking what the difference is. Though if you must, I should hope you're well aware of your assumption.