- Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]
Biting the bullet here is roughly equivalent to accepting Tegmark's Ultimate Ensemble. This was discussed on LW in ata's post from 2010, The mathematical universe: the map that is the territory.
See Tegmark (2008). In particular, Section 6, "Implications for the simulation argument". A relevant extract:
...For example, since every universe simulation corresponds to a mathematical structure, and therefore already exists in the Level IV multiverse [the multiverse of all mathematical structures], does it in some meaningful sense exist “more” if it is in addition run on a computer? This question is further complicated by the fact that eternal inflation predicts an infinite space with infinitely many planets, civilizations, and computers, and that the Level IV multiverse includes an infinite number of possible simulations. The above-mentioned fact that our universe (together with the entire Level III multiverse) may be simulatable by quite a short computer program (Sect. 6.2) calls into question whether it makes any ontological difference whether simulations are “run” or not.
My thought is that your hypothesis is pretty similar to the Dust Theory.
http://sciencefiction.com/2011/05/23/science-feature-dust-theory/
And Greg Egan's counter-argument to the Dust Theory is pretty decent:
However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.
I think the same counter-argument applies to your hypothesis.
Epistemology 101: Proper beliefs are (probabilistic) constraints over anticipated observations.
How does the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god constrain what we expect to observe?
Mostly, my thought is that "there probably exist real people out there somewhere, and we are probably not among them; we are probably mere simulations in their world" doesn't seem equivalent to "what it means to be a real person, or a real anything, is to be a well-defined abstract computation that need not necessarily be instantiated" (aka Dust theory, as has been said).
That said, I can't really imagine why I would ever care about the difference for longer than it takes to think about the question.
Sure, the former feels more compelling...
I actually arrived at this belief myself when I was younger, and changed my mind when a roommate beat it out of me.
I'm currently at the conclusion that it's not the same, because an "artificial universe" within a simulation can still interact with the universe that hosts it: the simulation can influence stuff outside the simulation, and stuff outside the simulation can influence the simulation.
Oddly, the thing that convinced me was thinking about morality. Thinking on it now, I guess framing it in terms of something to protect really is helpful. Ontological plat...
Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels)
Why this fascination with Haskell?
It seems more like a toy, or an educational tool, or at best a tool for highly specialized research, but surely not suitable for any large-scale programming.
Our present civilization is likely to reach the point where it can simulate a universe reasonably soon
I don't know about that; it seems unlikely to me. A future civilization simulating us requires a) tons of information about us, which is likely to be irreversibly lost in the meantime, and b) enough computing power to simulate at a sufficiently fine level of detail (i.e. if it's a crude approximation, it will diverge from what actually happened pretty fast). Either of those alone looks like it makes simulating current-day Earth unfeasible.
But my main reaction t...
The Numerical Platonist's construct is just the universe itself again. No problem there.
If you're not a numerical platonist, I don't see how unexecuted computations could be experienced.
And that leaves us with regular simulation.
(Incidentally, point 6 has a hidden assumption about the distribution of simulated universes)
Technically we are already running a perfect simulation of a universe literally indistinguishable from our own.
The fact that such a simulation is indistinguishable means that we should be indifferent to whether it is simulated or not; however, simulations which we run ARE distinguishable from our reality, in the same sense that a Gödel statement is true, even if the difference is not apparent from within the simulation.
The problem with mathematical realism (which, btw, see also) is that it's challenging to justify the simplicity of our initial state - Occam is not a fundamental law of physics, and almost all possible universe-generating laws are unfathomably large. You can sort of justify that by saying "even universes with complicated initial states will tend to simulate simple universes first", but that just leaves you asking why the number of simulations should matter at all. (I don't have a good answer to that; if you find one, I'd love it if you could tell me)
Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used.
In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?
If on the other hand whole_universe_from_time_immemorial() needs to execute every time, which of course assumes a loophole gets found to infinitely add information to the host universe, ...
If just the conceptual possibility of the universe is enough to experience it, as some have suspected to be the case, you still have to consider the possibility that the part of the universe you're conceptually in is a simulation inside of another conceptual universe.
Looking at it from another angle, I'm pretty sure we all accept that our minds are running on computers known as human brains, and we don't just experience the conceptual possibility of that brain. Mind you, the entire universe might just be some kind of conceptual possibility, but there is a ...
I think Can You Prove Two Particles Are Identical? explains the difference between the possibilities here very well: What is the difference? We cannot assume there is a difference simply for the sake of asking what the difference is. Though if you must, I should hope you're well aware of your assumption.
The simulation argument, as I understand it:
When we talk about a simulation we're usually thinking of a computer; crudely, we'd represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we're just storing one big number, then performing a calculation and storing another number, and so on. In fact our program is itself simply another number (witness the DeCSS "illegal prime"). This is effectively the GLUT concept applied to the whole universe.
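A minimal sketch of this picture, with hypothetical names of my own (`Universe`, `stepUniverse`, `history`): the whole state as "one big number", and a pure transition function computing the next state from the previous. The real step function would be enormously complicated; here a trivial stand-in takes its place.

```haskell
-- The universe's entire state as one big number (a stand-in for the
-- giant array of bytes in RAM).
type Universe = Integer

-- A pure transition function: next state from previous state.
-- A trivial stand-in for the actual laws of physics.
stepUniverse :: Universe -> Universe
stepUniverse u = u * 2 + 1

-- The entire history is then just iteration from the initial state.
history :: Universe -> [Universe]
history = iterate stepUniverse

main :: IO ()
main = print (take 5 (history 0))  -- prints [0,1,3,7,15]
```

Nothing in the sketch distinguishes "the program" from "the rule": `stepUniverse` is itself just another mathematical object.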
But numbers are just... numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see how running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.
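The Fibonacci point is especially sharp in a lazy language, where the "rule" and the "program" are literally the same text: the definition below is a complete specification of the sequence, and no value comes into existence in memory until something demands it.

```haskell
-- The Fibonacci "rule" written down directly as a definition. In a
-- lazy language the definition *is* the infinite sequence; nothing is
-- computed until some consumer demands a value.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- prints [0,1,1,2,3,5,8,13,21,34]
```

Whether the sequence is "more real" after `main` runs than it was when only the two-line definition existed is exactly the question at issue.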
Possible ways out that I can see:
Thoughts?
[1] As I understand it there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose
[2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn't bother to evaluate them[5]
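A toy illustration of this, using `Debug.Trace` to make evaluation visible (the two-region `step` function here is my own hypothetical, not anything from the post): one component of the state is causally disconnected from the output we demand, and under lazy evaluation it is simply never computed.

```haskell
import Debug.Trace (trace)

-- One simulation step producing two regions of the universe. The
-- second region is tagged with a trace message so we can observe
-- whether it is ever actually evaluated.
step :: (Int, Int) -> (Int, Int)
step (visible, hidden) =
  ( visible + 1
  , trace "evaluating causally disconnected region" (hidden + 1) )

main :: IO ()
main = print (fst (step (0, 0)))
-- prints 1; the trace message never appears, because nothing ever
-- demanded the second component's value
```

In a strict language both regions would be computed on every step; here the disconnected region remains an unevaluated thunk forever.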
If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated - or at least, enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be more efficiently stored as their initial state plus a counter of how many times the function needs to be run to evaluate them, if anyone were to talk to them. If no-one has their contact details any more, we wouldn't even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter
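The loner's compressed representation can be sketched directly; the names here (`Person`, `stepMind`, `contact`) are illustrative inventions, not anything from the post. We store only an initial mind-state and a tick count, and the computation is only run forward if someone actually makes contact.

```haskell
-- Hypothetical sketch: a person stored as their initial state plus a
-- counter of how many steps to run, rather than as explicit states.
data Person = Person { initialState :: Int, ticks :: Int }

-- A trivial stand-in for one step of a mind's evolution.
stepMind :: Int -> Int
stepMind = (+ 1)

-- Only when someone "calls" is the computation actually run forward;
-- until then the person exists only as a seed and a number.
contact :: Person -> Int
contact (Person s n) = iterate stepMind s !! n

main :: IO ()
main = print (contact (Person 0 1000))  -- prints 1000
```

If nobody ever evaluates `contact`, the person's intermediate states are never materialised at all, which is exactly the sense in which the simulation "wouldn't bother".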
Practically every programming language runtime performs some (more limited) form of this, using dataflow analysis, instruction reordering and dead code elimination - usually without the programmer having to explicitly request it. Thus if your theory of anthropics says that an "optimized" simulation is counted differently from a "full" one, then there is little hope of constructing a genuinely "full" simulation without developing a significant amount of new tools and programming techniques[4]
[3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought
[4] This is worrying if one is in favour of uploading, particularly forcibly - it would be extremely problematic morally if uploads were in some sense "less real" than biological people
[5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can't discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly - the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don't think this affects the wider argument though