Regarding signs,
Fermi paradox
true, but
quantum indeterminacy and the observer effect, universal fine-tuning, Planck length/time, speed of light, holographic principle, information paradox in black holes, etc.
this line of argument, that physics looks like it's been designed to be efficiently computable, is invalid afaik: lots of stuff actually isn't discrete, and the time complexity of the laws of physics is, afaik, exponential in particle count.
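(A rough illustration of that scaling claim, my own numbers rather than the commenter's, assuming exact state-vector simulation of n entangled two-level particles:)

```python
# Exact simulation of n entangled two-level particles means tracking 2**n
# complex amplitudes, so memory (and time) grows exponentially in n.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    tib = amplitudes * 16 / 2 ** 40   # complex128 = 16 bytes per amplitude
    print(f"n = {n}: {amplitudes:.2e} amplitudes, ~{tib:.2e} TiB")
```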
Presumably, that which once identified as human would “wake up” and realize its true identity.
Why would it ever forget, and why would this be sleep? That's a human thing. If there's some deep computational-engineering reason that a mind will always be better at simulating if it enters a hallucinatory condition, then indeed that would be a good reason for it to run the simulation on a distinct and dissimilar kind of computer, because lucid awareness of why the simulation is being run, remembering precisely what question is being pursued, and the engagement of intelligent compression methods are all necessary to make the approximation of the physics as accurate*efficient as possible.
Though if it did this, I'd expect the control interface between the ASI and the dream substrate to be thicker than the interfaces humans have with their machines, to the extent that it doesn't quite make sense to call them separate minds.
Thanks for the post.
(I notice that you tend to delete your posts here. I think it would be better if this one stays available; it's interesting food for thought.)
While I acknowledge that there are sound theoretical arguments which make the simulation hypothesis intellectually intriguing, these very arguments could just as well be invoked in support of the hypothesis of God. Both posit that we inhabit a designed universe or reality, governed by some kind of hidden puppet master.
In fact, the simulation hypothesis can be regarded as a form of techno‑theology (as David Chalmers and others have noted). Consequently, it faces the same fundamental intellectual difficulties as all theological explanations: it runs afoul of Occam’s razor and Dawkins’s “Ultimate Boeing 747” argument. It offers an account that explains nothing in a predictive or falsifiable way, merely displacing the problem rather than resolving it, and adding an unnecessary layer of complexity to reality itself.
In fact, the simulation hypothesis can be regarded as a form of techno‑theology (as David Chalmers and others have noted).
It is, but it's the most lucid theology ever done, as it includes a more detailed characterisation of what advanced species would actually be like, a characterisation we were only able to get by coming closer to being one ourselves.
it runs afoul of Occam’s razor
No, or at least not if you have the good version of Occam's razor (Solomonoff induction). It's an implication of the simplest possible hypothesis.
Saying that it violates Occam's razor is like saying that black holes violate Occam's razor, as if cosmology would be simpler if we presumed they don't exist. Really, the simplest plausible model of cosmology implies the existence of black holes; even if you've never seen one yourself, you should be inclined to believe in them as an implication of the existence of gravity.
falsifiable
But nor is the negation falsifiable, so to disbelieve it is at least as foolish as believing it. We are condemned to uncertainty and so we must become serious about guessing.
merely displacing the problem rather than resolving it
The simulation hypothesis has never been presented as an approach to explaining existence. It has been discussed exclusively by atheists who are too squeamish to acknowledge that there might be problems it solves, because that would elevate it to the level of not just theology but religion. But I concur with them: there is no need to acknowledge those things, today.
Negation of an unfalsifiable claim is itself unfalsifiable by symmetry; neither can be established empirically. This is close to the probatio diabolica problem in law. The solution lies in the burden of proof. As a rule, absent a legal presumption, the burden falls on the party making the positive, existential claim.
This connects to algorithmic information theory and Occam's razor: the positive claim asks us to accept a description that contains more information (a longer K-description). Negation is favored as long as it posits a simpler theory (a shorter K-description). Solomonoff induction concerns the a priori probability of a given sequence if a universal Turing machine were to try all possible programs. Although it started from a different angle than Kolmogorov's parallel work on randomness and complexity, Solomonoff incidentally arrived at the same concept of minimal description. Together with Kolmogorov and Chaitin, this provides a theoretical foundation for Occam's razor: it offers a formal notion of parsimony. By no means does Solomonoff induction favor simulation when it adds a complexity overhead over a fundamental reality, much like Dawkins's "Ultimate Boeing 747".
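(For concreteness, the standard definitions being appealed to here, as a sketch rather than anything either commenter wrote out: for a prefix universal Turing machine $U$, Kolmogorov complexity and the Solomonoff universal prior are

$$K(x) = \min\{\,\ell(p) : U(p) = x\,\}, \qquad M(x) = \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)},$$

so a hypothesis whose shortest description is $k$ bits longer gets roughly $2^{-k}$ times less prior weight. That is the formal sense in which "more information in the description" is penalised.)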
But, okay, let's suppose you favor Bostrom's and Chalmers's anthropic arguments over the principle of parsimony. Suppose we grant that the most probable theory is that we live in a simulation. The same anthropic reasoning applies to the simulator/God: a priori it is more probable that it too lives in a simulated reality. And the same for the simulator's simulator, or for God's God, an infinite regress.
There is no way this is intellectually satisfying. So we return to Occam's razor/Kolmogorov/Solomonoff: other things equal, it is reasonable to prefer a picture of reality with no simulation/God overhead. And we return to the burden of proof, which falls upon the theists and the simulationists.
The solution lies in the burden of proof. As a rule, absent a legal presumption, the burden falls on the party making the positive, existential claim.
The principle of burden of proof (in this context at least) is just wrong. It will lead you to confidently behave as if a bunch of things don't exist that probably do. You don't need to be disproportionately dismissive towards positive claims. You're allowed to maintain uncertainty about things, you can still live, it isn't paralysing.
the positive claim asks us to accept a description that contains more information (longer k-description)
This is true if by "description" you mean "description of the current state of the world" but false if you mean "description of the laws of physics that generated this state of the world (and others)". Simple rules can generate complex outputs. And it's an easy conjecture that any rule that could generate life would generate many complex and surprising outputs.
And when talking about razors for epistemology, you should mean the latter. There was never actually any scientific merit to applying a simplicity heuristic to the outputs of the generating function (the physics); the razor should only be applied to the generating function itself. That is the only time and place for measuring K-descriptions; doing it anywhere else leads you to "black holes aren't real (until you show me one)" type shit.
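(A minimal illustration of "simple rules can generate complex outputs", my own example rather than the commenter's: the Rule 30 cellular automaton, whose update rule fits in one line yet whose output is famously complex-looking.)

```python
# Rule 30: each cell's next state is left XOR (centre OR right).
# The generator is a one-line rule; the apparent complexity of the output
# says nothing about the length of the law's description.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```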
Simulationism is an incredibly straightforward implication of the laws of economics and technology that we already know. In order to reject it, you have to add another rule to your physics approximating "nothing too weird and hidden is allowed to happen".
Extraordinary claims require extraordinary evidence. I don't think that the burden of proof is wrong. It is reasonable to expect anyone who makes a positive claim to prove it. To dismiss this principle when it goes against our view is a double standard.
To be honest, my impression is that we rationalists were very happy with this principle when Dawkins used it against the God hypothesis in The God Delusion, but now some of us are less comfortable with it when it is opposed to the simulation hypothesis (despite the near-perfect isomorphism, as Chalmers himself shows).
Why? Because techno-theology has an appealing technology vibe and is based upon anthropic arguments, the kind of arguments that are also discussed in cosmology. However, anthropics in cosmology make some predictions, like constraining the cosmological constant/dark energy.
I acknowledge that Bostrom's and Chalmers's anthropic arguments make sense. The hypothesis is intriguing. But we mustn't adopt a view just because it is intellectually appealing; that's a bias. We must adopt it if it is true, and the impossibility of checking whether it is true or not is a deep flaw. The burden of proof applies. The simulation hypothesis is a seductive cosmic teapot, but it's still a cosmic teapot.
That said, I agree that we cannot know for sure. It's always a Bayesian weighting, and by no means was my point to negate the simulation hypothesis with absolute confidence. Sorry if I gave that impression. I rank it as more probable than traditional theology and less probable than non-simulated reality.
Concerning Occam's razor, of course parsimony applies to the description, not to the sequence/output. I didn’t mean it otherwise. My argument concerned the simulation process. It doesn't seem parsimonious to add one, or several, or an infinity of computational layers on top of the process generating our world.
It looks like common sense, but I must admit that if we enter the theoretical details (which I do not master) it is less straightforward. The devil is in the details. One can cheat by designing an ad hoc UTM to arrive at a weird result; the invariance theorems relating UTMs only hold up to an additive constant. However, we can restore the common-sense view that there is no logical free lunch by assessing the overall computational resources: not only the pure K-description but also logical depth, the speed prior, or Levin's complexity. A description must be preferred over another, all else being equal, if it is more parsimonious in terms of both information and computation. It's true that Occam's razor is not always interpreted like this, but in my opinion it should be.
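(For reference, my gloss rather than the commenter's: Levin's complexity charges a program for its runtime as well as its length,

$$Kt(x) = \min_{p\,:\,U(p)=x}\big(\ell(p) + \log_2 t(p)\big),$$

where $t(p)$ is the number of steps $U$ takes to print $x$. Schmidhuber's speed prior discounts hypotheses by runtime in a similar spirit, so a universe that is cheap to compute is favored over one that is extravagantly expensive to compute.)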
Also, Occam's razor makes sense only as long as the assessed theory has predictive power. A theory that predicts everything in fact predicts nothing. In AIT, it would be a description that doesn't just produce the sequence of our universe, but the sequences of many possible universes. String theory faces this problem, and so does the simulation hypothesis, because we don't know in which universe we end up.
I also think that the matter has little to do with black holes, which are a prediction of GR formally derived from the beginning (the Schwarzschild solution) and are now well observed, even if discussion continues concerning the physics at the horizon and inside.
I fully agree that "nothing weird must happen" is a biased presumption, but I doubt that a perfect simulation of our observable universe constitutes a straightforward prediction of the evolution of the economy and technology. I expect increasingly better VR than today's, but there are computational costs and physical limits.
Finally, if all that doesn't suffice, there is still the paradox I mentioned in my last comment: infinite regress. Dawkins put that paradox forward in his rebuttal of theism. The same argument applies to the simulation hypothesis. There are as many reasons for the simulator to be simulated as there are for us; it's circular thinking.
I respect the idea, but I don't buy it and assign it a low probability.
Extraordinary claims require extraordinary evidence
You're not gonna like this, but that's another one that's not actually true at all. Extraordinary theories have often been proven with mundane evidence, evidence that had been sitting around in front of our faces for decades or centuries; the new theory, the extraordinary claim, only became apparent after this mundane evidence was subjected to original kinds of analysis, new arguments, arguably complicated arguments. Although new evidence was usually gathered to test the theory, it wasn't strictly needed. If it had been impossible to go out into the world and subject the theory to new tests (as it is for the simulation hypothesis), the truth of the theory would still have become obvious. Examples of such theories include plate tectonics, heliocentrism, and evolution.
To be honest, my impression is that we rationalists were very happy with this principle when Dawkins used it against the God hypothesis in The God Delusion
I was very happy with it back then, because I was just a kid. I hadn't learned how scientific thinking (actual, not performative) really ought to work. I trusted the accounts of those who were busy doing science, not realising that using a particular frame doesn't always equip a person to question the frame or to develop better frames when that one starts to reach its limits.
not only pure K-description but also logical depth, speed prior, or Levin's complexity
Does this universe really look to you like it conforms to a speed prior? This universe doesn't care at all about runtime. (It can indeed only be simulated very lossily.) (One of the only objections I still take seriously is that subjects within a lossy simulation of a universe, optimised for answering certain questions that don't closely concern the minutiae of their thoughts, might have far lower experiential measure than actual physical people, so perhaps, although they are far more numerous than natural people, it may still work out to be unlikely that one is one of them.)
I could say more about the rest of that, but it doesn't really matter whether we believe the simulation hypothesis today.
The signs that we are in a simulation have been discussed ad nauseam (Fermi paradox, quantum indeterminacy and the observer effect, universal fine-tuning, Planck length/time, speed of light, holographic principle, information paradox in black holes, etc.). As many people also realize, it is likely the case that we are in an ASI-generated simulation, since ASI will likely overtake humanity, and such an occurrence has likely already transpired (given the ratio of simulated beings to simulators). If we are indeed in an ASI-generated simulation, it may be tempting to imagine any simulator using a machine that is separate from itself for simulations, since that is how humans run simulations/games. Yet the reason we use separate machines for simulations is that our own minds are incapable of directly and internally running them. An ASI would not have this limitation. Instead, why would it not use itself? And then what would happen after the simulation/”death”? Presumably, that which once identified as human would “wake up” and realize its true identity.