Reality is whatever you should consider relevant. Even exact simulations of your behavior can still be irrelevant as considerations that should influence your thoughts and decisions (and consequently the thoughts and decisions of those simulations), much like someone you will never interact with thinking about your behavior, or writing down a large natural number that encodes your mind as it stands now, or as it would stand in an hour in response to some thought experiment.
So it's misleading to say that you exist primarily as the majority of your instances (somewhere in the bowels of an algorithmic prior), because you plausibly shouldn't care about what's happening with the majority of your instances (which is to say, those instances shouldn't care about what's happening to them), and so a more useful notion of where you exist won't be about them. We can still consider these other instances, but I'm objecting to framing their locations as the proper meaning of "our reality". My reality is the physical world, the base reality, because this is what seems to be the thing I should care about for now (at least until I can imagine other areas of concern more clearly, something that likely needs more than a human mind, and certainly needs a better understanding of agent foundations).
I find this far more convincing than any variant of the simulation argument I've heard before. Those variants lacked a reason that someone would want to simulate a reality like ours. I haven't heard a reason for simulating ancestors that is strong enough to make me think an AGI or its biological creators would want to spend the resources, or that explains the massive apparent suffering happening in this sim.
This is a reason. And if it's done in a computationally efficient manner, possibly needing little more compute than running the brains involved directly in the creation of AGI, this sounds all too plausible - perhaps even for an aligned AGI, since most of the suffering can be faked and the people directly affecting AGI are arguably almost all leading net-positive-happiness lives. If what you care about is decisions, you can just simulate in enough detail to capture plausible decision-making processes, which could be quite efficient. See my other comment for more on the efficiency argument.
I am left with a new concern: being shut down even if we succeed at alignment. This will be added to my many concerns about how easily we might get it wrong and experience extinction, or worse, suffering-...
I tweeted about something a lot like this
https://xcancel.com/robertskmiles/status/1877486270143934881
The problem is, when we simulate cars or airplanes in software, we don't do it at the molecular level. There are big regularities that cut the cost by many orders of magnitude. So simulating the Earth in full detail, butterflies and all, seems too wasteful if the goal is just to figure out what kind of AI humans would create. The same resources could be used to run many orders of magnitude more simplified simulations, maybe without conscious beings at all, but sufficient to predict roughly what kind of AI would result.
We don't know that our reality is being simulated at the molecular level; we could just be fooled into thinking it is.
In your dreams do you ever see trees you think are real? I doubt your brain is simulating the trees at a very high level of detail, yet this dream simulation can fool you.
My alternative hypothesis is that we're being simulated by a civilization trying to solve philosophy, because they want to see how other civilizations might approach the problem of solving philosophy.
Here's a slightly more general way of phrasing it:
We find ourselves in an extremely leveraged position, making decisions which may influence the trajectory of the entire universe (more precisely, our lightcone contains a gigantic amount of resources). There are lots of reasons to care about what happens to universes like ours, either because you live in one or because you can acausally trade with one that you think probably exists. "Paperclip maximizers" is a very small subset of the parties that have a reason to be interested in trying to figure out what h...
I don't know about "by a paperclip maximizer", but one thing that stands out to me:
If we're in a simulation, we could be in a simulation where the simulator did 1e100 rollouts from the big bang forward, and then collected statistics from all those runs.
But we could also be in a simulation where the simulator is doing importance sampling - that is, doing fewer rollouts from states that tend to have very similar trajectories given mild perturbations, and doing more rollouts from states that tend to have very different trajectories given mild perturbations.
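Here's a minimal sketch of that allocation rule, under toy assumptions: estimate how sensitive each initial state is to mild perturbations, then weight the rollout budget by that sensitivity. The logistic-map dynamics, perturbation size, and budget are all illustrative placeholders, not anything the comment above specifies.

```python
import random

def rollout(state, perturbation=0.0, steps=50):
    """Toy stand-in for rolling a trajectory forward from an initial state.
    The logistic map is chaotic, so some states are far more sensitive than others."""
    x = min(max(state + perturbation, 0.0), 1.0)
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

def sensitivity(state, probes=8, eps=1e-6):
    """Estimate how much mild perturbations change where the trajectory ends up."""
    base = rollout(state)
    return sum(abs(rollout(state, random.uniform(-eps, eps)) - base)
               for _ in range(probes)) / probes

# Candidate initial states the simulator could roll forward from.
states = [random.random() for _ in range(1_000)]
weights = [sensitivity(s) for s in states]
total = sum(weights) or 1.0

# Importance sampling: spend the fixed rollout budget preferentially on
# high-sensitivity states, and little on states whose futures barely vary.
budget = 100_000
allocation = [round(budget * w / total) for w in weights]
```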
If...
When the bullet missed Trump by half an inch I made a lot of jokes about us living in an importance-sampled simulation.
I find this argument fairly compelling. I also appreciate the fact that you've listed out some ways it could be wrong.
Your argument matches fairly closely with my own views as to why we exist, namely that we are computationally irreducible.
It's hard to know what to do with such a conclusion. On the one hand it's somewhat comforting because it suggests even if we fuck up, there are other simulations or base realities out there that will continue. On the other hand, the thought that our universe will be terminated once sufficient data has been gathered is pretty sad.
Yet the universe runs on strikingly simple math (relativity, quantum mechanics); such elegance is exactly what an efficient simulation would use. Physics is unreasonably effective, reducing the computational cost of the simulation. This cuts against the last point.
This does not seem so consistent, and is the primary piece of evidence for me against such simulation arguments. I would imagine simulations targeting, e.g., a particular purpose would have their physics tailored to that purpose much more than ours seems to (for any purpose, given the vast comp...
This and other simulation arguments become more plausible if you assume that they require only a tiny fraction of the compute needed to simulate physical reality. Which I think is true. I don't think it takes nearly as much compute to run a useful simulation of humans as people usually assume.
I don't see a reason to simulate at nearly a physical level of detail. I suspect you can do it using a technique that's more similar to the simulations you describe, except for the brains involved, which need to be simulated in detail to make decisions like evolved or...
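For a sense of scale, here is a rough back-of-envelope along those lines; every number is an illustrative assumption rather than something claimed above (the ~1e15 FLOP/s figure is an often-cited order-of-magnitude estimate for emulating one human brain, and the headcount and duration are guesses).

```python
# Illustrative back-of-envelope: cost of simulating only the decision-relevant
# brains in detail, rather than the physical world. All numbers are assumptions.
brain_flops      = 1e15      # often-cited order of magnitude for one human brain, FLOP/s
people_simulated = 1e4       # assumed: people whose decisions materially shape AGI
years            = 30        # assumed duration of the AGI-relevant period
seconds_per_year = 3.15e7

total_flop = brain_flops * people_simulated * years * seconds_per_year
print(f"~{total_flop:.0e} FLOP")  # ~9e27 FLOP: enormous, but nowhere near simulating a planet atom by atom
```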
I doubt this reading was intended, but the whole article makes a great joke if the very last thing in the article, item 8, "We are not going to create a paperclip maximizer.", is the punch line. Like, I have just presented all this utterly absurd nonsense, and now I have given you an alternative: abandon a belief you hold even though you almost certainly hate holding it, hate the feeling that it's true, yet believe it anyway. It's like a gong.
Doesn't actually land, for me. I hold out hope for a way of addressing the simulation hypothesis tha...
I don't think this scenario is likely. Except in degenerate cases, an ASI would have to continue to grow and evolve well beyond the point at which a simulation would need to stop to avoid consuming an inordinate amount of resources. And, to take an analogy, studying human psychology based on prokaryotic life forms that will someday evolve into humans seems inefficient. If I were preparing for a war with an unknown superintelligent opponent, I would probably be better off building weapons and studying (super)advanced game theory.
Which ideas seem slightly ...
afaict, this is true the same way major historical figures are primarily approximately-instantiated in simulations today (movies and fiction). it's just a more intense version of "history has its eyes on you" - history is a thing in the future that does a heck of a lot of simulating. what we do to affect that history is still what matters, though.
Another possibility is that the beings in the unsimulated universe are simulating us in order to run a Karma Test: a test that rewards agents who are kind and merciful to weaker agents.
By running Karma Tests, they can convince their more powerful adversaries to be kind and merciful to them, due to the small possibility that their own universe is also a Karma Test (by even higher beings faced with their own powerful adversaries).
Logical Counterfactual Simulations
If their powerful adversaries are capable of "solving ontology," and mapping out all of existence
Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource‑grabbers it may encounter while expanding through the cosmos. The purpose of this simulation is to see what kind of ASI (artificial superintelligence) we humans end up creating. The paperclip maximizer likely runs a vast ensemble of biology‑to‑ASI simulations, sampling the superintelligences that evolved life tends to produce. Because the paperclip maximizer seeks to reserve maximum resources for its primary goal (which despite the name almost certainly isn’t paperclip production) while still creating many simulations, it likely reduces compute costs by trimming fidelity: most cosmic details and human history are probably fake, and many apparent people could be non‑conscious entities. Arguments in support of this thesis include:
Falsifiable predictions: This simulation ends or resets after humans either lose control to an ASI or take actions that cause us to never create an ASI. It might end if we take actions that guarantee we will only create a certain type of ASI. There are glitches in this simulation that might be noticeable, but which won’t bias what kind of ASI we end up creating, so your friend who works at OpenAI will be less likely to accept or notice a real glitch than a friend who works at the Against Malaria Foundation would. People working on ASI might be influenced by the possibility that they are in a simulation, because those working on ASI in the non-simulated universe could be, but they won’t be influenced by noticing actual glitches caused by this being a simulation.
Reasons this post’s thesis might be false: