My father, who is home recovering from surgery, emailed the following web page to me and a few other members of my family, and expressed interest in reading interesting responses.

If We Are In A Computer Simulation

Is the universe just a big computer simulation running in another universe? Suppose it is. Then I've got some questions:

  • Are intelligent entities modeled as objects? In other words, are they instantiated and managed as distinct tracked pieces in the simulation? Or is the universe just modeled as huge numbers of subatomic particles and energy?
  • If intelligences aren't modeled as objects are they at least tracked or studied by whoever is running the sim? Or is the evolution of intelligence in the sim just seen as an uninteresting side effect and irrelevant to the sim's purposes?
  • Is The Big Bang the moment the sim started running? Or did it start long before that point? Or was it started at a much later point, with data loaded in to make it look like it started earlier?
  • Do the creators of the sim intervene as it runs? Do they patch it?
  • What is the purpose of the sim? Entertainment? Political decision-making? Scientific research?
  • Were the physical laws of the universe designed to reduce the computational cost of the sim? If so, what aspects of the physical laws were designed to make computation cheaper?

Imagine the purpose of the sim is entertainment or decision-making. Either way, it could be that out-of-universe sentient beings actually enter this universe and interact with some of its intelligent entities: interact with simulated people for fun, or interact in order to try out different experiments in political development. In the latter case I would expect the same sim to be rerun more often, backed up and restarted from the same point but with some alteration of what some people say or do.

So what's your (simulated) gut feeling? Are you in a sim? If so, what's it for?

Any thoughts?

There are only a few conditions under which it could possibly matter to me that I'm in a simulation. Among them:

  1. The simulation is buggy. Under some conditions that are reachable from within the sim, it's possible to cheat at physics by overflowing some buffer or exploiting some other software bug.
  2. The simulators are watching and interacting. There's someone watching me right now who has the power to change a few runtime variables and cause a kilo of gold coins, a kilo of Purple Kush, and a verthandi to appear next to me. (They are cordially invited to do so.)

Most of these questions seem extremely hard, if not impossible, to answer without something like talking to the makers of the sim or finding hacks to get at the source code. Some of these are, however, slightly testable. For example, if it turns out that BQP is comparatively large (to fix an example, say it contains NP), then that would suggest that if we are in a simulation, the simulation is not being run on a classical computer.

One possible test to see if we are in a simulation, although a very weak one, is to collect a very large set of data about decaying particles and then analyze the data. If the data turns out to be much closer to a pseudorandom sequence than to a truly random one, this would strongly suggest we are in a simulation. Note that if presently believed complexity conjectures are true, distinguishing actual pseudorandom data from genuinely random data should be extremely tough.
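For concreteness, here is a minimal sketch, purely illustrative and not part of the original comment, of the sort of elementary statistical checks one might run on bits derived from decay timings: a monobit frequency test and a runs test, in the style of the NIST SP 800-22 suite. Consistent with the complexity-conjecture caveat above, these would only catch a weak pseudorandom source such as a crude linear congruential generator; a cryptographically strong generator would pass them.

```python
import math

def monobit_test(bits):
    """Two-sided p-value for the hypothesis that 0s and 1s are equally likely."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

def runs_test(bits):
    """p-value of the runs test (too few or too many alternations is suspicious)."""
    n = len(bits)
    pi = sum(bits) / n
    if abs(pi - 0.5) >= 2 / math.sqrt(n):   # prerequisite check from NIST SP 800-22
        return 0.0
    runs = 1 + sum(bits[i] != bits[i + 1] for i in range(n - 1))
    num = abs(runs - 2 * n * pi * (1 - pi))
    den = 2 * math.sqrt(2 * n) * pi * (1 - pi)
    return math.erfc(num / den)

# Hypothetical usage: derive bits from measured decay inter-arrival times
# (e.g. the parity of each interval in microseconds) and inspect both p-values.
# bits = parity_bits_from_decay_data(...)   # placeholder, not a real function
# print(monobit_test(bits), runs_test(bits))
```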

One serious issue with engaging in tests to try to see if we are in a simulation is that the most obvious tests often carry some danger of crashing the simulation or causing the programmers to stop it. For example, one test that jimrandomh mentioned to me was bouncing laser beams off of farther and farther objects to expand the radius which we know is being accurately simulated (this is mainly usable for ruling out a simulation that only simulates our solar system in detail). This is, however, exactly the sort of thing which one could see as leading to the equivalent of a seg fault or other error. On the other hand, if the simulators are only running a simulation of our solar system, then they are more likely to care about humans or life in general on Earth (at minimum, we're then a much larger and more significant part of the simulation) and so are more likely to have anticipated this sort of problem and made sure the code is somewhat resistant. That said, if all probes stopped transmitting back to Earth or showing any signs of existence a bit beyond our solar system (say 75 AU out), and all attempts to bounce lasers or radar off of distant objects failed, that would strongly suggest we are in a simulation that includes just our solar system.

Xixidu quite often tweets about experiments likely to crash the sim; for example: http://www.centauri-dreams.org/?p=18718&utm_source=rss&utm_medium=rss&utm_campaign=spacetime-beyond-the-planck-scale

And I somehow feel like I read an SF short story long ago where someone found out radioactive decay followed a simple linear congruential generator, or something.

The experiment in question doesn't seem to be the sort that is likely to crash the sim, since our part is purely passive: just looking at the differences in the pre-existing data. Note also that the simulation has to be fairly robust, since there are a large number of highly varied interactions going on (e.g. cosmic rays hitting the Earth's atmosphere and all sorts of marginal exotic reactions in the sun that occur simply because the sun is so large). But the point that there's a general class of experiments that could plausibly cause a crash is a good one.

I don't see how bouncing laser light off an exo-planet would prove anything. A simulated bounce can always be performed whenever somebody performs the experiment. Okay, if the bounces fail, then it's obvious we are simulated; you are right about that.

But testing how truly random radioactive decays are could be a very interesting thing to do, indeed.

The broad answer to those questions is "depends what the simulation is being run for".

To see why, consider why we run simulations. Answers include entertainment, e.g. SimCity; training, e.g. flight simulators; fundamental research, e.g. simulation of neural networks; engineering, e.g. simulation of a model building to determine whether its design withstands this or that catastrophic event, such as a quake. I'm probably forgetting many other areas.

A simulation is generally intended to answer a specific question or class of questions about some aspects of reality. The degree of fidelity, the "shortcuts" taken, the possibilities for intervention and so on are all determined by what questions are of interest to the people running the simulation.

If we decided to run a simulation to answer the question "can intelligent life arise out of deterministic physics", then we would build the simulation to have simplified but realistic physics, such that we could run the sim at a much faster rate than our baseline reality. Such a sim would likely not model anything at a higher level of abstraction than the laws of physics, since that would defeat the point.

If, on the other hand, we were running a simulation for the aesthetic purpose of recreating forgotten parts of our history, we would likely not bother with such minute details. For those who've read Permutation City, we would make something closer to Copies than to the Autoverse.

In fact, you could do worse than buy your dad a copy of Permutation City to stimulate his thinking about such questions.

I find it very plausible that whatever would have the capacity to simulate us would have a wider range of motivations than we do.

If we are, I would place an extremely high credence in individuals not being modeled as objects.

  • Doesn't look like it.
  • Untestable, but it seems like it would be a lot of trouble.
  • The laws of physics seem to go back all the way, so at the big bang or before.
  • Doesn't look like it.
  • If I imagine making something like the universe, it would be because I could - and a little scientific curiosity.
  • Not sure. Our universe runs on pretty complicated and high-energy physics. If we're running inside a computer in a universe similar to ours, this computer would have to have a vastly higher maximum energy splitting than ours, or more generally work on physics that made computation really easy.

Actually, answering that last question led to an interesting thought. Ease of computation puts an ordering on the universes that can simulate each other. If our prior probability of the universe existing can be thought of as a function of ease of computation in the universe (probably not, but simplifying), that puts an upper limit on the "tower o' universes." Then we can make assumptions about how rarely universes bother to simulate whole other universes and actually get finite numbers rather than pesky infinities! Thinking this has caused me to revise my probability of being in a simulation down.
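To make the "finite numbers" point concrete, here is a toy calculation with assumed numbers of my own (not the commenter's): suppose each universe devotes a fraction f < 1 of its computation to running child simulations. Then the compute available k levels down shrinks like f^k, the whole tower's compute is a convergent geometric series, and only finitely many levels can afford any given minimum budget.

```python
# Toy model, illustrative assumptions only: each universe spends a fraction f < 1
# of its computation on child simulations, so level k of the tower gets at most
# f**k of the top level's compute.
f = 0.01          # assumed fraction of a universe's compute spent on child sims
minimum = 1e-30   # assumed minimum compute (in top-level units) needed to host observers

total_compute = 1 / (1 - f)      # sum of f**k over all levels k >= 0: finite, not a "pesky infinity"

levels = 0
budget = 1.0
while budget * f >= minimum:     # how deep can the tower go before dropping below the minimum?
    budget *= f
    levels += 1

print(f"total compute across the tower ≈ {total_compute:.4f} top-level units")
print(f"levels of nesting that clear the minimum budget: {levels}")
```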

If our universe was an artificial construct, then the potential scenarios spin out of control as we consider the possible motivations of the 'creators'. Although the assumption may not apply in universes with different physical laws, I'll operate on the premise that their computing power is finite.

In the first scenario, the one that normally shows up in popular science programmes referring to science fiction to entertain the audience, this is a game. If we were directly created by the 'creators', then we must be beings that they can conceive of, and that reflect some side of their psyche. I don't think this planet is the most interesting setting for a game I could imagine, so assuming we can reflect our creators' psyches with our own biases (the only data we'll ever get in this scenario), there are probably more perfect games for them to play. This implies they would only spend limited resources on this particular universe, and running a fully 'rendered' universe, minds and all, when a player isn't in the area would probably increase costs.

I know I can think, feel, and, to a degree, comprehend the universe. You know you can think, feel, and, to a degree, comprehend the universe. Everyone reading this knows they can think, feel, and comprehend the universe, although none of us knows for sure the others can, at least when we aren't interacting with them. If you're reading this while alone, we can probably say we do this even when another character isn't present, which implies we're more than just NPCs. If it's a game, then you, or someone you know, are probably one of the players. (Ask your dad straight out, as a joke, whether he's playing a ridiculously complicated RPG.) An extension of this possibility is that this is simply a psychology experiment writ large, and I, or you, am actually a member of the creator species planted in this environment for the purposes of the experimenters. If you were the player, however, do you think you would be happy with the 'game' you're playing now as pure recreation?

Therefore I think that if you, or I, are self-aware in a game, then we're either here as part of a psychology experiment, or we're side-characters for someone with a more engaging life. (Although the utility functions of any PC races are unknown.) They could be cleaners, sociopaths, or leaders, but we'd be looking primarily for people who show some degree of motivation.

The second possibility is that 'they' have the capabilities to generate an entire universe, or at least one solar system, down to the last atom, in cyber-space. If so, then we can only hope that we're very much a valued experiment, and won't be killed off like a lab rat once our immediate purpose is fulfilled.

Either way, if this is a simulation, we have no power over it, and won't be able to gain information about it. If it's a game, the servers could be reset, and if it's an experiment detailed enough that we all exist as we think we do, just inside a simulation, we could likely be programmed to forget if it suited the programmers.

Besides, if this were a simulation, then we're presumably worthless or insane by the standards of the wider universe. Our actions have the greatest effect in a situation where our universe exists in full and is a 'physical' universe, while they are all but worthless in a situation where the universe is a game. So to give our actions the best chance of mattering, we should operate under the assumption that we, and everyone around us, exist.

Also, 'Well Done. Progress to Level 2!'

So what's your (simulated) gut feeling? Are you in a sim? If so, what's it for?

Any thoughts?

I just wanted to clarify that you are interested in undeveloped, naive thoughts as well? While I'm sure developed thoughts would be preferred, unless there are some science fiction writers out there who've had reason to think about this a lot, I think most people's thoughts would be undeveloped and naive, just because there's not much reason to dwell on such questions and, even if you do, not much entanglement with reality to train your thinking.

Personally, I enjoy this post because the hypotheticals are interesting and perhaps through discussion we can hone and develop our initially naive ideas into more mature ones. (Then, should I discover in a month we are in a simulation, I will be that much further advanced in my thinking compared to where I would have been...)

I just wanted to clarify that you are interested in undeveloped, naive thoughts as well?

I expect that he is, yes, or at least doesn't mind them. If nothing else, they're probably easier to follow than full-on scifi for someone who's probably on post-surgical painkillers and such.

I assign a very high probability to this being wrong -- in other words, I wouldn't risk anything on this as a claim -- but here is what I would think: if I found out we were in a simulation created by conscious entities, I would feel that these conscious entities were most interested in us, as other conscious entities. I would also feel that they were benevolent and cared about us.

My reasons are that I think consciousness is special. Even though it just comes out of the physics, it's something that gives the universe a point of view and thus a locus -- a point of origin and some reason to consider a particular time and place 'important'. I also think that consciousness is unifying. As soon as you have a sense of self, an entity begins categorizing things as 'self' or as 'other'. A sufficiently intelligent conscious being will recognize that other conscious entities are more like themselves than anything else.

It's hard to imagine the motives of a general intelligent being. But a being that simulates other beings is already a smaller group. They are curious and possibly lonely. I just think that loneliness is the fundamental condition of a self-aware being.

We were either simulated by conscious entities or simulated by non-conscious entities. If the former, I think we are the purpose of the simulation (or an initial step in the purpose). If we were not simulated by a conscious being, then we are being simulated by 'accident' and in that case I see no difference between that hypothetical simulation and the one we're in. Because I believe we are in a simulation, by how I define simulation, I just wonder if that simulation was created with intention or not.

I would also feel that they were benevolent and cared about us.

Why?!

I should probably emphasize that I expect we were created by a mechanism that is indifferent to us. But given a mechanism that is not indifferent to us, I would guess that the conscious beings simulating us are empathetic and friendly. They are empathetic because they are conscious beings interested in simulating other conscious beings. If empathetic, I think they are friendly because I think that being empathetic and unfriendly is perverse -- a pattern twisted on itself and unstable in the long run. There is some probability that we are so different they are unable to empathize with us very well, or they empathize more fully with our more intelligent predecessors, but in these scenarios they are not very intelligent and are not learning enough as they go from the simulation. And while, finally, I wouldn't necessarily give this case of somewhat-stupid simulators a small probability, that scenario is closer on the spectrum to our universe having been created by accident. Semi-intelligent beings might have managed to hack this universe together, in which case they didn't intend us and their attitudes aren't predictable.


Since you've made me think about this further (usually, no reason to develop such ideas very much), I will add that even if we were created by benevolent, caring beings, I doubt very much that exactly 'we' were the point of the simulation and chances are we are just a blip in an evolutionary procession towards their intended friends. In which case, I empathize with these future friends in considering them benevolent and caring.

I would guess that the conscious beings simulating us are empathetic and friendly. They are empathetic because they are conscious beings interested in simulating other conscious beings. If empathetic, I think they are friendly because I think that being empathetic and unfriendly is perverse -- a pattern twisted on itself and unstable in the long run.

This seems unjustified. Humans are interested in how other life works. We often poke it and do terrible stuff to it. We have our children cut off the heads of some living things just to see what happens. We engage in similar experiments on our own supposed best friends. And given the closest thing we have to simulations, people try to find all sorts of new and clever ways to torture. (And yes, each of those words is a separate link). The empirical data doesn't seem to support the claim of empathy and friendliness.

Humans seem very empathetic to me, since we do worry about, for example, the treatment of animals -- perhaps not planaria, but the important thing here is that we would dependably worry if we thought they minded. I cannot think of any mental distress that we would not be concerned about, no matter how far removed the 'organism'.

But of course we are also very cruel, and I do see that as perversion, because it is empathy turned against itself. For example, regarding our fascination with torture: at least that fascination starts from the premise that torture is bad. Cruelty is 'interesting' because it stimulates our empathy. We wouldn't care about causing harm for its own sake if we didn't understand it was harm in the first place.

Whether people have empathy for animals varies a lot. I know at least one prominent Less Wronger who, when discussing vegetarianism, recently said that his utility function doesn't have a term for non-human animals' suffering. Moreover, history is clear that many humans don't have much empathy for anything beyond their own tribal group, and even when they have a theory of mind good enough to deceive and fight outsiders, they still might not care.

If the beings who make the simulation are much smarter than we are, then I see no reason why they wouldn't take our suffering about as seriously as many humans take the suffering of animals for medical research, or, even more bluntly, how some humans take hunting or dog-baiting or cockfighting, or a hundred other activities that cause pain and suffering to animals for their sheer amusement.

(Incidentally, I'm curious, do you think this universe looks to you like one which has creators who care for their creations?)

Sometimes humans have empathy towards suffering things, sometimes they don't. I guess you could say this is a capacity to not have empathy or a capacity to have empathy and our difference about human empathy is a cup half-full or half-empty thing. Of course, as a predictor of how aliens would treat us, the fact that humans aren't consistently empathetic would be a prediction that aliens might not treat us well. I don't expect aliens to treat us well, whereas I expect our simulators would. Perhaps I am giving the simulators too much credit, intelligence-wise and empathy-wise, for how much I am grateful for certain aspects of the universe. Maybe they just cut and pasted a lot from their own universe and I give them too much credit.

If the beings who make the simulation are much smarter than we are, then I see no reason why they wouldn't take our suffering about as seriously as many humans take the suffering of animals for medical research,

I suppose, but nevertheless I'm OK with this. It would be nice to have a purpose.

or, even more bluntly, how some humans take hunting or dog-baiting or cockfighting, or a hundred other activities that cause pain and suffering to animals for their sheer amusement.

While I would expect these behaviors from any evolutionarily evolved intelligence (excepting whales, perhaps), they are so contradictory to other evolved traits that I think they must be transient. For example, many people don't enjoy such things at all, and cockfighting is illegal where I live.

If such threads are not meant to be transient, then I am wrong about all of this.

While I would expect these behaviors from any evolutionarily evolved intelligence (excepting whales, perhaps), they are so contradictory to other evolved traits that I think they must be transient. For example, many people don't enjoy such things at all, and cockfighting is illegal where I live.

Sure, many people don't, but how much of that is simply due to cultural norms? Many such activities are outlawed more because they are associated with lower classes or marginalized groups. Look at how in the United States hunting is a popular pastime in many areas, while in most of the US dogfighting is illegal. Why? Well, without delving too much into the mindkilling of politics, dogfighting is a sport historically popular with lower-income black people, while hunting is popular among a variety of different income groups among white people.

Among humans, the general trend does seem to be towards more empathy and caring. But for another species, even if we think that such a trend will occur, there's no reason to think it will outpace the growth of technology enough that they will not want to cause harm to their sims.

Oops, I just realized that in this last comment (the sibling to this one) I blurred two compartments of thought. I don't mind that I have different compartments, but I consider it a failure if I cannot remain in one throughout a thread. I guess what happened is that you convinced me there is reason to be cynical about human empathy, which became cynicism about human value, which inevitably leads to a set of grooves about value drift and my dissatisfaction with the lack of a framework of objective value ("FOOV"). So if you had the impression I switched gears regarding my initial position, you are correct.

By the way, I don't consider cynicism or optimism about human moral progress to be a factual matter, but two perspectives of the same scene. Over the weekend I attended a meeting that had me swayed in the optimism direction.

Sure, many people don't, but how much of that is simply due to cultural norms?

It is probably entirely the evolution of cultural norms, but why dismiss that? The important question is whether there is a predetermined direction to the evolution of cultural norms, and it seems we agree that a general trend is towards more empathy and caring (with some reservations) but that this isn't necessarily reliable.

I often think about whether or not humanity is 'good' and whether the cultural development of our empathy will outpace other factors, and I've settled on the conclusion that if our universe is not designed, it will probably not work out well, but if it was designed by benevolent, caring entities, it will somehow work out, no matter how small the probability.

In other words, without a designer, we're doomed anyway to a universe of random and arbitrary entities that won't conform to our (also) random and arbitrary moral preferences. With a designer, there is finally the possibility of a plan (and an imposed external set of moral preferences) and there is some probability (that I count as high) that we are part of the plan and thus we could trust that we would be happy with the outcome of that plan. Where 'we' doesn't necessarily mean us specifically, but future humans or another self-aware lineage or at least the designers themselves. Some set of conscious entities being happy with the universe seems like a good thing to me, better than a random flux of dissatisfied ones.

So to answer your question a couple comments up: at the moment I don't believe that our universe looks like it was designed by a caring entity, or that humanity is necessarily good. In my mind the problem is that there is no designer. A designer, after all, would terrifically increase the chances of moral success (from someone's point of view) compared to a random universe.

My gut feeling is that we are NOT inside a simulation. At least not in one run by some anthropomorphic creatures. Or any creatures at all.

But suppose we are. Are our simulators also simulated? How far or high does it go? Why lose the raw computing power with some nested simulations?

Why lose the raw computing power with some nested simulations?

Coolness factor. One test of how interesting the simulation you're running is, is whether it buds off simulations of its own.

But suppose we are. Are our simulators also simulated? How far or high does it go?

Turtles all the way down?

Why lose the raw computing power with some nested simulations?

Accuracy, perhaps. Also, what if it turns out that that feature hasn't been disabled or made difficult for us in some devious way? I mean, it's not like we've tried and succeeded, yet.

Are intelligent entities modeled as objects? In other words, are they instantiated and managed as distinct tracked pieces in the simulation? Or is the universe just modeled as huge numbers of subatomic particles and energy?

We have no reason to believe that intelligent entities behave in a way that contradicts reductionism. This is a fact mostly independent of whether we're being simulated. So, intelligent beings may be tracked by the simulation, but they behave like collections of particles, so it seems simpler to suppose they are simulated the same way everything else is.

If intelligences aren't modeled as objects are they at least tracked or studied by whoever is running the sim? Or is the evolution of intelligence in the sim just seen as an uninteresting side effect and irrelevant to the sim's purposes?

Assuming we are simulated, we still know nothing about the simulators other than that they exist. To discuss the probability of propositions about them (are they studying us?), we have to make some limiting assumptions, and I know of no good prior for choosing these.

For instance, if there are infinitely many universes which may simulate each other, there are different ways of assigning probability mass to each of them: does a universe gain probability mass through being simulated multiple times, or in multiple other universes? Does this mass depend on the simulator's own mass? Can a universe have a nonzero probability mass at all if it's never simulated (that is, can there be an Unsimulated Simulator universe)? Does the whole concept even make sense when dealing with infinitely many "existing" universes?

Under some assumptions, every possible universe (every point in universe-history phase space) may have at least one other universe simulating it, or perhaps infinitely many. If you allow universes that have more physical computing power than our own appears to have (i.e. more than a Turing machine), then a single such universe may simulate all possible universes in finite time, and maybe for beings living in it that's an obvious first step before they start searching among the simulated universes for the one they want to study. And so on.
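As a side note, the "simulate all possible universes" step mentioned above is usually pictured as dovetailing: interleave the execution of every program so that each one eventually receives arbitrarily many steps. Here is a minimal Python sketch of that scheduling idea, purely illustrative; `make_universe` and `step` are hypothetical stand-ins for "instantiate program k" and "advance it one tick". On a Turing-equivalent computer the schedule never finishes; the comment's point is that a hyper-computer could compress it into finite time.

```python
from itertools import count

def dovetail(make_universe, step, max_rounds=None):
    """Round n gives one step to each of universes 0..n-1, so every universe
    eventually receives unboundedly many steps."""
    universes = []
    rounds = count(1) if max_rounds is None else range(1, max_rounds + 1)
    for round_no in rounds:
        if len(universes) < round_no:
            universes.append(make_universe(len(universes)))  # bring one more universe into existence
        for u in universes:
            step(u)                                          # advance every existing universe one tick
    return universes

# Toy usage: each "universe" is just a counter; stepping it increments the tick count.
toy = dovetail(make_universe=lambda k: {"id": k, "ticks": 0},
               step=lambda u: u.update(ticks=u["ticks"] + 1),
               max_rounds=5)
print(toy)   # after 5 rounds, universe 0 has had 5 ticks and universe 4 has had 1
```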

Gut feeling, no. I can give a reason for this: the Simulation Argument gives two other options, and I think each of the most credible assumptions that would reduce the likelihood of one No-Sim option would increase that of the other. But I've never devoted that much thought to it.

(I give fairly strong credence to the claim that reality acts more like a Turing machine than like any other model we could use. But I think this needs a different name, to discourage us from explaining the 'Turing machine' by one of our other 'models'.)