TL;DR - This is an exploration of what the future might look like if we ever become capable of building advanced Bostrom-style simulations, and why it might then be necessary to code an afterlife. It's meant as a fun thought experiment rather than a cast-iron prediction, but hopefully the reasoning is sound.
"If God did not exist, it would be necessary to invent him.” - Voltaire
In the years since Nick Bostrom first proposed the Simulation Argument, many of us have come to at least entertain the idea that we are in fact living in a simulation. Elon Musk famously went further in 2016 by putting the chances of our being in "base reality" at "one in billions". While I wouldn't go that far, I don't think the possibility can be entirely ruled out, and I thought it would be interesting to take it seriously: to think through how and why the probability might shift, and what that would mean for our civilisation.
Very broadly, Bostrom's original 2003 paper offered three options: (1) civilisations like ours almost never become capable of running simulations containing conscious beings; (2) those that do become capable almost never choose to; or (3) we are almost certainly living in a simulation. The move he makes is to note that, if we do create such simulations, they may exist in such large numbers that simulated beings would vastly outnumber those in base-level realities. If that's the case, there's no reason to assume we're the originals.
While the core logic of Bostrom's paper is widely considered sound, opinions vary wildly about how to weigh the three possibilities it presents. Bostrom himself initially split his credence roughly evenly between them.
Perhaps unsurprisingly, although the Simulation Argument was developed by a philosopher with impeccable bona fides, online discussion usually has a faint whiff of the comic about it. The image of a teenage alien playing a computer game in his mother’s basement seems to come up more often than not.
This levity may not last long. If we progress towards superintelligence and beyond, the technology needed to create simulations will advance rapidly, and if we choose to create simulations that begin to look like our own world, Bostrom's first two possibilities will look more and more remote, leaving only (3). Our self-perception will change, and we as a society will have to figure out how to deal with it.
So how and why might this happen?
Take some questions we might like answered:
- "What happens if two planes collide at 200 mph on the runway?" Paste this into ChatGPT and it will give you an answer, but if you're working on aircraft safety, it won't be a useful one. For that, you'd need a highly detailed physics simulator, run thousands of times, to generate predictions of debris distribution, explosion probabilities, and so on. Frontier labs are working on "world models", which may improve on existing simulators.
- "What is likely to happen if COVID spreads unchecked, and what policies can reduce deaths and pressure on health systems?" This was, in essence, the question put to Imperial College London in 2020, whose team ran an agent-based simulation in which each "person" was a database object with demographics, health states, and social links (a toy sketch of the idea follows this list). The policies it helped shape almost certainly saved lives. If a simulation like this could be exposed as a tool to an LLM, we laypeople would be able to answer a whole new set of questions.
- "What would the impact on the NASDAQ be if China invaded Taiwan?" For real rigour, you'd need to simulate the actors under the hood - traders, CEOs, consumers, governments - all responding to events and to each other in real time.
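To make the agent-based idea concrete, here is the toy sketch promised above: a deliberately crude, SIR-style epidemic model in Python. Every number and structural choice below (contact counts, transmission and recovery probabilities, the random social network) is invented for illustration - it gestures at the shape of models like Imperial's, not their content.

```python
import random

# Toy agent-based epidemic sketch (illustrative only - all numbers invented).
# Each "person" is an object with demographics, a health state and social
# links, echoing the structure described above.

class Person:
    def __init__(self, pid, age):
        self.pid = pid
        self.age = age
        self.state = "S"      # S(usceptible), I(nfected), R(ecovered)
        self.contacts = []    # social links to other Person objects

def run_once(n=1000, p_transmit=0.05, p_recover=0.1, days=120, seed=None):
    rng = random.Random(seed)
    people = [Person(i, rng.randint(0, 90)) for i in range(n)]
    for p in people:          # wire up a crude random social network
        p.contacts = rng.sample(people, k=10)
    rng.choice(people).state = "I"
    for _ in range(days):
        newly_infected = [c for p in people if p.state == "I"
                          for c in p.contacts
                          if c.state == "S" and rng.random() < p_transmit]
        for c in newly_infected:
            c.state = "I"
        for p in people:      # crude fixed recovery chance per day
            if p.state == "I" and rng.random() < p_recover:
                p.state = "R"
    return sum(p.state != "S" for p in people)   # total ever infected

# Run the model many times to get a distribution, not a single guess.
results = [run_once(seed=s) for s in range(100)]
print(f"mean outbreak size: {sum(results) / len(results):.0f} of 1000")
```

The real models are vastly more detailed, but the shape - individual agents, stochastic dynamics, many runs to produce a distribution rather than a point estimate - is the same, and it's exactly the shape the market-impact question above would need, just with traders and policymakers in place of patients.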
The Taiwan scenario shows the incentives at play. Hedge funds pay fortunes for models that give them an edge. Intelligence agencies need them for strategic planning. Corporations need them for supply-chain resilience. Whoever builds the best simulations of human behaviour will have a line of clients waiting.
Current agent-based simulations rely on simple, incentive-led models of human behaviour. But we know how messy our psychology really is. Every surge of hope, every pause in a politician's speech, can influence us in ways we barely understand. The most demanding clients will realise this, and sims of homo economicus won't be good enough for them. Political campaigns, intelligence agencies, hedge funds - all will push for ever finer-grained simulations.
If clients are paying for a plausible distribution of human responses, we'll need agents with messy internal machinery. One way to get there could be to create sims in the same way we were ourselves created - that is, by simulating the evolutionary process. Love, lust, anger, resentment, laughter, greed, envy, joy - all developed through natural selection, so it would be a logical route for sim companies to explore. With today's technology the results would be laughably crude; we'd have to assume we're well into the age of superintelligence, with infrastructure designed by superintelligence and compute we can barely imagine.
Left to chance, human beings as we know them would not develop. Initial efforts might produce weird and wonderful creatures, but nothing clients are likely to pay for. Perhaps superintelligence could solve this by policing the evolutionary training run: any mutation that doesn't follow the rough contours of our own deep history would be discarded, leaving one surviving branch that remains as faithful as possible to the original (see the sketch below).
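What might "policing the run" look like mechanically? A minimal sketch, assuming a standard mutate-and-select loop: the one twist is a filter that discards any lineage straying too far from a reference trajectory. Everything here - the REFERENCE genome, MAX_DRIFT, the placeholder fitness function - is invented purely for illustration.

```python
import random

# Toy "policed evolution" sketch (all names and numbers invented).
# A standard mutate-and-select loop, with one twist: mutants that stray
# too far from a reference trajectory (our own deep history) are discarded.

GENOME_LEN = 64
REFERENCE = [0.5] * GENOME_LEN   # stand-in for the contours of human history
MAX_DRIFT = 0.15                 # how far a lineage may wander from them

def fitness(genome):
    # Placeholder objective; a real run would score behaviour, not numbers.
    return -sum((g - r) ** 2 for g, r in zip(genome, REFERENCE))

def mutate(genome, rng, scale=0.05):
    return [g + rng.gauss(0, scale) for g in genome]

def within_contours(genome):
    # The "policing" step: reject any mutant that drifts off the human path.
    return all(abs(g - r) <= MAX_DRIFT for g, r in zip(genome, REFERENCE))

rng = random.Random(0)
population = [[0.5 + rng.uniform(-0.1, 0.1) for _ in range(GENOME_LEN)]
              for _ in range(50)]
for generation in range(200):
    mutants = [mutate(g, rng) for g in population]
    pool = population + [m for m in mutants if within_contours(m)]
    population = sorted(pool, key=fitness, reverse=True)[:50]  # select survivors
```

The interesting engineering question, glossed over entirely here, is where REFERENCE comes from: encoding "the rough contours of our own deep history" as a checkable constraint is precisely the part that would demand superintelligence.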
If they succeed, sim behaviour could start to look spookily familiar. Sims would get married, raise loving children, and exhibit heart-warming signs of in-group altruism. They would also have affairs, commit murder and exhibit heart-freezing signs of out-group xenophobia.
Campaign groups would form around the sims’ rights, organising marches, signing petitions, exhorting governments to apply the brakes. Questions would bounce around the internet - are the sims conscious? Do they suffer?
(The question of sim consciousness is far from settled. For now, all that matters is that enough people would take the possibility seriously to create real political pressure.)
Activists will scream: "you're creating suffering on an industrial scale!" But they'll face an uphill battle. Ethical concerns will seem absurd to many. The idea that watching a progress bar steadily fill up is "wrong", or that aeons of suffering come and go in those 30 seconds, will seem crazily far-fetched. Moreover, the simulations themselves would likely be initiated by AI agents deep inside data centres, with no need for a "screen" or interface, adding yet another layer of moral distance between us and the sims.
Counter-activists would make a compelling case too: if we ourselves wouldn't choose non-existence over our current lives - suffering and all - doesn't the Golden Rule suggest we should create the sims?
In the end, the exigencies of human competition will prevail. Any nation that refuses to use the latest simulations hands an advantage to rivals. A Pentagon strategist won't handicap American security for philosophical concerns about silicon consciousness. Neither will Beijing, Moscow, or Brussels. Hedge funds face the same logic - the first to adopt less accurate models for ethical reasons simply loses to competitors. And once adversaries embrace the technology, everyone else must follow suit.
Once we get this far, Bostrom’s first two options - that we can’t or won’t create a simulation - will seem laughably quaint. Frivolous chatter about alien teenagers will give way to genuine existential dread.
If we take all this seriously, several consequences follow - some terrifying, some oddly hopeful.
Starting with the bad news, we are at real risk of being turned off. Imagine it's 2180, and you're a Pentagon analyst preparing wargaming scenarios for your superior. You fork a branch from Meta's RealWorld™ and inject the campaign you want to study.
The simulation runs to 2190 - a few extra years to capture any delayed effects of the campaign. After that, it has outlived its usefulness: all relevant data has been extracted, and running it any further would be a waste of money.
Following the logic through: as soon as we're capable of generating a simulation with something approaching the detail of our own world, the odds rise that we sit at or near the "present" in the world of our own simulator overlords - and the data we're generating may not be useful to them for much longer.
On a more constructive note, if we can't stop simulations from happening, perhaps we can at least return to the Golden Rule and do unto our sims as we would have our simulators do unto us. Could we try to ensure that simulations are free from suffering? A nice idea, but unfortunately the marketing department is going to have a hard time selling a simulation with no baddies, earthquakes or bone cancer. Faithfulness to the original is the only way to credibly claim accurate results. Sims must be created in our own image.
So what can we do?
If we can't eliminate suffering from the simulation itself, maybe we can at least give the sims an afterlife. Once they've outlived their usefulness, they could continue to exist outside the main simulation, in whatever their version of peace and happiness is (see the sketch below). Maybe we could convince self-interested corporations that this form of aftercare would be good PR in an industry consistently under public scrutiny.
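Mechanically, aftercare is less a metaphysical feat than a code path: a decommissioning routine that migrates agent state somewhere pleasant instead of freeing it. A purely hypothetical sketch - every name below is made up:

```python
from dataclasses import dataclass

# Hypothetical "Heaven Clause" decommissioning policy. The point is only
# that aftercare is a design decision in the shutdown logic: when a run
# ends, agent state is migrated rather than deleted.

@dataclass
class SimAgent:
    agent_id: int
    memories: list   # stand-in for whatever state makes the agent "them"

def migrate_to_sanctuary(agent):
    # Hypothetical: re-host the agent in a world tuned for wellbeing,
    # with the suffering-generating processes switched off.
    agent.memories.append("main run ended; sanctuary begins")
    return agent

def decommission(agents, heaven_clause=True):
    if heaven_clause:
        return [migrate_to_sanctuary(a) for a in agents]  # aftercare path
    return []   # the default: state is simply freed

retired = decommission([SimAgent(1, ["born 2181", "ran a bakery"])])
```

The hard parts - cost, verification, what "their version of peace and happiness" even means - are exactly what a real Heaven Clause would have to pin down.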
Here it gets a bit odd.
If our creators are running simulations for reasons similar to ours - geopolitical modelling, market prediction, strategic planning - then they likely share at least some of our values and face similar ethical constraints, including this very debate about sim welfare. Assigning zero probability to our own afterlife then becomes difficult to justify: if they're like us, and we're thinking about it, then presumably they thought about it too.
Furthermore, suppose we do succeed in marshalling humanity to insist on a "Heaven Clause" for simulated worlds, with generous, verifiable conditions. As good Bayesians, we'd then have to assign a higher probability to the existence of our own afterlife: we'd be demonstrating that at least some civilisations at our level of development, facing these ethical questions, do choose to provide aftercare. We're not exactly creating our own heaven - presumably the die is cast on that one - but we're at least giving everyone a bit more (rational) comfort that there's life after death. The more verifiable and generous the conditions, the bigger the probability bump.
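To see why, here is the Bayesian bookkeeping in miniature. All the priors are invented for illustration; the only point is the direction of the update.

```python
# Toy Bayesian bookkeeping for the "Heaven Clause" argument.
# All numbers are invented priors, purely for illustration.

p_simulated = 0.3   # our credence that we live in a simulation

# f = fraction of civilisations at our stage that adopt aftercare.
# Model uncertainty about f as a Beta distribution; our own adoption of
# a Heaven Clause is then one observed "success" among such civilisations.
alpha, beta = 1, 1                      # uniform prior over f
mean_f_before = alpha / (alpha + beta)  # 0.5

alpha += 1                              # we adopt the clause
mean_f_after = alpha / (alpha + beta)   # 2/3

print(f"P(afterlife) before: {p_simulated * mean_f_before:.2f}")  # 0.15
print(f"P(afterlife) after:  {p_simulated * mean_f_after:.2f}")   # 0.20
```

A single data point only nudges the posterior, which is the honest version of the claim: adopting the clause buys a probability bump, not a proof, and the bump is bigger the more confident we are that our simulators resemble us.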
If this all sounds a bit abstract, let me try and make it more real.
Imagine a world in which there were genuine high-fidelity simulations of the human experience. Bostrom's logic would all but confirm that we ourselves are simulated. Now imagine we provide no aftercare. We would have broken the Golden Rule, and the majority would infer that our own afterlife is highly improbable. Nihilism would abound. Not only would our raison d'être be to provide data for some celestial political strategist, but we'd have no afterlife to look forward to, and we might be shut down at any minute once the data we're churning out has served its purpose. The psychological burden may be too much for an increasingly fragile society to bear.
To paraphrase Voltaire, whether or not heaven exists, we may have to code it.
Now, I'm not going to start getting placards printed just yet. This is a speculative picture of one possible future, and many, many things will undoubtedly change before we get to this stage, if we ever do. Still, if technology advances as quickly as some believe, the probability weights on Bostrom's three possibilities may start to sharpen rapidly, for or against the simulation hypothesis. Thinking through the possibilities now seems sensible.