A rational approach to the issue of permanent death-prevention

by Nanashi · 4 min read · 11th Feb 2015 · 28 comments


Edit: I removed this intro from the top of the post because it adds no value, but have left it here for posterity. The vast majority of all ethical and logistical problems revolve around a single inconvenient fact: human beings die unwillingly. "Should we sacrifice one person to save ten?" or "Is it ethical to steal a loaf of bread to feed your starving family?" become irrelevant questions if no one has to die unless they want to. Similarly, almost all altruistic goals have, at their core, the goal of stopping death in some way, shape, or form.

The question, "How can we permanently prevent death?" is of paramount importance, and not just to Rationalists. So it should be a surprise to no one that mystics, crackpots, spiritualists, and pseudo-scientists of all walks of life have co-opted this quest as their own. The loftiness of the goal, the cosmic implications of its success, and the sheer number of irrational people seeking to achieve it may make it tempting to apply the non-central fallacy and say, "I'm not interested in stopping death; that's something crazy people do."

But it's a fallacy for a reason: there is a rational way to approach the problem. Let's start with a pair of general statements:

  • X is the cause of the perception of consciousness. (Current hypothesis: X="human brain").
  • Recreation of X with >Y% fidelity results in the perception of a consciousness functionally indistinguishable from the original to an outside observer. (Original text: "results in the continuation of the perception of consciousness.")

These two statements border on tautological, and so they aren't that helpful by themselves. It doesn't sound nearly as impressive to say "Something causes something else," nor does it sound impressive to say, "If you copy all properties of X, all properties of X are duplicated." 

But it's important because it lays down the basic framework within which an extremely complex question can begin to be solved. In this case, the solution can be broken down into at least two major sub-problems: The Collection Problem ("How do we collect enough information about X to be able to recreate it with Y% fidelity?") and The Creation Problem ("Once we have that information, how do we create a physical representation of it?").

Neither of these problems is trivial; quite the opposite. They are ridiculously difficult, and my describing them simplistically should not be mistaken for implying that they are simple problems.

The Collection Problem

This problem is the most pressing, because solving it buys us time. Once that data is stored securely, you've dramatically extended your effective timeline: even if you, personally, happen to die, you've still got a copy of yourself in backup that some future generation will hopefully be able to reconstruct. More importantly, this applies to all of humanity. Once the Collection Problem is solved, everyone can be backed up. As long as you can stay alive until the problem is solved (especially if you live in a first-world country), you've probably got a pretty good shot at living forever.

The Collection Problem brings to mind a number of non-trivial sub-problems, such as logistics, data storage, and security. But these are fairly trivial *in comparison* to the monumental task of scanning a brain (assuming the brain alone is the seat of consciousness) with sufficient fidelity. I don't mean to blithely dismiss the difficulties of these sub-problems; it's just that humanity is already solving them. Logistics, data storage, and security are all billion-dollar industries.

The Creation Problem

Once the Collection Problem is solved, you have another problem: how to take that data and do something useful with it. There's a pretty big gap between an architect drawing up a plan for a building and actually constructing that building. But once this problem is resolved, it's very likely that its solution will also make life itself much, much more convenient. Any method that can physically create something as complex as a human brain at will can almost certainly be adapted to create other things: food, clean water, shelter, etc. Those likely benefits are orthogonal to the goal of preventing death, of course, but they are a nice cherry on top.

One of the potential solutions to the Creation Problem involves simulations. I won't go into much detail here, because whether life in a simulation is as valid or fulfilling as life in the "real world" is a significant discussion unto itself. For the purposes of this thought exercise, though, it is fairly irrelevant. If you consider a simulation to be an acceptable solution, great. If you don't, that's fine too; it just means the Creation Problem will take longer to solve. Either way, it's likely you're going to be in cold storage for quite some time before the problem does get solved.


What about the rest of us?

All this theory is fine and good. But what if you get hit by a bus tomorrow and don't live to see the resolution of the Collection Problem? What about all of us who have lost loved ones in the past? This is where this exercise dovetails with traditional ethics. Given this system, it's easy enough to argue that we have a responsibility to try to ensure that as many human beings as possible survive until the Collection Problem is resolved. 

However, for those of us unlucky enough to die before then, there's one final get-out-of-jail-free card: The Recreation Problem. This problem may be thoroughly intractable, and to be sure, it is probably the most difficult problem of them all. In extremely simple (and emotionally charged) terms: "How can we bring back the dead?" Or, if you prefer to dress it up in the language of science: "How can we recreate a system that existed in the past with Y% fidelity, using only knowledge of the present system?"

This may be so improbable as to be effectively impossible. But it's not actually impossible. There's no need for perfect physical fidelity (which is all but proven to be impossible); we only need to achieve Y% fidelity, whatever Y% may be. Conceptually, we do this all the time: a ballistics expert can reconstruct the trajectory of a bullet with no prior knowledge of that trajectory, an invertible ("two-way") function can be iterated in reverse for as many steps as you have computing power, and so on.
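The reverse-iteration point can be made concrete with a toy model. This is a minimal sketch under stated assumptions, not anything from the post: it imagines a system whose single time-step is an invertible affine map mod 2^32 (the map and its constants are entirely hypothetical), and recovers a past state by iterating the exact inverse.

```python
# Toy illustration: if each state transition is invertible, any past
# state can be recovered by iterating the inverse, limited only by
# available compute. The affine map below is a hypothetical stand-in.

M = 2**32
A, B = 5, 3              # A is odd, hence invertible mod 2**32
A_INV = pow(A, -1, M)    # modular inverse of A (Python 3.8+)

def step(x: int) -> int:
    """One forward time-step of the toy system."""
    return (A * x + B) % M

def unstep(y: int) -> int:
    """Exact inverse of step(): recovers the previous state."""
    return ((y - B) * A_INV) % M

def run(x: int, n: int, f) -> int:
    """Apply f to x, n times."""
    for _ in range(n):
        x = f(x)
    return x

seed = 123456789
later = run(seed, 1000, step)         # evolve 1000 steps forward
recovered = run(later, 1000, unstep)  # iterate the inverse 1000 steps
assert recovered == seed
```

The catch, of course, is that real physical systems are not known to expose such a clean, lossless update rule, which is exactly why the Recreation Problem is the hardest of the three.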

A complex system can, in principle, be recreated. Is there an upper limit to how far in the past a system can be before it is infeasible to recreate it? Quite possibly. Let's say that upper limit is Z seconds. (Incidentally, the Collection Problem is actually just a special case of the Recreation Problem where Z is approximately equal to zero.) The fact that Z is unknown means you can't simply abandon all your ethical pursuits and say, "It doesn't matter, we're all going to be resurrected anyway!" Z may in fact turn out to be approximately zero.

The importance of others.

It is most likely that you, individually, will not be able to solve all three problems on your own, which means that if you truly desire to live forever, you have to rely on other people to a certain extent. But this does give one a certain amount of peace when contemplating the horror of death: if every human being commits to solving these three problems, it does not matter if you, personally, fail. All of humanity would have to fail.

Whether that thought actually gives any comfort depends largely on your estimation of humanity and the difficulty of these problems. But regardless of whether you derive any comfort from that, it doesn't diminish the importance of the contributions of others. 

The moral of this story...

As a rationalist, you should take a few things away from this.

  1. You should try as hard as possible to stay alive until the Collection Problem is resolved. 
  2. You should try as hard as possible to make sure everyone else stays alive until that point as well. 
  3. When feasible, you should try to bring other people around to the ways of rationalism. 
  4. Death is a tragedy, but it is conceptually reversible.
  5. Don't despair if you don't make any progress towards resolving these problems in your lifetime.


Post Script:

Note: this was added on as an edit due to feedback in the comments. 

The original intent of this article was to explain that there's a rational, scientific way to approach the logistical problem of "living forever". 


  • I removed the first introductory paragraph. It was inconsistent in both tone and scope with the rest of the post. 
  • I've changed the title and removed references to "immortality" to try to eliminate some of the "science fiction" vibe.
  • I've tried to update the language so as not to imply that it is universally agreed upon that backing up a brain is a valid method of generating consciousness.