There are a lot of steps that all need to go correctly for cryonics to work. People who have gone through the potential problems and assigned probabilities have come up with odds of success between 1:4 and 1:435. About a year ago I went through and collected estimates, finding other people's and making my own, and I've been maintaining these in a googledoc.

Yesterday, on the bus back from the NYC mega-meetup with a group of people from the Cambridge LessWrong meetup, I got more people to give estimates for these probabilities. We started with my list of potential problems, and I explained the model and how independence works in it. [1] For each question everyone decided on their own answer and then we went around and shared our answers (to reduce anchoring). Because some people will still adjust their answers based on what others say, I tried to randomize the order in which I asked people for their estimates. My notes are here. [2]

The questions were:

  • You die suddenly or in a circumstance where you would not be able to be frozen in time.
  • You die of something where the brain is degraded at death.
  • You die in a hospital that refuses access to you by the cryonics people.
  • After death your relatives reject your wishes and don't let the cryonics people freeze you.
  • Some law is passed that prohibits cryonics before you die.
  • The cryonics people make a mistake in freezing you.
  • Not all of what makes you you is encoded in the physical state of the brain (or whatever you would have preserved).
  • The current cryonics process is insufficient to preserve everything (even when perfectly executed).
  • All people die (existential risks).
  • Society falls apart (global catastrophic non-existential risks).
  • Some time after you die cryonics is outlawed.
  • All cryonics companies go out of business.
  • The cryonics company you chose goes out of business.
  • Your cryonics company screws something up and you are defrosted.
  • It is impossible to extract all the information preserved in the frozen brain.
  • The technology is never developed to extract the information.
  • No one is interested in your brain's information.
  • It is too expensive to extract your brain's information.
  • Reviving people in simulation is impossible.
  • The technology is never developed to run people in simulation.
  • Running people in simulation is outlawed.
  • No one is interested in running you in simulation.
  • It is too expensive to run you in simulation.
  • Other.

To see people's detailed responses have a look at the googledoc, but bottom line numbers were:

person   chance of failure   odds of success
Kelly    35%                 1:2
Jim      80%                 1:5
Mick     89%                 1:9
Julia    96%                 1:23
Ben      98%                 1:44
Jeff     100%                1:1500

(These are all rounded, but one of the two should have enough resolution for each person.)
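For readers reproducing the table: the 1:N column appears to be shorthand for roughly a 1-in-N chance of success, which follows from the failure column like this (a sketch; the participants' unrounded numbers differ slightly, which is why some rows don't match exactly):

```python
def one_in_n_success(p_fail):
    """Convert a chance of failure into an approximate 1-in-N chance of success."""
    return round(1 / (1 - p_fail))

print(one_in_n_success(0.80))  # Jim: 5, i.e. 1:5
print(one_in_n_success(0.89))  # Mick: 9, i.e. 1:9
```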

The most significant way my estimate differs from others turned out to be for "the current cryonics process is insufficient to preserve everything". On that question alone we have:

person   chance of failure
Kelly    0%
Jim      35%
Mick     15%
Julia    60%
Ben      33%
Jeff     95%


My estimate for this used to be more positive, but it was significantly brought down by reading this lesswrong comment:

Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and interactions with ER and mitochondria there). I also work on membranes and the effect on lipid composition in the opposing leaflets for all the organelles involved.

Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can't simply replace unfolded proteins, since their relative position and concentration (and modification, and current status in several different signalling pathways) determines what happens to the signals that go through that synapse; you would have to replace them manually, which is a) impossible to do without destroying surrounding membrane, and b) would take thousands of years at best, even if you assume maximally efficient robots doing it (during which period molecular drift would undo the previous work).

Etc, etc. I can't even begin to cover complications I see as soon as I look at what's happening here. I'm all for life extension, I just don't think cryonics is a viable way to accomplish it.

In the responses to their comment they go into more detail.

Should I be giving this information this much weight? "many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted" seems critical.

Other questions on which I was substantially more pessimistic than others were "all cryonics companies go out of business", "the technology is never developed to extract the information", "no one is interested in your brain's information", and "it is too expensive to extract your brain's information".

I also posted this on my blog

[1] Specifically, each question is asking you "the chance that X happens and this keeps you from being revived, assuming that all of the previous steps succeeded". So if both A and B would keep you from being successfully revived, and I ask them in that order, but you think they're basically the same question, then basically only A gets a probability while B gets 0 or close to it (because B is technically "B given not-A").


[2] For some reason I was writing ".000000001" when people said "impossible". For the purposes of this model '0' is fine, and that's what I put on the googledoc.
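The combination rule from footnote [1] is simple to sketch in code; here's a minimal version (the example step probabilities are made up for illustration, not anyone's actual estimates):

```python
def chance_of_overall_failure(conditional_failure_probs):
    """Each entry is P(this step fails | all previous steps succeeded),
    so the per-step survival probabilities simply multiply."""
    p_survive = 1.0
    for p in conditional_failure_probs:
        p_survive *= 1 - p
    return 1 - p_survive

# Three hypothetical steps with conditional failure chances of 10%, 20%, 30%:
# overall failure = 1 - 0.9 * 0.8 * 0.7, about 49.6%.
print(chance_of_overall_failure([0.1, 0.2, 0.3]))
```

Note how a single step assigned certain failure zeroes out the overall odds of success regardless of every other step, which is why '0' versus '.000000001' barely matters for this model.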

89 comments

A fault tree showing all the reasons why a car might not start was shown to several groups of experienced mechanics. The tree had seven major branches--insufficient battery charge, defective starting system, defective ignition system, defective fuel system, other engine problems, mischievous acts or vandalism, and all other problems--and a number of subcategories under each branch. One group was shown the full tree and asked to imagine 100 cases in which a car won't start. Members of this group were then asked to estimate how many of the 100 cases were attributable to each of the seven major branches of the tree. A second group of mechanics was shown only an incomplete version of the tree: three major branches were omitted in order to test how sensitive the test subjects were to what was left out. If the mechanics' judgment had been fully sensitive to the missing information, then the number of cases of failure that would normally be attributed to the omitted branches should have been added to the "Other Problems" category. In practice, however, the "Other Problems" category was increased only half as much as it should have been. This indicated that the mecha

... (read more)

It would have been interesting if they had done a third group and added spurious categories (probably wouldn't work with experienced mechanics) and/or broke down legitimate categories into many more sub categories than necessary. What would that have done to the "other problems" category?

Eliezer Yudkowsky:

It would be really nice if someone had bothered to actually check statistics on how many car failures were actually due to each of the possible causes.

This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don't know what level of granularity would've led mechanics to be accurate, and furthermore, the main way to produce accuracy would've been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.

If I was trying to use this effect for a Grey Arts explanation (conveying a better image of what I honestly believe to be reality, without any false statements or omissions, but using explanatory techniques that a Dark Arts practitioner could manipulate to make people believe something else instead, e.g., writing a story as a way of conveying an idea) I would try to diagram cryonics possibilities into a tree where I believed the branches of a given level and the leaf nodes all had roughly equal probability, and just showing the tree would recruit the equal-leaf-size effect to cause the audience to concretely represent this probability estimate.

This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don't know what level of granularity would've led mechanics to be accurate, and furthermore, the main way to produce accuracy would've been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.

My suspicion is that conjunctive and disjunctive breakdowns exhibit different behavior which can be manipulated to increase or decrease a naive probability estimate:

  • in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can manipulate a naive estimate.

    To some extent this is appropriate since people are usually overconfident, but I suspect at some granularity, the conjunctions start getting unfairly negative: imagine if people were unwilling to give any step >99% odds, then you can break down a process into a hundred fine steps and their elic

... (read more)
Eliezer Yudkowsky:
Except that people intuitively average these sorts of links, so hostile manipulation involves negating the conjunction and then turning it into a disjunction - please, dear reader, assign a probability to not-A, and not-B, and not-C - oh, look, the probability of A and B and C seems quite low now! If you were describing an actual conjunction, a Dark Arts practitioner would manipulate it in favor of cryonics by zooming in and dwelling on links of great strength. To hostilely drive down the intuitive probability of a conjunction, you have to break it down into lots and lots of possible failure modes - which is of course the strategy practiced by people who prefer to drive down the probability of cryonics. (Motivation is shown by their failure to cover any disjunctive success modes.)
This is just the complement of the previous probability you computed: 1-0.99^100, which is approximately 0.634. Rather than compute this directly, you might observe that (1-1/n)^n converges very quickly to 1/e, so the complement approaches 1-1/e, or approximately 0.632.
Yeah, nsheppard pointed that out to me after I wrote the fold. Oh well! I'll know better next time.
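The arithmetic in the exchange above is easy to check numerically (just a verification sketch):

```python
import math

# A hundred conjunctive steps, each capped at 99% odds of success:
p_fail = 1 - 0.99 ** 100
print(p_fail)  # about 0.634

# The shortcut: (1 - 1/n)**n converges quickly to 1/e, so for large n
# the overall failure probability approaches 1 - 1/e, about 0.632.
print(1 - 1 / math.e)
```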
Can you clarify whether the following is correct? "The study shows that domain experts add less weight than non-experts to 'other' when important categories are removed."
Fortunately for you, I have already jailbroken the PDF:

Science has moved away from considering memories to be simply long-term structural changes in the brain to seeing memories as the products of "continuous enzymatic activity" (Sacktor, 2007). Enzyme activity ceases after death, which could lead to memory destruction.

For instance, in a slightly unnerving study, Sacktor and colleagues taught mice to avoid the taste of saccharin before injecting them with a PKMzeta-blocking drug called ZIP into the insular cortex. PKM, an enzyme, has been associated with increasing receptors between synapses that fire together during memory recollection. Within hours, the mice forgot that saccharin made them nauseous and began guzzling it again. It seems blocking the activity of PKM destroys memories. Since PKM activity (like all enzyme activity) also happens to be blocked following death, a possible extension of this research is that the brain automatically "forgets" everything after death, so a simulation of your brain after death would not be very similar to you.

Accessing long term memory appears to be a reconstructive process, which additionally results in accessed memories becoming fragile again; this is what I believe is occurring here. The learned aversion is reconstructed and is then susceptible to damage much more than other non-recently accessed LTM. Consider that the drug didn't destroy ALL of the mice's (fear?) memories, only that which was most recently accessed.

So no worries to cryonics!

Long-term structural maintenance requires continuous enzymatic activity. For example, the average AMPA receptor lasts only around one day. The actin cytoskeleton, made up of molecules which largely specify the structure of synapses, also requires continuous remodeling. If a structure is visibly the same after vitrification (not trivial), that means the molecules specifying it are likely to not have changed much.

I think Robin's reply to that comment (which he left there last week) got to the heart of the issue:

No doubt you can identify particular local info that is causally effective in changing local states, and that is lost or destroyed in cryonics. The key question is the redundancy of this info with other info elsewhere. If there is lots of redundancy, then we only need one place where it is held to be preserved. Your comments here have not spoken to this key issue.

It may be that what the brain uses to store some vital information is utterly destroyed by cryonics, but there is some other feature of the arrangement of atoms in the brain, possibly some side effect that has no function in the living brain, that is sufficiently correlated with the information we care about that we can reverse-engineer what we need from it. This is the "hard drive" argument for cryonics (I got it from the Sequences, but I would suspect it didn't originate there): it's not that hard (I think, though I do not know much about this topic) to erase data from a hard drive so that the normal functionality of the hard drive can't bring it back, but it's rather difficult to erase it in a way that someo... (read more)

Note useful discussion today by wedrifid and Eliezer, arguing that kalla724's comments clearly suggest that they haven't. I got the same vibe, but my knowledge of the relevant science is so spotty that I didn't want to make a confident prediction myself.
This seems like a good place to inject a related point. One of the failure modes listed is "Reviving people in simulation is impossible", the contrary of which is: reviving people in simulation is possible. But there is also this possibility to consider: reviving people in the flesh is possible. So it would seem that we need to branch here, and then estimate the combined probability after assessing each branch. Maybe P(possible in-flesh | impossible in-simulation) is very small, and this branch can be safely ignored. I haven't looked for other branching points, but I don't feel assured that there aren't more.
Branching points are important and could definitely make the whole thing more probable. So if you or anyone else sees others, please point them out. This particular branching point is one I've thought about (cell D26) and don't think is likely enough to even show up in the final odds. The chemicals they use as cryoprotectants are toxic at the concentrations they need to be using, and while that's fine if you're going to be uploaded it's potentially a big problem if you're going to be revived. Future medicine would need to be really good to keep these cells from dying immediately on rewarming. Expense issues are also mostly worse for in-flesh revival. (One branching that would help would be if plastination became possible, because it removes the problem of needing cryonics organizations to stay existent, functional, and legal.)
Hmm, even plastination could have legal problems where I live. I'm not sure we can do anything other than burning or burying the corpse. Now if one is willing to break the law, this is only a cubic foot to keep hidden around. I would be willing to face the risk if it meant my family.
The advantage of plastination is that once you're preserved you stay that way. Laws keeping you from being preserved hit plastination and cryonics equally.
Low temperature permits a wider range of molecular machinery to function. For example, you could have a burrowing micro-scale machine (it doesn't need to be nano-scale, although components obviously could be) which slowly removes extracellular cryoprotectant and water, replacing it with a nontoxic cryoprotectant. The replacement matter could be laced with other helpful drugs like ischemia blockers and cell membrane fortifiers, which would activate upon warming.

There's a possibly-important probability missing from your analysis.

For it to be worth paying for cryonics, it has to (1) work and (2) not be redundant. That is: revival and repair has to become feasible and not too expensive before your cryonics company goes bust, disappears in a collapse of civilization, etc. -- but if that happens within your lifetime then you needn't have bothered with cryonics in the first place.

So the success condition is: huge technical advances, quite soon, but not too soon.

Whether this matters depends on (a) whether it's likely that if revival and repair become viable at all they'll do so in the next few decades, and (b) whether, in that scenario, the outcome is so glorious that you simply won't care that you poured a pile of money into cryonics that you could have spent on books, or sex&drugs&rock&roll, or whatever.

The cost of life insurance scales with your risk of death in the covered period: if cryonics is rendered redundant then you can stop paying for the life insurance (and any cryonics membership dues) thereafter.

Redundancy would be a significant worry if, counterfactually, you had to pay a non-refundable lump sum in advance.

Two other potential forms of redundancy:

  • Future civilizations have the power and motivation to restore even people who were simply buried.
  • Everything you ever coherently wanted to get out of cryopreservation can be achieved by a cheaper method, e.g. having children.

I don't think the first point has significant probability, but I'll throw it out there in case it inspires someone to find more possibilities I've overlooked.
If the alternative is between saving for retirement and cryonics, then for a lot of the probability mass where cryonics is redundant, nanotech or time travel has made us extremely rich, perhaps reducing the cost to us of having not saved (although interest rates might have been high; still, you can check for this along the way). For much of the probability mass where cryonics doesn't work, our species has gone extinct (and not in a good way), eliminating the value of money and the harm of not having saved as much as you would have had you not done cryonics. I'm an Alcor member.
In my case (and I think for a significant number of others on lw) the alternative is donating more to effective charities. When your money might be going to helping people now or reducing existential risk we have a real tradeoff.
So your savings for retirement are < the cost of cryonics? I doubt this is true for many LWers >30 years old.
I agree that the first part of that may well be true -- it was (b) in my last paragraph -- but I'm not so convinced by the second bit. My own evaluation is that most of the probability mass of "cryonics fails for me" involves things going wrong after the end of my life, and while I would indeed very much prefer our species not to go extinct soon after my death, knowing that it will wouldn't stop me caring how comfortable my retirement is, or even caring how much money I'm able to leave to others when I die.

Actually, I'm skeptical of this sort of argument whichever way it goes; my (b) was more a concession to those who think differently than anything else. My preference for the next (say) 20-50 years of my life to be more comfortable isn't materially altered if what follows is going to be infinite blissful heaven, or if it's going to be infinite tormented hell. (Whether the heaven/hell in question are technological or religious or whatever else.) So if cryonics is unnecessary because we all win anyway, I would rather not spend any money preparing for it.
Assume that one of the following is true:

1) Cryonics will help you.
2) Cryonics will not help you. Money you save today will not make you happier in the future.
3) Cryonics will not help you. Money you save today will make you happier in the future.

Keeping the likelihood of (1) constant while raising the likelihood of (2) makes cryonics a better bet.
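The three-scenario argument can be checked with a toy expected-value calculation (the utility numbers here are invented purely for illustration):

```python
def relative_value_of_cryonics(p1, p2, u_revival=100.0, u_savings=10.0):
    """Expected value of paying for cryonics relative to saving the money.
    p1: scenario 1 (cryonics helps); p2: scenario 2 (it doesn't, and the
    savings wouldn't have made you happier anyway); the remainder is
    scenario 3, where the money you spent would have made you happier."""
    p3 = 1 - p1 - p2
    return p1 * u_revival - p3 * u_savings

# Holding p1 fixed while shifting probability from scenario 3 to scenario 2
# makes cryonics a better bet:
print(relative_value_of_cryonics(p1=0.1, p2=0.3))
print(relative_value_of_cryonics(p1=0.1, p2=0.6))  # higher than the line above
```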

To me this just looks like a bias-manipulating "unpacking" trick - as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up. I could equally make cryonics success sound almost certain by lumping all the failure categories together into one or two big things to be probability-assigned, and unpacking all the disjunctive paths to success into finer and finer subcategories. Which I don't do, because I don't lie.

Also, yon neuroscientist does not understand the information-theoretic criterion of death.

There's another effect of "unpacking", which is that it gets us around the conjunction/planning fallacy. Minimally, I would think that unpacking both the paths to failure and the paths to success is better than unpacking neither.

Eliezer Yudkowsky:
I wonder if that would actually work, or if the finer granularity basically just trashes the ability of your brain to estimate probabilities.
I think it's also good to mention that this kind of questionnaire does not account for possible future advancements which are not included due to lack of availability. The same applies for further negative changes in the future, but looking at that list, for example, items like the following are completely missing:

  • Legislation for improving the safety and conditions of cryopreserved people is passed.
  • Neuroscientists develop new general techniques for restoring function in patients with brain damage.
  • A breakthrough in nanotechnology allows better analysis and faster repair of damaged neurons.
  • Supercomputers can be used to retrace the original condition of a modified or damaged brain.
  • Supercomputers (with the help of FAI?) can be used to reconstruct missing data from redundancy (like mentioned above in Benja's comment).

That is to say, it's one thing to 'unpack' a proposition and another to do it accurately; at least I would think a questionnaire with uncertain positive and negative future events would seem less biased. I think it's also worthwhile to consider the possibility that this unpacking business is a sort of inverse of the conjunction fallacy - it's not exactly the same thing, but I think it's a very closely related topic.

Also, yon neuroscientist does not understand the information-theoretic criterion of death.

They appear to; they are questioning whether current cryonics practice preserves said information at all - they are saying it will destroy it.

No they're not, they're describing functional damage and saying why it would be hard to repair in situ, not talking about what you can and can't information-theoretically infer about the original brain from the post-vitrification position of molecules. In other words, the argument does not have the form of, "These two cognitively distinct states will map to molecularly indistinguishable end states". I'm not saying you have to use that exact phrasing but it's what the correct version of the argument is necessarily about, since (modus tollens) anything which defeats that conclusion in real life causes cryonics to work in real life.

Are you referring to the neuroscientist's discussion linked in the OP? This comment seems quite clear regarding the information-theoretic consequences:

Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. (...) (information simply isn't there to be read, regardless of how advanced the reader may be).

In our lingo: the state transformation is a non-injective function (=loss of information).

However, the import of the distance between a "best guess" facsimile and the original is hard to evaluate. Would it be on the order of the difference between before and after a night's sleep? Before and after a TBI injury (yay pleonasm)?

Undifferentiable from your current self in a hypothetical Turing test variant, with you squaring off against such a carbon copy?

Speculatively, I'd rather think all that damage doesn't play that big of a role. Disrupted membranes should still yield the location of the synapses with high spatial fidelity, and given the way we interfere with neurotransmitters constantly, the exact concentration in each synapse does not seem identity-constituting.

Otherwise, we'd incur information-theoretic death of our previous selves each time we take e.g. a neurotransmitter manipulating drug such as an SSRI. Which we do in a way, just not in a relevant way.
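The "non-injective function" framing a few paragraphs up can be made concrete with a toy example (the state names are purely illustrative):

```python
# Two cognitively distinct starting states mapping to the same end state:
preservation = {
    "synapse_in_conformation_X": "denatured_blob",
    "synapse_in_conformation_Y": "denatured_blob",
}

# Inverting the map is then impossible in principle: given only the end
# state, both starting states remain equally consistent candidates.
candidates = [s for s, end in preservation.items() if end == "denatured_blob"]
print(candidates)
```

This is the many-to-one mapping at issue in the rest of the thread: information loss means the inverse image of the preserved state contains more than one original.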

Swimmer963 (Miranda Dixon-Luinenburg):
I thought you meant "neoplasm", then I actually Googled pleonasm and there's a good chance you mean that. Which is it???
Heh, pleonasm, since the "I" in the TBI acronym already refers to "injury", thus rendering the second injury as an overkill. Let's get side-tracked on that, typical LW style :) Pleonasm, neoplasm ... potato, topota.
kalla724 quotes from the thread: These appear to be saying just what I thought they were saying - current cryonics practice destroys the information - and, given the above, I don't see sufficient evidence to assume your reading.

These appear to be saying just what I thought they were saying - current cryonics practice destroys the information - and, given the above, I don't see sufficient evidence to assume your reading.

At best you can get the impression that kalla is in principle aware of the information-theoretic criterion of death but in practice just conflating it with functional damage and knowledge of how hard it would be to repair in situ. What I observe is a domain expert (predictably, and typically) overestimating the relevance of their expertise to a situation outside what they are actually trained and proficient in. Most salient points:

Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.

Irretrievably? I'd be surprised if that word means what he thinks it means. In particular, for him to have a correct understanding of the term would require abandoning notions of what his field currently considers possible and doing advanced study in probability theory and physics. (To be credible in this claim he'd essentially have to demonstrate that he isn't thi... (read more)

What wedrifid said. Everything the guy says is about functional damage. Talking about the impossibility of repairing proteins in-place even more says that this is somebody thinking about functional damage. Throwing in talk about "information destruction" but not saying anything about many-to-one mappings just tells me that this is somebody who confuses retrievable function with distinguishable states. The person very clearly did not get what the point was, and this being the case, I see no reason to try and read his judgments as being judgments about the point.

I'd like to be absolutely clear on the claim that's being made here. If I overstate the claim, understate the claim or even state it in a manner that seems unduly silly, please do correct me - my aim here is to ascertain precisely what the claim being made is. As I understand it, you are claiming that:

  • current cryonics practice will preserve sufficient information that a future superintelligence (that we do not presently understand enough about to construct or predict the actions of) may, using unspecified future technologies, be able to use the information in the brain preserved using current cryonics practice to reconstruct the personality that was in said brain at the time of its preservation to a sufficient fidelity that it would count to the personality signing up for such preservation as revival;
  • that having no idea what technologies the superintelligence might use to perform this (presently apparently physically impossible) task and having almost no idea about almost any characteristic of this future superintelligence, beyond a list of things we know we don't want it to do, doesn't count as an objection of substance;
  • and that kalla724 being unable to conclusively disprove this is enough reason to dismiss kalla724's objections in toto.

Have I left anything out, overstated or understated anything here? If the above is wildly off base, could you please summarise the actual claim in your own words?

Wildly off base. The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain; if this is true, we can expect sufficiently advanced technology generally, and systems described in Drexler's highly specific Nanosystems book particularly, to be sufficient albeit not necessary (brain scanning might work too). There's also a lot of clueless objections along lines of "But they won't just spring back to life when you warm them up" which don't bear on the key question one way or another. Real debate on this subject is from people who understand the concept of information loss, offering neurological scenarios in which information loss might occur; and real cryonicists try to develop still-better suspension technology in order to avert the remaining probability mass of such scenarios. However, for information loss to actually occur, given current vitrification technology which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative. It's not a question of "fail to disprove", i... (read more)

This is, of course, not anywhere in anything that kalla724 or I said. Thank you, this is a solid claim that current cryonics practice preserves sufficient information (even if we presently have literally no idea how to get it out).

This is, of course, not anywhere in anything that kalla724 or I said.

If you complain about how it would be hard to in-situ repair denatured proteins - instead of talking about how two dissimilar starting synapses would be mapped to the same post-vitrification synapse because after denaturing it's physically impossible to tell if the starting protein was in conformation X or conformation Y - then you're complaining about the difficulty of repairing functional damage, i.e., the brain won't work after you switch it back on, which is completely missing the point.

If neuroscience says conformation X vs. conformation Y makes a large difference to long-term spiking input/output, which current neuroscience holds to be the primary bearer of long-term brain information, and you can show that denaturing maps X and Y to identical end proteins, then the ball has legitimately been hit back into the court of cryonics because although it's entirely possible that the same information redundantly appears elsewhere and the brain as a whole still identifies as single person and their personality and memories, telling us that cryonics worked would now tell us a new fact of neuroscience we didn't prev... (read more)

Maybe I'm missing something, but even with cremation, on a molecular level probably no more than one person gets mapped to one specific pile of ash, because it would be a huge coincidence if cremating two different bodies ended up creating two identical piles of ash.

You're missing something. Any one person gets mapped to a very wide spread of possible piles of ash. These spreads overlap a lot between different people. Any one pile of ash could potentially have been generated by an exponentially vast space of persons.
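The mapping argument here can be illustrated with a toy model (the states and functions below are entirely invented, just to show the injectivity point): a preservation process loses information exactly when two distinct starting states can end up as the same preserved state.

```python
# Toy illustration of the injectivity argument above (invented example).
# A preservation process loses information exactly when two distinct
# starting states map to the same preserved state.

def vitrify_toy(state):
    # Hypothetical "good" process: keeps enough detail to stay injective.
    return tuple(state)

def cremate_toy(state):
    # Hypothetical lossy process: collapses detail (here, just a sum),
    # so many distinct states share one output.
    return sum(state)

persons = [(1, 2, 3), (3, 2, 1), (2, 2, 2)]

vitrified = {vitrify_toy(p) for p in persons}
cremated = {cremate_toy(p) for p in persons}

print(len(vitrified))  # 3: each person keeps a distinct preserved state
print(len(cremated))   # 1: all three collapse to the same "pile of ash"
```

In this caricature, the ash produced by two different bodies may of course differ in incidental details, but the point of the spread-overlap argument is that those differences no longer determine which person produced them.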

I understood a pretty important element in the cryonics argument is assuming that you stick to things that are feasible given our current understanding of physics, though not necessarily given our current level of technology. Conflating technology and physics here will turn the arguments into hash, so it's kinda important to keep them separate. It's generally assumed that the future superintelligences will obey laws of physics that will be pretty much what we understand them to be now, although they may apply them to invent technologies we have no idea about. "Things will have to continue working with the same laws of physics they're working with now" seems different to me from "any random magical stuff can happen because Singularity", which you seem to be going for here. I'm not sure if "just don't break the laws of physics" is strong enough though. Few people think it very feasible that there would be any way to reconstruct a human body locked in a box and burnt to ash, but go abstract enough with the physics and it's all just a bunch of particles running on neat and reversible trajectories, and maybe some sort of Laplace's demon contraption could track enough of them and trace them back far enough to get the human persona information back. (Or does this run into Heisenberg uncertainty?) The "possible physically but not technologically" seems like a rather tricky type of reasoning. Imagine trying to explain that you should be able to build a nuclear reactor or a moon rocket to someone who has never heard of physics, in 1920 when you don't have the tech to do either yet. But it seems like the key to this argument, and I rarely see people engaging with it. The counterarguments seem to be mostly about either the technology not being there or philosophical arguments about the continuity of the self.
H. G. Wells did it: Also, people can sometimes do it themselves: Relevant quote:
Note that what I posit as the apparent argument makes no contentions about continuity of self - let's assume minds can in fact be copied around like MP3s. Yes, I'm annoyed when people pull out a hypothetical magic-equivalent superintelligence that will make everything all better as an argument so solid that the burden of proof is to disprove it: "we don't know what such a being could do (or, indeed, anything else about it), therefore you must prove that such a hypothetical being could not do (whatever magic-equivalent is needed at that point)." They don't know how to get there from here, but they're trying really hard, therefore this hypothetical being should be assumed?
I just said we're assuming we know it can't break the laws of physics. We can tell that if you blow up someone with antimatter, putting them back together would have to involve breaking the speed of light, unless you start out controlling the entire surrounding light cone before the person was blown up. If the person was vitrified, there isn't a similarly obvious violation of the laws of physics involved in putting them back together. So it seems like cryonics after death gives you a better chance at being eventually reanimated than antimatter burial after death. Regular burial definitely leans towards the antimatter option: the causal stuff that needs to be traced back to put you together gets spread too wide. Yet people still argue as if cryonics should be treated just the same as regular burial as long as there's no demonstrable technology that shows it working for humans. I'm not sure why it's a dealbreaker to assume that the technology side will advance into something we can't fully anticipate. Today's technology is probably extremely weird from the viewpoint of someone from 1900, but barring the quantum mechanical bits, it's still based on laws of physics a physicist from 1900 would be quite familiar with.
The GPS depends on relativity. And "barring the quantum mechanical bits" is a hell of an overwhelming exception. (But make that "a physicist from 1930" and I will agree.)
Heavy functional damage still rules out some possible revival methods, so reduces probability of success.

"Warm 'em up and see if they spring back to life" was a possible revival method that cryonicists already didn't believe in, so pointing out its impossibility should not affect probability estimates relative to what cryonicists have already taken into account.


as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up

The idea that when people disagree over complex topics they should break their disagreement down is one I've learned in part from Robin Hanson, and in fact he applies it to cryonics.

While Robin has fewer categories, if you look at the detailed probabilities people gave, we could throw out most of their answers without changing their final numbers; people were good about saying "that seems very unlikely" and giving near-zero probabilities. Most of the effect on the total comes from a few questions where people were saying "oh, that seems potentially serious". If I do this more I'll fold many of the less likely questions into more likely ones (mostly so I get a shorter survey), but I don't think that will change the outcome much.
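The underlying model (success requires independently avoiding every failure mode, so the per-question survival terms multiply) can be sketched like this; the probabilities below are invented placeholders, not anyone's actual survey answers:

```python
# Sketch of the multiplicative model: success requires independently
# avoiding every failure mode, so the per-question survival terms multiply.
# All probabilities here are invented placeholders, not survey answers.

serious = {            # failure modes rated "potentially serious"
    "process insufficient to preserve everything": 0.30,
    "information cannot be extracted": 0.20,
    "brain degraded at death": 0.10,
}
unlikely = {           # failure modes rated "very unlikely"
    "cryonics outlawed before you die": 0.01,
    "relatives reject your wishes": 0.02,
    "your company defrosts you by mistake": 0.01,
}

def p_success(failure_probs):
    p = 1.0
    for q in failure_probs.values():
        p *= 1.0 - q
    return p

full = p_success({**serious, **unlikely})
without_unlikely = p_success(serious)
print(f"all questions:       {full:.3f}")
print(f"serious-only subset: {without_unlikely:.3f}")
```

With these placeholder numbers, dropping the near-zero questions moves the product by only a couple of percentage points, which is the sense in which folding the unlikely questions into larger ones shouldn't change the outcome much.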

I would expect unpacking to work for two reasons: to help avoid the planning fallacy and to let us see (and focus on) the individual steps people most disagree on.

unpacking all the disjunctive paths to success into finer and finer subcategories

As far as I can tell there's really only one... (read more)

I raised an alternative path to success when we discussed this Sunday, at the end when you asked for the probability of "other failure" and I argued that it should go both ways. Specifically, I suggested that we could be in a multiverse such that being cryopreserved, even if poorly, would increase the probability of other universes copying you into them. I don't remember the probability I gave this at the time, but I believe it was on the order of 10^-2 - small, but still bigger than your bottom-line probability of 1/1500 (which I disagree with) for cryonics working the obvious way. Some other low-probability paths-to-win that you neglected:

* My cryopreservation subscription fees are the marginal research dollars that prevent me from dying in the first place, via a cryonics-related discovery with non-cryonics implications.
* I am unsuccessfully preserved, but my helping cryonics reach scale saves others; a future AI keeps me alive longer because having signed up for cryonics signaled that I value my life more.
* While my cryopreserved brain is not adequate to resurrect me by itself, it will be combined with electronic records of my life and others' memories of me to build an approximation of me.
There are also some less-traditional paths-to-lose:

* Your cryopreservation subscription fees prevent you from buying something else that ends up saving your life (or someone else's).
* You would never die anyway, so your cryopreservation fees only cost pre-singularity utilons from you (or others you would have given the money to).
* Simulation is possible, but it is for some reason much "thinner" than reality; that is, a given simulation, even as it runs on a computer existing in a quantum MWI, follows only a very limited number of quantum branches, so it has a tiny impact on the measure of the set of future happy versions of you (smaller even than the plain old non-technological-quantum-immortality versions who simply didn't happen to die).
* You are resurrected by a future UFAI in a hell-world. For instance, in order to get one working version of you, the UFAI must create many almost-versions which are painfully insane; and its ethics say that's OK. And it does this to all corpsicles it finds, but not to any other dead people.

I have strong opinions on the likelihood of these (I'd put one at p>99% and another at p<1%), but in any case they're worth mentioning.
Hmm, regarding quantum immortality, I did think about it. Taken to its extreme, I could perform quantum suicide while tying the result of the quantum draw to the lottery. Then it occurred to me that the vast majority of worlds, in which I did not win the lottery, would contain one more sad mother. Such a situation scores far lower in my utility function than the status quo does. I feel I should treat quantum suicide by cryostination the same way. The only problem is that the status quo bias works against me this time.
Sorry: I edited the comment you were responding to to clarify my intended meaning, and now perhaps the (unintended?) idea you were responding to is no longer there.
Whoops; this totally slipped my mind. Thanks for including them here.
Yes, that was the claim.
I'd be interested to see someone do that. There are a lot of variants on this exercise that could be studies in bias. The five of us doing this estimate on the bus, for example, realized that our answers came out clustered while Jeff's was far away because we had done it together. For each individual question we were supposed to think of our own answer before anyone spoke, to avoid anchoring. But we were anchored by the answers the others had given to all the previous questions.
How do you know the raised estimate with this "trick" is worse than the estimate without? I could just as easily say, "As you merge smaller categories into larger and larger categories, the probability that people assign to the total category goes down."
'Subadditivity effect'
Which points in the opposite direction.

Upvoted the post. Worthy thing to discuss.

A reply to kalla724 that you did not mention is here:

Kalla724 claims that it is not possible to upload a C. elegans with particular memories and/or behaviors. I think that this is a testable claim and should shed light on kalla724's views on preserving personal identity with vitrification. I also think it is likely wrong.

Whether C. elegans can be uploaded with particular memories and/or behaviors has no bearing on whether human personal identity is preserved, since the C. elegans nervous system is completely identified - every C. elegans brain grows identically to every other C. elegans brain, so there are no structural wiring differences between one C. elegans and another. "Memories" (better thought of as stimulus-based behavioral divergences in something so small) are not encoded in the C. elegans' neural pattern at all, the way they are encoded in the human brain; they're merely held in a sort of 'active loop' of neurochemical feedback mechanisms. It's certainly possible that the same sort of thing happens with human brains, but on a much more complex scale - but it definitely seems true that human brains actively re-wire their neural interconnections in a way that C. elegans brains don't.
I wouldn't say it has no bearing. If C. elegans could NOT be uploaded in a way that preserved behaviors/memories, you would assign a high probability to human brains not being able to be uploaded. So: if (C. elegans) & ~(Uploading) goes up, then (Human) & ~(Uploading) goes WAY up. Of course, this commits us to the converse. And since the converse is what happened, we would say that it does raise the Human&Uploadable probabilities. Maybe not by MUCH. You rightly point out the dissimilarities that would make it a relatively small increase. But it certainly has some bearing, and in the absence of better evidence it is at least encouraging.

It would be very interesting to see cryonics for very simple brains of other species. This could determine or narrow down the range of probability for several factors.

Edit: Removed doubled word

There is a helpful web page on the probability that cryonics will work.

There are also some useful facts at the Alcor Scientists' Cryonics FAQ.

The neuroscientist might wish to pay attention to the answer to "Q: Can a brain stop working without losing information?" The referenced article by Mayford, Siegelbaum, and Kandel should be particularly helpful.

What is the chance that some other means are found of simulating your personality without physical access to your brain (preserved or otherwise)?

Would you like to consider the possibility of cryonic preservation / plastination becoming redundant in your estimates?

It probably depends on how faithful a copy you'd be contented with, as well as on how much evidence about you you leave behind (writings, internet posts, other people's memories, etc. -- lifelogging being the extreme version).
I wouldn't, because a simulation of me is effectively a copy, and having a copy lying around would not keep me from dying. It's not like I know a huge number of people would be thrilled at having a simulation of me to interact with (and probably annoy, hehehe). Having a simulation of me while I'm still alive, though, would probably come in handy, so it's not an idea to which I am opposed. I just don't see it making anything with a chance of preserving this instance of me redundant.
Every future state of you is a copy. I believe having a copy of me lying around would keep me from dying. However, I was referring to processes that might be put into place after a person's death. To name three: consequences of the simulation hypothesis, personality emulation from recorded sources, or advances in physics allowing observations of past events. Three more: the many-worlds hypothesis, a fundamental error in our worldview, ongoing extra-terrestrial intervention. And the big one, FOOM! I'm not sure how to cheat death, but I am open to examining options.

Not all of what makes you you is encoded in the physical state of the brain (or whatever you would have preserved).

This is probably true, isn't it? Most of what makes you, you, is in your brain, but another large part of it is mediated by hormones going to and from the rest of your body... I think. Yet most LWers who are into cryo go the 'neuro' route. Is there some reason why this consideration is not nearly as big a deal as I think? Is the idea that making a 'spare' human body is cheap?

(2) I'm not sure whether I should generalize to much of LW, but when people talk about extracting information from the brain, the plan is not repair, but to make a new brain, whether physical or in simulation. Making a new body is very cheap compared to this. (1) Simulating hormones is important, but is there any information there to preserve? If the brain controls hormones, then there is no information outside the brain. Of course, it doesn't control them directly, but mediated by glands that probably have different responsiveness in different people; certainly in people with glandular tumors. But there are just a few parameters to determine, basically average levels for that person. Testing different levels for a person would be like giving them external hormones. This changes people's personalities, but only temporarily. Thus it does not appear that much long-term information is stored in hormone levels. In principle the glands could do lots of information processing, but I don't think that there's any reason to believe that. However, the spinal column is made of nerves, which we do know are all about information processing, so it is likely that some information is stored there.
I see your point, thanks!

I just ran the numbers assuming I pay US $3000 /year (I forget Hoffman's actual figure) for 33 years (mind you, I think that's too pessimistic) discounted at 3% /year (the average annual inflation rate since 1913 equals 3.24%). The EPA set the value of a human life at $9.1 million two years ago. Perhaps I'm rigging the numbers by updating this for (actual) inflation and only discounting it by the 1/1500 probability. But I first estimated the value of my own life at $20 million, and I don't think I'd actually kill myself in return for (say) an SI donation that size.

The 'official' numbers would appear to make cryonics under-priced by $1403 in present value. (Edited to use official figures.)
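One way to sketch the structure of this comparison is below. The inputs ($3000/yr for 33 years at a 3% discount rate, the $9.1M EPA figure, the 1/1500 probability) are taken from the comment, but the exact discounting choices behind the $1403 figure aren't spelled out in the thread, so this is one plausible naive reading rather than a reconstruction of the original calculation, and it need not reproduce that figure.

```python
# Sketch of the cost/benefit comparison (inputs assumed from the comment;
# the commenter's exact discounting choices are not specified, so this is
# one naive reading, not a reconstruction of the $1403 figure).

fee = 3000.0            # assumed annual cryonics cost, $/year
years = 33              # assumed years of payments until death
r = 0.03                # discount rate (~average inflation since 1913)
value_of_life = 9.1e6   # EPA value of a statistical life, per the comment
p_works = 1 / 1500      # the thread's bottom-line success probability

# Present value of the stream of fees (ordinary annuity formula)
pv_fees = fee * (1 - (1 + r) ** -years) / r

# Expected benefit: value of life, weighted by the chance cryonics works
# and discounted back from the (assumed) date of death at year 33
expected_benefit = value_of_life * p_works * (1 + r) ** -years

print(f"PV of fees:       ${pv_fees:,.0f}")
print(f"Expected benefit: ${expected_benefit:,.0f}")
```

With these naive inputs the fees dominate; a positive bottom line requires some combination of a larger personal value of life (the commenter mentions $20 million), a higher success probability, or different discounting.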

Poking at deathtimer, I'm not sure it's adjusting for "you've already lived to age X". It says I'll probably live to 77, which is pretty close to what this table has for my life expectancy at birth.
Plus it's definitely not adjusting for potential future medical advances.

"Brain degradation after death" is the key point in this list that I'd be interested in learning about. I'm not sure whether it's proper to ask this in a comment now or whether I should be studying diligently around the issue, but I think it's also an interesting subject, so excuse me.

The cryonics process is often compared, by analogy, to the event of a hard drive being broken and the data still being retrievable, but brains and hard drives store information in very different ways, and this problem always strikes me as very unnerving. Without going into too much deta... (read more)

[This comment is no longer endorsed by its author]
Yes, good intuition. This is what Mike Darwin considers the largest problem in cryonics:
I hear it's common to overestimate the cost of cryonics. Have you actually checked on the prices? If not, it may be lower than you think. Full disclosure: I am not signed up for cryonics.
The prices range from ~$400/year for life insurance and membership fees if you're young and healthy to ~$100,000 if you're about to die and need to pay for it in full.
Presumably the $400/year should be expected to increase over time as you grow older and less healthy, and you should expect to end up contributing enough on average (one way or another) to pay that ~$100k when you finally die?
Upvoted because the idea is correct, but $100k is the upper end of the scale: Alcor charges $80,000 for neuropreservation (though $200,000 for whole-body, but really, why would you want that?); with Cryonics Institute you can get by with $28,000 for the cryopreservation and $1,250 for a lifetime membership (plus $120 per year until you can afford the $1,250); and Kriorus only charges $10k for neuropreservation.
Fixed rate life policies are available, but they tend to cost a bit more.
I don't expect there to be a way to cheat statistics: if the life policies all have the same payout, they most likely all have the same expected cost when you take into account interest rates. The insurance company wants to make money (in expectation), after all.

Question: Why do people here seem to focus only on the technical aspects of cryonics, and assume "future society will revive you-who-are-frozen" as a given? I can't see much reason for a future society to do so, other than as a historical curiosity.

Replying to my own question: Xachariah made a more detailed argument about a similar issue, a while back.