My girlfriend/SO's grandfather died last night, running on a treadmill when his heart gave out.

He wasn't signed up for cryonics, of course.  She tried to convince him, and I tried myself a little the one time I met her grandparents.

"This didn't have to happen.  Fucking religion."

That's what my girlfriend said.

I asked her if I could share that with you, and she said yes.

Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you.

189 comments

Is it okay to prefer to be an organ donor instead of signing up for cryonics?

This is the only reason I haven't signed up. What I want to do is sign up for neuropreservation and donate any organs and tissues from the neck down, but as far as I can tell that's not even remotely feasible. Alcor's procedure involves cooling the whole body to 0C and injecting the cryoprotectant before removing the head (and I can understand why perfusion would be a lot easier while the head is still attached). Also, I think it's doubtful that the cryonics team and the transplant team would coordinate with each other effectively, even if there were no technical obstacles.
Okay? Do whatever you want to do. If you know your expected value for your cryopreservation and the expected value you have for the life-saving you could be doing with your organs, then it's simple. Eliezer's say-so matters only in as much as he may be able to help with the math of translating your preferences into a coherent utility function.
Seems worth mentioning: I think a thorough treatment of what "you" want needs to address extrapolated volition and all the associated issues that raises. To my knowledge, some of those issues remain unsolved, such as whether different simulations of oneself in different environments necessarily converge (seems to me very unlikely, and this looks provable in a simplified model of the situation), and if not, how to "best" harmonize their differing opinions; similarly, whether a single simulated instance of oneself might itself not converge, or not provably converge, on one utility function as simulated time goes to infinity (seems quite likely; moreover, provable, in a simplified model), etc., etc. If conclusive work has been done of which I'm unaware, it would be great if someone wants to link to it. It seems unlikely to me that we can satisfactorily answer these questions without at least a detailed model of our own brains linked to reductionist explanations of what it means to "want" something, etc.
Eliezer Yudkowsky:
You'd need reliable statistics on the average number of lives saved per organ donor. If it works out to 0.1 then I wouldn't accept that reply, no.
A Google search gives some hospitals and organizations claiming an average of 3.75 lives saved per organ donor.
I imagine this is the figure per case of successful recovery. But a lot of people die such that their organs aren't recovered. That obviously needs to be factored in. **On edit: It occurs to me that a lot of the cases where organs aren't recovered are also cases where cryonic preservation wouldn't be possible. So I might be wrong about this. Maybe 3.75 is the right number to use. Can someone think of cases where preservation is possible but organ recovery isn't?**
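The adjustment being discussed is a one-line expected-value calculation. Both numbers below are illustrative: 3.75 is the hospital-quoted average from the comment above, while the recovery probability is a made-up assumption, not a sourced figure.

```python
# Expected lives saved per *registered* donor: the per-recovery average,
# discounted by the chance that organs are actually recoverable at death.
lives_per_recovery = 3.75  # average quoted in the thread
p_recovery = 0.02          # assumed fraction of donor deaths with usable organs

expected_lives = lives_per_recovery * p_recovery
print(expected_lives)  # 0.075 under these assumptions
```

If most donor deaths leave no usable organs, the effective figure can fall well below 1 life per donor, which is the scenario Eliezer's "0.1" reply is pointing at.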
Elderly patient suffering organ failure due to aging. Death by cancer (not of the brain). Potential donor had HIV or other very dangerous infectious diseases. Severe abdominal trauma. Probably other stuff, too.
Eliezer Yudkowsky:
Sounds slightly suspicious. QALYs?
I'm actually pretty surprised that you haven't looked this up yourself yet. Is there a point of effectiveness at which you would switch to organ donation over cryopreservation? ETA: Yes, I'm holding you to a higher standard of rationality, diligence and altruism than I use for others, including myself.
Eliezer Yudkowsky:
Probably not, for two reasons. One, Kantian-type reasoning: Someone has to lead the way through the transition, since the ideal would be enough people cryosuspending that they could just integrate the organ donation protocols into it. Two, and more important, there's a nonzero possibility that someone ends up wanting my brain for something interesting Before It's Over - that I wouldn't literally be out of the game.
Do you also, simply, desire to live? Or do you mean to say that if your life didn't possess those useful qualities, then it would be better, for you, to forfeit cryonics and have your organs donated, for instance? And I'm actually asking that question of other people here as well, who have altruistic arguments against cryonics. Is there a utility, a value your life has to have, such as being able to contribute to something useful, in order to be cryopreserved, because that would be for the greatest good of the greatest number of people? A value below which your life would be best not cryopreserved, and your body used for organ donations, or something equally destructive to you but equally beneficial to other people (and certainly more beneficial than whatever value you could create yourself if you were alive)?
This seems to assume that the probability that someone will be eventually successfully revived given that they have signed up for cryonics is >10%.
Personally, I'd rather sign up for cryonics. However, if your goal is to maximize the amount and quality of life lived, a plausible case can be made for either cryonics or organ donation. Organ donation will save some number of lives between 0 and maybe a dozen at best, depending on how you die. These lives will likely be elderly people who aren't signed up for cryonics. The money that would have gone to pay for your suspension can also be optimally donated to save some more lives; the most commonly tossed around number is 28 third-world lives vs. a high-quality suspension from Alcor. The benefit of cryonics depends on its chance of working, and on how long and happy your post-revival life would be. A detailed analysis is here. It came out that both options are pretty close, i.e. within the massive error bars of each other. In conclusion, I'd say either preference is "okay." Go with your conscience.
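The comparison in this comment can be sketched as a tiny expected-QALY calculation. Every number here is an assumption chosen for illustration (revival probability, post-revival lifespan, years per life saved), not a sourced figure apart from the "28 lives" and "3.75 lives" quoted in the thread; the point is only that plausible inputs can land either option ahead.

```python
# Hedged back-of-envelope: expected quality-adjusted life-years (QALYs)
# from each option, under made-up but not-crazy inputs.

def cryonics_qalys(p_revival: float, post_revival_years: float) -> float:
    """Expected QALYs from signing up, if revival delivers a long life."""
    return p_revival * post_revival_years

def donation_qalys(lives_saved: float, years_per_life: float) -> float:
    """Expected QALYs from organ donation plus optimally donated funds."""
    return lives_saved * years_per_life

cryo = cryonics_qalys(p_revival=0.05, post_revival_years=1000)
donate = donation_qalys(lives_saved=28 + 3.75, years_per_life=30)
print(cryo, donate)  # 50.0 952.5 -- wildly sensitive to the assumed inputs
```

Nudging the revival probability or post-revival lifespan by an order of magnitude flips the ordering, which is exactly the "within the massive error bars of each other" conclusion.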
If memory serves, you've said that your plan is to wait until your parents die and then kill yourself. Even if you do that and donate your organs, you should cryopreserve your head for a chance at waking up in a world you'd want to live in, or one that could better help you with that. It's a much worse strategy than just trying to live to see it, but still better than final death.
Are you sure you can undergo neuropreservation while donating your organs (in light of simpleton's comment)? Has it been done?
I don't know of such cases. If I understand correctly, at least Alcor's current procedure for neuropreservation would be compatible with removing organs to be donated.
Thanks, it looks like I misremembered -- if they're now doing perfusion after neuroseparation then it's much more likely to be compatible with organ donation. I've sent Alcor a question about this.
Do you have any particular reason to care what lifestyle choices people here consider 'okay'?
Eliezer Yudkowsky:
Do you have any particular reason to suggest that every attempt to ask anyone else for advice makes the requester a conformist?
Not at all, which is precisely why I haven't done that. CronoDAS wasn't asking for advice. Depending on how his question is interpreted, he was looking for permission / approval.
Specifically, I expect that he's looking for community validation of the extremely low value he places on his own life. Which is actually an interesting question, as I (unfortunately) don't think it's defensible to tell someone "No, your life is worth more than you personally value it at".
Some people have times when they are suicidally depressed. I think it's quite defensible to tell those people that their life is worth more than they personally value it at. More generally, I don't see any strong reasons to expect people to be less mistaken about their own life worth than about any other sort of value judgment. Also, I don't see any case yet for interpreting CronoDAS as doing anything more than simply asking a community that may have some insight into a given field (rationality), whether his reasoning or conclusions check out.
Yes, but valuable to whom? To themselves? That seems contradictory. To others? Sure, but what are you going to do about it, tell them they can't do as they please with their life because other people value it more than they do? In some general sense of intrinsic value? That's going to be difficult to define. This is an old comment so I no longer remember clearly, but he made remarks previously that were strongly indicative of my interpretation. I can possibly dig them up if you really wanted.

I doubt religion is a significant cause of not becoming persuaded. The walls of taboo around the subject and the strength of absurdity heuristic seem to me to be about as high in atheists' minds. At least, that's my experience, and it is in harmony with intuition about how to expect the state of affairs to be. Does anyone have any kind of anecdotal data points on that?

Eliezer Yudkowsky:
Well - it is my girlfriend who said it. I think the primary damage done by religion to atheists is the propagation of such things as "No one can possibly know" (which even some atheists unthinkingly repeat), a general tradition of avoiding the subject, an idea that you can say anything you want, and the contamination-by-association of any possible trick for living on after you stop breathing. The question you want is: in a world where religion had never existed, but people's reasoning abilities were otherwise at mostly the same level, how many people would now be signed up for cryonics? This is the damage done by religion alone.
Arguably religion does the most damage by de-legitimizing concerns like immortality and discontinuous world-changing events by surrounding them with a cloud of wishful and otherwise mistaken thinking.
Anecdotal? Sure. I'm pretty much an atheist and I'm not signed up for cryonics (and likely never will be). Less anecdotally, you could compare the number of atheists and/or non-religious people to the number actually signed up for cryonics. Without having the numbers handy, I'd guess that at least shows religion doesn't tell the whole story.
Why? Are you the sort of person who refuses to use the save-points in computer games?
Right now, there's virtually no evidence that cryonics works. If I wanted to spend money on something not proven to work, I could do it much more cheaply - I bet someone on the street outside would happily sell me an immortality potion for like 5 bucks. It makes a lot more sense to me to spend my money on things that will make my life better, for reals.
What evidence would you expect if it did work (that is, if it was a true fact that N years in the future the cryonically preserved people will return to life)? What kind of evidence would you accept as sufficient to be persuaded that it works?
I tend to vacillate on the cryonics debate and for me its beside the point since I really can't afford it as a broke college student (who isn't particularly at risk of dying). But one can certainly imagine better evidence that it would work other than an actual revivification. All sorts of discoveries in cryobiology could provide additional evidence that cryonics will work. Better results freezing and reviving other animals, for example.
Inverting the event, you may say that you are looking for evidence that it will never, ever be possible to revive someone. What sort of evidence would work for that? You are not looking for what is impossible now, nor for what will be impossible for the next 50 years. You are looking for what will never be possible. I don't see how any details of the progress in technology are in the slightest relevant to that question.
That is a good point. But progress matters because there is a non-zero chance that some disaster strikes, or the cryonics firm dissolves and you never get revived. I also think the farther into the future you get, the less interested future people will be in reviving (by comparison) the mentally inferior. Plus I'd much rather wake up sooner than later, since I'd rather not be so far behind my new contemporaries. So confidence that revival will be possible sooner rather than later increases the incentive to pay for the procedure. Edit: also, the longer revivification technology takes, the greater the chances of one of Alicorn's dystopian scenarios. Plus the far future might be thoroughly repugnant to the values of the present day, even if it isn't a dystopia.
This sounds possible but not at all obvious. It seems to me that so far, interest in historical people and compassion for the mentally inferior have if anything increased over time. This certainly doesn't mean they'll continue to do so out into the far future, but it does mean I'd need some really good reasons to support expecting them to.
So I can envision future persons wanting to meet some people from the past for historical reasons as you say. But I'm not sure we'd bring back thousands of Homo Habilis if we had the chance. One or two might be interesting- but what would we do with thousands?
"Future persons" are not a monolithic agent; all it takes is one agent able and willing to revive you, maybe the cryonics organization. And as Mulciber said, compassion is a likely motivation as well.
Thousands would still only be one per ~million citizens. Cryonauts would be at least as rare.
That depends on what the population is in the far, far future and the future popularity of cryonics. The farther into the future we're talking about, the more uncertainty we should have about these things. I was never claiming that it is particularly likely the preserved would be unwanted, just that such uncertainties give reason to be concerned with progress in cryobiology.
Paul Crowley:
Frankly, I think that future societies will be so resources-rich that they'll revive everyone because the small increase in entertainment thus provided will easily pay for the costs. However, if that's not so, there's an advantage to being one of the rare early preservees over the common later ones you suppose might arise; we would have better novelty value, and we'd remember things from further back.
Don't think of hedonic entertainment, think of the subjectively objective right thing to do.
I don't know. After I met my hundredth white, male, transhumanist who died circa 2050 I'd probably go back to whatever I was doing before I started reviving people. I imagine if we're so resource rich there will be somewhat better forms of entertainment. But yeah, If I sign up I'm definitely hoping people in the future are obsessed with stories from the past and will pay me quite a bit for them... since I really won't have any other marketable skills.
Probably something like this scenario (I just made up): Basically, the process working ever would be evidence that the process might ever work. Until then, consider me in the 'control group'.

I think it was Mike Li who analogized this to refusing to get on an airplane until after it has arrived in France. The whole point of cryonics is as an ambulance ride to the future; once you're in the future, you don't need cryonics any more. I severely, severely doubt that anyone will ever again be frozen after the time a cryonics revival is possible.

Isn't there some gut, intuitive level on which you can see that your objection obviously makes no sense, because conditioning on the proposition that cryonics with present-day vitrification technology does in fact work as an ambulance ride to the future, we still would not expect to see a revival in the present time?

I take it more to be like refusing to get on an airplane until any one has arrived anywhere, ever. For all I know, cryonics makes it harder to revive people. Not that I think it's likely that's the case, but it certainly doesn't seem worth my time and money.

It's like being the guy who checks the Wright brothers' calculations, finds them correct, and still refuses to leap onboard their untried prototype to escape a tiger, but instead prefers to stand and be eaten.

Look, conventional death makes it maximally hard to revive a person. Their information has dissipated. You would essentially need a time machine. Cryonics is a guaranteed improvement over that - at least you have something to work with.

Perhaps more like the Wright brothers were planning to figure out how to land the plane after they throw it off a cliff. And your example throws out the benefits of not signing up for cryonics, which are a major factor for me.
Note that if the Wright brothers didn't believe that there was a considerable chance of the plane not crashing, it would be a bad investment to build the plane in the first place. The question is about the cost: does the current state of knowledge support the positive outcome sufficiently to think of designing a plane? To design a plane? To build a plane? To perform an experiment, risking its destruction? To test-pilot a plane, risking one's life? The same goes for cryonics; here you risk something like 100 bucks a year.
So they haven't figured out the landing gear. So you might break your neck, might break your arm - but the tiger is sprinting towards you! It certainly will eat you!
I'll take my odds against a tiger rather than a cliff any day. How confident are you that you won't live forever?
Do you believe in the "Then I would feel sorry for the good Lord. The theory is correct." situations?
Sure - if Einstein signed up for cryonics, I might even follow suit. But a lot of really smart people are signing up for 'heaven', and I'm not listening to them, either.
Missing the point I think. Einstein wasn't stating this as any sort of appeal to authority. He was expressing his confidence in his mathematical proofs.
Mathematical proofs are an appeal to authority. Their standards rest entirely on the ability of experts in mathematics to understand them. If we had a canonical mechanical proof-checker or something, it might be a different story.
But they were Einstein's proofs. He was confident in the math that he understood. If he were trying to convince someone else, then yes he'd be using himself as an authority.
So, you concede that it's possible to know the outcome in advance without empirical observation of success. Now, what makes Einstein a special person for this purpose? Can it be you that decides?
Sure, it could be me that decides. That's why I've decided. What's your point?
That was an allusion to this question, which you still haven't answered. If, in principle, you could indeed decide that successful revival is possible, based only on theoretical knowledge, before any successful revival was ever performed, then you should be able to explain what kind of indirect evidence it would take to persuade you that successful revival is sufficiently likely for you to decide to sign up for cryonics.
This is too unintuitive an assumption to use in a basic refutation. I doubt it's even true, if revival is performed by non-AGI means, simply because of improved preservation technology, which may well become possible at some point.
Agreed. Suppose we simply learn how to revive someone who's frozen first (unlikely, I know). Then we would selectively freeze/unfreeze people based on the further limitations of medicine at the time (can treat gunshot wounds / can't treat leukemia).
Yes, that's one use case. I'm really not competent to estimate with any certainty how biologically feasible that is, and I assume it's not very feasible. If I remember correctly, the brains of the currently preserved, even after vitrification, get cracked during the freezing, so they won't work even if unfrozen, detoxified, etc. I don't know whether it's possible to find a solution to this problem with anything from the repertoire of current technology. But the decision concerns the current situation. What is your answer to those questions?
Aside: it looks a lot more feasible to me if you don't try to repair the original biology, but rather try to extract information from it for re-instantiation. Then for example brain cracks become a problem in image-alignment rather than in nanosurgery.
This argument forced me to change my mind a little: indeed, to do the neurosurgery, you need an image anyway, possibly of the same order of resolution or even greater than required for scanning, so emulation may be easier than repair, and realignment of the image should be relatively easy once you have a scan. Still, I don't see emulation working for a long time yet; I'd give it an expected 60 to 150 years, and it's hard to say what the process will look like at that point, or on the progress of which technologies its feasibility will depend.
That was "Whole Brain Emulation Roadmap" from the Future of Humanity Institute. (You shouldn't post bare links.) I'm sure aware of it. It's a feasibility study, and I'm sure it's feasible, so no great revelations there. The problem is that this study is roughly analogous to estimating that the progress in steam engine technology will allow very fast and efficient trains eventually, which means that there will be fast trains based on some technology, if they are still needed, at least that good. Which was basically my sentiment: you list all these technologies, but they, specifically, may be of little relevance. This isn't an argument for emulation to be infeasible. I also reserve an option for revival not being the best thing you can do with a dead body, but this is an argument this thread is too small to contain.
What kind of evidence would it take to convince you that cryonics has a small, but considerable chance of working in the future, prior to there being any successful revivals?
I don't like to deal in probabilities, but I'd reckon a successful revival of a dolphin would count. Short of that? Probably nothing, if by 'considerable' you mean 'worth spending my money on'. Things other than evidence might convince me though - like my wife wanting to sign up for cryonics for whatever fool reason.
Does it have to be a dolphin, or would successful revival of a mouse count? Try not to look up if that's been done before you answer. If you do know, try to imagine whether you'd count it as evidence, if you didn't already know.
No, that's out. Yes, I do mean that. This means, that no matter what you observe, you always estimate the probability of cryonics working as very low, right up to the point where it does succeed (if that ever happens). Which is equivalent to a priori estimating the probability of it working eventually very low also. Do you believe that progress will never be made, that it will never be possible to revive a very slowly changing frozen body? In 100 years? In 10000 years? Never ever?
You are not following the argument. You said that you accept the possibility of knowing the outcome from purely theoretical (indirect) argument (that is, not the kind of data where you are presented with successful revival of anything), as in the Einstein's anecdote. I ask what kind of indirect data/argument that would be, that is enough to convince you to sign up. That you may do that for signaling reasons is irrelevant to the question.
Vitrification works in organs. Neurons are being simulated in software. Stem cells tech is improving. We already pretty much have the electron-microscope and chemical assay tech to dice, slice, scan and digitize a frozen brain. We don't yet know exactly what to digitize, but neuroscience is a heavily studied field. The fact of revival isn't here yet, but the peripheral evidence is strong.
You may be surprised, but none of these arguments significantly move me. I think the damage is too great and complex for such techniques to work for a long time, and when something finally is up to the task, the particular list of hacks you mention won't be relevant at all.
I've seen slides, the earliest ones were really wrecked by ice, but a modern vitrification process is much less destructive. Cryonics is going to be very much LIFO, but the last few in might well be fixable with barely more than hacks.
I assume you mean to compare the ratio of atheists in the general population to the ratio of atheists among those signed up. That won't work very well, as exposure to the argument is too tilted towards atheists, and it's too hard to correct for that.
Nope, I meant compare the number signed up to the number of atheists (raw numbers). That doesn't tell you whether religion is a factor in avoiding cryonics, but it does tell you whether religion is the only thing keeping everybody from signing up for cryonics. Since by far the majority of atheists are not signed up for cryonics, it's pretty clear that religion isn't the only thing stopping people. ETA: Okay, Vladimir_Nesov (below) has convinced me I wasn't considering the same question.
That's silly. Too few people know of the idea, and it's too hard to persuade any given person. The question wasn't about the absolute difficulty of getting the argument through, but about the relative effect of being religious on a person's ability to accept the procedure.
I'm an atheist and I'm not currently persuaded by the case for cryonics. I'm unpersuaded purely on a (non-rigorous, informal) cost-benefit analysis. It just seems to me that there are better things to spend my money on. It seems to me that you can make a similar case for being a survivalist - stocking up on guns, ammo and emergency supplies in case of major disaster - and while the argument is sound I just don't judge the expected utility to be worth the outlay. The social stigma is certainly a factor in both cases.
Hmmm... Interesting point, I'm not at all sure how feasible the advantage of having a survivalist hideout is. On the other hand, my position on cryonics pushes the feasibility through the roof, so it's easier to decide.
A lot of the factors you have to consider when deciding the likelihood of being revived with cryonics are the same risk factors you'd consider for maintaining a survivalist hideout but operating in the opposite direction. The more likely you consider economic or social collapse, natural disasters or other societal disruptions which would make a cryonic revival less likely the more value you'd place on survivalist preparations. It's plausible to me that my chances for living long enough to see radical life extension become feasible would be improved by survivalist preparations to a greater extent than expending the same resources on cryonics would improve my chances of being revived at some future date. The relative benefits here would depend on age and other personal factors, though again I'm not claiming to have done a rigorous cost-benefit analysis.
Factors may be the same, but the probabilities of success are on the different sides of these factors. Where cryonics succeeds, survivalist hideout is likely unnecessary, but where cryonics fails, survivalist hideout is only useful within the border cases where the society breaks down, but it's still possible to survive. And there, how much does the advance preparation help? Groups of people will still be more powerful and resilient, so I'm not convinced it's of significant benefit.
I think the history of the 20th Century has quite a few examples of situations where society broke down to a large extent within certain regions and yet it was possible to survive (in a world which overall was progressing technologically) for long enough to relocate somewhere safer. Survival in those situations probably depends on luck to quite an extent but survivalist type preparations would likely have increased the chance of survival. The US (where cryonics seems to be most popular) did not really suffer any such situations in the 20th century, with the possible exception of a few natural disasters, but much of Europe and Asia did. I think the main area where I differ from most cryonics advocates on the probability of it working is in the likelihood of the cryonics institution surviving intact until revival is possible. I think in a future scenario somewhat like WWII in Europe or the cultural revolution in China a cryonics institution would be unlikely to survive but human civilization would as would lucky and/or prepared individuals.
How much do you expect it to cost?
At a guess somewhere around a $250,000 value life insurance policy? I don't know how much that costs but somewhere around $2000 a year maybe? I could go and look it up but those are my off the top of my head guesses.
The Cryonics Institute does whole-body preservation for $28,000. (I looked it up.)
That is cheaper than I expected. Surprisingly cheap - storage costs must be pretty low if that covers initial preservation and enough funds for the investment return to cover storage in perpetuity.
Eliezer Yudkowsky:
Liquid nitrogen is not very expensive.
Still, that money presumably has to fund storage costs in perpetuity. Assuming some of the money goes to up-front freezing costs, say you have $25,000 in 20-year TIPS yielding a fairly risk-free inflation-indexed 2.5%; you've got $625 a year to cover storage. That barely pays for a small self-storage unit around here. It's almost suspiciously cheap.
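The perpetuity arithmetic above checks out; note the principal and real rate are the commenter's assumptions, not figures from the Cryonics Institute:

```python
# Annual real income from an assumed $25,000 endowment at a 2.5%
# inflation-indexed yield: the budget available for storage in perpetuity.
principal = 25_000
real_rate = 0.025

annual_budget = principal * real_rate
print(annual_budget)  # 625.0
```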
Eliezer Yudkowsky:
Liquid nitrogen is on the order of $80 - which is either the cost per month per cryostat or the cost per customer per year, I don't recall which. The Cryonics Institute owns its own building, and you can keep more than one body in a single cryostat (big cylinder of liquid nitrogen). The annual fixed costs of cryonics are practically nothing. The costs would decline even further with economies of scale and the scale to invest in better technology. Immortality for everyone in the United States would be a rounding error in the stimulus bill.
For everyone? Well, there'd also be the cost of building the facilities... Anyways, maybe we really should try to push something like that? (Yeah yeah, I know, unlikely.) Anyways, did you get the PM I sent? (About talking me through some of the specifics of actually signing up?)
I emailed The Cryonics Institute this morning with my details based on this application form - I got a reply almost immediately. Then I sent $1,250 via PayPal. And I'm signed up. I also have to send a copy of the signed app form by post. I'm lucky enough to have saved up the $28,000 needed for the cryopreservation, but I reckon it's not too expensive to get a life insurance policy for the amount. I have cheated on this decision by writing down the bottom line without figuring out an answer for myself. But if I had to give one reason to justify it, it's simple: I want to live. The arguments against cryonics in the comments here have any ground only in a world accustomed to disposable human life. Now I have a chance to wake up in a world which is not so.
Cool! (well, very cold, I guess... :)) and thanks. I think I'll probably be doing the "link an insurance policy to it" thing instead, though. I think I want to sign up as a neuro... but I think CI doesn't do those, only Alcor... Now the thing I'm trying to figure out is this: Are the Alcor membership fees the same for both whole body members and neuro members? Because, if so, it would seem that costs push me more toward CI. (seems silly that a full body suspension would be less expensive, but...)
Eliezer Yudkowsky:
Congratulations. See you 'round - eventually!
Thanks Eliezer. I am imagining waking up to see you on a plasma screen with a long white beard saying "Welcome. Didn't I tell you I'd see you 'round - eventually? Now look here, meet Bruce the Friendly AI." Sorry, that is pretty bad, but I couldn't resist...
Eliezer Yudkowsky
PM? Nope, I'm not sure how to check PMs here. (Can you please ask someone else, though? Almost anyone else in the world would probably be better...)
The button isn't there explicitly (there probably should be one, but you can still get to your inbox). And okay - the message explains why I was asking you specifically. Hopefully, given the context, there are others I could ask instead. Well, thanks anyway. But yeah, I'm basically at an "okay, I want to sign up, and I seem to be able to afford to; now I just need to actually work out the steps to do so (including all the specific legal details I need to take care of to make it all work), decide on CI vs. Alcor, etc." stage. And technically, "almost anyone else in the world" being better is very unlikely - I mean, the space of people who have actually signed up, or are otherwise familiar with the specific details of doing so, is rather small, no? :) But okay. Well, I may as well see if anyone else who sees this message and has already signed up would be available to talk me through some of that.
Eliezer Yudkowsky
Try Rudi Hoffman, who sells cryonics-friendly life insurance policies and can talk you through other aspects as well. He handled mine.
Okay, will check into that, thanks!
Cost of facilities per person should go down significantly as the number of people gets large, right?
Also, don't bother with whole-body preservation. It's useless, because regrowing a body is the least of revival problems, and it's harmful, because your brain spends longer warm while the whole useless hunk of meat attached to it is cooling down. Plus it costs more.
I'd feel more comfortable with that if we knew more about the extent to which the glial cells around the heart -- not to mention the remainder of the nervous system -- play a role in learning, decisionmaking, emotion etc. I'd hate to lose any non-recoverable data from those systems and have to recreate it, e.g. learning to walk again, or missing emotional reactions, or who knows what else. I think I'd want to keep the "useless hunk of meat" around, just in case, even if it had to be separated from the head for better cooling.
If they did play such an important role in human thought, wouldn't you expect there to be case studies of people who become psychologically impaired after heart surgery (in particular, the installation of an artificial heart)?
None, I can guarantee it. The wires are too long. Local cells might well be involved in handling local problems - heartbeat, reflex loops, etc. They won't be collaborating with the brain except by providing info and carrying out orders.
CI only offers full-body, but it's cheaper than Alcor's neuro option.
I find that my absurdity heuristic gives a strong signal against. Also, we can't be certain that it will work and we can't be certain how well it will work. This makes it very hard for me to evaluate as an investment. If I can't quantify the payoff or the odds, how can I justify the expense?
That's how the absurdity heuristic is supposed to work. But sometimes, it goes hilariously wrong, turning into an absurdity bias. You can't be certain, but you can make estimates. Every time you decide one way or the other, you make an implicit estimate. If you decide not to invest, you basically state that, given your current knowledge, you judge the investment as not worthwhile. This is not at all the same as "not being able to evaluate". You have to, every time you need to make a decision. What remains is to make sense of your decision, trying not to get it wrong.

This may be a naïve question, but could someone make or link me to a good case for cryonics?

I know there's a fair probability that we could each be revived in the distant future if we sign up for cryonics, and that is worth the price of admission, but that always struck me as a mis-allocation of resources. Wouldn't it be better, for the time being, if we dispersed all the resources used on cryonics to worthwhile causes like Iodized salt, clean drinking water, or childhood immunization and instead gave up our organs for donation after death? Isn't the c... (read more)

I'd agree that signing up for cryonics and being a traditional utilitarian (valuing all human life equally) aren't really compatible. I'm not a utilitarian, so that's not my problem with cryonics, but it does seem to be hard to reconcile the two positions. It's hard to reconcile any western lifestyle with traditional utilitarianism, though, so if that's your main concern with cryonics, perhaps you need to reconsider your ethics rather than worry about cryonics.
One of the beauties of utilitarianism is that its ethics can adapt to different circumstances without losing objectivity. I don't think every "western lifestyle" is necessarily reprobate under utilitarianism. First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild. We can't all afford to be Gandhi. The rub is trying to avoid being a part of really harmful, unsustainable things like commercial ocean fishing or low fuel-efficiency cars without causing an ethically greater amount of inconvenience or economic harm. All that said, I'd be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.
Universalizability arguments like this are non-utilitarian; it's the marginal utility of your decision (modulo Newcomblike situations) that matters. It definitely seems to me that refraining from these things is so much less valuable than making substantial effective charitable contributions (preferably to existential risk reduction, of course, but still true of e.g. the best aid organizations), probably avoiding factory-farmed meat, and probably other things as well.
Interesting. I'm not certain, but I think this isn't quite right. In theory, the westerners would just be sending their money to desperately poor people, so aggregate demand wouldn't necessarily decline, it would move around. Consumption really doesn't create wealth. Of course rational utilitarian westerners would recognize the transfer costs and also wouldn't completely neglect their own happiness. Unless you believe in objective morality, a policy of utilitarianism, pure selfishness, or pure altruism may each be instrumentally rational, depending on your terminal values. If you have no regard for yourself, then pursue pure altruism. Leave yourself just enough that you can keep producing more wealth for others. Study Mother Teresa. If you have no regard for others, then a policy of selfishness is for you. Carefully plan to maximize your total future well-being. Leave just enough for others that you aren't outed as a sociopath. Study Anton LaVey. If you have equal regard for the happiness of yourself and others, pursue utilitarianism. Study Rawls or John Stuart Mill. Most people aren't really any of the above. I, like most people, am somewhere between LaVey and Mill. Of course defending utilitarianism sounds better than justifying egoism, so we get more of that.
Paul Crowley
Hitchens: The pope beatifies Mother Teresa, a fanatic, a fundamentalist, and a fraud.
Yeah, I heard about this on Bullshit with Penn & Teller. I considered choosing someone else, but Mother Teresa is still the easiest symbol of pure altruism. (That same episode included a smackdown on the Dalai Lama and Gandhi, so my options look pretty weak.)
Yes, 'pure altruism' is a pretty weak position, and you won't find many proponents of it. Altruism as an ethical position doesn't make any sense; you keep pushing all of your utils on other people, but if you consider a 2-person system doing this, nobody actually gets to keep any of the utils.
Agreed, but under certain conditions relating to how much causal influence one has on others vs. oneself, utilitarianism and pure altruism lead to the same prescriptions. (I would argue these conditions are usually satisfied in practice.)
Eliezer Yudkowsky
Gandhi? Really? My impression is that the "smackdown" on Gandhi is vastly, vastly less forceful than the smackdown on Teresa. Though I haven't watched that particular episode, I've read other critiques that seemed to be reaching as far as possible, and they didn't reach very far.
It mostly had to do with Gandhi being racist.
Unsure if it's worth reading, but here is a long critical article.
Perhaps you should reconsider the value of 'pure altruism'.
I'm not an economist, but I think you could model that as a kind of demand. And I don't think I stipulated to there being a transfer of wealth. For me, the interesting question is how one goes about choosing "terminal values." I refuse to believe that it is arbitrary or that all paths are of equal validity. I will contend without hesitation that John Stuart Mill was a better mind, a better rationalist, and a better man than Anton LaVey. My own thinking on these lines leads me to the conclusion of an "objective" morality, that is to say one with expressible boundaries and one that can be applied consistently to different agents. How do you choose your terminal values?
Yes that was my point. I go on to say that aggregate demand would not decrease. I recommend Eliezer's essay regarding the objective morality of sorting pebbles into correct heaps.
Short answer? We don't. Not really. Human beings have an evolved moral instinct. These evolutionary moral inclinations lead to us assigning a high value to human life and well-being. The closest internally coherent seeming ethical structure seems to be utilitarianism. (It sounds bad for a rationalist to admit "I value all human life equally, except I value myself and my children somewhat more.") But we are not really utilitarians. Our mental architecture doesn't allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.
Only because that's logically contradictory. If you drop the equally part it sounds fine to me: "I value all human life, but I value some human lives more than others.". Utilitarianism is clearly not a good descriptive ethical theory (it does a poor job of describing or predicting how people actually behave) and I see no good reason to believe it is a good normative theory (a prescription for how people should behave).
Paul Crowley
How are you going to evaluate a normative theory, except by comparison to another normative theory, or by gut feeling?
'Gut feeling' is pretty much how I am evaluating it (and is a normative theory in a sense - what is good is what your intuition tells you is good). Utilitarianism says I should value all humans equally. That conflicts with my intuitive moral values. Given the conflict and my understanding of where my values come from I don't see why I should accept what utilitarianism says is good over what I believe is good. I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure. Ethics has to address the problem of what to do when two agents have conflicting desires rather than trying to wish away the conflict.
Paul Crowley
What do you mean by an "ethical theory" here? Do you mean something purely descriptive, that tries to account for that side of human behaviour that is to do with ethics? Or something normative, that sets out what a person should do? Since it's clear that people express different ideas about ethics from each other, a descriptive theory that said otherwise would be false as a matter of fact. However, normative theories are generally applicable to everyone through no other reason than that they don't name specific individuals that they are about. Utilitarianism is a normative proposal, not a descriptive theory.
I mean a normative theory (or proposal if you prefer). Utilitarianism clearly fails as a descriptive theory (and I don't think its proponents would generally disagree on that). A normative theory that proposes everything would be fine if we could all just agree on the optimal outcome isn't going to be much help in resolving the actual ethical problems facing humanity. It may be true that if we all were perfect altruists the system would be self-consistent, but we aren't, I don't see any realistic way of getting there from here, and I wouldn't want to anyway (since it would conflict with my actual values). A useful normative ethics has to work in a world where agents have differing (and sometimes conflicting) ideas of what is an optimal outcome. It has to help us cooperate to our mutual advantage despite imperfectly aligned goals, rather than try to fix the problem by forcing the goals into alignment.
Paul Crowley
Utilitarianism is a theory for what you should do. It presupposes nothing about what anyone else's ethical driver is. If cooperating with someone with different ethical goals furthers total utility from your perspective, utilitarianism commends it.
Shouldn't this be evidence that utilitarianism isn't close to the facts about ethics?
Only if you think we're wired to be ethical.
I believe that was part of what knb was saying.
The rest of our brains are wired to give close-enough approximations quickly, not to reliably produce correct answers (cf. cognitive biases). It's not a given that any coherent definition of ethics, even a correct one, should agree with our intuitive responses in all cases.
A longer answer looks at what 'choice' means a little more closely, and wonders how traceable causality implies lack of choice in this instance and yet still manages to have any meaning whatsoever.
I'm interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more 'objectively' moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all 'defecting' (the prisoner's dilemma is too simple to be a really representative model but is a useful analogy). The extent and nature of that minimal framework is an open question and is what I'm interested in establishing.
You might be interested in the literature in normative ethics on what is called the overdemandingness problem. In particular, check out Liam Murphy on what he calls the cooperative principle. It takes utilitarianism but establishes a limit on the amount individuals are required to sacrifice. Murphy's theory sets the limit at that which the individual would be required to sacrifice under full cooperation. So rather than sacrificing all your material wellbeing until giving more would reduce your wellbeing to beneath that of the people you're trying to help, you instead need only sacrifice what would be required of you if the entire western world and non-western elites were doing their part as well.
You're talking about 'politics', not 'ethics'. Politics is about working together, ethics is about what one has most reason to do or want. What the political rules should say and what I should do are not necessarily going to give me the same answers.
I disagree with your definitions. You seem to be talking about normative ethics - what you 'should' do. I'm more interested in topics that might fall under meta-ethics, descriptive ethics and applied ethics. There is certainly cross-over with politics but there is a lot of other baggage that comes with the word politics that means it's not a word I find useful to talk about the kind of questions I'm interested in here.
Think coordination. Two agents may coordinate their actions, if doing so will benefit both. In this sense, it's cooperation. It doesn't include fighting over preferences; fighting over preferences would just consist in them acting on the environment without coordination. But fighting should never be preferable, since the set of coordinated plans is a strict superset of the set of uncoordinated plans, and as a result it always contains a solution that is a Pareto improvement on the best uncoordinated one, that is, at least as good for both players as the best uncoordinated solution. Thus, it's always useful to coordinate your actions with all other agents (and at this point, you also need to dole out the benefit of coordination to each side fairly; think Ultimatum game).
Peaceful coexistence is not something I object to. Neither does anything oblige agents to perfectly align their values, each is free to choose. I strongly endorse people with wildly different values cooperating in areas of common interest: I'm firmly in Anton LaVey's corner on civil liberties, for instance. It should be recognized, though, that some are clearly more wrong than others because some people get poor information and others reason poorly through akrasia or inability. Anton LaVey was not trying hard enough. I think the question is worth asking, because it is the basis of building the minimal framework of rules from each person's judgement: How are we supposed to choose values?
It seems to me that most problems in politics and other attempts to establish cooperative frameworks stem not from confusion over terminal values but from differing priorities placed on conflicting values and most of all on flawed reasoning about the best way to structure a system to best deliver results that satisfy our common preferences. This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.
On the whole, we're agreed, but I still don't know how I'm supposed to choose values. I think this tactic works best when you're dealing with a particular constituency that agrees on some creed that they hold to be objective. Usually, when you call your opponent a bad person, you're playing to your base, not trying to grab the center.
I don't think objectivity is an important feature of ethics. I'm not sure there's such a thing as a rationalist ethics. Being rational is about optimally achieving your goals. Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent. I gave a rough exposition of what I see as a possible rationalist ethics in this comment but it's incomplete. If I ever develop a better explanation I might make a top level post.
It often turns out that generating consistent decision rules can be harder than one might expect. Hence the plethora of "impossibility theorems" in social choice theory. (Many of these, like Arrow's, arise when people try to rule out interpersonal utility comparisons, but there are a number that bite even when such comparisons are allowed, e.g. in population ethics.)
Yeah, expecting to achieve consistency is probably too much too ask but recognizing conflicts at least allows you to make a conscious choice about priorities.
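The flavor of these impossibility results can be seen in miniature with a Condorcet cycle: three voters, each with a perfectly transitive ranking, whose pairwise majority verdict is intransitive. A minimal sketch (the three-voter ballot profile is the standard textbook example, not anything from this thread):

```python
# Three voters, each with a perfectly transitive ranking (best first):
ballots = [("A", "B", "C"),
           ("B", "C", "A"),
           ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks candidate x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majority votes: A beats B, B beats C... and C beats A.
# Every individual is consistent, but the group preference is a cycle,
# so "what the majority prefers" is not a coherent ranking at all.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

Arrow's theorem is, roughly, the statement that no rank-based aggregation rule can avoid this kind of pathology while also satisfying a handful of mild fairness conditions.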
Ok, here is what I don't agree with: I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn't we try to pin down and either discard or accept some version of "purpose," as a sort of first instrumental rationality? I mention objectivity because I don't think you can have any useful ethics without some static measure of comparability, some goal, however loose, that each person can pursue. There's little to discuss if you don't, because "everything is permitted." That said, I think ethics has to understand each person's competence to self-govern. Your utility function is important to everyone, but nobody knows how to maximize your utility function better than you. Usually. Ethics also has to bend to reality, so the more "important" thing isn't agreement on theoretical questions, but cooperation towards mutually-agreed goals. So I'm in substantial agreement with you there, and I would thoroughly enjoy a post on this topic.
Why do you think it needs to be confronted? I know there are many things that I want (though some of them may be mutually exclusive when closely examined) and that there are many similarities between the things that I want and the things that other humans want. Sometimes we can cooperate and both benefit, in other cases our wants conflict. Most problems in the world seem to arise from conflicting goals, either internally or between different people. I'm primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts. I have no desire to change my goals except to the extent that they are mutually exclusive and there is a clear path to a more self consistent set of goals. To the extent that we share a common evolutionary history our goals as humans overlap to a sufficient extent that cooperation is beneficial more often than not. Even where goals conflict, there is mutual benefit to agreeing rules for conflict resolution such that not everything is permitted. It is in our collective interest not to permit murder, not because murder is 'wrong' in some abstract sense but simply because most of us can usually agree that we prefer to live in a society where murder is forbidden, even at the cost of giving up the 'freedom' to murder at will. That equilibrium can break down and I'm interested in ways to robustly maintain the 'good' equilibrium rather than the 'bad' equilibrium that has existed at certain times and in certain places in history. I don't however feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.
I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves? Again, this is the ultimately important part. Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want. Further, we discipline ourselves so that our goals are clear and consistent. All I'm saying is that you may want to look into the basis of your own goals and systematize them to enhance clarity.
I'm very interested in those questions and have read a lot on evolutionary psychology and the evolutionary basis for our sense of morality. I feel I have a reasonably satisfactory explanation for the broad outlines of why we have many of the goals we do. My curiosity can itself be explained by the very forces that shaped the other goals I have. Based on my current understanding I don't however see any reason to expect to find or to want to find a more fundamental basis for those preferences. Our goals are what they are because they were the kind of goals that made our ancestors successful. They're the kind of goals that lead to people like us with just those kind of goals... There doesn't need to be anything more fundamental to morality. To try to explain our moral principles by appealing to more fundamental moral principles is to make the same kind of mistake as to try to explain complex entities with a more fundamental complex creator of those entities. Hopefully we can all agree on that.
I think we are close. Do you think enjoyment and pain can be reduced to or defined in terms of preference? We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also. Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for. To me, preference is significant because it usually underlies the start of desirable cognitions or the end of undesirable ones, in me and other conscious things. The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized. That is the whole hand-off from evolution to "objective" morality, from there, the faculties of rational discipline and the minimal framework of society take over. Is it too much?
Certainly close enough to hope to agree on a set of rules, if not completely on personal values/preferences. I don't really recognize a distinction here. The explanation explains why preferences are their own justification in my view. I think I at least partially agree - sometimes we should override our immediate moral intuitions in light of a deeper understanding of how following them would lead to worse long term consequences. This is what I mean when I talk about recognizing contradictions within our value system and consciously choosing priorities. This looks like the utilitarian position and is where I would disagree to some extent. I don't believe it's necessary or desirable for individuals to prefer 'aggregated' utility. If forced to choose I will prefer outcomes that maximize utility for myself and my family and friends over those that maximize 'aggregate' utility. I believe that is perfectly moral and is a natural part of our value system. I am however happy to accept constraints that allow me to coexist peacefully with others who prefer different outcomes. Morality should be about how to set up a system that allows us to cooperate when we have an incentive to defect.

I'm sad that I can't downvote this article. It's ridiculously off-topic.

ETA: still, it's terrible. That's how Douglas Adams died!

Don't worry. I'd guess that posting this comment resulted in other people downvoting the article to compensate. Which makes me think the karma limit on downvotes doesn't prevent downvotes (among high-karma members) so much as make them something that's done indirectly by posting a comment, rather than clicking "vote down."
It seems almost designed to degenerate into a flame-war concerning cryonics!

I'm not signed up for cryonics. Partly, this is because I'm poor. Partly, it's because I'm extremely risk-averse and I can imagine really really horrible outcomes of being frozen just as easily as I can imagine really really great outcomes - in the absence of people walking around who were frozen and awakened later, my imaginings are all the data I have.

I'm sorry for your loss and that of your girlfriend, and I wish her grandfather had not died. While I'm at it, I'll wish he'd been immortal. But there are two mistaken responses to the fact that human b... (read more)

By "extremely risk-averse" do you mean "working hard to maximise persistence odds" or "very scared of scary scenarios"? You're right that death while signed up for cryonics is still a very bad thing, though. I don't think Eliezer would be fine with deaths if they were signed up, but sometimes he makes it seem that way.
I mean something like the second thing. Basically, I would invariably rather bet one dollar than bet two when the expected value is identical for both bets - even odds, say. And if you make it a $1000 bet versus $2000, I'll probably prefer the first bet over the second even if its expected value is strictly worse, simply because I can't tolerate any risk of being out two thousand dollars. (I can't tolerate much risk of being out a thousand either, given my poor-grad-student finances, but this is assuming I have no "don't gamble at all" option.)
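The ranking described here (prefer the smaller of two even-odds bets with the same expected payoff) is exactly what falls out of modeling risk aversion as a concave utility function over wealth. A minimal sketch, assuming an arbitrary $5000 wealth level and square-root utility (both my illustrative choices, not anything from the thread):

```python
import math

def expected_utility(stake, wealth=5000.0, p=0.5, u=math.sqrt):
    """Expected utility of a fair even-odds bet of +/- `stake` around `wealth`.

    With a concave u (here sqrt), the expected *money* is the same for
    every stake, but the expected *utility* falls as the stake grows --
    which is what risk aversion means.
    """
    return p * u(wealth + stake) + (1 - p) * u(wealth - stake)

no_bet = math.sqrt(5000.0)        # utility of not gambling at all
small = expected_utility(1000.0)  # the $1000 even-odds bet
large = expected_utility(2000.0)  # the $2000 even-odds bet

# Bigger fair bets are strictly worse for a risk-averse (concave-u) agent:
assert large < small < no_bet
```

Note both bets leave expected wealth unchanged at $5000; only the spread differs, and concavity penalizes the spread.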
Eliezer Yudkowsky
I show no particular tendency to flinch from the deaths of those near me who were not preserved. Do you think my fear of my own death is so much greater as to drive me to irrationality only there, and only on cryonics? I could as easily accuse you of sour grapes for presently not having the money to sign up. Not that I am so accusing - but be wary of who you accuse of rationalization; there are many tragedies in this universe, but you should be careful not to go around accepting the ones that aren't inevitable.
When I spoke of "not dealing with it", I didn't mean to say that you do this with people who die and aren't signed up for cryonics. (I had already read and was very moved by your piece on Yehuda.) When someone does get frozen, though, it's easy to categorize them as "maybe not dead" - since if a frozen person weren't maybe-not-dead, no one would be frozen.
Eliezer Yudkowsky
Alicorn, not everything that is less than absolutely awful to believe, is therefore false. In the end, either the information is there in the brain or not, and that's a question of neuroscience and the limits of possible revival tech; that's not something which can be possibly settled by observing which answers are comforting or discomforting.
I'm obviously not being very clear. I'm not making a case that it's irrational to sign up for cryonics - I'm just saying it's not appropriate for someone with a very high risk aversion, such as myself. I'm informed by the same person who taught me about levels of risk aversion in the first place that no given level of risk aversion is necessarily rational or irrational; it's just a personal characteristic. It's quite possible that by making these choices you'll be around, enjoying a great quality of life, in four thousand years, and I won't. That would be awesome for you and less awesome for me. I'm just not willing to take the bet.
Describing this as being averse to risks doesn't make much sense to me. Couldn't a pro-cryonics person equally well justify her decision as being motivated by risk aversion? By choosing not to be preserved in the event of death, you risk missing out on futures that are worth living in. If you want to take this into bizarre and unlikely science fiction ideas, as with your dystopian cannon fodder speculation, you could easily construct nightmare scenarios where cryonics is the better choice. Simply declaring yourself to have "high risk aversion" doesn't really support one side over the other here. This reminds me of a similar trope concerning wills: someone could avoid even thinking about setting up a will, because that would be "tempting fate," or take the opposite position: that not having a will is tempting fate, and makes it dramatically more likely that you'll get hit by a bus the next day. Of course, neither side there is very reasonable.
I call it risk aversion because if cryonics works at all, it ups the stakes. The money dropped on signing up for it is a sure thing, so it doesn't factor into risk, and if I get frozen and just stay dead indefinitely (for whatever reason) then all I've lost compared to not signing up is that money and possibly some psychological closure for my loved ones. But the scenarios in which cryonics results in me being around for longer - possibly indefinitely - are ones which could be very extreme, in either direction. I'm not comfortable with such extreme stakes: I prefer everything I have to deal with to be within my finite lifespan, in the absence of having a near-certainty about a longer lifespan being awesome. I don't doubt that there are some "nightmare" situations in which I'd prefer cryonics - I'd rather be frozen than spend the next seventy years being tortured, for example - but I don't live in one of those situations.
That's starting to sound like a general argument for shorter lifetimes over longer ones. Is there a reason this wouldn't apply just as well to living for five more years versus fifty? There's more room for extreme positive or negative experiences in the extra 45 years.
Not at all - I'd take straight up immortality, if somebody offered, although I'd rather have a suicide option loophole for cases where I'm the only person to survive the heat death of the universe or something. Perhaps I unduly value the (illusion of?) control over my situation. But my reasoning is about the choice as a gamble: my risk aversion makes me prefer not to take the gamble that cryonics unambiguously is, which could go well or badly and has a cost to play.
Are you just scared of the idea of evil aliens, or do you actually think that it's a significant risk that cryonicists recklessly ignore?
It's not high on my list of phobias. I don't judge the risk to be very serious. But then, the tiny risk of evil aliens isn't opposed to a great chance of eternal bliss; it's competing with an equally tiny chance of something very nice.
Paul Crowley
I would guess that however small the chances of being reanimated by benevolent people are, the chances of being reanimated by non-benevolent people are much smaller, just because any benevolent person with the capacity to do so cheaply will want to do so, while most non-benevolent futures I can imagine won't bother.
Sadists exist even in the present. Unethical research programs are not unheard of in history. This is a little like saying that I shouldn't worry about walking alone in a city at night in an area of uncertain crime rate, because if someone benevolent happens by they'll buy me ice cream, and anyone who doesn't wish me well will just ignore me.
But you wouldn't choose to die rather than walk through the city, would you? It's hard for me to take the nightmare science fiction scenarios too seriously when the default action comes with a well-established, nonfictional nightmare: you don't sign up for cryonics, you die, and that's the end.
Economics are key here. What do people have to gain from taking certain actions on you/against you? Also note that notions of "benevolence" have varied throughout the ages -- and it has not been a monotonically increasing function! There are times and places in this world when a lone drifter would have been -- by default -- "benevolently" enslaved by the authorities, but where this default action would change to "put to death" several decades later. How well one is treated always depends on the economic and political power of the group one is associated with. Do our notions of lawful ownership match those of ancient civilizations? They do match in broad outlines, but in terms of specific artifacts, our notions diverge dramatically. If we somehow managed to clone Tutankhamen and recover his mind from the ether and re-implant it, what are the chances he's going to get all of his stuff back?
I agree the chances are much smaller, but the question is what happens when you multiply by utility.
OK, you're risk averse. Specifically, you're scared. If you put a bit of imaginative effort into it you can play out scenarios of awakening into a dystopia, or botched revival, or abusive uploading, or various nastiness. Fair enough. I propose that you haven't stretched your imagination far enough. Staying in doom-n-disaster mode, what are the other ways you could suffer? Illness, madness, brain damage, disability, mistreatment, war, famine, plague, loneliness... it just goes on and on. Switching to happy mode, what are the good scenarios? Love, long life, wealth and good ideas to use it on... again it goes on and on. Then take all those scenarios, add a whole lot more of mediocre and tolerable and mildly downbeat ones, and scatter them out ahead of you into an imaginary branching map of infinite reachable futures. Not all are equally easy to reach: there are probability assignments on each, shifting and flowing as your actions and experiences move the chances. This sort of visualization helps me put my own worrying into perspective. Worrying is a kind of grasping for control, but the future is too big and surprising to be pinned down that way. You can't control what you get. You can steer into a region with more good chances than bad. To do that you have to learn to discount the low chance of bad as just the price of admission.
I'm curious what the really horrible outcomes you can imagine are. That's not something that has ever occurred to me; I can't imagine a worse outcome than not being revived, which seems to be equivalent to just being normally dead.
This is probably symptomatic of reading too much science fiction, but I could be revived by evil aliens, or awakened into a dystopian society that didn't have enough raw materials to make robots and wanted frozen people for cannon fodder, or I could be uploaded instead of outright defrosted and then suffer a glitch that would cause eternal torment/boredom/arithmetic problems, or some form of soul theory could turn out to be right and there could be grandiose metaphysical consequences... I have a very fertile imagination.
Perhaps you have read too much science fiction and not enough history. Based on recent history, I worry far more about what is likely to happen between now and when I can expect to die in 30-50 years than I do about the essentially unknowable far future.

I don't really see any commentary here on the underlying assumptions made about the badness of being dead. In summary, for a physicalist, being dead has no value: it is a null state. Null states cannot be compared with non-null states, so being dead is not worse than being alive.

To put that another way, I cannot be worse off by being dead because there won't be an I at that point. An argument can be made that I have no personal interest in my being dead - only other living people have a stake in that. That doesn't change the fact that I want to live. There...

It's counterintuitive to say that being dead is basically null value. If I'm choosing between two courses of action, and one difference is that one of them will involve me dying, that's a strong factor in making me prefer the other option. I can think of possible explanations for this that preserve the claim that being dead has value zero, but I'm not seeing a way that would do so only in non-cryonics cases.
Notice the subtle difference in language, though. You are talking about dying. Dying is pretty obviously a bad thing. It's only once you are dead that you are in a null state. Cryopreservation does not prevent you from dying. You still go through the dying process, and I doubt you are very much comforted by the small chance that you could be revived at some point.

Sorry to hear about the loss.

I'm not sure that religion is the main devil here, though. Most of my family isn't religious, nonetheless none of them would ever sign up for cryonics. I focus my efforts on encouraging them to exercise and eat well. I can at least effect some change in that direction.

Not particularly relevant, because the point about religion isn't that all atheists sign up for cryonics. It's that more atheists sign up for it, because delusional afterlife believers perceive no incentive to. I'd bet that a rise in atheism correlates with a rise in cryonics subscription.
I imagine so. What I deny is that religion is the main factor preventing the adoption of cryonics. My family isn't proof of this, but it's certainly evidence. If the ratio of atheists who sign up for cryonics as opposed to not is higher than for theists, and if that ratio remained constant as the entire world gave up religion... there still wouldn't be that many people signed up for cryonics.
That seems at least plausible, but it doesn't refute the harm done by religion (and of course discounts any indirect damage done to atheists' thinking by widespread theism). To counter one anecdote with another: The fact is that most atheists don't know how accessible cryonics is. By mentioning that fact alone (very truly alone, along the lines of "cryonics is actually pretty accessible, google it"), I've piqued the interest of at least two atheists I know. So in terms of cryonics awareness, I suppose you could make the argument that it's not so much religion itself hindering it, as it is lack of atheist (or rationalist) connectivity. But atheist connectivity is obviously inhibited by the dominance of theism. Also, since a >1 atheist/theist sign-up ratio would at least point to an "easy" set of people that would sign up in the absence of religion, any increase in that ratio directly opposes the notion that religion isn't preventing adoption. I fully expect this ratio to climb in the near future as full ignorance of cryonics burns itself out. I'm not confident that religion is the primary factor preventing adoption when plain ignorance seems to be playing such a large role, but it certainly seems non-negligible, especially moving forward.
Paul Crowley
Has anyone ever heard of a theist signing up for cryonics? That would seem very odd.
Theists don't necessarily believe in an afterlife. People who believe in an afterlife (whether theists or not) don't necessarily think it will be preferable to this life, either.
I don't see why they shouldn't, given that most of them don't refuse (other) medical care.
Eliezer Yudkowsky
It's been known to happen.
I guess you could make a sort of reverse Pascal's Wager argument for it - if it turns out that there is no immortal soul after all then you've got a backup plan.

"Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you."

Is this a warning for or against buying into the idea of cryonics?