Cryonics fills many with disgust, a cognitively dangerous emotion.  To test whether a few of your possible cryonics objections are reason-based or disgust-based, I list six non-cryonics questions.  Answering yes to any one question indicates that, rationally, you shouldn’t have the corresponding cryonics objection.

1.  You have a disease and will soon die unless you get an operation.  With the operation you have a non-trivial but far from certain chance of living a long, healthy life.  By some crazy coincidence the operation costs exactly as much as cryonics does and the only hospitals capable of performing the operation are next to cryonics facilities.  Do you get the operation?

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.

2.  You have the same disease as in (1), but now the operation costs far more than you could ever afford.  Fortunately, you have exactly the right qualifications NASA is looking for in a spaceship commander.  NASA will pay for the operation if, in return, you captain the ship should you survive the operation.  The ship will travel close to the speed of light.  The trip will subjectively take you a year, but when you return one hundred years will have passed on Earth.  Do you get the operation?
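(An aside not in the original post: the scenario's numbers are physically consistent. For one subjective year aboard the ship to correspond to a hundred Earth years, the standard special-relativistic time-dilation formula requires a Lorentz factor of 100:)

```latex
% Earth time = \gamma \times ship (proper) time
\gamma = \frac{t_{\text{Earth}}}{t_{\text{ship}}} = \frac{100\ \text{yr}}{1\ \text{yr}} = 100,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
\;\Rightarrow\;
\frac{v}{c} = \sqrt{1 - \frac{1}{\gamma^2}} = \sqrt{1 - 10^{-4}} \approx 0.99995,
```

so "close to the speed of light" here means roughly 99.995% of c.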

Answering yes to (2) means you shouldn't object to cryonics because of the possibility of waking up in the far future.

3.  Were you alive 20 years ago?

Answering yes to (3) means you have a relatively loose definition of what constitutes “you” and so you shouldn’t object to cryonics because you fear that the thing that would be revived wouldn’t be you.

4.  Do you believe that there is a reasonable chance that a friendly singularity will occur this century?   

Answering yes to (4) means you should think it possible that someone cryogenically preserved would be revived this century.  A friendly singularity would likely produce an AI that could think, in one second, all the thoughts that would take a billion scientists a billion years to contemplate.  Given that bacteria seem to have mastered nanotechnology, it’s hard to imagine that a billion scientists working for a billion years wouldn’t have a reasonable chance of mastering it too.  Also, a friendly post-singularity AI would likely have enough respect for human life that it would be willing to revive cryonics patients.

5.  You somehow know that a singularity-causing intelligence explosion will occur tomorrow.  You also know that the building you are currently in is on fire.  You pull an alarm and observe everyone else safely leaving the building.  You realize that if you don’t leave you will fall unconscious, painlessly die, and have your brain incinerated.  Do you leave the building?

Answering yes to (5) means you probably shouldn’t abstain from cryonics because you fear being revived and then tortured.

6.  One minute from now a man pushes you to the ground, pulls out a long sword, presses the sword’s tip to your throat, and pledges to kill you.  You have one small chance at survival:  grab the sword’s sharp blade, thrust it away and then run.  But even with your best efforts you will still probably die.  Do you fight against death?

Answering yes to (6) means you can’t pretend that you don’t value your life enough to sign up for cryonics.

If you answered yes to all six questions but have not signed up for cryonics and do not intend to, please give your reasons in the comments.  What other questions can you think of that provide a non-cryonics way of getting at cryonics objections?



Some of these questions, like the one about running away from a fire, ignore the role of irrational motivation.

People, when confronted with an immediate threat to their lives, gain a strong desire to protect themselves. This has nothing to do with a rational evaluation of whether or not death is better than life. Even people who genuinely want to commit suicide have this problem, which is one reason so many of them try methods that are less effective but don't activate the self-defense system (like overdosing on pills instead of shooting themselves in the head). Perhaps even a suicidal person who'd entered the burning building because e planned to jump off the roof would still try to run out of the fire. So running away from a fire, or trying to stop a man threatening you with a sword, cannot be taken as proof of a genuine desire to live, only that any desire to die one might have is not as strong as one's self-protection instincts.

It is normal for people to have different motivations in different situations. When I see and smell pizza, I get a strong desire to eat the pizza; right now, not seeing or smelling pizza, I have no particular desire to eat pizza. The argument "If yo... (read more)

[Bold added myself] Is it accurate to say what I bolded? I know technically it's true, but only because there isn't any you to be doing the regretting. Death isn't so much a state [like how I used to picture sitting in the ground for eternity] as simple non-existence [which is much harder to grasp, at least for me]. And if you have no real issues not existing at a future point, why do you attempt to prolong your existence now? I don't mean for this to be rude; I'm just curious as to why you would want to keep yourself around now if you're not willing to stay around as long as life is still enjoyable. On a fair note, I have not signed up for cryonics, but that's mostly because I'm a college student with a lack of serious income.
By the way, I'm not here to troll, and I do have a serious question that doesn't necessarily have to do with cryonics. The goal of SIAI (Lesswrong, etc) is to learn and possibly avoid a dystopian future. If you truly are worried about a dystopian future, then doesn't that serve as a vote of "No confidence" for these initiatives? Admittedly, I haven't looked into your history, so that may be a "Well, duh" answer :)

I suppose it serves as a vote of less than infinite confidence. I don't know if it makes me any less confident than SIAI themselves. It's still worth helping SIAI in any way possible, but they've never claimed a 100% chance of victory.

Thank you, Yvain. I quickly realized how dumb my question was, and so I appreciate that you took the time to make me feel better. Karma for you :)
Indeed, they have been careful not to present any estimates of the chance of victory (which I think is a wise decision.)
Let's say you're about to walk into a room that contains an unknown number of hostile people who possibly have guns. You don't have much of a choice about which way you're going, given that the "room" you're currently in is really more of an active garbage compactor, but you do have a lot of military-grade garbage to pick through. Do you don some armor, grab a knife, or try to assemble a working gun of your own? Trick question. Given adequate time and resources, you do all three. In this metaphor, the room outside is the future, enemy soldiers are the prospect of a dystopia or other bad end, AGI is the gun (least likely to succeed, given how many moving parts there are and the fact that you're putting it together from garbage without real tools, but if you get it right it might solve a whole room full of problems very quickly), general sanity-improving stuff is the knife (a simple and reliable way to deal with whatever problem is right in front of you), and cryonics is the armor (so if one of those problems becomes lethally personal before you can solve it, you might be able to get back up and try again).
No. AI isn't a gun; it's a bomb. If you don't know what you're doing, or even just make a mistake, you blow yourself up. But if it works, you lob it out the door and completely solve your problem.
A poorly put together gun is perfectly capable of crippling the wielder, and most bombs light enough to throw won't reliably kill everyone in a room, especially a large room. Also, guns are harder to get right than bombs. That's why, in military history, hand grenades and land mines came first, then muskets, then rifles, instead of just better and better grenades. That's why the saying is "every Marine is a rifleman" and not "every Marine is a grenadier." A well-made Friendly AI would translate human knowledge and intent into precise, mechanical solutions to problems. You just look through the scope and decide when to pull the trigger, then it handles the details of implementation. Also, you seem to have lost track of the positional aspect of the metaphor. The room outside represents the future; are you planning to stay behind in the garbage compactor?
That's the iffy part.
So start with a quick sweep for functional-looking knives, followed by pieces of armor that look like they'd cover your skull or torso without falling off. No point to armor if it fails to protect you, or hampers your movements enough that you'll be taking more hits from lost capacity to dodge than the armor can soak up.

If the walls don't seem to have closed in much by the time you've got all that located and equipped, think about the junk you've already searched through. Optimistically, you may by this time have located several instances of the same model of gun with only one core problem each, in which case grab all of them and swap parts around (being careful not to drop otherwise good parts into the mud) until you've got at least one functional gun. Or, you may not have found anything that looks remotely like it could be converted into a useful approximation of a gun in the time available, in which case forget it and gather up whatever else you think could justify the effort of carrying it on your back.

Extending the metaphor, load-bearing gear is anything that lets you carry more of everything else with less discomfort. By its very nature, that kind of thing needs to be fitted individually for best results, so don't just settle for a backpack or 'supportive community' that looks nice at arm's length but aggravates your spine when you actually try it on, especially if it isn't adjustable. If you've only found one or two useful items anyway, don't even bother.

Medical supplies would be investments in maintaining your literal health as well as non-crisis-averting skills and resources, so you're less likely to burn yourself out if one of those problems gets a grazing hit in. You should be especially careful to make sure that medical supplies you're picking out of the garbage aren't contaminated somehow.

Finally, a grenade would be any sort of clever political stratagem which could avert a range of related bad ends without much further work on your part, or el
For what initiatives? I don't see any initiatives. And what is the "that" which is serving as a vote? By your sentence structure, "that" must refer to "worry", but your question still doesn't make any sense.
Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it's almost as if critics suddenly jump from point A to point B without a transition. While the Niven scenario you listed below seems to be agreeable to my position, it's actually still off; you are missing the key point behind the chain of constant care, the needed infrastructure to continue cryonics care, etc. This has nothing to do with a family reviving ancestors: if someone - anyone - is there taking the time and energy to keep on refilling your dewar with LN2, then that means someone is there wanting to revive you. Think coma patients; hospitals don't keep them around just to feed them and stare at their bodies.

Anyways, moving on to the "initiatives" comment. Given that Lesswrong tends to overlap with SIAI supporters, perhaps I should have said mission? Again, I haven't looked too much into Yvain's history. However, let's suppose for the moment that he's a strong supporter of that mission. Since we:

1. Can't live in parallel universes
2. Live in a universe where even (seemingly) unrelated things are affected by each other
3. Think A.I. may be a crucial element of a bad future, due to #1 and #2

...I guess I was just wondering if he thought it's a grim outlook for the mission. Signing up for cryonics seems to give a "glass half full" impression. Furthermore, due to #1 and #2 above, I'll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk... and why it may be helpful for everyone from the LessWrong community to IEET to be a little more assertive on the issue. Of course, I'm not saying eliminating risk. But at the very least, mainstreaming cryonics should be more helpful with existential risk than dealing with, say, measles ;)
To be honest, that did not clear anything up. I still don't know whether to interpret your original question as:

* Doesn't signing up for cryonics indicate skepticism that SIAI will succeed in creating FAI?
* Doesn't not signing up indicate skepticism that SIAI will succeed?
* Doesn't signing up indicate skepticism that UFAI is something to worry about?
* Doesn't not signing up indicate skepticism regarding UFAI risk?

To be honest once again, I no longer care what you meant because you have made it clear that you don't really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday. Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don't ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.
I apologize for the confusion and I understand if you're frustrated; I experience that frustration quite often once I realize I'm talking past someone. For whatever it's worth, I left it open because the curious side of me didn't want to limit Yvain; that curious side wanted to hear his thoughts in general. So... I guess both #2 and #3 (I'm not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyways, I didn't mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place. Also, thank you for being honest (admittedly, I was tempted to say, "So you weren't being honest with your other posts?" but I decided to present that temptation passively inside these parentheses) :)
Ok, we're cool. Regarding my own opinions/postings, I said I'm not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I'll express that skepticism explicitly right now, since I'm thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is an UFAI.

But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?

No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children ...
If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.
I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral, it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.
What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power. When did I become the enemy?
Sorry, I shouldn't have said immoral, especially considering the last sentence in which you explicitly disclaimed moral objection. I read "unfriendly" as "unFriendly" as "incompatible with our moral value systems". Please read my comment as follows:
I simply don't understand why the question is being asked. I didn't object to power differences. I objected to monopoly power. Monopolies are dangerous. That is a political judgment. Your list of potentially objectionable people has no conceivable relationship with the subject matter we are talking about, which is an all-powerful agent setting out to modify future human nature toward its own chosen view of the desirable human nature. How do things like pickup artists even compare? I'm not discussing short term manipulations of people here. Why do you mention attractive people? I seem to be in some kind of surreal wonderland here.
Sorry, I was trying to hit a range of points along a scale, and I clustered them too low. How would you feel about a highly charismatic politician, talented and trained at manipulating people, with a cadre of top-notch scriptwriters running as ems at a thousand times realtime, working full-time to shape society to adopt their particular set of values? Would you feel differently if there were two or three such agents competing with one another for control of the future, instead of just one? What percentage of humanity would have to have that kind of ability to manipulate and persuade each other before there would no longer be a "monopoly"?
Would it be impolite of me to ask you to present your opinion disagreeing with me rather than trying to use some caricature of the Socratic method to force me into some kind of educational contradiction?
Sorry. I wish to assert that there is not a clear dividing line between monopolistic use of dangerously effective persuasive ability (such as a boxed AI hacking a human through a text terminal) and ordinary conversational exchange of ideas, but rather that there is a smooth spectrum between them. I'm not even convinced there's a clear dividing line between taking someone over by "talking" (like the boxed AI) and taking them over by "force" (like nonconsensual brain surgery) -- the body's natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
You still seem to be talking about morality. So, perhaps I wasn't clear enough. I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn't do that, Malthusian pressures will just make us miserable again after all it has done to help us. I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.
My reaction is very similar. It is extremely scary. Certain misery or extinction on one hand or absolute, permanent and unchallengable authority forever. It seems that the best chance of a positive outcome is arranging the best possible singleton but even so we should be very afraid.
One scenario is that you have a post-singularity culture where you don't get to "grow up" (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it's a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.
Suppose you had an AI that was Friendly to you -- that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity's extrapolated volition to cohere -- shouldn't the CEV machine just output "no solution"?
That word "extrapolated" is more frightening to me than any other part of CEV. I don't know how to answer your questions, because I simply don't understand what EY is getting at or why he wants it. I know that he says regarding "coherent" that an unmuddled 10% will count more than a muddled 60%. I couldn't even begin to understand what he was getting at with "extrapolated", except that he tried unsuccessfully to reassure me that it didn't mean cheesecake. None of the dictionary definitions of "extrapolate" reassure me either. If CEV stood for "Collective Expressed Volition" I would imagine some kind of constitutional government. I could live with that. But I don't think I want to surrender my political power to the embodiment of Eliezer's poetry. You may wonder why I am not answering your questions. I am not doing so because your Socratic stance makes me furious. As I have said before. Please stop it. It is horribly impolite. If you think you know what CEV means, please tell me. If you don't know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.
Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I'm not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that's how I think things out internally. I understood CEV to mean something like this: Do what I want. In the event that that would do something I'd actually rather not happen after all, substitute "no, I mean do what I really want". If "what I want" turns out to not be well-defined, then say so and shut down. A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out. Basically, it's the ultimate "do what I mean" system.
See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support? But that is probably unfair to you. You didn't write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.
I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth. Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is "feel free" (to implement it yourself).
It probably won't do what you want. It is somehow based on the mass of humanity - and not just on you. Think: committee.
...or until some "unfriendly" aliens arrive to eat our lunch - whichever comes first.
Naturally. Low status people could use them!
I'm not sure if you're joking, but part of modern society is raising women's status enough so that their consent is considered relevant. There are laws against marital rape (these laws are pretty recent) as well as against date rape drugs.
Just completing the pattern on one of Robin's throwaway theories about why people object to people carrying weapons when quite obviously people can already kill each other with their hands and maybe the furniture if they really want to. It upsets the status quo.
Unpack, please?



the body's natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.

Humans are ridiculously easy to hack. See the AI box experiment, see Cialdini's 'Influence' and see the way humans are so predictably influenced in the mating dance. We don't object to people influencing us with pheromones. We don't complain when people work out at the gym before interacting with us, something that produces rather profound changes in perception (try it!). When it comes to influence of the kind that will facilitate mating, most of these things are actually encouraged. People like being seduced.

But these vulnerabilities are exquisitely calibrated to be exploitable by a certain type of person and a certain kind of hard-to-fake behaviour. Anything that changes the game to even the playing field will be perceived as a huge violation. In the case of date-rape drugs, of course, it is a huge violation. But it is clear that our objection to the influence represented by date-rape drugs is not an objection to the influence itself, but to the details of what kind of influence, how it is done, and by whom.

As Pavitra said, there is not a clear dividing line here.

We can't let people we don't like gain the ability to mate with people we like!
Although you're right (except for the last sentence, which seems out of place), you didn't actually answer the question, and I suspect that's why you're being downvoted here. Sub out "immoral" in Pavitra's post for "dangerous and unfriendly" and I think you'll get the gist of it.
To be honest, no, I don't get the gist of it. I am mystified. I consider none of them existentially dangerous or unfriendly. I do consider a powerful AI, claiming to be our friend, who sets out to modify human nature for our own good, to be both dangerous (because it is dangerous) and unfriendly (because it is doing something to people which people could well do to themselves, but have chosen not to).
Stop talking to each other!
We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.
The phrase "the best of all possible worlds" ought to be the canonical example of the Mind Projection Fallacy.
It would be unreasonably burdensome to append "with respect to a given mind" to every statement that involves subjectivity in any way. ETA: For comparison, imagine if you had to say "with respect to a given reference frame" every time you talked about velocity.
I'm not saying that you didn't express yourself precisely enough. I am saying that there is no such thing as "best (full stop)". There is "best for me", there is "best for you", but there is not "best for both of us". No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type. Your argument above only works if "best" is interpreted as "best for every mind". If that is what you meant, then your implicit definition of FAI proves that FAI is impossible. ETA: What given frame do you have in mind?
The usual assumption in this context would be CEV. Are you saying you strongly expect humanity's extrapolated volition not to cohere?
Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and, ... how shall I put this ..., it doesn't seem to cohere. But, I have to say, based on what I can infer, that I see no reason to expect coherence, and the concept of "extrapolation" scares the sh.t out of me.
"Coherence" seems a bit like the human genome project. Yes there are many individual differences - but if you throw them all away, you are still left with something.
So we are going to build a giant AI to help us discover and distill that residue of humanity which is there after you discard the differences? And here I thought that was the easy part, the part we had already figured out pretty well by ourselves. And I'm not sure I care for the metaphor of "throwing away" the differences. Shouldn't we instead be looking for practices and mechanisms that make use of those differences, that weave them into a fabric of resilience and mutual support rather than a hodgepodge of weakness and conflict?
"We"? You mean: you and me, baby? Or are you asking after a prediction about whether something like CEV will beat the other philosophies about what to do with an intelligent machine? CEV is an alien document from my perspective. It isn't like anything I would ever write. It reminds me a bit of the ideal of democracy - where the masses have a say in running things. I tend to see the world as more run by the government and its corporations - with democracy acting like a smokescreen for the voters - to give them an illusion of control, and to prevent them from revolting. Also, technology has a long history of increasing wealth inequality - by giving the powerful controllers and developers of the technology ever more means of tracking and controlling those who would take away their stuff. That sort of vision is not so useful as an election promise to help rally the masses around a cause - but then, I am not really a politician.
Voting prevents revolts in the same sense that a hydroelectric dam prevents floods. It's not a matter of stopping up the revolutionary urge; in fact, any attempt to do so would be disastrous sooner or later. Instead it provides a safe, easy channel, and in the process, captures all the power of the movement before that flow can build up enough to cause damage. The voters can have whatever they want, and the rest of the system does its best to stop them from wanting anything dangerous.
But would that something form a utility function that wouldn't be deeply horrifying to the vast majority of humanity?
It wouldn't form a utility function at all. It has no answer for any of the interesting or important questions: the questions on which there is disagreement. Or am I missing something here?
In the human genome project analogy, they wound up with one person's DNA. Humans have various eye colours - and the sequence they wound up with seems likely to have some eye colour or another.
Ok, you are changing the analogy. Initially you said, throw away the differences. Now you are saying throw away all but one of them. So our revised approximation of the CEV is the expressed volition of ... Craig Venter?! Would that horrify the vast majority of humanity? I think it might. Mostly because people just would not know how it would play out. People generally prefer the devil they know to the one they don't.
FWIW, it wasn't really Craig Venter, but a combination of multiple people.
No, I agree. I just don't understand where you were going when you emphasized that.
The guy who wrote and emphasized that was timtyler - it wasn't me.
The anti-kibitzer is more confusing than I realized.
Well, it was I who wrote that. The differences were thrown away in the genome project - but that isn't exactly the corresponding thing according to the CEV proposal. A certain lack of coherence doesn't mean all the conflicting desires cancel out leaving nothing behind - thus the emphasis on still being "left with something".
I'm looking at the same document you are, and I actually agree that EV almost certainly ~C. I just wanted to make sure the assumption was explicit.
Jack: "I've got the Super Glue for Yvain. I'm on my way back."
Chloe: "Hurry, Jack! I've just run the numbers! All of our LN2 suppliers were taken out by the dystopia!"
Freddie Prinze Jr: "Don't worry, Chloe. I made my own LN2, and we can buy some time for Yvain. But I'm afraid the others will have to thaw out and die. Also, I am sorry for starring in Scooby Doo and getting us cancelled."
(Jack blasts through wall, shoots Freddie, and glues Yvain back together)
Jack: "Welcome, Yvain. I am an unfriendly A.I. that decided it would be worth it just to revive you and go FOOM on your sorry ass."
(Jack begins pummeling Yvain)
(room suddenly fills up with paper clips)
This is one of the worst examples that I've ever seen. Why would a paperclip maximizer want to revive someone so they could see the great paperclip transformation? Doing so uses energy that could be allocated to producing paperclips, and paperclip maximizers don't care about most human values, they care about paperclips.
That was a point I was trying to make ;) I should have ended with (/sarcasm).
I think the issue is that the dystopia we're talking about here isn't necessarily paperclip maximizer land, which isn't really a dystopia in the conventional sense, as human society no longer exists in such cases. What if it's I Have No Mouth And I Must Scream instead?
Yes, the paper clip reference wasn't the only point I was trying to make; it was just a (failed) cherry on top. I mainly took issue with being revived in the common dystopian vision: constant states of warfare, violence, and so on. It simply isn't possible, given that you need to keep refilling dewars with LN2 and so much more; in other words, the chain of care would be disrupted, and you would be dead long before they found a way to resuscitate you. And that leaves basically only a sudden "I Have No Mouth" scenario; i.e. one day it's sunny, Alcor is fondly taking care of your dewar, and then BAM! you've been resuscitated by that A.I. I guess I just find it unlikely that such an A.I. will say: "I will find Yvain, resuscitate him, and torture him." It just seems like a waste of energy.
Upvoted for making a comment that promotes paperclips.
(Jack emerges from paper clips and asks downvoter to explain how his/her scenario of being revived into a dystopia would work given a chain of constant care is needed) (Until then, Jack will continue to be used to represent the absurdity of the scenario)

From what I see, your questions completely ignore the crucial problem of weirdness signaling. Your question (1) should also assume that these hospitals are perceived by the general population, as well as the overwhelming majority of scientists and intellectuals, as a weird crazy cult that gives off a distinctly odd, creepy, and immoral vibe -- and by accepting the treatment, you also subscribe to a lifelong affiliation with this cult, with all its negative consequences for your relations with people. (Hopefully unnecessary disclaimer for careless readers: I am not arguing that this perception is accurate, but merely that it is an accurate description of the views presently held by people.)

As for question (3), the trouble with such arguments is that they work the other way around too. If you claim that the future "me" 20 years from now doesn't have any more special claim to my identity than whatever comes out of cryonics in more distant future, this can be used to argue that I should start identifying with the latter -- but it can also be used to argue that I should stop identifying with the former, and simply stop caring about what happens to "me" 20 years, or ...

(7) If you have a fatal disease that can only be cured by wearing a bracelet or necklace under your clothing, and anyone who receives an honest explanation of what the item is will think you're weird, do you wear the bracelet or necklace?

Answering yes to (7) means that you shouldn't refrain from cryonics for fear of being thought weird.

Heh -- that actually doubles as an explanation to people who ask:

"I'm wearing this necklace because I have a fatal disease that can only be cured by wearing it, and even then it only has a small chance of working."

--Oh no! I'm so sorry! What's the disease?


The main weirdness problem with cryonics is not that people examine cryonics and then discard it because they don't want to look weird. The problem is that people will not consider or honestly discuss at all something that looks weird.
Is it really so easy to hide it from all the relevant people, including close friends and relatives, let alone significant others (who, according to what I've read about the topic, usually are the most powerful obstacle)? Also, I'm not very knowledgeable about this sort of thing, but it seems to me like doing it completely in secret could endanger the success of the procedure after your death. Imagine if a bereaved family and/or spouse suddenly find out that their beloved deceased has requested this terrible and obscene thing instead of a proper funeral, which not only shocks them, but also raises the frightening possibility that once the word spreads, they'll also be tainted with this awful association in people's minds. I wouldn't be surprised if they fight tooth and nail to prevent the cryonics people from taking possession of the body, though I don't know what realistic chances of success they might have (which probably depends on the local laws). (I wonder if some people around here actually know of real-life stories of this kind and how they tend to play out? I'm sure at least some have happened in practice.)

I've heard of stories like that, except replace 'cryonics' with 'organ donation' and 'this terrible and obscene thing' refers to destroying the sanctity of a dead body rather than preserving the entire body cryonically. In Australia at least, the family's wishes win out over those of the deceased.

I think to be honest here you need to point out the very small chance the bracelet has of working. I think it could be aptly compared to those 'magnetic bracelets' newagey types sometimes wear, which are a fast track to my not talking to them anymore.
If you replace the necklace with "losing all your hair", haven't you described chemotherapy?
(For extra fuel: losing your hair is far from the most unpleasant symptom of chemotherapy.)
Actually, I suspect that most people would answer no to this, at least in practice.
(8) Suppose you are told that your fatal disease can only be cured by wearing a necklace. You ask how many people have been cured and receive the answer "None". You ask how the necklace works, and are told that it might be nano-technology, or it might be scanning and uploading: "We don't know yet, but there is reason to be confident that it will work." Do you wear the necklace? Answering yes to (8) means that you shouldn't refrain from cryonics because you fear signaling that you are prone to being victimized by quacks.
Eliezer Yudkowsky (12y):
You're confusing different questions. Each question should isolate a single potential motivation and show that it is not, of itself, sufficient reason to refuse. If you fear signaling, don't tell people about the necklace. If you fear quacks, don't make the question be about a necklace or about signaling.
I think that was intended more as irony.
There was some irony, but skepticism is a real reason why some people refrain. The necklace is simply part of the scenario; I see no particular reason to remove it from the story except risk of confusion. So, instead of a necklace, make it a "magic decoder ring", or, if we need to maintain privacy, a "harmonic suppository". EY is right, though, that if this one is meant seriously, the final sentence should read: Answering yes to (8) means that you shouldn't refrain from cryonics because you dislike being victimized by quacks.
Necklace seems OK to me - the Alcor Emergency ID Tags include a necklace and bracelet. I thought Eliezer was taking your comment a bit seriously - but on rereading his comment, I now think it makes sense to ask for your objections to be split up. There's a problem, though - his "don't tell people about the necklace" sounds as though it would help to defeat its ostensible purpose. It is intended to send a message to those close to the near-death-experience. It is tricky to send that kind of message to one group while not sending it to everyone else as well.
You mean like the warning sign of a pacemaker, or any of the other helpful but odd medical tools? There are many things that treat a person in need but look odd. The problem being that those get applied to sick people.
Most people don't need to know about your affiliation.
You are right about the weirdness signal; my questions don't get at this. As for (3), wouldn't a yes response imply that you do care about the past and future versions of yourself? When you write "but I just happen to be a sort of creature that gets upset when the future 'me' is threatened and constantly gets overcome with an irresistible urge to work against such threats at the present moment -- but this urge doesn't extend to the post-cryonics 'me,' so I'm rationally indifferent in that case," you seem to be saying your utility function is such that you don't care about the post-cryonics you, and since one can't claim a utility function is irrational (excluding stuff like intransitive preferences), this objection to cryonics isn't irrational.
Perhaps the best way to formulate my argument would be as follows. When someone appears to care about his "normal" future self a few years from now, but not about his future self that might come out of a cryonics revival, you can argue that this is an arbitrary and whimsical preference, since the former "self" doesn't have any significantly better claim to his identity than the latter. Now let's set aside any possible counter-arguments to that claim, and for the sake of the argument accept that this is indeed so. I see three possible consequences of accepting it:

1. Starting to care about one's post-cryonics future self, and (assuming one's other concerns are satisfied) signing up for cryonics; this is presumably the intended goal of your argument.
2. Ceasing to care even about one's "normal" future selves, and rejecting the very concept of personal identity and continuity. (Presumably leading to either complete resignation or to crazy impulsive behavior.)
3. Keeping one's existing preferences and behaviors with the justification that, arbitrary and whimsical as they are, they are not more so than any other options, so you might as well not bother changing them.

Now, the question is: can you argue that (1) is more correct or rational than (2) or (3) in some meaningful way? (Also, if someone is interested in discussions of this sort, I forgot to mention that I raised similar arguments in another recent thread.)
I can imagine somebody who picks (2) here, but still ends up acting more or less normally. You can take the attitude that the future person commonly identified with you is nobody special but be an altruist who cares about everybody, including that person. And as that person is (at least in the near future, and even in the far future when it comes to long-term decisions like education and life insurance) most susceptible to your (current) influence, you'll still pay more attention to them. In the extreme case, the altruistic disciple of Adam Smith believes that everybody will be best off if each person cares only about the good of the future person commonly identified with them, because of the laws of economics rather than the laws of morality. But as you say, this runs into (6). I think that with a perfectly altruistic attitude, you'd only fight to survive because you're worried that this is a homicidal maniac who's likely to terrorise others, or because you have some responsibilities to others that you can best fulfill. And that doesn't extend to cryonics. So to take care of extreme altruists, rewrite (6) to specify that you know that your death will lead your attacker to reform and make restitution by living an altruistic life in your stead (but die of overexertion if you fight back). Bottom line: if one takes consequence (2) of answering No to question (3), question (3) should still be considered solved (not an objection), but (6) still remains to be dealt with.

I'm often presented with a "the cycling of the generations is crucial. Without it progress would slow, the environment would be over-stressed, and there would be far fewer jobs for new young people" argument. I reply with question 8.

  1. All of these are simply increased intensity of problems that already exist. We could solve all these problems right now by killing the elderly. Are you willing to commit suicide when you reach the age of 60 (or 50, or take-your-pick) to help solve these problems? Or are you willing to grant that death is a very (ethically) bad solution, and much better solutions could be found if death was taken off the table?
The general form of this is the Reversal Test.

I'm willing to answer yes to 1-6 and to Eliezer's 7, but I am not signed up and have no immediate plans to do so. I may well consider it if the relevant circumstances change, which are:

1. I live in the UK where no cryonics company yet operates. I would have to move myself and my career to the US to have any chance of a successful deanimation. The non-cryonic scenario would be:

8. You suffer from a disease that will slowly kill you in thirty years, maybe sooner. There is a treatment that has a 10% chance of greatly extending that, but you would have to spend the rest of your life within reach of one of the very few facilities where it is available. These are all in other countries, where you would have to emigrate and find new employment to support yourself for at least the rest of your expected time.

And I really would not give a whole-hearted yes to that.

2. I am too old to finance it with insurance: I would have to pay for it directly, as I do with everything else. I probably can, but this actually makes it easier to put off -- no pressure to buy now while it's cheap.

What I am moved to do about cryonics is ask where I should be looking to keep informed about the current state and availability of the art. Is there a good source of cryonics news? At this point I'm not interested in arguments about whether not dying is a good thing, fears of waking up in the far future, or philosophising about bodily resuscitation vs. scan-and-upload. Just present-day practicalities.

If you answered yes to all six questions and have not and do not intend to sign up for cryonics please give your reasons in the comments.

  • I do not wish to damage the ozone layer or contribute to global warming.
  • I think the resources should be spent on medical care for the young, rather than for the old. Do you know how many lives lost to measles one corpsicle costs?
  • If I am awakened in the future, I have no way to earn a living.
  • I used to like Larry Niven's sci fi.

Yes, these answers are somewhat flip. But ...

I can easily imagine someone rational sig...

Economies of scale mean that increasing numbers of cryonics users lower costs and improve revival chances. I would class this with disease activism, e.g. patients (and families of patients) with a particular cancer collectively organizing to fund and assist research into their disease. It's not a radically impartial altruist motivation, but it is a moral response to a coordination/collective action problem.

Yes, that makes sense. Though that kind of thinking does not motivate me to go door-to-door every Saturday trying to convince my neighbors to buy more science books.
You value all lives equally, with no additional preference to your own? If you suddenly fell ill with a disease which is curable, but is very expensive, would you refuse treatment to save "lives easier and cheaper to save than" your own? Naturally, insurance may cover said expensive treatment, but it can also cover cryonics. Do you only believe in insurance with reasonable caps on cost, such that your medical expenses can never be more than average?
No, in fact I am probably over to the egoist side of the spectrum among LWers. I said my answers were somewhat flip. My moral intuitions are pretty close to "Do unto others as they do unto you" except that there is a uni-directional inter-generational flow superimposed. I draw my hope of immortality from children, nephews, nieces, etc. I favor payment caps and co-pays on all medical insurance, whether I pay through premiums or taxes. That is only common sense. But capping at everybody-gets-exactly-the-average kinda defeats the purpose of an insurance scheme, doesn't it?
That doesn't make it obvious whether it's worth it though. All those people with measles were going to die anyway, after all. Saving a few people for billions of years sounds much better than saving thousands of people for dozens of years.
Whether that is true depends on the discount rate. I suspect that with reasonable discount rates of, say, 1% per annum, the calculation would come out in favor of saving the thousands. To say nothing of the fact that those thousands saved, after leading full and productive lives, may choose to apply their own savings to either personal immortality or saving additional thousands.
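The arithmetic in the comment above is easy to sketch. Under geometric discounting at annual rate r, a stream of T life-years has present value (1 - (1-r)^T)/r, which caps out near 1/r no matter how large T grows. A minimal illustration (the head-counts of 5 revived vs 5000 saved are made-up numbers for the sketch, not figures from the thread):

```python
def pv_life_years(years, rate=0.01):
    """Present value, in baseline life-years, of one person living
    `years` more years, discounted geometrically at `rate` per annum."""
    d = 1.0 - rate
    return (1.0 - d ** years) / rate

# A few people revived for a billion years each: each stream caps out
# near 1/rate = 100 discounted years, however long it runs.
revived = 5 * pv_life_years(1_000_000_000)   # ~5 * 100 = ~500

# Thousands saved for "dozens of years" each still dominate.
saved = 5000 * pv_life_years(50)             # ~5000 * 39.5 = ~197,500
```

Whether this flips depends entirely on the rate: with no discounting at all, the billion-year streams win by many orders of magnitude.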
By sidereal or subjective time? If the former, running minds on faster hardware can evade most of the discounting losses.
Interesting distinction - I hadn't yet realized its importance. Subjective time seems to be the one to be used in discounting values. If I remain frozen for 1000 sidereal years, there is no subjective time passed, so no discounting. If I then remain alive physically for 72 years on both scales, I am then living years worth only half as much as base-line years. If I am then uploaded, further year-counting and discounting uses subjective time, not sidereal time. Thx for pointing this out. I don't see how being simulated on fast hardware changes the psychological fact of discounting future experience, but it may well mean that I need a larger endowment, collecting interest in sidereal time, in order to pay my subjective-monthly cable TV bills.
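The subjective-vs-sidereal distinction in the comment above can be made concrete. Assuming the same 1%-per-annum geometric discounting, and taking the 1000-year freeze and 72-year lifespan figures from the comment, the two clocks assign wildly different weights to a post-thaw year:

```python
def discount(years, rate=0.01):
    """Geometric discount factor after `years` years at `rate` per annum."""
    return (1.0 - rate) ** years

# Discounting by subjective time: the 1000 frozen years add nothing,
# so 72 subjective years after thawing you're at roughly half weight.
subjective_weight = discount(0 + 72)     # ~0.485, "worth half a baseline year"

# Discounting by sidereal time: the freeze itself is charged for.
sidereal_weight = discount(1000 + 72)    # ~2e-5, nearly worthless
```

The ~0.485 figure matches the comment's "years worth only half as much as base-line years"; under sidereal discounting the same years would be worth essentially nothing, which is the point made two comments below.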
Note that this distinction affords ways to care more or less about the far future: go into cryo, or greatly slow down your upload runspeed, and suddenly future rewards matter much more. So if the technology exists you should manipulate your subjective time to get the best discounted rewards.
Very interesting topic. People with a low uploaded run speed should be more willing to lend money (at sidereally calculated interest rates) and less willing to borrow than people with high uploaded run speeds. So people who run fast will probably be in hock to the people who run slow. But that is ok because they can probably earn more income. They can afford to make the interest payments. Physical mankind, being subjectively slower than uploaded mankind and the pure AIs, will not be able to compete intellectually, but will survive by collecting interest payments from the more productive members of this thoroughly mixed economy.

But even without considering AIs and uploading, there is enough variation in discount rates between people here on earth to make a difference - a difference that may be more important to relative success than is the difference in IQs. People with low discount rates, that is people with a high tolerance for delayed gratification, are naturally seen as more trustworthy than are their more short-term-focused compatriots. People with high discount rates tend to max out their credit cards, and inevitably find themselves in debt to those with low discount rates. One could write several top-level posts on this general subject area.
I don't think there will be lending directly between entities with very different run speeds. If you're much slower, you can't keep track of who's worth lending to, and if you're much faster, you don't have the patience for slow deliberation. There might well be layers of lenders transferring money(?) between speed zones. Almost on topic: Slow Tuesday Night by R.A. Lafferty. Recommended if you'd like a little light-hearted transhumanism with casual world-building.
Actually, people probably use sidereal time, not subjective time, and this is a good explanation for why people aren't interested in their post-cryonics self: it is discounted according to all the time during which they are frozen.
A portion of the discounting that's due to unpredictability does not change with your subjective runspeed. If you're dividing utilons between present you, and you after a million years in cryofreeze, you should use a large discount, due to the likelihood that your plant or your civilization will not survive a million years of cryofreeze, or that the future world will be hostile or undesirable.
I think we're talking about pure time preference here. Turning risk of death into a discount rate rather than treating it using probabilities and timelines (ordinary risk analysis) introduces weird distortions, and doesn't give a steady discount rate.
But maybe discount rate is just a way of estimating all of the risks associated with time passing. Is there any discounting left if you remove all risk analysis from discounting? Time discounting is something that evolution taught us to do; so we don't know for certain why we do it.
Certainly time discounting is something that evolution taught us to do. However, it is adjusting for more than risks. $100 now is worth strictly more than $100 later, because now I can do a strict superset of what I can do with it later (namely, spend it on anything between now and then), as well as hold on to it and turn it into $100 later.
There could be Schellingesque reasons to wish to lack money during a certain time. For example, suppose you can have a debt forgiven iff you can prove that you have no money at a certain time; then you don't want to have money at that time, but you would still benefit from acquiring the money later.
Yes, time discounting isn't just about risk, so that was a bit silly of me. I would have an advantage in chess if I could make all my moves before you made any of yours.
What's the connection to Niven? His portrayal of revival as a bad deal?
Yes. As I recall, Niven described a future in which people were generally more interested in acquiring a license to have children than in acquiring a license to thaw a frozen ancestor. There were a couple of books where a person was revived into a fairly dystopian situation - I forget their names right now. The term "corpsicle" is Niven's.
[anonymous] (12y):

1 should be more like: you have an illness that will kill you sometime in the next 50 years, and the only treatment is an operation performed right when you die - but not too long after. The clinics that can perform this operation are so far away that the chances of you reaching one in time are negligible. Do you sign up for the operation?

Edit: The correct choice of course is to move nearer to the clinics in about 20 to 30 years.

Edit2: Also, there is a chance that with some more research in the next couple of years a method could be developed that might not cure you but would vastly lengthen the time until you die, with a much greater chance of working than the operation has. Do you pay for the operation or fund that research?

  1. yes; 2. yes; 3. sort of; 4. yes; 5. yes; 6. yes

I haven't signed up yet because at my age (31) my annual unexpected chance of death is low in comparison to my level of uncertainty about the different options, especially with whole brain plasticization possibly becoming viable in the near future (which would be much cheaper and probably have a higher future success rate).

There are quite a lot of people who think like that (me being one, atm). Problem is, a few of us are wrong.
You could sign up for cryonics now and then switch to brain plasticization if and when it becomes available and is expected to be more effective. I am not sure how easy it is to reduce your life insurance when switching to a cheaper method, but the possibility is worth looking into if you are worried that you might pay more than you had to.

A little nit-picky, but:

A friendly singularity would likely produce an AI that in one second could think all the thoughts that would take a billion scientists a billion years to contemplate.

Without a source these figures seem to imply a precision that you don't back up. Are you really so confident that an AI of this level of intelligence will exist? I feel your point would be stronger by removing the implied precision. Perhaps:

A friendly singularity would likely produce a superintelligence capable of mastering nanotechnology.

More generally, any time the subject of AI comes up I would recommend making efforts to avoid describing it in terms that sound suspiciously like wish fulfillment, snake-oil promises, or generally any phrasing that triggers scam/sect red flags.

(Responding to old post)

This is ridiculous. Each objection makes the deal less good; several objections combined together may make it bad enough that you should turn down the deal. Just because each objection by itself isn't enough to break the deal doesn't mean that they can't be bad enough cumulatively.

I might read a 40-chapter book with a boring first chapter. Or with a boring second chapter. Or with a boring third chapter, etc. But I would not want to read a book which contains 40 boring chapters.

This is especially so in the case of objections 1 a...

I'd dispute the claimed equivalence between several of these questions and cryonics (particularly the first), and I'd also take issue with some of the premises, but I'd answer yes to all of them with caveats - and I'm not signed up for cryonics, nor do I intend to sign up in the near future.

The reason I have no immediate plans to sign up is that I think there are relatively few scenarios where signing up now is a better choice than deferring a decision until later. I am currently healthy but if diagnosed with a terminal illness I could sign up then if it seemed like th...

I am currently healthy but if diagnosed with a terminal illness I could sign up then if it seemed like the best use of resources at the time.

Life insurance is a lot easier to get when you are healthy and not diagnosed with a terminal illness.

Life insurance has negative expected monetary value. Since I could afford to pay for cryonics from retirement savings if I was diagnosed with a terminal illness I don't think it makes financial sense to fund it with life insurance. Funding with life insurance might have positive expected utility for someone who doesn't expect to have the funds to pay for cryonics in the near future but there's an opportunity cost associated with the expected financial loss of buying life insurance in the event that it is not needed.
Most people pay for cryonics with a life insurance policy, an option that would get very expensive for you if you were diagnosed with a terminal illness. The danger to you of waiting is that you might get a disease or suffer an accident that doesn't immediately kill you but drains your income and raises the cost to you of life insurance and so puts cryonics outside of your financial reach. You probably couldn't count on your family to financially help you in this situation as they probably think cryonics is crazy and after you "died" wouldn't see any benefit to actually paying for it. If you think you would want to signup for cryonics if you got a terminal illness I would advise you to soon buy $150,000 in (extra) life insurance, which should be cheap if you are young and healthy.
If I am diagnosed with a terminal illness then I won't be needing my retirement savings so I'd use those to pay for cryonics if I decided it was the right choice at the time.
This doesn't work if you have (or will get) a family that is financially dependent on you, or if you get a financially draining illness. In the U.S. (I think) if you are less than 65 years old the federal government requires you to spend most of your own money before it starts paying for some kinds of treatments. Even if you have health insurance, you can lose it or run into its lifetime cap. Also, you need to factor in mental illness. Getting depression might cost you your job, drain your savings and make it really expensive for you to get life insurance. Finally, you could lose your retirement savings due to a civil lawsuit, paternity suit, divorce or criminal conviction.
Are you a life insurance salesman? I don't currently have any dependents. If I have dependents in the future I think it would likely make more sense to ensure their financial security in case of my untimely death with term life insurance and still defer a decision on paying for cryonics. I'm a British citizen and a permanent resident in Canada so health insurance issues are less of a concern for me than they might be for a US citizen. I have no family history of mental illness. You can assume I will take appropriate steps to protect my assets from the threats you describe and others as I judge necessary and prudent.
From the statistics I've seen, 1 in a 1000 over a 10 year period is definitely overconfident. It's closer to 1 in a 1000 over a 1 year period.
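Converting between the two framings is straightforward: if p is the annual death probability, the chance of dying within n years is 1 - (1-p)^n. Taking the comment's ballpark figure of 1 in 1000 per year (an assumption, not an exact actuarial value), the ten-year risk comes out near 1 in 100:

```python
annual_p = 1e-3  # assumed annual death probability for a healthy ~31-year-old

# Probability of dying at least once within ten independent years.
ten_year_p = 1.0 - (1.0 - annual_p) ** 10
# ten_year_p ≈ 0.00996 - just under ten times the annual figure, ~1 in 100
```

This treats the annual probability as constant over the decade, which it isn't quite, since mortality rises with age.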

Quick note: I found it mildly distracting that the explanations (which all started with 'Answering' as the first word) were right under each question. I kept finding myself tempted to read the 'answers' first. I'd personally prefer all the explanations at the end.

[anonymous] (10y):

Answering yes to [“Were you alive 20 years ago?”] means you have a relatively loose definition of what constitutes “you” and so you shouldn’t object to cryonics because you fear that the thing that would be revived wouldn’t be you.

Not necessarily. My definition of “me” may depend on the context. If someone asks me that question, I assume that by “you” they mean ‘a human with your DNA who has since grown into present-you’, regardless of how much or how little I identify with him.

Twenty years ago, I was eight years old. I think that I can honestly say that if you somehow replaced me with my eight-year-old self, it would be the same as killing me. (To a great extent, I'm still mostly the same person I was at fourteen. I'm not at all the person I was at eight.)

In order for this to be an objection to immortality, you would have to believe that the immortality process halts the processes of intellectual and emotional maturation.
Good point. On the other hand, I don't know how likely it was for eight-year-old me to end up as the person I am now; for all I know, I could have ended up someone very different.

Even if someone answers yes to all six questions they could still rationally not sign up for cryonics. Aside from issues like weirdness signaling, they could not see any specific one of the six issues raised as sufficient to be an objection but consider all of them together to be enough. Thus for example one might combine 1 and 2 where the relevant payoff matrix for both issues combined (being sent into a possibly unpleasant future and having to pay a lot for an operation) combine to be enough of a concern even if neither does by itself. It seems unlikely ...

I object to (2). I'm not at all sure that I would take that job. If I did, it would be because the NASA guys got me interested in it (the NASA job, not the bit about returning to Earth in the far future) before I had to make a final decision. If they only told me what you said (or if the job sounded really boring and useless), then I wouldn't do it. Being cryogenically frozen isn't exactly boring, but it is useless.

And in light of that, I also object to cryonics on the basis of cost. Instead of

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.

...

My answer to (3) is "no" for rather trivial reasons, as my state 20 years ago is most comparable to that of someone who died and was not a cryonics patient: the thing that existed and was most similar to "me" was the DNA of people related to me. I don't count that as "alive", and I doubt that most people would.

Ask me (3) in the future, and I will probably have a different answer. (Wait until I'm 24, though, because I don't really identify so well with infants.)

  1. What is "non-trivial but far from certain"? If the operation's chances were as low as my estimate of cryonics' chances, I wouldn't bother, so "no". With a high enough chance, "yes".

  2. Maybe. I don't really trust my ability to place myself in such hypothetical scenarios and I expect my answer to result more from framing effects than anything else.

  3. Sort of.

  4. Definitely not.

  5. Framing effects etc. I don't think I can reason about this clearly enough.

  6. Definitely yes.

So there's one yes. It shouldn't surprise you that I consider cryonics waste ...

The article assumes that people make such decisions rationally, which is just not the case. If you ask someone "which argument or fact could possibly convince you to sign up, or at least to treat the cryo option favorably?", you don't get a well-reasoned argument about the chances of it working, or about personal preferences, or so on; you get more counterarguments. Throwing more logic at the problem does not help! If you find a magic argument that suddenly convinces someone who isn't convinced yet, or that makes the signing process more immediate than planned, then you have probably learned something useful about human nature that can be applied in other areas as well.

[-][anonymous]7y 0

Shouldn't you be asking things like: "So you're pro-cryonics. Why would you change your mind?"

[-][anonymous]10y 0

Answering yes to (2) means you shouldn't object to cryonics because of the possibility of waking up in the far future.

An astronaut after coming back to Earth would likely have much higher social status than a cryonic patient after being revived.

And yet, they don't sound too happy after coming back. The conclusion I draw from this is Franklin-style.
[-][anonymous]10y 0

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.

The fact that I'm willing to spend $X in order to die at 75 rather than at 25 doesn't necessarily imply that I must be willing to spend $X to die at [large number] rather than at 75.

I say yes to 2, 5, and 6. I'd personally prefer not to be tortured or wake up in a future where humans may have been wiped out by another sentient race (I doubt it).

If you wake up, humans haven't been wiped out.
There might be an 'extinct in the wild, building up a viable breeding population in captivity' situation, though.
That doesn't sound too bad. It's just humans living as humans have.

The conclusion from 6. doesn't follow.

I agree it doesn't follow in the sense of a mathematical proof. But someone who answered yes to (6) but claimed that they don't value their life enough to do cryonics would sort of be contradicting themselves.
I mean that signing up for cryonics and making an attempt to avoid being butchered are so completely different that there are thousands of possible ways for one to do (6) and still consistently claim not to value one's life enough to sign up for cryonics. I don't claim not to value my life highly enough, and I personally think that's a completely ridiculous reason, but any rational cryonics objector who is actually making that excuse would rightfully consider (6) a straw man. For example, you might think that not bothering to save your life in (6) would be the rational thing to do given your values, but expect instinct to take over. Or you strongly object to violence and would fight just to spite your would-be murderer. Or to send a signal that makes murder less, and resistance more, appealing to others. Perhaps you are afraid of any pain involved in dying, but not of death itself, and consider cryonics useless because it doesn't prevent pain. Perhaps you hate bureaucracy and don't value your life highly enough to fill out all the forms you expect to be necessary for cryonics, but don't mind physical activity. I just think that (6) doesn't add anything useful given (1), and is so obviously less well thought out that it makes the whole thing weaker.
Perhaps what FAWS is getting at is that saying "yes" to 6 doesn't mean that you think your life is worth the financial cost of cryonics. But that was addressed in question 1. Saying "yes" to 6 really means that you can't pretend that your life isn't worth saving at all.
Like I argued for the others, (6) should say something like "if no other objections already apply". For instance, you might not value your life as much as cryonics costs, but that's question (1); etc.
[-][anonymous]7y -1

But what if I don't sign up for cryonics because I simply don't want to live in another time, without my friends, my family, the people I owe duties to...? What if I simply think it a dishonest way out? (I mean, I'm okay with cryopreserving other people, especially the terminally ill. The weirdness doesn't bother me either. But myself, no; I have a life, why would I decide to give it up?)

What do you mean by "give it up"? No one (so far as I know) gets cryopreserved until they are on the point of death. (I think generally not until they are actually, by conventional definitions, dead -- because otherwise there's the legal risk that the cryopreservation gets treated as a murder. This may differ across jurisdictions; I don't know.)
It's just that in the OP there were questions about operations etc. that made it sound like there was time to decide, that is, like a person got to choose whether to be cryopreserved or not, as if it were not really urgent. Of course, if the person cannot decide because they are unconscious, that's another matter.
What usually happens is that a person decides, while relatively young and healthy, that they want to be cryopreserved, and at that point they sign up with an organization that provides cryopreservation services and arrange for them to be paid (e.g., by buying a life insurance policy that pays out as much as the organization charges). Later, when they die, the organization sends people to do the cryopreservation. No last-minute panicked decisions are generally involved, other than maybe "so, should we call the cryo people now?". I have not heard of anyone deciding while still young and healthy that they want to get frozen[1] right now this minute. Not least because pretty much everyone agrees that there's at least a considerable chance that they will never get revived, and giving up the rest of your life now for the sake of some unknown-but-maybe-quite-small chance of getting revived in an unknown-but-maybe-quite-bad future doesn't seem like a good tradeoff. And also because the next thing to happen might be a murder charge against the people doing the cryopreservation. [1] "Frozen" is not actually quite the right word given current cryopreservation methods, but it'll do.
1. Marriage, which compared to waking up in some distant future is a walk on the beach in terms of adjustment, comes as a shock, and sometimes a depressing change, for many people. I would think at least some adults would be unwilling to risk cryopreservation not out of fear of the unknown, but exactly because of the unpleasantness of a known.

2. About disgust. My sister once worked at a sanitary-epidemiological station (I don't know what they are called in your area), and there was a mother who bribed a doctor to diagnose her child not with the scabies that he/she had, but with some other, socially acceptable illness. The kid got the kindergarten quarantined for a considerable time. So it might be that people are appalled by the illness (again, I don't say there's any justification; it's just how people think, and they don't even need to know the reason why a person would choose to be cryopreserved). Now, if it were a last-minute desperate attempt at a miracle cure, that would be more respectable.
I concede that there are probably some people who, if they could, would get cryopreserved while still young and healthy in the hope of escaping a world they find desperately unpleasant for a possibly-better one. (I would guess that actually doing this would be rare even if it were legal. We're looking at someone unhappy enough to do something that on most people's estimates is probably a complicated and expensive method of suicide -- despite being young, reasonably healthy, able to afford cryopreservation, and optimistic enough about the future that they expect a better life if they get thawed. That's certainly far from impossible, but I can't see it ever being common unless the consensus odds of cryo success go way up.)

But unless I'm very confused, it seems like the subject has changed here. The answer to the question "Why not sign up for cryopreservation when you die?" can't possibly be "I have a life, why would I decide to give it up?".

I'm not sure I understand your point about disgust. Would you like to fill in a couple more of the steps in your reasoning?
Er, no. I meant that people who have experienced change might be less willing to choose a greater change, though it was very nice of you to understand it so. Clarifying the latter: people might think, not quite clearly, that someone who wants a cure so early in life might have done something to need it, for example got himself an unmentionable disease. Like scabies, only worse.
OK. Then I have even less clue how this relates to the discussion I thought we were originally having. I think we are all agreed that there are plenty of reasons why someone might choose not to get cryopreserved while still young and healthy. James_Miller's questions were not (I'm about 98% sure) intended to be relevant to that question; only to the question "why not arrange to be cryopreserved at the point of death?".

Everything you've been saying has (I think) been answering the question "why not get cryopreserved right now, while your life is still going on normally and you're reasonably healthy?". Which is fine, except that that isn't a question that needs answering, because to an excellent first approximation no one is thinking of getting cryopreserved while still young and healthy, and no one here is trying to convince anyone that they should.

OK, so this was yet another reason why some people might choose not to get cryopreserved while still young and reasonably healthy. Fine, but (see above) I think this rather misses the point.
Yes, sorry, I think I misread the questions for two reasons: first, I saw no reason to be cryopreserved when old and maybe going senile, only to wake to an alien universe with almost no desire to truly adapt to it and no real drive to understand it; and second, I might put a higher probability on young and healthy people dying abruptly than you do. There are enough wars for it to happen. Cryopreservation might be awfully handy.
Of course, "cryocrastination" is a thing too.

Some of your analogies strike me as quite strained:

(1) I wouldn't call the probability of being revived post near-future cryogenic freezing "non-trivial but far from certain", I would call it "vanishingly small, if not zero". If sick and dying and offered a surgery as likely to work as I think cryonics is, I might well reject it in favor of more conventional death-related activities.

(3) My past self has the same relation to me as a far-future simulation of my mind reconstructed from scans of my brain-sicle? Could be, but that's far fr...