Imagine Omega came to you and said, "Cryonics will work; it will be possible for you to be resurrected and have the choice between a simulation and a new healthy body, and I can guarantee you live for at least 100,000 years after that. However, for reasons I won't divulge, your surviving to experience this is wholly contingent upon you killing the next three people you see. I can also tell you that the next three people you see, should you fail to kill them, will die childless and will never sign up for cryonics. There is a knife on the ground behind you."

You turn around and see someone. She says, "Wait! You shouldn't kill me because ... "

What does she say that convinces you?

[Cryonics] takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

That's a quote from a comment in a post about cryonics. "I can't be more specific" is not doing this comment any favors, and overall the comment was rebutted pretty well. But I did try to imagine these other valuable parts, and I realized something that remains unresolved for me.

Guaranteed death places a limit on the value of my life to myself. Parents shield children with their bodies; Casey Jones happens more often. People run into burning buildings more often. (Suicide bombers happen more often, too, I realize.)

I think this is a valuable part of humanity, and I think that an extreme "life is good, death is bad" view does do violence to it. You can argue we should effect a world that makes this willingness unnecessary, and I'll support that; but separate from making the willingness useless, eliminating that willingness does violence to our humanity. You can argue that our humanity is overrated and there's something better over the horizon, i.e. the cost is worth it.

But the incentives for saving 1+X many lives at the cost of your own just got lessened. How do you put a price on heaven? orthonormal suggests that we should rely on human irrationality here to keep us moral, that thankfully we are too stupid and slow to actually change the decisions we make after recognizing the expected value of our options has changed, despite the opportunity cost of these decisions growing considerably. I think this a) underestimates humans' ability to react to incentives and b) underestimates the reward the universe bestows on those who do react to incentives.

I don't see a good "solution" to this problem, other than to rely on cognitive dissonance to make this seem less offensive in the future than it does now. The people for whom this presents a problem will eventually die out, anyway, as there is a clear advantage to favoring it. I guess that's the (ultimately anticlimactic) takeaway: morals change in the face of progress.

So, which do you favor more - your life, or identity?

EDIT: Well, it looks like this is getting fast-tracked for disappeared status. I think it's interesting that people seem to think I'm making a statement about a moral code. I'm not; I'm talking about incentives and what would happen, not what the right thing to do is.

Let's say Eliezer gets his wish and many, many parents sign up for cryonics and sign their children up for cryonics. Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside: do you expect them to join the military with the same frequency, to be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency? I don't think it's possible to say that with a straight face and mean it; populations respond to incentives, and the incentives just changed for that population.


111 comments

Guaranteed death places a limit on the value of my life to myself

It puts a limit on the value of other lives, too.

Whatever a life is worth, so long as it's the same factor affecting the potential worth of all lives, the dilemma of altruism or selfishness is the same.

A standard measure of how much a life is worth is an estimate of the value "time until death."

Your overall point seems to be: "If some people live a really, really long time, and others don't, we won't value the lives of the 'mortals' as much as we do those of the 'immortals.'"

But don't we value saving nine-year-olds more than ninety-year-olds? The real question is, "If I'm immortal, why aren't they?"

You also miss the obvious positive effects of valuing life more greatly. War would be virtually impossible between immortal nations, at least insofar as it requires public support and soldiers. It would also be (to some degree) morally defensible for immortal nations to value citizen-lives higher than they value the lives of mortal nations, which means they would be more willing to use extreme force, which means mortal nations would be much more hesitant to provoke immortal nations. Also, our expenditures on safety and disaster preparedness would probably increase enormously, and our risk-taking would also decrease dramatically.

In other words, I'm not sure this post clearly communicates your point, and, to the extent it does, your point seems underdeveloped and quite probably bad.

This depends to an extent on the nature of the immortalizing technology. I agree with you if the technology doesn't permit backups, but I disagree with you if backups can be done (at least with respect to the risk of local death). In particular an uploading-based technology, with an easy way to make backups, might result in the average person taking more risks (at least risks of one copy being killed - but not the whole ensemble of backups) than they do now.
I'm not yet sold on the perfect substitutability of backups, but the point, while interesting, is quite irrelevant in this context. If backups aren't perfect substitutes, they won't affect people's behaviour. If they are, then increased risk is essentially immaterial. If I don't care about my mortality because I can be easily resurrected, then the fundamental value of me taking risks changes, thus, the fact that I take more risks is not a bad thing. Now, there may be a problem that people are less concerned with other people's lives, because, since those people are backed up, they are expendable. The implications there are a bit more complex, and that issue may result in problems, though such is not necessarily the case.
Richard Morgan's sci-fi trilogy (Altered Carbon, Broken Angels and Woken Furies) has an entertaining take on the implications of universal backups.
Many thanks!
We value saving those who have a high expected time until death, so yes, we value saving nine-year-olds more than ninety-year-olds. This would presumably become reversed if the child had 1/10th the expected time until death of the old man. The real answer is it doesn't matter - not everyone will enroll. Our expenditures on safety and disaster preparedness would increase, but you're probably overrating the relative benefit, because the tragedy from accidents would increase suddenly while our ability to mitigate them lags - we would be playing catch-up on safety measures for a long time.
At least to the extent that this preference comes from deliberative knowledge, rather than free-floating norms about the value of children, or instinct.
Yes, to that extent. The amount that we value the child's life does start with an advantage against the amount we value the old man's life, which is why I chose a drastic ratio.
If you mean that humans intuitively measure things on a comparative scale, and thus increasing the value of an outcome that you failed to get can make you feel worse than not having had the chance in the first place -- yes, I agree that it is descriptively true. But the consequentialist in me says that that emotion runs skew to reality. On reflection, I won't choose to discount the value of potential-immortality just because it increases the relative tragedy of accidental death.

What does she say that convinces you?

"The entity that gave you instructions did not provide you adequate evidence in support of its claims! It is more likely, by more orders of magnitude than you can count, that it's just messing with you than that its statement is true."

What does she say that convinces you?

She doesn't have to say anything - she would have had to push herself well out of the norm and into the range of "people whose richly deserved death would improve the world" before I would even consider it.

I would just say "Omega, you're a bastard", and continue living normally.

Imagine Omega said, "The person behind you will live for 30 seconds if you don't kill her. If you kill her, you will continue leading a long and healthy life. If you don't, you will also die in 30 seconds." Do you say the same thing to Omega and continue enjoying your 30 seconds of life?
No difference. I won't buy my life with murder at any price. (Weighing one-against-many innocents is a different problem.) And I'd be calling Omega a bastard because, as an excellent predictor, he'd know that, but decided to ruin my day by telling me anyway.
Can you explain, then, how this is different from suicide, since your theft of her life is minimal, yet your sacrifice of your own life is large?
It's not suicide, I'm just bumping into a moral absolute - I won't murder under those circumstances, so outcomes conditional on "I commit murder" are pruned from the search tree. If the only remaining outcome is "I die", then drat.
For 30 seconds, I kill her. For an hour, we both die. I think my indecision point is around 15 minutes.
Eliezer Yudkowsky:
...thank you for your honest self-reporting, but I do feel obliged to point out that this does not make any sense.
I didn't think it through for any kind of logical consistency -- it's pure snap judgment. I think my instinct when presented with this kind of ethical dilemma is to treat my own qalys (well, qalsecs) as far less valuable than those of another person. Or possibly I'm just paying an emotional surcharge for actually taking the action of ending another person's life. There was some sense of "having enough time to do something of import (e.g., call loved ones)" in there too.
But isn't this time relative to lifespan? What if your entire lifespan were only 30 minutes?
I think my reaction would be "fuck you Omega". If an omniscient entity decides to be this much of a douchebag then dying giving them the finger seems the only decent thing to do.
My implied assumption was Omega was an excellent predictor, not an actor - I thought this was a standard assumption, but maybe it isn't.
Showing that the original question had little to do with cryonics...
This question is a highly exaggerated example to display the incentives, but cryonics subscribers will be facing choices of this kind, with much more subtle probabilities and payoffs.

What does she say that convinces you?

  • I am wired with explosives triggered by an internal heart rate monitor.
  • My husband, right next to me, is 100 kg of raw muscle and armed.
  • I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended was actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership, and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.
  • A catch all: Humans can always say with si
... (read more)
Is it weird that my first reaction is to ask her specific questions about the Sequences to test the likelihood of that statement's veracity?
Upvoted for being the only one to actually answer that question. I'm not comfortable answering, but let's just say that I would have an eternity to atone for my sins.

Imagine Omega came to you and said, "Cryonics will work; it will be possible for you to be resurrected and have the choice between a simulation and a new healthy body, and I can guarantee you live for at least 100,000 years after that. However, for reasons I won't divulge, your surviving to experience this is wholly contingent upon you killing the next three people you see.

This offer could have positive expected value in terms of number of lives if, for example, you were a doctor who expected to save more than three lives during the next 100,000 ye... (read more)

Once upon a time, there was a policeman, called John Perry.

I just feel like saying this:


Sorry. (I don't mean anyone here, I just had to say it.)

Cryonics is good because life is good. The subjective value of my life doesn't make it ok to kill someone I perceive as less valuable.

Here's another argument against: if murder suddenly becomes a defensible position in support of cryonics, then how do you think society, and therefore societal institutions, will respond if murder becomes the norm? I think it becomes less likely that cryonic institutions will succeed, and thus jeopardize everyone's chances of living 100,000+ years.

It's not about what's okay; it's about what people will actually do when their life expectancy goes up drastically.
That's the point I'm trying to make. An action that could appear to increase life expectancy drastically could actually have the opposite effect (in the situation I propose, by affecting the institutional structure required for cryonics to succeed).
Yes, once cryopreservation is widespread across the globe. But when only some people have access and others don't, and we have a decent shot of actually being revived, the tragedy of a cryonics subscriber losing their life is much greater than that of a non-cryonics subscriber losing theirs.
Also known as the Categorical Imperative.

People's willingness to sacrifice their own lives might change drastically, agreed.

But there are counteracting factors.

People will think far more long term and save more. They might even put more thought into planning. The extra saving might result in an extra safety widget that saves more lives. You can't really disregard that.

They will be more polite and more honest. Because life's too long and the world's too small. Think ten times before cheating anyone. The extra business that will generate and the prosperity that will bring might save more lives th... (read more)

There are other similar dilemmas like: why do you go to the cinema if that money could be spent saving one of 16,000 children who die from hunger every day?

My answer is: we are all selfish beings, but whereas in our primitive environment (cavemen) the disparities wouldn't be that great for lack of technology, nowadays those who have access to the latest technology can leverage much more advantage for themselves. But unfortunately if you have to make the decision between cryonics for yourself vs. saving N children from starvation: If you still want to be alive in 10... (read more)

I agree that I don't think moral judgments are the issue. I don't think that's what mores are. Values are critical to identity; changing them to increase life expectancy changes your identity.

I would be interested to hear from those who are actually signed up for cryonics. In what ways, if any, have you changed your willingness to undertake risks?

For example, when flying, do you research the safety records of the airlines that you might travel with, and always choose the best? Do you ride a motorbike? Would you go skydiving or mountaineering? Do you cross the road more carefully since discovering that you might live a very long time? Do you research diet, exercise, and other health matters? Do you always have at the back of your mind the though... (read more)

I haven't significantly changed my willingness to take risks, but then again I have always been very risk averse. I would never ride a motorbike or go mountaineering etc. I eat well, don't smoke, try to avoid stress and exercise regularly. I did all these things even before I took cryonics seriously. This is because it was obvious that being alive is better than being dead, and these things seemed like obvious ways in which to preserve my life as long as possible. If I found out tomorrow that cryonics was proven to NOT work, I'd still continue crossing the road very carefully.
That matches my intuition, which I'd express as: it's a particular disposition toward life risks that makes someone interested in cryonics, rather than signing up for cryonics that makes someone more prudent. (Just a hunch, I'm not saying I've thought this through.) There are some activities I like which seem riskier than they are, such as treetop courses; the equipment makes them perfectly safe but I enjoy the adrenalin rush. When I travel by plane I enjoy takeoff and landing for similar reasons, and flying in general whenever there is a clear view of land. (Not everything about flying is enjoyable.)
I'm another one who is signed up, has always been risk-averse, and hasn't changed risk-averse behavior as a result of cryonics membership. One general comment: to my mind, infinite life has something like a net present value with a finite interest rate. I probably don't apply a consistent discount rate (yes, I've read the hyperbolic discounting article). For an order-of-magnitude guess, assume that I discount at 1% annually and treat a billion-year life expectancy as being roughly as valuable as 100 years of life - and a 1% chance of that as being roughly equal to gaining an extra year. I'm now 51, so adding 1 year to, say, 25 years is a 4% gain. Not trivial, but not a drastic increase either.
I'm not following this. Is the billion year life expectancy roughly as valuable as 100 years certain?
I'm saying that, roughly speaking, I value next year at 99% of this year, the year after that at (99%)^2 of this year, the year after that at (99%)^3 and so on. The integral of this function out to infinity gives 100 times the value of one immediate year. I'm not quite sure what you mean by "certain". Could you elaborate? I'm not trying to calculate the probability that I will actually get to year N, just to very grossly describe a utility function for how much I'd value year N from a subjective view from the present moment.
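The arithmetic in this comment can be sketched in a few lines (an illustration only, assuming the constant 1% annual discount rate the commenter states):

```python
# Geometric discounting: a year n years out is valued at 0.99^n of this year.
DISCOUNT = 0.99

# Closed form: sum over all future years of 0.99^n = 1 / (1 - 0.99) = 100,
# so an unbounded lifespan is worth about 100 "immediate" years.
infinite_years_value = 1 / (1 - DISCOUNT)

# Brute-force partial sum as a sanity check on the closed form.
partial_sum = sum(DISCOUNT ** n for n in range(10_000))

print(infinite_years_value)  # 100.0
print(partial_sum)           # ~100.0
```

This is why, under these assumptions, a billion-year life expectancy and a 100-year one come out nearly identical in present value, and a 1% chance of the former is worth roughly one extra immediate year.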
But you can't value year N at a time-discounted rate, because the unit you are discounting is time itself. Why discount 1,000,000,000 years if you won't discount 100 years? I don't understand why one can be discounted, yet the other cannot.
It's not time you're discounting; it's the experience of living that quantity of time.
I don't see the difference. Money can be spent at once, or over a period of time, so the time value of money makes sense. But you can't live 100 years over any period other than 100 years. I don't understand the time value of the experience of living.
Me-in-100-years is not me-now. I can barely self-identify with me-10-years-ago. Why should I value a year someone else will live as much as I value this year that I'm living?
I have a pretty different set of values than I had fifteen years ago, but I still consider all those experiences to have happened to me. That is, they didn't happen to a different identity just because I was 18 instead of 33. It is possible that if the me of 18 and the me of 33 talked through IRC we wouldn't recognize each other, or have much at all to talk about (assuming we avoid the subject of personal biography that would give it away directly). The me of 67 years from now, at 100 years of age, I can also expect to be very different from the me of now, even more different from the me of now than the me of 18. We might have the same difficulty recognizing ourselves in IRC. Yet I'm confident I'll always say I'm basically the same person I was yesterday, and that all prior experiences happened to me, not some other person; regardless of how much I may have changed. I have no reason not to think I'd go on saying that for 100,000 years.
I remember all (well, most) of my prior experiences, but memory is a small aspect of personal identity, in my opinion. Compared to my old self 10 years ago, I react differently, feel differently, speak differently, have different insights, different philosophical ideas, different outlooks. It comes down to a definition though, which is arbitrary. What's important is: do you identify with your future self enough to have a 1-1 trade-off between utility for yourself (you-now) and utility for them (you-in-the-future)?
It is not just the memory of experiences from fifteen years ago that makes me consider it may be the same identity, but the fact that those experiences shaped my identity today, and it happened slowly and contiguously. Every now and then I'd update based on new experiences, data or insights, but that didn't make me a different person the moment it happened. If identity isn't maintained through contiguous growth and development then there really is no reason to have any regard at all for your possible future, because it isn't yours. So smoke 'em if you got 'em.
I think you're making things artificially binary. You offer these two possibilities:
  • Contiguous experience implies I completely identify with my past and future selves
  • Identity isn't maintained through contiguous growth, so there is no reason to have any regard at all for my future
Why can't contiguous experience lead one to partially identify with their past and future selves?
Good point. Maybe my future self isn't exactly me, but it's enough like me that I still value it. It doesn't really matter though, because I never get to evaluate my future self. I can only reflect on my past. And when I do, I feel like it is all mine...
So can you state the discount rate equivalencies in terms of Me-in-an-amount-of-years?
Jordan states it correctly. bgrah449, to put it in terms of decisions: I sometimes have to make decisions which trade off the experience at one point in time vs the experience at another. As you noted in your most recent post, money can be discounted in this way, and money is useful because it can be traded for things that improve the experience of any given block of time. By discounting money exponentially, I'm really discounting the value of experience - say eating a pizza - at say 5 years from now vs. now. Now I also need to be alive (I'm taking that to include an uploaded state) to have any experiences. I make choices (including having set up my Alcor membership) that affect the probability of being alive at future times, and I trade these off against goods that enhance my experience of life right now. When I say that I value a year of time N years from now at around .99^N of the value I place on my current year, it means that I would trade the same goods for the same probability change with that difference of weights for those two years. If I, say, skip a pizza to improve my odds of surviving this year (and experiencing all of the events of the year) by 0.001%, I would only be willing to skip half a pizza to improve my odds of surviving from year 72 to year 73 (and having the more distant experiences of that year) by 0.001%. Is that clearer?
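The pizza trade can be checked numerically (a sketch under the commenter's stated assumptions: a constant 1% annual rate and a 72-year horizon):

```python
# Weight placed on a year n years in the future, under 1% annual discounting.
def year_weight(n, discount=0.99):
    return discount ** n

# This year vs. year 72: the ratio comes out near one half, matching the
# "whole pizza now vs. half a pizza for year 72" trade described above.
print(year_weight(0))   # 1.0
print(year_weight(72))  # ~0.485, i.e. roughly half
```

So the "half a pizza" figure is not arbitrary; it falls out of 0.99^72 being roughly 0.5.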
Yes - thanks!
I don't think I have. Compared to most people I'm probably a bit of a risk-taker. I don't research airline safety records. I ride a motorcycle. I haven't gone skydiving or mountaineering but both of those sound fun. I don't cross the road more carefully. I exercise (I've always enjoyed running), but don't diet (I've always enjoyed ice cream). Unless I'm discussing cryonics with someone else I mostly don't think about it.
Most of this is just scope insensitivity and the availability bias: for instance, air travel is ridiculously safe, but airplane accidents are well publicized. Researching airline safety records is a little silly given how safe air travel is. I ride a motorcycle too, and it's much safer than it appears to some people, as long as you are properly trained, wear a helmet, and don't drink and drive. Same goes for skydiving or mountaineering.
I agree with you except when it comes to motorcycles. Motorcycles are 4-5 times more deadly than cars in terms of death rate per vehicle and 30 times more deadly in terms of deaths per mile traveled. Some of that is due to unsafe behavior by riders, but the fact is that a metal cage protects you better than a metal horse. Also, losing tire grip in a car means you slide. Losing grip on a bike means you fall.

My actual reaction in the scenario you describe would be to say "Piss off" before I turned around.

But cryonics is a wash as far as taking risks goes. First, nobody is sure it will work, only that it gives you better odds than burial or cremation. Second, suppose you were sure it will work, becoming a fireman looks like a better deal -- die in the line of duty and be immortal anyway. Granted something might happen to destroy your brain instantly, but there's no reason to believe that's more likely than the scenario where you live to be old and your brain disintegrates cell by cell while the doctors prolong your agony as far as they can.

"I don't want to achieve immortality through [dying while rushing into a burning building]; I want to achieve immortality through not dying."
I'm curious, what's the flaw in my logic that the downvoters are seeing? Or is there some other reason for the downvotes?
The only logic flaw I see is that dying in a fire doesn't seem conducive to having a well-preserved brain - being on fire is sure to cause some damage, and as I understand it buildings that are on fire are prone to collapsing (*squish*). (There is an upside: If cryo was common, there'd likely be a cryo team on standby for casualties during a fire that was being fought. That wasn't obvious to me when I first thought about it, though.)
Are you serious? You conflated the fame of a firefighter who dies in the line of duty (which doesn't even last very long) with the immortality of actually living forever.
Ah! Thanks for the clarification -- I don't know why people thought I was talking about fame, but given that they did, that would certainly account for the down votes! What I mean is that in most cases where you die in the line of duty, your body will be recoverable and brain preservable. Yes, there are ways for this to not happen -- but there are also ways for it to not happen when you die of old age. Any claim that cryonics makes taking hazardous jobs irrational from a self-preservation viewpoint would have to provide some basis for believing the latter to have better odds than the former.
Ah, your original comment makes more sense with that explanation. I had originally interpreted your statement as meaning that the risks/costs and rewards of cryonics were a wash, and with that framing, I misinterpreted the rest of it.
First, the only certainties in life are death and taxes. Cryonics aside, we should talk in probabilities, not certainties, and this is true of pretty much everything, including god, heliocentrism, etc. Second, cryonics may have a small chance of succeeding - say, 1% (number pulled out of thin air) - but that's still enormously better than the alternative 0% chance of being revived after dying in any other way. Dying in the line of duty or after great accomplishment is similar to leaving a huge estate behind - it'll help somebody, just not you. Third, re senile dementia, there is the possibility of committing suicide and undergoing cryonics. (Terry Pratchett spoke of a possible assisted suicide, although I see no indication he considered cryonics.) If cryonics feels like a wash, that's a problem with our emotions. The math is pretty solid.

Cryonics aside, we should talk in probabilities, not certainties, and this is true of pretty much everything, including god, heliocentrism, etc. Second, cryonics may have a small chance of succeeding - say, 1% (number pulled out of thin air) - but that's still enormously better than the alternative 0% chance of being revived after dying in any other way.

Did these two sentences' adjacency stick out to anybody else?

Good eyes! And it drills down to the essential problem with the but-it's-a-chance argument for cryonics: is it enough of a chance relative to the alternatives to be worth the cost?
Pardon me, now I'm the one feeling perplexed: where did I screw up?
0% is a certainty.
Cf. "But There's Still a Chance, Right? []".
Expressing certainty ("0% chance of being revived after dieing in any other way").
You are strictly correct, but after brain disintegration, probability of revival is infinitesimal. You should have challenged me on the taxes bit instead :-)
If you represent likelihoods in the form of log odds, it is clear that this makes no sense. Probabilities of 0 or infinitesimal are both equivalent to having infinite evidence against a proposition. Infinitesimal is really the same as 0 in this context.
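The log-odds point can be made concrete (a minimal sketch; the `log_odds` helper is introduced here for illustration, not taken from the thread):

```python
import math

def log_odds(p):
    """Log odds of probability p; diverges to negative infinity as p -> 0."""
    return math.log(p / (1 - p))

# As p shrinks toward 0 the log odds fall without bound, which is why a
# literal 0% (or an infinitesimal) amounts to infinite evidence against
# the proposition.
for p in (0.5, 0.01, 1e-10, 1e-100):
    print(p, log_odds(p))
```

On this scale there is no finite amount of observation that can carry you to p = 0, which is the commenter's point about infinitesimal being effectively the same as 0.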
I accept this correction as well. Let me rephrase: the probability, while being positive, is so small as to be on the magnitude of being able to reverse time flow and to sample the world state at arbitrary points. This doesn't actually change the gist of my argument, but does remind me to double-check myself for nitpicking possibilities...
I like epsilon and epsilon-squared to represent too-small-to-be-worth-calculating quantities.
I don't have a problem with that usage. 0% or 100% can be used as a figure of speech when the proper probability is small enough that x < .1^n (4 (or something appropriate) < n) in 0+x or 1-x. If others are correct that probabilities that small or large don't really have much human meaning, getting x closer to 0 in casual conversation is pretty much pointless. Of course, a "~0%" would be slightly better, if only to avoid the inevitable snarky rejoinder.
pdf23ds:
"What weighs seventy kilos?"
Paul Crowley:
I remember that vividly! Though I tend to prefer to quote the next line, much as I think "twenty seconds to comply" is less cool than "you now have fifteen seconds to comply"...

Guaranteed death places a limit on the value of my life to myself. Parents shield children with their bodies; Casey Jones happens more often. People run into burning buildings more often. (Suicide bombers happen more often, too, I realize.)

I'm not sure I interpret this the same way you do.

My understanding is that parents are willing to risk their lives for their children mostly because that's how we've been programmed by evolution by natural selection, not because we consciously or unconsciously feel that our death is putting a limit on the value of our... (read more)

We're trying to develop the means to overcome weaknesses evolution has left in us. As life expectancy grows much higher, there will be an incentive to overcome those instincts that cause us to risk our life, no matter what our current moral instincts say about the reason for doing so.
That's possible. But that wouldn't happen in a vacuum. Fewer people might risk their lives for others, but at the same time, society would probably put a lot more resources into making everything much safer, so the overall effect would be that fewer people will end up in situations where someone else would have to risk their lives to save them (something that isn't reliable even now, which is why it's so publicized when it happens).
When I hear the word 'safer' I reach for my gun.
I assume "safer" means things like "Click It or Ticket" - what are you referring to?
Yeah, I was mostly thinking about things like safer cars (more safety testing and more stringent tests, better materials, next generation 'vehicle stability control' and laser cruise control used for emergency braking, mesh networking, 4-point seatbelts, etc), better design of sidewalks and bike paths, the hardening of buildings in earthquake and hurricane-prone areas, automatic monitoring systems on swimming pools to prevent accidental drowning, etc. But really, what do people die from in stable rich countries? It's really the diseases of aging that we need to cure. After that, your chances of dying from an accident are already very low as things stand, and there are still lots of low-hanging-fruit ways to make things safer... I don't think making us very safe in the near term requires a Big Brother state keeping us in plastic bubbles. And if we take care of aging, most people will probably live long enough without dying in an accident to either see Friendly AI or some form of brain backup technology further reduce risk, or they'll all die from an existential risk that we've failed to prevent.

Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside; do you expect them to join the military with the same frequency, be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency?

Agreed--indeed, I suspect that one of the first steps to fundamentally altering the priorities of society may be the invention of methods to materially prolong life, such that it really does become an unspeakable tragedy to lose somebody permanently.

Humans risk their lives for less noble causes as well, extreme sports and experimental aircraft being two examples. I have a romantic streak in me that says that yes, death is worse than life, but worrying overly about death also devalues life.

Should I pore over actuarial statistics and only select activities that do not increase my chance of death?

The question isn't should you; the question is whether you would, especially considering that people do it now.
As I said, I don't like to worry about death. It's not that I find my death unpleasant to talk about, just that valuable brain cycles/space will be used doing so, and I'd much rather be thinking about how people can live well/efficiently/happily than obsess over extending my life. So I wouldn't. In my question I was trying to gauge the activism of the community. I already have people trying to convince me to freeze myself; will they also be campaigning against mountain climbing/hiking in the wilderness? ETA: I do worry about the destruction of the human species, but that has less impact on my life than worrying about death would.
1) You're being recruited to sign up for cryonics because it makes the recruiters' own cryonics investment a) better and b) less weird. A large population of frozen people encourages more investment than a small number of frozen cranks. 2) Probably to the same extent that people discourage smoking and riding a bike without a helmet - subtly making their own safety precautions seem less timid by labeling those who disregard them as stupid.
Surely you already take into account how dangerous various activities are before deciding to do them? Everyone has different thresholds for how much risk they are willing to take. Anyone that does not take risk into account at all will die very rapidly.
And anyone who obsesses over risk too much will have a life not worth living, which - compared to the risk of injury from mundane activities - is the greater risk. "Life is not measured by the number of breaths we take but by the moments that take our breath away." That's perhaps a slight exaggeration; a long life of small pleasures would compare favorably to a shorter life filled with ecstatic experiences. But the point is that a warm breathing body does not a life make.
People's actions reveal that they do not measure life this way.
There is a difference between conscious thought and gut feeling. I'm quite happy to rely on my gut feeling for danger (as I get it for free), but I don't want to promote it to conscious worrying in my everyday life.
I'm kind of the opposite. My 'gut' feelings tend to rate most things as being dangerous, and I rely on my awareness of actual risk to be able to do pretty much anything. I don't think I obsess over risk either - but that's maybe because I have been doing this all my life :-). Nor do I think my life hasn't been worth living - quite the opposite, or I wouldn't have signed up for cryonics!
Your gut feeling is informed by what you consciously choose to read.

An alternate scenario: Omega forms an army and conscripts three people into it, and orders them to kill you. Omega then hands you a knife, with which you can certainly dispatch the unarmed, untrained conscripts who obediently follow their commander's wishes (despite vague apprehensions about war and violence and lack of a specific understanding for why they are to kill you).

Unfortunately, Omega is a very compelling commander, and no surrender or retreat is possible. It's kill or be killed.

What do you do?

The debate already exists, for altruists who care about future generations as well: would you kill three people to stop an asteroid/global warming/monster of the week from killing more in future?

This is just the same question, made slightly more immediate and selfish by including yourself in that future.


ETA: To the extent that your post is asking about personal behaviour, you perhaps should have made that point clear. You appear to be making a general point about morality, and your "kill three people" hypothetical appears to distract from your actual point, and is probably a large part of why you're getting downvoted, as it's rather antagonistic. I'll keep the rest of my comment intact, as I believe it to be generally relevant.

This would be more constructive were it not self-centered, i.e. if the question were, "I'll grant so-and-so 100,000... (read more)

I assumed rational readers would know that they are not immune to incentives that affect "other people."

You turn around and see someone. She says, "Wait! You shouldn't kill me because ... "


She says, "Wait! You shouldn't kill me because I'm signed up for cryonics too! This means that the total utility change will be negative if you kill me and the other people!"


"Wait! You shouldn't kill me because selfishly murdering others for personal gain is not a characteristic of a virtuous man!"


"Wait! You shouldn't kill me because it's against the rules! Against the Categorical Imperativ... (read more)

The utilitarian justification doesn't work because Omega said the victims aren't signed up for cryonics.
Thanks for pointing that out. Comment deleted.
I wish you'd kept the rest of that comment: the other justifications were good. There are other utilitarian justifications as well, based on the harm that murder does to society. (See Zachary_Kurtz's comment above.)

I don't see what there is to learn from this question.

If I kill the next three people, are they cryogenically preserved? Or is the next sentence implying an upper bound to the value of their life as opposed to contrasting with what would happen should you kill them?

I can also tell you that the next three people you see, should you fail to kill them, will die childless and will never sign up for cryonics. There is a knife on the ground behind you."

So, if you fail to kill them, they wind up childless and without cryonics.

Does this mean that if you do kill them, they will get cryonics and children?