Singularitarians frequently lament the irrevocably dead and the lack of widespread application of cryonics. Many cryonicists feel that as many lives as possible should be (and in a more rational world, would be) cryopreserved. Eliezer Yudkowsky, in an update to the touching note on the death of his younger brother Yehuda, forcefully expressed this sentiment:

"I stand there, and instead of reciting Tehillim I look at the outline on the grass of my little brother's grave. Beneath this thin rectangle in the dirt lies my brother's coffin, and within that coffin lie his bones, and perhaps decaying flesh if any remains. There is nothing here or anywhere of my little brother's self. His brain's information is destroyed. Yehuda wasn't signed up for cryonics and his body wasn't identified until three days later; but freezing could have been, should have been standard procedure for anonymous patients. The hospital that should have removed Yehuda's head when his heart stopped beating, and preserved him in liquid nitrogen to await rescue, instead laid him out on a slab. Why is the human species still doing this? Why do we still bury our dead? We have all the information we need in order to know better..."

Ignoring the debate concerning the merits of cryopreservation itself and the feasibility of mass cryonics, I would like to question the assumption that every life is worth preserving for posterity.

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of mental illness and are thus not fully responsible for their crimes. In fact, there is evidence that the brains of serial killers are measurably different from those of normal people. Far enough in the future, it might be possible to "cure" them. However, they would still possess toxic memories and thoughts that would greatly distress them once they are normal. To truly repair them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of cryopreserving them?

Forming a robust theory of mind and realizing that not everyone thinks or sees the world the same way you do is actually quite difficult. Consider the immense complexity of the world we live in and the staggering scope of thoughts that can possibly be thought as a result. If cryopreservation means first and foremost mind preservation, maybe there are some minds that just shouldn't be preserved. Maybe the future would be a better, happier place without certain thoughts, feelings and memories--and without the minds that harbor them.

Personally, I think the assumption of "better safe than sorry" is a good-enough justification for mass cryonics (or for cryonics generally), but I think that assumption, like any, should at least be questioned.

 


From a consequentialist perspective, the value of not saving a life is the same as the value of killing someone. In which light, the title of your post becomes, "Is every person really worth not killing?" Try re-reading the argument with this framing in mind.

(Avoiding measures that save lives with certain probability is then equivalent to introducing the corresponding risk of death.)

[-]smijer12y160

If the value of not saving a life is the same as the value of killing someone, that's fine. We can do that exercise and re-frame in terms of killing, and do the consequentialist calculation from there. The math is the same. If the goal is to bring ourselves to calculate from the heightened emotional perspective associated with killing, though, it is time to drop that frame and just get back to the math.

In terms of the opening post, the math is going to be similar even for the creation of all possible minds. If we have a good reason to restore every mind that has lived, it seems very probable that we have the exact same reason to create every mind that has not lived.

I'm not sure I see what that value is, though. Even if I want to live forever - and continue to want to live forever right up to the point that I am dead... One second after that point, I no longer care. At that point, only other living minds can find value in having me alive. It's up to them if they want to invest their resources in preserving and re-animating me or prefer to invest more of their resources in keeping themselves alive and creating more novel new minds through reproduction.

9wedrifid12y
Well spotted. I was wondering if anyone was going to notice that Vladimir's (absurdly highly upvoted) comment was basically just a dark-arts exploit trying to harness (largely deontological) moral judgements outside their intended context.
3ArisKatsaris12y
If that was an observation that you had already thought of, and you believed it good to be mentioned, why didn't you so mention it yourself -- instead of waiting to see if anyone else said it? I can conceive of some comments that are good to be made by only specific individuals, given specific contexts -- but I don't see this being one of them. I find the attitude of "waiting to see if anyone else does this" and afterwards condemning/praising people collectively for failure/success in doing what one failed to do oneself extremely distasteful.
9wedrifid12y
I did write a reply when Vladimir first wrote the comment. But I deleted it since I decided I couldn't be bothered getting into a potential flamewar about a subject that I know from experience is easy to spin for cheap moral-high-ground points ("you're a murderer!", etc). I long ago realized that it is not (always) my responsibility to fix people who are wrong on the internet. Since smijer is (as of the time of this comment) a user with 9 votes while Vladimir is in the top 20 of the top contributors and the specific comment being corrected is at +19, it does not seem at all inappropriate to lend support to his observation.
4ArisKatsaris12y
Okay, I think I find this a good reason. Thank you for explaining.
0fortyeridania12y
You find this a good reason for what? (1) For supporting smijer's comment (2) For not chiming in when he first had the idea If you mean the first...why? That wasn't the issue. The issue was why wedrifid hadn't chimed in. As for the second, wouldn't this imply that wedrifid was holding out because he expected someone with low karma to speak up first?
0ArisKatsaris12y
For the seeming inconsistency I had noticed between (1) and (2).
3fortyeridania12y
Not wanting to get into a flamewar is, of course, reasonable. But daring to be the first to dissent is a valuable service, too.
2smijer12y
I appreciate the support.
1XiXiDu12y
Off topic: If I remember correctly, you have taken a quite derogatory stance with respect to people who complained about the voting behavior on this site. In any case, here are some snippets from comments made by you in the past 30 days: I predict that within 5 years you will become frequently appalled by the voting behavior on this site, and in another 10 years you'll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong, because it doesn't refine what you deem rational, nor does it provide valuable feedback, but instead lends credence to the arguments of trolls (as you would call them).
3wedrifid12y
I doubt I ever took such a broad stance. You seem to have generalized to a large category so that you can fit me into it. In fact one of those artfully trimmed quotes you make there should have, if parsed for meaning rather than scanned for quotable keywords, given a far more reasonable impression of where my preferences lie on that subject. Quite possible. A few years after that I may well start telling kids to get off my lawn and tell stories about "When I was your age". Money. Make the prediction with money. Because I want to take it. Counter-prediction: In ten years time you will not have changed your mind (on this subject) at all.
5gwern12y
At least for myself, I'm happy to give that a low probability. Even with the lowered quality since Eliezer stopped writing, LW is still much better - thanks to karma - than OB or SL4 were.
2XiXiDu12y
How do you know this? Would a reputation system cause the Tea Party movement to become less wrong? The n-Category Café and Timothy Gowers's blog do not employ a reputation system like less wrong's. It's the people who make some places better than others. It is trivially true that the lesswrong reputation system would fail if there were more irrational people here than rational people, where 'rational' is defined according to your criteria (not implying that your criteria are wrong). I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because there are a lot of people who don't like the idea of being voted down according to unknown criteria rather than engaging in argumentative discourse. And as I wrote before, the current reputation system favors non-technical posts. More technical posts often don't receive the same amount of upvotes as non-technical posts, and technical posts that turn out to be wrong are downvoted more extensively. This discourages rigor and gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.
3shminux12y
A reputation system necessarily favors the status quo. This community consists mostly of aspiring rationalists, not professionals in philosophy/decision theory/psychology, though there are a number of experts around. Accuracy of technical posts is hard to judge, so people probably go by the post quality, their gut feeling and how well it conforms to what has been agreed upon as correct before. Plus the usual points for humor. Minus a penalty for poor spelling/grammar/style. An example of a reputation system that works for a technical forum is MathOverflow, though partly because the mods are quite ruthless there about off-topic posts. ...which likely means that this forum is not the right one for them. LW is open enough to resist "evaporative cooling", and rapid downvoting inhibits all but expert trolling. I think that is the idea. Educating people "about basic rationality" is a much more viable goal than doing basic research collaboratively. LW is often used as a sounding board for research write-ups, but that is probably as far as it can go. Anything more would require excluding amateurs from the discussion, to reduce the noise level. I have yet to see a public forum where "important problems" are solved "collaboratively". Feel free to provide counterexamples.
2gwern12y
Yes. They would still have their major shibboleths like Obama being a Muslim born in Kenya, but reputation systems would at least reduce the most mouth-breathing comments. People are a factor. People are not the only factor which is solely determinative. Code is Law. And that is why LW has orders of magnitude less comments and posts than OB or SL4 did. Wait, never mind, I meant 'more'. Or it discourages attempts to bamboozle with rigor. I don't remember terribly many rigorous proofs on LW, but then, I don't remember terribly many on OB or SL4 either.
1XiXiDu12y
I retracted the comment. Not sure why I made it and why I haven't used my brain more, sorry. Likely, because I hate reputation systems. Peer pressure is already bad enough as it is. But if a reliable study is being conducted that shows that reputation systems cause groups to become more rational I will of course change my mind. Betting money seems to be a pretty bad idea if the bet depends on the decision of someone participating in the bet.
8roystgnr12y
If you found someone in the process of killing another, what actions would you be willing to undertake to stop them? Would you be willing to undertake those same actions every time you found someone whose non-subsistence expenditures exceeded $X, the minimum expenditure necessary to [buy enough malaria nets, etc... to] have an expected outcome of one life saved? Even consequentialism is supposed to acknowledge that ethical rules need to be evaluated in terms of their long-term consequences rather than just their immediate outcomes.
7buybuydandavis12y
That's just very poor consequentialism in my eyes. Instead of me pointing out the most abominable scenarios that I believe immediately follow from such a consequentialism, why don't you supply one that you think would be objectionable to others, but which you'd be willing to defend? As for your spin on the question, while I think it is a different question than the original, I see no need to shy away from it. Some people are worth killing. That's not to say there isn't something of value in them, but choice is about tradeoffs, and I don't expect that to change with greater technology. The particular tradeoffs will change, but that there are tradeoffs will not. And in the same way, a great many more people are not worth saving either.
0Vladimir_Nesov12y
Sure, assuming we're clear on what the question means.
0[anonymous]12y
The reframed version gets much of its psychological strength from 1) intuitions that say killing is bad on top of its bad consequences and 2) intuitions that say killing has bad consequences that letting die does not have. You're taking both of those intuitions as invalid (as you have to for the framing to be equivalent), so you can't rely on conclusions largely caused by them.
0Normal_Anomaly12y
I think you mean "uncertain probability"?
5Vladimir_Nesov12y
"Certain" as in a figure of speech, like "ice cream of a certain flavor", not an indication of precision. (Although probabilities can well be precise...)
0shminux12y
Taking this argument ad absurdum: Roe vs Wade is a crime against humanity, since a fetus is potentially a person.

The alternatives I'm comparing are a living person dying vs. not dying. Living vs. never having lived is different and harder to evaluate.

5shminux12y
No, the alternatives you are comparing are reviving a frozen brain vs doing something potentially more useful, once the revival technology is available. For example, if creating a new mind has a positive utility some day, it's a matter of calculating what to spend (potentially still limited) resources on: creating a new happy mind (trivially easy even now, except for the "happy" part) or reviving/rejuvenating/curing/uploading/rehabilitating a grandpa stiff in a cryo tank (impossible now, but still probably much harder than the alternative even in the future).
7Vladimir_Nesov12y
My comment is unrelated to cryonics, I posted it to remind about framing effects of saying "not saving lives" as compared to "killing". (Part of motivation for posting it is that I find the mention of Eliezer's dead brother in the context of an argument for killing people distasteful.) As I said, harder to evaluate. I'm uncertain on which of these particular alternatives is better (considering a hypothetical tradeoff), particularly where a new mind can be made better in some respects in a more resource-efficient way.
0shminux12y
Ah, OK. I thought you were commenting on the merits of cryopreservation.
3Viliam_Bur12y
What exactly makes it absurd? I am not sure what units are best for measuring the value of a human life, so let's just say that the life of an average adult person has value 1. What would be your estimate of the value of a 3-month fetus, 6-month fetus, 9-month fetus, a newborn child, 1/2 year old child, 1 year old child, etc.? If you say that a fetus has less value than an adult person, but still a nonzero value, for example it could be 0.01, then killing 100 fetuses is like killing 1 adult person, and killing 100 000 fetuses is like killing 1 000 adult people. Calling the killing of 1 000 adult people a "crime against humanity" would be perhaps exaggerated, but not exactly absurd. If you have strong opinions on this topic, I would like to see your best try to estimate the shape of the "human life value" curve for fetuses and small children. At what age does killing a human organism become worse than having a proverbial dust speck in a rationalist's eye?
4TheOtherDave12y
Thousands of adults are in fact killed in auto accidents every year, and yet it seems to me very strange indeed to call auto accidents a crime against humanity. Thousands of adults are killed in street crimes, and it seems very strange to me to call street crime a crime against humanity. Etc., etc., etc. I conclude that my intuitions about whether something counts as a "crime against humanity" aren't especially well calibrated, and therefore that I should be reluctant to use those intuitions as evidence when thinking about scales way outside my normal experience. And of course, the value-to-me of an individual can vary by many orders of magnitude, depending on the individual. I would likely have chosen to allow my nephew's fetal development to continue rather than preserve the life of a randomly chosen adult, for example, but I don't generally value the development of a fetus more than an adult. But leaving the "crimes against humanity" labeling business aside, and assuming some typical value for a fetus and an adult, then sure, if I value a developing fetus 1/N as much as I value a living adult, then I prefer to allow 1 adult to die rather than allow the development of N fetuses to be terminated.
-4[anonymous]12y
Actually, much worse: Roe vs Wade effectively enables serial genocide.

This appears to be a different frame for the death penalty debate.

[-][anonymous]12y100

Far enough in the future, it might be possible to "cure" them. However, they will still possess toxic memories and thoughts that would greatly distress them now that they are normal. To truly repair them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of cryopreserving them?

Why not "cure" them by building a mind that can bear them without too much distress? A sufficiently different mind can, I think, bear any thoughts a human mind, "diseased" or not, can have. Do such minds necessarily or even probably hold no value to us?

At least 50% of US university students have had a homicidal fantasy this year. Guess how common rape fantasies are. To a more "sensitive" mind something like that could seem horrifying. But how ... can they think such things and still have sympathy and not go around stabbing each other all the time?

Forming a robust theory of mind and realizing that not everyone thinks or sees the world the same way you do is actually quite difficult. Consider the immense complexity of the world

...

Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of mental illness and are thus not fully responsible for their crimes.

What's the difference?

In fact, there is evidence that the brains of serial killers are measurably different from those of normal people.

They are measurably different. The simplest measure is the number of people murdered by them.

We could just keep them as-is and use different methods of keeping them from killing each other.

I ...

1Vladimir_Nesov12y
One consideration is that permanently terminated lives could be significantly undesirable, compared to continuing ones, which could outweigh the benefits of implementing a different better mind instead.
-2DanielLC12y
But all lives are permanently terminated immediately after they're created. They're then replaced with a slightly different one. I don't like the idea of having a utility system complicated enough to distinguish between those things.
7Vladimir_Nesov12y
You already do.
1DanielLC12y
No I don't.
-1Vladimir_Nesov12y
That link describes what you believe, not why those beliefs are true; my point was that you're mistaken.
1DanielLC12y
No, I'm not. I know my own values. I know the utopia dictated by my values. Please do not accuse me of being mistaken.
5FAWS12y
I think you and Vladimir are talking about different things. You probably follow your surface level moral theory as much as any human follows theirs, and unlike most people you seem to be willing to bite the bullets implied, but you don't follow it the way an AI would follow their utility function. You still notice the bullets you bite, you do things for all sorts of other reasons when you don't have the opportunity to think it through in terms of total happiness caused, and you probably eliminate all sorts of strategies that might raise total happiness for other reasons before they rise to conscious attention and you can evaluate them properly.
0DanielLC12y
If you want to really get down to it, I am not a utility maximizer. Insomuch as I try to maximize any sort of utility, I try to bring happiness. I may feel bad thinking about things that have a net increase of happiness, but I still try to bring them about. If imagining a possible future makes me feel bad, this is a fact about me, and not a fact about the possible future. I wish to get rid of the bad feeling. My instinct is to do it by averting that future, but I know better. I just make sure it's not a future in which I feel bad about it.
4Vladimir_Nesov12y
Why do you believe you do? That's an antiproductive attitude for a rationalist.
6DanielLC12y
I know my own values because they're what I try to maximize. All that's apparent to me is my qualia, and, while I concede that other people have qualia, I see no importance in anything that isn't someone's qualia. I mentioned that I know the utopia dictated by my values to show that I didn't just convince myself that it's all that I care about and ignore its implications. The utopia is tiling the universe with orgasmium. If you have a particular reason to believe that I am mistaken, please say so. If you simply accuse me of being mistaken about my own values, that doesn't help. You are not me, and you can't just assume I am like you. You don't know nearly as much about me as I do. You gave me a reason why I might not know about my own values. I showed that I had already taken this into account. You did not ask for clarification. You did not find a reason I may have failed to correctly take it into account. You did not give me another reason I might be incorrect. You simply claimed that I was wrong.
1[anonymous]12y
I'm not disputing your line of thought, but I still wonder about something I touched upon before: if neuroscience or the like were to dissolve qualia into smaller components, and it became apparent to you that "there is no unitary thing such as qualia/mind frame; the momentary experience is reducible, like an anthill, or a screen of pixels" -- would that prompt you to reassess your utopia?
0dlthomas12y
I see no reason that the reducibility of something would deny its potential status as something to be valued. I could value whirlpools without denying that they're made of water, or (for an example closer to reality) literature without denying that it's made up of words which are made up of letters.
0[anonymous]12y
Sorry for taking such a long time to answer. Agreed. But if you read DanielLC's argument, he seems to think that the reducibility of, for example, personal identity makes it unimportant in terms of value, since it can be reduced to "mind frames" over time. Basically, I wonder: if his understanding of qualia (if such a thing even really exists) turned out to be totally wrong, or qualia could be reduced, would he then claim that mind frames are morally unimportant because they can be reduced to something else, or that the concept is misleading?

I do not understand this obsession with preserving every living mind (it seems to me that EY and LW in general implicitly or explicitly subscribe to the popular notion that a body is a vessel for the mind).

Those who wish to be frozen and can afford it are free to take their chances, those who believe in eternal soul or reincarnation are free to take theirs, those who would rather die forever should not be judged, either.

It sure sucks if you want to get frozen but cannot afford it, and it is a reasonable goal to reduce cost/improve odds of revival, but it is but one of many useful goals to work on.

6Vladimir_Nesov12y
There are laws of thought, and correct decisions (that we don't know very well). What people believe is mostly irrelevant to what the right thing to do is. People might have the right (power) to do whatever they believe, they might indeed in practice be free to implement any decision they choose, and they might intrinsically value this power, but this fact is irrelevant for judging the correctness of their decisions. (In short, I object to the "everyone can make up their own correctness" mindset. We do know better than to let considerations about souls and afterlife determine the right answer.)
2shminux12y
I am not sure what you mean by "correct" and "right answer" in this case. Life is not math. If your goal is to improve the subjective quality of life of each person, there is no clear-cut answer to how to do that. If your goal is something "bigger", you better state what it is upfront, so that it and the means to achieve it can be discussed first.

Life is not math.

It's much harder than the most difficult math that humans are able to do, but the answers are still non-mysterious, and it is your calling and power as a person to seek them.

4Stuart_Armstrong12y
It seems very likely that if cryopreservation were the default option - or even just a rather standard option - many, many, many more people would go for it than do at present. And still while exercising free choice. Also, there seem to be few religious commandments against cryopreservation, so (if it worked) there would always be the option of dying or reincarnating later on. So, two possible worlds, both with free choice, and with much more death in one than the other - I see why we'd want to tilt the balance away from it.
1shminux12y
I guess I just don't give as much value to a generic human life as you do.
0Stuart_Armstrong12y
Do you give it any value? Would generic acceptance of cryopreservation be something you'd take if it were free?
0shminux12y
Not sure what you are asking. I would pay for cryo for myself if I could afford it and considered it a worthwhile investment vs other alternatives (such as a nice vacation while still alive). Presumably, if cryo was affordable and mainstream enough, many people would go for it. After all, people pay more to get buried rather than cremated, and there is precious little rationale for that.
2dlthomas12y
This seems to intersect non-trivially with positions on suicide.
0shminux12y
There are many grey areas, sure, some more politically/emotionally charged than others. Let's not complicate things by adding the terms like suicide, abortion and euthanasia into the mix.
0lessdazed12y
What does "not be judged" mean? Who disputes this?

To truly save them, they would likely need to have many or all of their memories erased.

This is question-begging. Sure, if my experiences and memories are a net negative, such that I and my surroundings are improved by wiping all of that away and starting fresh, then there's no particular reason to preserve those experiences and memories. Of course.

OTOH, if they're a net positive, then there is.

I am hesitant, and I think many others may be hesitant to engage in a debate on eugenics, not because it might trigger strong feelings (I think we as a community are capable of setting those aside), but because of the way it might be perceived by casual visitors to the site.

It would be nice if we could get some sort of agreement to ignore political correctness/face the consequences of political incorrectness and engage in what I think would be a very healthy debate.

If erasing the memories were done by artificially stimulating the mechanism that causes normal forgetting, I think they'd be the same person. After all, I don't consider myself a new person whenever I forget something. But maybe there's something I'm missing. 

To deny any thought, feeling, or memory, or the mind that harbored it, seems a bit extreme; I don't know if there's a term for it. Maybe we should save our imperfections; in many ways they are what make us human.

(I'm sorry if my entries are not as polished as most; I'm a little unrefined.)

0Karmakaiser12y
Are you sure of this? I could point to many of our technological imperfections that caused great uneasiness in their day but are now considered normal and even natural. Further, I could point to the eradication of diseases which were considered scourges of God and ultimately "part of the deal" of being human. Part of being a transhumanist is a belief that contracts can and should be rewritten as humanity becomes more technologically advanced. Now changing a personality through manufactured means is considered an acceptable treatment of Autism, Bipolar Disorder, ADHD, Depression and many others. The drugging of conscious agents against their will is a moral problem, but to deny that as an option seems to me to be backward, rather than forward, thinking. If I were offered a pill that would render my innate biases inert, I would have little hesitation in taking it. As I simulate the decision now, the only objections I can think of are social biases: wondering what people would think of a person with a black belt in rationality.
3AspiringKnitter12y
Actually, with regard to autism, at least, those are fighting words.
[-][anonymous]12y10

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of mental illness and are thus not fully responsible for their crimes. In fact, there is evidence that the brains of serial killers are measurably different from those of normal people. Far enough in the future, it might be poss

...
-3RationallyOptimistic12y
By "not fully responsible" I was trying to sidestep a free will debate. My point was that "bad" people might just have "bad" brains; perhaps they were exposed to too much serotonin while in the womb or inherited a bad set of genes, and that plus some trauma early in life might have damaged them in such a way that they were willing to commit unspeakable acts that "normal" people would not. I think it's not unlikely that whatever makes a serial killer a serial killer will eventually be identified, screened for and cured. But what to do with existing serial killers is a different problem.

I ask a different question: in a time-constrained scenario, which lives are worth losing? Some people are elevating the risk of human extinction -- producing weapons of mass destruction -- perhaps for this they deserve to die?

Every mind is sacred,

Every mind is great.

If a mind is wasted,

EY gets quite irate.

1Normal_Anomaly12y
Upvoted, but it would scan better if you took out the "quite". (Assuming you pronounce "EY" as 2 syllables.)
-7codythegreen12y
8[anonymous]12y
Sorry, that made me laugh, but in order to preserve the signal-to-noise ratio I downvoted the post. If DMing has taught me anything, it has taught me that Monty Python references need to be nipped in the bud.
0lessdazed12y
All italicization is potentially also an allusion to something by EY.

Voted you down. This is deontologist thought in transhumanist wrapping paper.

Ignoring the debate concerning the merits of eternal paradise itself and the question of Heaven's existence, I would like to question the assumption that every soul is worth preserving for posterity.

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite p

...
5steven046112y
Sure sounds like consequentialism to me.
5dlthomas12y
Is consequentialism an essential part of transhumanism?
5Alicorn12y
No.
1Vladimir_Nesov12y
Why should that be an interesting question? (What's "transhumanism", again?) What matters is whether this allows you to find correct decisions, perhaps whether it's a useful sense of "correct" to rely on when you have something to protect.
0dlthomas12y
It seemed relevant to the parent's objection to the original article.