I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription

by ChrisHallquist · 2 min read · 11th Jan 2014 · 182 comments


Bounties (closed)

Background:

On the most recent LessWrong readership survey, I assigned a probability of 0.30 to the cryonics question. I had previously been persuaded to sign up for cryonics by reading the sequences, but this thread and particularly this comment lowered my estimate of the chances of cryonics working considerably. Also relevant from the same thread was ciphergoth's comment:

By and large cryonics critics don't make clear exactly what part of the cryonics argument they mean to target, so it's hard to say exactly whether it covers an area of their expertise, but it's at least plausible to read them as asserting that cryopreserved people are information-theoretically dead, which is not guesswork about future technology and would fall under their area of expertise.

Based on this, I think there's a substantial chance that there's information out there that would convince me that the folks who dismiss cryonics as pseudoscience are essentially correct, that the right answer to the survey question was epsilon. I've seen what seem like convincing objections to cryonics, and it seems possible that an expanded version of those arguments, with full references and replies to pro-cryonics arguments, would convince me. Or someone could just go to the trouble of showing that a large majority of cryobiologists really do think cryopreserved people are information-theoretically dead.

However, it's not clear to me how well worth my time it is to seek out such information. It seems that coming up with decisive information would be hard, especially since e.g. ciphergoth has put a lot of energy into trying to figure out what the experts think about cryonics and come away without a clear answer. And part of the reason I signed up for cryonics in the first place is that it doesn't cost me much: the largest component is the life insurance for funding, only $50 / month.

So I've decided to put a bounty on being persuaded to cancel my cryonics subscription. If no one succeeds in convincing me, it costs me nothing, and if someone does succeed in convincing me the cost is less than the cost of being signed up for cryonics for a year. And yes, I'm aware that providing one-sided financial incentives like this requires me to take the fact that I've done this into account when evaluating anti-cryonics arguments, and apply extra scrutiny to them.

Note that while there are several issues that ultimately go into whether you should sign up for cryonics (the neuroscience / evaluation of current technology, the estimated probability of a "good" future, various philosophical issues), I anticipate the greatest chance of being persuaded by scientific arguments. In particular, I find questions about personal identity and consciousness of uploads made from preserved brains confusing, but think there are very few people in the world, if any, who are likely to have much chance of getting me un-confused about those issues. The offer is blind to the exact nature of the arguments given, but I mostly foresee being persuaded by the neuroscience arguments.

And of course, I'm happy to listen to people tell me why the anti-cryonics arguments are wrong and I should stay signed up for cryonics. There's just no prize for doing so.



Cryonics success is a highly conjunctive event, depending on a number of different, roughly independent events all happening.

Consider this list:

  • The cryopreservation process as performed by current cryo companies, when executed perfectly, preserves enough information to reconstruct your personal identity. Neurobiologists and cryobiologists generally believe this is improbable, for the reasons explained in the links you cited.
  • Cryocompanies actually implement the cryopreservation process substantially as advertised, without botching or faking it, or generally behaving incompetently. I think there is a significant (>= 50%) probability that they don't: there have been anecdotal allegations of misbehavior, at least one company (the Cryonics Institute) has policies that betray gross incompetence or disregard for the success of the procedure (such as keeping certain cryopatients on dry ice for two weeks), and more generally, since cryocompanies operate without public oversight and without any means to assess the quality of their work, they have every incentive to hide mistakes, take cost-saving shortcuts, use sub-par materials, equipment, or unqualified staff, or even commit outright fraud

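The conjunctive structure of this argument can be made concrete with a toy calculation: the chance of success is the product of the per-step chances, so even moderately favorable steps multiply down to a small number. The probabilities below are illustrative assumptions only, not estimates anyone in the thread has endorsed.

```python
# Toy sketch of the "highly conjunctive event" argument.
# All per-step probabilities are made up for illustration.
steps = {
    "preservation retains identity-relevant information": 0.3,
    "provider executes the procedure competently": 0.5,
    "organization survives until revival is possible": 0.3,
    "revival technology is developed and applied to you": 0.2,
}

p_success = 1.0
for step, p in steps.items():
    p_success *= p  # independent steps: probabilities multiply

print(f"joint probability: {p_success:.4f}")  # 0.3*0.5*0.3*0.2 = 0.0090
```

Note how four steps that are each individually "plausible" still yield a joint probability under 1%; this is the sense in which the argument is conjunctive.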

You forgot "You will die in a way that keeps your brain intact and allows you to be cryopreserved".

[anonymous] · 8y

"... by an expert team with specialized equipment within hours (minutes?) of your death."

[anonymous] · 8y: "...a death which left you with a functional-enough circulatory system for cryoprotectants to get to your brain, didn't involve major cranial trauma, and didn't leave you exposed to extreme heat or other conditions which could irretrievably destroy large amounts of brain information. Also the 'expert' team, which probably consists of hobbyists or technicians who have done this at best a few times and with informal training, does everything right." (This is not meant as a knock against the expert teams in question, but against civilization for not making an effort to get something better together. The people involved seem to be doing the best they can with the resources they have.)
khafra · 8y: ...Which pretty much rules out anything but death from chronic disease; which mostly happens when you get quite old; which means funding your cryo with term insurance is useless and you need to spring for the much more expensive whole life.

(My version of) the above is essentially my reason for thinking cryonics is unlikely to have much value.

There's a slightly subtle point in this area that I think often gets missed. The relevant question is not "how likely is it that cryonics will work?" but "how likely is it that cryonics will both work and be needed?". A substantial amount of the probability that cryonics does something useful, I think, comes from scenarios where there's huge technological progress within the next century or thereabouts (because if it takes longer then there's much less chance that the cryonics companies are still around and haven't lost their patients in accidents, wars, etc.) -- but conditional on that it's quite likely that the huge technological progress actually happens fast enough that someone reasonably young (like Chris) ends up getting magical life extension without needing to die and be revived first.

So the window within which there's value in signing up for cryonics is where huge progress happens soon but not too soon. You're betting on an upper as well as a lower bound to the rate of progress.
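The "soon but not too soon" window can be illustrated with a toy Monte Carlo over three uncertain dates. All three distributions below are invented purely for illustration; they are not anyone's actual estimates.

```python
# Toy model of the timing-window argument: cryonics only pays off if revival
# tech arrives AFTER you would otherwise die but BEFORE the provider is lost.
# All distributions are invented placeholders.
import random

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    death_year = random.uniform(30, 70)     # years until you'd need preservation
    org_failure = random.uniform(50, 150)   # years until the provider is lost
    tech_arrival = random.uniform(0, 300)   # years until revival/life extension
    # Too soon: you get life extension directly, preservation never needed.
    # Too late: the organization (and you) are gone before revival is possible.
    if death_year < tech_arrival < org_failure:
        hits += 1

print(hits / trials)  # fraction of sampled futures where signing up mattered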

There's a slightly subtle point in this area that I think often gets missed.

I have seen a number of people make (and withdraw) this point, but it doesn't make sense, since both the costs and benefits change (you stop buying life insurance when you no longer need it, so costs decline in the same ballpark as benefits).

Contrast with the following question:

"Why buy fire insurance for 2014, if in 2075 anti-fire technology will be so advanced that fire losses are negligible?"

You pay for fire insurance this year to guard against the chance of fire this year. If fire risk goes down, the price of fire insurance goes down too, and you can cancel your insurance at will.

NoSuchPlace · 8y: I don't think that this is meant as a complete counter-argument against cryonics, but rather a point which needs to be considered when calculating the expected benefit of cryonics. For a very hypothetical example (which doesn't reflect my beliefs) where this sort of consideration makes a big difference: Say I'm young and healthy, so that I can be 90% confident I'll still be alive in 40 years' time, and I also believe that immortality and reanimation will become available at roughly the same time. Then the expected benefit of signing up for cryonics, all else being equal, would be about 10 times lower if I expected the relevant technologies to go online either very soon (next 40 years) or very late (longer than I would expect cryonics companies to last) than if I expected them to go online some time after I very likely died but before cryonics companies disappeared. Edit: Fixed silly typo.

That would make sense if you were doing something like buying a lifetime cryonics subscription upfront that could not be refunded even in part. But it doesn't make sense with actual insurance, where you stop buying it if is no longer useful, so costs are matched to benefits.

  • Life insurance, and cryonics membership fees, are paid on an annual basis
  • The price of life insurance is set largely based on your annual risk of death: if your risk of death is low (young, healthy, etc) then the cost of coverage will be low; if your risk of death is high the cost will be high
  • You can terminate both the life insurance and the cryonics membership whenever you choose, ending coverage
  • If you die in a year before 'immortality' becomes available, then it does not help you

So, in your scenario:

  • You have a 10% chance of dying before 40 years have passed
  • During the first 40 years you pay on the order of 10% of the cost of lifetime cryonics coverage (higher because there is some frontloading, e.g. membership fees not being scaled to mortality risk)
  • After 40 years 'immortality' becomes available, so you cancel your cryonics membership and insurance after only paying for life insurance priced for a 10% risk of death
  • In this world the potential benefits are cut by a factor of 10, but so are the costs (roughly); so the cost-benefit ratio does not change by a factor of 10
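The bullets above can be sketched with a simplified term-insurance pricing model, in which the annual premium is proportional to annual mortality risk. The load factor and payout below are invented numbers; the model ignores discounting and age-varying risk.

```python
# Sketch of "costs track benefits": with annually priced term insurance,
# total premiums scale with mortality risk roughly in step with the
# probability the coverage (and the cryopreservation) is ever used.
# load and payout are assumed illustrative values.
def cost_and_need(annual_death_risk, years, load=1.2, payout=100_000.0):
    """Return (total premiums paid, probability coverage is ever used)."""
    premium = annual_death_risk * payout * load  # simplified actuarial pricing
    p_die = 1 - (1 - annual_death_risk) ** years
    return premium * years, p_die

cost_lo, p_lo = cost_and_need(0.001, 40)
cost_hi, p_hi = cost_and_need(0.002, 40)
# Doubling the mortality risk doubles the premiums and nearly doubles the
# chance the preservation is needed, so the cost-benefit ratio barely moves.
print(cost_hi / cost_lo, p_hi / p_lo)
```

This is why a lower chance of dying young cuts the expected benefit and the expected cost together, rather than cutting only the benefit.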

True. While the effect would still exist due to front-loading, it would be smaller than I assumed. Thank you for pointing this out to me.

private_messaging · 8y: Except people do usually compare the spending on the insurance, which takes the low probability of need into account, to the benefits of cryonics that are calculated without taking the probability of need into account. The issue is that it is not "cryonics or nothing". There are many possible actions. For example you can put money or time into better healthcare, to have a better chance of surviving until better brain preservation (at which point you may re-decide and sign up for it). The probability of cryonics actually working is, frankly, negligible - you can not expect people to do something like this right without any testing, even if the general approach is right and it is workable in principle* (especially not in the alternative universe where people are crazy and you're one of the very few sane ones), and is easily outweighed even by minor improvements in your general health. Go subscribe to a gym; for a young person offering $500 for changing his mind, that'll probably blow cryonics out of the water by orders of magnitude, cost-benefit wise. Already subscribed to a gym? Work on other personal risks. * I'm assuming that cryonics proponents do agree that some level of damage - cryonics too late, for example - would result in information loss that likely can not be recovered even in principle.
[anonymous] · 8y: ITYM "before".
Adele_L · 8y: When immortality is at stake, a 91% chance is much much better than a 90% chance.
private_messaging · 8y: Not if that 1% (seems way over-optimistic to me) is more expensive than other ways to gain 1%, such as by spending money or time on better health. Really, you guys are way over-awed by the multiplication of made-up probabilities by made-up benefits, forgetting that all you did was make an utterly lopsided, extremely biased pros and cons list, which is a far cry from actually finding the optimum action.
Dentin · 8y: I signed up for cryonics precisely because I'm effectively out of lower cost options, and most of the other cryonicists are in a similar situation.
private_messaging · 8y: I wonder how good an idea a yearly full-body MRI for early cancer detection is...
CellBioGuy · 8y: There are those who argue that it's more likely to find something benign you've always had and wouldn't hurt you but you never knew about, seeing as we all have weird things in us, leading to unnecessary treatments which have risks.
private_messaging · 8y: What about growing weird things? Here we very often use ultrasound (and the ultrasound is done by the medical doctor rather than by a technician); it finds weird things very well, and the solution is simply to follow up later and see if it's growing.
bogus · 8y: This can only decrease the amount of useful information you'd get from the MRI, though - it can't convert a benefit into a cost. After all, if the MRI doesn't show more than the expected amount of weirdness, you should avoid costly treatments.
ChrisHallquist · 8y: Most of these issues I was already aware of, though I did have a brief "holy crap" moment when I read this parenthetical statement: But following the links to the explanation [http://www.cryonics.org/emergency-situations/deceased-non-member], I don't think this considerably impacts my estimate of CI's competence / trustworthiness. This specific issue only affects people who didn't sign up for cryonics in advance, comes with an understandable (if not correct) rationale, and comes with acknowledgement that it's less likely to work than the approach they use for people who were signed up for cryonics before their deaths. Their position may not be entirely rational, but I didn't previously have any illusions about cryonics organizations being entirely rational (it seems to me cryonics literature has too much emphasis on the possibility of reviving the original meat as opposed to uploading).
V_V · 8y: "Less likely to work" seems a bit of a euphemism. I think that the chances that this works are essentially negligible even if cryopreservation under the best conditions did work (which is already unlikely). My point is that even if they don't apply this procedure to all their patients, the fact that CI is offering it means that they are either interested in maximizing profit instead of success probability, and/or they don't know what they are doing, which is consistent with some claims by Mike Darwin (who, however, might have had an axe to grind). Signing up for cryonics is always buying a pig in a poke, because you have no way of directly evaluating the quality of the provider's work within your lifetime; therefore the reputation of the provider is paramount. If the provider behaves in a way which is consistent with greed or incompetence, it is an extremely bad sign.
ChrisHallquist · 8y: Read a bit of Mike Darwin's complaints; those look more serious. I will have to look into that further. Can you give me a better sense of your true (not just lower bound) estimate of the chances there's something wrong with cryonics orgs on an institutional level that would lead to inadequate preservation even if they had a working procedure in theory?
V_V · 8y: I'm not sure how to condense my informal intuition into a single number. I would say > 0.5 and < 0.9, closer to the upper bound (and even closer for the Cryonics Institute than for Alcor).
Zaine · 8y: To keep the information all in one place, I'll reply here. Cryogenic preservation exists in the proof of tardigrades - also called waterbears - which can reanimate from temperatures as low as 0.15 K, and have sufficient neurophysiological complexity [http://www.ncbi.nlm.nih.gov/pubmed/22806919] to enable analysis of neuronal structural damage. We don't know if the identity of a given waterbear pre-cryobiosis is preserved post-reanimation. For that we'd need a more complex organism. However, the waterbear is idiosyncratic in its capacity for preservation; while it proves the possibility for cryogenic preservation exists, we ourselves do not have the traits of the waterbear that facilitate its capacity for preservation. In the human brain, there are billions of synapses - the map of which neurones connect to which other neurones we call the connectome: this informs who you are. According to our current theoretical and practical understanding of how memories work, if synapses degrade even the slightest amount your connectome will change dramatically, and will thus represent a different person - perhaps even a lesser human (fewer memories, etcetera). Now, let's assume uploading becomes commonplace and you mainly care about preserving your genetic self rather than your developed self (you without most of your memories and different thought processes vs. the person you've endeavoured to become), so any synaptic degradation of subsistence brain areas becomes irrelevant. What will the computer upload? Into what kind of person will your synapses reorganise? Even assuming they will reorganise might ask too much of the hypothetical. Ask yourself who - or what - you would like to cryopreserve; the more particular your answer, the more science needed to accommodate the possibility.
[anonymous] · 8y: How would you design that experiment? I would think all you'd need is a better understanding of what identity is. But maybe we mean different things by identity.
Zaine · 8y: We'd need to have a means of differentiating the subject waterbear's behaviour from other waterbears'; while not exhaustive, classically conditioning a modified reflexive reaction to stimuli (desensitisation, sensitisation) or inducing LTP or LTD on a synapse, then testing whether the adaptations were retained post-reanimation, would be a starting point. The problem comes when you try to extrapolate success in the above experiment to mean potential for more complex organisms to survive the same procedure given x. Ideally you would image all of the subject's synapses pre-freeze or pre-cryobiosis (depending on what x turns out to be), then image them again post-reanimation, and have a program search for discrepancies. Unfortunately, the closest we are to whole-brain imaging is neuronal fluorescence imaging, which doesn't light up every synapse. Perhaps it might if we use transcranial DC or magnetic stimulation to activate every cell in the brain; doing so may explode a bunch of cells, too. I've just about bent over the conjecture tree by this point.
[anonymous] · 8y: Does the waterbear experience verification and then wake up again after being thawed, or does subjective experience terminate with vitrification - subjective experience of death / oblivion - and a new waterbear with identical memories begin living?
Zaine · 8y: We need to stop and (biologically) define life and death for a moment. A human can be cryogenically frozen before or after their brain shuts down; in either case, their metabolism will cease all function. This is typically a criterion of death. However if, when reanimated, the human carries on as they would from a wee kip, does this mean they have begun a new life? Resumed their old life after a sojourn to the Underworld? You see the quandary our scenario puts to this definition of life, for the waterbear does exactly the above. They will suspend their metabolism, which can be considered death, reanimate when harsh environmental conditions subside, and go about their waterbearing ways. Again, do the waterbears live a subset of multiple lives within the set of one life? Quite confusing to think about, yes? Now let's redefine life. A waterbear ceases all metabolic activity, resumes it, then lumbers away. In sleep, one's state pre- and post-sleep will differ; one wakes up with changed neuronal connections, yet considers themselves the same person - or not, but let's presume they do. Take, then, the scenario in which one's state pre- and post-sleep does not differ; indeed, neurophysiologically speaking, it appears they've merely paused then recommenced their brain's processes, just as the time 1:31:00 follows 1:30:59. This suggests that biological life depends not on metabolic function, but on the presence of an organised system of (metabolic) processes. If the system maintains a pristine state, then it matters not how much time has passed since it last operated; the life of the system's organism will end only when that system becomes so corrupted as to lose the capacity for function. Sufficient corruption might amount to a single degraded synapse; it might amount to a missing ganglion. Thus cryonics' knottiness. As to whether they experience verification, you'll have to query a waterbear yourself.
More seriously, for any questions on waterbear experience I refer
Benquo · 8y: I think the question was a practical one and "verification" should have been "vitrification."
Zaine · 8y: I considered that, but the words seemed too different to result from a typo; I'm interested to learn the fact of the matter. I've edited the grandparent to accommodate your interpretation.
adbge · 8y: Going under anesthesia is a similar discontinuity in subjective experience, along with sleep, situations where people are technically dead for a few moments and then brought back to life, coma patients, and so on. I don't personally regard any of these as the death of one person followed by the resurrection of a new person with identical memories, so I also reject the sort of reasoning that says cryogenic resurrection, mind uploading, and Star Trek-style transportation is death. Eliezer has a post here [http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/] about similar concerns. It's perhaps of interest to note that the PhilPapers survey revealed a fairly even split on the teletransporter problem [http://philpapers.org/surveys/results.pl] among philosophers, with the breakdown being 36.2%/32.7%/31.1% as survive/other/die respectively. ETA: Ah, nevermind, I see you've already considered this [http://lesswrong.com/lw/qx/timeless_identity/9to2].
[anonymous] · 8y: Yes, that post still reflects my views. I should point out again that sleep and many forms of anesthesia don't stop operation of the brain, they just halt the creation of new memories so people don't remember. That's why, for example, some surgery patients end up with PTSD from waking up on the table, even if they don't remember. Other cases like temporary (clinical) death and revival also aren't useful comparisons. Even if the body is dying, the heart and breathing stops, etc., there are still neural computations going on from which identity is derived. The irrecoverable disassociation of the particle interactions underlying consciousness probably takes a while - hours or more, unless there is violent physical damage to the brain. Eventually the brain state fully reverts to random interactions and identity is destroyed, but clinical revival becomes impossible well before then. Cryonics is more of a weird edge case ... we don't know enough now to say with any certainty whether cryonics patients have crossed that red line or not with respect to destruction of identity.
Gunnar_Zarncke · 8y: For a formula see http://www.alcor.org/Library/html/WillCryonicsWork.html (I do find the given probabilities significantly too optimistic, though, and lacking references).
MugaSofer · 8y: Woah, really? This seems ... somewhat worse than my estimation. (Note that I am not signed up, for reasons that have nothing to do with this.) This is a good point that I hadn't heard before.
handoflixue · 8y: http://www.alcor.org/cases.html A loooot of them include things going wrong, pretty clear signs that this is a novice operation with minimal experience, and so forth. Also notice that they don't even HAVE case reports for half the patients admitted prior to ~2008. It's worth noting that pretty much all of these have a delay of at LEAST a day. There's one example where they "cryopreserved" someone who had been buried for over a year, against the wishes of the family, because "that is what the member requested." (It even includes notes that they don't expect it to work, but the family is still $50K poorer!) I'm not saying they're horrible, but they really come off as enthusiastic amateurs, NOT professionals. Cryonics might work, but the modern approach is ... shoddy at best, and really doesn't strike me as matching the optimistic assumptions of people who advocate for it.
MugaSofer · 8y: Yikes. Yeah, that seems like a serious problem that needs more publicity in cryonics circles.
V_V · 8y: I think it's also worth considering that a society of people who rarely die would probably have population issues, as there is a limited carrying capacity. That's most obvious in the case of biological humans, where even with our normal lifespan, we are already close to or even above carrying capacity. In more exotic (and thus less probable, IMHO) scenarios such as Hansonian brain emulations, the carrying capacity might perhaps be higher, but it would still be fixed, or at least it would increase slowly once all the easily reachable resources on earth have been put to use (barring, of course, extreme singularity scenarios where nanomagicbots turn Jupiter into "computronium" or something, which I consider highly improbable). Thus, if the long-lived future people are to avoid continuous cycles of population overshoot and crash, they must have some way of enforcing a population cap, whether by market forces or government regulation. This implies that reviving cryopreserved people would probably have costs other than those of the revival tech. Whoever revives you would have to split their share of resources with you in some way (or maybe, in the extreme case, commit suicide to make room for you). Hanson, for instance, predicts that his brain emulation society would be a Malthusian subsistence economy. I don't think that such a society could afford to ever revive any significant number of cryopatients, even if they had the technology (how Hanson can believe that society is likely and still be signed up for cryonics is beyond my understanding). Even if you don't think that a Malthusian scenario is likely, it is still likely that the future will be an approximately steady-state economy, which means there would be strong disincentives against adding more people.
MugaSofer · 8y: I'm inclined to agree, actually, but I would expect a post-scarcity "steady-state economy" large enough that absorbing such a tiny number of people is negligible. With that said:

  • Honestly, it doesn't sound all that implausible that humans will find ways to expand - if nothing else, without FTL (I infer you don't anticipate FTL) there's pretty much always going to be a lot of unused universe out there for many billions of years to come (until the universe expands enough that we can't reach anything, I guess.)
  • Brain emulations sound extremely plausible. In fact, the notion that we will never get them seems ... somewhat artificial in its constraints. Are you sure you aren't penalizing them merely for sounding "exotic"?
  • I can't really comment on turning Jupiter into processing substrate and living there, but ... could you maybe throw out some numbers regarding the amounts of processing power and population numbers you're imagining? I think I have a higher credence for "extreme singularity scenarios" than you do, so I'd like to know where you're coming from better.

That ... is strange. Actually, has he talked anywhere about his views on cryonics?
V_V · 8y: Obviously I don't anticipate FTL. Do you? Yes, but exploiting resources in our solar system is already difficult and costly. Currently there is nothing in space worth the cost of going there or bringing it back; maybe in the future it will be different, but I expect progress to be relatively slow. Interstellar colonization might be forever physically impossible or economically unfeasible. Even if it is feasible I expect it to be very, very slow. I think that's the best solution to Fermi's paradox. Tom Murphy discussed these issues here [http://physics.ucsd.edu/do-the-math/2011/10/why-not-space/] and here [http://physics.ucsd.edu/do-the-math/2011/10/stranded-resources/]. He focused on proven space technology (rockets) and didn't analyze more speculative stuff like mass drivers, but it seems to me that his whole analysis is reasonable. I'm penalizing them because they seem to be far away from what current technology allows (consider the current status of the Blue Brain Project [http://en.wikipedia.org/wiki/Blue_Brain_Project] or the Human Brain Project [http://en.wikipedia.org/wiki/Human_Brain_Project_%28EU%29]). It's unclear how many hidden hurdles there are, and how long Moore's law will continue to hold. Even if the emulation of a few human brains becomes possible, it's unclear that the technology would scale to allow a population of billions, or trillions as Hanson predicts. Keep in mind that biological brains are much more energy efficient than modern computers. Conditional on radical life extension technology being available, brain emulation is more probable, since it seems to be an obvious avenue to radical life extension. But it's not obvious that it would be cheap and scalable. I think the most likely scenario, at least for a few centuries, is that humans will still be essentially biological and will only inhabit the Earth (except possibly for a few Earth-dependent outposts in the solar system). Realistic population sizes will be between 2 and 10 billion.
MugaSofer · 8y: Prediction confirmed, then. I think you might be surprised how common it is, in sciencey circles anyway, to anticipate that we will eventually "solve FTL" using "wormholes", some sort of Alcubierre variant, or plain old Clarke-esque New Discoveries. I ... see. OK then. That seems like a more plausible objection. Hmm. I started to calculate out some stuff, but I just realized: all that really matters is how the number of humans we can support compares to available human-supporting resources, be they virtual, biological or, I don't know, some sort of posthuman cyborg. So: how on earth can we calculate this? We could use population projections - I understand the projected peak is around 2100 at 9 billion or so - but those are infamously unhelpful for futurists and, obviously, may not hold when some technology or another is introduced. So ... what about wildly irresponsible economic speculation? What's your opinion of the idea that we'll end up in a "post-scarcity economy", due to widespread automation etc.? Alternatively, do you think the population controls Malthusians have been predicting since forever will finally materialize? Or ... basically I'm curious as to the sociological landscape you anticipate here.
V_V · 8y: As long as we are talking about biological humans (I don't think anything else is likely, at least for a few centuries), the carrying capacity is most likely on the order of billions: each human requires a certain amount of food, water, clothing, housing, healthcare, etc. The technologies we use to provide these things are already highly efficient, hence their efficiency will probably not grow much, at least not by incremental improvement. Groundbreaking developments comparable to the invention of agriculture might make a difference, but there doesn't seem to be any obvious candidate for that which we can foresee, hence I wouldn't consider it likely. In optimistic scenarios, we get an approximately steady-state (or slowly growing) economy with high per capita wealth, with high automation relieving many people from the necessity of working long hours, or perhaps even of working at all. In pessimistic scenarios, Malthusian predictions come true, and we get either a steady-state economy at subsistence level, or growth-collapse oscillations with permanent destruction of carrying capacity due to resource depletion, climate change, nuclear war, etc., up to the most extreme scenarios of total civilization breakdown or human extinction.
3Lumifer8yThis is certainly not true for healthcare. I think that making energy really cheap ("too cheap to meter") is foreseeable and that would count as a groundbreaking development.
0V_V8yDo you think that modern healthcare is inefficient in energy and resource usage? Why? What energy source do you have in mind?
2Lumifer8yI think that modern healthcare is inefficient in general cost/benefit terms: what outputs you get at the cost of which inputs. Compared to what seems achievable in the future, of course. Fusion reactors, for example.
5V_V8yI suppose that in optimistic scenarios one could imagine cutting labor costs using high automation, but we would probably still need hospitals, drug manufacturing facilities, medical equipment factories, and so on. Always 20-30 years in the future for the last 60 years. I'm under the impression that nuclear fusion reactors might have already reached technological maturity and thus diminishing returns before becoming commercially viable. Even if commercial fusion reactors become available, they would hardly be "too cheap to meter". They have to use the deuterium-tritium reaction (deuterium-deuterium is considered practically infeasible), which has two main issues: it generates lots of high-energy neutrons, and tritium must be produced from lithium. High-energy neutrons erode any material and make it radioactive. This problem exists in conventional fission reactors, but it's more significant in fusion reactors because of the higher neutron flux. A commercial fusion reactor would probably have higher maintenance requirements and/or a shorter lifespan than a fission reactor with the same power. Lithium is not rare, but not terribly common either. If we were to produce all the energy of the world from fusion, lithium reserves would last between thousands and tens of thousands of years, assuming that energy consumption does not increase. That's clearly an abundant source of energy (in the same ballpark as uranium and thorium), but not much more abundant than other sources we are used to. Moreover, in a fission power station the fuel costs make up only a fraction of the total costs per joule of energy. Most of the costs are fixed costs of construction, maintenance and decommissioning. A fusion power station would have similar operational and decommissioning safety issues to a fission one (although it can't go into meltdown), and probably higher complexity, which means that fixed costs will dominate, as for fission power. If fusion power becomes commercially viable…
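V_V's "thousands to tens of thousands of years" lithium figure can be sanity-checked with a back-of-envelope calculation. Every input below (reserve size, world energy use, Li-6 abundance) is a rough order-of-magnitude assumption of mine, not sourced data:

```python
# Back-of-envelope check of the lithium figure above. All inputs are
# rough order-of-magnitude assumptions, not sourced data.

MEV_TO_J = 1.602e-13           # joules per MeV
DT_ENERGY_MEV = 17.6           # energy released per D-T fusion event
LI6_FRACTION = 0.075           # Li-6 share of natural lithium (breeds tritium)
AVOGADRO = 6.022e23
LI_MOLAR_MASS_G = 6.94

# Energy per kg of natural lithium, assuming each Li-6 atom ultimately
# yields one D-T fusion (ignores the extra energy of the breeding reaction).
atoms_per_kg = 1000 / LI_MOLAR_MASS_G * AVOGADRO
li6_per_kg = atoms_per_kg * LI6_FRACTION
energy_per_kg_j = li6_per_kg * DT_ENERGY_MEV * MEV_TO_J

WORLD_ENERGY_J_PER_YEAR = 5.5e20   # rough world primary energy consumption
LI_RESERVES_KG = 4e10              # ~40 million tonnes, order of magnitude

years = LI_RESERVES_KG * energy_per_kg_j / WORLD_ENERGY_J_PER_YEAR
print(f"{energy_per_kg_j:.1e} J per kg of natural Li, ~{years:.0f} years")
```

With ~40 Mt of reserves the sketch lands around a thousand years; substituting the much larger estimates of total lithium resources pushes it into the tens of thousands, consistent with the comment.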
-2Lumifer8yNo, I primarily mean new ways of treatment. For example, a hypothetical country which can easily cure Alzheimer's would have much lower costs of medical care for the elderly. Being able to cure (as opposed to control) diabetes, a large variety of autoimmune disorders, etc. has the potential to greatly improve the efficiency of health care. Yes, but I am not saying it would happen, I'm saying this is an example of what might happen. You're basically claiming that there will be no major breakthroughs in the foreseeable future -- I disagree, but of course can't come up with bulletproof examples :-/
2V_V8yI see. But the point is how much disability people will have before they die. It's not obvious to me that it will go down; if anything, it has gone up in the recent past. I'm claiming that breakthroughs which increase the amount of available energy or other scarce resources by a huge amount don't seem especially likely in the foreseeable future.
0private_messaging8yFrom Wikipedia: It's already happening. Current process size is ~22nm, silicon lattice size is ~0.5nm. Something around 5-10 nm is the limit for photolithography, and we don't have any other methods of bulk manufacturing in sight. The problem with individual atoms is that you can't place them in bulk because of the stochastic nature of the interactions.

I'll bite. (I don't want the money. If I get it, I'll use it for what some on this site consider ego-gratifying wastage: Give Directly or some similar charity.)

If you look around, you'll find "scientist"-signed letters supporting creationism. Philip Johnson, a Berkeley law professor, is on that list, but you find a very low percentage of biologists. If you're using lawyers to sell science, you're doing badly. (I am a lawyer.)

The global warming issue has better lists of people signing off, including one genuinely credible human: Richard Lindzen of MIT. Lindzen, though, has oscillated from "manmade global warming is a myth," to a more measured view that the degree of manmade global warming is much, much lower than the general view. The list of signatories to a global warming skeptic letter contains some people with some qualifications on the matter, but many who do not seem to have expertise.

Cryonics? Well, there's this. Assuming they would put any neuroscience qualifications that the signatories had... this looks like the intelligent design letters. Electrical engineers, physicists... let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics.

let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics

This is a little unfair: if you have neuroscience experience and think cryonics is very important, then going to work for Alcor or CI may be where you can have the most impact. At which point others note that you're financially dependent on people signing up for cryonics and write you off as biased.

In a world where cryonics were obviously worthwhile to anyone with neuroscience expertise, one would expect to see many more cryonics-boosting neuroscientists than could be employed by Alcor and CI. Indeed, you might expect there to be more major cryonics orgs than just those two.

In other words, it's only unfair if we think the size of the "neuroscientist" pool is roughly comparable to the size of the market for cryonics researchers. It's not, so IMO JRMayne raises an interesting point, and not one I'd considered before.

3James_Miller8yEconomists are the scientists most qualified to speculate on the likely success of cryonics, because this kind of prediction involves speculating on long-term technological trends, and although all of mankind is bad at this, economists at least try to do so with rigor.
9jefftk8y"How likely is it that the current cryonics process prevents information-theoretic death" is a question for neuroscientists, not economists.
2MakoYass2yI wonder if the heaviest claims cryonics makes are mostly split between civics (questions like: can an operation keep running long enough, will there always be people who care about reviving the stiffs) and computer science (can the information needed be recovered from what remains), and whether the questions that are in the domain of neuroscience (what biochemical information is important) might be legible enough to people outside the field that neuroscientists don't end up being closer to the truth. I wouldn't say so, judging by the difficulties the OpenWorm project is having in figuring out which information is important, but it's conceivable a time will come when it falls this way. This is making me wonder how often people assume a question resides exclusively in one field when it's split between a number of fields in such a way that a majority of the experts in the one assumed focal field don't tend to be right about it.
0James_Miller8yIdentical twins raised apart act fairly similarly, and economists are better qualified to judge this claim than neuroscientists. Given my DNA and all the information saved in my brain by cryonics, it almost certainly would be possible for a super-intelligence with full nanotech to create something which would act similarly to how I do in similar circumstances. For me at least, that's enough to preserve my identity and have cryonics work. So for me the answer to your question is almost certainly yes. To know if cryonics will work we need to estimate long-term tech trends to guess if Alcor could keep my body intact long enough until someone develops the needed revival technologies.
4TheOtherDave8yI'm curious... if P1 is the probability that a superintelligence with full nanotech can create something which would act similar to how you do in similar circumstances given your DNA and all the information in your cryonically frozen brain, and P2 is that probability given just your DNA, what's your estimate of P1/P2?
2James_Miller8yGood point; especially if you include everything I have published in both P1 and P2, then P1 and P2 might be fairly close. This, along with the possibility of time travel to bring back the dead, is a valid argument against cryonics. Even in these two instances, cryonics would be valuable as a strong signal to the future that yes, I really, really want to be brought back. Also, the more information the super-intelligence has, the better job it will do. Cryonics working isn't a completely binary thing.
4TheOtherDave8ySo... it sounds like you're saying that your confidence that cryonic preservation differentially prevents information-theoretic death is relatively low (given that you estimate the results with and without it to be fairly close)... yes? (nods) What's your estimate of the signal-strength ratio, to such a superintelligence, of your preferences in the matter, between (everything it knows about you + you signed up for cryonics) and (everything it knows about you + you didn't sign up for cryonics)? True.
0James_Miller8yYes given an AI super-intelligence trying to bring me back. I'm not sure. So few people have signed up for cryonics and given cryonics' significant monetary and social cost it does make for a powerful signal.
0TheOtherDave8yIf we assume there is no AI superintelligence trying to bring you back, what's your estimate of the ratio of the probabilities of information-theoretic death given cryonic preservation and absent cryonic preservation? To a modern-day observer, I agree completely. Do you think it's an equally powerful signal to the superintelligence you posit?
0James_Miller8yI don't know enough about nanotech to give a good estimate of this path. The brain uploading path via brain scans is reasonable given cryonics and, of course, hopeless without it.
0TheOtherDave8yOK... thanks for clarifying.
0jefftk8yHave you considered getting your DNA sequenced and storing that in a very robust medium?
0James_Miller8yYes. I'm a member of 23andMe, although they don't do a full sequencing.
2jefftk8ySorry, I should be more clear. If you think your DNA is going to be really helpful to a superintelligence bringing you back, then it would make sense to try to increase the chances it stays around. 23andMe is a step in this direction, but as full genome sequencing gets cheaper, at some point you should probably do that too. It's already much cheaper than cryonics and in a few years should be cheaper by an even larger margin.

I'm glad you attached your bounty to a concrete action (cancelling your cryonics subscription) rather than something fuzzy like "convincing me to change my mind". When someone offers a bounty for the latter I cynically expect them to use motivated cognition to explain away any evidence presented, and then refuse to pay out even if the evidence is very strong. (While you might still end up doing that here, the bounty is at least tied to an unambiguously defined action.)

2Kawoomba8yNot really, because the sequence of events is "Change my mind", then "Cancel subscription", i.e. the latter hinges on the former. Hence, since "Change my mind" is a necessary prerequisite, the ambiguity remains.
1satt8yWhen all is said & done, we may never know whether Chris Hallquist really did or really should have changed his mind. But, assuming Alcor/CI is willing to publicly disclose CH's subscription status, we will be able to decide unambiguously whether he's obliged to cough up $500.
1Kawoomba8yObviously a private enterprise won't publicly disclose the subscription status of its members. He can publicly state whatever he wants regarding whether he changed his mind or not, no matter what he actually did. He can publicly state whatever he wants regarding whether he actually cancelled his subscription, no matter what he actually did. If you assume OP wouldn't actually publicly lie (but still be subject to motivated cognition, as you said in the grandparent), then my previous comment is exactly right. You don't avoid any motivated cognition by adding an action which is still contingent on the problematic "change your mind" part. In the end, you'll have to ask him "Well, did you change your mind?", and whether he answers you "yes or no" versus "I cancelled my subscription" or "I did not cancel my subscription" comes out to the same thing.
7James_Miller8yWhen Alcor was fact checking my article titled Cryonics and the Singularity [http://www.alcor.org/cryonics/Cryonics2012-4.pdf] (page 21) for their magazine they said they needed some public source for everyone I listed as a member of Alcor. They made me delete reference to one member because my only source was that he had told me of his membership (and had given me permission to disclose it).
0Kawoomba8yGood article, you should repost it as a discussion topic or in the open thread.
0satt8yNot so obvious to me. CH could write to Alcor/CI explaining what he's done, and tell them he's happy for them to disclose his subscription status for the purpose of verification. (Even if they weren't willing to follow through on that, CH could write a letter asking them to confirm in writing that he's no longer a member, and then post a copy of the response. CH might conceivably fake such a written confirmation, but I find it very unlikely that CH would put words in someone else's mouth over their faked signature to save $500.)

Supposing that you get convinced that a cryonics subscription isn't worth having for you.

What's the likelihood that it's just one person offering a definitive argument rather than a collaborative effect? If the latter, will you divide the $500?

2ChrisHallquist8yGood question, should have answered it in the OP. The answer is possibly, but I anticipate a disproportionate share of the contribution coming from one person, someone like kalla724, and in that case it goes to that one person. But definitely not divided between the contributors to an entire LW thread.

It is likely that you would not wish for your brain-state to be available to all-and-sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.

1jowen8yThis argument has made me start seriously reconsidering my generally positive view of cryonics. Does anyone have a convincing refutation? The best I can come up with is that if resuscitation is likely to happen soon, we can predict the values of the society we'll wake up in, especially if recovery becomes possible before more potentially "value disrupting" technologies like uploading and AI are developed. But I don't find this too convincing.
1topynate8yMy attempt at a reply turned into an essay, which I've posted here [http://lesswrong.com/r/discussion/lw/jhw/recreational_cryonics/].
0Ishaan8yThis answer raises the question of how narrow the scope of the contest is: Do you want to specifically hear arguments from scientific evidence about how cryonics is not going to preserve your consciousness? Or, do you want arguments not to do cryonics in general? Because that can also be accomplished via arguments as to the possible cons of having your consciousness preserved, arguments towards opportunity costs of attempting it (effective altruism), etc. It's a much broader question. (Edit - nevermind, answered in the OP upon more careful reading)
[-][anonymous]8y 6

You have read the full kalla724 thread, right?

I think V_V's comment is sufficient for you to retract your cryonics subscription. If we get uFAI you lose anyways, so I would be putting my money into that and other existential risks. You'll benefit a lot more people that way.

I had read some of that thread, and just went and made a point of reading any comments by kalla724 that I had missed. Actually, I had them in mind when I made this thread - hoping that $500 could induce a neuroscientist to write the post kalla724 mentioned (but as far as I can tell never wrote), or else be willing to spend a few hours fielding questions from me about cryonics. I considered PMing kalla724 directly, but they don't seem to have participated in LW in some time.

Edit: PM'd kalla724. Don't expect a response, but seemed worth the 10 seconds on that off-chance.

7Furcas8yKalla724 is strongly convinced that the information that makes us us won't be preserved by current cryonics techniques, and he says he's a neuroscientist. Still, it would be nice if he'd write something a bit more complete so it could be looked at by other neuroscientists who could then tell us if he knows what he's talking about, at least.

My objection to cryonics is financial - I'm all for it if you're a millionaire, but most people aren't. For most people, cryonics will eat a giant percentage of your life's total production of wealth, for a fairly faint hope of resurrection. The exact chances are a judgement call, but I'd ballpark it at about 10%, because there are so very many realistic ways that things can go wrong.

If your cryonics insurance is $50/month, unless cryonics is vastly cheaper than I think it is, it's term insurance, and the price will jump drastically over time (2-3x per... (read more)

4ChrisHallquist8y$50/month is for universal life insurance. It helps that I'm young and a non-smoker.
4Alsadius8yWhat payout? And "universal life" is an incredibly broad umbrella - what's the insurance cost structure within the UL policy? Flat, limited-pay, term, YRT? (Pardon the technical questions, but selling life insurance is a reasonably large portion of my day job). Even for someone young and healthy, $50/mo will only buy you $25-50k or so. I thought cryonics was closer to $200k.
3ChrisHallquist8y$100k. Cryonics costs vary with method and provider. I don't have exact up-to-date numbers, but I believe the Cryonics Institute charges ~$30k, while Alcor charges ~$80k for "neuro" (i.e. just your head) or ~$200k for full-body.
2Alsadius8yRunning the numbers, it seems you can get a bare-bones policy for that. I don't tend to sell many bare-bones permanent policies, though, because most people buying permanent insurance want some sort of growth in the payout to compensate for inflation. But I guess with cheaper cryo than I expected, the numbers do add up. Cryo may be less crazy than I thought.
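A minimal sketch of the premium arithmetic being discussed: the $50/month and $100k figures come from the thread, while the 60-year horizon and 4% annual investment return are assumptions of mine, purely for illustration.

```python
# Premium arithmetic sketch. $50/month and $100k payout are from the
# thread; the 60-year horizon and 4% return are illustrative assumptions.

monthly_premium = 50
years_paying = 60
payout = 100_000

total_premiums = monthly_premium * 12 * years_paying   # 36,000 nominal

# The insurer invests premiums for decades. Future value of the premium
# stream, compounded yearly at the assumed 4%:
annual_premium = monthly_premium * 12
fv = 0.0
for _ in range(years_paying):
    fv = (fv + annual_premium) * 1.04

print(total_premiums, round(fv))   # nominal total is far below the payout,
                                   # but the invested stream grows past it
```

The point is only that a level premium stream invested over decades can plausibly cover a payout several times its nominal total; actual universal-life pricing depends on mortality costs, fees, and the policy's internal structure.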
[-][anonymous]8y 5

Be aware that you are going to get a very one-sided debate. I am very much pro-cryonics, but you're not going to hear much from me or others like me because (1) I'm not motivated to rehash the supporting arguments, and (2) attaching monetary value actually disincentivizes me from participating (particularly when I am unlikely to receive it).

ETA: Ok, I said that and then I countered myself by being compelled to respond to this point:

In particular, I find questions about personal identity and consciousness of uploads made from preserved brains confusing, but t

... (read more)
3kalium8yWhy shouldn't uploading affect his decision? If he's resurrected into a physical body and finds the future is not a place he wants to live, he can opt out by destroying his body. If he's uploaded, there is very plausibly no way out.
3Ishaan8yCurious - would you retain this belief if uploading actually happened, the uploaded consciousnesses felt continuity, and external observers could tell no difference between the uploaded consciousnesses and the original consciousnesses? (Because if so, you can just have an "only if it works for others may you upload me" clause)
1[anonymous]8yWhom are you asking? I'd be dead. That computer program running a simulation of me would be a real person, yes, with all associated moral implications. It'd even think and behave like me. But it wouldn't be me - a direct continuation of my personal identity - any more than my twin brother or any of the multiverse copies of "me" are actually me. If my brain was still functioning at all I'd be cursing the technicians as they ferry my useless body from the uploader to the crematorium. Then I'd be dead while some digital doppelgänger takes over my life. Do you see? This isn't about whether uploading works or not. Uploading, when it works, creates a copy of me. It will not continue my personal existence. We can be sure of this, right now.
2TheOtherDave8yOn what grounds do you believe that the person who wrote that comment is the same person who is reading this response? I mean, I assume that the person reading this response thinks and behaves like the same person (more or less), and that it remembers having been the person who wrote the comment, but that's just thought and behavior and memory, and on your account those things don't determine identity. So, on your account, what does determine identity? What observations actually constitute evidence that you're the same person who wrote that comment? How confident are you that those things are more reliable indicators of shared identity than thought and behavior and memory?
1[anonymous]8yBy examining the history of interactions which occurred between the two states. Because it is very easy to construct thought experiments which show that thought, behavior, and memory are not sufficient for making a determination. For example, imagine a non-destructive sci-fi teleporter. The version of you I'm talking to right now walks into the machine, sees some flashing lights, and then walks out. Some time later another Dave walks out of a similar machine on Mars. Now step back a moment in time. Before walking into the machine, what experience do you expect to have after: (1) walking back out or (2) waking up on Mars?
2TheOtherDave8yWell, yes, but what are you looking for when you do the examination? That is, OK, you examine the history, and you think "Well, I observe X, and I don't observe Y, and therefore I conclude identity was preserved." What I'm trying to figure out is what X and Y are. Both.
0Dentin8yWith 50% probability, I expect to walk back out, and with 50% probability I expect to wake up on Mars. Both copies will feel like, and believe, that they are the original.
1[anonymous]8yBut you expect one or the other, right? In other words, you don't expect to experience both futures, correct? Now what if the replicator on Mars gets stuck, and starts continuously outputting Dentins. What is your probability of staying on Earth now? Further, doesn't it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continues to operate just fine on Earth, you suddenly find yourself on Mars, and someone else takes over your life on Earth? What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?
3TheOtherDave8yNot Dentin, but since I gave the same answer above I figured I'd weigh in here. I expect to experience both futures, but not simultaneously. Somewhat similarly, if you show me a Necker cube [http://en.wikipedia.org/wiki/Necker_cube], do I expect to see a cube whose front face points down and to the left? Or a cube whose front face points up and to the right? Well, I expect to see both. But I don't expect to see both at once... I'm not capable of that. (Of course, the two situations are not the same. I can switch between views of a Necker cube, whereas after the duplication there are two mes each tied to their own body.) I will stay on Earth, with a probability that doesn't change. I will also appear repeatedly on Mars. Well, sure, in the real world it seems very odd to take this possibility seriously. And, indeed, it never seems to happen, so I don't take it seriously... I don't in fact expect to wake up on Mars. But in the hypothetical you've constructed, it doesn't seem odd at all... that's what a nondestructive teleporter does. (shrug) In ten minutes, someone will take over my life on Earth. They will resemble me extremely closely, though there will be some small differences. I, as I am now, will no longer exist. This is the normal, ordinary course of events; it has always been like this. I'm comfortable describing that person as me, and I'm comfortable describing the person I was ten minutes ago as me, so I'm comfortable saying that I continue to exist throughout that 20-minute period. I expect me in 10 minutes to be comfortable describing me as him. If in the course of those ten minutes, I am nondestructively teleported to Mars, someone will still take over my life on Earth. Someone else, also very similar but not identical, will take over my life on Mars. I'm comfortable describing all of us as me. I expect both of me in 10 minutes to be comfortable describing me as them. 
That certainly seems odd, but again, what's odd about it is the nondestructive teleporter.
1Dentin8yNo, I would never expect to simultaneously experience being on both Mars and Earth. If you find anyone who believes that, they are severely confused, or are trolling you. If I know the replicator will get stuck and output 99 dentins on Mars, I would only expect a 1% chance of waking up on earth. If I'm told that it will only output one copy, I would expect a 50% chance of waking up on earth, only to find out later that the actual probability was 1%. The map is not the territory. Not at all. In fact, it seems odd to me that anyone would be surprised to end up on Mars. Because consciousness is how information processing feels from the inside, and 'information processing' has no intrinsic requirement that the substrate or cycle times be continuous. If I pause a playing wave file, copy the remainder to another machine, and start playing it out, it still plays music. It doesn't matter that the machine is different, that the decoder software is different, that the audio transducers are different - the music is still there. Another, closer analogy is that of the common VM: it is possible to stop a VPS (virtual private server), including operating system, virtual disk, and all running programs, take a snapshot, copy it entirely to another machine halfway around the planet, and restart it on that other machine as though there were no interruption in processing. The VPS may not even know that anything has happened, other than suddenly its clock is wrong compared to external sources. The fact that it spent half an hour 'suspended' doesn't affect its ability to process information one whit.
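Dentin's VM analogy can be made concrete with a toy example: serialize the complete state of a running computation, discard the original, and resume a copy from the snapshot. This is only an illustration of the "no continuous substrate required" point, not an argument about minds:

```python
import pickle

# Toy version of the VM-snapshot analogy: a computation's complete state
# is serialized, the original is discarded, and a copy resumes elsewhere
# from the snapshot. Illustration only.

class Accumulator:
    def __init__(self):
        self.total = 0
        self.steps = 0

    def step(self, x):
        self.total += x
        self.steps += 1

a = Accumulator()
for i in range(5):
    a.step(i)                  # processes 0..4

snapshot = pickle.dumps(a)     # "suspend": full state as a byte string
del a                          # the original "machine" is gone

b = pickle.loads(snapshot)     # "resume" on another machine / at a later time
for i in range(5, 10):
    b.step(i)                  # picks up exactly where the original stopped

print(b.total, b.steps)        # 45 10
```

The resumed object has no way to detect, from its own state, that it was ever suspended or moved.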
2Ishaan8yOK, I was just checking. There were two ways to interpret your statement - that uploads won't be identical human beings (an empirical statement) vs. that uploads will disrupt your continuity (a philosophical statement). I was just wondering which one it was. I'm interested in hearing arguments against uploading:
- How do you know right now that you are a continuity of the being that existed one-hour-in-the-past, and that the being that exists one-hour-in-the-future will be in continuity with you?
- Would you ever step into a sci-fi style teleporter?
- Cryonics constitutes "pausing" and "resuming" yourself. How is this sort of temporal discontinuity different from the spatial discontinuity involved in teleporting?
1[anonymous]8yThe latter, but they are both empirical questions. The former deals with comparing informational configurations at two points in time, whereas the latter is concerned with the history of how we went from state A to state B (both having real-world implications). We need more research on the physical basis for consciousness [http://lesswrong.com/r/discussion/lw/jgd/link_consciousness_as_a_state_of_matter_max/] to understand this better such that we can properly answer the question. Right now all we have is the fleeting experience of continued identity moment to moment, and the induction principle, which is invalid to apply over singular events like destructive uploading. My guess as to the underlying nature of the problem is that consciousness exists in any complex interaction of particles - not the pattern itself, but the instantiation of the computation. And so long as this interaction is continuous and ongoing we have a physical basis for the continuation of subjective experience. Never, for the same reasons. Pausing is a metaphor. You can't freeze time, and chemistry never stops entirely. The particles in a cryonic patient's brain keep interacting in complex, albeit much slowed down, ways. Recall that the point of pumping the brain full of anti-freeze is that it remains intact and structurally unmolested even at liquid nitrogen temperatures. It is likely that some portion of biological activity is ongoing in cryostasis, albeit at a glacial pace. This may or may not be sufficient for continuity of experience, but unlike uploading the probability is at least not zero. BTW the problem with teleporting is not spatial or temporal. The problem is that the computational process which is the subjective experience of the person being teleported is interrupted. The machine violently disassembles them and they die, then somewhere else a clone/copy is created. If you have trouble seeing that, imagine that the process is not destructive. 
You step into the teleporter, it sc
1Ishaan8yMy current thought on the matter is that Ishaan0 stepped into the teleporter, Ishaan1a stepped out of the teleporter, and Ishaan1b was replicated by the teleporter. At time 2, Ishaan2a was shot, and Ishaan2b survived. Ishaan0 → Ishaan1a → Ishaan2a just died. Ishaan0 → Ishaan1b → Ishaan2b → Ishaan3b → ... gets to live on. So Ishaan0 can be said to have survived, whereas Ishaan1a has died. The way I see it, my past self is "dead" in every respect other than that my current self exists and contains memories of that past self. I don't think there is anything fundamental saying we ought to be able to have "expectations" about our future subjective experiences, only "predictions" about the future. Meaning, if Ishaan0 had a blindfold on, then at time1 when I step out of the teleporter I would have memories which indicate that my current qualia qualify me to be in the position of either Ishaan1a or Ishaan1b. When I take my blindfold off, I find out which one I am.
0Dentin8yIt sounds to me like you're ascribing some critical, necessary aspect of consciousness to the 'computation' that occurs between states, as opposed to the presence of the states themselves. It strikes me as similar to the 'sampling fallacy' of analog audio enthusiasts, who constantly claim that digitization of a recording is by definition lossy because a discrete stream can not contain all the data needed to reconstruct a continuous waveform.
0[anonymous]8yAbsolutely (although I don't see the connection to analog audio). Is a frozen brain conscious? No. It is the dynamic response of the brain from which the subjective experience of consciousness arises. See a more physical explanation here [http://lesswrong.com/lw/jgd/link_consciousness_as_a_state_of_matter_max/].
0Dentin8yThe connection to analog audio seems obvious to me: a digitized audio file contains no music, it contains only discrete samples taken at various times, samples which when played out properly generate music. An upload file containing the recording of a digital brain contains no consciousness, but is conscious when run, one cycle at a time. A sample is a snapshot of an instant of music; an upload is a snapshot of consciousness. Playing out a large number of samples creates music; running an upload forward in time creates consciousness. In the same way that a frozen brain isn't conscious but an unfrozen, running brain is - an uploaded copy isn't conscious, but a running, uploaded copy is. That's the point I was trying to get across. The discussion of samples and states is important because you seem to have this need for transitions to be 'continuous' for consciousness to be preserved - but the sampling theorem explicitly says that's not necessary. There's no 'continuous' transition between two samples in a wave file, yet the original can still be reconstructed perfectly. There may not be a continuous transition between a brain and its destructively uploaded copy - but the original and 'continuous transition' can still be reconstructed perfectly. It's simple math. As a direct result of this, it seems pretty obvious to me that consciousness doesn't go away because there's a time gap between states or because the states happen to be recorded on different media, any more than breaking a wave file into five thousand non-contiguous sectors on a hard disk platter destroys the music in the recording. Pretty much the only escape from this is to use a mangled definition of consciousness which requires 'continuous transition' for no obvious good reason.
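The sampling-theorem claim Dentin invokes can be checked numerically: reconstruct a band-limited signal from its discrete samples via Whittaker-Shannon interpolation and compare against the original. The signal, rates, and window size below are arbitrary choices for the demonstration:

```python
import numpy as np

# Numerical check of the sampling theorem: a band-limited signal, sampled
# above the Nyquist rate, is recovered from its discrete samples alone by
# Whittaker-Shannon (sinc) interpolation. Parameters are arbitrary.

f = 3.0                        # signal frequency, Hz
fs = 10.0                      # sample rate, Hz (Nyquist rate is 2*f = 6 Hz)
T = 1.0 / fs

n = np.arange(-200, 201)       # sample instants; wide window to limit edge error
samples = np.sin(2 * np.pi * f * n * T)

# x(t) = sum_n x[n] * sinc((t - n*T) / T)   (np.sinc is the normalized sinc)
t = np.linspace(-1.0, 1.0, 501)
reconstructed = np.array(
    [np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t]
)
original = np.sin(2 * np.pi * f * t)

max_err = np.max(np.abs(reconstructed - original))
print(max_err)                 # small; shrinks further as the window widens
```

The residual error here comes only from truncating the (ideally infinite) sample window; with infinitely many samples the reconstruction is exact, which is the "simple math" the comment appeals to.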
0[anonymous]8yI'm not saying it goes away, I'm saying the uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious yes, but it is not me in the sense that if I walk into an uploader I expect to walk out again in my fleshy body. Maybe that scan is then used to start a simulation from which arises a fully conscious copy of me, but I don't expect to directly experience what that copy experiences.
0Dentin8yThe uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious yes, and it is me in the sense that I expect with high probability to wake up as an upload and watch my fleshy body walk out of the scanner under its own power. Of course I wouldn't expect the simulation to experience the exact same things as the meat version, or expect to experience both copies at the same time. Frankly, that's an idiotic belief; I would prefer you not bring it into the conversation in the future, as it makes me feel like you're intentionally trolling me. I may not believe what you believe, but even I'm not that stupid.
1ArisKatsaris8yI honestly don't know how "copy" is distinct from "continuation" on a physical level and/or in regards to 'consciousness'/'personal existence'. If the MWI is correct, every moment I am copied into a billion versions of myself. Even if it's wrong, every moment I can be said to be copied to a single future version of myself. Both of these can be seen as 'continuations' rather than 'copies'. Why would uploading be different? Mind you, I'm not saying it necessarily isn't -- but I understand too little about consciousness to argue about it definitively and with the certainty you claim one way or another.
1[anonymous]8yIt's not any different, and that's precisely the point. Do you get to experience what your MWI copies are doing? Does their existence in any way benefit you, the copy which is reading this sentence? No? Why should you care if they even exist at all? So it goes with uploading. That person created by uploading will not be you any more than some alternate dimension copy is you. From the outside I wouldn't be able to tell the difference, but for you it would be very real: you, the person I am talking to right now, will die, and some other sentient being with your implanted memories will take over your life. Personally I don't see the benefit of that, especially when it is plausible that other choices (e.g. revival) might lead to continuation of my existence in the way that uploading does not.
1ArisKatsaris8yUh, the present me is experiencing none of the future. I will "get to experience" the future, only via all the future copies of me that have a remembered history that leads back to the present me. If none of the future mes exist, then that means I'm dead. So of course I care because I don't want to die? I think we're suffering from a misunderstanding here. The MWI future copy versions of me are not something that exist in addition to the ordinary future me, they are the ordinary future me. All of them are, though each of them has only one remembered timeline. Or "that person created by uploading will be as much me as any future version of me is me".
0[anonymous]8yI'm a physicist; I understand MWI perfectly well. Each time we decohere we end up on one branch and not the others. Do you care at all what happens on the others? If you do, fine, that's very altruistic of you.
0ArisKatsaris8yLet me try again. First example: Let's say that tomorrow I'll decohere into 2 versions of me, version A and version B, with equal measure. Can you tell me whether now I should only care about what happens to version A or only about version B? No, you can't. Because you don't know which branch I'll "end up on" (in fact I don't consider that statement meaningful, but even if it was meaningful, we wouldn't know which branch I'd end up on). So now I have to care about those two future branches equally. Until I know which one of these I'll "end up on", I have no way to judge between them. Second example: Let's say that tomorrow instead of decohering via MWI physics, I'll split into 2 versions of me, version U via uploading, and version P via ordinary physics. Can you tell me in advance why now I should only be caring about version (P) and not about version (U)? Seems to me that like in the first example I can't know which of the two branches "I'll end up on". So now I must care about the two future versions equally.
-1[anonymous]8yYes, you'd care about P and not U, because there's a chance you'd end up on P. There's zero chance you'd end up as U. Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?
0Dentin8yIn the case of MWI physics, I don't care about the other copies at all, because they cannot interact with me or my universe in any way whatsoever. That is not true for other copies of myself I may make by uploading or other mechanisms. An upload will do the same things that I would do, will have the same goals I have, and will in all probability do things that I would approve of, things which affect the universe in a way that I would probably approve of. None of that is true for an MWI copy.
0Dentin8yThis statement requires evidence or at least a coherent argument.
1[anonymous]8yActually, I think the burden of proof lies in the other direction. By what mechanism might you think that your subjective experience would carry over into the upload, rather than stay with your biological brain while the upload diverges as a separate individual? That's the more extraordinary belief.
0Dentin8yI think this is at least partially a bogus question/description. Let me break it up into pieces: This postulates an 'either/or' scenario, which in my mind isn't valid. A subjective experience carries over into the upload, and a subjective experience also stays in the biological brain. There isn't a need for the subjective experience to have a 'home'. It's ok for there to be two subjective experiences, one in each location. Of course the upload diverges from the biological. Or rather, the biological diverges from the upload. This was never a question. Of course the two subjective experiences diverge over time. And lastly: By the sampling theorem, which separates the content from the substrate.
1[anonymous]8yYou are talking about something completely different. Can you describe to me what it feels like for someone to be nondestructively scanned for upload? What should someone walking into the clinic expect?
1Dentin8ySample scenario 1: I go to an upload clinic. They give me a coma inducing drug and tell me that it will wear off in approximately 8 hours, after the scan is complete. As I drift off, I expect a 50% chance that I will awake to find myself an upload, and a 50% chance that I will awake to find myself still stuck in a meat body. Sample scenario 2: I go to an upload clinic. They tell me the machine is instantaneous and I will be conscious for the scan, and that the uploaded copy will be fully tested and operational in virtual form in about an hour. I step into the machine. I expect with 50% probability that I will step out of the machine after the scan, not feeling particularly different, and that an hour later I'll be able to talk to my virtual upload in the machine. I also expect with 50% probability that I will find myself spontaneously in virtual form the instant after the scan completes, and that when I check the clock, an hour or more of real time will have passed even though it felt instantaneous to me. (Waking up as an upload in scenario 2 doesn't seem much different from being put under for surgery to me, at least based on my experiences. You're talking, then suddenly everything is in a different place and the anesthesiologist is asking 'can you tell me your name', interrupting your train of thought, and half an hour has passed and the doctor has totally lost track of the conversation right when it was getting interesting.)
0[anonymous]8yOk, I understand your position. It is not impossible that what you describe is reality. However I believe that it depends on a model of consciousness / subjective experience / personal identity as I have been using those terms which has not definitely been shown to be true. There are other plausible models which would predict with certainty that you would walk out of the machine and not wake up in the simulator. Since (I believe) we do not yet know enough to say with certainty which theory is correct, the conservative, dare I say rational way to proceed is to make choices which come out favorably under both models. However in the case of destructive uploading vs. revival in cryonics we can go further. Under no model is it better to upload than to revive. This is analogous to scenario #2 - where the patient has (in your model) only a 50% chance of ending up in the simulation vs. the morgue. If I'm right he or she has a 0% chance of success. If you are right then that same person has a 50% chance of success. Personally I'd take revival with a 100% chance of success in both models (modulo chance of losing identity anyway during the vitrification process).
0Dentin8yNothing I said implied a '50% chance of ending up in the simulation vs. the morgue'. In the scenario where destructive uploading is used, I would expect to walk into the uploading booth, and wake up as an upload with ~100% probability, not 50%. Are you sure you understand my position? Signs point to no.
0ArisKatsaris8yWhy are you saying that? If you don't answer this question, of why you believe there's no chance of ending up as the upload, what's the point of writing a single other word in response? I see no meaningful difference between first and second example. Tell me what the difference is that makes you believe that there's no chance I'll end up as version U.
0ephion8yThe copy will remember writing this, and will feel pretty strongly that it's a continuation of you.
0[anonymous]8ySo? So would all the other Everett branches distinct from me. So would some random person implanted with my memories. I don't care what it thinks or feels; what I care about is whether it actually is a direct continuation of me.
0Dentin8yI'm sorry to hear that. It's unfortunate for you, and really limits your options. In my case, uploading does continue my personal existence, and uploading in my case is a critical aspect of getting enough redundancy in my self to survive black swan random events. Regarding your last sentence, "We can be sure of this, right now", what are you talking about exactly?
1[anonymous]8yI mean we can do thought experiments which show pretty convincingly that I should not expect to experience the other end of uploading.
0Dentin8yWhat might those thought experiments be? I have yet to hear any convincing ones.
1[anonymous]8yThe teleporter arguments we've already been discussing [http://lesswrong.com/lw/jgu/i_will_pay_500_to_anyone_who_can_convince_me_to/ad20] , and variants.
0DanielLC8yHe has already heard from others like you. The point is for him to find the arguments he hasn't heard, which tend to be the ones against cryonics. That sounds much more difficult and correspondingly less likely to be accomplished.

If it could be done, would you pay $500 for a copy of you to be created tomorrow in a similar but separate alternate reality? (Like an Everett branch that is somewhat close to ours, but far away enough that you are not already in it?)

Given what we know about identity, etc., this is what you are buying.

Personally, I wouldn't pay five cents.

Unless people that you know and love are also signed up for cryonics? (In which case you ought to sign up, for lots of reasons including keeping them company and supporting their cause.)

1[anonymous]8yCryonics does not necessarily imply uploading. It is possible that using atomically precise medical technology we could revive and rebuild the brain and body in-situ, thereby retaining continuity.
0byrnema8yI meant a physical copy. Would it make a difference, to you, if they rebuilt you in-situ, rather than adjacent? But I just noticed this set of sentences, so I was incorrect to assume common ideas about identity:
0[anonymous]8yI know. I was pointing out that your thought experiment might not actually apply to the topic of cryonics.

Let me attempt to convince you that your resurrection from cryonic stasis has negative expected value, and that therefore it would be better for you not to have the information necessary to reconstruct your mind persist after the event colloquially known as "death," even if such preservation were absolutely free.

Most likely, your resurrection would require technology developed by AI. Since we're estimating the expected value of your resurrection, let's work on the assumption that the AGI will be developed.

Friendly AI is strictly more difficult t... (read more)

How low would your estimate have to get before you canceled your subscription? I might try to convince you by writing down something like:

P(CW) = P(CW | CTA) * P(CTA)

Where CW = "cryonics working for you" and CTA = "continued technological advancement in the historical short term", and arguing that your estimate of P(CTA) is probably much too high. Of course, this would only reduce your overall estimate by 10x at most, so if you still value cryonics at P=0.03 instead of P=0.3, it wouldn't matter.
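[Editorial aside: as written, the identity above holds only under an implicit assumption. The full law-of-total-probability expansion, using the comment's own symbols, is:]

```latex
P(\mathrm{CW}) \;=\; P(\mathrm{CW}\mid \mathrm{CTA})\,P(\mathrm{CTA})
\;+\; P(\mathrm{CW}\mid \lnot\mathrm{CTA})\,P(\lnot\mathrm{CTA})
```

The product form follows by taking $P(\mathrm{CW}\mid \lnot\mathrm{CTA}) \approx 0$ -- no revival without continued technological advancement -- which is presumably what the comment intends.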

One rational utilitarian argument I haven't seen here but which was brought up in an old thread is that cryonics competes with organ donation.

With organ donation you can save on average more than one life (the thread mentions 3.75, this site says "up to 8") whereas cryonics saves on average only <0.1 lives (but your own life).

And you probably can't have both.

Assuming you meant for the comment section to be used to convince you. Not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what normally would be a trivial issue that holds no real preference for you. Maybe it would be better to do it with private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking f... (read more)

This post inspired me to quickly do this calculation. I did not know what the answer would be when I started. It could convince you in either direction really, depending on your level of self/altruism balance and probability estimate.

Cost of neuro-suspension cryonics > $20,000

Cost of saving a single life via effective altruism, with high certainty < $5,000

Let's say you value a good outcome with a mostly-immortal life at X stranger's regular-span lives.

Let "C" represent the threshold of certainty that signing up for cryonics causes that go... (read more)

7solipsist8yThis sort of utilitarian calculation should be done with something like QALYs, not lives. If the best charities extend life at $150 per QALY, and a $20,000 neuro-suspension extends life by a risk-adjusted 200 QALYs, then purchasing cryonics for yourself would be altruistically utilitarian.
4Ishaan8yTrue, but that's much harder to estimate (because real-world QALY data is hard to come by) and involves more uncertainty (how many QALYs to expect after revival?) and I didn't want that much work - just a quick estimate. However, I'm guessing someone else has done this properly at some point?
2solipsist8yNote: I have not, so do not use my 200 QALYs as an anchor.
-2somervta8yYes. Because instructing people to avoid anchoring effects works.
3jefftk8yThese calculations get really messy because the future civilization reviving you as an upload is unlikely to have their population limited by frozen people to scan. Instead they probably run as many people as they have resources or work for, and if they decide to run you it's instead of someone else. There are probably no altruistic QALYs in preserving someone for this future.
2solipsist8yThis reply made me really think, and prompted me to ask this question [http://lesswrong.com/lw/jh8/stupid_questions_thread_january_2014/acql].
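[Editorial aside: the arithmetic in the QALY subthread above can be made explicit. A quick sketch using the thread's assumed figures - $150/QALY for an effective charity, a $20,000 neuro-suspension, a risk-adjusted 200 QALYs - all of which are hypothetical guesses, not data.]

```python
# Cost-per-QALY comparison using the subthread's hypothetical figures.
charity_cost_per_qaly = 150.0    # $/QALY for an effective charity (assumed)
cryo_cost = 20_000.0             # $ for neuro-suspension (assumed)
cryo_qalys = 200.0               # risk-adjusted QALYs if revival works (assumed)

cryo_cost_per_qaly = cryo_cost / cryo_qalys
print(cryo_cost_per_qaly)        # 100.0

# Under these (highly uncertain) numbers, cryonics buys QALYs more
# cheaply than the charity benchmark; the conclusion is only as good
# as the 200-QALY guess and the risk adjustment behind it.
print(cryo_cost_per_qaly < charity_cost_per_qaly)  # True
```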

Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't get into more details here. For hints, see Iain M. Banks' novel Surface Detail on why you might want to be extremely risk averse when it comes to the possibility of waking up in a world controlled by posthuman uploads.

It's easy to get lost in incidental costs and not realize how they add up over time. If you weren't signed up for cryonics, and you inherited $30K, would you be inclined to dump it into a cryonics fund, or use it someplace else? If the answer is the latter, you probably don't REALLY value cryonics as much as you think - you've bought into it because the price is spread out and our brains are bad at budgeting small, recurring expenses like that.

My argument is pretty much entirely on the "expense" side of things, but I would also point out that... (read more)

1ChrisHallquist8yI'm not that young--I graduated college four years ago. If I inherited ~30k, it would go into a generic early start on retirement / early start on hypothetical kids' college fund / maybe downpayment on a condo fund. Given that I'd just be holding on to it in the short term anyway, putting it in a cryonics fund doesn't actually strike me as completely crazy. Even in that case, though, I think I'd get the insurance anyway, so I'd know the inheritance money could be used for anything I needed when said need arose. Also, I understand that funding through insurance can avoid legal battles over the money.
0handoflixue8yThe average college graduate is 26, and I was estimating 25, so I'd assume that by this community's standards, you're probably on the younger side. No offense was intended :) I would point out that by the nature of it being LIFE insurance, it will generally not be used for stuff YOU need, nor timed to "when the need arises". That's investments, not insurance :) (And if you have 100K of insurance for $50/month that lets you withdraw early AND isn't term insurance... then I'd be really curious how, because that sounds like a scam or someone misrepresenting what your policy really offers :))

Let's suppose your mind is perfectly preserved (in whatever method they choose to use). Let's suppose you retain the continuity of your memories and you still feel you are "you." Let's suppose the future society is kinder, nicer, less wasteful, more tolerant, and every kid owns a puppy. Let's suppose the end of fossil fuels didn't destroy civilization because we were wise enough to have an alternative ready in time. Let's suppose we managed to save the ozone layer and reverse global warming and the world is still a more-or-less pleasant place to ... (read more)

2gjm8yI don't follow how this is an argument against cryonics, unless you're talking to someone who really truly believed that cryonics meant a serious chance of actual literal immortality. (Also, I have seen it alleged that at least one plausible model of the future of the universe has it dying after finite time, but in such a way that an infinite amount of computation can be done before the end. So it's not even entirely obvious you couldn't be subjectively immortal given sufficiently advanced technology. Though I think there have been cosmological discoveries since this model was alleged to be plausible that may undermine its plausibility.)
0polymathwannabe8yOn the other hand, you're actually paying people to get you to forfeit your chance at eternity. To paraphrase religious language, you're dangerously selling your soul too short.

After I ran my estimates, I concluded that cryonics raised my odds of living to ~90 years old by approximately 5% absolute, from 50% to 55%. It's not very much, but that 5% was enough for me to justify signing up.

I think the most important part is to be honest about the fact that cryonics is a fairly expensive safety net largely consisting of holes. There are many unknowns, it relies on nonexistent technology, and in many scenarios you may become permanently dead before you can be frozen. That said, it does increase your odds of long term survivability.

[-][anonymous]8y 0

Doesn't this thread go against the principles of The Bottom Line?

8DanielLC8yNot entirely. It's well known that, if you can't find an unbiased opinion, it's good to at least get biases from different directions. He has already seen the arguments in favor of cryonics. Repeating them would be wasting his time. Now he wants to find the arguments against. If they are more convincing than he expected, his expectations of cryonics working will go down. Otherwise, they will go up.

It's worth mentioning that anyone with a strong argument against cryonics is likely to believe that you will be persuaded by it (due to low base-rates for these kinds of conversions). Thus the financial incentive is not as influential as you would like it to be.

Added: Relevant prediction

4wuncidunci8yIf someone believes they have a really good argument against cryonics, even if it only has a 10% chance of working, that is $50 in expected gain for maybe an hour of work writing it up really well. Sounds to me like quite worth their time.

I work in software. I once saw a changelog that said something like " * session saving (loading to be implemented in a future version)", and I laughed out loud. The argument in favour of cryonics seems to boil down to "we can't see why revival won't work", which is basically meaningless for a system this complex and poorly-understood. How can we be at all confident that we're preserving memories when we don't even know how they're encoded? I can't predict exactly what crucial thing we will have missed preserving. But I can predict we wi... (read more)

I found myself in that situation once.

When I wrote the loader, the saved-game files worked.

Of course, that was because I just took the whole game data object and serialized it into a file stream. Similarly, here, we're storing the actual thing.

Last paragraph: ha. Restoring someone who wasn't frozen requires time travel. If cryo works and time travel doesn't, there you go.

0VAuroch8yIt doesn't necessarily involve time travel. It could just require extremely precise backwards extrapolation. And if it does involve time travel, it only requires the travel of pure information from the past to its future. And since information can already be transmitted to its future light cone, the idea is that it's possible to specify a particular location in spacetime precisely enough to induce a process to transfer information about that location to a specific point in its future light cone (i.e. your apparatus). Which still sounds extremely difficult, but also much more likely to be possible than what we'd usually describe as time travel. For the record, I assign the possibility of time travel that could travel to our current point in time as epsilon, the possibility of time travel that can travel to no point earlier than the creation of the specific time machine as very small (<0.1%) but greater than epsilon, and the possibility of the outlined information-only "time travel" as in the range of 0.1%-1%.
1Luke_A_Somers8yThe ability to radiate light into space means that nope, you need to catch up to all those photons. Second law murders extrapolation like that.
0VAuroch8yThat's true, slipped my mind.

I will pay $500 to anyone who can convince me to NOT X

Is incentivizing yourself to X. Not ideal for being open to genuinely changing your mind.

4jefftk8yHe stands to save a lot of money over the years by canceling his subscription, much more than this $500. The net short and medium term (which of course ignores the potential, long term, payoff of cryonics working) incentive is towards changing his mind and believing "not X", he's just offering to split some of that incentive with us.
[-][anonymous]8y -2

The definition of science that I prefer is: a theory that can be tested and shown to fail. If a theory gives itself room to always add one more variable and thus never be shown to fail, it might be useful or beautiful or powerful or comforting but it won't be science. Revival 'some day' can always be one more day away, one more variable added.

Pour some milk into water. Now, get the milk back out. Not milk powder, not the milk plus a little water, not 99.9% of the milk and some minerals from the water, just the milk. I don't think it's possible. Now, let your brain die. Freeze it (freezing a live brain will kill it). Then, restart the most complex machine/arrangement of matter known. It just doesn't seem feasible.

I think machines can have consciousness, and I think a copy of you can have consciousness, but you can't have the consciousness of your copy, and it seems to me that after de... (read more)

2[anonymous]8yA copy of you is identical to you. Therefore I don't see how a copy of you could not have your consciousness.
1Torello8yYes, I agree that the copy would have your consciousness, I guess I wasn't clear about expressing that. But that's the point; the copy would have your consciousness--you wouldn't.
0[anonymous]8ySince the copy of Chris Hallquist would say "I am Chris Hallquist" for the same reason Chris Hallquist says "I am Chris Hallquist", I would say that the copy of Chris Hallquist just is Chris Hallquist in every way. So Chris Hallquist still has Chris Hallquist's consciousness in the cryonics scenario. In the computer scenario, both Chris Hallquist in the flesh and Chris Hallquist on the computer have Chris Hallquist's consciousness. Over time they might become different versions of Chris Hallquist if exposed to different things, but at the start, from the inside, it seems the same to both.
-1Torello8y"the copy of Chris Hallquist just is Chris Hallquist in every way" I would say that by definition of a copy, it can't be Chris in every way, because there is one clear way that it isn't:--it's a copy! This is a fundamental principle of identity--a thing can only be identical to itself. Things might be functionally equivalent, or very similar, but a copy by definition isn't the same, or we wouldn't call it a copy.
0[anonymous]8yBut why would Chris Hallquist care about this "fundamental principle of identity", if it makes no difference to his experiences?
0Torello8yIt does make a difference--the use of the word "his" is key. "Copy of Chris" might have experiences and would not notice any difference regarding the fate of Chris, but for Chris, HIS experiences would end. (sorry for the caps; not shouting, just don't know how to do italics). Let's say that "Chris" and "copy of Chris" are in a room. I come into the room and say, "I'm going to kill one of you". Both "Chris" and "copy of Chris" are going to prefer that the other is killed, because their particular ability to experience things would end, even if a very similar consciousness would live on.
1[anonymous]8yBoth "Chris" and "copy of Chris" are Chris Hallquist. Both remember being Chris Hallquist, which is the only way anyone's identity ever persists. Copy of Chris would insist that he's Chris Hallquist for the same reason the original Chris would insist so. And as far as I'm concerned, they'd both be right - because if you weren't in the room when the copying process happened, you'd have no way of telling the difference. I don't deny that as time passes they gradually would become different people. I prefer to frame things this way. Suppose you take Chris Hallquist and scan his entire body and brain such that you could rebuild it exactly the same way later. Then you wait 5 minutes and then kill him. Now you use the machine to rebuild his body and brain. Is Chris Hallquist dead? I would say no - it would basically be the same as if he had amnesia - I would prefer to experience amnesia than to be killed, and I definitely don't anticipate having the same experiences in either case. Yet your view seems to imply that, since the original was killed, despite having a living, talking Chris Hallquist in front of you, it's somehow not really him. Edit: Moreover, if I was convinced the technology worked as advertised, I would happily undergo this amnesia process for even small amounts of money, say, $100. Just to show that I actually do believe what I'm saying.
1Torello8ywith regard to "Yet your view seems to imply that, since the original was killed, despite having a living, talking Chris Hallquist in front of you, it's somehow not really him." Yes, I do believe that the copy of Chris Hallquist would have an identical consciousness (until, as you stated, he had some new experiences), but the original (non-copy) Chris is still gone. So from a functional perspective I can interact with "copy of Chris" in the same way, but the original, unbroken consciousness of "original Chris" is still gone, which from the perspective of that consciousness, would be important. with regard to "Both "Chris" and "copy of Chris" are Chris Hallquist." I still am confused: they may have the same structure, function, and properties, but there are still two of them, so they cannot be the same thing. There are two entities; just because you made a copy doesn't mean that when you destroy the original that the original isn't changed as a result.
0[anonymous]8yWhy do you consider Chris Hallquist to be the same person when he wakes up in the morning as he is when he went to bed the night before (do you?)? The original is changed. And I agree that there are two entities. But I don't see why Chris Hallquist should care about that before the split even occurs. Would you undergo the amnesia procedure (if you were convinced the tech worked, that the people were being honest, etc.) for $1000? What's the difference between that and a 5-minute long dreamless sleep (other than the fact that a dead body has magically appeared outside the room)?
0Torello8yI would consider the Chris that wakes up in the morning the same person because his consciousness was never destroyed. Death destroys consciousness, sleep doesn't; this seems obvious to me (and I think most people); otherwise we wouldn't be here discussing this (if this was the case it seems we'd be discussing nightly cryonics to prevent our nightly deaths). Just because most people agree doesn't make something right, but my intuition tells me that sleep doesn't kill me (or my consciousness) while death does. Sorry for caps, how do you italicize in comments? I think the crux of the issue is that you believe GENERIC "Chris H consciousness" is all that matters, no matter what platform is running it. I agree that another platform ("copy of Chris") would run it equally well, but I still think that the PARTICULAR person experiencing the consciousness (Chris) would go away, and I don't like it. It seems like you are treating consciousness as a means--we can run the software on a copy, so it's exactly the same, where I see it as an end--original Chris should hold on to his particular consciousness. Isn't this why death is a fundamental problem for people? If people could upload their consciousness to a computer, it may provide some solace but I don't think it would eliminate completely the sting of death. With regard to whether I would do it for $1,000--no. Earlier you equated the amnesia procedure with death (I agree). So no, I wouldn't agree to have a copy of me who happens to be running my consciousness $1,000 for the privilege of committing suicide!
0[anonymous]8yAsterisks around your *italic text* like that. There should be a "Show help" button below the comment field which will pop up a table that explains this stuff. I actually think so. I mean, I used to think of death as this horrible thing, but I realized that I will never experience being dead, so it doesn't bother me so much anymore. Not being alive bothers me, because I like being alive, but that's another story. However, I'm dying all the time, in a sense. For example, most of the daily thoughts of 10-year old me are thoughts I will never have again; particularly, because I live somewhere else now, I won't even have the same patterns being burned into my visual cortex. That's a good way of putting it. The main thing that bothers me about focusing on a "particular" person is that I (in your sense of the word) have no way of knowing whether I'm a copy (in your sense of the word) or not. But I do know that my experiences are real. So I would prefer to say not that there is a copy but that there are two originals. There is, as a matter of fact, a copy in your sense of the word, but I don't think that attribute should factor into a person's decision-making (or moral weighting of individuals). The copy has the same thoughts as the original for the same reason the original has his own thoughts! So I don't see why you consider one as being privileged, because I don't see location as being that which truly confers consciousness on someone.
Torello (8y, 1 point): I see what you mean about not knowing whether you are a copy. I think this is almost part of the intuition I'm having--you in particular know that your experiences are real, and that you value them. So even if the copy doesn't know it's a copy, I feel that the original will still lose out. I don't think people experience death, as you noted above, but not being alive sucks, and that's what I think would happen to "original Chris."

By the way, thanks for having this conversation--it made me think about the consequences of my intuitions on this matter more than I have previously--even counting the time I spent as an undergrad writing a paper about the "copy machine dilemma" we've been toying with.

Thanks for the italics! Don't know how I missed the huge show help button for so long.
ArisKatsaris (8y, 0 points): How would this objection work if I believe that a billion copies of me are likely created every single second (see the Many Worlds Interpretation), all of them equally real, and all of them equally me?
polymathwannabe (8y, 3 points): The "you" that is in this universe is the only "you" you can realistically care about. You don't live vicariously through your other selves any more than you can live through a twin.
ArisKatsaris (8y, 1 point): You didn't understand my objection, or perhaps I didn't communicate it clearly enough. According to MWI, the "me" in the present leads to a billion zillion different "me"s in the near future. I'm NOT talking about people who have already branched from me in the past--I'm talking about the future versions of me. Torello seems to argue that a "me" who has branched through the typical procedures of physics is still a real "me", but a "me" who has branched via "uploading" isn't one. I don't see why that should be so.
Torello (8y, 1 point): The "me" traveling through typical physics is a single entity, so it can continue to experience its consciousness. The "me(s)" in these many worlds don't have the continuity to maintain identity.

Think of this: if one actually believed in Many Worlds, and didn't find any problem with what I've stated above, it would be a convincing argument not to do cryonics, because it's already happening for free, and you can spend the money on entertainment or whatever (minus, of course, the $500 you owe me for convincing you not to do it ;)
ArisKatsaris (8y, 0 points): So you believe that people nowadays have continuity that maintains identity only because you don't believe in MWI? So if MWI proves true, does this distinction between "copies" and "continuations" become meaningless (according to you)?
Torello (8y, 1 point): No, I think that the other worlds of MWI don't/wouldn't affect our world, so continuity/identity in our world wouldn't change if MWI were true (or suddenly became true). The break in continuity comes BETWEEN (sorry for caps, can't italicize) the many worlds, preventing the "me(s)" in different worlds from having continuity.