Followup to: Cryonics wants to be big

We've all wondered about the wisdom of paying money to be cryopreserved, when the current social attitude to cryopreservation is relatively hostile (though improving, it seems). In particular, the probability that either or both of Alcor and CI go bankrupt in the next 100 years is nontrivial (perhaps 50% for "either"?). If this happened, cryopreserved patients might be left to thaw and die at room temperature. There is also the possibility that the organizations are shut down by hostile legal action. [A]

The ideal solution to this problem is a way of keeping bodies cold (colder than -170C, probably) in a grave. Our society already has strong inhibitions against disturbing the dead, which means that a cryonic grave that required no human intervention would be much less vulnerable. Furthermore, such graves could be put in unmarked locations in northern Canada, Scandinavia, Siberia and even Antarctica, where it is highly unlikely people will go, thereby providing further protection. 

In the comments to "Cryonics wants to be big", it was suggested that a large enough volume of liquid nitrogen would simply take > 100 years to boil off. Therefore, a cryogenic grave of sufficient size would just be a big tank of LN2 (or some other cryogen) with massive amounts of insulation.

So, I'll present what I think is the best possible engineering case, and invite LW commenters to correct my mistakes and add suggestions and improvements of their own.

If you have a spherical tank of radius r with insulation of thermal conductivity k and thickness r (so a total radius for insulation and tank of 2r) and a temperature difference of ΔT, the power getting from the outside to the inside is approximately

25 × k × r × ΔT

watts (with r in meters, k in W/m-K and ΔT in kelvin; this is the spherical-shell conduction result 8π × k × r × ΔT, rounded).

If the insulation is made much thicker, we get into sharply diminishing returns (asymptotically, we can achieve only another factor of 2). The volume of cryogen that can be stored is approximately 4.2 × r³, and the total amount of heat required to evaporate and heat all of that cryogen is

4.2 × r³ × (volumetric heat of vaporization + gas enthalpy)

The quantity in brackets, for nitrogen and a ΔT of 220 °C, is approximately 346,000,000 J/m³. Dividing energy by power gives a boiloff time of

(1/12,000) × r² × k⁻¹ centuries

Setting this equal to 1 century, we get:

r²/k = 12,000.
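The arithmetic above can be sanity-checked with a short script (a sketch; the constants are the rough values used in the post, not precise material data):

```python
import math

# Rough constants from the post (assumptions, not precise data)
DELTA_T = 220.0            # K, ambient minus liquid-nitrogen temperature
E_VOL = 346e6              # J/m^3, vaporization + gas enthalpy per m^3 of LN2
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600

def boiloff_centuries(r, k):
    """Boiloff time for a spherical tank of radius r (m) wrapped in
    insulation of thickness r and conductivity k (W/m-K)."""
    # Conduction through a spherical shell from radius r to 2r:
    # P = 4*pi*k*r1*r2*dT/(r2 - r1) = 8*pi*k*r*dT  (~ 25*k*r*dT)
    power = 8 * math.pi * k * r * DELTA_T
    # Stored heat capacity of the cryogen: (4/3)*pi*r^3 ~ 4.2*r^3 cubic meters
    energy = (4.0 / 3.0) * math.pi * r**3 * E_VOL
    return energy / power / SECONDS_PER_CENTURY

print(boiloff_centuries(12.0, 0.012))    # cryogel case: ~1 century
print(boiloff_centuries(2.9, 0.0007))    # vacuum-powder case: ~1 century
```

Since the time scales as r²/k, doubling the radius quadruples the boiloff time, which is where the 400-year figure below comes from.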

Now the question is, can we satisfy this constraint without an exorbitant price tag? Can we do better and get 2 or 3 centuries? 

"Cryogel" insulation with a k-value of 0.012 W/m-K is commercially available, meaning that r would have to be at least 12 meters. A full 12-meter-radius tank would weigh about 6000 tons (!), meaning that some fairly serious mechanical engineering would be needed to support it. I'd like to hear what people think this would cost, and how the cost scales with r. 

The best feasible k seems to be fine granules or powder in a vacuum. When the mean free path of a gas increases significantly beyond the characteristic dimension of the space that encloses it, the thermal conductivity drops linearly with pressure. This company quotes 0.0007 W/m-K, though this is at high vacuum. Fine granules of aerogel would probably outperform this in terms of the vacuum required to get down to < 0.001 W/m-K. 
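To put numbers on "rough vacuum", the mean free path can be estimated from kinetic theory (a sketch; the effective N2 molecular diameter is a textbook approximation):

```python
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant
D_N2 = 3.7e-10       # m, effective diameter of an N2 molecule (approximate)

def mean_free_path(pressure_pa, temp_k=300.0):
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * D_N2**2 * pressure_pa)

print(mean_free_path(101325))   # ~7e-8 m (tens of nanometers) at 1 atm
print(mean_free_path(100))      # ~7e-5 m (tens of microns) at ~99.9% vacuum
```

The mean free path scales inversely with pressure, so how hard a vacuum you need depends directly on the granule spacing: sub-micron pores (as in aerogel) beat the mean free path even near atmospheric pressure, while coarser powders need the pressure pulled down until the mean free path exceeds the gaps between grains.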

Suppose it is feasible to maintain a good enough vacuum to get to 0.0007 W/m-K, perhaps with aerogel or some other material. Then r is a mere 2.9 meters, and we're looking at a structure the size of a large room rather than the size of a tower block, and a cryogen weight of a mere 80 tons. Or you could double the radius and have a system that would survive for 400 years, with a size and weight that was still not in the "silly" range.
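For scale, the cryogen masses in the two vacuum-insulated designs (a sketch; the LN2 density is a nominal saturated-liquid value):

```python
import math

LN2_DENSITY = 807.0   # kg/m^3, liquid nitrogen near its boiling point (approximate)

def tank_specs(r):
    """Cryogen volume (m^3) and mass (kg) for a spherical tank of radius r (m)."""
    volume = (4.0 / 3.0) * math.pi * r**3
    return volume, volume * LN2_DENSITY

v, m = tank_specs(2.9)
print(round(m / 1000))        # ~80 tonnes of LN2 for the 100-year design
v, m = tank_specs(5.8)
print(round(m / 1000))        # several hundred tonnes for the 400-year design
```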

The vacuum-free option is inviting because there's one less thing to go wrong, but I am no expert on how hard it would be to make a system hold a rough vacuum for 100 years, so it is not clear how much weight to put on that.

As a final comment, I disagree that storing all patients in one system is a good idea. Too many eggs in one basket is never good when you're trying to maximize the probability that each patient will survive. That's why I'm keen on finding a system small enough that it would be economical to build one for a few dozen patients, say (cost < $30 million).  

So, I invite Less Wrong to comment: is this feasible, and if so how much would it cost, and can you improve on my ideas?

In particular, any commenters with experience in cryogenic engineering would delight me with either refinement or critique of my cryogenic ideas, and delight me even more with cost estimates of these systems. It's also fairly critical to know whether you can hold a 99% vacuum for a century or two. 





[A]: In addition to this, many scenarios where cryonics is useful to the average LW reader are scenarios where technological progress is slow but "eventually" gets to the required level of technology to reanimate you, because if progress is fast you simply won't have time to get old and die before we hit longevity escape velocity. Slow progress in turn correlates with the world experiencing a significant "dip" in the next 50 or so years, such as a very severe recession or a disaster of some kind. These are precisely the scenarios where a combination of economic hardship and hostile public opinion might kill cryonics organizations. 

204 comments

With the development of commercial space flight, at some point launching cryonauts into space might become cost-effective.

Right now, with a human head weighing about 5 kg, launching it would cost about $150,000 (not counting the cryopreservation equipment, which is probably significant, and has to withstand the launch stresses). Comparing this with the price tag of Alcor full-body preservation, which is also $150,000, it's not totally bonkers to suppose that in a few decades it could become competitive, even without the fancy space elevators.
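The implied price per kilogram in that comparison can be made explicit (a sketch; the $30,000/kg rate is just the ratio of the figures quoted above, and the 80 kg full-body-plus-dewar mass is a made-up illustration):

```python
def launch_cost(mass_kg, usd_per_kg=30_000):
    """Cost to orbit at an assumed price per kilogram (implied by $150k / 5 kg)."""
    return mass_kg * usd_per_kg

print(launch_cost(5))    # head-only payload: $150,000
print(launch_cost(80))   # hypothetical full-body-plus-dewar payload: $2,400,000
```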

If it's possible to use the low temperature of space, despite the solar radiation, to keep the package cold, or to somehow keep it in shadow, perhaps behind a purpose-built accompanying object (I'm not sure about that, though it's a critical question), it could be a no-maintenance solution, where one would have to perform a deliberate and rather costly procedure to disturb it.

True, I considered that possibility. But the problem is, what do you do with it once it's in space? You have to somehow make sure that it stays cold and doesn't hit anything or deorbit. How would you prevent solar radiation from heating it up? Reflective on the sun facing side, black on the other side? But then you have to keep a control system operating for hundreds of years! And think of the required radiation shielding! All of your stuff is getting irradiated, so you need lots of lead. And if it goes wrong, what do you do? No, space is not good. Now if you could get it to the moon, you'd be in business. Bury it in one of the always-shaded craters, perhaps a kilometer below the surface, and it'll be safe for millions of years.

And think of the required radiation shielding! All of your stuff is getting irradiated, so you need lots of lead.

Is the radiation going to cause significant information-theoretic damage? In how long?

I don't know; Wikipedia states that you'd receive 0.5-1 sievert per year in normal conditions, whereas the safe dose for a living human is 0.002 Sv/yr. However, that's for a living human. In a solar flare event, the dose would go up. I'd bet that it would take thousands of years for this to add up to irreparable damage, with some uncertainty regarding solar flares.

Shopping is hard, let's do math!

First, we need a conceptual framework. The whole point of cryonics is to stop chemistry, so if you're cryopreserved and then exposed to ionizing radiation over any period of time, you'll experience the same amount of damage as if you were alive and exposed to that much radiation all at once. (Being alive and exposed to radiation over a period of time is different; you experience less damage because your cells have time to repair themselves.)

Wikipedia says "Estimates are that humans unshielded in interplanetary space would receive annually roughly 400 to 900 milli-Sieverts (mSv) (compared to 2.4 mSv on Earth)". Wikipedia also says that an acute exposure of 4500 to 5000 mSv is "LD50 in humans (from radiation poisoning), with medical treatment". Now, LD50 isn't LD100, but we can agree that it's a Very Bad Dose.

Generously, assuming that the Very Bad Dose is 5000 mSv, and Outer Space's Death Rays are 400 mSv/yr, being Cryopreserved In Space will give you a Very Bad Dose in 12.5 years. This is compared to roughly 2000 years on Earth.
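The comparison above is just a ratio of the cited figures; spelled out (a sketch using the Wikipedia numbers quoted in this thread):

```python
VERY_BAD_DOSE = 5000.0     # mSv, roughly the acute LD50 cited above
SPACE_RATE = 400.0         # mSv/yr, low end of the interplanetary estimate
EARTH_RATE = 2.4           # mSv/yr, average background dose on Earth

def years_to_dose(rate_msv_per_yr, dose=VERY_BAD_DOSE):
    """Years of exposure at a given rate to accumulate the given dose."""
    return dose / rate_msv_per_yr

print(years_to_dose(SPACE_RATE))   # 12.5 years unshielded in space
print(years_to_dose(EARTH_RATE))   # ~2000 years on Earth
```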

That answers one half of Eliezer's question. My answer to the other half (is this significant in informat...

This fits in with something I've been wondering in general just for Earth-based cryopreservation. How much effort do cryonics organizations make to ensure that there's a minimum of radiation exposure to the cryopreserved individuals? Even background radiation matters a lot more than it would for a living person, since there are no ongoing repair mechanisms. I suspect that the bodies are not being subjected to much external radiation simply because the cryochambers themselves would block most of it. But the bodies themselves will generate some radiation, primarily from the decay of potassium-40 and carbon-14. Naively, if one were trying absolutely to minimize this, one would have people who knew they were likely to die soon (due to terminal illness) eat diets with less potassium. One could also conceive of having foods made with carbon that had a low amount of C-14. But given the proportions I'm pretty sure that the bulk of the radiation will be from potassium-40. Robert Ettinger at one point presented a back-of-the-envelope calculation showing that the radiation just from potassium-40 is unlikely to be a problem if one is in the range of 50-100 years, but if one is interested in longer ranges then this becomes a more serious worry.
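That back-of-the-envelope calculation is easy to reproduce (a sketch; the ~140 g of body potassium and the K-40 nuclear data are rough textbook values):

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

# Rough physiological and nuclear data (textbook approximations)
BODY_K_GRAMS = 140.0          # total potassium in an adult body
K40_FRACTION = 1.17e-4        # natural isotopic abundance of K-40
K40_MOLAR_MASS = 40.0         # g/mol
K40_HALF_LIFE_YR = 1.248e9

def k40_activity_bq(k_grams=BODY_K_GRAMS):
    """Whole-body K-40 activity in decays per second (becquerels)."""
    atoms = k_grams * K40_FRACTION / K40_MOLAR_MASS * AVOGADRO
    decay_const = math.log(2) / (K40_HALF_LIFE_YR * SECONDS_PER_YEAR)
    return atoms * decay_const

activity = k40_activity_bq()
print(round(activity))                       # roughly 4e3 Bq for the whole body
print(activity * 100 * SECONDS_PER_YEAR)     # roughly 1e13 decays per century
```

On the order of 10^13 decays per century is many orders of magnitude below the ~10^24 altered molecules discussed elsewhere in this thread as the scale needed for information-theoretic erasure, which is consistent with Ettinger's conclusion for century timescales.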
Remember that it takes a lot more radiation to erase someone than to merely kill them. To information-theoretically erase a person would seem to require that at least 40% of the molecules in their brains are altered, which would seem to imply at least 10^24 or so radioactive particles. This is extremely high.
I'm curious where you are getting the 40% number. I'm not completely sure what we mean by erasing a person since the mind isn't a binary presence that is there or not. Damage can result in loss of some aspects of personality or some memories without complete erasure. Presumably, most people would like to minimize that issue. Given your 40% claim I tentatively agree with your 10^24 number. There's a minor issue of cascading particles but that shouldn't be a problem since most of the radiation is going to be low energy beta particles. I am however concerned slightly that radiation could result in additional free radicals which are able to jump around and do unpleasant chemical stuff even at the temperatures of liquid nitrogen. I suspect that this would not be a major issue either but I don't think I have anywhere near enough biochem knowledge to make a strong conclusion about this. Additionally, as STL pointed out, we don't want to make things more difficult for the people reviving them. This combines badly with the first-in-last-out nature of cryonics- the bodies which have been around longer will have more radiation damage and will already be much more technically difficult to revive. Moreover, some people will strongly prefer being reanimated in their own bodies rather than as simulations on computers. The chance that that can occur is lower if the bodies have serious problems due to radiation damage.
Say you randomly alter 1% of the molecules in the brain. Then almost every neuron would still recognizably be a neuron, and still have synapses that connected to the right things, and any concentration of neurotransmitter X would still recognizably be type X (rather than Y). There is no way I see for 1% random destruction to erase the person information-theoretically. The difference between 1% and 40% is not actually so much... 10^22 vs 10^24. Still huge.
Would this be enough to keep thresholds for action potentials correct? I'm more familiar with neural nets for computational purposes than with actual neural architecture, but for neural nets this matters a lot. You can have wildly different behavior even with the same neurons connected to each other just by changing the potential levels. Learning behavior consists not just in constructing or removing connections but also in strengthening and weakening existing connections. I don't know why you mention the concentrations of neurotransmitters since that's a fairly temporary thing which (as far as I'm aware) doesn't contain much in the way of actual data except about neurons which have fired very recently.
What determines the threshold for an action potential? If it's something bigger than a few dozen molecules, it seems that a random 1% destruction can't erase it.
I don't know enough about the mechanisms to to comment. Do we have any more biologically inclined individuals here who can?
I suspect you are right. Since the important structures involved are significantly larger than one molecule, most of the single molecule alterations will be rather obvious and easy to reverse (for a given kind of 'easy').
Life is tough. Unlife is tougher.
Yes, that would be lots better. The cost of robotics should eventually go down as well, which would enable relatively cheap crater-seeking moon-burrowing robots (though sabotage would become cheaper over time too).

Good idea! A few refinements:

  • You probably don't want a literally spherical tank; it might roll away and hit something or bother someone. Trading a few % of efficiency for a flattened, ridged bottom might be a good idea.

  • If you're going to rely on social taboos against disturbing graves, you probably have to keep bodies per tank down to 30, if not an even lower number. A group of family and friends who are buried together in the same crypt are eccentric; a community of essentially unrelated people who are buried together in the same crypt are a cult, and

...
I'm pretty sure this is mistaken-- people generally don't wreck graveyards, even though large numbers of people are buried there.
Right, but the graveyard is thought of as a place where many individuals are separately buried. It's OK, if slightly mischievous, to enter a graveyard, tell spooky stories there, maybe even make out -- but you would never do any of those things inside a grave. If we build a cryoyard in which there are many individual cryotanks nearby, that will probably be fine, and might cut down on security costs. But if we put all the bodies in the same cryotank, then we run a nontrivial risk of setting off people's creepy cult alarms, and the taboo against disturbing graves-of-people-who-are-not-markedly-unholy may or may not hold.
Yes, and in the very worst case scenario, the weirdness factor would make some teenagers more likely to try to go and vandalize them as a dare. Weird cult having strange frozen crypts is almost asking for that to happen. Unfortunately, this is real life, so we can't even have the satisfaction of this sort of thing triggering the terrible monsters that sleep beneath the cursed ground. (Why yes, I have watched too many bad horror movies. Whatever gave you that impression?)
If you were developing a simulation of a Universe for entertainment purposes, how long would you let the inhabitants think they were at the top level of reality before introducing firm evidence that something was seriously off? Just curious.
Depends on how long the backstory is. Also, it's plausible that any species which can simulate complex universes has a longer attention span than we do. Consider the range of human art. It's plausible that simulators would have at least as wide a range, and I can see purist simulators (watchmaker Gods) and interventionists.
I'd do it over and over again, in all sorts of different ways, record the hilarious results, and after each such session reset the simulation back to an earlier, untampered saved state.
I've long suspected that we live in the original universe's blooper reel.
This doesn't match my intuitions at all, but I'm not an expert on normal people. Is there any way the plausible range of reactions to big cryonics facilities can be tested?
Let's ask our neighbors!
I've thought hard about this, but I see no way to get anything to be reliable enough, with the exception of a radioisotope thermal generator. Repairing the vacuum can be done with getters and adsorbers (they just absorb gas molecules chemically), which is a no-moving parts solution. The insulation layer could be full of little sorbs.
[grin] I wasn't sure if those were sci-fi or not. Sure, for starters, but it's hard to say what will and won't be permafrost in 100 years, what with the non-trivial risk of catastrophic climate change and all. If the tank is built right, I think rolling, although unlikely, would still be one of the top 5 most likely failure modes; it is an easy enough flaw to fix. Even municipal water towers, e.g., aren't perfect spheres, and nobody expects those to fall off their columns and plow through downtown Suburb Beach.
Far from being sci-fi, they are quite common (if we're talking about the same thing): Common enough that they're the main reason NASA has been targeted by green groups, even.
You're right to worry about global warming. But permafrost is soil, not ice. Permafrost means "always frozen soil". I suspect that there are regions of northern Canada where even a +20 degree warming would not get rid of the permafrost. Though the cost of getting to these places may be prohibitive? Anyone live in Canada and know about Nunavut?
I can verify that these places are accessible, and that the permafrost extends quite a bit farther south than one might expect. I used to live just south of the Yukon territory. There are regular long-haul trucks that go up there all year round; if you go in winter, you can use an ice road to get to the very cold and remote places. Given the regular volume of traffic, I'd say the cost is not prohibitive. I can get precise figures if you'd like.
Thanks. Do you know what places have the coldest winter temperature?
Hits on google for "coldest place on earth" seem unanimous that it's somewhere in Antarctica. Here's an interesting newspaper article: This sounds like it could be a lot of fun.
That's pretty cool. As I said, -70C is thermodynamically very useful. A phase change heat-pipe could capture that cold from the winter, meaning that throughout the summer your system still only sees an outside temperature of -70C.
This place is much colder... If you could only get permission to use it...
I agree. The ideal is just one person.
I should have made clear: the tanks would be buried under the ground, probably in the Canadian permafrost. No rolling.

I think that the idea is good, and the engineering is fine for back-of-the-envelope, but can we please call it a "vault" or something instead of a grave? Cryonics already has an image problem, and we don't want to suggest the people in the grave are permanently dead.

But, on the other hand, there is the advantage that if you spin it as a grave, then when the cryonics company goes bust, the law protects the patient (to some extent) from being disturbed. For example, creditors can't dismantle it for scrap if it's a grave, but if it's an Alcor/CI asset, then they can. Rather like patently atheistic cryonicists having to say that they have a "religious" objection to autopsy.
"cryonics" -> "autopsy", I assume
Then we can suggest that they're temporarily dead, but they're still dead, so it's a "grave". Religions have been saying that death is temporary for thousands of years anyways, it wouldn't be anything new.
Maybe we should spin a cryo-patient as "undead". Isn't there a vampire show on TV that everyone is glued to? I can't even remember the name.

As a final comment, I disagree that storing all patients in one system is a good idea. Too many eggs in one basket is never good when you're trying to maximize the probability that each patient will survive.

Why? Several baskets certainly make sense if you're trying to maximize the probability that at least a few patients survive, and might make sense if you assign significantly negative utility to higher variance in your probability distribution about survival percentage. If you just care about the mean, why would more baskets be better?


(a) Because a single, large target is very inviting for people to try and break deliberately, but if there are N small targets spread out all over the place, the effort required to inflict a lot of damage is extreme, especially if each grave is under 5 meters of concrete and in a remote location.

Also, the idea of killing "the elitists" by rupturing their collective cryograve seems more righteous than going to Jack Smith's grave and specifically killing Jack. It seems more like murder that way. So the optimal solution is one person per grave.

(b) because graves can use slightly different technology, and in the time between when you set the scheme up and when civilization or just cryonics companies collapse, you can see which designs actually fail, and rescue the patients inside them. A large population of say, 100 graves, with 10 examples of each design type will yield information about what works best as the worst ones break. Then over time you should become more confident in the remaining designs that have zero instances of failure.

It seems to me that, for someone to conceive of their actions as "specifically killing Jack", they have to believe that cryogenics works. If they don't, they're not killing Jack, they're just vandalising his grave, and he was clearly a weirdo. This doesn't necessarily invalidate your points; I'm just saying that you should be careful not to project your own beliefs onto future opposers-of-cryogenics, or you will defend against the wrong attitudes.
I think that people who deliberately wanted to smash cryo facilities would do so because they were jealous, i.e. they thought that there was a chance that it would work. This is especially the case if civilization is going pear shaped and they feel that the cryonauts are getting a lucky escape, and there are rumors of "the elite" escaping to the future that way, etc etc. If you don't think cryo works, you don't have a motive to expend lots of effort smashing cryo facilities. So the only protection required against people who think it won't work is that the facility is so remote they'll never bump into it.

I think you're Rokomorphizing an awful lot. You just need to be in a state of mind where smashing a cryo container seems cool, something that can score points with your friends, and where you think you can get away with it.

And in particular, where smashing cryonics facilities will infuriate the people who care about them, even if you don't believe cryonics will work. I don't have a feeling for whether anti-cryonicism will ever get to that point. My feeling is that the sort of vandalism I'm talking about is extremely impulsive, and just not having cryonic storage near where people live is enough to greatly improve the odds that there won't be random vandalism.
Also guns. People with guns.
Paul Crowley
According to Mike Darwin one cryonics facility (don't remember which, sorry) has already been shot at from the street.
For being a cryonics facility? Is there enough evidence to determine if it could've been just a random drive-by?
Paul Crowley
I'm afraid all I know about it is a brief remark from Mike Darwin somewhere in this sequence of videos:
You probably mean security guards. Note that decent security is going to add something to the cost of cryonics. However, this gets to the scarier possibility-- government policies opposed to cryonics. Any ideas about the odds of that happening?
Absolutely, and this conversation has prompted me to consider how best to handle such factors to ensure my head has the maximum chance of survival. Now that is really scary. Also beyond my ability to create a reliable estimate. I wonder which country is the least likely to have such political problems? Like, the equivalent of the old style swiss banks but for heads.
It's hard to predict that far ahead, though Scandinavia is looking attractive-- the people there don't have a history of atrocious behavior, and there's cold climate available. The nightmare scenario is a hostile world government, or similar effect of powerful governments-- think about the US exporting the war on drugs. I hate saying this, but the only protective strategies I can see are aimed at general increase of power-- make money, develop political competence (this can be a community thing, it doesn't mean everyone has to get into politics) and learn how to be convincing to normal people.
While I don't expect future Vikings to raid cryonics facilities, I feel this statement should have been qualified somehow.
For what it's worth, the Vikings were very peaceable and property-respecting in Scandinavia - I'm sure we're all familiar with Saga-era Iceland's legal system, and the respect for property was substantial even in the culture (why was Burnt Njal's death so horrifying? because besides burning to death, it destroyed the farm). And even outside they weren't so bad; you can't raid a place too quickly if you raze it to the ground.
And the even bigger risk of such political singletons would be that they probably aren't too keen on allowing development of technological singleton needed to pull off the reanimation. Agree again. Unfortunately most of the ways I can imagine to attain the necessary power take more financial resources and skills than developing an FAI.
Could you expand on what you mean by a political singularity? And it's my impression that merely ordinary amounts of wealth can make a difference to politics if they're applied to changing minds.
In this context, exactly what you mean by 'hostile world government'. By 'singularity' I refer to anything that can be conceptualised as a single agent that has full control over its environment. For example, a world government would qualify, assuming there were no independent colonies (or aliens) within realistic reach of our solar system. Few entities with absolute power are likely to be inclined to relinquish that power to another entity. Don't tell big brother that you are going to make him irrelevant!
I find "political singularity" to be very unclear, and I'm curious about whether other LessWrongians came up with the intended meaning.
I was paraphrasing Bostrom from memory, and meant singleton. The relevant section is up to and including the first sentence of '2'.
I came up with the intended meaning but it required context. I think that overarching world government or the like would probably be more clear. This seems like an example of possible overuse of a "singularity" paradigm, or at least fondness for the term.
I suspect the intended word was singleton Which has less overloaded meaning.
That's the one. Edited.
Or a spelling error when referencing a somewhat credible authority. I didn't use 'overarching world government' because it would be clear but convey the wrong meaning.
Ah ok. This makes a lot of sense. Political singleton makes a lot of sense.
Why do you hate saying this, out of curiosity?
Because getting good at that sort of thing would mean getting past gigantic ugh fields at my end.
It might just be my own ugh field talking, but can you think of long-lived institutions that haven't had broad public support but continued their mission effectively over time? Even something like the Catholic Church has had periods where it wasn't really following its mission statement. Or do you think you can get broad-scale public support? I'd rate that plausible in less theistic countries.
Cryonics doesn't need broad public support, it just needs to not be substantially attacked. If we can get it filed under weird harmless hobby which has enough of a lobby that it's not worth fucking with, I think that would be probably be enough. If violent rage against cryonics starts building, that's a hard problem. At the moment, I don't know what to do about that one, except for the usual political and propaganda efforts. I don't know if it's possible to get many people to actually sign up for it unless the tech for revival looks at least imminent, so public support would have to be based in principle-- probably property rights and/or autonomy. Long-lived institutions without broad public support? The only thing I can think of is Talmud study, and I don't know if that would count as an institution.
Okay, we gain money and power now. What happens in 70-100 years when we aren't around to wield it? Will our descendants care on our behalf? How do we create self-sustaining social systems? I'm not much interested in cryo for myself (although I wouldn't mind getting frozen for Science), but these kinds of questions matter for things like existential risk reduction that is time dependent. Like meteor deflection, or FAI theory when the science of AI is getting close to human level (if it is a long hard slog, and can't be done before we figure out how intelligence works). If we could get it to be a status symbol to be signed up for cryonics, people would flock to it. You want to make it visible as well. Perhaps having your dewar as a coffee table or something. Freemasons? Although it is hard to tell how well they keep to their mission statement, they might be an example of a long-lived institution that does keep their mission.
Good question-- you obviously can't control the future of an institution, all you can do is improve the odds. And this isn't something where I have actual knowledge, so anything I could say would be pulling it off the top of my head. I don't think the "who'd care about the early adopters?" question is a real problem-- if you can get the thing going at all, it's going to have to have a lot of regard for promises and continuity.
They don't cut rocks anymore. Like, at all. How would you feel if, a couple hundred years from now, there actually was a Cult of the Severed Head, with silly initiation rituals and charity fundraisers and a football team, but most of them just figured all this 'corpsicle' nonsense was really just symbolic, and spent most of their time arguing about which version of Robert's Rules of Order they should be using and how to lure people away from the Rotary Club?
Wiki says that the origin of freemasonry is uncertain. Do you have better sources? Was the purpose of freemasons to help them cut rock? Or was it just a group of people who shared something banding together to help each other? E.g., "freemasonry was never about cutting rock", to use a Hansonianism. I'm not suggesting we copy freemasonry whole cloth, simply that we need to look at which social organisations survive, at all.
Freemasonry was literally never about stone work. The stone work and ideas of architecture are used as an analogy for a system of morality, as I understand it.
Wikipedia suggests that the theory that freemasonry evolved from stonemason's guilds is considered at least plausible.
This has happened at least once in British Columbia. See this article. As far as I am aware this is at present the only location which specifically singles out cryonics although there are other areas where the regulations for body disposal inadvertently prevent the use of cryonics.
This kind of stuff makes me boil with anger. Some bureaucrat busybody inserts garbage about irradiation into a law at the last second, and there's nothing we can do to get it out? Is there some kind of international law against defamation? Because that is exactly what this is. And the stuff they prattle on about it taking advantage of patients in a vulnerable state is total nonsense. What they're doing -- pressuring patients into not cryopreserving -- is taking advantage, and in a particularly grotesque and unconscionable manner. Ironically, if I were to send them a letter or call them about this stupid law they'd take it as me being a foreign busybody. This is stupid. They're the ones harming BC's global reputation by keeping such idiotic laws on the books. /rant
Isparrish: On the contrary -- as a general rule, in English-speaking countries, legislators enjoy immunity from any legal consequences of anything they say or write in the course of their work. This is known as "parliamentary privilege," and goes far beyond the free speech rights of ordinary citizens. In particular, they are free to commit libel without repercussions, as long as they speak in official capacity. In the U.S., this is even written explicitly into the constitution ("for any speech or debate in either House, [the Senators and Representatives] shall not be questioned in any other place").
Life is not fair. Don't expect other people to not randomly screw up our prospects, up to and including causing our deaths. The solution is for us rationalists/transhumanists/future-oriented folk to become richer, better organized and more numerous so that there are more resources available to prevent more things like this from happening.
Yes, agreed that there is a class of vandal of that form. However, it's not hard to put a cryo-grave high in the mountains of northern Canada, buried under a protective shell of 10 meters of earth and 3 meters of reinforced concrete. The casual thug wouldn't know where it is, wouldn't be able to get there if he did know, and wouldn't be able to get at it if he were there. The more worrying class are people who are strongly motivated to liquidate cryopatients.
Putting a big gravestone on top would be a good idea. If the cryo organization fails, it's too easy for all paper and digital records of the grave to be lost. In that scenario, a gravestone will make it more likely that the grave will be rediscovered before the liquid nitrogen boils away. It'll still be safe from random vandals if it's underground in a remote location, and in the case of anti-cryonics fanatic vandals, there's nothing you can do to keep them from finding out where all the graves are.
Well... maybe. That's an interesting problem. One solution is to just expect that as technology improves, the ability to find things improves. Another is to leave some kind of hint or signal that only a highly advanced civilization would find, like writing a message in atomically small writing on many tiny pieces of metal and scattering them around the planet, or encrypting the location with very strong encryption. Another is for the grave to be "smart" and to broadcast when it is getting to the end of its lifetime. Even better: people reading this discussion in the future might think "hey, let's check Canada for cryonauts".
Hello, people of the future! Please unfreeze us, and give us warm soup! We'll be very grateful! Thanks much. Seriously, though, I wonder about the ability of future archeologists to dig through historical Internet information. At the moment, the only attempt to create a thorough historical archive of the Internet is the Wayback Machine, and since (I estimate) the Internet is growing faster than reliable long-term storage is getting cheap, they'll either have to get lots more funding or start being more selective about what they archive. In terms of the ability to maintain information of interest to future archaeologists through a straight-up global disaster, the Internet isn't any better than paper. Maybe we need to start looking into cuneiform printers...
I think that getting the grave found at the other end is a less serious problem than building it to last. If they have nanotech, they can explore the entire surface of the earth in great detail, including doing an ultrasound scan of the entire crust. Also the thing would have a magnetic signature, being metallic. And if you were really concerned, you could build in a powerful permanent magnet, which would make it even more detectable. You could even use temperature differentials to power a weak radio transmitter, but honestly that's probably making it too easy to find. Better to have a whole host of slight anomalies.
You could handle this by having each separate cryonic organization exchange data about locations of grave sites. The probability that they will all fail is much lower than any single one failing. Moreover, the most likely situations resulting in such large scale failure will be situations where the human economy is so damaged that replacing the liquid nitrogen will not be feasible.
What do you think about a honeycomb like structure that has individual cells for a single person, but is bundled together enough to get a lot of the insulation benefits of being big?
When you consider the pool of potential patients (over a given century) is in the billions, a few million per location does not necessarily constitute putting all your eggs in one basket. And the process of making it mainstream enough for this to happen could have a huge positive impact on the sanity waterline.
I'm thinking about actually proposing this to cryo-companies, so we have to deal with the real world, where there are tens of patients per decade, not billions.
With only a few dozen patients, I don't think you will see appreciable economies of scale. The whole idea seems to me reliant on at least a few thousand patients becoming available within a short period of time (or prepaying).
My calculations indicate that you could have a system that lasted for 200 years for just $500,000, though with a scale of perhaps 100 units being built this would go down by a factor of 2-3 and the reliability would go up.
At r = 2.9 meters, the size is in the 10,000-neuro-patient range (V ≈ 102 m^3, at about 125 neuro patients per cubic meter). You might only fill it part way, though, if you are aiming for maximum duration, as the less cryogen is displaced the longer the system stays cold. Even so, this could probably hold every cryonicist currently in existence.
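As a sanity check on those numbers, here is a short Python sketch. The packing density of ~125 neuro patients per cubic meter (roughly 8 liters each) is taken from the comment above; everything else is just sphere geometry.

```python
from math import pi

r = 2.9                           # tank radius in meters
volume = (4.0 / 3.0) * pi * r**3  # sphere volume, should come out near 102 m^3
patients_per_m3 = 125             # assumption from the comment: ~8 L per neuro patient
capacity = volume * patients_per_m3

print(f"volume   ~ {volume:.0f} m^3")
print(f"capacity ~ {capacity:.0f} neuro patients")  # on the order of 10^4
```

This confirms the "10,000 neuro patient range" figure to within rounding.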
Though it's reliability, not just cost, that matters. If there were fewer patients per grave (e.g. 10 per grave), then the reliability goes up (see my previous comments to this effect).
Still, filling it to 50% of its volume would only bring down the hold time by 50%. And you can only fill a certain percentage with patients, as they are irregularly shaped. I suppose the real question is whether cost or hands-off reliability is the bigger concern.
I think that with 10 patients, such a system would cost $100k each, which is pretty good. With many such systems scattered around the remote, cold parts of the world, the probability of any fraction of systems being vandalized goes down, and the information gained about how such systems fail comes in quickly as a few of them fail (e.g. vacuum leaks).
Not graves!

... and a ΔT of 220 °C ...

With liquid nitrogen at -196°C and the average temp in the places you suggest well below freezing (a few minutes of googling suggests it wouldn't be hard to find an average annual temp of -20°C), I think you could use a more optimistic ΔT of 175°C.

There is a good idea along these lines, though. Have an outer shell cooled by dry ice, which takes about 6 times more heat per unit volume than nitrogen to heat from solid to gas at ambient temperature. The dry ice sublimes at -78C. If you do this, the ΔT that the dry ice sees matters, so building the facility somewhere with a very cold winter temperature makes sense.
Sure, you could do that. You only gain a factor of 1.25 for that, though.
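The quoted factor of 1.25 is consistent with the ratio of the two ΔT values proposed in this sub-thread (220°C vs. 175°C): conductive heat leak scales linearly with ΔT, so hold time scales inversely with it. A quick check:

```python
# Conductive heat leak ~ k * r * dT, so hold time ~ 1/dT for a fixed cryogen load.
dT_warm_site = 220  # LN2 (-196 C) against roughly room-temperature surroundings
dT_cold_site = 175  # LN2 against a -20 C average-annual-temperature site

gain = dT_warm_site / dT_cold_site
print(f"hold-time gain from siting in a cold climate: {gain:.2f}x")  # ~1.26x
```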

If you care about cryonics and its sustainability during an economic collapse or worse, chemical fixation might be a good alternative.

The main advantage is that it requires no cooling and is cheap. People might be normally buried after the procedure, so it would seem less weird.
However, a good perfusion of the brain with the fixative is hard to achieve.

Chemical fixation could also be combined with those low maintenance cryonic graves just in case the nitrogen boils off.

Agreed re: this. What I'd love to know is how chemical and thermodynamic means of preservation interact, for example if you can get someone to -40C in the permafrost, will chemical preservation suffice? What about -70C? How much difference does temperature make? (Arrhenius equation suggests that a 10C decrease roughly halves reaction rates, so -70C is 2^10 or 1000 times slower than 30C, and -140C is 2^17 or 131,000 times slower)
Interesting that 2^20 hours is about 120 years, so an hour at room temp == a decade at -135C == a century at -170C == a millennium at LN2 temperature.
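A numeric sketch of the Q10 rule of thumb used in this sub-thread (reaction rates doubling per 10°C, referenced to 30°C as in the parent comment). This is a crude approximation; real low-temperature kinetics, especially below the glass transition, will deviate from it.

```python
def slowdown(t_cold_c, t_ref_c=30.0, q10=2.0):
    """Rate-slowdown factor under a crude Q10 rule: rates double per 10 C."""
    return q10 ** ((t_ref_c - t_cold_c) / 10.0)

for t in (-70, -135, -140, -170):
    print(f"{t:5d} C: ~{slowdown(t):,.0f}x slower than 30 C")

# 2^20 hours is close to 120 years, hence the hour/decade/century/millennium chain.
print(f"2^20 hours = {2**20 / (24 * 365.25):.0f} years")
```

This reproduces the 2^10 (~1000x) and 2^17 (~131,000x) figures quoted above.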

Australia claims 42% of Antarctica. That should be plenty of room.

Antarctica seems suitable, but why do you suggest that part owned by Australia specifically?
A few months ago, I was thinking about the possibility of cryonic suspension as part of the Australian health-care system. With perhaps 100,000 new people to suspend each year, the AAT seems an obvious place to put them. And once the infrastructure was in place, people from other countries could get involved; it would just be a matter of fashioning the necessary financial and other arrangements. So perhaps your studies should focus on Antarctic geopolitics, the better to protect our future cryo-bases. Unfortunately, I think the pattern theory of identity (according to which your copy is still you) is an illusion, and that this is all cryonics is likely to provide - a way to make copies of the frozen originals. So I find myself wanting to be supportive of the impulse behind cryonics, but unable to earnestly advocate the creation of national cryosuspension facilities. At best I can just try not to impede such an effort should it arise.
Does it help you at all to think of cryonics as a form of advanced reproduction?
Would you also disagree with the pattern theory of identity as applied to, say, a game of chess? Imagine I am playing chess on a chessboard with a friend, and then we have to go home, and I copy down the positions of all the pieces and put the board away. The next day, we get out another board, put the pieces into their positions, and start playing from there. Are we playing the same game of chess?
No, it then becomes a Zombie Chess game.
That's a thought. I must confess I hadn't considered my country particularly likely to be a world leader in cryonics adoption. That must be a frustrating belief. Right or wrong, I must say my anticipated experience is a whole lot better. But then... I philosophically evaluate preferences over the entire state of the universe by default, and yes, 'identity' and affiliation with this form are not something that particularly comes up.
I think that cryonics patients could actually be repaired rather than sliced and scanned. It would be more difficult, but with advanced nanotechnology and the nice access that the blood vessels provide, it seems that it would be pretty easy to do. Repairing the body would be even easier.
So do I. But the result will be a copy. During sleep and hypothermia, the brain remains in the same physical phase. Cellular metabolism never shuts down, for example. But I would be rather surprised if the "neurophysical correlate of selfhood" survives the freezing transition. ETA: See followup comment.
Paul Crowley:
When you say you would be surprised, is there any actual observation that could surprise you here?
It's not as though Mitchell's belief is uniquely untestable. It's more like we can't collect any evidence at all about whether identity is preserved, just by reanimating a bunch of people and asking them. We'd need some sort of neurological description of what "selfhood" means, and then presumably testing to see whether this property is preserved after reanimation would be the actual surprising observation. Until then, it's irrational to dismiss either theory based purely on the argument that "even if we cryopreserve you, it wouldn't falsify your theory", since this applies to both sides.
Paul Crowley:
No, the position that is unfalsifiable is that there is a distinction here at all.
I don't think so. I'm a processist (though I do think it's unlikely that quantum effects matter), but I can imagine kinds of discoveries that would falsify my current belief on that matter. It could turn out, once we localize and understand consciousness:

  • ...that it's not even "on" or merely suspended all the time, but sometimes is "off" in the normal course of brain operation.
  • ...that it's possible to erase clear memories even with the brain in the same physical state (this would support either Porter's view or some more spiritual dualism).
  • ...that there is more than a single thread of consciousness, and no particular continuity of identity for the person as a whole, even though some thread is operating all the time.

Of those, one and three even seem plausible, but I can't think of a way to do the experiments at our current level of understanding and technology. In any case, once we actually have a working and well-tested theory of consciousness, identity will either vanish or be similarly well-understood.
Actually, there is. If we cryopreserved Mitchell and then reanimated him, he would be very surprised: it would falsify his theory. If we did it to anyone else, however, that wouldn't be enough. It would have to be him.
Paul Crowley:
I suspect you wrong him here - I'm guessing post-freeze Mitchell would say "Obviously I feel like I'm the same person, but now I know I've been cryopreserved I must conclude I'm a copy, not the real thing. I feel good about being alive, but it's copy-Mitchell who feels good, not the guy who got frozen."
Well, in that case he really has joined the fairy cult.
The neurophysical correlate of selfhood can survive a temperature drop to 0C but it can't survive a phase change? So selfhood is kind of like latent heat of fusion? This is grade A+ magical thinking.
So that is why there is such interest in vitrification! grin/duck/run...
Yes, that's right, if you get vitrified, does that count as a different phase?!
It makes a difference legally. Actually I suspect that Antarctica may be a bridge too far legally, even though thermodynamically it's nice to have access to -89C (the blackbody radiation from -89C is less than 15% of that at room temperature because it scales as T^4, which is important as radiative heating may turn out to be the hardest mode of thermal transport to block)
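A quick check of the T^4 claim above. Note this compares blackbody emissive power (Stefan-Boltzmann, sigma*T^4) at the two temperatures; a full radiative heat-load calculation would use T_hot^4 - T_cold^4, but the ratio shows how weakly a -89C environment radiates compared to room temperature.

```python
def emission_ratio(t_cold_c, t_warm_c):
    """Ratio of blackbody emissive power (sigma * T^4) between two temperatures."""
    kelvin = lambda c: c + 273.15
    return (kelvin(t_cold_c) / kelvin(t_warm_c)) ** 4

frac = emission_ratio(-89, 25)
print(f"a -89 C environment radiates {frac:.0%} as much as one at 25 C")  # ~15%
```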

Why limit yourself to no maintenance at all in your feasibility speculations? Tending graves is common across cultures. As long as you're spinning a tank of liquid nitrogen as a "grave", why not spin a nitrogen topoff as equivalent to keeping the grass trimmed or bringing fresh flowers?

Because if the shit hits the fan and cryo companies go bust, who can you rely on to pay $5000 for a tanker to come every few years? I don't even think I'd rely on my kids to do that, every year without fail, even if there's a major depression and their own kids are going hungry. And if the shit really hits the fan (civilizational collapse) then there will be no liquid nitrogen.
I see what you mean. It's a matter of what threat you have in mind. I'm thinking mainly of the hostility of a pretty-much intact society to cryonics, and how to take your idea of protecting preserved people by using the notion of "respect for the dead" further, also incorporating the idea of honoring the dead by maintaining shrines/graves, etc. You're totally right that if there's a global depression or civilizational collapse, then the threat of thawing comes more from inability to maintain rather than unwillingness or opposition. Maybe it would help to split the post, or maybe organize this discussion, to investigate these ideas separately? It seems that engineering speculation about zero-maintenance cryonics is interesting and useful, and that using the "grave" analogy to make cryonics more acceptable and safe from interference is also interesting, but different issues and constraints arise for each of them.
Could someone design a stainless-steel prayer wheel that doubles as a hand-cranked device for condensing nitrogen from the atmosphere? "We maintain this mechanism to honor our ancestors, that one day they may be reborn" sounds like the kind of thing some Shinto priestesses could've kept straight for all of recorded history, let alone a few centuries.
Moving parts; it would break. If you could persuade people to keep a fire lit in a certain location most of the time, you could use the heat energy to power a TAD-OPTR (thermoacoustically driven orifice pulse tube refrigerator) cryocooler with no moving parts. It's an interesting idea. You could design it so that the fire only has to be stoked 1% of the time on average, for example.

A couple thoughts on places to look for ideas, places where people have probably been thinking about similar challenges:

  • Interstellar Travel There's a lot of speculation about feasibility here, and I think people generally assume the need for some sort of long-term, low-power cryogenic preservation. They do assume access to interstellar vacuum, though.
  • DNA "arks" and similar biodiversity libraries. I haven't heard of anything in this space looking at zero- or low-maintenance preservation, but maybe there's a paranoid fringe?
And presumably also interstellar temperatures of 3 degrees above absolute zero!
I think GreenRoot refers to the situation where this isn't available, or they wouldn't have to worry about cryogenic preservation.
That doesn't make sense to me. Space is cold. You can't be doing interstellar travel and not have access to cold.
I understand. I was thinking... but am I misjudging relative distances? That is, a spaceship wouldn't spend sufficient time near stars?
If you're in interstellar space, i.e. travelling to another star, the inverse square law and the large distances very quickly kill radiation heat from either the destination or origin star. However, if (as Vladimir suggested) you want to stay close to the sun, i.e. in earth orbit, you have to use a reflective shield.

My comments in this sub-thread brought out more challenges and queries than I expected. I thought that by now everyone would expect me to periodically say a few things out of line regarding identity, consciousness, and so on, and that only the people I was addressing might respond. I want to reply in a way which provides some context for the answers I'm going to give, but which covers old territory as little as possible. So I would first direct interested parties to my articles here, for the big picture according to me. Those articles are flawed in various...

Why do people keep trying to posit quantum as the answer to this problem when it has been so soundly refuted?
My current leading hypotheses:

  • "Quantum mechanics" feels like a mysterious-enough big rock to crack the equally mysterious phenomenon of "consciousness".
  • Free will feels like it requires indeterminism, and quantum mechanics is often described as indeterministic.
There is a long history of diverse speculation by scientists about quantum mechanics and the mind. There was an early phase when biology hardly figured and it was often a type of dualism inspired by Copenhagen-interpretation emphasis on "observers". But these days the emphasis is very much on applying quantum mechanics to specific neuromolecular structures. There are papers about superpositions of molecular conformation, transient quantum coherence in ionic complexes, phonons in filamentary structures, and so on. To me, this work still doesn't look good enough, but it's a necessary transitional step, in which ambitious simple models of elementary quantum biophysics are being proposed.

The field certainly needs a regular dose of quantitative skepticism such as Tegmark provided. But entanglement in condensed-matter systems is a very subtle thing. There are many situations in which long-range quantum order forms despite local disorder. Like it or not, you can't debunk the idea of a quantum brain in a few pages, because we assuredly have not thought of all the ways in which it might work.

As for the philosophical rationale of the thing, that varies a lot. But since we know that most neural computation is not conscious, I find it remarkably natural to suppose that it's entanglement that makes the difference. Any realistic hypothesis is not going to be fuzzy and just say "the quantum is the answer". It will be more like: special long-lived clathrins found in the porosome complex of astrocytes associated with glutamate-receptor hotspots in neocortical layer V share quantum excitons in a topologically protected way, forming a giant multifractal cluster state which nonlocally regulates glutamatergic excitation in the cortex - etc. And we're just not at that level yet.
What evidence is there that would promote any given quantum-mechanical theory of consciousness to attention? I mean that sincerely - there ought to be some reason that, say, you have to come up with your monad theory, and I quite frankly don't know of any that would impel me to do so.
How I got here. Starting point: consciousness is real. This sequence of conscious experiences is part of reality.

Next: The physical world doesn't look like that. (That consciousness is a problem for atomism has been known for more than 2000 years.) So let us suppose that this is how it feels to be some physical thing "from the inside".

Here we face a new problem if we suppose that orthodox computational neuroscience is the whole story. There must then be a mapping from various physical states (e.g. arrangements of elementary particles in space, forming a brain) to the corresponding conscious states. But mappings from physics to causal-functional roles are fuzzy in two ways. We don't have, and don't need, an exact criterion as to whether any particular elementary particle is part of the "thing" whose state we are characterizing functionally. Similarly, we don't have, and don't need, a dividing line in the space of all possible physical configurations providing an exact demarcation between one computational state and another.

All this is just a way of saying that functional and computational properties are not entirely objective from a physical standpoint. There are always borderline cases, but we don't really care about not having an exact border, because most of the time the components of a functioning computational device are in physical states which are obviously well in correspondence with the abstract computational states they represent. A device whose components are constantly testing the boundaries of the mapping is a device in danger of deviating from its function.

However, when it comes to consciousness, a fuzzy-but-good-enough mapping like this is not good enough, because consciousness (according to our starting point) is an entirely real and "objective" element of reality. It is what it is "exactly", and therefore its counterpart in physical ontology must also have an exact characterization, both with respect to physical parts and with respect to phys
I think you grant excessive reliability to your impressions of consciousness. A philosophical argument along the lines proposed is an awfully weak thread to hang a theory on.
Doesn't it mean that consciousness is an epiphenomenon? All quantum algorithms can be expressed as equivalent classical algorithms, so we could have an unconscious computer which is functionally equivalent to a human brain. ETA: I can't see any reason to associate consciousness with some particular kind of physical object/process, as it undermines the functional significance of consciousness as the high-level coordination, decision-making and self-representation system of the brain.
No, it would just mean that you can have unconscious simulations of consciousness. Think of it like this. We say that the things in the universe which have causal power are "quantum tensor factors", and consciousness always inhabits a single big tensor factor, but we can simulate it with lots of little ones interacting appropriately. More precisely, consciousness is some sort of structure which is actually present in the big tensor factor, but which is not actually present in any of the small ones. However, its dynamics and interactions can be simulated by the small ones collectively. Also, if you took a small tensor factor and made it individually "big" somehow (evolved it into a big state), it might individually be able to acquire consciousness. But the hypothesis is that consciousness as such is only ever found in one tensor factor, not in sets of them. It's a slightly abstract conception when so many details are lacking, but it should be possible to understand the idea: the world is made of Xs, an individual X can have property Y, a set of Xs cannot, but a set of Xs can imitate the property. What would really make consciousness epiphenomenal is if we persisted with property dualism, so we have the Xs, their "physical properties", and then their correlated "subjective properties". But the whole point of this exercise is to be able to say that the subjective properties (which we know to exist in ourselves) are the "physical properties" of a "big" X. That way, they can enter directly into cause and effect.
Doesn't this undermine your entire philosophical basis for your argument which rests on the experience of consciousness being real? if your system allows such an unconscious classical simulation then why believe you are one of the actual conscious entities? This seems P-Zombieish.
It's like asking, why do you think you exist, when there are books with fictional characters in them? I don't know exactly what is happening when I confirm by inspection that some reality exists or that I have consciousness. But I don't see any reason to doubt the reality or efficacy of such epistemic processes, just because there should also be unconscious state machines that can mimic their causal structure.
I understand you. Your definition is: "real consciousness" is a quantum tensor factor belonging to a particular class of quantum tensor factors, because we can find them in human brains, we know that at least one human brain is conscious, and consciousness must be a physical entity to participate in a causal chain. All other quantum tensor factors and their sets are not consciousness, by definition. Questions are:

1. How do we define said class without fuzziness, when it is not yet known what is not "real consciousness"? Should we include dolphins' tensor factors, great apes' ones, and so on?
2. Is it always necessary for something to exist as a physical entity to participate in a causal chain? Does temperature exist as a physical entity? Does the "thermostatousness" of a refrigerator exist as a physical entity? Of course, temperature and "thermostatousness" are our high-level descriptions of physical systems; they don't exist in your sense.

So, it seems that you see a contradiction between the subjectively apparent existence of consciousness and the apparent nonexistence of a physical representation of consciousness as a high-level description of brain functions. Don't you see a flaw in that contradiction?
Causality for statistical or functional properties mostly reduces to generalizations about the behavior of exact microstates. ("Microstate" means physical state completely specified in its microscopic detail. A purely thermodynamic or macroscopic description is a "macrostate".) The entropy goes up because most microstate trajectories go from the small phase-space volume into the large phase-space volume. Macroscopic objects have persistent traits because most microstate trajectories for those objects stay in the same approximate region of state space. So the second question is about ontology of macrostate causation. I say it is fundamentally statistical. Cause and effect in elemental form only operates locally in the microstate, between and within fundamental entities, whatever they are. Macrostate tendencies are like theromodynamic laws or Zipf's law, they are really statements about statistics of very large and complex chains of exact microscopic causal relations. The usual materialist idea of consciousness is that it is also just a macrostate phenomenon and process. But as I explained, the macrostate definition is a little fuzzy, and this runs against the hypothesis that consciousness exists objectively. I will add that because these "monads" or "tensor factors" containing consciousness are necessarily very complex, there should be a sort of internal statistical dynamics. The laws of folk psychology might just be statistical mechanics of exact conscious states. But it is conceptually incoherent to say that consciousness is purely a high-level description if you think it exists objectively; it is the same fallacy as when some Buddhists say "everything only exists in the mind", which then implies that the mind only exists in the mind. A "high-level description" is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent. The first question is a question about h
What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition? If we feed the internal states of a classical brain simulation into a quantum box (outputs discarded) containing 10^2 or 10^20 entangled particles/quasi-particles, will it make the simulation conscious? How, in principle, can we determine whether it will or will not?

An interesting thing is that mind as a high-level description of brain workings is mind-dependent on the same mind (it's not a paradox, but a recursion), not on a mind. Different observers will agree on the content of the high-level model of brain workings present in the same brain, as that model is unambiguously determined by the structure of the brain. Thus mind is subjective in the sense that it is a conceptual description of brain workings (including concepts of self, mind and so on), and mind is objective in the sense that its content can be reconstructed from the structure of the brain. It isn't a paradox, really.

I can't help imagining the procedure for accepting works on philosophy of mind: "Please, show your tensor factor. ... Zombies and simulations are not allowed. Next".
The difference is between conscious and not conscious. This will translate mathematically into presence or absence of some particular structure in the "tensor factor". I can't tell you what structure because I don't have the theory, of course. I'm just sketching how a theory of this kind might work. But the difference between small and big is number of internal degrees of freedom. It is reasonable to suppose that among the objects containing the consciousness structure, there is a nontrivial lower bound on the number of degrees of freedom. Here is where we can draw a line between small and big, since the small tensor factors by definition can't contain the special structure and so truly cannot be conscious. However, being above the threshold would just be necessary but not sufficient, for presence of consciousness. If you have a completed theory of consciousness, then you answer this question just as you would answer any other empirical question in a domain where you have a well-tested theory: You evaluate the data using the theory. If the theory tells you all the tensor factors in the box are below the magic threshold, there's definitely no consciousness there. If there might be some big tensor factors present, it will be more complicated, but it will still be standard reasoning. If you are still developing the theory, you should focus just on the examples which will help you finish it, e.g. Roko's example of general anesthesia. That might be an important clue to how biology, phenomenology, and physical reality go together. Eventually you have a total theory and then you can apply it to other organisms, artificial quantum systems like in your thought experiment, and so on. Any causal model using macrostates leaves out some micro information. For any complex physical system, there is a hierarchy of increasingly coarse-grained macrostate models. At the bottom of the hierarchy is exact physical fact - one model state for each exact physical microstate. At the top o
Here's the point of divergence. There is a peculiar coarse-graining. Specifically, it is the conceptual self-model consciousness uses to operate on (as I wrote earlier, it uses concepts of self, mind, desire, intention, emotion, memory, feeling, etc. When I think "I want to know more", my consciousness uses concepts of that model to (crudely) represent the actual state of (part of) the brain, including the parts which represent the model itself). Thus, to find a consciousness in a system it is necessary to find a coarse-graining such that the corresponding macrostate of the system is isomorphic to the physical state of part of the system (it is not sufficient, however). Or, in the map-territory analogy, to find a part of the territory that is isomorphic to a (crude) map of the territory. Edit: Well, it seems that a lower bound on the information content of the map is necessary for this approach too. However, this approach doesn't require adding fundamental ontological concepts. Edit: The isomorphism condition is too limiting; it will require another level of coarse-graining to be true. I'll try to come up with another definition.
This really sounds to me like a perfect fit for Robin's grandparent post. If, say, nonlocality is important, why achieve it through quantum means?
This is meant to be ontological nonlocality and not just causal coordination of activities throughout a spatial region. That is, we would be talking about entities which do not reduce to a sum of spatially localized parts possessing localized (encapsulated) states. An entangled EPR pair is a paradigm example of such ontological nonlocality, if you think the global quantum state is the actual state, because the wavefunction cannot be factorized into a tensor product of quantum states possessed by the individual particles in the pair. You are left with the impression of a single entity which interfaces with the rest of the universe in two places. (There are other, more esoteric indications that reality has ontological nonlocality.) These complex unities glued together by quantum entanglement are of interest (to me) as a way to obtain physical entities which are complex and yet have objective boundaries; see my comment to RobinZ.
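The non-factorizability claim about an EPR pair can be checked directly for pure two-qubit states (a small sketch; the function name and tolerance are my own choices): a state with amplitudes (a00, a01, a10, a11) factorizes into a tensor product of one-qubit states exactly when the 2x2 amplitude matrix has rank 1, i.e. when a00*a11 - a01*a10 = 0.

```python
from math import isclose, sqrt

def is_product_state(a00, a01, a10, a11):
    """True iff the two-qubit amplitudes factor as (x0,x1) tensor (y0,y1)."""
    return isclose(abs(a00 * a11 - a01 * a10), 0.0, abs_tol=1e-12)

# |00>: trivially a product of |0> and |0>.
assert is_product_state(1, 0, 0, 0)

# EPR/Bell state (|00> + |11>)/sqrt(2): not factorizable -- hence the
# picture of a single entity interfacing with the universe in two places.
s = 1 / sqrt(2)
assert not is_product_state(s, 0, 0, s)
```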
Not only does this quantum brain idea violate known experimental and theoretical facts about the brain, it also violates what we know about evolution. Why would evolution design a system that maintains coherence during sleep and unconsciousness, if this has no effect on inclusive genetic fitness? (Mitchell Porter thinks that his "copy" would behave essentially identically to what he would have done had he not "lost his selfhood", so in terms of reproductive fitness, there's no difference)
Though I agree that this quantum brain idea is against all evidence, I don't think the evolutionary criticism applies. Not every adaptation has a direct effect on inclusive genetic fitness; some are just side effects of other adaptations.
Sure, but the empirical difficulty of maintaining a quantum coherent state would imply that it isn't the kind of thing that would happen by accident.
Well, it might be that maintaining the system rather than restarting it when full consciousness resumes is an easier path to the adaptation, or has some advantage we don't understand. Of course, if the restarted "copy" would seem, externally and internally, like a continuation, the natural question is why bother positing such a monad in the first place?
If you want something that flies, the simplest way is for it to have wings that still exist even when it's on the ground. We don't actually know (big understatement there) the relative difficulty of evolving a "persistent quantum mind" versus a "transient quantum mind" versus a "wholly classical mind". There may also be an anthropic aspect. If consciousness can only exist in a quantum ontological unit (e.g. the irreducible tensor factors I mention here), then you cannot find yourself to be an evolved intelligence based solely on classical computation employing many such entities. Such beings might exist in the universe, but by hypothesis there would be nobody home. This isn't relevant to persistent vs transient, but it's relevant for quantum vs classical.

You seem to jump to the conclusion that, in the favorable case (that consciousness only exists in quantum computers AND quantum coherence is the fundamental basis of persistent identity), the coherence timescale would obviously be your whole lifetime, even if hypothermia, anesthetics, etc. happen, but that as soon as you are cryopreserved, it decoheres, so that the physical basis of persistent identity corresponds perfectly to the culturally accepted notion.

But that would be awfully convenient! Why not assign most of your probability to the proposition that evolution accidentally designed a quantum computer with a decoherence timescale of one second? ten seconds? 100 seconds? 1000 seconds? 10,000 seconds? Why not postulate that unconsciousness or sleep destroys the coherence? After all, we know that classical computation is perfectly adequate for evolutionarily adaptive tasks (because we can do them on a classical computer).

This is, first of all, an exercise in taking appearances ("phenomenology") seriously. Consciousness comes in intervals with internal continuity, one often comes to waking consciousness out of a dream (suggesting that the same stream of consciousness still existed during sleep, but that with mental and physical relaxation and the dimming of the external senses, it was dominated by fantasy and spontaneous imagery), and one should consider the phenomenon of memory to at least be consistent with the idea that there is persistent existence, not just throughout one interval of waking consciousness, but throughout the whole biological lifetime. So if you're going to think about yourself as physically actual and as actually persistent, you should think of yourself as existing at least for the duration of the current period of waking consciousness, and you have every reason to think that you are the same "you" who had those experiences in earlier periods that you can remember. The idea that you are flickering in and out of existence during a single day or during a lifetime is somewhat at odds with the phenomenological perspective. Cryopreservation is far more disruptive than anything which happens during a biological lifetime. Cells full of liquid water freeze over and grow into ice crystals which burst their membranes. Metabolism ceases entirely. Some, maybe even most models of persistent biological quantum coherence have it depending on a metabolically maintained throughput of energy. To survive the freezing transition, it seems like the "bio-qubits" would have to exist in molecular capsules that weren't penetrated as the ice formed.
But if you're going to argue phenomenologically, then any form of reanimation that restores the person's memory in a continuous way will seem (from the inside) to be continuous. Can I ask: have you ever been under a general anesthetic? It is a philosophically significant life event, because what you experience is just so incredibly at odds with what actually happens. You lie there waiting for the anesthetic to take effect, and then the next instant your eyes open and you find your arm/leg/whatever in plaster, and a glance at the clock suggests that 3 hours have passed. I'd personally want to be cryopreserved before I fully lost my marbles so that I could experience that kind of time travel. Imagine closing your eyes, then reopening them and it's the 23rd century. How cool would that be?
I must have been, at some point, but a long time ago and I don't remember. Clearly there are situations where extra facts would lead you to conclude that the impression of continuity is an illusion. If you woke up as Sherlock Holmes, remembering your struggle with Moriarty as you fell off a cliff moments before, and were then shown convincingly that Holmes was a fictional character from centuries before, and you were just an artificial person provided with false memories in his image, you would have to conclude that in this case, you had erred somehow in judging reality on the basis of subjective appearances. It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out. Reliable reconstruction would require such a profound knowledge of brain structure and function that there wouldn't be room for continuing uncertainty about quantum effects in the brain. By then you would know whether it was there or not there, so regardless of how the revivee felt, the people(?) doing the reviving should already know the answers regarding identity and the nature of personal existence. (I add the qualification reliable reconstruction, because there might well be a period in which it's possible to experiment with reconstructive protocols while not really knowing what you're doing. Consider the idea of freezing a C. elegans and then simulating it on the basis of micrometer sections. We could just about do this today, except that we would mostly be guessing how to map the preserved ultrastructure to computational elements of a simulation. One would prefer the revival of human beings not to proceed via similar trial and error.) In the present, the question is whether subjectively continuous but temporally discontinuous experience, such as you report, is evidence for the self only having an intermittent physical existence. Well, the experience is consistent with the idea that you really did cease to exist during those hours.
There is no uncertainty. A large amount of evidence points to the lack of quantum effects in the brain. Furthermore, there was never really any evidence in favor of quantum effects, and certainly none has been produced.
I think that most of the problems of consciousness have already been figured out; Gary Drescher, Dan Dennett, and Drew McDermott have done it. They just don't yet have overwhelming evidence, so you have to be "light like a leaf blown by the winds of evidence" to see their answer as being correct. The remaining unsolved problems in this area seem to be related to the philosophy of computations-in-general, such as "what counts as implementing a computation" or anthropic/big-world problems.
Which is to say, decision theory for algorithms, understanding of how an algorithm controls mathematical structures, and how intuitions about the real world and subjective anticipation map to that formal setting.
Well, that's one possible solution. But not without profound problems, for example the problem of lack of a canonical measure over "all mathematical structures" (even the lack of a clean definition of what "all structures" means). But it certainly solves some problems, and has the sort of "reductionistic" feel to it that indicates it is likely to be true.
Logics allow one to work with classes of mathematical structures (not necessarily individual structures), which seems to be a good enough notion of working with "all mathematical structures". A "measure" (if, indeed, it's a useful concept) is an aspect of preference, and preferences are inherently non-canonical, though I hope to find a relatively "canonical" procedure for defining ("extracting") preference in terms of an agent-program.
In the case of MWI quantum mechanics, the measure is Integral[ |Psi|^2 ], and if Robin's Mangled Worlds is true, there's no doubt that this measure is not "preference". What is the difference between the MWI/Mangled Big World and other Big Worlds such that the measure is preference in the others but not in MWI/Mangled?
Any given concept is what it is. Truth about any given concept is not a matter of preference. But in cases where there is no "canonical choice of a concept", it is a matter of choice which concept to consider. If you want a concept with certain properties, these properties already define a concept of their own, and might determine the mathematical structure that satisfies them, or might leave some freedom in choosing the one you prefer for the task. In the case of the quantum mechanical measure, you want your concept of measure to produce "probabilities" that conform to the concept of subjective anticipation, which is fairly regular and thus creates the illusion of "universality", because the preferences of most minds like ours (evolved like ours, in our physics) have subjective anticipation as a natural category, a pattern that has significant explanatory (and hence, optimization) power. But subjective anticipation is still not a universally interesting concept; one can consider a mind that looks at your theories about it, says "so what?", and goes on optimizing something else.
The reason I spoke about Mangled Worlds MWI is that the Integral[ |Psi|^2 ] measure is not dependent upon subjective anticipation. This is because in mangled worlds QM there is a physically meaningful sense in which some things cease to exist, namely that things (people, computers, any complex or macroscopic phenomenon) get "mangled" if their Integral[ |Psi|^2 ] measure gets too low.
That preference is a cause of a given choice doesn't prohibit physics from also being a cause. There is rarely an ultimate source (a unique dependence). You value thinking about what is real (what accords with physical laws) because you evolved to value real things. There are also concepts not about our physical laws which you value, because evolution isn't a perfect designer. This is also a free will argument. I say that there is a decision to be made about which concepts to consider, and you say that the decision is already made by the laws of physics. It's easier to see how you do have free will for more trivial choices. It's more difficult to consider acting and thinking as if you live in different physics. In both cases, the counterfactual is physically impossible; you couldn't have made a different choice. Your thoughts accord with the laws of physics, are caused by physics, are embedded within physics. And in both cases, what is actually true (what action you'll perform, and what theories you'll think about) is determined by your decision. As an agent, you shouldn't (terminally) care about what the laws of physics say, only about what your preference says, so this cause is always more relevant, although currently less accessible to reflection.
Yes, I get that free will is compatible with deterministic physics. That is not the issue; I don't quite see what about my reply made you think this was relevant. The point is that in Mangled Worlds QM there is such a thing as objective probability, even though the world is (relatively) big, and it basically turns out to be defined by just the number of instances of something rather than something else.
I think Vladimir is essentially saying that caring about that objective property of that particular mathematical structure is still your "arbitrary", subjectively objective preference. I don't think I understand where the free will argument comes in either.
Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien's Middle Earth. But I think that what Big Worlds calls into question is whether there is such a thing as "what actually exists" and "what will actually happen". That's the problem. I agree that evolution could (like it did in the case of subjective anticipation and MWI QM) have played a really cruel trick on us. But I brought up Mangled Worlds because it seems that Mangled Worlds is a case where there is such a thing as "what will actually happen" and "what actually exists", even though the world is relatively big (though Mangled Worlds is importantly different from MWI with no mangler or world-eater). The important difference between MWI and Mangled-MWI is that if you say "ah, measure over a big world is part of preference, and my preference is for a |Psi|^10 measure", then you will very quickly end up mangled, i.e. there will be no branches of the wavefunction where your decision algorithm interacts with reality in the intended way for an extended period of time.
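The contrast between the Born measure and a hypothetical |Psi|^10 measure can be put in toy numbers (the branch amplitudes and the mangling cutoff below are made up for illustration; Mangled Worlds itself does not reduce to a formula this simple):

```python
# Compare the Born measure |psi|^2 with a hypothetical |psi|^10 measure
# on the same branch amplitudes, marking branches whose relative measure
# falls below a (made-up) mangling threshold.

amplitudes = [0.8, 0.5, 0.3, 0.1]

def normalized_measure(amps, p):
    """Relative weight of each branch under the measure |a|^p."""
    weights = [abs(a) ** p for a in amps]
    total = sum(weights)
    return [w / total for w in weights]

born = normalized_measure(amplitudes, 2)
tenth = normalized_measure(amplitudes, 10)

threshold = 1e-3
mangled_born = [m < threshold for m in born]
mangled_tenth = [m < threshold for m in tenth]

# The Born measure leaves all four branches above the cutoff; under
# |psi|^10 almost all measure piles onto the largest branch, pushing
# the two smallest branches below the cutoff.
print(mangled_born, mangled_tenth)
```

The point being illustrated: a ^10 measure concentrates so sharply on high-amplitude branches that low-amplitude ones (including, in the commenter's scenario, the branches where your decision algorithm operates) rapidly fall under any fixed mangling cutoff.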
So what? Not everyone cares about what happens in this world. Plus, you don't have to exist in this world to optimize it (though it helps).
If we take as an assumption that Mangled-Worlds MWI is the only kind of "Bigness" that the world has, then there is nothing to care about apart from what happens in one of the branches, and since nothing exists apart from those branches, you have to exist in at least one of them to actually do anything. Though, of course, acausally speaking, a slim probability that some other world exists is enough for people to (perhaps?) take notice of it. EDIT: One way to try to salvage objective reality from Big Worlds would be to drive a wedge between "other worlds that we have actual evidence for" (such as MWI) and "other worlds that are in-principle incapable of providing positive evidence of their existence" (such as Tegmark's MUH), then show that all of the evidentially implied big worlds are not problematic for objectivity, as seems to be the case for Mangled-MWI. However, this would only work if one were willing to part with Kolmogorov/Bayesian reasoning and say that certain perfectly low-complexity hypotheses are thrown out for being "too big" and "too hypothetical".
I'm fairly sure at this point it's conceptual confusion to say that. You can care about mathematical structures, and control mathematical structures, that have nothing to do with the real world. These mathematical structures don't have to be "worlds" in any usual sense, for example they don't have to be processes (have time), and they don't have to contain you in them in any form. One of the next iterations of ambient decision theory should make it clearer, though the current version should give a hint (but probably isn't worth the bother in the current form, considering it has known philosophical/mathematical bugs - but I'm studying, improving my mathematical sanity).
Perhaps the distinction I'm interested in is the difference between control and function-ness. There is an abstract mathematical function, say, the parity of the number of open eyes I have. It is a function of me, but I wouldn't say that I am controlling it in the conventional sense, because it is abstract.
More abstract than whether your eyes are open? They're about the same distance from the underlying physics.
I guess if there were an actual light that lit up as a function of the parity, then I would feel comfortable with "control", and I would say that I am controlling the light
... Whether the light is on is also pretty abstract, no?
The role of the decision-theoretic notion of control is to present the consequences of your possible decisions for evaluation by preference. Whatever fills that role counts; and if one can value mathematical abstractions, then the notion of control has to describe how to control abstractions. Conveniently, the real world can be seen as just another mathematical structure (or class of structures).
I would say that the conventional usage of the word "control" requires the thing-under-control to be real, but sure, one can use the words how one pleases. It worries me somewhat that we seem so concerned with which word-set we use here; this indicates that the degree to which we value performing certain actions depends on whether we frame it as "controlling something that's no more or less real than the laptop in front of you" versus "this nonexistent abstraction happens to be a function of you; so what? There are infinitely many abstract functions of you". Is there some actual substance here?
This complication is created by the same old ontology problem: if preference talks about the real world, power to you (though that would make physics relevant, which is no good either), but if it doesn't, we have to deal with that. And we can't assume a priori what preference talks about. My previous position (and, it seems, the long-held position of Wei Dai) was to assume that preference can be expressed as talking about the behavior of programs (as in UDT), since ultimately it has to determine the behavior of the agent's program, and seeing the environment as programs fits the pattern and allows one to express preferences that hold arbitrary strategies of the agent as the best option. Now, since ambient decision theory (ADT) suggests treating the notions of consequences of the agent's decision as logical theories, it has become more natural to see the environment as models of those theories, and so as structures more general than programs. But more importantly, if, as logical theories, preferred concepts do not refer to programs (even though they can directly influence only the behavior of the agent's program), there is no easy way of converting them into preference-about-programs equivalents. Getting the info out of those theories may well be undecidable, something to work on during decision-making and not on the preliminary stage of preference-definition.
Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You'd import all the problems of the philosophy of mathematics and heap them on top of the problems of ethics. Not to mention Gödelian problems, large cardinal axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.
Scary, and I haven't even finished converting myself into a pure mathematician yet. :-) I was hoping to avoid these issues by somehow limiting preference to programs, but investigation led me back to the harder problem statement. Ultimately, a simpler understanding has to be found, that sidesteps the monstrosity of set-theoretical infrastructure and diversity of logics. At this point though, I expect to benefit from conceptual clarity brought by standard mathematical tools.
I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.
I believe as much: for foundational study of decision-making, the notions of "real world" are useless, which is why we have to deal with "all mathematical structures", somehow accessed through more manageable concepts (for which the best fit is logic, though that's uncomfortable for many reasons). (I'd still expect that it's possible to extract some fuzzy outline of the concept of the "real world", like it's possible to vaguely define "chairs" or "anger".)
Maybe. Though my intuition seems to point to a more fundamental role for "reality" in decisionmaking. Evolution designed our primitive notions of decisionmaking in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new contexts, i.e. the set of all mathematical structures? I predict that we'll end up with a plethora of different kinds of decision theory, leading to a whole assortment of different practical recommendations, where the very finest of framing differences could push a person to act in completely different ways. The one exception would be a decision theory that cashes out the notion of reality, which will be relatively unique because of its relative similarity to our pretheoretic notions. But I am willing to be proven wrong.
Generalization comes from the expressive power of a mind: you can think about all sorts of concepts beside the real world. That evolution would fail to delineate the real world in this concept space perfectly seems obvious: all sorts of good-fit approximations would do for its purposes, but when we are talking FAI, we have to deal with what was actually chosen, not what "was supposed to be chosen" by evolution. This argument applies to other evolutionary drives more easily.
I think you misunderstood me: I meant why should there even be a clear and unique generalization of human goals and decisionmaking to the case of preferences over the set of mathematical possibilities. I did not mean why should there even be a clear and unique generalization of the human concept of reality -- for the time being I was assuming that there wouldn't be one.
You don't try to generalize, or extrapolate human goals. You try to figure out what they already are.
I think that this is a different sense of the word "control" than controlling physical things. Can you elaborate on this?
UDT is about control in the same sense. See this comment for a point in that direction (and my last comment on the "Ambient decision theory go-through" thread on the SIAI DT list). I believe this to be a conceptual clarification of the usual notion of control, having the usual notion ("explicit control") as a special case (almost, modulo explicit dependence bias - it allows you to get better results than if you only consider the explicit dependence as stated). See "ambient dependence" on the DT list, but the current notion (involving mathematical structures more general than programs) is not written up. I believe "logical control", as used by Wei/Eliezer, refers to basically the same idea. In a two-player game, you can control the other player's decisions despite not literally sitting inside their head.
I just accidentally found this other decision theory google group and thought LWers might find it of interest.
I'm not on that list. Do you know who the list owner is? Just as a note, my current gut feeling is that it is perfectly plausible that the right way to go is to do something like UDT but with a notion of what worlds are real (as in Mangled worlds QM). However, I shall read your theory of controlling that which is unreal and see what I make of it!
Yes, you are (via r****c at, IIRC); you got there after I sent you an invitation. Try logging in on the list page.
Oh, thanks. Obviously I accepted and forgot about it.
But you do care about optimizing Middle Earth (let it be Middle Earth with Halting Oracles to be sure), to some tiny extent, even though it doesn't exist at all.
Free will is about dependencies: one gets to say that the outcome depends on your decision. At the same time, the outcome depends on other things. Here, considering the quantum mechanical measure depends on what's true about the world, but at the same time it depends on what you prefer to consider. Thus, saying that there are objective facts dictated by the laws of physics is analogous to saying that all your decisions are already determined by the physical laws. My argument was that, as in the case of the naive free will argument, here too we can (indeed, should, once we get to the point of being able to tell the difference) see physical laws as (subjectively) chosen. Of course, just as you can't change your own preference, you can't change the implied physical laws seen as an aspect of that preference (to make them nicer for some purpose, say).
It is relevant, but I ran out of hope of communicating this quickly, so let's all hope I figure out and write up my philosophical framework for decision theory in detail sometime soon.
I don't agree with this claim. One would simply need an understanding of what brain systems are necessary for consciousness and how to restore those systems to a close approximation of their pre-existing state (presumably using nanotech). This doesn't take much in the way of actually understanding how those systems function. Once one had well-developed nanotech, one could learn this sort of thing simply by trial and error on animals (seeing what was necessary for survival, and what was necessary for training to stay intact) and then move on to progressively larger-brained creatures. This doesn't require a deep understanding of intelligence or consciousness, simply an understanding of what parts of the brain are being used and how to restore them.
Actually, we do. We've been trying for decades to build viable quantum computers, and it turns out to be excruciatingly hard.

Mass cryonic suspension does not seem likely to be affordable anytime soon: "As of 2010, only around 200 people have undergone the procedure since it was first proposed in 1962"

Maybe it just hasn't been marketed properly.