How does cryonics make sense in the age of high x-risk? As p(doom) increases, cryonics seems like a worse bet because (1) most x-risk scenarios would result in frozen people/infrastructure/the world being destroyed and (2) revival/uploading would be increasingly likely to be performed by a misaligned ASI hoping to use humans for its own purposes (such as trade). Can someone help me understand what I’m missing here or clarify how cryonics advocates think about this?
This is my first quick take—feedback welcome!
It makes the same kind of sense as still planning for a business-as-usual 10-20 year future. There are timelines where the business-as-usual allocation of resources helps, and allocating the resources differently often doesn't help with the alternative timelines. If there's extinction, how does not signing up for cryonics (or not going to college etc.) make it go better? There are some real tradeoffs here, but usually not very extreme ones.
I'm signed up for Alcor.
I straightforwardly agree that the more likely I am to die of x-risk, the less good a deal cryonics is, probabilistically.
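(As a quick illustration of the shape of that claim, here's a toy expected-value sketch. All the numbers are hypothetical placeholders, not estimates I'd defend; the only point is that the chance of revival scales roughly with the chance the world survives at all.)

```python
# Toy model: revival is only possible in non-doom worlds, and even then
# cryonics itself only works with some probability.
# All probabilities below are hypothetical placeholders for illustration.

def p_revival(p_doom: float, p_cryonics_works: float = 0.1) -> float:
    """Chance of eventual revival under the toy assumptions above."""
    return (1 - p_doom) * p_cryonics_works

for p_doom in (0.1, 0.5, 0.9):
    print(f"p(doom)={p_doom:.1f} -> p(revival) ~= {p_revival(p_doom):.3f}")
```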
(I don't particularly buy that cryonics patients are more likely to be utilized by misaligned superintelligences than normally living humans. Cryopreservation destroys structure that the AI would have to reconstruct, which might be cheap, but isn't likely to be cheaper than just using intact brains, scanned with superintelligently developed technology.
But, yep, to the extent that living through an AI takeover might entail an AI doing stuff with your brain-soul that you don't like, being cryopreserved also exposes you to that risk.)
One difference I see here is that if there's a delay between when uploads are announced and when they occur, living people retain the option to end their lives.
Seems correct that cryonics patients have a lot less ability to flexibly respond to the situation compared to alive and animate people.[1]
I don't think that this is a very decisive consideration. I expect that whatever series of events will cause the superintelligence to get the most of what it wants in expectation is the series of events that will play out.
It's astonishingly weird if the superintelligence prefers to upload Bob, and then takes actions that allow Bob to prevent himself from being uploaded. "Announcing" that you're going to upload people is an unforced error, if it causes people to kill themselves. (Though I suppose it might not be an error if most people would prefer to be uploaded, and the AI is using it as a bargaining chip?)
A very savvy person might be able to see the writing on the wall, recognize that a misaligned superintelligence is close to inevitable, and, if the balance of fear of personal s-risk vs. personal death comes out in favor of death, commit suicide early. But this will almost definitely be a gamble based on substantial uncertainty. Presumably less uncertainty than a decision to get frozen, or not, at any point before then, but not so much less that you don't still need to weigh the probabilities of different outcomes and make a bet.
Not literally zero flexibility, though. It's normal to leave a will / instructions with the cryonics org about under what circumstances you want to be revived (e.g. upload or bodily resurrection, how good the tech needs to be before you risk it, etc.). It's probably non-standard to leave instructions like "please destroy my brain, if XYZ happens", but it may be feasible.
Cryonics companies are not enormously competent (this is bad, to be clear). I wouldn't trust them to execute those instructions unless I had a personal relationship with someone who worked there, I had assessed their competence and trustworthiness as "high", and they personally told me that they would take responsibility for destroying my brain if XYZ.
But there are some options here.
Cryonics is actually performed only on patients who are already clinically dead, as a last chance to survive. The patients who aren't resurrected don't lose anything except the hope of coming back to life.
I understand that. I’m not sure I understand your point here, though—wouldn’t it still be an arguably poor use of effort to sign up for cryonics if likely outcomes ranged from (1) an increasingly unlikely chance of people being revived, at best, to (2) being revived by a superintelligence with goals hostile to those of humanity, at worst?