Thank you so much for the response! Great to hear from someone who's actually bought into cryonics :-)
I don't particularly buy that cryonics patients are more likely to be utilized by misaligned superintelligences than normally living humans. Cryopreservation destroys structure that the AI would have to reconstruct, which might be cheap, but isn't likely to be cheaper than just using intact brains, scanned with superintelligently developed technology.
The cardinal distinction I see here is that if there's a delay between when uploads are announced and when they occur, living people retain the option to end their lives. I think this distinction is meaningful insofar as one would prefer death over a high probability of indefinite (or perpetual) suffering.
I think you're headed in the right direction, yes: people can only experience psychological reactance when they are aware that information is being suppressed, and most information suppression is successful (in that the information is suppressed and the suppression attempt is covert). In the instances where the suppression attempt is overt, a number of factors determine whether the "Streisand Effect" occurs (the novelty/importance of the information, the number of people who notice the suppression attempt, the traits/values/interests/influence of the people who notice it, whether it's a slow news day, etc.). I think survivorship bias is relevant to the extent that it leads people to overestimate how often the Streisand Effect occurs in response to attempts to suppress information. Does that sound about right to you?
I like how you think, but I don’t think it’s entirely driven by survivorship bias—experimental evidence shows that people are more motivated to access information when it’s suppressed than when it’s accessible (a phenomenon called psychological reactance).
Thank you for your thoughtful response.
It makes the same kind of sense as still planning for a business-as-usual 10-20 year future.
Agreed. I don’t know whether that approach to planning makes sense either, though. Given a high (say 90%)[1] p(doom) in the short term, would a rational actor change how they live their life? I’d think yes, in some ways that are easier to accept (assigning a higher priority to short-term pleasure, maybe rethinking effortful long-term projects that involve significant net suffering in the short term) as well as some less savoury ways that would probably be irresponsible to post online but would be consistent with a hedonistic utilitarian approach (i.e., prioritizing minimizing suffering).
Choosing 90% because that’s what I would confidently bet on—I recognize many people in the community would assign a lower probability to existential catastrophe at this time.
I understand that. I’m not sure I understand your point here, though—wouldn’t it still be an arguably poor use of effort to sign up for cryonics if likely outcomes ranged from (1) an increasingly unlikely chance of people being revived, at best, to (2) being revived by a superintelligence with goals hostile to those of humanity, at worst?
How does cryonics make sense in the age of high x-risk? As p(doom) increases, cryonics seems like a worse bet because (1) most x-risk scenarios would result in frozen people/infrastructure/the world being destroyed and (2) revival/uploading would be increasingly likely to be performed by a misaligned ASI hoping to use humans for its own purposes (such as trade). Can someone help me understand what I’m missing here or clarify how cryonics advocates think about this?
This is my first quick take—feedback welcome!
Ignoring the serious ethical issues inherent in manipulating people's emotions for instrumental gain, this strategy seems highly (I'd say 95%+) likely to backfire. Intergroup relations research shows that strong us-vs-them dynamics lead to radicalization and loss of control of social movements, and the motivated reasoning literature demonstrates that identity-defining beliefs inhibit evidence-based reasoning. Moreover, even if this somehow worked, cultivating hatred of Silicon Valley and Big Tech would likely lead to the persecution of EY-types and other AI safety researchers with the most valuable insights on the matter.
Would someone who legitimately, deeply believes lack of diversity of perspective would be catastrophic, and who values avoiding that catastrophe and thus will in fact take rapid, highest-priority action to get as close as possible to democratically constructed values and collectively rational insight, be able to avoid this problem?
No, I don’t think anyone could, barring the highly unlikely case of a superintelligence perfectly aligned with human values (and even still, maybe not—human values are inconsistent and contradictory). Also, I think a system of democratically constructed values would probably be at odds with rational insight, unfortunately.
Regarding the rest, agreed. Heading into verboten-ish political territory here, but see also Jenny Holzer and Chomsky on this.
Maybe the result of one person’s clones forming a very capable Em Collective would still be suboptimal and undemocratic from the perspective of the rest of humanity, but it wouldn’t kill everyone, and I think wouldn’t lead to especially bad outcomes if you start from the right person.
I think the risk of a homogeneous collective of many instances of a single person's consciousness is more serious than "suboptimal and undemocratic" suggests. Even assuming you could find a perfectly well-intentioned person to clone, identical minds share the same blind spots and biases. Without diversity of perspective, even earnestly benevolent ideas could—and I imagine would—lead to unintended catastrophe.
I also wonder how you would identify the right person, as I can't think of anyone I would trust with that degree of power.
Have there been any recent discussions about navigating practical life decisions under the assumption of a high p(doom)? I’ve read Eliezer’s death with dignity piece and reviewed the alignment problem mental health resources, but am interested in how others are behaviourally updating on this. It seems bunkers and excessive planning are probably futile and/or too costly, but are people reconsidering demanding careers? To what extent is a delayed gratification approach less compelling nowadays, and how might this impact financial management? Do people have any sort of low-cost, short-term-suffering-mitigation plans in place for the possible window between the emergence of ASI and people dying/getting uploaded/other horrors?[1] These conversations seem useful, yet conspicuously lacking (unless I'm missing where they're happening).
I'm particularly interested in practical s-risk mitigation, as I can't find much discussion about this anywhere. I recognize these conversations are sensitive. If anyone has thoughts on this or knows of any relevant discussion spaces, please DM me.