One theme I've been thinking about recently is how bids for connection and understanding are often read as criticism. For example:
Person A shares a new idea, feeling excited and hoping to connect with Person B over something they've worked hard on and hold dear.
Person B asks a question about a perceived inconsistency in the idea, feeling excited and hoping for an answer which helps them better understand the idea (and Person A).
Person A feels hurt and unfairly rejected by Person B. Specifically, Person A feels like Person B isn't willing to give their sincere idea (and effort to connect) a chance, so shuts down and labels Person B as an idea-hater.
Person B feels hurt and unfairly rejected by Person A. Specifically, Person B feels like Person A isn't willing to give their sincere question (and effort to connect) a chance, so shuts down and labels Person A as a question-hater.
This seems like a huge source of human suffering, and I have been Person A and Person B in different interactions. Does anyone else resonate with this? Do you see things differently?
I think standard advice like the compliment sandwich (the formula "I liked X, I'm not sure about Y, but Z was really well done") is meant to counteract this a bit. You can also do stuff like "I really enjoyed this aspect of the idea/resonated with [...], but can I get more information about [...]".
It was definitely relevant! Thank you for the link--I think introducing this idea might assist communication in some of my relationships.
Yeah, I think part of the training my PhD program offers is learning how to handle and deliver criticism. Besides the common advice of choosing your language carefully, being praiseful, and limiting questions/critiques to at most the single most important area in most settings, my favorite tactic when giving my own presentations is to frontload limitations. Just to be clear (this is frontloading!), what follows is speculation based on my own personal experiences.
When I start presenting my research, I introduce the audience first to the biology (3D genome architecture), then the technology I use to study it (the Hi-C family of assays). That technology is complex and has important limitations (sparsity, averaging over time/homologs/cells).
If I don't frontload limitations, what's an informed audience going to be thinking about once I introduce the technology? They're going to be thinking about those limitations, figuring them out in their own head. "Hey, wouldn't an assay that works like that end up averaging over time/homologs/cells?", they'll think. And while they're thinking those thoughts, I'll be trying to push forward into my own projects. So they'll be irritated by the distraction of trying to think about those limitations while also keeping up with what I'm saying about my projects. They'll be associating those limitations and that irritation with my projects. And the natural question for them to ask will be about the existence of those limitations, rather than about my findings. Not good.
Frontloading limitations alleviates the cognitive burden on the audience of thinking up those limitations for themselves, and it compartmentalizes the limitations so that the cognitive load of thinking about them doesn't interfere with attempts to understand what I'm saying about my project. They can then ask me questions about the specifics of my project, which is what's most interesting, what I'm most excited to talk about, and what will lead to positive feelings on both sides.
This is really useful information; thank you! I think I will change my approach to presenting my own research based on this comment. I have a limited biology background, but would love to watch a presentation of yours sometime.
Agreed that there's a lot of suffering involved in this sort of interaction. Not sure how to fix it in general - I've been working on it in myself for decades, and still forget often. Please take the following as a personal anecdote, not as general advice.
The difficulty (for me) is that "hoping to connect" and understanding the person in addition to the idea are very poorly defined, and are very often at least somewhat asymmetrical, and trying to make them explicit is awkward and generally doesn't work.
I find it bizarre and surprising, no matter how often it happens, when someone thinks my helping them pressure-test their ideas and beliefs for consistency is anything except a deep engagement and joy. If I didn't want to connect and understand them, I wouldn't bother actually engaging with the idea.
It's happened often enough that I often need to modulate my enthusiasm, as it does cause suffering in a lot of friends/acquaintances who don't think the same way as I do. This includes my habit of interrupting and skipping past the "obvious agreement" parts of the conversation to get to the good, deep stuff - the parts that need work. With some friends and coworkers, this style is amazingly pleasant and efficient. With others, some more explicit (and sometimes agonizingly slow, to me) groundwork of affirming the connection and the points of non-contention is really important.
I find it bizarre and surprising, no matter how often it happens, when someone thinks my helping them pressure-test their ideas and beliefs for consistency is anything except a deep engagement and joy. If I didn't want to connect and understand them, I wouldn't bother actually engaging with the idea.
I feel like I could have written this (and the rest of your comment)! It's confusing and deflating when deep engagement and joy aren't recognized as such.
It's happened often enough that I often need to modulate my enthusiasm, as it does cause suffering in a lot of friends/acquaintances who don't think the same way as I do.
I've tried the same with mixed effectiveness. In in-person contexts, nonverbal information makes it much easier to determine when and how to do this. I've found it's more difficult online, particularly when you don't know your interlocutor--sometimes efforts to affirm the connection and points of non-contention are read as pitying or mocking. I imagine this is partially attributable to the high prevalence of general derision on social media (edit: and of course partially attributable to faulty inference on my part).
How does cryonics make sense in the age of high x-risk? As p(doom) increases, cryonics seems like a worse bet because (1) most x-risk scenarios would result in frozen people/infrastructure/the world being destroyed and (2) revival/uploading would be increasingly likely to be performed by a misaligned ASI hoping to use humans for its own purposes (such as trade). Can someone help me understand what I’m missing here or clarify how cryonics advocates think about this?
This is my first quick take—feedback welcome!
It makes the same kind of sense as still planning for a business-as-usual 10-20 year future. There are timelines where the business-as-usual allocation of resources helps, and allocating the resources differently often doesn't help with the alternative timelines. If there's extinction, how does not signing up for cryonics (or not going to college etc.) make it go better? There are some real tradeoffs here, but usually not very extreme ones.
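To make that concrete, here's a toy expected-value sketch (the symbols are mine and purely illustrative, not a precise model). Let $d$ be the probability of an extinction-level outcome and $r$ the probability that cryopreservation eventually leads to revival, conditional on no such catastrophe. Then the chance that signing up pays off is roughly

$$P(\text{revival}) \approx (1 - d)\,r$$

Raising $d$ shrinks that payoff, but in the doom branch the resources spent on cryonics are lost however you had allocated them, so the decision still turns on the comparison within the $(1-d)$ branch.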
I'm signed up for Alcor.
I straightforwardly agree that the more likely I am to die of x-risk, the less good of a deal, probabilistically, cryonics is.
(I don't particularly buy that cryonics patients are more likely to be utilized by misaligned superintelligences than normally living humans. Cryopreservation destroys structure that the AI would have to reconstruct, which might be cheap, but isn't likely to be cheaper than just using intact brains, scanned with superintelligently developed technology.
But, yep, to the extent that living through an AI takeover might entail an AI doing stuff with your brain-soul that you don't like, being cryopreserved also exposes you to that risk.)
One difference I see here is that if there's a delay between when uploads are announced and when they occur, living people retain the option to end their lives.
Seems correct that cryonics patients have a lot less ability to flexibly respond to the situation compared to alive and animate people.[1]
I don't think that this is a very decisive consideration. I expect that whatever series of events will cause the superintelligence to get the most of what it wants in expectation is the series of events that will play out.
It's astonishingly weird if the superintelligence prefers to upload Bob, and then takes actions that allow Bob to prevent himself from being uploaded. "Announcing" that you're going to upload people is an unforced error, if it causes people to kill themselves. (Though I suppose it might not be an error if most people would prefer to be uploaded, and the AI is using it as a bargaining chip?)
A very savvy person might be able to see the writing on the wall, recognize that a misaligned superintelligence is close to inevitable, and, if the balance of fear of personal s-risk vs. personal death comes out in favor of death, commit suicide early. But this will almost definitely be a gamble based on substantial uncertainty. Presumably less uncertainty than a decision to get frozen, or not, at any point before then, but not so much less that you don't still need to weigh the probabilities of different outcomes and make a bet.
Not literally zero flexibility, though. It's normal to leave a will / instructions with the cryonics org about under what circumstances you want to be revived (e.g. upload or bodily resurrection, how good the tech needs to be before you risk it, etc.). It's probably non-standard to leave instructions like "please destroy my brain, if XYZ happens", but it may be feasible.
Cryonics companies are not enormously competent (this is bad, to be clear). I wouldn't trust them to execute those instructions unless I had a personal relationship with someone who worked there, I had assessed their competence and trustworthiness as "high", and they personally told me that they would take responsibility for destroying my brain if XYZ.
But there are some options here.
It is only done to patients who are already clinically dead, as a last chance to survive. Patients who are never revived don't lose anything except the hope of coming back to life.
I understand that. I’m not sure I understand your point here, though—wouldn’t it still be an arguably poor use of effort to sign up for cryonics if likely outcomes ranged from (1) an increasingly unlikely chance of people being revived, at best, to (2) being revived by a superintelligence with goals hostile to those of humanity, at worst?
Creative intellectual play for the day. Can you add another observation to each set? Would you rearrange the sets in any way? Can you unite sets 1, 2, and 3 under a higher-order concept/rule/group?
(1) Plato’s forms, Real numbers, logarithmic functions, taxonomic lumpers.
(2) Mycelium networks, infinite recursion, exponential functions, taxonomic splitters.
(3) Aristotle’s hylomorphism, Langan’s CTMU, dialectical reasoning, Derridean Différance.
No wrong answers. Confident everyone has something insightful to contribute :D