sure, I agree with the object level claim, which is why I say the facts of AI are different. it sounds like you're saying that because climate change is not that existential, if we condition on people believing that climate change is existential, then this is confounded by those people also being worse at believing true things. this is definitely an effect. however, I think there is an ameliorating factor: as an emotional stance, existential fear doesn't have to be literally induced by human extinction; while the nuance between different levels of catastrophe matters a lot consequentially, most people's ability to feel even more scared caps out at a level of catastrophe well below literal x-risk.
of course, you can still argue that since AGI is a bigger deal, we should still be more worried about it. but I think rejecting "AGI is likely to kill everyone" indicts one's epistemics a lot less than accepting "climate change is likely to kill everyone" does, so this makes the confounder smaller.
I mean, sure, maybe "maximally doomerish" is not exactly the right term for me to use. but there's definitely a tendency for people to worry that being insufficiently emotionally scared will make them complacent. to be clear, this is not about your epistemic p(doom); I happen to think AGI killing everyone is more likely than not. but really feeling this deeply, emotionally, is very counterproductive for me actually reducing x-risk.
being dishonest when honesty would violate local norms doesn't necessarily feel like selling your soul. concretely, in most normal groups of people, it is considered a norm violation to tell someone their shirt is really ugly even if this is definitely true. so I would only tell someone this if I was sufficiently confident that they would take it well - maybe I've known them long enough to know they like that kind of honesty, or we're in a social setting where people expect it. imo, it doesn't take galaxy-brained consequentialism to arrive at this particular norm, nor does it impugn one's honor to comply with it.
with respect to the climate change example, it seems instructive to observe the climate people who feel an urge to be maximally doomerish because anything less would be complacency, and see whether they are actually better at preventing climate change. I'm not very deeply embedded in such communities, so I don't have a great sense. but I get the vibe that they are in fact less effective at their own goals: they are too quick to dismiss actual progress, lose a lot of productivity to emotional distress, are more susceptible to totalizing "david and goliath" ideological frameworks, descend into purity-spiral infighting, etc. obviously, the facts of AI are different, but this still seems worth looking into as a case study.
to be clear, a very important part of the culture of the antechamber is encouraging people to spend time in the arena, or, if they are not ready to do so, to help them grow emotionally so that they can handle being in the arena.
I don't hear that one as often - what's a good example? in particular, I hear people complain all the time that LW is too critical of ideas, and that when you post anything, a whole bunch of people will appear out of the woodwork to critique you. I don't feel like I've ever heard anyone say that people on LW are too uncritical and unwilling to challenge things they disagree with.
maybe I should host an antechamber/arena house party: one chill cozy room with soothing music where no arguing is allowed and people are strongly encouraged to say kind things and reflect on things they're grateful for and whatnot, and another with harsh fluorescent lights and agitating music and a big whiteboard full of hot takes, where the conversations all get transcribed by speech to text and posted on lesswrong in real time. guests are given a heart rate monitor that beeps if their HR gets too high, forcing them to spend a few minutes in the chill room before returning to the arena.
I don't think non-sycophantic kindness is quite that difficult to achieve. clearly some groups of people IRL do achieve it. generally, people in such communities try to understand each other and why they believe the things they do, without judgment in either direction, and affirm the emotional responses to beliefs rather than the beliefs themselves. you don't have to agree with someone to agree that you'd feel the same in their shoes. somehow, these groups don't inevitably slide into subtle sneering and trolling and sycophancy.
plus, the point of explicitly separating the arena and the antechamber is to make it clear that when you are receiving kindness, you are not receiving updates towards truth. so it is clear, to you and to the people around you, that receiving emotional validation in the antechamber is not evidence that your beliefs are correct. it's valid for people to spend all their time in the antechamber, but everyone will see that, and put correspondingly less weight on their beliefs being true.
I also don't think non-sycophantic kindness causes people to dig into their incorrect beliefs. if anything, it seems more common that people dig into incorrect beliefs out of a sense of adversity against others. think about how much more painful it is to concede a point if your interlocutor is being really mean about it, vs if they are thoughtful and hear you out.
cryonics companies being "well run", in the sense that they don't implode as often as normal companies, does not imply that cryonics ceos could simply run a normal company vastly more successfully than normal ceos. running a company for stability vs running it for rapid expansion, market dominance, etc. are very different skills. most companies do not have "continue existing at all costs" as an explicit goal; part of their strategy involves making big gambles, because otherwise they'd become irrelevant. the organizations that do have this goal, like insurance companies or retirement funds, routinely exist for many decades and only very rarely fail.