astridain

That might be a fault with my choice of example. (I am not in fact a master of etiquette.) But I'm sure examples can be supplied where "the polite thing to say" is a euphemism that you absolutely do expect the other person to understand. At a certain level of obviousness and ubiquity, they tend to shift into figures of speech. “Your loved one has passed on” instead of “your loved one is dead”, say.
And yes, that was a typo. Your way of expressing it might be considered an example of such unobtrusive politeness. My guess is that you said “I assume that's just a slip” not because you have assigned... (read more)
Some of it might be actual obfuscation if there are other people in the room, sure. But equally intelligent, equally polite people are still expected to dance the dance even if they're alone.
Your last paragraph gets at what I think is the main thing, which is basically just an attempt at kindness. You find a nicer, subtler way to phrase the truth in order to avoid shocking/triggering the other person. If both people involved were idealised Bayesian agents this would be unnecessary, but idealised Bayesian agents don't have emotions, or at any rate they don't have emotions about communication methods. Humans, on the other hand, often do; and it's often not practical to try and train ourselves out of them completely; and even if it were, I don't think it's ultimately desirable. Idiosyncratic, arbitrary preferences are the salt of human nature; we shouldn't be trying to smooth them out, even if they're theoretically changeable to something more convenient. That way lies wireheading.
I think this misses the extent to which a lot of “social grace” doesn't actually decrease the amount of information conveyed; it's purely aesthetic — it's about finding comparatively more pleasant ways to get the point across. You say — well, you say “I think she's a little out of your league” instead of saying “you're ugly”. But you expect the ugly man to recognise the script you're using, and grok that you're telling him he's ugly! The same actual, underlying information is conveyed!
The cliché with masters of etiquette is that they can fight subtle duels of implied insults and deferences, all without a clueless shmoe who wandered into the parlour even... (read more)
My guess is mostly that the space is so wide that you don't even end up with AIs warping existing humans into unrecognizable states, but do in fact just end up with the people dead.
Why? I see a lot of opportunities for s-risk or just a generally suboptimal future in such options, but "we don't want to die, or at any rate we don't want to die out as a species" seems like an extremely simple, deeply-ingrained goal that almost any metric by which the AI judges our desires should be expected to pick up, assuming it's at all pseudokind. (In many cases, humans do a lot to protect endangered species even as we do diddly-squat to fulfill individual specimens' preferences!)
It's about trade-offs. HPMOR/an equally cringey analogue will attract a certain sector of weird people into the community who can then be redirected towards A.I. stuff — but it will repel a majority of novices because it "taints" the A.I. stuff with cringiness by association.
This is a reasonable trade-off if:
In the West, 1. is true because there's a strong association between techy people and niche fandom, so even though weird nerds are a minority,... (read more)
If I was feeling persistently sad or hopeless and someone asked me for the quality of my mental health, and I had the energy to reply, I would reply ‘poor, thanks for asking.’
I wouldn't, not if I was in fact experiencing a rough enough patch of life that I rationally and correctly believed these feelings to be accurate. If I had been diagnosed with terminal cancer, for example, I would probably say that I was indeed sad and hopeless, but not that I had any mental health issues; indeed I'd be concerned with my mental health if I wasn't feeling that way. I find that this extends to beliefs about the future... (read more)
At a guess, focusing on transforming information from images and videos into text, rather than generating text qua text, ought to help — no?
We maybe need an introduction to all the advance work done on nanotechnology for everyone who didn't grow up reading "Engines of Creation" as a twelve-year-old or "Nanosystems" as a twenty-year-old.
Ah. Yeah, that does sound like something LessWrong resources have been missing, then — and not just for my personal sake. Anecdotally, I've seen several why-I'm-an-AI-skeptic posts circulating on social media whose authors cited "EY makes crazy leaps of faith about nanotech" as a key reason for rejecting the overall AI-risk argument.
(As it stands, my objection to your mini-summary would be that, sure, "blind" grey goo does trivially seem possible, but programmable/'smart' goo that seeks out e.g. computer CPUs in particular could be a whole other challenge, and a less obviously solvable one judging by bacteria. But maybe that "common-sense" distinction dissolves with a better understanding of the actual theory.)
Hang on — how confident are you that this kind of nanotech is actually, physically possible? Why? In the past I've assumed that you used "nanotech" as a generic hypothetical example of technologies beyond our current understanding that an AGI could develop and use to alter the physical world very quickly. And it's a fair one as far as that goes; a general intelligence will very likely come up with at least one thing as good as these hypothetical nanobots.
But as a specific, practical plan for what to do with a narrow AI, this just seems like it makes a lot of specific unstated assumptions about what you can in fact do with nanotech in particular. Plausibly the real technologies you'd need for a pivotal act can't be designed without thinking about minds. How do we know otherwise? Why is that even a reasonable assumption?
Because hopefully those people will include, and (depending on population control) might indeed be overwhelmingly composed of, the current, pre-singularity population of Earth. I don't think a majority of currently-alive humans would ever agree to destroy the Sun, and that includes being unwilling to self-modify into minds that would agree to destroy the Sun.
Raemon spoke upthread about how there's "no single culture that has survived 10,000 years", but that was in a world with mortality.