That might be a fault with my choice of example. (I am not, in fact, a master of etiquette.) But I'm sure examples can be supplied where "the polite thing to say" is a euphemism that you absolutely do expect the other person to understand. At a certain level of obviousness and ubiquity, they tend to shift into figures of speech. “Your loved one has passed on” instead of “your loved one is dead”, say.

And yes, that was a typo. Your way of expressing it might be considered an example of such unobtrusive politeness. My guess is that you said “I assume that's just a slip” not because you have assigned noteworthy probability-mass to the hypothesis “astridain had a secretly brilliant reason for saying the opposite of what you'd expect and I just haven't figured it out”, but because it's nicer to fictitiously pretend to care about that possibility than to bluntly say “you made an error”. It reduces the extent to which I feel stupid in the moment; and it conveys a general outlook of your continuing to treat me as a worthy conversation partner; and that's how I understand the note. I don't come away with a false belief that you were genuinely worried about the possibility that there was a brilliant reason I'd reversed the pronouns and you couldn't see it. You didn't expect me to, and you didn't expect anyone to. It's just a graceful way of correcting someone.

Some of it might be actual-obfuscation if there are other people in the room, sure. But equally-intelligent equally-polite people are still expected to dance the dance even if they're alone. 

Your last paragraph gets at what I think is the main thing, which is basically just an attempt at kindness. You find a nicer, subtler way to phrase the truth in order to avoid shocking/triggering the other person. If both people involved were idealised Bayesian agents this would be unnecessary, but idealised Bayesian agents don't have emotions, or at any rate they don't have emotions about communication methods. Humans, on the other hand, often do; and it's often not practical to try and train ourselves out of them completely; and even if it were, I don't think it's ultimately desirable. Idiosyncratic, arbitrary preferences are the salt of human nature; we shouldn't be trying to smooth them out, even if they're theoretically changeable to something more convenient. That way lies wireheading.

astridain · 9mo

I think this misses the extent to which a lot of “social grace” doesn't actually decrease the amount of information conveyed; it's purely aesthetic — it's about finding comparatively more pleasant ways to get the point across. You say — well, you say “I think she's a little out of your league” instead of saying “you're ugly”. But you expect the ugly man to recognise the script you're using, and grok that you're telling him he's ugly! The same actual, underlying information is conveyed!

The cliché with masters of etiquette is that they can fight subtle duels of implied insults and deferences, all without a clueless shmoe who wandered into the parlour even realising. The kind of politeness that actually impedes transmission of information is a misfire; a blunder. (Though in some cases it's the person who doesn't get it who would be considered “to blame”.)

Obviously it's not always like this. And rationalists might still say “why are we spending all this brainpower encrypting our conversations just so that the other guy can decrypt them again? it's unnecessary at best”. But I don't grant your premise that social grace is fundamentally about actual obfuscation rather than pretend-obfuscation.

astridain · 11mo

> My guess is mostly that the space is so wide that you don't even end up with AIs warping existing humans into unrecognizable states, but do in fact just end up with the people dead.

Why? I see a lot of opportunities for s-risk or a generally suboptimal future in such options, but "we don't want to die, or at any rate we don't want to die out as a species" seems like an extremely simple, deeply-ingrained goal that almost any metric by which the AI judges our desires should be expected to pick up, assuming it's at all pseudokind. (In many cases, humans do a lot to protect endangered species even as we do diddly-squat to fulfill individual specimens' preferences!)

It's about trade-offs. HPMOR/an equally cringey analogue will attract a certain sector of weird people into the community who can then be redirected towards A.I. stuff — but it will repel a majority of novices because it "taints" the A.I. stuff with cringiness by association.

This is a reasonable trade-off if:

  1. the kind of weird people who'll get into HPMOR are also the kind of weird people who'd be useful to A.I. safety;
  2. the normies were already likely to dismiss the A.I. stuff with or without the added load of cringe.

In the West, 1. is true because there's a strong association between techy people and niche fandom, so even though weird nerds are a minority, they might represent a substantial fraction of the people you want to reach. And 2. is kind of true for a related reason, which is that "nerds" are viewed as generally cringe even if they don't specifically talk about HP fanfiction; it's already assumed that someone who thinks about computers all day is probably the kind of cringe who'd be big into a semi-self-insert HP fanfiction.

But in China, from @Lao Mein's testimony, 1. is definitely not true (a lot of the people we want to reach would be on Team "this sounds weird and cringe, I'm not touching it") and 2. is possibly not true (if computer experts ≠ fandom nerds in Chinese popular consciousness, it may be easier to get broad audiences to listen to a non-nerdy computer expert talking about A.I.). 

> If I was feeling persistently sad or hopeless and someone asked me for the quality of my mental health, and I had the energy to reply, I would reply ‘poor, thanks for asking.’

I wouldn't, not if I was in fact experiencing a rough enough patch of life that I rationally and correctly believed these feelings to be accurate. If I had been diagnosed with terminal cancer, for example, I would probably say that I was indeed sad and hopeless, but not that I had any mental health issues; indeed, I'd be concerned about my mental health if I wasn't feeling that way. I find that this extends to beliefs that the future in general is screwed, not just your personal future (take A.I. doomerism: I think Eliezer is fairly sad and hopeless, and I don't think he'd say that makes him mentally ill). So if 13% of the kids genuinely believe to some degree that their personal life sucks and will realistically always suck, and/or that the world is doomed by whatever combination of climate change and other known or perceived x-risks, that would account for this, surely?

At a guess, focusing on transforming information from images and videos into text, rather than generating text qua text, ought to help — no? 

> We maybe need an introduction to all the advance work done on nanotechnology for everyone who didn't grow up reading "Engines of Creation" as a twelve-year-old or "Nanosystems" as a twenty-year-old.

Ah. Yeah, that does sound like something LessWrong resources have been missing, then — and not just for my personal sake. Anecdotally, I've seen several why-I'm-an-AI-skeptic posts circulating on social media in which "EY makes crazy leaps of faith about nanotech" was a key reason for rejecting the overall AI-risk argument.

(As it stands, my objection to your mini-summary would be that sure, "blind" grey goo does trivially seem possible, but programmable/'smart' goo that seeks out e.g. computer CPUs in particular could be a whole other challenge, and one whose solvability is less obvious just from looking at bacteria. But maybe that "common-sense" distinction dissolves with a better understanding of the actual theory.)

Hang on — how confident are you that this kind of nanotech is actually, physically possible? Why? In the past I've assumed that you used "nanotech" as a generic hypothetical example of technologies beyond our current understanding that an AGI could develop and use to alter the physical world very quickly. And it's a fair one as far as that goes; a general intelligence will very likely come up with at least one thing as good as these hypothetical nanobots. 

But as a specific, practical plan for what to do with a narrow AI, this just seems like it makes a lot of specific unstated assumptions about what you can in fact do with nanotech in particular. Plausibly the real technologies you'd need for a pivotal act can't be designed without thinking about minds. How do we know otherwise? Why is that even a reasonable assumption?

Slightly boggling at the idea that nuts and eggs aren't tasty? And I completely lose the plot at "condiments". Isn't the whole point of condiments that they are tasty? What sort of definition of "tasty" are you going with?
