I've often had the thought that controversial topics may just be unknowable: as soon as a topic becomes controversial, it's deleted from the public pool of reliable knowledge.

But yes, you could get around it by constructing a clear chain of inferences that's publicly debuggable. (Ideally a Bayesian network: just input your own priors and see what comes out.)
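To make the "input your own priors and see what comes out" idea concrete, here is a minimal sketch of a single Bayesian update for a binary hypothesis. It is not a full Bayesian network, and all the numbers are illustrative assumptions, not claims from the discussion:

```python
def posterior(prior: float,
              p_evidence_given_h: float,
              p_evidence_given_not_h: float) -> float:
    """Return P(H | E) via Bayes' rule for a binary hypothesis H."""
    numerator = prior * p_evidence_given_h
    marginal = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / marginal

# Two readers plug their own priors into the same publicly agreed
# likelihoods and can each see what comes out:
print(posterior(0.5, 0.8, 0.2))  # neutral prior -> 0.8
print(posterior(0.1, 0.8, 0.2))  # skeptical prior -> ~0.31
```

A real "publicly debuggable" chain would compose many such updates, with each likelihood open to challenge separately.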

But that invites a new kind of adversary, because a treasure map to the truth also works in reverse: it's a treasure map to exactly what facts need to be faked, if you want to fool many smart people. I worry we'd end up back on square one.

I agree, although I sense there's some disagreement on the meaning of "learning by rote".

Learning by rote can be a tactical move in a larger strategy. In introductory rhetoric, I wasn't retaining much from the lectures until I sat down to memorize the lists of tropes and figures of speech. After that, every time the lectures mentioned one trope or another, even just in passing, the whole lesson stuck better.

Rote memorization prepares an array of "hooks" for lessons to attach to.

Also Nate's Replacing Guilt sequence. I'm still reading it, but I predict it'll be the single most important sequence to me.

I think I was unfair. I concede it's possible to have legible argumentation that people won't understand in a short time, even if it's perfectly clarified in your head. But in my experience interrogating my own beliefs, I find it's common that they are not actually clear (you just think they are) until you can explain them to someone else, so the term "illegible belief" may help some people properly debug themselves.

Regarding your question about math and the like... The point of having the concept of epistemic legibility is that we want to be able to "debug" articles we read, and the articles should accommodate us doing that. If we cannot debug them, they're not legible.

If your math is correct but poorly explained, I suppose I'd have to call it legible (as long as the explanations don't lead the reader astray). I wouldn't want to grace it with that adjective, as I'm sure you understand, but that's more a matter of signaling.

By contrast, it's fine by me if you assume background knowledge, though keep in mind it's easy to assume too much (Explainers Shoot High, Aim Low).

So if a Rationality Quotient (RQ) became famous for only measuring skills that everyone can build regardless of where they start, rather than innate ability, it'd be less infected than the discourse around IQ?

Paraphrasing from How to Take Smart Notes by Sönke Ahrens: we easily get away with unfounded claims when we speak orally. We can distract from argumentative gaps with a "you know what I mean", even if on introspection we would find that we don't know what we mean. Writing permanent notes will make these gaps obvious.

Thank you for writing this out. Don't lose heart if the response isn't what you'd hoped--some future post could even be curated into the featured section. Why do I say that? The bits about ineffective self-talk:

He notices that he made a mistake by not trusting his gut instinct early enough, and then decides once again that he made another mistake. This is not, actually, the only reaction one could have. One could instead react in the following way: “Oops, I guess I didn’t make a mistake after all.” These two different reactions calibrate the mind in two different directions.

For me, it's been important to change my self-talk towards compassion and acceptance, and this presents an interesting new dimension. If it helps us experience life (including our rationalist journey) as more fun, that's so important. Ties in with what Nate was saying about stoking genuine enthusiasm, in his sequence Replacing Guilt.

(To clarify, that's 6% RDI, not 6% by volume, which would be worrying.)

I'm confused. Are you saying 1 cup of organic peas is "half a day's intake of vegetables" for you?
