A funny thing happens with woo sometimes, in the rationality community. There's a frame that says: this is a mix of figurative stuff and dumb stuff, let's try to figure out what the figurative stuff is pointing at and salvage it. Let's call this "salvage epistemology". Unambiguous examples include the rationality community's engagement with religions, cold-reading professions like psychics, bodywork, and chaos magic. Ambiguous examples include intensive meditation, Circling, and many uses of psychedelics.

The salvage epistemology frame got locally popular in parts of the rationality community for a while. And this is a basically fine thing to do in a context where you have hyper-analytical programmers who are not at risk of buying into the crazy, but who do need a lens that will weaken their perceptual filters around social dynamics, body language, and muscle tension.

But a bad thing happens when you have a group that is culturally adjacent to the hyper-analytical programmers, but whose members aren't that sort of person themselves. They can't, or shouldn't, take for granted that they're not at risk of falling into the crazy. For them, salvage epistemology disarms an important piece of their immune system.

I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it's leading people in over their heads.

119 comments

Well, there's also the fact that "true"[1] ontological updates can look like woo prior to the update. Since you can't reliably tell ahead of time whether your ontology is too sparse for what you're trying to understand, truth-seeking requires you find some way of dealing with frames that are "obviously wrong" without just rejecting them. That's not simply a matter of salvaging truth from such frames.

Separate from that:

I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it's leading people in over their heads.

I for one am totally against this kind of policy. People taking responsibility for one another's epistemic states is saviorist fuckery that makes it hard to talk or think. It wraps people's anxiety around what everyone else is doing and how to convince or compel them to follow rules that keep one from feeling ill at ease.

I like the raising of awareness here. That this is a dynamic that seems worth noticing. I like being more aware of the impact I have on the world around me.

I don't buy the disrespectful frame though. Like people who "aren't hyper-analytical programmers" are on par w...

Well, there's also the fact that "true" ontological updates can look like woo prior to the update.

Do you think they often do, and/or have salient non-controversial examples? My guess prior to thinking about it is that it's rare (but maybe the feeling of woo differs between us).

Past true ontological updates that I guess didn't look like woo:

  • reductionism
  • atomism
  • special relativity
  • many worlds interpretation (guy who first wrote it up was quite dispositionally conservative)
  • belief that you could gain knowledge through experiment and apply that to the world (IDK if this should count)
  • germ theory of disease
  • evolution by natural selection as the origin of humans

Past true ontological updates that seem like they could have looked like woo, details welcome:

  • 'force fields' like gravity
  • studying arguments and logic as things to analyse
  • the basics of the immune system
  • calculus
'force fields' like gravity

AFAIK gravity was indeed considered at least woo-ish back in the day, e.g.:

Newton’s theory of gravity (developed in his Principia), for example, seemed to his contemporaries to assume that bodies could act upon one another across empty space, without touching one another, or without any material connection between them. This so-called action-at-a-distance was held to be impossible in the mechanical philosophy. Similarly, in the Opticks he developed the idea that bodies interacted with one another by means of their attractive and repulsive forces—again an idea which was dismissed by mechanical philosophers as non-mechanical and even occult.

Adele Lopez
And they were probably right about "action-at-a-distance" being impossible (i.e. locality), but it took General Relativity to get a functioning theory of gravity that satisfied locality. (Incidentally, one of the main reasons I believe the many worlds interpretation is that you need something like that for quantum mechanics to satisfy locality.)
All interpretations of QM make the same predictions, so if "satisfying locality" is an empirically meaningful requirement, they are all equivalent. But locality is more than one thing, because everything is more than one thing. Many interpretations allow nonlocal X, where X might be a correlation but not an action or a signal.
Adele Lopez
Yeah, it's not empirically meaningful over interpretations of QM (at least the ones which don't make weird observable-in-principle predictions). Still meaningful as part of a simplicity prior, the same way that e.g. rejecting a simulation hypothesis is meaningful.
Zero was considered weird and occult for a while

One example, maybe: I think the early 20th century behaviorists mistakenly (to my mind) discarded the idea that e.g. mice are usefully modeled as having something like (beliefs, memories, desires, internal states), because they lumped this in with something like "woo."  (They applied this also to humans, at least sometimes.)

The article Cognition all the way down argues that a similar transition may be useful in biology, where e.g. embryogenesis may be more rapidly modeled if biologists become willing to discuss the "intent" of a given cellular signal or similar.  I found it worth reading.  (HT: Adam Scholl, for showing me the article.)

I think "you should one-box on Newcomb's problem" is probably an example.  By the time it was as formalized as TDT it was probably not all that woo-y looking, but prior to that I think a lot of people had an intuition along the lines of "yes it would be tempting to one-box but that's woo thinking that has me thinking that."

I like this inquiry. Upvoted.

Well… yes, but not for deep reasons. Just an impression. The cases where I've made shifts from "that's woo" to "that's true" are super salient, as are cases where I try to invite others to make the same update and am accused of fuzzy thinking in response. Or where I've been the "This is woo" accuser and later made the update and slapped my forehead.

Also, "woo" as a term is pretty strongly coded to a particular aesthetic. I don't think you'd ever hear concern about "woo" in, say, Catholicism except to the extent the scientist/atheist/skeptic/etc. cluster is also present. But Catholics still slam into ontology updates that look obviously wrong beforehand and are obviously correct afterwards. Deconversion being an individual-scale example. (Please don't read me as saying "Deconversion is correct." I could just as well have given the inverse example: Rationalists converting to Catholicism is also an ontological update that's obviously wrong beforehand and obviously correct afterwards. But that update does look like "woo" beforehand, so it's not an example of what I'm trying to name.)

I like the examples others have been bringing. I like them better than mine. But I'll try to give a few anyway.

Speaking to one of your "maybe never woo" examples: if I remember right, the germ theory of disease was incredibly bizarre and largely laughed at when first proposed. "How could living creatures possibly be that small? And if they're so small, how could they possibly create that much illness?" Prevailing theories for illness were things like bad air and demons. I totally expect lots of people thought the microbes theory was basically woo. So that's maybe an example.

Another example is quantum mechanics. The whole issue Einstein took with it was how absurd it made reality. And it did in fact send people like Bohm into spiritual frenzy. This is actually an incomplete ontology update in that we have the mathematical models but people still don'
Your examples seem plausible, altho I'd still be interested in more details on each one. Further notes:

  • "And it did in fact send people like Bohm into spiritual frenzy." - do you mean Bohr, or is this a story/take I don't know about?
  • Re: Semmelweis reflex, I think there's a pretty big distinction between the "woo" taste and the "absurd" taste. For example, "all plants are conscious and radiate love all the time" sounds like woo to me. "The only reason anybody gets higher education is to find people to have kids with" and "there's a small organ in the centre of the brain that regulates the temperature of the blood that nobody has found yet" sound absurd to me, but not like woo.
Received such a bad reception that Everett left academic physics.

Didn't seem crazy to the Greeks, but was controversial when reintroduced by Boltzmann.
A lot of things can be pretty controversial but not woo-ish.
Can you say more about these for the benefit of folks like me who don't know about them?  What kind of "bad reception" or "controversial" was it?  Was it woo-flavored, or something else?
https://www.scientificamerican.com/article/hugh-everett-biography/

Everett tried to express his ideas as drily as possible, and it didn't entirely work--he was still accused of "theology" by Bohr. But there were and are technical issues as well, notably the basis problem. It can be argued that if you reify the whole formalism, then you have to reify the basis, and that squares the complexity of the multiverse -- to every state in every basis. The argument actually was by JS Bell.

Modern approaches tend to assume the multiverse has a single "preferred" basis, which has its own problems. Which tells us that it hasn't always been one exact theory.

I would amend the OP by saying that “salvage epistemology” is a bad idea for everyone, including “us” (for any value of “us”). I don’t much like labeling things as “infohazards” (folks around here are much too quick to do that, it seems to me), which obfuscates and imbues with an almost mystical air something that is fairly simple: epistemically, this is a bad idea, and reliably doesn’t work and makes our thinking worse.

As I’ve said before: avoiding toxic, sanity-destroying epistemologies and practices is not something you do when you’re insufficiently rational, it is how you stay sufficiently rational.

If you think that some kinds of ideas are probably harmful for some people to hear, is acting on that belief always saviorist fuckery or does there exist a healthy form of it?

It seems to me that, just as one can be mindful of one's words and avoid being intentionally hurtful but also not take responsibility for other people's feelings... one could also be mindful of the kinds of concepts one is spreading and acknowledge that there are likely prerequisites for being able to handle exposure to those concepts well, without taking responsibility for anyone's epistemic state.

Said Achmiz
I am not Valentine, but I would say: it is “saviorist fuckery” if your view is “these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”. If instead you’re saying “this is harmful for everyone to hear (indeed I wish I hadn’t heard it!), so I will not disseminate this to anyone”, well, that’s different. (To be clear, I disapprove of both scenarios, but it does seem plausible that the motivations differ between these two cases.)
Is part of your claim that such ideas do not exist?  By "such ideas" I mean ideas that only some people can hear or learn about for some definition of "safely".
Said Achmiz
Hard to answer that question given how much work the clause ‘for some definition of “safely”’ is doing in that sentence.

EDIT: This sort of thing always comes down to examples and reference classes, doesn’t it? So let’s consider some hypothetical examples:

1. Instructions for building a megaton thermonuclear bomb in your basement out of parts you can get from a mail-order catalog.
2. Langford’s basilisk.
3. Langford’s basilisk, but anyone who can write a working FizzBuzz is immune.
4. The truth about how the Illuminati are secretly controlling society by impurifying our precious bodily fluids.

Learning idea #1 is perfectly safe for anyone. That is, it’s safe for the hearer; it will do you no harm to learn this, whoever you are. That does not, however, mean that it’s safe for the general public to have this idea widely disseminated! Some ne’er-do-well might actually build the damn thing, and then—bad times ahead! If we try to stop the dissemination of idea #1, nobody can accuse us of “saviorist fuckery”, paternalism, etc.; to such charges we can reply “never mind your safety—that’s your own business; but I don’t quite trust you enough to be sure of my safety, if you learn of this (nor the safety of others)!” (Of course, if it turns out that we are in possession of idea #1 ourselves, the subject might ask how comes it that we are so trustworthy as to be permitted this knowledge, but nobody else is!)

Ok, what about #2? This one’s totally unsafe. Anyone who learns it dies. There’s no question of us keeping this idea for ourselves; we’re as ignorant as anyone else (or we’d be dead). If we likewise keep others from learning this, it can only be purely altruistic. (On the other hand, what if we’re wrong about the danger? Who appointed us guardians against this threat, anyway? What gives us the right to deny people the chance to be exposed to the basilisk, if they choose it, and have been apprised of the [alleged] danger?)

#3 is pretty close to #2, except that
I think the amount of work that clause does is part of what makes the question worth answering... or at least makes the question worth asking.

I'm not a fan of inserting this type of phrasing into an argument. I think it'd be better to either argue that the claim is true or not true. To me, this type of claim feels like an applause light. Of course, it's also possibly literally accurate... maybe most claims of the type we're talking about are erroneous and clung to because of the makes-us-feel-superior issue, but I don't think that literally accurate aspect of the argument makes the argument more useful or less of an applause light.

In other words, I don't have access to an argument that says both of these cannot exist:

1. Cases that just make Group A feel superior because Group A erroneously thinks they are the only ones who can know it safely.
2. Cases that make Group A feel superior because Group A accurately thinks they are the only ones who can know it safely.

In either case Group A comes across badly, but in case 2, Group A is right.

If we cannot gather any more information or make any more arguments, it seems likely that case #1 is going to usually be the reality we're looking at. However, we can gather more information and make more arguments. Since that is so, I don't think it's useful to assume bad motives or errors on the part of Group A.

I don't really know. The reason for my root question was to suss out whether you had more information and arguments or were just going by the heuristics that make you default to my case #1. Maybe you have a good argument that case #2 cannot exist. (I've never heard of a good argument for that.)

eta: I'm not completely satisfied with this comment at this time as I don't think it completely gets across the point I'm trying to make. That being said, I assign < 50% chance that I'll finish rewriting it in some manner so I'm going to leave it as is and hope I'm being overly negative in my assessment of it
Do you have the same reaction to: "This claim is suspicious."
Less so, but it just leads to the question of "why do you think it's suspicious?".  If at all possible I'd prefer just engaging with whether the root claim is true or false.
That's fair. I initially looked at (the root claim) as a very different move, which could use critique on different grounds: 'Yet another group of people thinks they are immune to common bias. At 11, we will return to see if they, shockingly, walked right into it. When are people (who clearly aren't immune) going to stop doing this?'
Said Achmiz
Er… I think there’s been some confusion. I was presenting a hypothetical scenario, with hypothetical examples, and suggesting that some unspecified (but also hypothetical) people would likely react to a hypothetical claim in a certain way. All of this was for the purpose of illustrating and explaining the examples, nothing more. No mapping to any real examples was intended.

My point is that before we can even get to the stage where we’re talking about which of your cases apply, we need to figure out what sort of scenario (from among my four cases, or perhaps others I didn’t list?) we’re dealing with. (For instance, the question of whether Group A is right or wrong that they’re the only ones who can know a given idea safely, is pretty obviously ridiculous in my scenario #4, either quite confused or extremely suspect in my case #1, etc. At any rate, scenario #1 and scenario #2—just to take one obviously contrasting pair—are clearly so different that aggregating them and discussing them as though they’re one thing, is absurd!)

So it’s hard to know how to take your question, in that light. Are you asking whether I think that things like Langford’s basilisk exist (i.e., my scenario #2), or can exist? (Almost certainly not, and probably not but who knows what’s possible, respectively…) Are you asking whether I think that my scenario #3 exists, or can exist? Even less likely…

Do you think that such things exist?
I was referring to this part of your text:

It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals. I apologize if I interpreted that wrong.

Yeah, I think I understood that is what you're saying; I'm saying I don't think your point is accurate. I do not think you have to figure out which of your scenarios we're dealing with. The scenario type is orthogonal to the question I'm asking. I'm asking if you think it's possible for these sorts of ideas to exist in the real world. I'm confused about how what you've said has a bearing on the answerability of my root question.

I... don't know. My prior is that they can exist. It doesn't break any laws of physics. I don't think it breaks any laws of logic. I think there are things that some people are better able to understand than others. It's not insane to think that some people are less prone to manipulation than others. Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.

As far as if they do exist: There are things that have happened on LW like Roko's basilisk that raise my prior that there are things that some people can hold in their heads safely and others can't. Of course, that could be down to quirks of individual minds instead of general features of some group. I'd be interested in someone exploring that idea further. When do we go from saying "that's just a quirk" to "that's a general feature"? I dunno.
Said Achmiz
That was indeed not my intention.

I don’t see how that can be. Surely, if you ask me whether some category of thing exists, it is not an orthogonal question to break that category down into subcategories and make the same inquiry of each subcategory individually? Indeed, it may be that the original question was intended to refer only to some of the listed subcategories—which we cannot get clear on until we perform the decomposition!

The bearing is simple. Do you think my enumeration of scenarios exhausts the category you describe? If so, then we can investigate, individually, the existence or nonexistence of each scenario. Do you think that there are other sorts of scenarios that I did not list, but that fall into your described category? If so, then I invite you to comment on what those might be.

True enough. I agree that what you describe breaks no (known) laws of physics or logic. But as I understood it, we were discussing existence, not possibility per se. In that regard, I think that getting down to specifics (at least to the extent of examining the scenarios I listed, or others like them) is really the only fruitful way of resolving this question one way or the other.
I think I see a way towards mutual intelligibility on this, but unfortunately I don't think I have the bandwidth to get to that point. I will just point out this:

Hmm, I was more interested in the possibility.

This post seems to be implying that "salvage epistemology" is somehow a special mode of doing epistemology, and that one either approaches woo from a frame of uncritically accepting it (clearly bad) or from a frame of salvage epistemology (still possibly bad but not as clearly so).

But what's the distinction between salvage epistemology and just ordinary rationalist epistemology?

When I approach woo concepts to see what I might get out of them, I don't feel like I'm doing anything different than when I do when I'm looking at a scientific field and seeing what I might get out of it.

In either case, it's important to remember that hypotheses point to observations and that hypotheses are burdensome details. If a researcher publishes a paper saying they have a certain experimental result, then that's data towards something being true, but it would be dangerous to take their interpretation of the results - or for that matter the assumption that the experimental results are what they seem - as the literal truth. In the same way, if a practitioner of woo reports a certain result, that is informative of something, but that doesn't mean the hypothesis they are offering to explain it is true.

In...

Gordon Seidoh Worley
Indeed. I left a comment on the Facebook version of this basically saying "it's all hermeneutics unless you're just directly experiencing the world without conceptions, so worrying about woo specifically is worrying about the wrong frame".
I think the incentives in science and woo are different. Scientists are rewarded for discovering new things, or finding an error in existing beliefs, so if 100 scientists agree on something, that probably means more than if 100 astrologers agree on something. You probably won't make a career in science by merely saying "all my fellow scientists are right", but I don't see how agreeing with fellow astrologers would harm your career in astrology.

An ordinary rationalist will consider some sources more reliable, and some other sources less reliable. For example, knowing that 50% of findings in some field don't replicate is considered bad news. Someone who wants to "salvage" e.g. Buddhism is privileging a source that has a replication rate way below 50%.

I think the incentives in science and woo are different.

I agree, though I'm not sure how that observation relates to my comment. But yes, certainly evaluating the incentives and causal history of a claim is an important part of epistemology.

Someone who wants to "salvage" e.g. Buddhism is privileging a source that has a replication rate way below 50%.

I'm not sure if it really makes sense to think in terms of salvaging "Buddhism", or saying that it has a particular replication rate (it seems pretty dubious whether the concept of replication rate is well-defined outside a particular narrow context in the first place). There are various claims associated with Buddhism, some of which are better-supported and potentially valuable than others. 

E.g. my experience is that much of meditation seems to work the way some Buddhists say it works, and some of their claims seem to be supported by compatible models and lines of evidence from personal experience, neuroscience, and cognitive science. Other claims, very much less so. Talking about the "replication rate of Buddhism" seems to suggest taking a claim and believing it merely on the basis of Buddhism having made such a claim, but that w...

Imagine two parallel universes, each of them containing a slightly different version of Buddhism. Both versions tell you to meditate, but one of them, for example, concludes that there is "no-self", and the other concludes that there is "all-self", or some other similarly nebulous claim. How certain do you feel that in the other universe you would evaluate the claim and say: "wrong"? As opposed to finding a different interpretation why the other conclusion is also true. (Assuming the same peer pressure, etc.)
(Upvoted.) Well, that is kind of already the case, in that there are also Buddhist-influenced people talking about "all-self" rather than "no-self". AFAICT, the framings sound a little different but are actually equivalent: e.g. there's not much difference between saying "there is no unique seat of self in your brain that one could point at and say that it's the you" and "you are all of your brain". There's more to this than just that, given that talking in terms of the brain etc. isn't what a lot of Buddhists would do, but that points at the rough gist of it and I guess you're not actually after a detailed explanation.

Another way of framing that is what Eliezer once pointed out, that there is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. Similarly, a graph where each vertex is marked as self is kind of equivalent to one where none are.

More broadly, a lot of my interpretation of "no-self" isn't actually that directly derived from any Buddhist theory. When I was first exposed to such theories, much of their talk about self/no-self sounded to me like the kind of misguided folk speculation of a prescientific culture that didn't really understand the mind very well yet. It was only when I actually tried some meditative practices and got to observe my mind behaving in ways that my previous understanding of it couldn't explain, that I started thinking that maybe there's actually something there. So when I talk about "no-self", it's not so much that "I read about this Buddhist thing and then started talking about their ideas about no-self"; it's more like "I first heard about no-self but it was still a bit vague what it exactly meant and if it even made any sense, but then I had experiences which felt like 'no-self' would be a reasonable cluster label for, so I assumed that these kinds of things are probably what the
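The graph/complement point above can be made concrete: complementation is its own inverse, so a graph and its complement carry exactly the same information. A minimal Python sketch (the `complement` function is illustrative, not from any source being discussed):

```python
from itertools import combinations

def complement(vertices, edges):
    """Edge set of the complement graph: a pair is an edge iff it wasn't one before."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - {frozenset(e) for e in edges}

V = [1, 2, 3, 4]
G = {frozenset({1, 2}), frozenset({3, 4})}

Gc = complement(V, G)
# Applying the map twice recovers the original graph -- no information is
# lost in either direction, which is the sense in which a fully connected
# graph and an empty graph (or "all-self" and "no-self") say the same thing.
assert complement(V, Gc) == G
```

The bijection is what does the work here: any question you can answer about the graph, you can answer about its complement, so neither encoding is privileged.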
Who has tried to replicate it?
When aspiring rationalists interact with science, it's not just believing whatever 100 scientists agree on. If you take COVID-19 for example, we read a bunch of science, built models in our heads about what was happening, and then took action based on those models.
It's not obvious to me this effect dominates over political punishments for challenging powerful people's ideas. I definitely think science is more self-correcting than astrology over decades, but I don't trust the process on a year-to-year basis.

I would guess that a lot (perhaps most) of time, "salvage epistemology" is a rationalization to give to rationalists to justify their interest in woo, as opposed to being the actual reason they are interested in the woo. (I still agree that the concept is likely hazardous to some people.)

I agree with this.

There is also a related phenomenon: when a community that otherwise/previously accepted only people who bought into that community’s basic principles (aspiration to rationality, belief in the need for clear reasoning, etc.) adopts “salvage epistemology”, that community now opens itself up to all manner of people who are, shall we say, less committed to those basic principles, or perhaps not committed at all. This is catastrophic for the community’s health, sanity, integrity, ability to accomplish anything, and finally its likelihood of maintaining those very basic principles.

In other words, there is a difference between a community of aspiring rationalists of whom some have decided to investigate various forms of woo (to see what might be salvaged therefrom)—and the same community which has a large contingent of woo-peddlers and woo-consumers, of whom none believe in rationalist principles in the first place, but are only there to (at best) hang out with fellow peddlers and consumers of woo. The former community might be able to maintain some semblance of sanity even while they make their salvage attempts; the latter community is doomed.

It is difficult to distinguish between (1) people who think that there may be some value in a woo, and it is worth exploring it and separating the wheat from the chaff, and (2) people who believe that the woo is useful, and their only question is how to make it more palatable for the rationalist community. Both these groups are together opposed to people who would refuse to touch the woo in principle. The subtle difference between those two groups is the absence or presence of motivated reasoning.

If you are willing to follow evidence wherever it may lead you, you are open to the possibility that the horoscopes may actually correlate with something useful, but you are also open to the possibility that they might not. The "salvage at all costs" group, by contrast, already knows that the horoscopes are useful, and useful in more or less the traditional way; the only question is how to convince the others, who are immune to the traditional astrological arguments. That seems mostly like a question of using the right lingo: perhaps if we renamed Pisces to "cognitive ichthys", the usual objections would stop and rationalists would finally accept that Pisces might actually be cognitively different from other people; especially if a high-status community member supported it publicly with an anecdote or two. (The opposite kind of mistake would be refusing to accept in principle that being a Virgo might correlate with your success at school... simply because it makes you one of the oldest kids in the classroom.)

Maybe it's a question of timing. First prove that some part of the woo makes sense; then use it. Do not simply start with the assumption that surely most of the woo will be salvageable somehow; it may not be.
Said Achmiz
I think we’re talking about the same distinction? Or did you mean to specifically disagree with me / offer a competing view / etc.?

I’d go further and say: first give us a good reason why we should think that it’s even plausible or remotely likely that there’s anything useful in the woo in question. (Otherwise, what motivates the decision to attempt to “salvage” this particular woo? Why, for example, are you trying to “salvage” Buddhism, and not Old Believer-ism?) How, in other words, did you locate the hypothesis that this woo, out of all the nonsense that’s been purveyed by all the woo-peddlers over the whole history of humanity, is worth our time and attention to examine for salvageability?
I mostly agree. I believe it is possible -- and desirable -- in theory to do the "salvage epistemology" correctly, but sadly I suspect that in practice 90% of wannabe rationalists will do it incorrectly. Not sure what the correct strategy is here, because telling people "be careful" will probably just result in them saying "yes, we already are" when in fact they are not.

That actually makes sense. I would assume that each of them contains maybe 5% of useful stuff, but almost all the useful stuff of Old Believer-ism is probably shared with the rest of Christianity, and maybe 1/3 of it is already "in the water supply" if you grew up in a Christian culture.

Also, the "Buddhism" popular in the West is probably quite different from the original Buddhism; it is filtered for a modern audience. Big focus on meditation and equanimity, and mostly silence about Buddha doing literal miracles or how having the tiniest sexual thought will fuck up your afterlife. (So it's kinda like Jordan B. Peterson's idea of Christianity, compared to the actual Christianity.) So I wouldn't be surprised if the Western "Buddhism" actually contained 10% of useful stuff. But of course, 10% correct still means 90% incorrect. And when I hear some people in the rationalist community talk about Buddhism, they do not sound like someone who is 90% skeptical.
Broadly speaking, it's useful to have a wide range of ideas, because you can't guarantee that the ideas that are "local" to you are the best ones. It's gradient descent stuff.
4Gordon Seidoh Worley
You do say "a lot"/"most", but at least for me this is totally backwards. I only looked at woo type stuff because it was the only place attempting to explain some aspects of my experience. Rationalists were leaving bits of reality on the floor so I had to look elsewhere and then perform hermeneutics to pick out the useful bits (and then try to bring it back to rationalists, with varying degrees of success).

"Salvage" seems like a very strong frame/asserting what you're trying to prove. Something that needs salvaging has already failed, and the implication is you're putting a bunch of work into fixing it.

An alternate frame would be "mining", where it's accepted that most of the rock in a mine is worthless but you dig through it in the hopes of finding something small but extremely valuable. It might need polishing or processing, but the value is already in it in a way it isn't for something that needs salvaging. 

My guess is that you (Jim) would agree with the implications of "salvage", but I wanted to make them explicit.

It only counts as salvage epistemology if the person doing the mining explicitly believes that the thing they're mining from has failed; that is, they've made a strong negative judgment and decided to push past it. I don't mean for the term to include cases where the thing has failed but the person mistakenly believes it's good.
Suggestion withdrawn.

Do you have a principled model of what an "epistemic immune system" is and why/whether we should have one?

To elaborate a bit where I'm coming from here: I think the original idea with LessWrong was basically to bypass the usual immune system against reasoning, to expect this to lead to some problems, and to look for principles such as "notice your confusion," "if you have a gut feeling against something, look into it and don't just override it," "expect things to usually add up to normality" that can help us survive losing that immune system. (Advantage of losing it: you can reason!)

My guess is that that (having principles in place of a reflexive or socially mimicked immune system) was and is basically still the right idea. I didn't used to think this but I do now.

An LW post from 2009 that seems relevant (haven't reread it or its comment thread; may contradict my notions of what the original idea was for all I know): Reason as Memetic Immune Disorder

I don't have a complete or principled model of what an epistemic immune system is or ought to be, in the area of woo, but I have some fragments.

One way of looking at it is that we look at a cluster of ideas, form an outside view of how much value and how much crazymaking there is inside it, and decide whether to engage. Part of the epistemic immune system is tracking the cost side of the corresponding cost/benefit. But this cost/benefit analysis doesn't generalize well between people; there's a big difference between a well-grounded well-studied practitioner looking at their tenth fake framework, and a newcomer who's still talking about how they vaguely intend to read the Sequences.

Much of the value, in diving into a woo area, is in the possibility that knowledge can be extracted and re-cast into a more solid form. But the people who are still doing social-mimicking instead of cost/benefit are not going to be capable of doing that, and shouldn't copy strategies from people who are.

(I am trying not to make this post a vagueblog about On Intention Research, because I only skimmed it and I don't know the people involved well, so I can't be sure it fits the pattern, but the parts of it... (read more)

I'm skeptical that Leverage's intention research is well described as them trying to extract wisdom out of an existing framework that someone outside of Leverage created. They were interested in doing original research on related phenomena. It's unclear to me how to do a cost-benefit analysis when doing original research in any domain. If I look at credence calibration as a phenomenon to investigate and do original research, that research involves playing around with estimating probabilities, and it's hard to know beforehand which exercises will create benefits. Original research involves pursuing a lot of strands that won't pan out. Credence calibration is similar to the phenomenon of vibes that Leverage studied in the sense that it's a topic where it's plausible that some value is gained by understanding the underlying phenomena better. It's unclear to me how you would do the related cost-benefit analysis because it's in the nature of doing original research that you don't really know the fruits of your work beforehand.

I want to state more explicitly where I’m coming from, about LW and woo.

One might think: “LW is one of few places on the internet that specializes in having only scientific materialist thoughts, without the woo.”

My own take is more like: “LW is one of few places on the internet that specializes in trying to have principled, truth-tracking models and practices about epistemics, and on e.g. trying to track that our maps are not the territory, trying to ask what we’d expect to see differently if particular claims are true/not-true, trying to be a “lens that sees its own flaws.””

Something I don’t want to see on LW, that I think at least sometimes happens under both the headings of “fake frameworks” and the headings of “woo” (and some other places on LW too), is something like “let’s not worry about the ultimate nature of the cosmos, or what really cleaves nature at the joints right now.  Let’s say some sentences because saying these sentences seems locally useful.”

I worry about this sort of thing being on LW because, insofar as those sentences make truth-claims about the cosmos, deciding to “take in” those sentences “because they’re useful,” without worrying about the nature of th... (read more)

Are people here mostly materialists? I'm not. In a Cartesian sense, the most authentic experience possible is that of consciousness itself, with matter being something our mind imagines to explain phenomena that we think might be real outside of our imagination (but we can never really know). In other words, we know that idealism is true, because we experience pure ideas constantly, and we suspect that the images our minds serve up might actually correspond to some reality out there (Kant's things-in-themselves). The map might really be the territory. Like, if you read a book by Tolkien and find that the map doesn't match the text, which is right? And if Tolkien clarified, would he be right, considering the thing he's talking about doesn't even exist? Except it kinda does, in that we're debating real things, and they impact us, etc? I don't think we're anywhere near approaching a meaningful metaphysics, so the confidence of the materialists seems misplaced. I mean, yeah, I've seen matter, so I know it's real. But I've also seen matter in my dreams (including under a microscope, where it continued to be "real"). Sorry to rant on this single word!

Are people here mostly materialists?

Okay, since you seem interested in knowing why people are materialists. I think it's the history of science up until now. The history of science has basically been a constant build-up of materialism.

We started out at prehistoric animism where everything happening except that rock you just threw at another rock was driven by an intangible spirit. The rock wasn't since that was just you throwing it. And then people started figuring out successive compelling narratives about how more complex stuff is just rocks being thrown about. Planets being driven by angels? Nope, just gravitation and inertia. Okay, so comets don't have comet spirits, but surely living things have spirits. Turns out no, molecular biology is a bit tricky, but it seems to still paint a (very small) rocks-thrown-about picture that convincingly gets you a living tree or a cat. Human minds looked unique until people started building computers. The same story is repeating again: people point to human activities as proofs of the indomitable human spirit, then someone builds an AI to do it. Douglas Hofstadter was still predicting that mastering chess would have to involve encompassing t... (read more)

Thank you, this makes a lot of sense. I do see how the history of science kind of narrows its way down towards materialism, and if we assume that path will continue in the same direction, pure materialism is the logical outcome. But... I disagree with the narrative that science is narrowing in on materialism. Popular culture certainly interprets the message of Science with a capital S that way, but reading actual scientific work doesn't leave that impression at all. The message I got from my middle school science classes was that science is profoundly uncertain of what matter is, but that it appears to manifest probabilistically under the governance of forces, which are really just measurable tendencies of the behavior of matter, whose origin we also have no guess at. The spiritualists were wrong in their specific guesses, but so were the scientists, as you note when citing Aristotle. I have no doubt you will be on the right side of history. The priesthood will change the definitions of matter to accommodate whatever spiritual magic we discover next. Past scriptures will be reinterpreted to show how science was always progressing here, the present is the logical endpoint of the past, or at least, of our team in the past. That's because materialists write the record. It's easy to construct History to serve Ideology, so history, at least epic narrative history like this, is a bad teacher when received from power. Primitive pagan mythology stumbled ignorantly towards the True Religion, or even the inverse of your claim: history is full of self-sure clockwork Newtonians eating crow when the bizarre, uncertain nature of modern physics slowly unraveled before their arrogant, annoyed eyes. --- Thanks again for taking the time to discuss this btw, your response answered my question very well. After all, I'm arguing about whether people should be materialists, but you only explained why they are, so feel free to ignore my ramblings and accept my gratitude :)
You seem to be claiming that whatever does get discovered, which might be interpreted as proof of the spiritual in another climate, will get distorted to support the materialist paradigm. I'm not really sure how this would work in practice. We already have something of a precommitment to what we expect something "supernatural" to look like: ontologically basic mental entities. So far the discoveries of science have been nothing like that, and if new scientific discoveries suddenly were, I find it very hard to imagine quite many people outside of the "priesthood" not sitting up and paying very close attention. I don't really follow your arguments about what matter is and past scientists being wrong. Science improved and proved past scientists mistaken; that's the whole idea with science. Spiritualists have not improved much so far. And the question with matter isn't so much what it is (what would an answer to this look like anyway?), but how matter acts, and science has done a remarkably good job at that part.
Okay. I'm curious to understand why! Are you yourself materialist? Any recommended reading or viewing on the topic, specifically within the context of the rationalist movement?
I'd say that something-like-materialism feels like the most consistent and likely explanation. Sure we could assume that maybe, despite all appearances, there isn't a real world of matter out there after all... but given that we do assume such a world for pretty much everything else we do, it would seem like an unjustifiably privileged hypothesis to assume anything else. There's a pretty strong materialist viewpoint in the original LW sequences, though it's kinda scattered across a number of posts so I'm not sure which ones in particular I'd recommend (besides the one about privileged hypotheses).

This sounds a bit harsher than I really intend but... Self described rationalists and post rationalists could mostly use a solid course in something like Jonathan Baron's Thinking and Deciding, ie obtaining a broad and basic grounding in practical epistemology in the first place.

Is that a book?
I reviewed it on LW, almost 10 years ago now.
It's an academic textbook on rationality. It defines rationality as:

This seems roughly on point, but is missing a crucial aspect - whether or not you're currently a hyper-analytical programmer is actually a state of mind which can change. Thinking you're on one side when actually you've flipped can lead to some bad times, for you and others.

I'm genuinely uncertain whether this is true. The alternate hypothesis is that it's more of a skillset than a frame of mind, which means that it can atrophy but only partially and only slowly.

This is a bit off-topic with respect to the OP, but I really wish we’d more often say “aspiring rationalist” rather than “rationalist.” (Thanks to Said for doing this here.) The use of “rationalist” in parts of this comment thread and elsewhere grates on me. I expect most uses of either term are just people using the phrase other people use (which I have no real objection to), but it seems to me that when we say “aspiring rationalist” we at least sometimes remember that to form a map that is a better predictor of the territory requires aspiration, effort, forming one’s beliefs via mental motions that’ll give different results in different worlds. While when we say “rationalist”, it sounds like it’s just a subculture.

TBC, I don’t object to people describing other people as “self-described rationalists” or similar, just to using “rationalist” as a term to identify with on purpose, or as the term for what LW’s goal is. I’m worried that if we intentionally describe ourselves as “rationalists,” we’ll aim to be a subculture (“we hang with the rationalists”; “we do things the way this subculture does them”) instead of actually asking the question of how we can form accurate beliefs.

I... (read more)

I dutifully tried to say "aspiring rationalist" for a while, but in addition to the syllable count thing just being too much of a pain, it... feels like it's solving the wrong problem.

An argument that persuaded me to stop caring about it as much: communities of guitarists don't call themselves "aspiring guitarists". You're either doing guitaring, or you're not. (in some sense similar for being a scientist or researcher).

Meanwhile, I know at least some people definitely meet any reasonable bar for "actually a goddamn rationalist". If you intentionally reflect on and direct your cognitive patterns in ways that are more likely to find true beliefs and accomplish your goals, and you've gone off into the world and solved some difficult problems that depended on you being able to do that... I think you're just plain a rationalist.

I think I myself am right around the threshold where I think it might reasonably make sense to call myself a rationalist. Reasonable people might disagree. I think 10 years ago I was definitely more like "a subculture supporting character." I think Logan Strohl and Jim Babcock and Eliezer Yudkowsky and Elizabeth van Nostrand and Oliver Habryka each have some clea... (read more)

I love your observations here. The quality of grounding in a clear intuition here. I don't think you can avoid the subculture thing. The discipline doesn't exist in a void the way math kind of does. Unless & until you can actually define the practice of rationality, there's no clear dividing line between the social scene and the set of people who practice the discipline. No clear analogue to "actually playing a guitar". Like, I think I follow your intuition, but consider: I'm reasonably sure a lot of people here would consider me a great example of a non-rationalist. Lots of folk told me that to my face while I worked at CFAR. But the above describes me to an utter T. I'm just doing it in a way that the culture here doesn't approve of and thinks is pretty nutty. Which is fine. I think the culture here is doing its "truth-seeking" in a pretty nutty way too. Y'all are getting great results predicting Covid case numbers, and I'm getting great results guiding people to cure their social anxiety and depression. To each their own. I think what you're talking about is way, way more of an aesthetic than you might realize. Like, what are you really using to detect who is and isn't "actually a goddamn rationalist"? My guess is it's more of a gut sense that you then try to examine upon reflection. Is Elon Musk "actually a goddamn rationalist"? He sure seems to care about what's true and about being effective in the world. But I'm guessing he somehow lands as less of a central example than Oli or Eliezer do. If so, why? If Elon doesn't do it for you, insert some other successful smart person who mysteriously doesn't gut-ping as "actually a goddamn rationalist". If I'm way off here, I'd actually be pretty interested in knowing that. Because I'd find that illuminating as to what you mean by rationalism. But if I'm basically right, then you're not going to separate the discipline from the social scene with a term. You'll keep seeing social status and perception of skill co

I agree that "aspiring rationalist" captures the desired meaning better than "rationalist", in most cases, but... I think language has some properties, studied and documented by linguists, which define a set of legal moves, and rationalist->aspiring rationalist is an invalid move. That is: everyone using "aspiring rationalist" is an unstable state from which people will spontaneously drop the word aspiring, and people in a mixed linguistic environment will consistently adopt the shorter one. Aspiring Rationalist just doesn't fit within the syllable-count budget, and if we want to displace the unmodified term Rationalist, we need a different solution.

I don't know; finding a better solution sounds great, but there aren't that many people who talk here, and many of us are fairly reflective and ornery, so if a small group keeps repeatedly requesting this and doing it it'd probably be sufficient to keep "aspiring rationalist" as at least a substantial minority of what's said.

FWIW, I would genuinely use the term 'aspiring rationalist' more if it struck me as more technically correct — in my head 'person aspiring to be rational' ≈ 'rationalist'. So I parse aspiring rationalist as 'person aspiring to be a person aspiring to be rational'.

'Aspiring rationalist' makes sense if I equate 'rationalist' with 'rational', but that's exactly the thing I don't want to do.

Maybe we just need a new word here. E.g., -esce is a root meaning "to become" (as in coalesce, acquiesce, evanesce, convalescent, iridescent, effervescent, quiescent). We could coin a new verb "rationalesce" and declare it means "to try to become more rational" or "to pursue rationality", then refer to ourselves as the rationalescents.

Like adolescents, except for becoming rational rather than for becoming adult. :P

7Eli Tyre
I'm in for coining a new word to refer to exactly what we mean. I find it kind of annoying that if I talk about "rationality" on say, twitter, I have to wade through a bunch of prior assumptions that people have about what the term means (eg "trying to reason through everything is misguided. Most actual effective deciding is intuitive.") I would rather refer to the path of self honesty and aspirational epistemic perfection by some other name that doesn't have prior associations, in the same way that if a person says "I'm a circler / I'm into Circling", someone will reply "what's circling?". 
1Ben Pace
“Effective Altruist” has six syllables, “Aspiring Rationalist” has seven. Not that different. I will try using it in my writing more for a while.
Note what people actually say in conversation is "EA" (suggests "AR" as a replacement)
2Ben Pace
Hm, the "AR scene" already refers to something, but maybe we could fight out our edge in the culture.
There's also the good ol' Asp Rat abbreviation.
Autocompletes to asperger-rationalist for me, and I see Valentine reports the same. But maybe this frees up enough syllable-budget to spend one on bypassing that. How about: endevrat, someone who endeavours to be rational. (This one is much better on the linguistic properties, but note that there's a subtle meaning shift: it's no longer inclusive of people who aspire but do not endeavour, ie people who identify-with rationality but can't quite bring themselves to read or practice. This seems important but I don't know whether it's better or worse.)
(this was the intended joke)
Alas, my brain autocompletes "Asp Rat" to "Asperger's-like rationalist".
That one's also a little hard to pronounce, so I think we'd have to collapse it to "assrat".

Could go "aspirat". (Pronounced /ˈæs.pɪ̯.ɹæt/, not /ˈæsˈpaɪ̯.ɹɪʔ/.)

I find "AR" more difficult to actually say out loud than "EA". 

Just think like a pirate.

If "rationalist" is a taken as a success term, then why wouldn't "effective altruist" be as well? That is to say: if you aren't really being effective, then in a strong sense, you aren't really an "effective altruist". A term that doesn't presuppose you have already achieved what you are seeking would be "aspiring effective altruist", which is quite long IMO.
3Said Achmiz
One man’s modus tollens is another’s modus ponens—I happen to think that the term “effective altruist” is problematic for exactly this reason.
As I see it, "rationalist" already refers to a person who thinks rationality is particularly important, not necessarily a person who is rational, like how "libertarian" refers to a person who thinks freedom is particularly important, not necessarily a person who is free. Then literally speaking "aspiring rationalist" refers to a person who aspires to think rationality is particularly important, not to a person who aspires to be rational. Using "aspiring rationalist" to refer to people who aspire to attain rationality encourages people to misinterpret self-identified rationalists as claiming to have attained rationality. Saying something like "person who aspires to rationality" instead of "aspiring rationalist" is a little more awkward, but it respects the literal meaning of words, and I think that's important.
4Said Achmiz
This was not the usage in the Sequences, however, and otherwise at the time the Sequences were written.
I agree that it was not the usage in the Sequences, and that it was therefore not (or at least not always) the usage within the community that coalesced around EY's blogging. But if "otherwise at the time the Sequences were written" is meant to say more than that -- if you're saying that there was a tendency for "rationalist" to mean something like "person skilled in the art of reason" apart from EY's preference for using it that way -- then I would like to see some evidence. I don't think I have ever seen the word used in that way in a way that wasn't clearly causally descended from EY's usage.
2Said Achmiz
I was referring to usage here on Less Wrong (and in adjacent/related communities). In other words— —nope, it is not meant to say more than that.
Maybe what you're actually looking for is something like "aspiring beisutsuka". Like there's an ideal you're aiming for but can maybe approach only asymptotically. Just don't equate "rationalist" with "beisutsuka" and you're good.
The same model that says aspiring rationalist will self-replace with rationalist, says aspiring beisutsuka will self-replace with beisutsuka. But beisutsuka is a bit better than rationalist on its own terms; it emphasizes being a practitioner more, and presupposes the skill less. And it avoids punning with a dozen past historical movements that each have their own weird connotations and misconceptions. Unfortunately the phonology and spelling of beisutsuka is 99.9th-percentile tricky, and that might mean it's also a linguistically invalid move.
3Rana Dexsin
Some rabbit-hole expansion on this: First of all, you're missing an “i” at the end (as attested in “Final Words”), so that's some direct evidence right there. The second half is presumably a loan from Japanese 使い “tsukai”, “one who uses/applies”, usable as a suffix. In fiction and pop culture, it shows up prominently in 魔法使い “mahoutsukai”, “magic user” thus “wizard” or “sorcerer”; I infer this may have been a flavor source given Eliezer's other fandom attachments. The first half is presumably a transliteration of “Bayes” as ベイス “beisu”, which devoices the last mora for reasons which are not clear to me. Compare to Japanese Wikipedia's article on Thomas Bayes which retains the ズ (zu) at the end, including in compounds related to Bayesian probability and inference.
I kinda like this

Go to such people not for their epistemology, which is junk, but for whatever useful ground-level observations can be separated from the fog.

Crossposted from Facebook:

The term used in the past for a concept close to this was "Fake frameworks" -- see for instance Val's post in favor of it from 2017: https://www.lesswrong.com/.../in-praise-of-fake-frameworks

Unfortunately I think this proved to be a quite misguided idea in practice, and one that was made more dangerous by the fact that it seems really appealing in principle. As you imply, the people most interested in pursuing these frameworks are often not I think the ones who have the most sober and evenhanded evaluations of such, which can lead... (read more)

But there's a bad thing happens when you have a group that are culturally adjacent to the hyper-analytical programmers, but who aren't that sort of person themselves.

I... don't think "hyper-analytical programmers" are a thing. We are all susceptible to the risk of "falling into crazy" to a larger degree than we think we are. There is something in the brain where openness, being necessary for Bayesian updating, also means suspending your critical faculties to consider a hypothetical model seriously, and so one runs a risk that the hypothetical takes hold, ... (read more)

Yep. If you were crazy, what would that feel like from the inside?

for me it mostly felt like I and my group of closest friends were at the center of the world, with the last hope for the future depending on our ability to hold to principle. there was a lot of prophecy of varying quality, and a lot of importance placed suddenly on people we barely knew then rapidly withdrawn when those people weren't up for being as crazy as we were.

Thanks.  Are you up for saying more about what algorithm (you in hindsight notice/surmise) you were following internally during that time, and how it did/didn't differ from the algorithm you were following during your "hyper-analytical programmer" times?

This sort of “salvage epistemology” can also turn “hyper-analytical programmers”[1] into crazy people. This can happen even with pure ideas, but it’s especially egregious when you apply this “salvage epistemology” approach to, say, taking drugs (which, when I put it like that, sounds completely insane, and yet is apparently rather common among “rationalists”…).

  1. To the extent that such people even exist; actually, I mostly agree with shminux that they basically do not. ↩︎

I'm not so certain of that? Of the two extreme strategies "Just Say No" and "do whatever you want man, it feels goooooood", Just Say No is the clear winner. But when I've interacted with reasonable-seeming people who've also done some drugs, it looks like "here's the specific drug we chose and here's our safety protocol and here's everything that's known to science about effects and as you can see the dangers are non-zero but low and we think the benefits are also non-zero and are outweighing those dangers". And (anecdotally of course) they and all their friends who act similarly appear to be leading very functional lives; no one they know has gotten into any trouble worse than the occasional bad trip or exposure to societal disapproval (neither of which was ultimately a big deal, and both of which were clearly listed in their dangers column). Now it is quite possible they're still ultimately definitively wrong - maybe there are serious long-term effects that no one has figured out yet; maybe it turns out that the "everyone they know turns out ok" heuristic is masking the fact that they're all getting really lucky and/or the availability bias since the ones who don't turn out ok disappear from view; etc. And you can certainly optimize for safety by responding to all this with "Just Say No". But humans quite reasonably don't optimize purely for safety, and it is not at all clear to me that what these ones have chosen is crazy.
3Said Achmiz
What you say sounds like it could easily be very reasonable, and yet it has almost nothing in common, results-wise, with what we actually observe among rationalists who take psychedelics.

I know several rationalists who have taken psychedelics, and the description does seem to match them reasonably well.

There's a selection bias in that the people who use psychedelics the least responsibly and go the most crazy are also the ones most likely to be noticed. Whereas the people who are appropriately cautious - caution which commonly also involves not talking about drug use in public - and avoid any adverse effects go unnoticed, even if they form a substantial majority.

Unless it is a survivor bias, where among people who use drugs with approximately the same level of caution some get lucky and some get unlucky, and then we say "eh, those unlucky ones probably did something wrong, that would never happen to me". Or maybe the causality is the other way round, and some people become irresponsible as a consequence of becoming addicted.
3Said Achmiz
The selection effect exists, I don’t doubt that. The question is how strong it is. The phenomenon of people in the rationalist community taking psychedelics and becoming manifestly crazier as a result is common enough that in order for the ranks of such victims to be outnumbered substantially by “functional” psychedelic users, it would have to be the case that use of such drugs is, among rationalists, extremely common. Do you claim that this is the case?

it would have to be the case that use of such drugs is, among rationalists, extremely common.

It is, in fact, extremely common, including among sane stable people who don't talk about it.

2Said Achmiz
For avoidance of doubt, could you clarify whether you mean this comment to refer to “rationalist” communities specifically (or some particular such community?), or more broadly?


(One additional clarification: the common version of psychedelic use is infrequent, low dose and with a trusted sober friend present. Among people I know to use psychedelics often, as in >10x/year, the outcomes are dismal.)

4Said Achmiz
Understood, thanks.
I don't want to make a claim either way, since I don't know exactly how common the public thing you're referring to is. I know there's been some talk about this kind of thing happening, but I know neither exactly how many people we're talking about, nor with what reliability the cause can be specifically identified as being the psychedelics.
Who is “we”? This is more or less what I’ve observed from 100% of my (admittedly small sample size of) rationalist acquaintances who have taken psychedelics.
1Said Achmiz
“We” is “we on Less Wrong, talking about things we observe”. Of course I can’t speak for your private experience.

It's been my experience that many more people think they're immune to woo than actually are. I'm not sure the risk is worth the reward.

Hmm, I agree that the thing you describe is a problem, and I agree with some of your diagnosis, but I think your diagnosis focuses too much on a divide between different Kinds Of People, without naming the Kinds Of People explicitly but kind of sounding (especially in the comments) like a lot of what you're talking about is a difference in how much Rationality Skill people have, which I think is not the right distinction? Like I think I am neither a hyper-analytic programmer (certainly not a programmer) nor any kind of particularly Advanced rationalist, an... (read more)

I dunno...  IME, when someone not capable of steelmanning him reads e.g. David Icke, what usually happens is that they just think he must be crazy or something and dismiss him out of hand, not that they start believing in literal reptilian humanoids.

That post increased the probability that I will overcome my laziness and finally write a post about the concept of "bright doublethink" in English. Thanks.

I'm not sure what the right decision process on whether to do salvage epistemology on any given subject should look like. Also, if you see or suspect that this woo-ish thingy X "is a mix of figurative stuff and dumb stuff" but decide that it's not worth salvaging because of infohazard, how do you communicate it? "There's 10% probability that the ancient master Changacthulhuthustra discovered something instrumentally useful about the human condition, but reading his philosophy may mess you up, so you shouldn't." How many novices do you expect to follow a general c... (read more)