Or: “I don’t want to think about that! I might be left with mistaken beliefs!”

Related to: Rationality as memetic immune disorder; Incremental progress and the valley; Egan's Law.

tl;dr: Many of us hesitate to trust explicit reasoning because... we haven’t built the skills that make such reasoning trustworthy. Some simple strategies can help.

Most of us are afraid to think fully about certain subjects.

Sometimes, we avert our eyes for fear of unpleasant conclusions. (“What if it’s my fault? What if I’m not good enough?”)

But other times, oddly enough, we avert our eyes for fear of inaccurate conclusions.[1] People fear questioning their religion, lest they disbelieve and become damned. People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger. And I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things.

Ostrich Theory, one might call it. Or I’m Already Right theory. The theory that we’re more likely to act sensibly if we don’t think further, than if we do. Sometimes Ostrich Theories are unconsciously held; one just wordlessly backs away from certain thoughts. Other times full or partial Ostrich Theories are put forth explicitly, as in Phil Goetz’s post, this LW comment, discussions of Tetlock's "foxes vs hedgehogs" research, enjoinders to use "outside views", enjoinders not to second-guess expert systems, and cautions for Christians against “clever arguments”.

Explicit reasoning is often nuts

Ostrich Theories sound implausible: why would not thinking through an issue make our actions better? And yet examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse. Examples include, among many others:

  • Most early Communists;
  • Ted Kaczynski (The Unabomber; an IQ 160 math PhD who wrote an interesting treatise about the human impacts of technology, and also murdered innocent people while accomplishing nothing);
  • Mitchell Heisman;
  • Folks who go to great lengths to keep kosher;
  • Friends of mine who’ve gone to great lengths to be meticulously denotationally honest, including refusing jobs that required a government loyalty oath, and refusing to click on user agreements for videogames; and
  • Many who’ve gone to war for the sake of religion, national identity, or many different far-mode ideals.

In fact, the examples of religion and war suggest that the trouble with, say, Kaczynski wasn’t that his beliefs were unusually crazy. The trouble was that his beliefs were an ordinary amount of crazy, and he was unusually prone to acting on his beliefs. If the average person started to actually act on their nominal, verbal, explicit beliefs, they, too, would in many cases look plumb nuts. For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority. Someone else might risk their life-savings betting on an election outcome or business about which they were “99% confident”.

That is: many people’s abstract reasoning is not up to the task of day-to-day decision-making. This doesn’t impair their actions all that much, because abstract reasoning has little bearing on what we actually do. Mostly we just find ourselves doing things (out of habit, emotional inclination, or social copying) and make up the reasons post hoc. But when we do try to choose actions from theory, the results are far from reliably helpful -- and so many folks’ early steps toward rationality go unrewarded.

We are left with two linked barriers to rationality: (1) nutty abstract reasoning; and (2) fears of reasoned nuttiness, and other failures to believe that thinking things through is actually helpful.[2]

Reasoning can be made less risky

Much of this nuttiness is unnecessary. There are learnable skills that can both make our abstract reasoning more trustworthy and also make it easier for us to trust it.

Here's the basic idea:

If you know the limitations of a pattern of reasoning, learning better what it says won’t hurt you. It’s like having a friend who’s often wrong. If you don’t know your friend’s limitations, his advice might harm you. But once you do know, you don’t have to gag him; you can listen to what he says, and then take it with a grain of salt.[3]

Reasoning is the meta-tool that lets us figure out which methods of inference are trustworthy where. Reason lets us look over the track records of our own explicit theorizing, outside experts' views, our near-mode intuitions, etc., and figure out how trustworthy each is in a given situation.

If we learn to use this meta-tool, we can walk into rationality without fear.

Skills for safer reasoning

1. Recognize implicit knowledge.

Recognize when your habits, or outside customs, are likely to work better than your reasoned-from-scratch best guesses. Notice how different groups act and what results they get. Take pains to stay aware of your own anticipations, especially in cases where you have explicit verbal models that might block your anticipations from view. And, by studying track records, get a sense of which prediction methods are trustworthy where.

Use track records; don't assume that just because folks' justifications are incoherent, the actions they are justifying are foolish. But also don't assume that tradition is better than your models. Be empirical.

2. Plan for errors in your best-guess models.

We tend to be overconfident in our own beliefs, to overestimate the probability of conjunctions (such as multi-part reasoning chains), and to search preferentially for evidence that we’re right. Put these facts together, and theories folks are "almost certain" of turn out to be wrong pretty often. Therefore:

  • Make predictions from as many angles as possible, to build redundancy. Use multiple theoretical frameworks, multiple datasets, multiple experts, multiple disciplines.
  • When some lines of argument point one way and some another, don't give up or take a vote. Instead, notice that you're confused, and (while guarding against confirmation bias!) seek follow-up information.
  • Use your memories of past error to bring up honest curiosity and fear of error. Then, really search for evidence that you’re wrong, the same way you'd search if your life were being bet on someone else's theory.
  • Build safeguards, alternatives, and repurposable resources into your plans.
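The conjunction point above can be made concrete with a little arithmetic (a hedged sketch with invented confidence numbers, not figures from the post): if a conclusion rests on a chain of steps that must all hold, even individually "almost certain" steps compound downward fast, assuming the steps are roughly independent.

```python
def chain_confidence(per_step: float, steps: int) -> float:
    """Probability that every link in a reasoning chain holds,
    given a per-step probability and rough independence."""
    return per_step ** steps

# Illustrative numbers only: 90% confidence per step.
for steps in (3, 5, 10):
    print(f"{steps} steps at 90% each -> {chain_confidence(0.9, steps):.0%}")
# 3 steps -> ~73%, 5 steps -> ~59%, 10 steps -> ~35%
```

Ten near-certain steps leave roughly one chance in three, which is one reason redundant, multi-angle predictions beat a single long chain.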

3. Beware rapid belief changes.

Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they're currently pondering, or the beliefs of those they've recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions. If this is your situation:

  • Remember that accurate beliefs come from an even, long-term collection of all the available evidence, with no extra weight for arguments presently in front of one. Thus, they shouldn't fluctuate dramatically back and forth; you should never be able to predict which way your future probabilities will move.
  • If you can predict what you'll believe a few years from now, consider believing that already.
  • Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already. Remember the Conservation of Expected Evidence more generally.
  • Consider what emotions are driving the rapid fluctuations. If you’re uncomfortable ever disagreeing with your interlocutors, build comfort with disagreement. If you're uncomfortable not knowing, so that you find yourself grasping for one framework after another, build your tolerance for ambiguity, complexity, and unknowns.
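The Conservation of Expected Evidence point can be checked numerically (a minimal sketch with made-up probabilities, not anything from the post): under Bayes' theorem, your current probability already equals the average of your possible future posteriors, weighted by how likely each observation is, so you cannot expect, in advance, to end up more confident than you already are.

```python
# Made-up numbers for a binary hypothesis X and possible evidence E.
prior = 0.3                   # P(X)
p_e_given_x = 0.8             # P(E | X)
p_e_given_not_x = 0.4         # P(E | not-X)

# Total probability of seeing E.
p_e = prior * p_e_given_x + (1 - prior) * p_e_given_not_x

# Posterior after seeing E, and after seeing not-E (Bayes' theorem).
posterior_if_e = prior * p_e_given_x / p_e
posterior_if_not_e = prior * (1 - p_e_given_x) / (1 - p_e)

# Weight each posterior by how likely that observation is.
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e

print(round(expected_posterior, 10))  # equals the prior, 0.3
```

So if you can predict that reading the X-ist books would, on balance, move you toward X, that predictable movement should already be priced into today's belief.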

4. Update your near-mode anticipations, not just your far-mode beliefs.

Sometimes your far-mode is smart and your near-mode is stupid. For example, Yvain's rationalist knows abstractly that there aren’t ghosts, but nevertheless fears them. Other times, though, your near-mode is smart and your far-mode is stupid. You might “believe” in an afterlife but retain a concrete, near-mode fear of death. You might advocate Communism but have a sinking feeling in your stomach as you conduct your tour of Stalin’s Russia.

Thus: trust abstract reasoning or concrete anticipations in different situations, according to their strengths. But, whichever one you bet your actions on, keep the other one in view. Ask it what it expects and why it expects it. Show it why you disagree (visualizing your evidence concretely, if you’re trying to talk to your wordless anticipations), and see if it finds your evidence convincing. Try to grow all your cognitive subsystems, so as to form a whole mind.

5. Use raw motivation, emotion, and behavior to determine at least part of your priorities.

One of the commonest routes to theory-driven nuttiness is to take a “goal” that isn’t your goal. Thus, folks claim to care “above all else” about their selfish well-being, the abolition of suffering, an objective Morality discoverable by superintelligence, or average utilitarian happiness-sums. They then find themselves either without motivation to pursue “their goals”, or else pulled into chains of actions that they dread and do not want.

Concrete local motivations are often embarrassing. For example, I find myself concretely motivated to “win” arguments, even though I'd think better of myself if I were driven by curiosity. But, like near-mode beliefs, concrete local motivations can act as a safeguard and an anchor. For example, if you become abstractly confused about meta-ethics, you'll still have a concrete desire to pull babies off train tracks. And so dialoguing with your near-mode wants and motives, like your near-mode anticipations, can help build a robust, trustworthy mind.

Why it matters (again)

Safety skills such as the above are worth learning for three reasons.

  1. They help us avoid nutty actions.
  2. They help us reason unhesitatingly, instead of flinching away out of fear.
  3. They help us build a rationality for the whole mind, with the strengths of near-mode as well as of abstract reasoning.

[1] These are not the only reasons people fear thinking. At minimum, there is also:

  • Fear of social censure for the new beliefs (e.g., for changing your politics, or failing to believe your friend was justified in his divorce);
  • Fear that part of you will use those new beliefs to justify actions that you as a whole do not want (e.g., you may fear to read a study about upsides of nicotine, lest you use it as a rationalization to start smoking again; you may similarly fear to read a study about how easily you can save African lives, lest it end up prompting you to donate money).

[2] Many points in this article, and especially in the "explicit reasoning is often nuts" section, are stolen from Michael Vassar. Give him the credit, and me the blame and the upvotes.

[3] Carl points out that Eliezer points out that studies show we can't. But it seems like explicitly modeling when your friend is and isn't accurate, and when explicit models have and haven't led you to good actions, should at least help.

96 comments

Reading your post felt very weird to me, as if you were deliberately avoiding the obvious conclusion from your own examples! Do you really believe that people keep kosher or die in religious wars due to using abnormally explicit reasoning? The common thing about your examples is putting ideals over personal gain, not reasoning over instinct. Too much acting on explicitly stated values, not on explicitly stated beliefs. In truth, using rationality for personal gain isn't nearly as dangerous as idealism/altruism and doesn't seem to require the precautions you go on to describe. If any of the crazy things I do failed to help me, I'd just stop doing them.

Which prompts a question to everyone: what crazy things do you do that help you? (Rather than help save the light cone or something.)

I strongly disagree. I specifically think people DO die in religious wars due to using abnormally explicit reasoning.

Can you elaborate here? My initial reaction is one of skepticism, if only because abnormally explicit reasoning seems uncommon.

I also agree with MichaelVassar, I think much religious harm comes from using abnormally explicit reasoning.

This is because (I hypothesize that) great moral failures come about when a group of people (often, a religion, but any ideological group) think they've hit upon an absolute "truth" and then expect they can apply this truth to wholly develop an ethical code. The evil comes in when they mistakenly think that morality can be described by some set of universal and self-consistent principles, and they apply a principle valid in one context to another with disastrous results. When they apply the principle to the inappropriate domain, they should feel a twinge of conscience, but they override this twinge with their reason -- they believe in this original principle, and it deduces this thing here, which is correct, so that thing over there that it also deduces must also be correct. In the end, they use reason to override their natural human morality.

The Nazis are the main example I have in mind, but to look at a less painful example, the Catholic church is another example of over-extending principles due to reasoning. Valuing human life and general societal openness to pr...

Thanks for your feedback. Here I would guess that you're underestimating the influence of (evolutionarily conditioned) straightforwardly base motivations: c.f. the Milgram and Stanford Prison Experiments. I recently ran across this fascinating essay by Ron Jones on his experience running an experiment called "The Third Wave" in his high school class. I would guess that the motivation that he describes (of feeling superior to others) played a significantly larger role than abnormally explicit reasoning in the case of the Nazi regime; that (the appearance of?) abnormally explicit reasoning was a result of this underlying motivation rather than the cause. There may be an issue generalizing from one example here; what you're describing sounds to me closer to why a LW poster might have become a Nazi during Nazi times than why a typical person might have become a Nazi during Nazi times. On the other hand, I find it likely that the originators of the underlying ideas ("Aryan" nationalism, communism, Catholic doctrines) used explicit reasoning more often than the typical person does in coming to their conclusions.
It really is fascinating. But I don't believe him. I don't believe it was 'kept secret', and this is most likely some kind of delusion he experienced. (A very small experiment of this kind might make him feel so guilty that the size of the project grew in his mind.) For example, I believe I would have felt the same way as his students, but I'm certain I would not have kept it secret. Also, I'm confused about one of his statements, which seems rather ridiculous: being sent to the library for not wanting to participate in an assignment isn't beyond the pale. However, something just clicked in my mind and I realized an evil that we do as a society, that we allow because we sanction it as a community. So, yes, I see now how people can go along with something that their conscience should naturally fight against.
I agree that there are reasons to question the accuracy of Ron Jones' account. I think that Jones was not suggesting that the consequences of the students' actions are comparable to the consequences of the Nazis' actions, but rather was claiming that the same tendencies that led the Germans to behave as they did were present in his own students. This may not literally be true; it's possible that the early childhood development environment in 1950s Palo Alto was sufficiently different from the environmental factors in the early 1900s that the students did not have the same underlying tendencies that the Nazi Germans did, but it's difficult to tell one way or the other.

Right, this is what I was getting at. I think that there are several interrelated things going on here:

  • High self-esteem coming from feeling that one is on the right side.
  • Desire for acceptance / fear of rejection by one's peers.
  • Desire to reap material and other goods from the oppressed party.

with each point being experienced only on a semi-conscious level. In the case of the Catholic Church presumably only the first two points are operative. Of course empathy is mixed in there as well; but it may play a negligible role relative to the other factors on the table.
Add in desire for something more interesting than school usually is.
I have a question regarding the Milgram experiment. Were the teachers under the impression that the learners were continuing to supply answers voluntarily?
The learner was perceived to initially agree to the experiment, but among the recordings in the programmed resistance was one demanding to be let out.
Ah, also this sentence helped my understanding: I imagine -- perhaps erroneously -- that I would have tried to obtain the verbal agreement of the learner before continuing. But, for example, this is because I know that continuous subject consent is required whereas this might not have been generally known or true in the early 60s. Of course, I do see the pattern that this is probably such a case where everyone wants to rate themselves as above average (but they couldn't possibly all be). Still, I will humor my hero-bone by checking out the book and reading about the heroic exceptions, since those must be interesting.
Don't know the answer to your question; now that I look at the Wikipedia page I realize that I should only have referred to the Zimbardo Stanford Prison Experiment (the phenomenon in the Milgram experiment is not what I had in mind).
Abnormally much still doesn't have to be much.
In line with your comment (which I upvoted): I'm not really sure what you (or Anna, or cousin_it) mean by "abnormally explicit reasoning", and I can't tell whether the disagreement here is semantic or more substantive.
My assumption has been that religious wars mostly use religion as a surrogate for territorial, ethnic, and economic interest groups. On the other hand, religion somewhat shapes ethnic groups. Still, I think those wars are driven (at the top -- everyone else is stuck with the war whether they want it or not, and is likely to be influenced by propaganda) by "Because we're us!" much more than by "Because God wills it! (see elaborate argument for what God wants)".
Upvoted. "With or without religion, good people can behave well and bad people can do evil; but for good people to do evil—that takes religion. " - Steven Winberg
I had a similar impression and response. Humanity seems to get in trouble when it tries to make its values too explicitly consistent. The examples that come immediately to mind are when individuals or groups decide to become too strict, black-and-white, or exacting about upholding a value that they have. They forget about or deny a larger context for that value. I think that to avoid this, a person needs to learn to be comfortable with some inconsistency in their values, even as they learn not to be comfortable with inconsistencies in their beliefs about reality. Our values don't represent truths about reality in the same way our beliefs about external reality do, and this seems to be a deeper source of the epistemological conflicts we have.
I too noticed that some of the examples did not necessarily involve abnormally explicit reasoning.

A quote from the linked-to "cautions for Christians against clever arguments”, to save others the pain of wading through it to figure out what it's talking about:

It always begins the same way. They swallow first the rather subtle line that it is necessary for each to think for himself, to judge everything by the light of whether it appears reasonable to him. There is never any examination of that basic premise, though what it is really saying is that the mind of man becomes the ultimate test, the ultimate authority of all life. It is necessary for man to reason and it is necessary for him to think for himself and to examine things. But we are creatures under God, and we never can examine accurately or rightly until we begin with the basic recognition that all of man's thinking, blinded and shadowed as it is with the confusion of sin, must be measured by the Word of God. There is the ultimate authority.

Thanks for a ton of great tips Anna, just wanted to nit pick on one:

Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already. Remember the Conservation of Expected Evidence more generally.

I suspect that reading enough X-ist books will affect my beliefs for any X (well, nearly any). The key word is enough -- I suspect that fully immersing myself in just about any subject, and surrounding myself entirely with people who advocate it, would significantly alter my beliefs, regardless of the validity of X.

It wouldn't necessarily make you a believer. Worked example: I joined in the battle of Scientology vs. the Net in 1995 and proceeded to learn a huge amount about Scientology and everything to do with it. I slung the jargon so well that some ex-Scientologists refused to believe I'd never been a member (though I never was). I checked my understanding with ex-Scientologists to see if my understanding was correct, and it largely was.

None of this put me an inch toward joining up. Not even slightly.

To understand something is not to believe it.

That said, it'll provide a large and detailed pattern in your head for you to form analogies with, good or bad.

Alexflint said:

I suspect that fully immersing myself in just about any subject, and surrounding myself entirely with people who advocate it, would significantly alter my beliefs, regardless of the validity of X.

It seems that your experience was learning about anti-Scientology facts while surrounded by people who advocated anti-Scientology.

So it's completely unsurprising that you remained anti-Scientology.

Had you been learning about Scientology from friends of yours who were Scientologists, you might have had a much harder time maintaining your viewpoint.

Similarly, learning about christianity through the skeptics annotated bible is very different from learning about christianity through a christian youth group.

I actually first started reading alt.religion.scientology because I was interested in the substance of Scientology (SPOILER: there isn't any) from being a big William S. Burroughs fan. The lunacy is pretty shallow below the surface, which is why the Church was so desperately keen to keep the more esoteric portions from the public eye as long as possible. But, um, yeah. Point. OTOH, all the Scientologists I knew personally before that emitted weirdness signals. Thinking back, they behaved like they were trying to live life by a manual rather than by understanding. Memetic cold ahoy!
Interesting! But I do think it's harder than we imagine to maintain that perfect firewall between arguments you read and arguments you believe (or at least absorb into your decisions). Cases where you're genuinely uncertain about the truth are probably more salient than cases like Scientology on this front.

Well, yeah. Scientology is sort of the Godwin example of dangerous infectious memes. But I've found the lessons most useful in dealing with lesser ones, and it taught me superlative skills in how to inspect memes and logical results in a sandbox.

Perhaps these have gone to the point where I've recompartmentalised and need to aggressively decompartmentalise again. Anna Salamon's original post is IMO entirely too dismissive of the dangers of decompartmentalisation in the Phil Goetz post, which is about people who accidentally decompartmentalise memetic toxic waste and come to the startling realisation they need to bomb academics or kill the infidel or whatever. But you always think it'll never happen to you. And this is false, because you're running on unreliable hardware with all manner of exploits and biases, and being able to enumerate them doesn't grant you immunity. And there are predators out there, evolved to eat people who think it'll never happen to them.

My own example: I signed up for a multi-level marketing company, which only cost me a year of my life and most of my friends. I should detail precisely how I reasoned myself into it. It was all very logical. The process of re...

I hope you'll also post about how you reasoned yourself out of it.
Reading the sucker shoot analogy in a Florence Littauer book (CAUTION: Littauer is memetic toxic waste with some potentially useful bits). That was the last straw after months of doubts, the bit where it went "click! Oh, this is actually really bad for me, isn't it?" Had my social life been on the internet then (this was 1993) this would have been followed with a "gosh, that was stupid, wasn't it?" post. I hope. It may be relevant that I was reading the Littauer book because Littauer's books and personality theories were officially advocated in the MLM in question (Omegatrend, a schism of Amway) - so it seemed to be coming from inside. I worry slightly that I might have paid insufficient attention had it been from outside. I'd be interested to know how others (a) suffered a memetic cold (b) got out of it. Possible post material.
Just re-read this thread, and I'm still keen to hear how you reasoned yourself into it.
I would be interested in reading this, and especially about what caused the initial vulnerability.
Seems to me that, for most questions where there is any real uncertainty, many books are written advocating multiple points of view. If I were to read any one of these books, I would probably move closer to the author's point of view (since the author will select evidence to support his/her belief), but to know what I would believe after reading all of the books, I would have to actually read them to compare the strength of their arguments.
Yes, I think you're mostly right, but I just don't think I'm quite good enough to weigh the evidence just right, even when I'm explicitly trying to. Especially in cases where there is real uncertainty.
If you're built anything like me, the size of the effect does depend pretty strongly on X; some may require a simple book, some may require a full-fledged immersive indoctrination with a lot of social pressure. So I should move my belief towards any X that sounds like it could convince me with a simple book, which would cover a lot of (conflicting) theories on economics and history, but not a lot of religion or conspiracy theories or nationalist ideologies. Another belief this would lead me away from is the idea that "people who believe in X are evil/crazy" for a lot of values of X.

Anna - I'm favorably impressed by this posting! Thanks for making it. It makes me feel a lot better about what SIAI staff mean by rationality.

In the past I've had concerns that SIAI's focus on a future intelligence explosion may be born of explicit reasoning that's nuts (in the sense of your article), and the present posting does a fair amount to assuage my concerns - I see it as a strong indicator that you and some of the other SIAI staff are vigilant against the dangers of untrustworthy explicit reasoning.

Give Michael my regards.

If you can predict what you'll believe a few years from now, consider believing that already.

I've been thinking about this lately. Specifically, I've been considering the following question:

If you were somehow obliged to pick which of your current beliefs you'd disagree with in eight years time, with real and serious consequences for picking correctly or incorrectly, what criteria would you use to pick them?

I'm pretty sure that difficulty in answering this question is a good sign.

It seems to me that the problem splits into two parts-- changes in belief that you have no way of predicting (they're based on information and/or thinking that you don't have yet), and changes in belief that are happening slowly because you don't like the implications.
Like Nancy said for the second class of problems, but a little more generally, I'd preferentially pick the ones that I have rational reasons to suspect at the moment and that seem to be persisting for reasons that aren't obvious to me (or aren't rational), and ones that feel like they're surviving because they exploit my cognitive biases and other undesirable habits like akrasia.
You can predict that your belief will change, just not in what direction.
I think the question has implied acceptance of this.
Then, could you describe your idea in more detail?
Well, how would you answer the question? To apply it to a more manageable example, my beliefs about psychological sex differences in humans have changed considerably over both long and short timescales, to the point where I actively anticipate having different beliefs about them in the near future. In spite of this, I have no way of knowing which of those beliefs I'm going to demote or reject in future, because if I had such information it would be factored into the beliefs themselves.
Beliefs about facts that have been extensively studied probably won't change, unless I expect new observations to be made that resolve some significant uncertainty. For example, special relativity and the population of the USA in 2007 will stay about the same, while my belief about the USD:EUR ratio in 2011 will change in 2011, updating with actual observations. I don't see any problem with being able to distinguish such cases; it always comes down to whether I expect new observations/inferences to be made. Your second paragraph still sounds to me as if you continue to make the mistake I pointed out. You can't know how your beliefs will change (become stronger or weaker), but you can know that certain beliefs will probably change (in one of these directions). So, you can't know which belief you'll accept in the future, but you can know that the level of certainty in a given belief will probably shift.
I don't think I'm making a mistake. I think we're agreeing.
I don't have an understanding of that, but don't think it's worth pursuing further.
I got the sense that the question is asking you to look for beliefs you predict will change for the worse. So, you can't predict which direction your beliefs will change in, but if you have an inkling that one will go in the direction of "false", then that is some sort of warning sign:

  • You haven't thought the belief through fully, so you are semi-aware there might be contradictions down the line you haven't encountered yet, or
  • You haven't considered all the evidence fully, so you are semi-aware that there might be a small amount of very strong evidence against the belief, or
  • You have privileged your hypothesis, and you are semi-aware there might be explanations that fit the evidence better, or
  • You are semi-aware that you have done one of these things, but don't know which because you haven't thought about it.

In any case, your motivated cognition has let you believe the belief, but motivated cognition doesn't feel precisely like exhaustive double-checking, and a question like this tries to find that feeling.
Er, no, I more meant beliefs that you'll change for the better. For example, some people find themselves flip-flopping from one fad or intellectual community to the next, each time being very enthusiastic about the new set of ideas. In such cases, their friends can often predict that later on their beliefs will move back toward their normal beliefs, and so the individual probably can too.
This was sort of what I was aiming for. Evidence saying you're going to change your mind about something should be the same as evidence for changing your mind about something.
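The point being traded back and forth here has a precise form: conservation of expected evidence. You can predict that your *certainty* will shift, but your current credence must already equal the probability-weighted average of your possible future credences. A quick numeric check (all numbers below are illustrative assumptions, not from the thread):

```python
# Conservation of expected evidence: the prior equals the
# probability-weighted average of the possible posteriors.
prior_h = 0.3          # P(H): current credence in hypothesis H
p_e_given_h = 0.8      # P(E | H): chance of observing evidence E if H is true
p_e_given_not_h = 0.2  # P(E | ~H)

# Total probability of seeing E
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Bayes' rule for each possible observation
post_if_e = p_e_given_h * prior_h / p_e                  # P(H | E)
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)  # P(H | ~E)

# Expected posterior, averaged over what you might observe
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(round(expected_posterior, 10))  # equals the prior, 0.3
```

So you can expect to update in *some* direction (the posteriors 0.63 and 0.10 differ from the prior), but the expectation of the update is always zero, which is exactly the "you can't know which belief you'll accept in the future" point above.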

I think it is a bit unfair to frame arguments to trust outside views or established experts as arguments not to think about things. Rather, they are arguments about how much one should trust inside views or one's own thoughts relative to other sources.

Thanks for posting this, it's awesome.

I particularly endorse trying to build things out of your abstract reasoning, as a way of moving knowledge from "head-knowledge" to "fingers-knowledge".

Regarding this sentence: "Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already."

Since I'm irrational (memetically insecure) and persuasive deceptions (memetic rootkits) exist, the sentence needs some qualifier. Maybe: "If you believe that the balance of the unknown arguments favors believing X, then you have reason to believe X."

"fingers-knowledge" is a great phrase.

Make every link in a chain of argument explicit. Most of the weirder conclusions I have seen in my own and others' beliefs have come about because the reasoner conflated several different lines of reasoning, or jumped over steps that appeared "obvious" but included a mistaken assumption that went unnoticed because it was never spelled out explicitly.

Also, be very careful not to confuse different meanings of a word; sometimes these differences can be very subtle, so you need to be watchful.

For actually reasoning with an argument, keep it... (read more)

I'm interested in examples for the sort of mistakes you're describing.
In general, when you see two people arguing past each other, these kinds of problems are often involved at the root. Two examples I can give are the problem of "natural rights" and the problem of "authority".

The natural rights issue needs a pretty long and involved discussion even to understand; it amounts to a long, convoluted sequence of conflations and assumptions. The problem of authority is easier to describe, since it amounts to a single major error: "authority" conflates two distinct ideas, knowledge or expertise and justifiable or legitimate force. The two are necessarily linked in parental authority, but they are distinct ideas that tend to cause misunderstandings and resentment when conflated in institutional academic or state interactions.

A good source for understanding the root idea in a political context is Thomas Sowell's A Conflict of Visions, where he points out that people tend to use the same word to mean different things; his main examples are "fairness" and "equality". Neither side is "misusing" the words: the words themselves conflate those (and more) meanings into their definitions, and the fact that most people don't notice the conflation is the problem.
Basically: A human's guide to words
Could you give examples?

There is a much simpler way of winning than carefully building up your abstract-reasoning ability to the point where it produces usefully accurate, unbiased, well-calibrated probability distributions over relevant outcome spaces.

The simpler way is just to recognize that, as a human in a western society, you won't lose much more or win much more than the other humans around you. So you may as well dump the abstract reasoning and rationality, and pick some humans who seem to live relatively non-awful lives (e.g. your colleagues/classmates) and take whatever... (read more)

Well, unless you actually take specific steps to win more... which is kind of what this is about. Note that people probably tend to end up here by this very process. That is, of all the subcultures available to them, the subculture of people who are interested in rationality is the most attractive.
True ... but I suspect that people who end up here do so because they basically take more-than-averagely literally the verbally endorsed beliefs of the herd. Rationality as memetic immune disorder, failure to compartmentalize etc. Perhaps I should amend my original comment to say that if you are cognitively very different from the herd, you may want to use a bit of rationality/self-development like a corrective lens. You'll have to run compartmentalization in software. Maybe I should try to start a new trend: use {compartmentalization} when you want to invalidate an inference which most people would not make because of compartmentalization? E.g. "I think all human lives are equally valuable" "Then why did you spend $1000 on an ipad rather than giving it to Givewell?" "I refute it thus: {compartmentalization: nearmode/farmode}"
What steps can a person actually take to really, genuinely win more, in the sense of "win" which most people take as their near-mode optimization target? I suspect that happiness set-points mean there isn't really much you can do. In fact, probably one of the few ways to genuinely affect your total well-being over your lifetime is to take seriously the notion that you have so little control over it: you'll get depressed about it. I recently read a book called 59 Seconds which said that 50% of the variance in life satisfaction/happiness is directly genetically determined via your happiness set-point. In fact, the advice the book gave was to just chill out about life: by far the easiest way to improve your life is to frame it more positively.
Happiness is a sham; focus on satisfaction. There don't seem to be satisfaction set points. That said, I agree with what you seem to be saying- that optimization is a procedure that is itself subject to optimization.
There's at least one very big problem with this sort of majoritarian herding: If everyone did it, it wouldn't work in the least. You need a substantial proportion of people actually trying to get the right answer in order for "going with the herd" to get you anywhere. And even then, it will only get you the average; you'll never beat the average by going with the average. (And don't you think that, say, Einstein beat the average?)

And in fact there are independent reasons from evolutionary psychology and memetics to suspect that everyone IS doing it, or at least that a lot of people are doing it a lot of the time. Ask most Christians why they are Christian, and they won't give you detailed theological reasons; they'll shrug and say "It's how I was raised".

This is sort of analogous to the efficient market hypothesis, and the famous argument that you should never try to bet against the market because on average the market always wins. Well... if you actually look at the data, no it doesn't, and people who bet against the market can in some cases become spectacularly rich. Moreover, the market is only as efficient as it is because millions of people buy their stocks NOT in a Keynesian beauty contest, but based on the fundamental value of the underlying assets. With enough value investors, people who just buy market-wide ETFs can do very well. But if there were no value investors (or worse, no underlying assets; a casino is an example of a market with options that have no underlying assets), buying ETFs would get you nowhere.

The main problem I see with this post is that it assumes that it's always advantageous to find out the truth and update one's beliefs towards greater factual and logical accuracy. Supposedly, the only danger of questioning things too much is that attempts to do so might malfunction and instead move one towards potentially dangerous false beliefs (which I assume is meant by the epithets such as "nutty" and "crazy").

Yet I find this assumption entirely unwarranted. The benefits of holding false beliefs can be greater than the costs. This ... (read more)

Speaking from experience: avoiding too much thought about true beliefs that damage one's happiness without providing any value is done by monitoring one's happiness. Or possibly by working on depression.

For quite some time, my thoughts would keep going back to the idea that your government can kill you at any time (the Holocaust). Your neighbors can kill you at any time (Rwanda). Eventually, I noticed that such thoughts were driven by an emotional pull rather than by their relevance to anything I wanted or needed. There's still some residue-- after all, it's a true thought, and I don't think I'm just spreading depression when I occasionally point out that governments could build UFAI or be a danger to people working on FAI.

Unfortunately, while I remember the process of prying myself loose from that obsession, I don't remember what might have led to the inspiration to look at those thoughts from the outside.

More generally, I believe there's an emotional immune system, and it works better for some people than others, at some times than others, and probably (for an individual) about some subjects than others.
Do you have some examples of such beliefs?

The problem with the most poignant examples is that it's impossible to find beliefs that signal low status and/or disreputability in the modern mainstream society, and are also uncontroversially true. The mention of any concrete belief that is, to the best of my knowledge, both true and disreputable will likely lead to a dispute over whether it's really true. Yet, claiming that there are no such beliefs at all is a very strong assertion, especially considering that nobody could deny that this would constitute a historically unprecedented state of affairs.

To avoid getting into such disputes, I'll give only two weaker and (hopefully) uncontroversial examples.

As one example, many people have unrealistic idealized views of some important persons in their lives -- their parents, for example, or significant others. If they subject these views to rational scrutiny, and perhaps also embark on fact-finding missions about these persons' embarrassing past mistakes and personal failings, their new opinions will likely be more accurate, but it may make them much unhappier, and possibly also shatter their relationships, with all sorts of potential awful consequences. This seems like a clear an... (read more)

This is a good point. Most ideas that are mistreated by modern mainstream society are not obviously true. Rather, they are treated as much less probable than a less-biased assessment would estimate. This tendency leads to many ideas being given a probability of 0% when they really deserve a probability of 40-60% on the current evidence. That is consistent with your experience (and mine) of examining various controversies and being unable to tell which positions are actually correct based on the current evidence.

The psychology seems to combine a binary view of truth with a raised burden of proof for low-status beliefs: people are allowed to "round down" or even floor their subjective probabilities for undesirable beliefs. Any probability less than 50% (or 90%, in some discussions) can be treated the same. Unfortunately, the English language (and probably others, too) is horribly bad for communication about probability, allowing such forms of sophistry to flourish. And the real world is often insufficient to punish educated middle-class people for rounding or flooring probabilities in the socially desirable direction, even though people making such abuses of probability would get destroyed in many practical endeavours (e.g. betting).

One method for avoiding bias is to notice when one is tempted to engage in such rounding and flooring of probabilities.
I see your point. I agree that these people are moving away from a local optimum of happiness by gaining true beliefs. As to the global optimum, it's hard to say. I guess it's plausible that the best of all possible happinesses involves false beliefs. Does it make sense that I have a strong ethical intuition to reject that kind of happiness? (Anecdotally, I find the more I know about my loved ones' foibles, the more I look on them fondly as fellow creatures.)
Consequences like... getting out of a relationship founded on horror and lies? I agree that could be painful, but I have a hard time seeing it as a net loss.
Here's a good example: "The paper that supports the conventional wisdom is Jensen, A. R., & Reynolds, C. R. (1983). It finds that females have a 101.41 mean IQ with a 13.55 standard deviation versus males that have a 103.08 mean IQ with a 14.54 standard deviation."

Now, people will lynch you for that difference of 1.67 IQ points (1.63%), unless you make excuses for some kind of bias or experimental error. For one thing, the overall average IQ is supposed to be 100. Also, some studies have females with the higher IQ. But what about that other bit, the 7% difference in standard deviation? Stated like this, it is largely inoffensive, because people who know enough math to understand what it means usually know to disregard slight statistical variations in the face of specific evidence. But what if you take it to its logical conclusion concerning the male/female ratio among the top 0.1% smartest people, and then tell other people your calculated ratio? (To make sure it is a true belief, state it as "this study, plus this calculation, results in...")

If you state such a belief, people will take it as a signal that you would consider maleness to be evidence of being qualified. And, since people are bad at math and will gladly follow a good cause regardless of truth, almost no one will care that looking at actual qualifications is necessarily going to swamp any effects from statistics, nor will they care whether it is supported by a scientific study (weren't those authors both males?). And the good-cause people aren't even wrong: considering that people are bad at math, and that there is discrimination against women, knowledge of that study will likely increase discrimination, either through ignorance or intentional abuse, regardless of whether the study was accurate.

If you accept the above belief, but decide that letting others know about it is a bad idea, then you still have to spend some amount of effort guarding lest you let slip your secret in your speech or actions.
This actually checks out.
You might be able to inoculate yourself against that by also calculating and quoting the conjugate male/female ratio of the lowest 0.1% of the population. Which is really something you should be doing anyway any time you look at a highest or lowest X% of anything, lest people take your information as advice to build smaller schools, or move to the country to prevent cancer.
Why would that "inoculate" you? Yeah, it makes it obvious that you're not talking about a mean difference (except for, you know, the real mean difference found in the study), but saying "there are more men than women in prisons and more men than women that are math professors at Harvard" is still not gender egalitarian.
Using those figures, 0.117% of males and 0.083% of females have IQs below 58.814, so if the sex ratio in whatever-you're-thinking-of is much greater than 1.4 males per female, something else is going on.
Using those figures, 0.152% of males and 0.048% of females have IQs over 146.17, so if the sex ratio in whatever-you're-thinking-of is much greater than 3.2 males per female, something else is going on.
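The two tail calculations above can be reproduced with a short script, assuming normal distributions with the means and standard deviations quoted from the study (the thresholds 146.17 and 58.814 are the ones used in these comments):

```python
import math

def upper_tail(threshold, mean, sd):
    """P(X > threshold) for a normal distribution, via the complementary error function."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

def lower_tail(threshold, mean, sd):
    """P(X < threshold), by symmetry of the normal distribution."""
    return upper_tail(2 * mean - threshold, mean, sd)

# Means and standard deviations quoted from Jensen & Reynolds (1983) above.
M_MEAN, M_SD = 103.08, 14.54
F_MEAN, F_SD = 101.41, 13.55

m_hi = upper_tail(146.17, M_MEAN, M_SD)  # ~0.152% of males
f_hi = upper_tail(146.17, F_MEAN, F_SD)  # ~0.048% of females
m_lo = lower_tail(58.814, M_MEAN, M_SD)  # ~0.117% of males
f_lo = lower_tail(58.814, F_MEAN, F_SD)  # ~0.083% of females

print(f"top-tail ratio:    {m_hi / f_hi:.1f} males per female")  # ~3.2
print(f"bottom-tail ratio: {m_lo / f_lo:.1f} males per female")  # ~1.4
```

Note how sensitive the tail ratio is to small differences in mean and variance: a 1.67-point mean gap and a 7% SD gap, invisible in everyday interaction, produce a 3:1 ratio three standard deviations out.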
The zero of the scale is arbitrary, so the "1.63%" is meaningless.
In my experience, practically speaking though not theoretically, true beliefs are literally always beneficial relative to false ones, though not always worth the cost of acquiring them.

Propositional calculus is brittle. A contradiction implies everything.

In Set Theory, Logic and Their Limitations, Machover calls this the Inconsistency Effect. I'm surprised to find that this doesn't work well as a search term. Hunting around, I find:

One page

In classical logic, a contradiction is always absurd: a contradiction implies everything.

Another page

Another trouble is that the logical conditional is such that P AND ¬P ⇒ Q, regardless of what Q is taken to mean. That is, a contradiction implies that absolutely everything is true.

Any false fact that y... (read more)

"Ex falso quodlibet" or "principle of explosion" might be the search term you are looking for. Relevance logic and other nonclassical logics are not explosive in the same way.
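For reference, the derivation behind "ex falso quodlibet" is short enough to check mechanically; here is a minimal sketch in Lean 4:

```lean
-- Ex falso quodlibet / principle of explosion:
-- from a contradiction P ∧ ¬P, any proposition Q whatsoever follows.
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```

The classical natural-deduction route makes the same point in three steps: from P infer P ∨ Q (disjunction introduction); combine with ¬P; conclude Q by disjunctive syllogism. Disjunctive syllogism is precisely the step that relevance logics reject in order to avoid explosion.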
They can't consider themselves manifestations of logic, but since they are reasoning about the infallible logic, not about themselves, there is no problem.

I wouldn't say that this is a fear of an "inaccurate conclusion," as you say. Instead, it's a fear of losing control and becoming disoriented: "losing your bearings," as you said. You're afraid that your most trustworthy asset - your ability to reason through a problem and come out safe on the other side; an asset that should never fail you - will fail you and lead you down a path you don't want to go. In fact, it could lead to Game Over if you let it lead you to kill or be killed, as you highlight in your examples of the Unabomber... (read more)

For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority.

Steven Landsburg used this reasoning, combined with the fact that Christians don't generally do this, to conclude not that Christians don't act on their beliefs, but that Christians don't generally believe what they claim to believe. I think the different conclusion is reached because he assigns a lot more rationa... (read more)

What does it mean, actually, to "believe" something? If it implies that you integrate it into your worldview and act accordingly, then these people clearly don't "believe" in that sense. But this may be an altogether too strong notion of what it is to "believe" something, since most people have things they'd say they "believe" that aren't applied in this way.

I, too, really appreciated this post.

Unfortunately, though, I think that you missed one of the most important skills for safer reasoning -- recognizing and acknowledging assumptions (and double-checking that they are still valid). Many of the most dangerous failures of reasoning occur when a normally safe assumption is carried over to conditions where it is incorrect. Diving three feet into water that is unobstructed and at least five feet deep won't lead to a broken neck -- unless the temperature is below zero centigrade.

I like this; it seems practical and realistic. As a point of housekeeping, double-check the spaces around your links--some of them got lost somewhere. :)

More specifically, from "4. Update your near-mode anticipations, not just your far-mode beliefs" downwards, the links began lacking spaces with two exceptions.
Are they still lacking spaces on your browser? I'm puzzled, because they were lacking spaces for me last night (confusingly, even though there were spaces in my text in the edit window), and then they disappeared this morning without my having changed the text meanwhile.
That's quite strange, but yes, they're fixed now. For reference: What browser are you using? I'm currently running Google Chrome.
Chrome as well. I've had this problem before with other posts and non-confidently suspect I tried other browsers at that time.

Posted on behalf of someone else who had the following comment:

I would have liked for [this post] to contain details about how to actually do this:

If you're uncomfortable not knowing, so that you find yourself grasping for one framework after another, build your tolerance for ambiguity, complexity, and unknowns.

People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger.

I routinely walk (and run) alone at night. Indeed, I plan on going for a 40k run/walk alone tonight. Yet I observe that walking alone at night does really seem like it involves danger - particularly if you are an attractive female.

I actually know people (ok, so I am using my sisters as anecdotes) who are more likely to fear considering a "don't walk alone at night" strategy because it may mean they would have to sacrifice their exercise routine. Fortunately Melbourne is a relatively safe city as far as 'cities in the world' go.

I love this post, but I think "we can walk into rationality without fear" is too strong.

I'd just like to point out that 5 looks like a specific application of 1. Recognizing that your "goal" is just what you think is your goal, and you can be mistaken about it in many ways.

Minor typo -"denotationally honest, including refusing to jobs that required a government loyalty oath" - no need for "to" before "jobs".

Thanks. Fixed.
Deontological? I'm really confused now. I felt sure that the typo was denotational when it should be deontological
"Denotationally honest" means speaking the literal truth, though presumably your connotations and non-verbal communication may be misleading. Commitment to this principle certainly seems deontological, as opposed to a consequentialist concern for the goal of others having accurate beliefs. One might claim that it is based on the consequentialist goal of having a reputation for making literally honest statements, but I would suspect that to be a rationalization.