This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.

Welcome to the Rationality reading group. This week we discuss the Preface by primary author Eliezer Yudkowsky, the Introduction by editor & co-author Rob Bensinger, and the first sequence: Predictably Wrong. This sequence introduces the methods of rationality, including its two major applications: the search for truth and the art of winning. Motivations for seeking truth are examined, and a few obstacles to seeking truth--systematic errors, or biases--are discussed in detail.

This post summarizes each article of the sequence, linking to the original LessWrong posting where available, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

Reading: Preface, Biases: An Introduction, and Sequence A: Predictably Wrong (pp. i-xxxv and 1-42)


Preface. Introduction to the ebook compilation by Eliezer Yudkowsky. Retrospectively identifies mistakes of the text as originally presented. Some have been corrected in the ebook; others stand as-is. Most notably, the book focuses too much on belief and too little on practical action, especially with respect to our everyday lives. Establishes that the goal of the project is to teach rationality: those ways of thinking which are common among practicing scientists and foundational to the Enlightenment, yet not (yet) systematically organized or taught in schools.

Biases: An Introduction. Editor & co-author Rob Bensinger motivates the subject of rationality by explaining the dangers of systematic errors caused by *cognitive biases*, which the arts of rationality are intended to correct. Rationality is not about Spock-like stoicism -- it is about simply "doing the best you can with what you've got." The System 1 / System 2 dual-process dichotomy is explained: if our errors are systematic and predictable, then we can instil behaviors and habits to correct them. A number of exemplar biases are presented. However, a warning: it is difficult to recognize biases in your own thinking even after learning of them, and knowing about a bias may grant unjustified overconfidence that you yourself do not fall prey to such mistakes in your thinking. To develop as a rationalist, actual experience is required, not just learned expertise or knowledge. Ends with an introduction of the editor and an overview of the organization of the book.

A. Predictably Wrong

1. What do I mean by "rationality"? Rationality is a systematic means of forming true beliefs and making winning decisions. Probability theory is the set of laws underlying rational belief, "epistemic rationality": it describes how to process evidence and observations to revise ("update") one's beliefs. Decision theory is the set of laws underlying rational action, "instrumental rationality", independent of what one's goals and available options are. (p7-11)

2. Feeling rational. Becoming more rational can diminish feelings or intensify them. If one cares about the state of the world, it is expected that he or she should have an emotional response to the acquisition of truth. "That which can be destroyed by the truth should be," but also "that which the truth nourishes should thrive." The commonly perceived dichotomy between emotions and "rationality" [sic] is more often about fast perceptual judgements (System 1, emotional) vs slow deliberative judgements (System 2, "rational" [sic]). But both systems can serve the goal of truth, or defeat it, depending on how they are used. (p12-14)

3. Why truth? and... Why seek the truth? Curiosity: to satisfy an emotional need to know. Pragmatism: to accomplish some specific real-world goal. Morality: to be virtuous, or fulfill a duty to truth. Curiosity motivates a search for the most intriguing truths, pragmatism the most useful, and morality the most important. But be wary of the moral justification: "To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake." (p15-18)

4. ...what's a bias, again? A bias is an obstacle to truth, specifically one produced by our own thinking processes. We describe biases as failure modes which systematically prevent typical human beings from determining truth or selecting actions that would have best achieved their goals. Biases are distinguished from mistakes which originate from false beliefs or brain injury. To better seek truth and achieve our goals, we must identify our biases and do what we can to correct for or eliminate them. (p19-22)

5. Availability. The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind. If you think you've heard about murders twice as often as suicides, then you might suppose that murder is twice as common as suicide, when in fact the opposite is true. Use of the availability heuristic gives rise to the absurdity bias: events that have never happened are not recalled, and hence deemed to have no probability of occurring. In general, memory is not always a good guide to probabilities in the past, let alone to the future. (p23-25)
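The murder/suicide inversion can be sketched as a small simulation: recall that over-samples one category flips the estimated ratio. All numbers below are illustrative, not statistics from the book:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical ground truth: suicides roughly twice as common as murders
# (the direction the post describes, not real statistics).
true_counts = {"murder": 100, "suicide": 200}

# Hypothetical reporting rates: murders are far more likely to be
# reported, so they dominate the examples we can recall.
report_prob = {"murder": 0.9, "suicide": 0.1}

recalled = {"murder": 0, "suicide": 0}
for event, count in true_counts.items():
    for _ in range(count):
        if random.random() < report_prob[event]:
            recalled[event] += 1

# Reality: suicide is more common. Recall: murder seems more common.
print(true_counts["suicide"] > true_counts["murder"])  # True
print(recalled["murder"] > recalled["suicide"])        # True
```

The estimator isn't wrong about what it remembers; it's wrong because the sample it remembers was never representative.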

6. Burdensome details. The conjunction fallacy is when humans rate the probability of two events as higher than the probability of either event alone: adding detail can make a scenario sound more plausible, even though the event as described necessarily becomes less probable. Possible fixes include training yourself to notice the addition of details and discount appropriately, thinking about reasons why the central idea could be true other than the added detail, or training oneself to hold a preference for simpler explanations -- to feel every added detail as a burden. (p26-29)
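The rule being violated is just P(A and B) = P(A) × P(B|A) ≤ P(A): every added detail multiplies in another factor of at most 1. A minimal numeric sketch, with made-up probabilities:

```python
# Made-up probabilities for illustration (not figures from the book).
p_a = 0.30          # P(A): the central claim alone
p_b_given_a = 0.40  # P(B|A): the vivid extra detail, given A

# Conjunction rule: adding the detail can only lower the probability,
# no matter how plausible the combined story sounds.
p_a_and_b = p_a * p_b_given_a  # ≈ 0.12

assert p_a_and_b <= p_a
```

However plausible the detail makes the story feel, the arithmetic only ever moves one way.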

7. Planning fallacy. The planning fallacy is our systematic tendency to make unrealistically optimistic plans. The source of the error is that we imagine how things will turn out if everything goes according to plan, and do not appropriately account for possible troubles or difficulties along the way. The typically adequate solution is to compare the new project to broadly similar projects undertaken in the past, and ask how long those took to complete. (p30-33)
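That "outside view" fix can be sketched in a few lines: ignore the optimistic inside-view plan and estimate from the reference class of past projects. The durations below are hypothetical:

```python
import statistics

# Hypothetical durations (in weeks) of broadly similar past projects.
past_durations = [6, 8, 9, 11, 14, 15, 20]

inside_view_estimate = 5  # "if everything goes according to plan"

# Outside view: just ask how long comparable projects actually took.
outside_view_estimate = statistics.median(past_durations)  # 11 weeks

print(inside_view_estimate, outside_view_estimate)  # 5 11
```

The point of the median is that it discards the details of the new plan entirely; the plan's details are exactly the thing the inside view is overconfident about.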

8. Illusion of transparency: why no one understands you. The illusion of transparency is our bias to assume that others will understand the intent behind our attempts to communicate. The source of the error is that we do not sufficiently consider alternative frames of mind or personal histories, which might lead the recipient to alternative interpretations. Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think. (p34-36)

9. Expecting short inferential distances. Human beings are generally capable of processing only one piece of new information at a time. Worse, in the ancestral environment, someone who said something with no obvious support was a liar or an idiot, and if you said something blatantly obvious and the other person didn't see it, they were the idiot. This is our bias towards explanations of short inferential distance. A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If at any point you make a statement without obvious justification in arguments you've previously supported, the audience just thinks you're crazy. (p37-39)

10. The lens that sees its own flaws. We humans have the ability to introspect our own thinking processes, a seemingly unique skill among life on Earth. As consequence, a human brain is able to understand its own flaws--its systematic errors, its biases--and apply second-order corrections to them. (p40-42)

It is at this point that I would generally like to present an opposing viewpoint. However, I must say that this first introductory sequence is not very controversial! Educational, yes, but not controversial. If anyone can provide a link or citation to one or more decent non-strawman arguments which oppose any of the ideas of this introduction and first sequence, please do so in the comments. I certainly encourage awarding karma to anyone who can do a reasonable job steel-manning an opposing viewpoint.

This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Sequence B: Fake Beliefs (p43-77). The discussion will go live on Wednesday, 6 May 2015 at or around 6pm PDT, right here on the discussion forum of LessWrong.


[-][anonymous]7y 13

Re-reading this sequence resolved for me a long-standing confusion I had. In my day job I do a fair amount of project planning, and there is a wise old adage that I'm sure everyone reading here has heard at least once. It even has a name, Murphy's Law: "Anything that can possibly go wrong, will go wrong."

Anyone who has ever experienced the frustration of managing a real-world project knows the truth of this statement. It is not a literal truth -- Murphy's Law is not a physical law, and it is not actually true that every single failure mode is encountered. But you may plan a project and identify 5 different likely failures, expecting to encounter 1 or maybe 2. In reality you actually hit 3 of the ones you identified, plus a 4th that you didn't know about.

The source of my confusion is that the real world is not intentional. Physics lacks the capability to seek out ways to frustrate your attempts at good planning. So how could the universe actively seek to frustrate project planners? If I calculate the probability of a failure mode from fundamental analysis, why does that probability not match the observed reality?

The answer, of course, is the planning fallacy. A much less wise-sounding, but truer reformulation of Murphy's Law would be: "The number of things which could actually go wrong will exceed the number you will think of, with higher probabilities than you assign." Since your capability to plan is bounded, and since we all suffer from the availability heuristic in constructing our plans and fail to notice dependent probabilities, this is true.
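The reformulation is easy to check with expected values. In the spirit of the comment's numbers (all hypothetical): the planner lists 5 failure modes at 20% each, while reality holds 8 modes at higher probabilities:

```python
# Hypothetical numbers echoing the comment above, not real project data.
planned_probs = [0.2] * 5               # what the plan accounts for
actual_probs = [0.35] * 5 + [0.30] * 3  # reality: more modes, higher odds

def expected_failures(probs):
    # Expected number of failures = sum of the individual probabilities
    # (linearity of expectation; independence is not required).
    return sum(probs)

print(expected_failures(planned_probs))  # ≈ 1.0: "expect 1, maybe 2"
print(expected_failures(actual_probs))   # ≈ 2.65: closer to "hit 3 or 4"
```

No malevolent universe is needed; undercounting the modes and underweighting each one both push the expectation in the same direction.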

[-][anonymous]7y 9

Good insight!! Once the older boy I nanny for mentioned Murphy's Law to me on the way home from school. I said, "That's a silly law. Let's play a little game called Disproving Murphy's Law." So we all did:

- Hey, that car didn't smash us!
- I didn't twist my ankle in P.E. when we ran on bumpy grass!
- You didn't give me carrots in my lunch today!
- A sniper didn't just shoot us from behind that tree!
- We didn't have bad weather!

It's a nice (and sometimes hilarious) game, kind of like the reverse of that multi-use psychology tactic where people are supposed to think of things they're thankful for.

"The number of things which could actually go wrong will exceed the number you will think of, with higher probabilities than you assign."

I'm going to share your revised version with them tomorrow :)

Does Murphy's law necessarily carry negative connotations? Because when I hear people invoking it, they mean Sod's law or "mocked by fate": of all possible outcomes, the worst will happen. At some point I believed that originally Murphy's law didn't carry negative (depressing) connotations, and only meant that, due to human error, the first trial of a system will be unsuccessful. But reading its Wikipedia page I'm not so sure. It would make sense coming from engineering. Compare with compiling program code written from scratch: if it's big enough, it's guaranteed not to run, with some stupid error due to a typo or type mismatch or missed array index or whatever. Programmers don't grieve that fate is unfair to them, because errors are supposed to happen, and Murphy's law (as I thought) is just an acknowledgement of this phenomenon.

The fact that people believe in Sod's law or "mocked by fate" is just confirmation bias, BTW. When something goes wrong, it causes emotional distress that people tend to ponder for a period of time and lose mental energy on. When something goes right, it's just expected as normal, not even reflected upon. So people tend to remember bad things better. This is related to the availability heuristic: bad things are easily available, good things aren't, because they are easily forgotten and not considered important.

“Birds always dung on my car!” cries Alice, and the image of bird feces from 2 years ago is readily available. It doesn't occur to Alice that at all other times except today and that day 2 years ago, birds didn't crap on her car at all. This is relevant for CBT, as confirmation bias towards negative events or features leads to various cognitive distortions, e.g. overgeneralization (birds don't always crap on your car) or mental filtering (what about all the days when birds didn't crap on your car?).

Yeah, that kind of reminds me of this relevant SMBC. I've heard Murphy's Law also described as "if it can happen, it will." I feel like this is an oversimplification, because obviously not everything that has the potential to occur actually does, but it feels less strictly negative than other connotations.

There are a lot of formulations of Murphy's Law.

One of them is "The perversity of the Universe tends towards a maximum", also known as Finagle's Law.

Rereading "Illusion of transparency" and "Expecting short inferential distances", it finally dawned on me that the concept cuts both ways: When in the role of explainers we need to be careful to explain ourselves fully and to account for the inferential gaps between ourselves and those to whom we are talking. On the flip side (and this is the part that I didn't fully appreciate before), when in the role of listeners we sometimes read / hear something from an expert and think, "wait, there are a few gaps in that argument", or even "that's just ridiculous". Thoughts like these should raise an "inferential distance!" flag in our mind. It's much more likely that the gaps are due to inferential distance rather than to any actual flaw in the argument.

You're right, and inferential distance is a massively underestimated problem in communication and teaching. If a professional physicist explains how quantum physics really works but you don't understand a thing, you just conclude that they are not a good teacher -- when in fact the inferential distance is too big.

But if your friend tells you that libertarianism, Marxism, radical feminism, Bitcoin, or even Bayesianism is the best idea ever, but can't explain how and gets mad at you, most likely they don't understand their idea themselves. It's very possible to have only vague understanding of the position you espouse, it's easy to deceive yourself that you really do understand the idea (especially when your understanding is superficial, see Dunning–Kruger effect). It's even easier if you support an idea for tribal/political reasons or you don't have a good epistemology.

I don't know about you, but for me, 99% of the time I don't understand a thing behind somebody's reasoning, it's because they're bullshitting me. Almost always it's not intentional; they deceived themselves too. They think they know their ideas, they know what they're talking about, but they actually don't.

The point of the vast majority of RAZ articles is to explain that most of the time people's beliefs are not even inaccurate, they're meaningless: they don't anticipate anything, they use words incorrectly, but from the inside it feels like their beliefs make sense. I guess the logical positivists hammered this point much harder than Yudkowsky does, because they constantly repeated that most things people talk about are meaningless, not worth talking about to begin with, because they cannot ultimately be reduced to sense-impressions.

That's what the beliefs of most people are: meaningless sequences of symbols or sounds, not having any connection to observable phenomena. When your average Trotskyist says something like “According to dialectics, communism is achievable only through proletarian revolution”, they have no idea what dialectics, communism, or proletarian revolution mean at all. But from the inside it feels like they know, at least in a general sense. Maybe some Marxist knows exactly what communism or proletarian revolution is, but most of them don't. Mind you, this extends to almost everyone. People slip into bad epistemology by default, because correct epistemology is counter-intuitive; otherwise we wouldn't need LessWrong.

But maybe you're right. Every time somebody tells you something you don't get, ask them to cross this inferential distance: to go back as many steps as possible and carefully explain their concepts until you can see for yourself that the idea is correct. If they can't do that, you may then be fairly certain they're bullshitting.

These summaries are fantastic. I have several friends who don't have the time / patience to make their way through 1800 pages. But they can make their way through a series of short executive summaries, and if they find something particularly interesting they can look up that particular article. It's also useful for me to be able to quickly look over a summary before and/or after I've read the whole article. I'd highly recommend that these summaries or something like them be included in future versions of the book.

Posts 8 and 9 were really beneficial to me. The illusion of transparency is something that has caused me great distress in the past, and it was really nice to have an explanation for why that was. I always valued my intelligence, and I used to think that when people didn't agree with things that seemed obvious to me it was a sign that they were stupid. I had come across this idea as "people have different experiences", and when I saw things through that lens it helped me to be kinder and less arrogant. These posts really crystallized that idea and made me go "oh, that's why that is".

'Feeling Rational' is a post that I feel some disagreement with. Eliezer says that feelings and rationality aren't "orthogonal", which is true to some extent, but our feelings should not be directly proportional to our world model. Our feelings should in no way affect our beliefs, but just because something bad has happened to us doesn't mean we should be unhappy. I think the relationship is more complex; we must learn to be resilient to bad things. We must learn not to be angry in the moment, lest we make rash choices. In our spare time, we can be happy all the same, listening to music and watching funny videos. And sometimes we must learn to remove certain facts from our mind, so they are only called upon when necessary, else they will severely depress us.

[-][anonymous]7y 9

One problem is that going down that route in the extreme leads to sociopathy or psychopathy. I think we see a foreshadowing of Yudkowsky's meta-ethics here, which oversimplified to this discussion would be: what is good is what you care about. If you were to suppress feeling bad about bad things, then eventually you'll stop caring about making sure that bad things don't happen. Alternatively, if you embrace your emotional response then you can start caring more. I think it is very important to not lose sight of the instrumental goal - making the outcomes you care about come true. The strength of your emotional bond to the goal is what determines how stable that goal is.

That said, I myself have a much more nuanced view of the interplay between emotion and thinking. Emotions drive our thinking by expanding, contracting, or reshaping the space of possibilities our mind's planning engines consider. To a first approximation, emotions constitute modes of thinking, and sometimes it is useful to engage modes other than the privileged default of serenity. To learn more about this perspective on the false schism of emotion vs rational thinking, I recommend reading Minsky's The Emotion Machine, or his prior work The Society of Mind.

I felt a similar pang of dissent when I read Feeling Rational for the first time, and I couldn't quite put my finger on it. After reading your comment and thinking on it, I think I'll formulate it thus: "A distinction should be made between seeking comfort and seeking comfort in false beliefs. The former is acceptable and the latter is unacceptable."

For me, some of Eliezer's words at the 'How Should Rationalists Approach Death' panel at Skepticon 4 were brought to mind:

I guess the most important message I might have to give on the topic of how to deal with death is: It's alright to be angry; it's alright not to be comforted. For me, the prototype of this experience will always be the funeral of my little brother, who died at the age of 19. I went to his funeral; I was the only open atheist there, I believe. And, for me, there was no confusion in that experience. Pure anger. Pure wishing-it-didn't-happen. No need to seek comfort for it. And that may not have been a pleasant experience, but I think that it was, in a fundamental sense, more healthy, for being less conflicted, than what I saw on the faces of my relatives, and my parents, as they tried to attribute it to God.

People often seek comfort in false beliefs when faced with the death of themselves or others, and the implication is that this is very harmful when you take cryonics seriously like I do. Since this was the context of the panel, I can see how publicly encouraging people to avoid those false beliefs is instrumentally rational. But those words didn't completely mesh with me, because it felt like he was encouraging people to suffer more than they needed to if they didn't have false beliefs. It didn't feel like the difference between seeking comfort and believing false but comforting things had been adequately distinguished.

It's entirely possible that I just harmfully misinterpreted his words, but I should say that I had some difficulty when my mother died because I felt like it was Bad for me not to realize and feel, and to not want to realize and feel, on a gut level, how much value was destroyed when she died. That not feeling it, and not wanting to feel it, was simply inaccurate, and that I was making a mistake that I should feel bad about because my emotions were inaccurate; wrong; Bad. But there's a difference between the belief and the affect. There's a sense in which System 1 doesn't even know that my mother is dead. I haven't let it, because it does no good. The conventional wisdom is that this is suppressing Feelings That Need To Be Felt, but I don't feel generally stressed, like I'm holding something in that I need to get out. I'm never going to be okay with it in the sense that it will be congruent with my values. And viscerally feeling it just hurts. System 2 can have accurate verbal beliefs about death and cryonics without System 1's unintended side effect of thrashing around trying to tell me that something is very wrong with one of my parent-child relationship bonds. System 2 can have accurate beliefs without System 1 tracking their emotional implications. Feeling death on a gut level is only instrumentally valuable if you have false beliefs about death and longevity is a term in your utility function. Otherwise, it's just torture.

But it's also pretty clear to me that Eliezer doesn't want people to suffer, so I want to be charitable and consider how he might come to the sorts of conclusions and say the sorts of things that he has. My hypothesis is that, for Eliezer, cognitive consonance is its own comfort. The idea that cognitive dissonance is viscerally upsetting for him fits my model well. For him, perceiving his beliefs and emotions as congruent is as comforting as having true beliefs and regulating my emotions out of existence is for me. Later on, in speech and writing, the comfort of cognitive consonance and the instrumental value of viscerally feeling the evil of death when your beliefs about death are false are conflated into one larger and less precise point.

And sometimes we must learn to remove certain facts from our mind, so they are only called upon when necessary, else they will severely depress us.

Furthermore, we do it anyway.

Yes, your summary line at the top feels like exactly the distinction I was trying to make. Rereading my comment, I think I was attacking a straw man: after admitting that EY had said rationality and feelings were not orthogonal, I was arguing as though he'd said they were the opposite of that -- directly related, parallel, etc.

You could explain Eliezer's non-standard approach in terms of his being fundamentally different to many in that regard; or you could explain it as depending more heavily on his situation: he was a highly intelligent atheist surrounded by confused religionists. Keeping confusion out of your pain is surely a thing one would hold up as valuable in that situation.

"A distinction should be made between seeking comfort and seeking comfort in false beliefs. The former is acceptable and the latter is unacceptable."

A fine quote.

I agree with what you said. Moreover, I think mentioning CBT is relevant, because CBT teaches you what emotions are and what to do with them. CBT claims that cognitions (beliefs, thoughts, attitudes, assumptions etc) determine emotions, feelings and moods. E.g. if you think that you've lost something invaluable, you'll feel sad. If you think you were treated unfairly by others or the world, you'll feel angry. If you think that others or the world is supposed to be or do something, but doesn't, you'll feel frustrated.

Some feelings are more powerful and destructive than others. If you feel guilt, shame, anger, frustration, inferiority or depression for days, months, years, your life becomes endless pain. What's the use of living in agony, if it can't help anyone? Even more, what if this feeling is the result of an irrational (incorrect or meaningless) belief to begin with?

On the other hand, once you already feel a certain way, there's no use in scolding yourself for it, because that's counter-productive. So what if you realize that your feeling is the result of an irrational belief? What do you mean you're not “supposed” to feel that way? By cursing yourself you'll only feel more guilt and self-hate. You have to accept yourself as who you are, with whatever feelings you ever had or have.

It's ok to feel a certain way. Certainly trying to quench feelings at all costs is irrational. But there are signs that feelings can be destructive and make you worse off. And getting rid of feelings that incapacitate you (like depression, anxiety or uncontrollable anger) is not always easy; sometimes you need a whole system. CBT might help you (I recommend reading David Burns's Feeling Good). According to CBT, cognitions determine your emotions and behavior, and behavior reinforces certain cognitions, so your task is to change your cognitions from irrational to more realistic, and sometimes change your behavior too.

On the other hand, feelings are not facts. If you feel angry, it doesn't mean that somebody indeed treated you unfairly. Anger is the result of thinking that somebody or something is unfair, not evidence for it. People commit the fallacy of using their own feelings as evidence, which David Burns calls emotional reasoning, one of the cognitive distortions.

[-][anonymous]7y 6

Are there any systematic flaws in your own thinking that you have observed upon reflection, other than the ones presented in this sequence, and what are they? What technique did you use to correct your thinking from that point onward?

[-][anonymous]7y 6

Can you think of a way to test for yourself the information presented in this sequence?

[-][anonymous]7y 6

How can the content of this sequence be made practical? Or, how do you plan to apply it in your day to day life?

I have this compulsion where I obsessively reread things I've written after sending them, but I didn't have any insight about why I do this until I read the essay about the illusion of transparency.

Immersing myself in my perspective via obsessive and uncareful rereading serves (I think, probably) to assuage my anxieties about any lack of coherence and clarity in my writing. By blinding myself to my writing's ambiguities I can feel more assured of its quality, and also feel better about my abilities as a writer. So I think this behavior is caused, at least partially, by an unintentionally acquired habit of leaning into this bias.

I hope to notice myself engaging in this behavior and treat it as a warning that what I've written is probably quite ambiguous.

[-][anonymous]7y 2

Hrm.. I also have that habit, but in my case I often feel I am going back to see if my intention was clear, since some distance lets me see it with relatively fresh eyes. I'm a little confused that you think it serves the opposite purpose for you. What's your frame of mind when re-reading what you've written? Do you not notice sources of error after there has been some time separation?

A critical and focused rereading is my goal when I start; however my focus is not long lasting, and the process inevitably devolves into mindless retreading of whatever trail of thought I wish I was communicating, with little effort devoted to verifying that this is how I should realistically expect to be interpreted. It is repetition past this point which I suspect is motivated primarily by a desire to feel self-assured. I was wrong to believe that I could treat this behavior as evidence of ambiguity when its cause is likely unrelated to the specific content of my writing.

Do you not notice sources of error after there has been some time separation?

This is the one thing I've found which reliably helps.

By the way, thanks a lot for the work you've put into this group!

That's an interesting perspective! I also have a habit of repeatedly rereading whatever I wrote, and the idea that it's to assuage anxieties about lack of coherence never came to me. I always thought that I reread my writing simply because it gives strange satisfaction: like the favorite book of an author being their own.

[-][anonymous]7y 2

Something practical is something that helps you achieve your ends, right? Well, I've very recently had the thought that the vast majority of the population shares the same two terminal values, or terminal virtues, of happiness (some combo of pleasure + satisfaction) and goodness (concern for the happiness of others) in varying ratios, whether they are conscious of this or not. (I'll probably write more and share this idea here soon.)

So in my case, the biggest practical value I got out of reading the content of these sequences was an increase in happiness; I felt personally satisfied after reading them. They clearly articulated a lot of intuitions I had in the past. I liked the ideas. They made sense. Basically, this sounds really cheesy, but they brought me joy, which I consider an end in itself.

I'm about 40% through the ebook now, and I'm learning tons and tons of new facts about evolution and neurology and hearing many new ideas that will probably change my life. I don't think that this section itself will really change my life though, only because I feel like I knew these concepts intuitively and already applied them pretty well on a day to day basis. That said, having read this section, I can now express these ideas much more clearly to others, so people who don't naturally think like this can learn to, too :)

Actually, upon reading the summaries you wrote (big thanks for that) there was one bias that I didn't intuitively understand or catch very well in my day to day thinking, and that's #5, availability. I'm going to re-read that one and see if I can think of any good examples to share.

[-][anonymous]7y 6

What motivates you to be more rational in your everyday life?

I think of irrationality as being stuck in a pocket universe. The real world is the way it is, but my biases/blind spots/false beliefs exile me to a smaller world, disconnected from the real one, and I want to correct my errors and return home.

In fact, it's even worse than a pocket universe, because my actions take place in the real world. So every error can have consequences (imagine walking around blind to trees, and how often you'd bonk your head).

[-][anonymous]7y 0

Wait, a pocket universe actually sounds like a cozy, safe place to hide from the big, mean world. About actions: this really resonates with my empoweredness hypothesis. If people feel their future depends on their own choices, i.e. they are not powerless, they are motivated to be rational. If people feel their future depends on powers that be who use them as puppets, and that their own choices don't matter, then they are not.

I've noticed that a lot of my desire to be rational is social. I was raised as the local "smart kid" and still identify with that label. I get all the stuff about how rationality should be approached like "I have this thing I care about, and therefore I become rational to protect it," but I just don't feel that way. I'm not sure how I feel about that.

Of the three reasons to be rational that are described, I'm most motivated by the moral reason. This is probably because of the aforementioned identity. I feel very offended by anything I perceive as "irrational" in others, kinda like it's an attack on my tribe. This has negative effects on my social life and makes me very arrogant toward others. Does anybody have any advice for that?

[-][anonymous]7y 4

Oooh, I have advice! I've gotten so much from this site in my first week or two here, and this is my first chance to potentially help someone else :) If you think MBTI personality typing has no value, don't bother with this. It sounds silly, but finding out about Myers-Briggs was actually life-changing for me. Knowing someone's type can help you develop realistic expectations for their behavior, communicate much more effectively, and empathize. Other people are no longer mysteries!

Idk how familiar you are with MBTI, but there are 4 strict dichotomies, and of course some people fall on the borderline for some of them, but one of the more interesting to me is Thinking (not to be confused with intelligence) vs. Feeling (not to be confused with emotion). This gives a thorough explanation, which should help you understand "irrational" people a little better. And once you understand them, you'll be less likely to be offended by them and more likely to get along.

If there's anyone in particular that this is a struggle with, I'd recommend trying to figure out their full personality and reading the profile on the personality page here. When I was little, my strong-willed, very rational ISTP personality conflicted with my mother's ESFJ type and led to many mutual frustrations; we just couldn't relate to each other. Maybe you have some ESFJ types in your life. These are their weaknesses:

* May be unable to correctly judge what really is for the best

* May become spiteful and extremely intractable in the face of clear, logical reasoning

* May be unable to shrug off feelings that others are not "good people"

* May be unable to acknowledge anything that goes against their certainty about the "correct" or "right" way to do things

* May attribute their own problems to arbitrary and unprovable notions about the way people "ought" to behave

* May be at a loss when confronted with situations that require basic technical expertise or clear thinking

* May be oblivious to all but their own viewpoint, valuing their own certainties to the exclusion of others

* May be unable to understand verbal logic, and quickly cut off others' explanations

* May be falsely certain of the true needs and feelings of others

* May be extremely vulnerable to superstitions, religious cults, and media manipulation

* May react too quickly and too emotionally in a situation better dealt with in a more pragmatic fashion

They have plenty of very nice strengths too, but seriously, with all these natural weaknesses, it's not their fault they're less rational than you are. That doesn't mean they can't improve, but they're starting from a totally different playing field. If you like your rationality (I know I do!) and think your type is generally better than most others, thank the universe that when it comes to decision making, you were born looking at logic and consistency instead of at people and special circumstances...but don't look down on people who weren't so lucky? Imagine what it's like to be them?

Since discovering MBTI and reading up on my mom's personality type, I've been able to understand and communicate with her soo much better. A story, to illustrate, from back in high school when I first discovered and started to use MBTI:

I prefer driving to passengering. The last time I had driven with my mom, she was napping and woke up to find me driving about 80 in a 65. She was horrified. The next time I offered to drive, she said no. What I wanted to say, which never would have worked: I was just keeping up with traffic, if I get a speeding ticket, it's my own fault and I'll accept the consequences, there wasn't much real danger driving 80 on a clear night with good weather. What I actually said, which did work: Awww, c'mon Mom, let me redeem myself!

Another interesting MBTI fact: Personality type is strongly correlated with religiosity. When I de-converted from Christianity a few months back, I had a conversation with my mom where I predicted that my "Thinking" and "Perceiving" preferences made me far less likely to remain in the faith than she was with "Feeling" and "Judging" preferences. Then I found a study that confirmed it.

I knew that I couldn't convince her that Christianity was wrong by pointing to inconsistencies or evidence, so rather than going on the offensive, I took the defensive side and played on her sympathies, glumly saying, "If Christianity is true, it's lucky for you that your personality type is the second most likely to believe. Unlucky for me that mine is so likely to abandon the faith. People talk about open-mindedness, curiosity, and logic as if they are good qualities, but I guess if Christianity is true, in the grand scheme of eternity we thinker-perceivers really got the short end of the stick, didn't we? People always talk about how it looks like God plays favorites based on culture, but this personality type favoritism seems just as harsh. I guess you should be extra thankful for yours. No matter how hard I prayed for a stronger faith, no matter how much I wanted it, it just didn't happen."

It was that easy! My mom loves me as much as ever, and I love her as much as anyone. :) Our relationship didn't even really take a hit, except for the unfortunate fact that she happens to think if I died today, I would go to hell. But she's convinced this is just a phase, that God will bring me back to him eventually, because in the words of Proverbs 22:6, "Train up a child in the way he should go: and when he is old, he will not depart from it." (A passage you could use as one more piece of evidence against the Bible if you were discussing the matter with someone who looked at logic and consistency rather than people and special circumstances.)

So yeah, I hope this helps at least a little bit! Just go to the personality page and read through the profiles to get a better understanding of how other people operate. It's kind of fun!

Thank you! That's the first in-depth presentation of someone actually benefiting from MBTI that I've ever seen, and it's really interesting. I'll mull over it. I guess the main thing to keep in mind is that other people are different from me.

I do not perceive rationality or intelligence as a thing; it is an invisible implied assumption. I notice irrationality and stupidity as things.

I notice irrationality or stupidity as a mistake. Then I try to correct people, but they resist. Then I get frustrated and give up; and if it happens repeatedly, in my head I assign the person the label "an irrational person, not useful for debates that require thinking".

Then I reflect on a more meta level and think: "if this is how other people seem to me, there are probably people on a level above me, and this is how I seem to them". But it's not just a social worry that these people may have a low opinion of me; it's the chilling realization that yes, they are completely right. That I behave like an idiot in many ways that are mostly invisible to me but have a huge cumulative impact on my quality of life. Then I realize I only have one life, and I am not getting as much out of it as I could.

I do not really need Eliezer to show me that the scale of intelligence goes beyond Einstein. I am very clearly aware that I am far from the Einstein level, and I am smart enough to realize that quoting web articles about relativity would not bring me any closer. (And Einstein is probably still far from the "human potential" ceiling. And even if he were literally at the top, he didn't have the opportunities we have now, such as the internet, so it is possible to do much more today than what Einstein did in his era.)

I see the huge difference between where I am now and where I could be if I were just more strategic. Actually, I have trouble understanding how other people don't see this. I guess most people have an "ugh field" around it; they feel pain at the idea that they are not living up to their potential, so they avoid the thought. (Makes sense, since the circumstances of receiving that information usually feel bad.) I do not know why I don't feel this so strongly. Probably it is a failure of some social-skills brain module, which should remind me that "admitting to not being perfect" is horribly low-status (unless it is performed as some form of fake humility, which requires staying sufficiently non-specific and avoiding your really painful points), and that I should try to hide this thought from everyone, starting with myself.

But I'd rather win.