Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these "benefits" of "x-rationality"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines[1], I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact as far as I know the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

One factor we have to come back to once again is akrasia[2]. I find akrasia in myself and others to be the most important limiting factor to our success. Think of that phrase "limiting factor" formally, the way you'd think of the limiting reagent in chemistry. When there's a limiting reagent, it doesn't matter how much more of the other reagents you add; the reaction's not going to make any more product. Rational decisions are practically useless without the willpower to carry them out. If our limiting reagent is willpower and not rationality, throwing truckloads of rationality into our brains isn't going to increase success very much.
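To make the limiting-reagent analogy concrete, here is a toy numerical sketch. The min() form and all the numbers are assumptions for illustration, not a real model of success:

```python
# Toy "limiting reagent" model: practical success is capped by the
# scarcer of two inputs, so adding more of the abundant one is wasted.
def practical_success(rationality, willpower):
    # Illustrative assumption: output is bottlenecked by the minimum.
    return min(rationality, willpower)

base = practical_success(rationality=5, willpower=2)
more_rational = practical_success(rationality=50, willpower=2)   # no gain
more_willpower = practical_success(rationality=5, willpower=4)   # gain

print(base, more_rational, more_willpower)  # → 2 2 4
```

Tenfold more rationality leaves the result stuck at 2; a little more willpower actually moves it.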

This is a very large part of the story, but not the whole story. If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

And - when I was young, I used to watch The Journey of Allen Strange on Nickelodeon. It was a children's show about this alien who came to Earth and lived with these kids. I remember one scene where Allen the Alien was watching the kids play pool. "That's amazing," Allen told them. "I could never calculate differential equations in my head that quickly." The kids had to convince him that "it's in the arm, not the head" - that even though the movement of the balls is governed by differential equations, humans don't actually calculate the equations each time they play. They just move their arm in a way that feels right. If Allen had been smarter, he could have explained that the kids were doing some very impressive mathematics on a subconscious level that produced their arm's perception of "feeling right". But the kids' point still stands; even though in theory explicit mathematics will produce better results than eyeballing it, in practice you can't become a good pool player just by studying calculus.

A lot of human rationality follows the same pattern. Isaac Newton is frequently named as a guy who knew no formal theories of science or rationality, who was hopelessly irrational in his philosophical beliefs and his personal life, but who is still widely and justifiably considered the greatest scientist who ever lived. Would Newton have gone even further if he'd known Bayes' theorem? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

Yes, yes, beisutsukai should be able to develop quantum gravity in a month and so on. But until someone on Less Wrong actually goes and does it, that story sounds a lot like when Alfred Korzybski claimed that World War Two could have been prevented if everyone had just used more General Semantics.

And then there's just plain noise. Your success in the world depends on things ranging from your hairstyle to your height to your social skills to your IQ score to cognitive constructs psychologists don't even have names for yet. X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with "A good x-rationalist will know if fashion is important, and study fashion." A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook. And having a general rational outlook, as I mentioned before, is limited in its effectiveness by poor application and akrasia.

I no longer believe mastering all these Overcoming Bias and Less Wrong techniques will turn me into Anasûrimbor Kellhus or John Galt. I no longer even believe mastering all these Overcoming Bias techniques will turn me into Eliezer Yudkowsky (who, as his writings from 2001 indicate, had developed his characteristic level of awesomeness before he became interested in x-rationality at all)[3]. I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1. Maybe 0.2 in some businesses like finance, but people in finance tend to know this and use specially developed x-rationalist techniques on the job already without making it a lifestyle commitment. I think it was primarily a Happy Death Spiral around how wonderfully super-awesome x-rationality was that made me once think otherwise.

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...



...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

For the next time period - a week, a month, whatever - take special note of every decision you make. By "decision", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought. Make a tick mark, literal or mental, so you can count how many of these there are.

Then note whether you make that decision rationally. If yes, also record whether you made that decision x-rationally. I don't just mean you spent a brief second thinking about whether any biases might have affected your choice. I mean one where you think there's a serious (let's arbitrarily say 33%) chance that using x-rationality instead of normal rationality actually changed the result of your decision.

Finally, note whether, once you came to the rational conclusion, you actually followed it. This is not a trivial matter. For example, before writing this blog post I wondered briefly whether I should use the time studying instead, used normal (but not x-) rationality to determine that yes, I should, and then proceeded to write this anyway. And if you get that far, note whether your x-rational decisions tend to turn out particularly well.
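If you'd rather keep score on a computer than with tick marks, the tally could be as simple as the following sketch. The field names and structure are my own invention; the post itself only asks for literal or mental ticks:

```python
# A minimal tally sheet for the proposed experiment: for each conscious
# decision, record whether it was rational, x-rational, and followed.
from collections import Counter

log = []  # one entry per conscious decision

def record(rational=False, x_rational=False, followed=False):
    """Log one decision and which bars it cleared."""
    log.append({"rational": rational, "x_rational": x_rational,
                "followed": followed})

def summary():
    """Count decisions and how many cleared each bar."""
    counts = Counter()
    counts["decisions"] = len(log)
    for decision in log:
        for key, value in decision.items():
            counts[key] += value  # booleans sum as 0/1
    return dict(counts)

# e.g. the post's own example: rationally deciding to study, then
# writing the blog post anyway
record(rational=True, followed=False)
record()  # a snap decision made with no explicit rationality at all
```

At the end of the week, summary() gives the counts to compare against other people's.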

This experiment seems easy to rig[4]; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. Not pretty at all. Not only do I make fewer conscious decisions than I thought, but the ones I do make I rarely apply even the slightest modicum of rationality to, and the ones I apply rationality to it's practically never x-rationality, and when I do apply everything I've got I don't seem to follow those decisions too consistently.

I'm not so great a rationalist anyway, and I may be especially bad at this. So I'm interested in hearing how different your results are. Just don't rig it. If you find yourself using x-rationality twenty times more often than you were when you weren't performing the experiment, you're rigging it, consciously or otherwise[5].

Eliezer writes:

The novice goes astray and says, "The Art failed me."
The master goes astray and says, "I failed my Art."

Yet one way to fail your Art is to expect more of it than it can deliver. No matter how good a swimmer you are, you will not be able to cross the Pacific. This is not to say crossing the Pacific is impossible. It just means it will require a different sort of thinking than the one you've been using thus far. Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.


1: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

2: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to the "kicking" that complements our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

3: This is actually an important point. I think there are probably quite a few smart, successful people who develop an interest in x-rationality, but I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

4: This is a terribly controlled experiment, and the only way its data can be meaningfully interpreted at all is through what one of my professors called the "ocular trauma test" - when the data hits you between the eyes. If people claim they always follow their rational decisions, I think I will be more likely to interpret it as lack of enough cognitive self-consciousness to notice when they're doing something irrational than an honest lack of irrationality.

5: In which case it will have ceased to be an experiment and become a technique instead. I've noticed this happening a lot over the past few days, and I may continue doing it.

281 comments

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

Seems to me that most of us make predictably dumb decisions in quite a variety of contexts, and that by becoming extra bonus sane (more sane/rational than your average “intelligent science-literate person without formal rationalist training”), we really should be able to do better.

Some examples of the “predictably dumb decisions” that an art of rationality should let us improve on:

  • Dale Carnegie says (correctly, AFAIK) that most of us try to persuade others by explaining the benefits from our point of view (“I want you to play basketball with me because I don’t have enough people to play basketball with”), even though it works better to explain the benefits from their points of view. Matches my experiences, and matches also man
...

I don't think you need the art of rationality much for that stuff. I think just being reminded is almost as good, if not better. Who do you think would do better on them: someone who read all of LW/OB except this post, or someone who read this post only? Now consider that reading all of LW/OB would take at least 256 times longer.

That was only a sample. Should we really prefer keeping them all in mind over learning the pattern behind them?

Learning about rationality won't necessarily help you realize where you're being irrational. If you've got a general method for doing that, I'd be interested, but I don't think it's been discussed much on this blog.

Interesting. But searching a bit, this applies to business. Looks nice on a job interview. Don't try this on a date! (no lukeprog allowed) Thanks for the advice! For completeness, I'd assume this is what you meant: or at least gives it a deeper point.

Don't try this on a date! (no lukeprog allowed)

Why not? Lukeprog's mistake, assuming you're talking about what I think you're talking about, seems to have been quite the opposite of trying to explain the benefits of an option from the other person's point of view:

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

I imagine he'd have had better luck, or at least not become the butt of quite so many relationship jokes on LW, if he'd gone with something like "you deserve someone who appreciates you better". Notice that from Alice's perspective, this describes exactly the same situation -- but in terms of what it means to her.

Nah. Just meant that considering his posts on relationships, he might try that, so therefore, no lukeprog allowed. In truth I was just trying to use reverse psychology to get him to do it and hopefully post some results. And this is where this silliness ends before I get more downvotes.

Imagine a world where the only way to become really rich is to win the lottery (and everybody is risk averse, or at least risk neutral). With an expected return of less than $1 per $1 spent on tickets, rational people don't buy lottery tickets. Only irrational people do that. As a result, all the really rich people in this world must be irrational.

In other words, it is possible to have situations where being rational increases your expected performance, but at the same time reduces your chances of being a super achiever. Thus, the claim that "rationalists should win" is not necessarily true, even in theory, if "winning" is taken to mean being among the top performers. A more accurate statement would be, "In a world with both rational and irrational agents, the rational agents should perform better on average than the population average."
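The lottery world above can be made concrete with a little expected-value arithmetic. The ticket price, jackpot, and win probability below are invented numbers, chosen only so that a ticket's expected value is less than its price:

```python
# Toy lottery world: buying tickets lowers expected wealth, yet is the
# only path to ever being "really rich". All parameters are made up.
TICKET_PRICE = 1.0
JACKPOT = 1_000_000.0
P_WIN = 1e-7  # expected value per ticket = $0.10, less than the price

def expected_wealth(start, tickets):
    """Expected final wealth after buying `tickets` tickets."""
    return start - tickets * TICKET_PRICE + tickets * P_WIN * JACKPOT

def p_rich(tickets):
    """Probability of hitting the jackpot at least once."""
    return 1 - (1 - P_WIN) ** tickets

rational = expected_wealth(100, 0)      # abstains: expected wealth 100
irrational = expected_wealth(100, 100)  # buys 100 tickets: expected 10
# The rational agent does better on average, but can never end up rich:
# p_rich(0) is exactly 0, while p_rich(100) is small but positive.
```

So "rationalists win on average" and "every big winner is irrational" are simultaneously true in this world.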

There's an extent to which we live in such a world. Many people believe you can achieve your wildest dreams if you only try hard enough, because by golly, all those people on the TV did it!


But many poor/middle-class people also believe that they can never become rich (except for the lottery) because the only ways to become rich are crime, fraud, or inheritance. And this leads them to underestimate the value of hard work, education, and risk-taking.

The median rationalist will perform better than these cynics. But his average wealth will also be higher, assuming he accurately observes his chances at becoming successful.

From what I can see, crime and fraud are harder to get significant success with than 'real' work. Education and risk-taking are also rather vital.
It can be rational to accept the responsibility of high risk/high reward behavior, on specific occasions and under specific circumstances. The trick is recognizing those occasions and circumstances and also recognizing when your mind is fooling you into believing "THIS TIME IS DIFFERENT". A rational agent is Warren Buffett. An irrational agent is Ralph Kramden. Both accept high risk/high reward situations. One is rational about that responsibility. The other is not. Also, in a world of both rational and irrational agents, in a world where the rational agent must depend upon the irrational, it is sometimes rational to think irrationally!

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

I think the truth is non-symmetrical: rationalism is the art of not failing, of not being stupid. I agree with you that "rationalists should win big" is not true in the sense Eliezer claims. However, rationalists should be generally above average by virtue of never failing big, never losing too much, e.g. not buying every vitamin at the health food store, not in cults, not bemoaning ancient relationships, etc.

Very good point!

I'm not sure if it was your intent to point this out by contrast, but I would like to point out that a reasonable art of "kicking" would not rely on you making conscious decisions, let alone explicitly rational ones. Rather, it would rely on you ensuring that your subconscious has been freed from sources of bias ahead of time, and is therefore able to safely leap to conclusions in its usual fashion. An art that requires you to think at the time things are actually happening is not much of an art.

Case in point: when reading "Stuck In The Middle With Bruce", I became aware of a subconsciously self-sabotaging behavior I'd done recently. So I "kicked" it out by crosslinking the behavior with its goal-satisfaction state. It would be crazy to wait until the next occasion for that behavior to strike, and then try to reason my way around it, when I can just fix the bloody thing in the first place. (Interestingly, I mentioned the story to my wife, and described how it related to my own behavior... and she thought of a different sort of self-sabotage she was doing, and applied the same mindhack. So, as of now, I'd say that story was one of the top 5 most ...


I voted this up, but I'm replying because I think it's a critical point.

Our brains are NOT designed to make conscious decisions about every thing that crosses our path. Trying to do that is like trying to walk everywhere instead of driving: it's technically possible, but it will take you forever and will be exhausting.

Our brains seem to work more like this: our brains process whatever it is we're doing at the time, and then feed that processed data into our subconscious for use later. Sure it jumps in every once in a while for something important, but generally it sits back and lets your subconscious do the driving.

Rationality should be about putting the best processed information down into your subconscious, so it works the way you'd like it to. Trying to do everything consciously is a poor use of your brain, as it 1) ignores the way your brain is designed to function and 2) forgoes the use of the powerful subconscious circuitry that makes up an enormous part of it.

What does "crosslinking the behavior with its goal-satisfaction state" mean? Specifically, I'm unable to guess what you mean by "crosslinking" and "the goal-satisfaction state" (of a behavior).

More details can be found in this comment.
I had the same question as Jonathan and I've read the comment you mentioned. Where can we read/learn more about this technique?
It's based on a technique called "Core Transformation", developed by Connirae Andreas and Tamara Andreas, and it's discussed in a book of the same name. (I linked to it once before when someone asked about this a few weeks ago, and was severely downmodded for some reason, so you'll have to find it yourself.)

My own version of the technique is a streamlined and stripped-down variation that removes a certain amount of superstition and ritual. (Among other things, I drop the "parts" metaphor, which some schools of NLP now consider to have been a bad idea in the first place.)

The technique works by using imagination to elicit the reward states associated with a behavior, going to higher and higher levels of abstraction to reach the top (or root?) of a person's reward tree -- usually a quasi-mystical state like inner peace, oneness, compassion, or something like that. (These "core states" are a good candidate for the "god-shaped hole" in humans, btw.) Anyway, once you have access to such a state, it can be used as a reinforcer for alternative behaviors, as it's stronger than the diluted intermediate versions found at other levels of the person's goal tree. (More precisely, it can be used to extinguish the conditioned appetite that drives the problem behavior.)

I teach this method and use it in coaching; my wife and I also use it personally. I'd link to my own workshops and recordings on the subject as well, but since I was downmodded for referring to a site where you could buy someone else's book, I shudder to imagine what would happen if I linked to a site where you could buy my products or services. ;-)
Please post the link. And why should you be afraid of downmodding? I have been downmodded for saying things that are true (at least IMHO). Don't give that much importance to the mods!
I'm not. I'm simply attempting to respect the wishes of others regarding what should or should not be posted here. Googling "Core Transformation" and "Gateway of Desire" (as phrases in quotes) will get you the links. Don't be confused by something else called "Quantum Touch - Core Transformation"; it's something unrelated (thank goodness).
People are trying to eliminate spam. Spammers tend to include links to outside services which cost money. Thus, your providing such a link gives you the superficial appearance of a spammer, and you got downmodded accordingly. You are not a spammer, you have participated in good faith in this community, at great personal effort, and contributed many useful insights as a result. I think by now, most people are aware of this, and you should not need to worry about giving the appearance of spamming.
Paul Crowley: [link] appears to be the main website. This Google search finds related materials. All I could find on Wikipedia was this article on Steve Andreas.

The fact that everything I can find on the web carefully avoids giving details and instead takes the form "We have these fantastic techniques that can solve most of your problems; sign up for our seminars and we'll teach them to you" is ... not promising.

Promising the world, giving few details, and insisting on being paid before saying anything more, seems to me to be strongly correlated with dishonesty and cultishness. Since pjeby seems like a valuable member of this community, I hope this case happens to be different; but I'd like to see some evidence.

Well, you didn't grant my wish for a simple link, so I have to google now. How sad. As for the wishes of others, would you rather not post a truth than be downvoted by the majority?
Here's one link:
Absolutely, learning to work with your subconscious is a necessity. After all it does far more computation than your conscious mind does. Of course, you ought to explore the techniques that let you take positive advantage of it too.
But it's consciously understanding and applying techniques to make your mind as a whole work better that's the heart of rationality. By and large the 'subconscious' is outside of our ability to control. The task isn't to bring the subconscious to heel, but to establish filters through which to screen the output of our minds, discarding that which is incompatible with rational thinking.
Influencing your subconscious in rational ways is not easy or simple. But at the same time, simply because something is hard doesn't mean it should be discarded out of hand as a viable route to achieving your goals especially if those goals are important.
How about influencing your subconscious in irrational ways? I find that much easier, myself. The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table. If you store the right entries under the right keys, it does useful things. The hardest part of hacking it is that there's no "view source" button or way to get a listing of what's already in there: you have to follow associative links or try keys that have worked for other people. Well, I say hardest, but it's not so much hard as being sometimes tedious or time-consuming. The actually changing things part is usually quite quick. If it's not, you're almost certainly doing something wrong.
I'm suspicious of this characterization. I've made a couple surprising subconscious deductions in the past, and they forcefully reminded me that there's a very complex human brain down there doing very complex brain things on the sly all the time. You may have learned some tricks to manipulate it, but I'd be surprised if you've done more than scratch the surface if you really just consider it to be a simple lookup table.
I didn't say it was a simple lookup table. It's indexed in lots of non-trivial ways; see e.g. my post here about "Spock's Dirty Little Secret". I just said that fundamentally, it's a lookup table. I also didn't say it's not capable of complex behavior. A state machine is "just a lookup table", and that in no way diminishes its potential complexity of behavior.

When I say the subconscious doesn't "think", I specifically mean that if you point your built-in "mind projection" at your subconscious, you will misunderstand it, in the same way that people end up believing in gods and ghosts: projecting intention where none exists. This is a major misunderstanding -- if not THE major misunderstanding -- of the other-than-conscious mind. It's not really a mind, it's a "Chinese room".

That doesn't mean we don't have complex behavior or can't do things like self-sabotage. The mistake is in projecting personhood onto our self-sabotaging behaviors, rather than seeing the state machine that drives them: condition A triggers appetite B leading to action C. There's no "agency" there, no "mind". So if you use an agency model (including Ainslie's "interests" to some extent), you'll take incorrect approaches to change. But if you realize it's a state machine, stored in a lookup table, then you can change it directly. And for that matter, you can use it more effectively as well. I've been far more creative and better at strategy since I learned to engage my creative imagination in a mechanical way, rather than waiting for the muse to strike.

Meanwhile, it'd also be a mistake to think of it as a single lookup table; it includes many things that seem to me like specialized lookup tables. However, they are accessible through the same basic "API" of the senses, so I don't worry about drawing too fine of a distinction between the tables, except insofar as how they appear relate to specific techniques.
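The "condition A triggers appetite B leading to action C" model in the comment above can be caricatured in a few lines. This is a sketch of the metaphor only; the conditions and responses are invented for illustration, and nothing here is a claim about actual neuroscience:

```python
# A lookup-table "subconscious": behavior is retrieval, not deliberation.
responses = {
    "deadline_near": "check_email",         # a self-sabotaging habit
    "see_pool_shot": "swing_arm",           # the pool player's "feels right"
    "see_sudoku_row": "emit_missing_number",
}

def react(condition):
    # The mind-projection mistake would be to imagine an agent
    # deliberating here; in this model there is only a table lookup.
    return responses.get(condition, "no_learned_response")

# "Changing a conceptually-single entry" re-programs the behavior
# directly, with no negotiation against an imagined inner agent:
responses["deadline_near"] = "start_working"
```

The point of the metaphor is the last line: change one entry and the behavior changes, no adversarial model required.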
I look forward to seeing where your model goes as it becomes more nuanced. Among other things, I'm very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.
What, you mean like Sudoku or something?
Sudoku would be one example. I meant generally puzzles or problems involving search spaces of combinations.
Well, I'll use sudoku since I've experienced both conscious and unconscious success at it. It used to drive me nuts how my wife could just look at a puzzle and start writing numbers, on puzzles that were difficult enough that I needed to explicitly track possibilities.

Then, I tried playing some easy puzzles on our Tivo, and found that the "ding" reward sound when you completed a box or line made it much easier to learn, once I focused on speed. I found that I was training myself to recognize patterns and missing numbers, combined with efficient eye movement.

I'm still a little slower than my wife, but it's fascinating to observe that I can now tell the available possibilities for larger and larger numbers of spaces without consciously thinking about it. I just look at the numbers and the missing ones pop into my head. Over time, this happens less and less consciously, such that I can just glance at five or six numbers and know what the missing ones are without a conscious step.

This doesn't require a complex subconscious; it's sufficient to have a state machine that generates candidate numbers based on seen numbers and drops candidates as they're seen. It might be more efficient in some sense to cross off candidates from a master list, except that the visualization would be more costly. One thing about how visualization works is that it takes roughly the same time to visualize something in detail as it does to look at it... which means that visualizing nine numbers would take about the same amount of time as it would for you to scan the boxes.

Also, I can sometimes tell my brain is generating candidates while I scan... I hear them auditorially verbalized as the scan goes, although it's variable at what point in the scan they pop up; sometimes it's early and my eyes scan forward or back to double check.

Is this the sort of thing you're asking about?
It seems that our models are computationally equivalent. After all, a state machine with arbitrarily extensible memory is Turing-complete, and with adaptive response to the environment it is a complex adaptive system, whatever model you have of it.

I have spent a great deal of time and reasoning on developing models of people in such a way. So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as person-like systems. Obviously you are more comfortable with computer-science models. But the danger with models is that they are always limited in what they can reveal. In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule. I suspect this is one particular shortcoming of the lookup-table basis for modeling the subconscious. I suspect my models have similar problems, but it's always hardest to see them from within.
Of course. But mine is a model specifically oriented towards being able to change and reprogram it -- as well as towards understanding more precisely how certain responses are generated. One of the really important parts of thinking in terms of a lookup table is that it simplifies debugging. That is, one can be taught to "single-step" the brain and identify the specific lookup that is causing a problem in a sequence of thought-and-action. How do you do that with a mind-projection model?

The problem with modeling one's self as a "person" is that it gives you wrong ideas about how to change, and creates maladaptive responses to unwanted behavior. Whereas, with my more "primitive" model: 1. I can solve significant problems of my own or of others by changing a conceptually single "entry" in that table, and 2. the lookup-table metaphor depersonalizes undesired responses in my clients, allowing them to view themselves in a non-reactive way. Personalizing one's unconscious responses leads to all kinds of unhelpful carry-over from "adversarial" concepts: fighting, deception, negotiation, revenge, etc. This is very counterproductive, compared to simply changing the contents of the table.

Interestingly, this is one of the metaphors I hear back from my clients the most when they reference personal actions to change. That is, AFAICT, people find it tremendously empowering to realize that they can develop any skill or change any behavior if they can simply load or remove the right data from the table.

Of course novel solutions can be generated -- I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.
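The "change one entry" idea reduces to a toy like the following: behavior is a mapping from trigger to response, and a "reprogram" is a single-key update. This is my own illustrative reduction of the metaphor, not pjeby's actual procedure, and the trigger/response strings are invented.

```python
# Behavior modeled as a trigger -> response lookup table.
responses = {
    "criticism": "get defensive",
    "deadline":  "procrastinate",
}

def react(trigger):
    """'Single-step' the table: one trigger in, one response out."""
    return responses.get(trigger, "no learned response")

# Debugging is locating the bad entry; "change" is a one-key edit.
responses["criticism"] = "ask for specifics"
print(react("criticism"))  # -> ask for specifics
```

The point of the metaphor, on this sketch, is that the unwanted response is a data problem (one entry) rather than a character problem, which is what makes it feel tractable.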
I'm not talking about a mind-projection model; I'm talking about using information models, constructed and vetted to effectively model people, as a foundation for a different model of a part of a person. I've modeled my subconscious in a similar manner before, and I've gained benefits from it not unlike some you describe. I've even gone so far as to model up to sub-processor levels of capabilities and multi-threading. At the same time I was developing the Other models I mentioned, but they were incomplete. Then, during adolescence, I refined my Other models well enough for them to start working. I can go more into that later, but as time went on it became clear that computational models simply didn't let me pack enough information into my interactions with my subconscious, so I needed a more information-rich model. That is what I'm talking about.

Bluntly, but honestly, I feel that what you're describing is, at best, what an eight-year-old should be doing to train their subconscious. But mostly I'm hoping you'll be moving forward. Search engines and databases don't produce novel solutions on their own, even in the sense of a combinatorial algorithm, and certainly not in the sense of more creative innovation. There are many anecdotes claiming the subconscious can incorporate more dimensions in problem solving than the conscious -- some more poetic than others (answers coming in dreams or in showers) -- and it seems dangerous to simply disregard that.
Bluntly, but honestly, I think you'd be better off describing more precisely what model you think I should be using, and what testable benefits it provides. I'm always willing to upgrade, if a model lets me do something faster, easier, quicker to teach, etc. -- Just give me enough information to reproduce one of your techniques and I'll happily try it.
I said what I meant there. It's a feeling, which, combined with my lacking a personalized model of your cognitive architecture, makes it foolish for me to suggest a specific replacement model. My comment about deep innovation is intended to point you towards one of the blind spots of your current work (which may or may not be helpful). I was somewhere similar a long time ago, but I was working on other areas at the same time, which have led me to the models I use now. I sincerely doubt that that same avenue will work for you. Instead, I suggest you cultivate a skepticism of your work, plan a line of retreat, and start delving into the dark corners.

As an aside: if you want a technique -- using a model close to yours -- consider volitional initiation of a problem on your subconscious "backburner" to get an increased response rate. You tie the problem into subconscious processing, set up an association trigger to check on it sometime later, and then remove all links that would pull it back to consciousness. You can then test the performance of this method against standard worrying at a problem, or standard forgetting of a problem, using a diary method. Using a more nuanced model you can get much better results, but this should suffice to show you something of what I mean.
I've been doing that for about 24 years now. I fail to see how it has relevance to the model of mind I use for helping people change beliefs and behaviors. Perhaps you are assuming that I need to have ONE model of mind that explains everything? I don't consider myself under such a constraint. Note, too, that autonomous processing isn't inconsistent with a lookup-table subconscious. Indeed, autonomous processing independent of consciousness is the whole point of having a state-machine model of brain function. Consciousness is an add-on feature, not the point of having a brain. Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer's parents telling him, "you'll give up your childish ideas as soon as you get older".
Good. It seemed the next logical step, considering what you were describing as your model. It's also very promising that you are not trying to have a singular model, which is at least useful data for me. Developing meta-cognitive technology means having negative as well as positive results. I do appreciate you taking the time to discuss things, though.
Any computational process can be emulated by a sufficiently complicated lookup table. We could, if we wished, consider the "conscious mind" to be such a table. Dismissing the unconscious because it's supposedly a lookup table is thus wrong in two ways: firstly, it's not implemented as such a table, and secondly, even if it were, that puts no limitations, restrictions, or reductions on what it's capable of doing. The original statement in question is not just factually incorrect but conceptually misguided, and the likely harm to the resulting model's usefulness is incalculable.
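The emulation point can be made concrete in a few lines: any function over a finite domain can be replaced by a precomputed table with identical behavior, so "it's a lookup table" says nothing about what the process can or can't do. A sketch, with an arbitrary transition rule standing in for "any computation":

```python
def step(state, symbol):
    """An arbitrary transition rule (a stand-in for any finite computation)."""
    return (state + symbol) % 5

# Tabulate it exhaustively: the table now reproduces the computation exactly.
TABLE = {(s, x): step(s, x) for s in range(5) for x in range(5)}

# Identical behavior on every input -- the table "is" the process.
assert all(TABLE[(s, x)] == step(s, x) for s in range(5) for x in range(5))
```

This also shows why the dismissal cuts no ice: nothing about the tabulated form reduces what `step` computes.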
"The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table." Of all your errors thus far, those two are your most damaging.
I agree that the subconscious isn't just a giant lookup table, and that many people who make this error use it to justify practices which destroy other people's minds. But there are some important techniques of making the subconscious work better that are hard to invent unless you imagine that the subconscious is mostly a giant lookup table. pjeby uses these techniques in his practice. Do you deny pjeby's data that these techniques work? Do you even know which data made pjeby want to write "it's just a giant lookup table"? If you do know which data made pjeby want to write that, do you mean that it was wrong for him to write "the subconscious is just a giant lookup table" and not "the subconscious is mostly like just a giant lookup table"? I feel like you don't think through the real details of what other people are thinking and how those details would have to actually interact with the high standards you have for the thoughts of those people. All you do is tell them that you think something they did means they broke a rule.
pjeby has provided very little data. He's claimed that his techniques work. He's described them in terms that (1) are supremely vague about what he actually does, and (2) seem to imply that he has gained the ability to change all sorts of things about the behaviour of the unconscious bits of his brain more or less at will. There have been other people and groups that have made similar claims about their techniques. For instance, the Scientologists (though their claims about what they can do are more outlandish than pjeby's). None of this means that pjeby is wrong, still less that he's not being honest with us: but it means that an appeal to "pjeby's data" is a bit naive. All we have so far -- unless there are gems hidden in threads I haven't read, which of course there might be -- are his claims.
Annoyance has a point here. A look-up table is a very limiting model for a subconscious. What is the benefit you gain by assuming that there is no organizing structure, whether or not it is known to you, within your subconscious? Personally, I prefer a continually evolving model, updating with experience and observations. With periodic sanity checks of varying scales of severity. Not unlike how I model people. Of course this lends a resulting bias that I treat my subconscious a bit like a person, with encouragement, care, and deals. This can also lend positive outcomes like running subconscious mental operations for long term problem solving (a more active and volitional version of waiting for inspiration to strike) and encouraging those operations to have appropriate tracebacks to make it easier for me to consciously verify them. Not sure if that would work for other folks though, cognitive infrastructure may vary.
Right. No. More is possible: Is the rational person subject to "March winds"?
Speak for yourself. ;-) That's wasteful and inefficient. Bear in mind that there are two kinds of bias in the brain: hardware and software. The hardware biases cause software biases to get added, but those biases can also be removed, thereby eliminating the need to work around them. Conversely, for "hard" biases that can't be removed, much of the implementation of workarounds can be created by installing compensating biases. And it isn't even that complicated -- given appropriate (i.e. fast and unequivocal) feedback, the brain can make the software revisions on its own, without any complex conscious processes involved.
What are the other posts in your top five?

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

In my life, I've used rationality to tackle some pretty tough practical problems. The type of rationality I have been successful with hasn't been the debiasing program of Overcoming Bias, yet I have been applying scientific thinking, induction, and heuristics to certain problems in ways that are atypical for the category of people you are calling normal rationalists. I don't know whether to call this "x-rationality" or not, partly because I'm not sure the boundaries between rationality and x-rationality are always obvious, but it's certainly more advanced rationality than what people usually apply in the domains below.

On a general level, I've been studying how to get good (or at least, dramatically better) at things. Here are some areas where I've been successful using rationality:

... (read more)
I would absolutely love to see the development of a rational art of dating. If you've more to say on this I'll definitely look forward to reading it.
This is largely the basis of the whole online sub-community of 'Game' and the 'Seduction Community'. It may well fall under what Eliezer refers to as 'the dark arts' but many participants are fairly explicit about applying a rational/scientific approach to success with women.

I am highly familiar with the seduction community, and I've learned a lot from it. It's like extra-systemized folk psychology. It has certain elements of a scientific community, yet it is vulnerable to ideologies developing out of:

(a) bastardized versions of evolutionary psychology being thrown around like the proven truth, often leading to cynical and overgeneralized views of female behavior and preferences and/or overly narrow views of what works,

(b) financial biases,

(c) lack of rigor, because controlled experiments are not yet possible in this field (though I would never suggest that people wait until science catches up and gives us rigorous empirical knowledge before trying to improve their dating lives... who knows how long we will have to wait).

Yet there is promise for the community, because it's beholden to real-world results. Its descriptions and prescriptions seem to have been improving, and it has gone through a couple of paradigm shifts since the mid-'80s.

I've also learned some useful things from my more limited familiarity with the community. I'd tend to agree with your criticisms but I think the emphasis on rigorous 'field testing' and on 'doing what works' in much of the community shows some common ground with general efforts at rationality. As you say, this is an area (like many areas of day to day life) that is not easily amenable to controlled scientific experiment for a number of reasons but one of the lessons of Bayesian thinking/'x-rationality' that I've found useful is the emphasis on being comfortable with uncertainty, fuzzy evidence and making the best decisions given limited information. It's treacherous terrain for anyone seeking truth since, like investment or financial advice or healthcare, there is a lot of noise along with the signal. It's certainly an interesting area with many cross-currents to those interested in applying rationality though.
Do you think it would benefit from knowing some of the OB/LW rationality techniques? Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?

Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?

One problem with interfacing formal/mathematical rationality with any "art that works", whether it's self-help or dating, is that when people are involved, there are feed-forward and feed-back effects, similar to Newcomb's problem, in a sense. What you predict will happen makes a difference to the outcome.

One of the recent paradigm shifts that's been happening in the last few years in the "seduction community" is the realization that using routines and patterns leads to state-dependence: that is, to a guy's self-esteem depending on the reactions of the women he's talked to on a given night. This has led to the rise of the "natural" movement: copying the beliefs and mindsets of guys who are naturally good with women, rather than the external behaviors of guys who are good with women.

Now, I'm not actually involved in the community; I'm quite happily married. However, I pay attention to developments in that field because it has huge over... (read more)

Experimenting, implementing, tracking results, etc. is totally compatible with the OB/LW picture. We haven't built cultural supports for this all that much, as a community, but we really should, and, since it resonates pretty well with a rationalist culture and there are obvious reasons to expect it to work, we probably will.

Claiming that a particular general model of the mind is true, just because you expect that claim to yield good results (and not because you have the kind of evidence that would warrant claiming it as "true in general"), is maybe not so compatible. As a culture, we LW-ers are pretty darn careful about what general claims we let into our minds with the label "true" attached.

But is it really so important that your models be labeled "true"? Maybe you could share your models as thinking gimmicks: "I tend to think of the mind in such-and-such a way, and it gives me useful results, and this same model seems to give my clients useful results", and share the evidence about how a given visualization or self-model produces internal or external observables.

I expect LW will be more receptive to your ideas if you: (a) stick really carefully to what you've actually seen, and share data (introspective data counts); and (b) label your "believe this and it'll work" models as candidate "believe this and it'll work" models, without claiming the model as the real, fully demonstrated as true, nuts and bolts of the mind/brain.

In other words: (1) hug the data, and share the data with us (we love data); and (2) be alert to a particular sort of cultural collision, where we'll tend to take any claims made without explicit "this is meant as a pragmatically useful working self-model" tags as meant to be actually true, rather than as meant to be pragmatically useful visualizations/self-models. If you actually tag your models with their intended use ("I'm not saying these are the ultimate atoms the mind is made of, but I have reasonably compelling evidence that thinking in th
Yeah, I've noticed that, which is why my comment history contains so many posts pointing out that I'm an instrumental rationalist, rather than an epistemic one. ;-)

I'm not sure it's about being an epistemic vs. an instrumental rationalist, so much as about tagging your words so we can follow what you mean.

Both people interested in deep truths, and people interested in immediate practical mileage, can make use of both "true models" and "models that are pragmatically useful but that probably aren't fully true".

You know how a map of north America gives you good guidance for inferences about where cities are, and yet you shouldn't interpret its color scheme as implying that the land mass of Canada is uniformly purple? Different kinds of models/maps are built to allow different kinds of conclusions to be drawn. Models come with implicit or explicit use-guidelines. And the use-guidelines of “scientific generalizations that have been established for all humans” are different than the use-guidelines of “pragmatically useful self-models, whose theoretical components haven’t been carefully and separately tested”. Mistake the latter for the former, and you’ll end up concluding that Canada is purple.

When you try to share techniques with LW, and LW balks... part of the problem is that most of us LW-ers aren’t as practiced in contact-with-th... (read more)

Trying to interpret this charitably, I'll suggest a restatement: what you call a "theory" is actually an algorithm that describes the actions known to achieve the required results. In the normal use of the words, a theory is an epistemic tool, leading you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals. Terminologically mixing an opaque heuristic with reason and knowledge is a bad idea; in the quotation above, the word "reason", for example, connotes rationalization more than anything else.
No, I'm using the term "theory" in the sense of "explanation", and "as opposed to practice". The theory of a self-help school is the explanation(s) it provides that motivate people to carry out whatever procedures that school uses, by providing a model that helps them make sense of what their problems are, and what the appropriate methods for fixing them would be. I don't see any incompatibility between those concepts; per de Bono (Six Thinking Hats, lateral thinking, etc.), a theory is a "proto-truth" rather than an "absolute truth": something that we treat as if it were true, until something better is found. Ideally, a school of self-help should update its theories as evidence changes.

Generally, when I adopt a technique, I provisionally adopt whatever theory was given by the person who created the technique, unless I already have evidence that the theory is false, or have a simpler explanation based on my existing knowledge. Then, as I get more experience with a technique, I usually find evidence that makes me update my theory for why/how that technique works. (For example, I found that I could discard the "parts" metaphor of Core Transformation and still get it to work, ergo falsifying a portion of its original theoretical model.)

Also, I sometimes read about a study that shows a mechanism of mind that could plausibly explain some aspect of a technique. Recently, for example, I read some papers about "affective asynchrony", and saw that it not only experimentally validated some of what I've been doing, but provided a clearer theoretical model for certain parts of it. (Clearer in the sense of providing a more motivating rationale, and not just because I can point to the papers and say, "see, science!") Similar thing for "reconsolidation" -- it provides a clear explanation for something that I knew was required for certain techniques to work (experiential access to a relevant concrete memory), but had no "theoretical" justification for. (I j

One common theme is recognizing when your theories aren't working and updating in light of new evidence. Many people are so sure that their beliefs about what 'should' work when it comes to dating are correct that they will keep trying and failing without ever considering that maybe their underlying theory is wrong. A common exercise used in the community to break out of these incorrect beliefs is to force yourself to go out and try things that 'can't possibly work' 10 times in a day, and then every day for a week or a month, until the false belief is banished.

I actually think the LW crowd could learn something from this approach - sometimes all the argument in the world is not as convincing as repeated confrontations with real world results. When it comes to changing behaviour (a key aspect of allowing rationality to improve results in our lives), rational argument is not usually the most effective technique. Rational argument may establish the need for change and the pattern for new behaviour but the most effective way to change behavioural habits is to just start consciously doing the new behaviour until it becomes a habit.


In any rational art of dating in which I would be interested, "winning" would be defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued. I don't know enough about these sub-communities to say whether they share that concern -- what is the impression you've gotten?

Many but by no means all in the community share that concern. I'm finding it interesting to note my own reluctance to link to some of the material since even among those who do share that concern there is discussion of some techniques that might be considered objectionable. One of the cornerstones of much of the material is that people are so conditioned by conventional beliefs about what 'should' work that they are liable to find what actually does work highly counter-intuitive at first. Reactions to the challenging of strongly held beliefs can be equally strong and I've often observed this in comment threads on the material. The most mainstream introduction to the community is probably "The Game" by Neil Strauss. I'm not sure it's the best starting point from the point of view of connections to rationality but it's an entertaining read if nothing else. I certainly believe it's possible to benefit from some of the ideas while maintaining your definition of 'winning' but equally there are some parts of the community which are less appealing.
I have extensive knowledge in that matter, and I would say that the techniques are value-neutral. To make an analogy, think of Cialdini's science of influence and persuasion. What evolutionary psychology, Cialdini, and others showed is that we humans can be quite primitive and react in certain predetermined ways to certain stimuli. The dating community has investigated the right stimuli for women and figured out the way to "get" her. You have to push the right buttons in the right order, and we males are no different (although the type of buttons is different). In other words, what you learn in the dating community will teach you how to win the hearts of women. It's up to you how to use this skillset (yes, it's a skillset) IF you manage to acquire it, which by the way is not easy at all. It's just a technique; you can use it for good or bad, although admittedly it lends itself more to selfish purposes, IMHO. By the way, women are also very selfish creatures, so don't make the mistake of holding yourself to too high a moral standard.

I also think that you might be misguided in that you start with the wrong assumption of what dating is all about. Evolutionarily speaking, dating, alias mating, is not about making the other person better off. On the contrary, having kids is mostly a disadvantage for the parents, but most people do it anyway because we have this desire to have kids. Rationally speaking, we would all probably be better off without them. Of course, if you factor in emotions it becomes more complicated.

Also, there is a fundamental difference between males and females. Males don't get pregnant; they want to have as much sex (pleasure) with as many partners as possible. Women get pregnant (at least before birth control was invented), and so their emotional circuitry is designed to be extremely selective about which males they will have sex with. They also want their males to stick around as long as possible (to help them take care of the
In general, I would agree that the teachings are value-neutral. Yet some of these tools are more conducive to negative uses, while others are more conducive to positive uses. It's true that people are not adapted to necessarily make each other optimally happy. Yet in spite of this, our skills give us the capability to find solutions that make both people at least somewhat happy. So in my case, winning is "defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued," as MBlume puts it.

Yes, but the description in your post is contaminated by the oversimplified presumptions about evolutionary psychology in the community. I think you would get a lot out of reading more of the real evolutionary psychologists, not just reading popularizations, or what the community says evolutionary psychologists are saying. I can find some cites when I'm at home. Typically, males are more oriented towards seeking multiple partners than women, yet that doesn't mean that they want "as many partners as possible." Some males are wired for short-term mating strategies, and other males are more wired for long-term mating strategies.

Yes, and this is well-demonstrated experimentally. I don't have the citations on hand because I'm not at home, but a guy named Fisman has done some interesting work in this area. Yet this is again oversimplified, because some present-day females follow short-term mating strategies and do not necessarily want males to stick around.

True, though pretty good compromises exist. In a lot of cases, dating is like a Prisoner's Dilemma (though many other payoff matrices are possible). Personally, what I like the most about the community is that it gives me the tools to play C while simultaneously raising the chance that the other person will play C. Even when happiness for both people can't be achieved, it's at least possible for both people to treat each other with respect, even if someone can't give the other p
I'm not really sure how you can claim "techniques are value-neutral" without assuming what the values are. For example, if my values contain a term for someone else's self-esteem, a technique that lowers their self-esteem is not value-neutral. If my values contain a term for "respecting someone else's requests", techniques for overcoming LMR (last-minute resistance) are not value-neutral. Since I have only limited knowledge of the seduction techniques advanced by the community, I did not offer more -- after seeing some of the techniques, I decided that they are decidedly not value-neutral, and therefore chose not to engage in them.
Paul Crowley
A top-level post would be very welcome, I don't want to take this one too far off track. I've slept (and continue to sleep) with a lot of people, and my experience very much contradicts what you say here.


So you have to be aware that there is a fundamental difference in the objectives of the two which will make it extremely difficult or impossible to make BOTH happy at the same time.


my experience very much contradicts what you say here.

That's because it's a great example of theory being used to persuade people to take a certain set of "actions that work". There are other theories that contradict those theories, that are used to get other people to take action... even though the specific actions taken may be quite similar!

People self-select their schools of dating and self-help based on what theories appeal to them, not on the actual actions those schools recommend taking. ;-)

In this case, the theory roland is talking about isn't theory at all: it's a sales pitch, that attracts people who feel that dating is an unfair situation. They like what they hear, and they want to hear more. So they read more and maybe buy a product. The writer or speaker then gradually moves from this ev-psych "hook" to other theories that guide the reader to take the actions the author recommends.

That people confuse these sales pitches with actual theory is... (read more)

What exactly would you like to know? The subject is very broad, it would be easier if you made me a list of questions that are relevant to LW. There are already TONS of sites about this topic so please don't ask me to write another post about seduction in general.
Paul Crowley
I think a post tailored to the particular interests and language of LW/OB readers would be fairly different from the ones already out there, but if you have a pointer that you think would be particularly appealing to us lot I'm interested.
I would personally love to see more cross-fertilization between that sub-community and LW, "dark arts" or no. (At least, I think I would; I don't know the community well and might be mistaken.) We need to make contact between abstract techniques for thinking through difficult issues, and on the ground practical strategicness. Importing people who've developed skilled strategicness in any domain that involves actual actions and observable success/failure, including dating (or sales, or start-ups, or ... ?), would be a good way to do this. If you could link to specific articles, or could create discussion threads that both communities might want to participate in, mattnewport, that would be good.
I second that. Here in the LW/OB/sci-fi/atheism/cryonics/AI... community, many of us fit quite a few stereotypes. I'll summarize them in one word that everybody understands: we're all nerds*. This means our lives and personalities introduce many biases into our way of thinking, and these often preclude discussions about acting rationally in interpersonal situations such as sales, dating etc. because we don't have much experience in these fields. Anything that bridges this gap would be extremely useful. *this is not a value judgment. And not everybody conforms to this stereotype. I know, I know, but this is not the point. I'm talking averages here.
I would say that it is largely the ostensible basis of the seduction community. As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable. If their theories are unsupported it doesn't matter, because they can disclaim the theories as just being a psychological trick to get you to take "correct" actions.

However, they've got no rigorous evidence that their "correct" actions actually lead to any more mating success than spending an equivalent amount of time on personal grooming and talking to women without using any seduction-community rituals. They also have such a wide variety of conflicting doctrines and gurus that they can dismiss almost any critique as being based on ignorance, because they can always point to something written somewhere which will contradict any attempt to characterise the seduction community - not that this ever stops them making claims about the community themselves.

They'll claim that they develop such evidence by going out and picking up women, but since they don't do any controlled tests this cannot even in theory produce evidence that the techniques they advocate change their success rate, and even if they did conduct controlled studies their sample sizes are tiny given the claimed success rates. I believe one "guru" claims to obtain sex in one out of thirty-three approaches. I do not believe that anyone's intuitive grasp of statistics is so refined that they can spot variations in such an infrequent outcome and determine whether a given technique increases or decreases that success rate. To do science on such a phenomenon would take a very big sample size. Ergo anyone claiming to have scientific evidence without having done a study with a very big sample size is a fool or a knave.

The mythology of the seduction community is highly splintered and constantly changes over time, which increases the subjective likelihood that we are looking at folklore and scams rather than an…
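The sample-size point can be made concrete with a rough two-proportion power calculation (a minimal sketch using the normal approximation; the z-values are the standard ones for a two-sided 5% test at 80% power, and the doubled success rate is a purely hypothetical effect size, not a claim about any technique):

```python
from math import ceil

def approaches_needed(p_base, p_improved):
    """Rough sample size *per group* to detect a change from p_base
    to p_improved, using a pooled two-proportion normal approximation
    (two-sided alpha = 0.05, power = 0.80)."""
    z_alpha, z_beta = 1.96, 0.84  # standard critical values for these settings
    p_bar = (p_base + p_improved) / 2
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / (p_base - p_improved) ** 2
    return ceil(n)

# Baseline: the cited guru's one-success-in-33 rate (~3%).
# Hypothetical: a technique that doubles it to ~6% -- a large real effect.
print(approaches_needed(1 / 33, 2 / 33))
```

Even under these generous assumptions, detecting a doubling of a ~3% success rate takes on the order of seven hundred approaches per group, far beyond what anyone's informal "field testing" plausibly covers.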
This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing". Your depiction of the seduction community is a ridiculous straw man and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you they probably wouldn't bother doing so: The usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.
If they conducted tests of X versus Y with large sample sizes and with blinded observers scoring the tests then they might have a basis to say "I know that if I do X I can expect to on average achieve a better outcome with women than if I do Y". They don't do such tests though. They especially don't do such tests where X is browsing seduction community sites and trying the techniques they recommend and Y is putting an equal amount of time and effort into personal grooming and socialising with women without using seduction community techniques. Scientific methodology isn't just a good idea, it's the law. If you don't set up your tests correctly you have weak or meaningless evidence.

Or as the Bible says, "But if any place refuses to welcome you or listen to you, shake its dust from your feet as you leave to show that you have abandoned those people to their fate". It's good advice for door-to-door salespersons, Jehovah's Witnesses and similar people in the business of selling. If you run into a tough customer don't waste your time trying to convince them, just walk away and look for an easier mark. However in science that's not how you do things. In science if someone disputes your claim you show them the evidence that led you to fix your claim in the first place.

Are you sure you meant to describe my post as a "shaming attempt"? As pejoratives go this seems like an ill-chosen one, since my critique was strictly epistemological. It seems at least possible that you are posting a standard talking point which is deployed by seduction community members to dismiss ethical critiques, but which makes no sense in response to an epistemological critique. (There are certainly concerns to be raised about the ethics of the seduction community, but that would be a different post.)
Your claim was: Are you familiar with the technical meaning of 'unfalsifiable'? It does not mean 'have not done scientific tests'. It means 'cannot do scientific tests even in principle'. I would like it if scientists did do more study of this subject but that is not relevant to whether claims are falsifiable.

I'd be surprised. I've never heard such a reply, certainly not in response to subject matter which many wouldn't understand (unfalsifiability). I used that term 'shaming' because the inferred motive (and, regardless of motive, one of the practical social meanings) of falsely accusing the enemy of behavior that looks pathetic is to provide some small degree of humiliation. This can, the motive implicitly hopes, make people ashamed of doing the behaviors that have been misrepresented. I am happy to concede that this point is more distracting than useful. I would have been best served to stick purely to the (more conventional expression of) "NOT UNFALSIFIABLE! LIES!"

I assert that the "act like JWs" approach is not taken by the seduction community in general either. For the most part they do present evidence. That evidence is seldom of the standard accepted in science except when they are presenting claims that are taken from scientific findings - usually popularizations thereof; Cialdini references abound. I again agree that the seduction community could use more scientific rigor. Shame on science for not engaging in (much) research in what is a rather important area!

Yes, I agree that you didn't get into ethics and that your claim was epistemological in nature. I do believe that the act of making epistemological claims is not always neutral with respect to other kinds of implication. As another tangential aside I note that if an exemplar of the seduction community were said to be sensitive to public opinion, he would be far more sensitive to things that make him look pathetic than things that make him look unethical!
In the case of Sagan's dragon, the dragon is unfalsifiable because there is always a way for the believer to explain away every possible experimental result. My view is that the mythology of the seduction community functions similarly. You can't attack their theories because they can respond by saying that the theory is merely a trick to elicit specific behaviour. You can't attack their claims that specific behaviours are effective because they will say that there is proof, but it only exists in their personal recollections so you have to take their word for it. You can't attack their attitudes, assumptions or claims because they can respond by pointing at one guru or another and saying that particular guru does not share the attitude, assumption or claim you are critiquing.

Their claim could theoretically be falsified, for example by a controlled test with a large sample size which showed that persons who had spent N hours studying and practicing seduction community doctrine/rituals (for some value of N which the seduction community members were prepared to agree was sufficient to show an effect) were no more likely to obtain sex than persons who had spent N hours on things like grooming, socialising with women without using seduction community rituals, reading interesting books they could talk about, taking dancing lessons and whatnot. I suspect but cannot prove, though, that if we conducted such a test those people who have made the seduction community a large part of their life would find some way to explain the result away, just as the believer in Sagan's dragon comes up with ways to explain away results that would falsify their dragon.

Of course it's not the skeptic's job to falsify the claims of the seduction community. Members of that community very clearly have a large number of beliefs about how best to obtain sex, even if those beliefs are not totally homogenous within that community, and it's their job to present the evidence that led them to the belief…
It is a dramatically different thing to say "people who are in the seduction community are the kind of people who would make up excuses if their claims were falsified" than to say "the beliefs of those in the seduction community are unfalsifiable". While I may disagree mildly with the former claim, the latter I object to as an absurd straw man.

I don't accept the role of a skeptic. I take the role of someone who wishes to have correct beliefs, within the scope of rather dire human limitations. That means I must either look for and process the evidence to whatever extent possible or, if a field is considered of insufficient expected value, remain in a state of significant uncertainty to the extent determined by information I have picked up in passing. I reject the skeptic role of thrusting the burden of proof around, implying "You've got to prove it to me or it ain't so!" That's just the opposite stupidity to that of a true believer. It is a higher status role within intellectual communities but it is by no means rational.

No, it's their job to go ahead and get laid and have fulfilling relationships. It is no skin off their nose if you don't agree with them. In fact, the more people who don't believe them the less competition they have. Unless they are teachers, people are not responsible for forcing correct epistemic states upon others. They are responsible for their beliefs; you are responsible for yours.
I'm content to use the term "unfalsifiable" to refer to the beliefs of homeopaths, for example, even though by conventional scientific standards their beliefs are both falsifiable and falsified. Homeopaths have a belief system in which their practices cannot be shown to not work, hence their beliefs are unfalsifiable in the sense that no evidence you can find will ever make them let go of their belief. The seduction community have a well-developed set of excuses for why their recollections count as evidence for their beliefs (even though they probably shouldn't count as evidence for their beliefs), and for why nothing counts as evidence against their beliefs.

It is not the opposite of stupidity at all to see a person professing belief Y, and say to them "Please tell me the facts which led you to fix your belief in Y". If their belief is rational then they will be able to tell you those facts, and barring significantly differing priors you too will then believe in Y. I suspect we differ in our priors when it comes to the proposition that the rituals of the seduction community perform better than comparable efforts to improve one's attractiveness and social skills that are not informed by seduction community doctrine, but not so much that I would withhold agreement if some proper evidence was forthcoming.

However if the local seduction community members instead respond with defensive accusations, downvotes and so forth but never get around to stating the facts which led them to fix their belief in Y, then observers should update their own beliefs to increase the probability that the beliefs of the seduction community do not have rational bases. Can you see that from my perspective, responses which consist of excuses as to why supporters of the seduction community doctrine(s) should not be expected to state the facts which inform their beliefs are not persuasive? If they have a rational basis for their belief they can just state it. I struggle to envisage probable s…
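The updating step described here can be sketched as a toy odds calculation (a minimal sketch; the likelihoods are illustrative assumptions chosen for the example, not measured values):

```python
# Toy Bayesian update: how much should an observer shift belief in
# Y = "the community's methods beat comparable non-doctrinal effort"
# after seeing proponents respond with deflection rather than evidence?

prior_odds = 1.0  # start indifferent: P(Y) = 0.5

# Illustrative assumption: groups with real evidence state it when
# challenged, say, 80% of the time; groups without it, only 20%.
p_deflect_given_y = 0.2
p_deflect_given_not_y = 0.8

posterior_odds = prior_odds * p_deflect_given_y / p_deflect_given_not_y
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))  # belief in Y drops from 0.50 to 0.20
```

The exact numbers don't matter; the point is that repeated failure to state the evidence is itself evidence, with strength set by how much likelier evidence-backed believers are to produce their evidence when asked.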
On Less Wrong, insisting a claim is unfalsifiable while simultaneously explaining how that claim can be falsified is more than sufficient cause to downvote. This is the case even if - and especially obviously when - that claim is false. Further, in general, downvotes of comments by the PhilosophyTutor account - at least those by myself - have usually been for the consistent use of straw men and the insulting misrepresentation of a group of people you are opposed to. Declaring downvotes of one's own comments to be evidence in favor of one's position is seldom a useful approach.

They should not be persuasive and are not intended as such. Instead, in this case, it was an explicit rejection of the "My side is the default position and the burden of proof is on the other!" debating tactic. The subject of how to think correctly (vs debate effectively) is one of greater interest to me than seduction.

I also reject the tactic used in the immediate parent. It seems to be of the form "You are trying to refute my arguments. You are being defensive. That means you must be wrong. I am right!". It is a tactic which, rather conveniently, becomes more effective the worse your arguments are!
That's rather sad, if the community here thinks that the word "unfalsifiable" only refers to beliefs which are unfalsifiable in principle from the perspective of a competent rationalist, and that the word is not also used to refer to belief systems held by irrational people which are unfalsifiable from the insider/irrational perspective. The fundamental epistemological sin is the same in each case, since both categories of belief are irrational in the sense that there is no good reason to favour the particular beliefs held over the unbounded number of other, equally unfalsifiable beliefs which explain the data equally well.

That said, I do find it curious that such misunderstandings seem to exclusively crop up in those posts where I criticise the beliefs of the seduction community. Those posts get massively downvoted compared to posts I make on any other topic, and from my insider perspective there is no difference in quality of posting. There's a philosophical joke that goes like this: "Zabludowski has insinuated that my thesis that p is false, on the basis of alleged counterexamples. But these so-called "counterexamples" depend on construing my thesis that p in a way that it was obviously not intended -- for I intended my thesis to have no counterexamples. Therefore p". Source

It's not clear to me at all that I have used straw men or misrepresented a group, and from my perspective it seems that it's impossible to criticise any aspect of the seduction community or its beliefs without being accused of attacking a straw man. Perhaps we should drop this subtopic then, since it seems solely to be about your views of what you see as a particular debating tactic, and get back to the issue of what exactly the evidence is for the beliefs of the seduction community. If we can agree that how to think correctly is the more interesting topic, then possibly we can agree to explore whether or not the seduction community are thinking correctly by means of examining their…
Then you should indeed be sad. An unfalsifiable claim is a claim that cannot be falsified. Not only is it right there in the word, it is a basic scientific principle. The people who present a claim happening to be irrational would be a separate issue. Just say that the seduction community is universally or overwhelmingly irrational when it comes to handling counterevidence to their claims - and we can merrily disagree about the state of the universe. But unfalsifiable things can't be falsified.
I would update only slightly from the prior for "non-rationalists are dedicated to achieving a goal through training and practice". EDIT: In case the meaning isn't clear - this translates to "They're probably about the same as most folks are when they do stuff. Haven't seen much to think they are better or worse."
That seems to be a poorly-chosen prior. An obvious improvement would be to instead use "non-rationalists are dedicated to achieving a goal through training and practice, and find a system for doing so which is significantly superior to alternative, existing systems". It is no great praise of an exercise regime, for example, to say that those who follow it get fitter. The interesting question is whether that particular regime is better or worse than alternative exercise regimes.

However the problem with that question is that there are multiple competing strands of seduction theory, which is why any critic can be accused of attacking a straw man regardless of the points they make. So you need to specify multiple sub-questions of the form "Group A of non-rationalists were dedicated to achieving a goal through training and practice, and found a system for doing so which is significantly superior to alternative, existing systems", "Group B of non-rationalists..." and so on for as many sub-types of seduction doctrine as you are prepared to acknowledge, where the truth of some groups' doctrines precludes the truth of some other groups' doctrines. As musical rationalists Dire Straits pointed out, if two guys say they're Jesus then at least one of them must be wrong.

So then ideally we ask all of these people what evidence led them to fix the belief they hold that the methods of their group perform better than alternative, existing ways of improving your attractiveness. That way we could figure out which if any of them are right, or whether they are all wrong. However I don't seem to be able to get to that point. Since you position yourself as outside the seduction community and hence immune to requests for evidence, but as thoroughly informed about the seduction community and hence entitled to pass judgment on whether my comments are directed at straw men, there's no way to explore the interesting question by engaging with you.

Edit to add: I see one of the ancestor p…
I actually agree mainly with you, but am downvoting both sides on the principle that I'm tired of listening to people argue back and forth about PUAs/Seduction communities.
I have Hugh in my RSS feed for this reason!
It sounds as though you have data and experiences that our community should chew on. Please do share specific stories, anecdotes, strategies or habits for thinking strategically about practical domains, techniques you've found useful within "creative rationality", etc. Perhaps in a top-level post?
Thanks, Anna. Getting more specific is definitely on my list.
I'm curious, how did you use rationality to develop fashion sense?

If in 1660 you'd asked the first members of the Royal Society to list the ways in which natural philosophy had tangibly improved their lives, you probably wouldn't have gotten a very impressive list.

Looking over history, you would not have found any tendency for successful people to have made a formal study of natural philosophy.

It would be overconfident for me to say rationality could never become useful. My point is just that we are acting like it's practically useful right now, without very much evidence for this beyond our hopes and dreams. Thus my last sentence - that "crossing the Pacific" isn't impossible, but it's going to take a different level of effort.

If in 1660, Robert Boyle had gone around saying that, now that we knew Boyle's Law of gas behavior, we should be able to predict the weather, and that that was the only point of discovering Boyle's Law and that furthermore we should never trust a so-called chemist or physicist except insofar as he successfully predicted the weather - then I think the Royal Society would be making the same mistake we are.

Boyle's Law is sort of helpful in understanding the weather, sort of. But it's step one of ten million steps, used alone it doesn't work nearly as well as just eyeballing the weather and looking for patterns, and any attempt to judge applicants to the Royal Society on their weather prediction abilities would have excluded some excellent scientists. Any attempt to restrict gas physics itself to things that were directly helpful in predicti…


I'm confused about this article. I agree with most of what you've said, but I'm not sure what the point is, exactly. I thought the entire premise of this community was that more is possible, but we're only "less wrong" at the moment. I didn't think there was any promise of results for the current state of the art. Is this post a warning, or am I overlooking something?

I agree we shouldn't see x-rationality as practically useful now. You don't rule out rationality becoming the superpower Eliezer portrays in his fiction. That is certainly a long ways off. Boyle's Law and weather prediction is an apt analogy. Just trying harder to apply our current knowledge won't go very far, but there should be some productive avenues.

I think I'd understand your purpose better if you could answer these questions: In your mind, how likely is it that x-rationality could be practically useful in, say, 50 years? What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?

Just trying to understand.

The above would be component 1 of my own reply.

Component 2 would be (to say it again) that I developed the particular techniques that are to be found in my essays, in the course of solving my problem. And if you were to try to attack that or a similar problem you would suddenly find many more OB posts to be of immensely greater use and indeed necessity. The Eliezer of 2000 and earlier was not remotely capable of getting his job done.

What you're seeing here is the backwash of techniques that seem like they ought to have some general applicability (e.g. Crisis of Faith) but which are not really a whole developed rationalist art, nor made for the purpose of optimizing everyday life.

Someone faced with the epic Challenge Of Changing Their Mind may use the full-fledged Crisis of Faith technique once that year. How much benefit is this really? That's the question, but I'm not sure the cynical answer is the right one.

What I am hoping to see here is others, having been given a piece of the art, taking that art and extending it to cover their own problems, then coming back and describing what they've learned in a sufficiently general sense (informed by relevant science) that I can actually absorb it. That which has been developed outside the rationalist line to address e.g. akrasia, I have found myself unable to absorb.

But you're not a good test case to see whether rationality is useful in everyday life. Your job description is to fully understand and then create a rational and moral agent. This is the exceptional case where the fuzzy philosophical benefits of rationality suddenly become practical.

One of the fundamental lessons of Overcoming Bias was "All this stuff philosophers have been debating fruitlessly for centuries actually becomes a whole lot clearer when we consider it in terms of actually designing a mind." This isn't surprising; you're the first person who's really gotten to use Near Mode thought on a problem previously considered only in Far Mode. So you've been thinking "Here's this nice practical stuff about thinking that's completely applicable to my goal of building a thinking machine", and we've been thinking, "Oh, wow, this helps solve all of these complicated philosophical issues we've been worrying about for so long."

But in other fields, the rationality is domain-specific and already exists, albeit without the same thunderbolt of enlightenment and awesomeness. Doctors, for example, have a tremendous literature on evidence and decision-making as t…

An x-rationalist who becomes a doctor would not, I think, necessarily be a significantly better doctor than the rest of the medical world, because the rest of the medical world already has an overabundance of great rationality techniques and methods of improving care that the majority of doctors just don't use.

Evidence-based medicine was developed by x-rationalists. And to this day, many doctors ignore it because they are not x-rationalists.

...huh. That comment was probably more helpful than you expected it to be. I'm pretty sure I've identified part of my problem as having too high a standard for what makes an x-rationalist. If you let the doctors who developed evidence-based medicine in...yes, that clears a few things up.

One thinks particularly of Robyn Dawes - I don't know him from "evidence-based medicine" per se, but I know he was fighting the battle to get doctors to acknowledge that their "clinical experience" wasn't better than simple linear models, and he was on the front lines against psychotherapy shown to perform no better than talking to any bright person.

If you read "Rational Choice in an Uncertain World" you will see that Dawes is pretty definitely on the level of "integrate Bayes into everyday life", not just Traditional Rationality. I don't know about the historical origins of evidence-based medicine, so it's possible that a bunch of Traditional Rationalists invented it; but one does get the impression that probability theorists trying to get people to listen to the research about the limits of their own minds, were involved.

After thinking on this for a while, here are my thoughts. This should probably be a new post but I don't want to start another whole chain of discussions on this issue.

  1. I had the belief that many people on Less Wrong believed that our currently existing Art of Rationality was sufficient or close to sufficient to guarantee practical success or even to transform its practitioner into an ubermensch like John Galt. I'm no longer sure anyone believes this. If they do, they are wrong. If anyone right now claims they participate in Less Wrong solely out of a calculated program to maximize practical benefits and not because they like rationality, I think they are deluded.

  2. Where x-rationality is defined as "formal, math-based rationality", there are many cases of x-rationality being used for good practical effect. I missed these because they look more like three percent annual gains in productivity than like Brennan discovering quantum gravity or Napoleon conquering Europe. For example, doctors can use evidence-based medicine to increase their cure rate.

  3. The doctors who invented evidence-based medicine deserve our praise. Eliezer is willing to consider them x-rationalists. But th

…

The Eliezer of 2000 and earlier was not remotely capable of getting his job done.

Are you more or less capable of that now? Do you have evidence that you are? Is the job tangibly closer to being completed?

I wouldn't bother with those questions if I were you, thomblake. They've never been answered here, and are unlikely ever to be answered, here or elsewhere. The goal here is to talk about being rational, not actually being so; to talk about building AIs, not show progress in doing so or even to define what that would be. It's about talking, not doing.
There are many different people here. I think talking about "the goal" is nonsense.
Why do you suppose that is?
Scott Alexander
I'll admit I might be attacking a straw man, but if you read the posts linked to at the very top, I think there are at least a few people out there who believe it, or who don't consciously believe it but act as if it's true.

Depends how you reduce "practically useful". Reduce it to "a person randomly assigned to take rationality classes two hours a week plus homework for a year will make on average ten percent more money than a similar person who doesn't", and my wild completely unsubstantiated guess is 50% likely. But I'd give similar numbers to other types of self-improvement classes like Carnegie seminars and that sort of thing.

If by "useful practice of rationality" you mean the way Eliezer imagines it, I think there should be more focus on applying the rationality we have rather than delving deeper and deeper into the theory, but if I could say more than that, I'd be rich and you'd be paying me outrageous hourly fees to talk about it :)

I do think non-godlike levels of rationality have far more potential to help us in politics than in daily life, but that's a minefield. In terms of easy profits we should focus the movement there, but in terms of remaining cohesive and credible it's not really an option.

Michael Vassar:

nerds, scientists, skeptics and the like who like to describe their membership in terms of rationality are [not] noticibly better than average at behavioral rationality, as opposed to epistemic rationality where they are obviously better than average but still just hideously bad.

Simply applying "ordinary rationality" to behavior is extreme. People don't use reason to decide if fashion is important, they just copy. Eliezer's Secret Identities post seems to make a very similar point. One point was to get rationality advice from people who actually found it useful, rather than from ordinary nerds who fetishize it.

An understanding of 'x-rationality' has helped me find the world a little less depressing and a little less frustrating. Previously when observing world events, politics and some behaviours in social interactions that seemed incomprehensible without assuming depressing levels of stupidity, incompetence or malice I despaired at the state of humanity. An appreciation of human biases and evolutionary psychology (some of which stems from an interest in both going back well before I ever started reading OB) gives me a framework in which to understand events in the world which I find a lot more productive and optimistic.

An example from politics: it is hard to make any rational sense of drug prohibition when looking at the evidence of the costs and benefits. This would tend to lead to an inevitable conclusion that politicians and the voting public are either irredeemably stupid or actively seeking negative outcomes. Understanding how institutional incentives to maintain the status quo, confirmation bias and signaling effects (politicians and voters needing to be 'seen to care' and/or 'seen to disapprove') can lead to basically intelligent and well meaning people maintaining catastrophical…

I’m partly echoing badger here, but it’s worth distinguishing between three possible claims:
(1) An “art of rationality” that we do not yet have, but that we could plausibly develop with experimentation, measurements, community, etc., can help people.
(2) The “art of rationality” that one can obtain by reading OB/LW and trying to really apply its contents to one’s life, can help people.
(3) The “art of rationality” that one is likely to accidentally obtain by reading articles about it, e.g. on OB/LW, and seeing what happens to rub off, can help people.

There are also different notions of “help people” that are worth distinguishing. I’ll share my anticipations for each separately. Yvain or others, tell me where your anticipations match or differ.

Regarding claim (3):
My impression is that even the art of rationality one obtains by reading articles about it for entertainment does have some positive effects on the accuracy of peoples’ beliefs. A couple people reported leaving their religions. Many of us have probably discarded random political or other opinions that we had due to social signaling or happenstance. Yvain and others report “clarity-of-mind benefits”. I’d give reasonab…

Scott Alexander
I agree with almost everything here, with the following caveats:

I. The practical benefits we get from (3) are (I think I'm agreeing with you here) likely to be so small as to be difficult to measure informally; i.e. anyone who claims to have noticed a specific improvement is as likely to be imagining it as really improving. Probably some effects that could be measured in a formal experiment with a very large sample size, but this is not what we have been doing.

II. (2) shows promise but is not something I see discussed very often on Overcoming Bias or Less Wrong. Using the Boyle metaphor, this would be the technology of rationality, as opposed to the science of it. I've seen a few suggestions for "techniques", but they seem sort of ad hoc (I will admit, in retrospect, that many of the times I was proposing 'techniques' were more of an attempt to sound like I was thinking pragmatically than soundly based on good experimental evidence). I've tried to apply specific methods to specific decisions, but never gone so far as to set aside a half hour each day to "rationality practice", nor would I really know what to do with that half hour if I did. I'd like to know more about what you do and what you think has helped.

III. You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn't impress me. Many of the transhumanists here were transhumanists before they were rationalists, and only came to Overcoming Bias out of interest in reading what transhumanist leaders Eliezer and Robin had to say. I think my "conversion" to transhumanism came about mostly because I started meeting so many extremely intelligent transhumanists that it no longer seemed like a fringe crazy-person belief and my mind felt free to judge it with the algorithms it uses for normal scientific theories rather than the algorithms it uses for random Internet crackpottery. Many other OB readers came to transhumanism just because EY and RH explicit…
I'm sure it's something that could be helped with techniques like The Bottom Line, which most intelligent, science-literate, trying-to-be-"rational" people don't do nearly enough of. It's also something that could be helped by paying attention to which thinking techniques lead to what kinds of results, and learning the better ones. Dojos could totally teach these practices, and help their students actually incorporate them into their day-to-day, reflexive decision-making (at least more than most "intelligent, science-literate" people do now; most people hardly try at all). As to heuristics and biases, and probability theory... I do find those helpful. Essential for thinking usefully about existential risk; helpful but non-essential for day-to-day inference, according to my mental but not written (I've been keeping a written record lately, but not for long enough, and not systematically enough) observations. The probability theory in particular may be hard to teach to people who don't easily think about math, though not impossible. But I don't think building an art of rationality needs to be solely about the heuristics and biases literature. Certainly much of the rationality improvement I've gotten from OB/LW isn't of that kind.
The benefit I'm trying to list isn't "greater appreciation of transhumanism" so much as "directing one's efforts to 'make the world a better place' in directions that actually do efficiently make the world a better place". As to the evidence and its significance: even if we skip transhumanism, and look fully outside the Eliezer/Robin/Vassar orbit, folks like Holden Karnofsky of GiveWell are impressive, both in terms of ability to actually analyze the world and in terms of positive impact. You might say it's just traditional rationality Holden is using -- certainly he didn't get it from Eliezer -- but it's beyond the level common among "intelligent, science-literate people" (who mostly donate their money in much less effective ways). Within transhumanism... I agree that the existing correlation between transhumanism and rationality-emphasis will tend to create future correlation, whether or not rationality helps one see merits in transhumanism. And that's an important point. But it's also striking that when people show up and say they want to spend their lives reducing AI risks, they're often people who spent unusual effort successfully becoming better thinkers before they ever heard of Eliezer or Robin, or met anyone else working on this stuff. It's true that maybe we're just recognizing "oh, someone who cares about actually getting things right, that means I can relax and believe them" (or, worse, "oh, someone with my brand of tennis shoes, let me join the in-group"). But...

1. Recognizing that someone else has good epistemic standards and can be believed is rationality working, even without independently deriving the same conclusions (though under the tennis shoe interpretation, not so much);

2. Many of us (independently, before reading or being in contact with anyone in this orbit) said we were looking for the most efficient use of some time/money, and it's probably not an accident that trying to become a good thinker, and asking

By "decision", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought.

Consider yourself lucky if that doesn't describe getting up in the morning for you.

Anyway, not that this counts at all (availability bias), but I made a rational decision a couple of days ago to get some sleep instead of working later into the night on homework. I did exactly that.

In fact, I just made a rational decision-- just now-- to quit reading the article I was reading, work on homework for a few minutes and then go to bed. I haven't gotten to bed yet. Otherwise, that's going well.

Can you rig your mornings so that staying in bed just doesn't work? I use two alarm clocks, one set for two minutes after the other; the one that goes off two minutes later is out of arm's reach, so I have to either get out of bed, or sleep through it.
Not really worth it, but thanks. :) My current strategy is just to wait a few minutes, which essentially always does the trick unless I'm totally exhausted and need more sleep. I appreciate the thought, though.
I should point out that while the rational choice to go to bed a couple days ago worked out well, the last one failed because I got drawn into housework I could never have predicted I'd have to help with (I thought it was already done).

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

I'm surprised you expected most of your readers to disagree. I think it's pretty clear that the techniques we work on here aren't making us much more successful than most people.

Humans aren't naturally well equipped to be extreme rationalists. The techniques themselves may be correct, but that doesn't mean we can realistically expect many people to apply them. To use the rationality-as-martial art metaphor, if you taught Shaolin kung fu to a population of fifty year old couch potatoes, they would not be able to perform most of the techniques correctly, and you should not expect to hear many true accounts of them winning fights with their skills.

Perhaps with enough work we could refine the art of human instrumental rationality into something much better than what we've got, maybe achieve a .3 correlation with success rather than a .1, but while a fighting style developed explicitly for 50 year old couch potatoes might give your class better results than other styles, you can only expect so much out of it.

2Sailor Vulcan
This. If Less Wrong had been introduced to an audience of self-improvement health buffs and businesspeople instead of nerdy, booksmart Harry Potter fans, things would have been drastically different. It is possible to become more effective at optimizing for goals other than truth. People here seem to naively assume that as long as they have enough sufficiently accurate information, everything else will simply fall into place and they'll do everything else right automatically, without needing to really practice or develop any other skills. I will be speaking more on this later.
1Дмитрий Зеленский
I would replace "introduced" with "sold" or "made interesting" here. It's not enough to introduce a group of people to something - unless their values are already in sync with said something's _appearance_ (and the appearance, aka elevator pitch, aka hook, is really important here), you would need to apply some marketing/Dark Arts/rhetoric/whatever-you-call-it to persuade them it's worth it. And, for all the claims that "rationalists should win", Yudkowsky2008 was too much of a rhetoric-hater (really, not noticing his own pattern of having the good teachers of Defence Against the Dark Arts at Hogwarts themselves practicing Dark Arts (or, in the case of Lupin, *being* Dark Arts)?) to perform that marketing, and thus the blog went on to attract people who already shared the values - nerdy booksmarts. (Note that a) to the best of my knowledge, HPMoR postdates the Sequences; b) Harry Potter isn't exactly a booksmart-selecting fandom, as is shown by many factors, including the gross proportion of "watched-the-films-never-read-the-books" fans as against readers, AND people who imagine Draco Malfoy to be a refined aristocrat whose behavior is, though not nice, perfectly calibrated, instead of the petty bully we see in both books and films, AND - I should stop here before I go on a tangent; so I am not certain how relevant "Harry Potter fans" really is.)

Sometimes, people do worse when they try to be rational because they have a poor model of rationality.

One error I commonly see is the belief that rationality means using logic, and that logic means not believing things unless they are proven. So someone tries to be "rational" by demanding proof of X before changing their behavior, even in a case where neither priors nor utilities favor not X. The untrained person may be doing something as naive as argument-counting (how many arguments in favor of X vs. not X), and is still likely to come out ahead of the person who requires proof.

A related error is using Boolean models where they are inappropriate. The most common error of this type is believing that a phenomenon, or a class of phenomena, can have only one explanation.

Here's one example of a change I've made recently, which I think qualifies as x-rationality. When I need to make a decision that depends on a particular piece of data, I now commit to a decision threshold before I look at the data. (I feel like I took this strategy from a LW article, but I don't remember where now.)

For example, I recently had to decide whether it would be worth the potential savings in time and money to commute by motorcycle instead of by car. I set a threshold for what I considered an appropriate level of risk beforehand, and then looked up the accident statistics. The actual risk turned out to be several times larger than that.

Had I looked at the data first, I would have been tempted to find an excuse to go with my gut anyway, which simply says that motorcycles are cool. (I'm a 23-year-old guy, after all.) A high percentage of motorcyclists experience a serious or even fatal accident, so there's a decent chance that x-rationality saved me from that.
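The precommitment pattern described above can be sketched in a few lines of code. This is a hypothetical illustration only: the threshold value, the risk figure, and the function names are invented, not taken from the comment.

```python
# Sketch of the "commit before you look" pattern: fix (and record) the
# decision rule before observing the data, so the observation can't be
# rationalized afterward to fit a preferred conclusion.
# The numbers here are invented for illustration.

ACCEPTABLE_RELATIVE_RISK = 2.0  # precommitted threshold, chosen before seeing statistics

def decide(observed_relative_risk: float) -> str:
    """Apply the rule fixed in advance; no post-hoc adjustment allowed."""
    if observed_relative_risk <= ACCEPTABLE_RELATIVE_RISK:
        return "motorcycle"
    return "car"

# Only now look up the actual statistic (hypothetical figure):
print(decide(8.0))  # -> car
```

The point of writing the threshold down first is that the gut (which "says motorcycles are cool") gets no vote once the data arrives.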

Huh. I did the same thing and came to the exact opposite conclusion, and have been commuting by two-wheeler for 15 years now. What swayed me was: a huge proportion of the accidents involved really excessive speed, and a similarly huge proportion happened to untrained motorcyclists. So: if I don't speed (much) and take the time to practice regularly on a track, preferably with an instructor, I have eliminated just about all the serious accidents. In actuality I have had zero accidents outside the track, and the "accidents" on the track have been deliberate tests of the limits of myself and the bike (on a bike designed to take slides without permanent damage). The cash savings are higher in Europe due to taxes on fuel and vehicles, and the size of the bike is more appreciated in cities that were designed in the Middle Ages, so the upside is larger too; but it seems we don't have anything like the same risk tolerance. Edit: it is also possible that motorcycling is a lot safer in Europe than in the US? Assuming you are from the US, of course.
I'm from California, where it's legal to split lanes. Most places don't allow that. I could just decide not to, but the ability to skip traffic that way is probably the single largest benefit of having a motorcycle.
Most states don't allow that, but in Europe it's standard practice. I probably wouldn't bother with the bike if I couldn't.

Am I the only one who isn't entirely positive about the heavy use of language identifying the LW community as "rationalists", including terms like "rationalist training" etc.? (Though he is by far the heaviest user of this kind of language, I'm not really talking about Eliezer here; his language use is a whole topic of its own. I'm restricting this particular concern to other people, to the general non-Eliezer LW jargon.) Is strongly self-identifying as a "rationalist" really such a good thing? Does it really help you solve problems? (I second the questions raised by Yvain.) Though perhaps small, isn't there still a risk that the focus becomes too much on "being a rationalist" instead of on actually solving problems?

Of course, this is a blog about rationality and not about specific problems, so this kind of language is not surprising and sometimes might even be necessary. I'm just a bit hesitant towards it when the community hasn't actually shown that it's better at solving problems than people who don't self-identify as rationalists and haven't had "rationalist training", or shown that the techniques fostered here have such a high cross-domain applicability as seems to be assumed. Maybe after it has been shown that "rationalists" do better than other people, people who just solve problems, I would feel better about this kind of jargon.

I find it much more tolerable when 'aspiring' is added.

I define "rationalist" to be "someone who tries to become more rational". I'm fine with calling this a community of rationalists. I don't like it when people use "rationalist" to refer exclusively to members of this community.

This experiment seems easy to rig4; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. .... [O]ne way to fail your Art is to expect more of it than it can deliver.... Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.

To make a somewhat uncharitable paraphrase: you read many artic... (read more)

I accept that to some degree my results say more negative things about me than about rationality, but insofar as I may be typical, we need to take them into account when considering how we're going to benefit from rationality. My inability to communicate clearly continues to be the bane of my existence. Let me try a strained metaphor.

Christianity demands its adherents "love thy enemy", "turn the other cheek", "judge not lest ye be judged", "give everything to the poor", and follow many other pieces of excellent moral advice. Any society that actually followed them all would be a very nice place to live.

Yet real-world Christian societies are not such nice places to live. And Christians say this is not because there is anything wrong with Christianity, but because Christians don't follow their religion enough. As the old saying goes, "Christianity has not been tried and found wanting, it has been found difficult and left untried." There's some truth to this.

But it doesn't excuse Christianity's failure to make people especially moral. If Christianity as it really exists can't translate its ideals into action, then it's gone wrong som... (read more)

Well, as a former Christian (now an atheist thanks to OB/Yudkowsky) I have to disagree. Christianity doesn't work, regardless of whether you live by it or not. I don't claim that I lived 100% as expected, but I implemented some things quite literally, like "turn the other cheek" (btw, taking this literally is a misinterpretation of the real meaning). I can say: it's nonsense, it doesn't work, it only makes other people take advantage of you, and yes, I'm talking from experience.
"Turn the other cheek" is a phrase with a natural figurative meaning—"expose yourself to further aggression". Are you saying that this figurative meaning should itself be taken figuratively, or just that "turn the other cheek" should not be interpreted literally literally?
Here is the whole passage: Matthew 5:39. "Turn the other cheek" can only be understood if you know the cultural context of the time, which goes as follows: the left hand was considered unclean, so people used the right hand, and for a person to strike your right cheek with his right hand implies that he is giving you a backhand slap. This was understood as a humiliating gesture that a higher-ranking person would dish out to someone lower in status, e.g. a master to his servant. Now, if you received such a slap and proceeded to offer the other cheek, you would put the higher-ranking person in a conundrum. He can no longer reach your right cheek with a backhand slap; the only option he has left is attacking you on the left cheek. But attacking on the left didn't have the same social connotation; it would probably just be interpreted as de facto aggressive behavior, implying that the higher-ranking person is acknowledging you as a social equal and also giving you the right to fight back. The same logic is also present in "walking another mile" and "leaving the undergarment" (which are part of the same biblical passage). So we can see that offering the other cheek puts the other person in check, and has nothing to do with "exposing oneself to further aggression" or being meek and humble; it is in fact a gesture of defiance, a very clever one.
Former christian here. Every once in a while, I catch myself about to--or worse, in the middle of--recounting an explanation like the one you just gave for which I have no evidence other than some pastor's word. On more than one of those occasions, the recalled explanation was just wrong. I haven't googled your explanation here, so it's possible that there's lots of evidence for it, but my prior for that is fairly low (it seems like a really specific piece of cultural information, and it pattern matches against "story that reinterprets well known biblical passage in a way that makes the inconvenient and obvious interpretation incorrect"). I'm incredibly pessimistic about the abilities of the average christian pastor at weighing the evidence for multiple competing historical hypotheses and coming up with the most correct answer (it's basically their job to be bad at this). I know that reversed stupidity is not intelligence, but as a rule I no longer repeat things I "learned" in a church setting unless I've independently verified it. (Oh, and: my apologies if you came by that story via a more rigorous process.)
I was interested enough to google, and found some relevant links. One has (unlinked, presumably offline) references for an explanation like that; another has more of the argument and says "resist not evil" is a biased or incorrect translation invented by King James' Bible translators. From the latter page (by Walter Wink): "Jesus did not tell his oppressed hearers not to resist evil. His entire ministry is at odds with such a preposterous idea." - I had noticed that a lot of his behaviour described in the Bible was inconsistent with this doctrine. He makes more sense without it.
This seems strange. I don't know Greek, so I can't look at the closest-to-original text, but I can read some Latin. So I looked at the Vulgate, which is a) Catholic and b) many centuries older than the KJV. The phrase used there, "non resistere malo", means something like "don't resist the bad" but might be closer to "don't fight bad things".
Alright, wikipedia has better evidence than I expected, although I'm also not going to read the referenced book. Wink's piece is coherent and well-put, but doesn't seem like great evidence-- I cannot tell if he mentally wrote his conclusion before or after making those arguments, and I can't tell which elements are actual features of ANE culture identified by historians and which are things that just sounded reasonable to him.
There are specific things that pastors are required to be wrong about, yet when it comes to adding mere details for the sake of little more than curiosity, there is little reason to believe they would be worse than average. For the most part, of course, they will simply be teaching what they were taught at theological college - the evidence-weighing is done by others. This is how most people operate.
What you say is true for competent pastors. I've probably been exposed to more than my fair share of the incompetent ones. ...I noticed a long time before I deconverted that when pastors said something about a subject I knew something about, they were totally wrong some ridiculously high percentage of the time. Should have tipped me off.
I've been fortunate inasmuch as several of my pastors and most of my lay preachers had science degrees. Mind you, I suspect I've selected out most of the bad ones, since I recall I used to spend time with my family absolutely bagging the crap out of those preachers who said silly things.
I didn't learn that in a church setting; I read it on the internet, on a page that claimed it to be the result of some scholar's work. What I liked most about the explanation is that it makes sense of the weird examples: cheek slapping (usually men use their fists if they mean to be aggressive) and forcing someone to walk a mile (which makes sense if you assume the Roman occupation context). So it is the best explanation I have heard to date, sigh.
Hm, as Caspian says it shows up on wikipedia. I think I have heard a garbled version of this story before, which probably contributed to my skepticism (which, if you squint just right, makes my prior comment an example of the thing I was protesting). Anyway, I'll retract the accusatory nature of my prior comment. I'm still pretty skeptical, but I don't care enough to read the book wikipedia references. :)
I noticed after posting that roland had linked to the same wikipedia page I did with nearly the same URL in his earlier comment Looks like we both missed it.
Huh. I recall reading the rest of that comment. Joke's on me, I guess.
I encountered an identical explanation on the History Channel a decade ago (this was back when the history channel was actually about history beyond Nostradamus and Hitler).
This explanation is neat, but it sounds quite contrived to me, especially since the previous sentence clearly says, "do not resist an evil person". Is there any reason to believe that your interpretation is the one that the writers of the Bible originally intended?
Writers of the Bible? Who wrote the Bible? It is a collection of folklore that was at first transmitted orally, until one day some people started writing it all down. The people who wrote it down were not necessarily the originators or even first witnesses of the stories. As always, different people will try to extract different teachings from the same stories. Maybe there was originally the parable of the cheek, and later someone added "do not resist an evil person", trying to make a general teaching out of it while disregarding or not knowing the original context. To really find out, you would have to go back to the origin of the whole thing and understand the cultural context present at that time. That there is a lot of confusion nowadays is an indicator that a lot of the context got lost. Did anyone ever force you to walk a mile with them? Isn't it weird that such a thing is in the Bible? It is, until you understand that there was a Roman occupation and that soldiers had the right to demand you carry their pack for a mile (but not more; a soldier could be punished if he forced you beyond that, hence the second-mile thing).
Sure, that's true, but: I agree with you there. I kind of assumed that you have already accomplished this task, though, since you are pretty confident about your interpretation of the "other cheek" concept. All I was asking for is some evidence that your interpretation is the more correct one. I agree that it sounds neat, but that's not enough; you also need to show that this was the passage's original, intended meaning. Same thing goes for miles and undergarments.
How would you accomplish this?
I'm not a historian, so I don't really know. But, naively, I'd try to find some historical evidence that the "slapping customs" you describe actually existed and were widely followed, and that someone actually took Jesus's advice and implemented it successfully. I would do so by looking through sources other than the Bible, such as works of fiction, historical documents, paintings and sculptures, etc. I could also try tracing some oral folklore backwards through time, to see if it converges with the other sources.
It is the explanation that makes the most sense to me, but that doesn't mean it is the correct one. The mile thing only makes sense in a context where people actually force you to go a mile with them, thus the roman law explanation sounds plausible. Again, doesn't mean this is the correct one.
Ok, in this case, your explanation is nothing more than a "just so" story. I could make up my own story and it would be just as valid (which is to say, still pretty arbitrary). And yet, you stated your own explanation as though it were fact. That's confusing, at best.
HT G.K. Chesterton (I was sure it would be Lewis, so I'm glad I decided to Google anyway)
On the other hand, I once read that certain influences of religion are found across societies even among non-explicitly-religious people, e.g. people from historically-predominantly-Catholic regions are usually more likely to turn a blind eye to minor rule violations, or people from historically-predominantly-Calvinist regions are usually more likely to actively seek economic success (whether they self-identify as Catholic/Calvinist or not). And my experience (of having lived almost all my life in Italy, but having studied one year in Ireland among lots of foreigners) doesn't disconfirm this.

Only a handful responded

I am reserving my judgment for a couple of years. See how I'm doing then.

I'm of the same opinion.

Would Newton have gone even further if he'd known Bayes theory? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

An interesting choice of example, given that Bayesian probability theory as we know it (inverse inference) was more or less invented by Laplace and used to address specific astronomical controversies surrounding the introduction of Newton's Laws, having to do with combining multiple uncertain observations.

I have yet to hear what anyone even means by "rationalism" or "rationalist," let alone "x-rationality." People often refer to the "techniques" or "Art of rationality" (a particularly irksome phrase), though as best I can tell, these consist of Bayes theorem and a half-dozen or so logical fallacies that were likely known since the time of Aristotle. Now, I've had an intuitive handle on Bayes theorem since learning of it in high school pre-calc, and spotting a logical fallacy isn't particularly tough for anyo... (read more)

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

Rationality is not enough to pick the right stocks. You need to have the willpower to read the vast amount of material to enable you to do that pick.

Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

Remember your post on haunted rationalists, and Eliezer's reply about how it's possible to successfully work to accept rational beliefs even with the not-so-conscious, not-so-verbal parts of oneself that might continue to believe in ghosts after one rationally understands the arguments against?

It sounds like maybe you mean “rationa... (read more)

Extreme rationality is for important decisions, not for choosing your breakfast cereal. Really important decisions - by which I mean those that you'd sleep on, and allocate more than ten minutes of thought - typically coincide with changes in habits and routine, which don't happen more often than once in several months. For more common decisions, we only have time and energy for ordinary rationality.

Practice creates facility. Facility lowers the bar to practice. Repeat. There is no time at which rationality may not be applied, and without practice at small things, how will you apply it to big things?

But besides, isn't it altogether just more fun to think clearly? When I notice myself not doing so, it is as painful as watching a beautiful machine labouring with leaking pipes and rust.

I don't keep fit just to catch trains or eke out a few more years from the meat.

Can you give examples of what your practice looks like?
It begins with noticing, and continues by doing. Just from systematically noticing what you are doing, in any sphere, what you do changes, even without making a special effort to change. Yvain mentioned this happening for him in footnote 5. Once you see, clearly, that there is a choice in front of you, and what it is, it is no more possible to choose what you think is wrong than to believe what you think is false.
This comment is helpful, but if you could include some examples that use concrete nouns, it would be more helpful.

Thank you for pressing me for concrete details.

Some of what follows goes way back before OB, which is one of various things I have studied or done -- a major one, but there are others -- on the matter of how to think better. The first, for example, I describe as inside vs. outside view, because that is what it is. The practice goes back longer; OB gave it a name.

I. Getting out of bed in the morning. That may seem a trifle, but there is no time at which rationality does not matter, and an hour a day is more than a trifle. The inside view whispers seductively to just laze on half-awake, or drift off to sleep again. The outside view reminds me that it has been my invariable experience that lazing on does not wake me up, that the only thing that does is getting up and moving around, and that twenty minutes after getting up (my typical boot time for both mind and body) I will be more satisfied with myself; the sooner I get up, the sooner that happens.

The more clearly I can contemplate the outside view, the easier it becomes to make a move. I can't claim expert proficiency in this. I still get up much faster when I have a specific three-alarm-clock reason, the moment the wristwatch pinger goes off.

II. I beg... (read more)

You can backslash the period to defeat automatic list formatting:

2\. Two
Foo
1\. One

looks like:

2. Two
Foo
1. One

More details here.

Edited to add: Excellent comment, by the way.
Thank you for the link.
On point 2, I wonder how to generalize this lesson. I can see that many people follow similar practices for tracking their spending, and many of them claim similar benefits. But how would you know where else to apply the technique? Few people claim to do the same thing with their time; why is that different? How would you suggest generalizing this approach? What other arenas might it be applicable in? Or is it only valuable for increasing awareness of expenses?
It goes beyond increasing awareness: whatever you increase your attention to, within yourself, almost inevitably changes. It has been suggested that there is a fundamental brain mechanism operating here: reorganisation follows attention. Claimer: I have known and worked with William Powers (whose work is described in that link) for many years. Often while reading OB or LW I have itched to recommend his works, but have held off for fear of seeming to be touting a personal hobbyhorse. But I really do think he Has Something. (BTW, I did not have any hand in writing the Wiki article.) Yvain mentioned that looking at his application of rationality is tending to increase it. Steven Barnes recommends the practice of stopping every three hours during the day to meditate for 5 minutes on your major life goals. To-do lists help get things done. Some recommend writing down each day's goals in the morning and reviewing them in the evening. Attention, in fact, is a staple of practically every teaching relating to personal development, whether rationalistic or religious. You cannot change what you are doing until you see what you are doing.
Actually I've been repeatedly recommended to track my time usage as a means of being aware of wasting it and then improving my time management. Alas, I haven't yet gotten around to actually trying it.
5Scott Alexander
I agree with this, but I also think that our big important decisions probably determine a lot less of our success than we like to think. A very large part of success probably comes from either the sum of our smaller decisions, or from decisions that didn't seem too important at the time but ended up making a very large difference in retrospect. The experiment I mentioned has raised my awareness of this. I also think the big decisions are the ones it's hardest to apply extreme rationality to, both because the emotional stakes are so high and because by the time we make them we've already made a pile of smaller decisions that have tipped us in one or the other direction. See . I predict not-significantly-different statistics for people who have trained in extreme rationality, though without a very high degree of confidence.
Eliezer Yudkowsky
I spend a fair amount of time taking aim directly at this phenomenon, y'know. Summarized in Crisis of Faith. Because the technique as described is too hard for mortals to use, or because the technique as described is inadequate?
Necroing. Your dietary decisions are supposed to have large and long lasting effects on your health. Take into account the multiple and conflicting opinions on what constitutes a good diet, the difficulty of changing one's mind and habits, and it seems extreme rationality might be just the thing you need for choosing breakfast.
Can you give an example of such a decision?

Yes, yes, yes, yes, yes. And also yes.

I had a similar reaction to the fictional rationalist initiation ceremony.

That said, on further consideration, I'm not sure the "Bayesian Conspiracy" has a choice, given its goals.

It's possible that, even though these sorts of policies do turn away perfectly competent rationalists, they are the only alternative to ending up with a comfortable community of one-or-two-sigmas-above-the-mean rationalists rather than an ultra-elite x-rationality club that can bootstrap itself into the sort of excellence that we e... (read more)

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

The availability of investing does not disprove that akrasia is the complete explanation. Successful investing is rationality + financial education + a lot of work (Buffett is rumored to read an incredible number of accounting statements), and hence subject to akrasia.

Better decisions are clearly one possible positive outcome of rationality training. But another significant positive outcome is reaching the same decision faster. In my work, there are a number of rationality techniques that I have learned that have not necessarily changed the end result I have come to, but that have contributed to me spending less time confused, and getting to the right result more quickly than I otherwise would have.

Anything that frees up time in this way has real, positive, and measurable effects on my life. (Also, confusion and things-not-working are frustrating and stressful; so the less time I spend confused, the better.)

Could you please tell us the specific techniques and/or situations? (I'm sorry to keep asking this of everyone, but the answers are really interesting/useful. We need to figure out what different peoples' practice actually looks like, and what mileage people do and don't get from it. In detail.)
[Sorry for the slow response. Have been away for the weekend.] No need to apologize, it's an excellent question. And to be honest, because my work involves a lot of data analysis, and using such analysis to inform decision-making, I may be cheating somewhat here. There are times when remembering that "probability is in the mind" has stopped me getting confused and helped me reach the right answer more quickly, but they're probably not particularly generalizable. ;)

Here's a quick list of some techniques that have helped that might be more generally applicable. They're not necessarily techniques that I always manage to apply consistently, but I'm working on it, and when I do, they seem to make a difference. (Listing them like this actually makes them seem pretty trivial; I'll leave others to decide whether they really warrant the imprimatur of "rationality techniques".)

(1) Avoiding confirmation bias in program testing: I'm not a great programmer by any stretch of the imagination, but it is something I have to do a fair amount of. Almost every time I write a moderately complicated program, I have to fight the urge to believe that this time I've got it basically right on the first go, to throw a few basic tests at it, and get on with using it as soon as possible, without really testing it properly. The times I haven't managed to fight this urge have almost always resulted in much more time wasted down the line than taking a little more time at the outset to test properly.

(2) Leaving a line of retreat: Getting myself too attached to particular hypotheses has also wasted a fair amount of my time. In particular, there's always a temptation, when data happens not to fit your preconceived ideas, to keep trying slightly different analyses to see whether they'll give you the answer you expected. This can sometimes be reasonable, but if you're not careful, can lead to wasting an enormous amount of time chasing something that's ultimately a dead end. I think that forcing

I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

I'm working on this.

In the spirit of concrete reductions, I have a question for everyone here:

Let's say we took a random but very large sample of students from prestigious colleges, split them into two groups, and made Group A take a year-long class based on Overcoming Bias, in which students read the posts and then (intelligent, engaging) professors explained anything the students didn't understand. Wherever a specific technique was mentioned, students were asked to try that technique as homework.

Group B took a placebo statistics class similar to every other college statisti... (read more)

Does the course use CBT-like techniques, where e.g. when "Leave a line of retreat" is taught, participants specifically list out all the possibilities where fear might be preventing them from thinking carefully, and build themselves lines of retreat for those possibilities? And learn cached heuristics for noticing, through the rest of their lives, when leaving a line of retreat would be a good idea, together with habits for actually doing so? Also, does the course have a community spirit, with peers asking one another how things went, and pushing one another to experiment and implement? If so, I'd give 50% odds (for each separate proposition, not the conjunction) that the group A salaries are higher variance than the group B's, and that the 98th percentile wealthiest / most famous / most impactful of group A is significantly wealthier / more famous / more successful at improving their chosen fields than the 98th percentile of group B. Significantly, like... times five, say (though I'd expect a larger multiplier from the "changing their chosen fields to work well" than from the "making more money"; strategicness is more rarely applied to the former, and there's lower hanging fruit). (I would not expect such a gap between the two groups' medians.)
I would expect very little correlation with salaries. And about self-reported happiness - I often think that knowing about all biases, memory imperfections and all that stuff, and about how difficult it is to decide correctly, makes me substantially less happy.
prase, is happiness much of a goal for you? If so, have you tried to apply rationality toward it, e.g. by reading the academic research on happiness (Jonathan Haidt's "The Happiness Hypothesis" is a nice summary) and thinking through what might work for you?

The most effective way for you to internally understand the world and make good decisions is to be super rational. However, the most effective way to get other people to aid you on your quest for success is to practice the dark arts. The degree to which the latter matters is determined by the mean rationality of the people you need to draw support from, and how important this support is for your particular ambitions.

I strongly suspect that it is unreasonable to expect people to actively apply x-rationality on a frequent, conscious basis--to do so would be to fight against human cognitive architecture, and that won't end well.

Most of our decisions are subconscious. We won't be changing this. The place of x-rationality is not to make on-the-spot decisions, it's to provide a sanity check on those decisions and, as necessary, retrain the subconscious decision making processes to better approximate rationality.

I think you are right that x-rationality doesn't help an individual win much on a day to day basis. But there are some very important challenges that humanity as a whole is failing for lack of x-rationality.

The current depression. The fact that we aren't adequately protecting the earth from asteroids. DDT being banned. Nobody's getting frozen. Religion. First-past-the-post elections. Most wars.

Paul Crowley
DDT isn't banned, never has been. I'm with you on most everything else. At some stage we're going to have to work out how to talk about politics here. I've wondered about a top-level post to find out what we practically all agree on - I suspect for example that few of us think the drug war is a good idea.
From a 1972 Environmental Protection Agency press release entitled "DDT Ban Takes Effect":
Religion, FPTP elections and wars are irrational even according to non-x rationality. (With all sorts of caveats, which apply just as much to x-rationality.) The DDT ban thing is a myth, as ciphergoth points out. Asteroids and cryonics, maybe, in so far as making the right decisions there probably involves a large element of Shut Up And Multiply; but actually we are making some effort to spot asteroids early enough, and the probabilities governing whether one should sign up for cryonics are highly debatable. Perhaps more x-rationality would help humanity as a whole to address those issues, but mostly they come about because so many people aren't even rational, never mind x-rational.
Perhaps--but many a logician has believed in God. Take somebody like Thomas Aquinas--he was for a long time the paradigm of rationality. I'd suggest it takes x-rationality to truly shatter your pre-existing losing framework and re-examine your priors.
Do you have evidence that it was lack of x-rationality that enabled Aquinas to believe in God, rather than (1) different evidence from what we have now (e.g., no long track record of outstandingly successful materialistic science; no evolutionary biology to provide an alternative explanation for the adaptation of living things; no geological investigations to show that the earth is very much older than Aquinas's religious beliefs said it was) and (2) being embedded in a culture that pushed him much harder towards belief in God than ours does to us? Robert Aumann, to take an example Eliezer's used a few times, is pretty expert in at least some aspects of the art of x-rationality, and is also Orthodox Jewish.
Exactly--Aumann has the same evidence that you or I have about materialist scientific facts, yet chooses not to utilize x-rationality to accurately evaluate his beliefs. While I can't interview Aquinas about the reasons he believed in God, I'm sure the things you listed were causally important. However, if he had had x-rationality, the other elements wouldn't have made a difference--in some sense, x-rationality is a way of getting around the limitations of a particular culture and time. Do you think a general AI would have any difficulty disbelieving in God, even if it had been "raised" in a culture in which belief was common and incentivized?
That probably depends on what you mean by "a general AI". We humans are (approximately) general natural intelligences (indeed, that's almost the definition of what many people mean by "general" in this context), and plenty of humans have lots of difficulty disbelieving in God. If you mean an AI whose intelligence and knowledge are greatly superhuman, emerging from a human culture in which belief in God is common, then I expect it would (knowing its own intellectual superiority to us) have little difficulty escaping from the cultural presumption of theism. As for a culture of superhuman AIs in which theism was common, I don't know; the mere existence of such a culture would be extremely interesting and good evidence for something surprising (which might or might not be theism).
I mean an AI that follows Eliezer's general outlines of one; that is, an AI which can extrapolate maximally from a given set of evidence. By the way, I find it hard to imagine a culture of superhuman AIs in which theism is common. I'd be interested to talk a little more about how that would work--in particular, what evidence each AI would accept from other AIs that would convince them to be a theist.
Yeah, me too. That was rather my point.
So by spending our resources on studying rationality, we are cooperating in a giant Prisoner's Dilemma?
Paul Crowley
No, people don't only do good in the hope that good will be done to them; most people value the welfare of others and the survival of humanity inherently, at least to some extent.

I think one reason might be that the vast majority of the decisions we make are not going to make a significant difference as to our overall success by themselves; or rather, not as significant a difference as chance or other factors (e.g., native talent) could. For example, take the example about not buying into a snake-oil health product lessdazed uses above: you've benefited from your rationality, but it's still small potatoes compared to the amount of benefit you could get from being in the right place at the right time and becoming a pop star... or ge... (read more)

[W]e should generally expect more people to claim benefits than to actually experience them.

I don't think this claim is supported. There are reasons (some presented) why we should expect this. There are also reasons (a few listed below) why we should expect the opposite. I don't see at all why we should expect either set to dominate.

Reasons I might not post a benefit I've accrued:

1) I'm too busy out enjoying my improved life. 2) The self-congratulatory thread smells too much of an affective death spiral. 3) I am unsure how much of the benefit was act... (read more)

study evolutionary psychology in some depth, which has been useful in social situations

Could you elaborate on this?

I doubt that it directly told you anything useful, but it was more likely helpful in telling you to pay attention and not to interpret things through your usual beliefs.

X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with "A good x-rationalist will know if fashion is important, and study fashion." A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook.

Yet many highly intelligent people with normal rationality have terrible fashion sense, particularly males, at least in my anecdotal experience. Di... (read more)

"Yet many highly intelligent people with normal rationality have terrible fashion sense"

Hrm, I'm not sure what evidence there is that highly intelligent people have worse fashion sense than equivalent people [let's stick to the category of males, with which I'm most familiar]. It seems to me like "fashion" for males comes down to a few simple rules, that a monkey (or, for that matter, any programmer or mathematician) can master. The problem seems to be that (1) one does need to master these rules, and (2) sometimes, it means one does not dress comfortably.

I would like to offer a competing hypothesis: nerds have just as much "innate" fashion sense as non-nerds, but they feel that fashion is beneath them, that dressing comfortably is more important than following fashion, or that they would prefer to dress to impress nerds (with T-shirts that say "P(H|E) = P(E|H)*P(H)/P(E)", for example) than to impress non-nerds. In other words, the much simpler hypothesis "dress is usually worn to self-identify as a member of a tribe" is enough to explain nerds' perceived lack of fashion sense.

[For the record, here is how a nerd male can "simulate" a reasonable facsimile of fashion sense: for semi-formal occasions, get a couple of nice suits and wear them. If nobody else would wear a tie, wear a suit without the tie (if your ability to predict whether people will wear a tie is that bad, improve it with explicit Bayesian approximation). For all other occasions, wear dark-colored slacks and a button-down shirt with a compatible color (ask a person you trust about which colors go with which, and keep a table glued to the inside of your closet). Any "nerd" has mastered skills tremendously more complicated than that (hell, correctly writing HTML is more complicated). One can only assume it is lack of motivation, not of ability.]

For myself as an example of a nerd, I can definitely say the reason I dress "with a horrible fashion sense" is as a tribal identification scheme. In situations where my
Personally, I've been able to get away with just dark slacks and a dark formal shirt. That said, I usually dress quite "horribly" by fashion standards, because there's no one in my day-to-day life who'd be impressed by my mad fashion skills, so I might as well dress comfortably at no penalty.
I've talked before in this same vein about the limits of rationality. One can be a perfect rationalist and always know what to do in a given situation, yet still be unable to do it for whatever reason. This suggests pretty strongly that good "rationalists" would be wise to invest their time into other areas as well, since rationalism alone won't turn you into the ubermensch. It won't make you healthy and fit, it won't enable you to talk to girls any better or make friends any easier. (And I object to any conception of "rationalism" so sweepingly broad that it manages to subsume every possible endeavor you'd set out on, e.g., the old "a good rationalist would realize the importance of these things and figure out meta-techniques for developing these skills.")
Three other suggestions: (d) they've let "bad at fashion", "bad social skills", and the like become part of their identities, rationalized by the belief that those things are shallow, non-intellectual, whatever; (e) they didn't practice those skills at a young enough age (because they were too young to realize the importance, they were socially excluded, ...) to deeply learn them, also reinforcing both (d) and a (destructive, hard to break) sense of being low-status; (f) high intelligence + interest/aptitude in rationality correlates with mild autism-spectrum traits (not necessarily sufficient to be diagnosed, but enough to cause social problems, particularly in childhood).
I think all of those are highly plausible factors (all of which applied to me, btw). Additionally, they may have internalized the stereotype that rational people should act like Spock. And conversely, they may associate those skills with people they dislike: "those are the shallow kinds of things the popular people do, whereas I am deep."

I like the interactionist perspective between nature and nurture you are taking here. It's not necessarily destiny that these people grow up with social deficits, it's just a common outcome of the interaction of their individual characteristics with a negative formative social environment.

This is a can of worms that I was thinking about opening up. Our normal intelligent rationalists would also tend to be high on "systemizing" rather than "empathizing" in Simon Baron-Cohen's theory, and more interested in "things" on the "people vs things" dimension. The result is that the kind of neurotypical cognition required for social skills and fashion sense may seem non-intuitive or even alien to the category of people we are talking about. For instance, fashion and social skills often involve doing things simply because other people are doing them, which may defy one's sense of individualism, and belief that behaviors should have objective purpose.

Furthermore, this type of individual may feel that people should be accorded status based on "objective merit," which means being good at the things that matter to our intelligent rationalists. They may find it nauseating that status often depends on things like clothing, body language and voice tonality, who you hang out with, etc... rather than on actual intelligence or competence. 90% of social communication will seem meaningless to them, because it is based on emoting, status ploys, or pointing out things that are obvious, in contrast to the type of communication that is "really" meaningful, such as exchanging of ideas, factual information, or practical processes. For this type of int
I agree that there's some level missed by the distinction between 'normal' rationality and 'x-rationality', and it's in that middle ground that I feel I've derived the most practical benefits from rationality. The examples you give are good ones. Other examples I could give from my own experience are personal finance and weight loss.

Using personal finance as an example: I consider myself to have always possessed an above-average level of intelligence and 'normal' rationality. I have a scientific education and make my living as a computer programmer. Until fairly recently, though, I let my emotional dislike of form-filling get in the way of organizing my personal finances effectively. A general desire to more rigorously apply 'normal' rationality in my life to improve my outcomes led me to recognize that I was irrationally allowing my negative reaction to paperwork to have a significant financial impact.

By comparing the marginal utility of a few hours of unpleasant labour optimizing my tax situation to a few hours of tedious paid employment, I realized I was making an irrational choice, and recognizing that was an aid in overcoming the obstacle. Recognizing the logical flaws in the kinds of rationalizations I'd used to justify my previous lack of organization was also helpful. Often I would use clever-sounding arguments to justify avoiding a task which was simply unpleasant.
HughRistik, this is only evidence if people with a higher level of rationality do better at improving their fashion skills, social skills, etc. My impression is that we do do somewhat better, but it's not obvious, and more data would be good.

In the case of Hubbard, preaching irrationality and being irrational are different things. Hubbard went genuinely crazy in his later years, but he knew what he was doing when he invented Scientology. He even said in an interview once, "I'm tired of writing for a penny a page. If a man really wanted to make a million dollars, he would invent a religion."

If you're going to craft memetic weapons, you'd better make damn sure you've developed a resistance to your own products before you begin peddling them. Hubbard ended up spending lots of his time around people who had been infected with his viral religious propaganda... and inevitably, he became infected himself. People with high Int and Cha tend to believe their own propaganda. They're also the ones who tend to have unrealistically positive beliefs about their own intellectual competence, and little concern about going through the tedious and uncomfortable process of examining their own beliefs and practices.
Except that even before all of that and formally inventing Scientology, he hobnobbed with the likes of Crowley and believed that there is a horrible conspiracy of psychologists that Must Be Stopped.

Reading OB/LW forced me to look hard at my contradictory beliefs about politics, and admit that I no longer believed certain things I used to believe, particularly about the market. If I don't get anything else out of it, that alone would be a large bonus.

Even before reading this, I had formulated an explanation for myself: "all people who are too stupid not to jump off the roof will simply die out." Market mechanisms and natural selection will remove all the really destructive consequences of everyday stupidity available to them and will collect all the low-hanging fruit, so the study of rationality will help only on issues with rare or individually weak negative consequences. On average, rationalists will be more successful than non-rationalists, but differences between individuals will be greater than differences betwe... (read more)

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Well, it did ultimately help you make SlateStarCodex and Astral Codex Ten successful, which provided a haven for non-extremist thought to thousands of people. And since the latter earned hundreds of thousands in annual revenue, you were able to create the ACX grants program that will probably make the world a... (read more)

1: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

Oooh! I realize this is an old post, but I'm desperately curious for some concrete examples of this.

"Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts."

I don't understand the implications of seeing it as part of the same art or a different art altogether.


as robin has pointed out on numerous occasions, in many situations it is in our best interest to believe, or profess to believe, things that are false. because we cannot deceive others very well, and because we are penalized for lying about our beliefs, it is often in our best interest to not know how to believe things more likely to be true. refusing to believe popular lies forces you to either lie continually or to constantly risk your relative status within a potentially useful affiliative network by professing contrarian beliefs or, almost as bad, no b... (read more)

winning takes time. few of us have gotten rich yet.

May I humbly suggest changing the title to "Extreme Rationality: It's Not That Great"? (This will not break any links!)

It actually just occurred to me that the intelligence professions might benefit greatly from some x-rationality. We may not have to derive gravity from an apple, but the closer we come to that ideal, the less likely failures of intelligence become.

Intelligence professionals are constantly engaged in a very Bayesian activity, incorporating new data into estimates of probabilities and patterns. An ideal Bayesian would be a fantastic analyst.

Eliezer Yudkowsky
Ja, in particular modern intelligence professionals seem to have problems with separating out the information they get from others and the information they're trying to pass on themselves, reporting only their final combined judgment instead of their likelihood-message, which any student of Bayes nets knows is Wrong.
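The double-counting Eliezer describes is easy to see with a toy calculation. The sketch below uses entirely made-up numbers: two analysts observe independent pieces of evidence about a binary hypothesis H, with a shared prior of 0.1. If each analyst folds the prior into a posterior and a consumer then combines those posteriors as if they were raw likelihoods, the shared prior gets applied three times instead of once.

```python
# Toy illustration (all numbers hypothetical): why analysts should pass
# on their likelihood-message rather than their combined posterior.

def posterior(prior, lik_h, lik_not_h):
    """Bayes' rule for a binary hypothesis H given evidence E."""
    num = prior * lik_h
    return num / (num + (1 - prior) * lik_not_h)

prior = 0.1  # shared prior P(H)

# Analyst A's evidence: P(E1|H) = 0.8, P(E1|~H) = 0.2
# Analyst B's evidence: P(E2|H) = 0.7, P(E2|~H) = 0.3

# Correct: multiply the likelihoods (evidence independent given H),
# then apply the shared prior exactly once.
correct = posterior(prior, 0.8 * 0.7, 0.2 * 0.3)

# Wrong: each analyst reports a posterior, and the consumer combines
# those posteriors as if they were likelihoods, applying the prior again.
p_a = posterior(prior, 0.8, 0.2)
p_b = posterior(prior, 0.7, 0.3)
wrong = posterior(prior, p_a * p_b, (1 - p_a) * (1 - p_b))

print(f"correct: {correct:.3f}")         # ~0.509
print(f"double-counted: {wrong:.3f}")    # ~0.013
```

With these numbers, the re-applied prior drags the estimate from roughly 0.51 down to about 0.01 — which is exactly why a node in a Bayes net passes its likelihood-message upward rather than its own final judgment.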

If people typically found great personal benefits from reading OB/LW type material, then we would not be such a minority.

We hope that rationality is increasing, and it could be, but I don't have much confidence that 30 years from now people, even people in positions of power, will be much more rational than they are now.

2: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.


Your post is a great improvement on mine. Thanks, esp. for the "limiting factor" riff.

Am I alone in thinking the word "akrasia" doesn't quite describe our problem? Isn't it more like "apathy"? Some people wish to be able to do the things they want; lucky them! Me, I just wish to want to do the things I'm able to do.

Scott Alexander
You're welcome, even though I was pretty sure I was arguing against you. My poor models of other people's opinions strike again.

I will list the only example that comes to my mind: better x-rationality techniques have actually helped me get my university diploma. More than a few times I got out of a difficult situation by using what I knew of heuristics, biases, the limits and usual mistakes in normal rationality, and how one can sound rational regardless of whether he really is... to give off, at little cost, that impressive aura of someone who knows what he's doing. To sound rational when facing an audience.

To my defense, I actually faked the cues and tells of my rationality, skill... (read more)