Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these "benefits" of "x-rationality"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact as far as I know the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

One factor we have to once again come back to is akrasia2. I find akrasia in myself and others to be the most important limiting factor to our success. Think of that phrase "limiting factor" formally, the way you'd think of the limiting reagent in chemistry. When there's a limiting reagent, it doesn't matter how much more of the other reagents you add, the reaction's not going to make any more product. Rational decisions are practically useless without the willpower to carry them out. If our limiting reagent is willpower and not rationality, throwing truckloads of rationality into our brains isn't going to increase success very much.

This is a very large part of the story, but not the whole story. If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

And - when I was young, I used to watch The Journey of Allen Strange on Nickelodeon. It was a children's show about this alien who came to Earth and lived with these kids. I remember one scene where Allen the Alien was watching the kids play pool. "That's amazing," Allen told them. "I could never calculate differential equations in my head that quickly." The kids had to convince him that "it's in the arm, not the head" - that even though the movement of the balls is governed by differential equations, humans don't actually calculate the equations each time they play. They just move their arm in a way that feels right. If Allen had been smarter, he could have explained that the kids were doing some very impressive mathematics on a subconscious level that produced their arm's perception of "feeling right". But the kids' point still stands: even though in theory explicit mathematics will produce better results than eyeballing it, in practice you can't become a good pool player just by studying calculus.

A lot of human rationality follows the same pattern. Isaac Newton is frequently named as a guy who knew no formal theories of science or rationality, who was hopelessly irrational in his philosophical beliefs and his personal life, but who is still widely and justifiably considered the greatest scientist who ever lived. Would Newton have gone even further if he'd known Bayes' theorem? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

Yes, yes, beisutsukai should be able to develop quantum gravity in a month and so on. But until someone on Less Wrong actually goes and does it, that story sounds a lot like when Alfred Korzybski claimed that World War Two could have been prevented if everyone had just used more General Semantics.

And then there's just plain noise. Your success in the world depends on things ranging from your hairstyle to your height to your social skills to your IQ score to cognitive constructs psychologists don't even have names for yet. X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with "A good x-rationalist will know if fashion is important, and study fashion." A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook. And having a general rational outlook, as I mentioned before, is limited in its effectiveness by poor application and akrasia.

I no longer believe mastering all these Overcoming Bias and Less Wrong techniques will turn me into Anasûrimbor Kellhus or John Galt. I no longer even believe mastering all these Overcoming Bias techniques will turn me into Eliezer Yudkowsky (who, as his writings from 2001 indicate, had developed his characteristic level of awesomeness before he became interested in x-rationality at all)3. I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1. Maybe 0.2 in some businesses like finance, but people in finance tend to know this and use specially developed x-rationalist techniques on the job already without making it a lifestyle commitment. I think it was primarily a Happy Death Spiral around how wonderfully super-awesome x-rationality was that made me once think otherwise.

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

...

...

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

For the next time period - a week, a month, whatever - take special note of every decision you make. By "decision", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought. Make a tick mark, literal or mental, so you can count how many of these there are.

Then note whether you make that decision rationally. If yes, also record whether you made that decision x-rationally. I don't just mean you spent a brief second thinking about whether any biases might have affected your choice. I mean one where you think there's a serious (let's arbitrarily say 33%) chance that using x-rationality instead of normal rationality actually changed the result of your decision.

Finally, note whether, once you came to the rational conclusion, you actually followed it. This is not a trivial matter. For example, before writing this blog post I wondered briefly whether I should use the time studying instead, used normal (but not x-) rationality to determine that yes, I should, and then proceeded to write this anyway. And if you get that far, note whether your x-rational decisions tend to turn out particularly well.
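If you want something more structured than mental tick marks, four counters on a scrap of paper are enough. For the programmatically inclined, the bookkeeping might look like the following sketch (the class and field names are mine, not part of the proposed experiment):

```python
from dataclasses import dataclass

@dataclass
class DecisionTally:
    conscious: int = 0    # decisions that got a few seconds' serious thought
    rational: int = 0     # ...settled by explicit reasoning
    x_rational: int = 0   # ...where x-rationality plausibly changed the outcome
    followed: int = 0     # ...where you actually did what you decided

    def record(self, rational=False, x_rational=False, followed=False):
        """Tick off one conscious decision and whichever later stages it reached."""
        self.conscious += 1
        self.rational += rational      # bools count as 0 or 1
        self.x_rational += x_rational
        self.followed += followed

tally = DecisionTally()
tally.record(rational=True)  # e.g. deciding to study, then writing a post anyway
tally.record()               # a snap decision, no explicit reasoning
print(tally.conscious, tally.rational, tally.followed)  # 2 1 0
```

Each counter is a subset of the one before it, so at the end of the week the four numbers form the funnel the experiment is asking about.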

This experiment seems easy to rig4; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. Not pretty at all. Not only do I make fewer conscious decisions than I thought, but the ones I do make I rarely apply even the slightest modicum of rationality to, and the ones I apply rationality to it's practically never x-rationality, and when I do apply everything I've got I don't seem to follow those decisions too consistently.

I'm not so great a rationalist anyway, and I may be especially bad at this. So I'm interested in hearing how different your results are. Just don't rig it. If you find yourself using x-rationality twenty times more often than you were when you weren't performing the experiment, you're rigging it, consciously or otherwise5.

Eliezer writes:

The novice goes astray and says, "The Art failed me."
The master goes astray and says, "I failed my Art."

Yet one way to fail your Art is to expect more of it than it can deliver. No matter how good a swimmer you are, you will not be able to cross the Pacific. This is not to say crossing the Pacific is impossible. It just means it will require a different sort of thinking than the one you've been using thus far. Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.

Footnotes:

1: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

2: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

3: This is actually an important point. I think there are probably quite a few smart, successful people who develop an interest in x-rationality, but I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

4: This is a terribly controlled experiment, and the only way its data can be meaningfully interpreted at all is through what one of my professors called the "ocular trauma test" - when the data hits you between the eyes. If people claim they always follow their rational decisions, I think I will be more likely to interpret it as a lack of enough cognitive self-consciousness to notice when they're doing something irrational than as an honest lack of irrationality.

5: In which case it will have ceased to be an experiment and become a technique instead. I've noticed this happening a lot over the past few days, and I may continue doing it.

Extreme Rationality: It's Not That Great
281 comments

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

Seems to me that most of us make predictably dumb decisions in quite a variety of contexts, and that by becoming extra bonus sane (more sane/rational than your average “intelligent science-literate person without formal rationalist training”), we really should be able to do better.

Some examples of the “predictably dumb decisions” that an art of rationality should let us improve on:

  • Dale Carnegie says (correctly, AFAIK) that most of us try to persuade others by explaining the benefits from our point of view (“I want you to play basketball with me because I don’t have enough people to play basketball with”), even though it works better to explain the benefits from their points of view. Matches my experiences, and matches also man
...

I don't think you need the art of rationality much for that stuff. I think just being reminded is almost as good, if not better. Who do you think would do better on them: someone who read all of LW/OB except this post, or someone who read this post only? Now consider that reading all of LW/OB would take at least 256 times longer.

loqi:
That was only a sample. Should we really prefer keeping them all in mind over learning the pattern behind them?

Learning about rationality won't necessarily help you realize where you're being irrational. If you've got a general method for doing that, I'd be interested, but I don't think it's been discussed much on this blog.

[anonymous]:
Interesting. But from a bit of searching, this applies to business. Looks nice in a job interview. Don't try this on a date! (no lukeprog allowed) Thanks for the advice! For completeness, I'd assume this is what you meant: http://www.dalecarnegie.com/communication_effectiveness_-_present_to_persuade/ or at least it gives the idea a deeper point.

Don't try this on a date! (no lukeprog allowed)

Why not? Lukeprog's mistake, assuming you're talking about what I think you're talking about, seems to have been quite the opposite of trying to explain the benefits of an option from the other person's point of view:

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

I imagine he'd have had better luck, or at least not become the butt of quite so many relationship jokes on LW, if he'd gone with something like "you deserve someone who appreciates you better". Notice that from Alice's perspective, this describes exactly the same situation -- but in terms of what it means to her.

[anonymous]:
Nah. Just meant that considering his posts on relationships, he might try that, so therefore, no lukeprog allowed. In truth I was just trying to use reverse psychology to get him to do it and hopefully post some results. And this is where this silliness ends before I get more downvotes.

Imagine a world where the only way to become really rich is to win the lottery (and everybody is either risk averse or at least risk neutral). With an expected return of less than $1 per $1 spent on tickets, rational people don't buy lottery tickets. Only irrational people do that. As a result, all the really rich people in this world must be irrational.

In other words, it is possible to have situations where being rational increases your expected performance, but at the same time reduces your chances of being a super achiever. Thus, the claim that "rationalists should win" is not necessarily true, even in theory, if "winning" is taken to mean being among the top performers. A more accurate statement would be, "In a world with both rational and irrational agents, the rational agents should perform better on average than the population average."
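This toy world is easy to make concrete. Below is a minimal simulation of it; every number (wealth, jackpot size, winner count) is invented purely for illustration:

```python
import random

def lottery_world(n_agents=10_000, ticket_price=1.0, jackpot=2_000.0,
                  n_winners=3, starting_wealth=100.0, seed=0):
    """Toy model: rational agents skip the lottery; irrational agents each
    buy one ticket. Expected return per ticket is
    jackpot * n_winners / n_agents = $0.60 < $1, so buying loses on average."""
    rng = random.Random(seed)
    rational = [starting_wealth] * n_agents
    irrational = [starting_wealth - ticket_price] * n_agents
    for i in rng.sample(range(n_agents), n_winners):
        irrational[i] += jackpot  # a few irrational agents get rich
    avg = lambda xs: sum(xs) / len(xs)
    return avg(rational), avg(irrational), max(rational), max(irrational)

r_avg, i_avg, r_max, i_max = lottery_world()
# Rational agents do better on average, yet every top performer is irrational:
print(r_avg > i_avg, i_max > r_max)  # True True
```

The averages come out to $100.00 for the rational agents versus $99.60 for the irrational ones, while the richest irrational agent holds $2,099 - exactly the asymmetry the comment describes.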

There's an extent to which we live in such a world. Many people believe you can achieve your wildest dreams if you only try hard enough, because by golly, all those people on the TV did it!

Hans:

But many poor/middle-class people also believe that they can never become rich (except for the lottery) because the only ways to become rich are crime, fraud, or inheritance. And this leads them to underestimate the value of hard work, education, and risk-taking.

The median rationalist will perform better than these cynics. But his average wealth will also be higher, assuming he accurately observes his chances at becoming successful.

[anonymous]:
From what I can see, crime and fraud are harder to get significant success with than 'real' work. Education and risk-taking are also rather vital.
NicoleTedesco:
It can be rational to accept the responsibility of high risk/high reward behavior, on specific occasions and under specific circumstances. The trick is recognizing those occasions and circumstances and also recognizing when your mind is fooling you into believing "THIS TIME IS DIFFERENT". A rational agent is Warren Buffett. An irrational agent is Ralph Kramden. Both accept high risk/high reward situations. One is rational about that responsibility. The other is not. Also, in a world of both rational and irrational agents, in a world where the rational agent must depend upon the irrational, it is sometimes rational to think irrationally!

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

I think the truth is non-symmetrical: rationalism is the art of not failing, of not being stupid. I agree with you that "rationalists should win big" is not true in the sense Eliezer claims. However, rationalists should be generally above average by virtue of never failing big, never losing too much, e.g. not buying every vitamin at the health food store, not in cults, not bemoaning ancient relationships, etc.

NicoleTedesco:
Very good point!
pjeby:

I'm not sure if it was your intent to point this out by contrast, but I would like to point out that a reasonable art of "kicking" would not rely on you making conscious decisions, let alone explicitly rational ones. Rather, it would rely on you ensuring that your subconscious has been freed from sources of bias ahead of time, and is therefore able to safely leap to conclusions in its usual fashion. An art that requires you to think at the time things are actually happening is not much of an art.

Case in point: when reading "Stuck In The Middle With Bruce", I became aware of a subconsciously self-sabotaging behavior I'd done recently. So I "kicked" it out by crosslinking the behavior with its goal-satisfaction state. It would be crazy to wait until the next occasion for that behavior to strike, and then try to reason my way around it, when I can just fix the bloody thing in the first place. (Interestingly, I mentioned the story to my wife, and described how it related to my own behavior... and she thought of a different sort of self-sabotage she was doing, and applied the same mindhack. So, as of now, I'd say that story was one of the top 5 most ...

[anonymous]:

I voted this up, but I'm replying because I think it's a critical point.

Our brains are NOT designed to make conscious decisions about every thing that crosses our path. Trying to do that is like trying to walk everywhere instead of driving: it's technically possible, but it will take you forever and will be exhausting.

Our brains seem to work more like this: our brains process whatever it is we're doing at the time, and then feed that processed data into our subconscious for use later. Sure it jumps in every once in a while for something important, but generally it sits back and lets your subconscious do the driving.

Rationality should be about putting the best processed information down into your subconscious, so it works the way you'd like it to. Trying to do everything consciously is a poor use of your brain, as it 1) ignores the way your brain is designed to function and 2) forgoes the use of the powerful subconscious circuitry that makes up an enormous part of it.

What does "crosslinking the behavior with its goal-satisfaction state" mean? Specifically, I'm unable to guess what you mean by "crosslinking" and "the goal-satisfaction state" (of a behavior).

pjeby:
More details can be found in this comment.
roland:
I had the same question as Jonathan and I've read the comment you mentioned. Where can we read/learn more about this technique?
pjeby:
It's based on a technique called "Core Transformation", developed by Connirae Andreas and Tamara Andreas, and it's discussed in a book of the same name. (I linked to it once before when someone asked about this a few weeks ago, and was severely downmodded for some reason, so you'll have to find it yourself.)

My own version of the technique is a streamlined and stripped-down variation that removes a certain amount of superstition and ritual. (Among other things, I drop the "parts" metaphor, which some schools of NLP now consider to have been a bad idea in the first place.)

The technique works by using imagination to elicit the reward states associated with a behavior, going to higher and higher levels of abstraction to reach the top (or root?) of a person's reward tree -- usually a quasi-mystical state like inner peace, oneness, compassion, or something like that. (These "core states" are a good candidate for the "god-shaped hole" in humans, btw.)

Anyway, once you have access to such a state, it can be used as a reinforcer for alternative behaviors, as it's stronger than the diluted intermediate versions found at other levels of the person's goal tree. (More precisely, it can be used to extinguish the conditioned appetite that drives the problem behavior.)

I teach this method and use it in coaching; my wife and I also use it personally. I'd link to my own workshops and recordings on the subject as well, but since I was downmodded for referring to a site where you could buy someone else's book, I shudder to imagine what would happen if I linked to a site where you could buy my products or services. ;-)
roland:
Please post the link. And why should you be afraid of downmodding? I have been downmodded for saying things that are true (at least IMHO). Don't give that much importance to the mods!
pjeby:
I'm not. I'm simply attempting to respect the wishes of others regarding what should or should not be posted here. Googling "Core Transformation" and "Gateway of Desire" (as phrases in quotes) will get you the links. Don't be confused by something else called "Quantum Touch - Core Transformation"; it's something unrelated (thank goodness).
MBlume:
People are trying to eliminate spam. Spammers tend to include links to outside services which cost money. Thus, your providing such a link gives you the superficial appearance of a spammer, and you got downmodded accordingly. You are not a spammer, you have participated in good faith in this community, at great personal effort, and contributed many useful insights as a result. I think by now, most people are aware of this, and you should not need to worry about giving the appearance of spamming.
Paul Crowley:
http://coretransformation.org/ appears to be the main website. This Google search finds related materials. All I could find on Wikipedia was this article on Steve Andreas.
gjm:

The fact that everything I can find on the web carefully avoids giving details and instead takes the form "We have these fantastic techniques that can solve most of your problems; sign up for our seminars and we'll teach them to you" is ... not promising.

Promising the world, giving few details, and insisting on being paid before saying anything more, seems to me to be strongly correlated with dishonesty and cultishness. Since pjeby seems like a valuable member of this community, I hope this case happens to be different; but I'd like to see some evidence.

roland:
Well, you didn't grant my wish for a simple link, so I have to google now. How sad. As for the wishes of others: would you rather not post a truth than be downvoted by the majority?
Emile:
Here's one link: http://themindhackersguild.com/workshops/
MendelSchmiedekamp:
Absolutely, learning to work with your subconscious is a necessity. After all it does far more computation than your conscious mind does. Of course, you ought to explore the techniques that let you take positive advantage of it too.
Annoyance:
But it's consciously understanding and applying techniques to make your mind as a whole work better that's the heart of rationality. By and large the 'subconscious' is outside of our ability to control. The task isn't to bring the subconscious to heel, but to establish filters through which to screen the output of our minds, discarding that which is incompatible with rational thinking.
MendelSchmiedekamp:
Influencing your subconscious in rational ways is not easy or simple. But at the same time, simply because something is hard doesn't mean it should be discarded out of hand as a viable route to achieving your goals, especially if those goals are important.
pjeby:
How about influencing your subconscious in irrational ways? I find that much easier, myself. The subconscious isn't logical, and it doesn't "think"; it's just a giant lookup table. If you store the right entries under the right keys, it does useful things.

The hardest part of hacking it is that there's no "view source" button or way to get a listing of what's already in there: you have to follow associative links or try keys that have worked for other people. Well, I say hardest, but it's not so much hard as sometimes tedious or time-consuming. The actual changing-things part is usually quite quick. If it's not, you're almost certainly doing something wrong.
loqi:
I'm suspicious of this characterization. I've made a couple of surprising subconscious deductions in the past, and they forcefully reminded me that there's a very complex human brain down there doing very complex brain things on the sly all the time. You may have learned some tricks to manipulate it, but I'd be surprised if you've done more than scratch the surface if you really just consider it to be a simple lookup table.
pjeby:
I didn't say it was a simple lookup table. It's indexed in lots of non-trivial ways; see e.g. my post here about "Spock's Dirty Little Secret". I just said that fundamentally, it's a lookup table.

I also didn't say it's not capable of complex behavior. A state machine is "just a lookup table", and that in no way diminishes its potential complexity of behavior.

When I say the subconscious doesn't "think", I specifically mean that if you point your built-in "mind projection" at your subconscious, you will misunderstand it, in the same way that people end up believing in gods and ghosts: projecting intention where none exists. This is a major misunderstanding -- if not THE major misunderstanding -- of the other-than-conscious mind. It's not really a mind, it's a "Chinese room".

That doesn't mean we don't have complex behavior or can't do things like self-sabotage. The mistake is in projecting personhood onto our self-sabotaging behaviors, rather than seeing the state machine that drives them: condition A triggers appetite B leading to action C. There's no "agency" there, no "mind". So if you use an agency model (including Ainslie's "interests" to some extent), you'll take incorrect approaches to change. But if you realize it's a state machine, stored in a lookup table, then you can change it directly.

And for that matter, you can use it more effectively as well. I've been far more creative and better at strategy since I learned to engage my creative imagination in a mechanical way, rather than waiting for the muse to strike.

Meanwhile, it'd also be a mistake to think of it as a single lookup table; it includes many things that seem to me like specialized lookup tables. However, they are accessible through the same basic "API" of the senses, so I don't worry about drawing too fine a distinction between the tables, except insofar as they relate to specific techniques.
MendelSchmiedekamp:
I look forward to seeing where your model goes as it becomes more nuanced. Among other things, I'm very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.
pjeby:
What, you mean like Sudoku or something?
MendelSchmiedekamp:
Sudoku would be one example. I meant generally puzzles or problems involving search spaces of combinations.
pjeby:
Well, I'll use sudoku, since I've experienced both conscious and unconscious success at it. It used to drive me nuts how my wife could just look at a puzzle and start writing numbers, on puzzles that were difficult enough that I needed to explicitly track possibilities.

Then, I tried playing some easy puzzles on our Tivo, and found that the "ding" reward sound when you completed a box or line made it much easier to learn, once I focused on speed. I found that I was training myself to recognize patterns and missing numbers, combined with efficient eye movement.

I'm still a little slower than my wife, but it's fascinating to observe that I can now tell the available possibilities for larger and larger numbers of spaces without consciously thinking about it. I just look at the numbers and the missing ones pop into my head. Over time, this happens less and less consciously, such that I can just glance at five or six numbers and know what the missing ones are without a conscious step.

This doesn't require a complex subconscious; it's sufficient to have a state machine that generates candidate numbers based on seen numbers and drops candidates as they're seen. It might be more efficient in some sense to cross off candidates from a master list, except that the visualization would be more costly. One thing about how visualization works is that it takes roughly the same time to visualize something in detail as it does to look at it... which means that visualizing nine numbers would take about the same amount of time as it would for you to scan the boxes.

Also, I can sometimes tell my brain is generating candidates while I scan... I hear them auditorially verbalized as the scan goes, although it's variable at what point in the scan they pop up; sometimes it's early and my eyes scan forward or back to double check.

Is this the sort of thing you're asking about?
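The candidate-dropping process pjeby describes is simple enough to write down directly. A minimal sketch (the function name is mine, not his):

```python
def missing_digits(seen):
    """Scan a row, column, or box: each observed digit drops a candidate
    from the set, and whatever survives the scan is 'missing'."""
    candidates = set(range(1, 10))
    for d in seen:
        candidates.discard(d)  # one state transition per digit seen
    return sorted(candidates)

print(missing_digits([5, 3, 9, 1, 6, 8]))  # [2, 4, 7]
```

Note that the machine carries no memory beyond the shrinking candidate set, which is the sense in which no "complex subconscious" is required.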
2MendelSchmiedekamp
It seems that our models are computationally equivalent. After all, a state machine with arbitrarily extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, whatever model you have of it. I have spent a great deal of time and reasoning on developing models of people in such a way. So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as person-like systems. Obviously you are more comfortable with computer-science models. But the danger with models is that they are always limiting in what they can reveal. In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule. I suspect this is one particular shortcoming of the lookup-table basis for modeling the subconscious. I suspect my models have similar problems, but it's always hardest to see them from within.
1pjeby
Of course. But mine is a model specifically oriented towards being able to change and re-program it -- as well as towards understanding more precisely how certain responses are generated. One of the really important parts of thinking in terms of a lookup table is that it simplifies debugging. That is, one can be taught to "single-step" the brain and identify the specific lookup that is causing a problem in a sequence of thought-and-action. How do you do that with a mind-projection model?

The problem with modeling oneself as a "person" is that it gives you wrong ideas about how to change, and creates maladaptive responses to unwanted behavior. Whereas, with my more "primitive" model: 1. I can solve significant problems for myself or others by changing a conceptually-single "entry" in that table, and 2. the lookup-table metaphor depersonalizes undesired responses in my clients, allowing them to view themselves in a non-reactive way. Personalizing one's unconscious responses leads to all kinds of unuseful carry-over from "adversarial" concepts: fighting, deception, negotiation, revenge, etc. This is very counterproductive, compared to simply changing the contents of the table.

Interestingly, this is one of the metaphors I hear back from my clients the most when they describe personal actions to change. That is, AFAICT, people find it tremendously empowering to realize that they can develop any skill or change any behavior if they can simply load or remove the right data from the table.

Of course novel solutions can be generated -- I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.
1MendelSchmiedekamp
I'm not talking about a mind-projection model; I'm talking about using information models, constructed and vetted to effectively model people, as a foundation for a different model of a part of a person. I've modeled my subconscious in a similar manner before, and I've gained benefits from it not unlike some you describe. I've even gone so far as to model up to sub-processor levels of capabilities and multi-threading. At the same time I was developing the Other models I mentioned, but they were incomplete. Then during adolescence I refined my Other models well enough for them to start working. I can go more into that later, but as time went on it became clear that computational models simply didn't let me pack enough information into my interactions with my subconscious, so I needed a more information-rich model. That is what I'm talking about.

So, bluntly but honestly, I feel what you're describing is, at best, what an eight-year-old should be doing to train their subconscious. But mostly I'm hoping you'll be moving forward. Search engines and databases don't produce novel solutions on their own, even in the sense of a combinatorial algorithm, and certainly not in the sense of more creative innovation. There are many anecdotes claiming the subconscious can incorporate more dimensions in problem solving than the conscious mind - some more poetic than others (answers coming in dreams or in showers) - and it seems dangerous to simply disregard them.
3pjeby
Bluntly, but honestly, I think you'd be better off describing more precisely what model you think I should be using, and what testable benefits it provides. I'm always willing to upgrade, if a model lets me do something faster, easier, quicker to teach, etc. -- Just give me enough information to reproduce one of your techniques and I'll happily try it.
0MendelSchmiedekamp
I said what I meant there. It's a feeling. Which, combined with my lack of a personalized model of your cognitive architecture, makes it foolish for me to suggest a specific replacement model. My comment about deep innovation is intended to point you towards one of the blind spots of your current work (which may or may not be helpful). I've been somewhere similar a long time ago, but I was working on other areas at the same time, which have led me to the models I use now. I sincerely doubt that that same avenue will work for you. Instead, I suggest you cultivate a skepticism of your work, plan a line of retreat, and start delving into the dark corners.

As an aside: if you want a technique - using a model close to yours - consider volitional initiation of a problem on your subconscious "backburner" to get an increased response rate. You tie the problem into the subconscious processing, set up an association trigger to check on it sometime later, and then remove all links that would pull it back to consciousness. You can then test the performance of this method against simply worrying at a problem, or simply forgetting it, using a diary method. Using a more nuanced model you can get much better results, but this should suffice to show you something of what I mean.
0pjeby
I've been doing that for about 24 years now. I fail to see how it has relevance to the model of mind I use for helping people change beliefs and behaviors. Perhaps you are assuming that I need to have ONE model of mind that explains everything? I don't consider myself under such a constraint. Note, too, that autonomous processing isn't inconsistent with a lookup-table subconscious. Indeed, autonomous processing independent of consciousness is the whole point of having a state-machine model of brain function. Consciousness is an add-on feature, not the point of having a brain. Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer's parents telling him, "you'll give up your childish ideas as soon as you get older".
0MendelSchmiedekamp
Good. It seemed the next logical step considering what you were describing as your model. It's also very promising that you are not trying to have a singular model. Which at least is useful data on my part. Developing meta-cognitive technology means having negative as well as positive results. I do appreciate you taking the time to discuss things, though.
0Annoyance
Any computational process can be emulated by a sufficiently complicated lookup table. We could, if we wished, consider the "conscious mind" to be such a table. Dismissing the unconscious because it's supposedly a lookup table is thus wrong in two ways: firstly, it's not implemented as such a table, and secondly, even if it were, that would put no limitations, restrictions, or reductions on what it's capable of doing. The original statement in question is not just factually incorrect but conceptually misguided, and the likely harm to the resulting model's usefulness is incalculable.
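Annoyance's first sentence has a concrete textbook illustration. This toy example is mine, not from the thread: a finite-state machine (here a parity checker) whose entire "program" is a transition table, so every step of the computation is nothing but a lookup.

```python
# Transition table: (current state, input bit) -> next state.
# The table IS the program; the loop below only performs lookups.
TRANSITIONS = {
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

def parity(bits):
    """Run the state machine over a bit sequence via pure table lookups."""
    state = "even"
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
    return state

print(parity([1, 0, 1, 1]))  # -> odd
```

Calling a process "just a lookup table" therefore says nothing about what it can compute; with a big enough table (and memory for state), lookups suffice for any finite computation.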
-1Annoyance
"The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table." Of all your errors thus far, those two are your most damaging.
3Steve_Rayhawk
I agree that the subconscious isn't just a giant lookup table, and that many people who make this error use it to justify practices which destroy other people's minds. But there are some important techniques of making the subconscious work better that are hard to invent unless you imagine that the subconscious is mostly a giant lookup table. pjeby uses these techniques in his practice. Do you deny pjeby's data that these techniques work? Do you even know which data made pjeby want to write "it's just a giant lookup table"? If you do know which data made pjeby want to write that, do you mean that it was wrong for him to write "the subconscious is just a giant lookup table" and not "the subconscious is mostly like just a giant lookup table"? I feel like you don't think through the real details of what other people are thinking and how those details would have to actually interact with the high standards you have for the thoughts of those people. All you do is tell them that you think something they did means they broke a rule.
5gjm
pjeby has provided very little data. He's claimed that his techniques work. He's described them in terms that (1) are supremely vague about what he actually does, and (2) seem to imply that he has gained the ability to change all sorts of things about the behaviour of the unconscious bits of his brain more or less at will. There have been other people and groups that have made similar claims about their techniques. For instance, the Scientologists (though their claims about what they can do are more outlandish than pjeby's). None of this means that pjeby is wrong, still less that he's not being honest with us: but it means that an appeal to "pjeby's data" is a bit naive. All we have so far -- unless there are gems hidden in threads I haven't read, which of course there might be -- are his claims.
2MendelSchmiedekamp
Annoyance has a point here. A look-up table is a very limiting model for a subconscious. What is the benefit you gain by assuming that there is no organizing structure, whether or not it is known to you, within your subconscious? Personally, I prefer a continually evolving model, updating with experience and observations. With periodic sanity checks of varying scales of severity. Not unlike how I model people. Of course this lends a resulting bias that I treat my subconscious a bit like a person, with encouragement, care, and deals. This can also lend positive outcomes like running subconscious mental operations for long term problem solving (a more active and volitional version of waiting for inspiration to strike) and encouraging those operations to have appropriate tracebacks to make it easier for me to consciously verify them. Not sure if that would work for other folks though, cognitive infrastructure may vary.
3[anonymous]
Right. No. More is possible: Is the rational person subject to "March winds"?
1pjeby
Speak for yourself. ;-) That's wasteful and inefficient. Bear in mind that there are two kinds of bias in the brain: hardware and software. The hardware biases cause software biases to get added, but those biases can also be removed, thereby eliminating the need to work around them. Conversely, for "hard" biases that can't be removed, much of the implementation of workarounds can be created by installing compensating biases. And it isn't even that complicated -- given appropriate (i.e. fast and unequivocal) feedback, the brain can make the software revisions on its own, without any complex conscious processes involved.
1Conor
What are the other posts in your top five?

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

In my life, I've used rationality to tackle some pretty tough practical problems. The type of rationality I have been successful with hasn't been the debiasing program of Overcoming Bias, yet I have been employing scientific thinking, induction, and heuristics on certain problems in ways that are atypical for the category of people you are calling normal rationalists. I don't know whether to call this "x-rationality" or not, partly because I'm not sure the boundaries between rationality and x-rationality are always obvious, but it's certainly more advanced rationality than what people usually apply in the domains below.

On a general level, I've been studying how to get good (or at least, dramatically better) at things. Here are some areas where I've been successful using rationality:

... (read more)
7MBlume
I would absolutely love to see the development of a rational art of dating. If you've more to say on this I'll definitely look forward to reading it.
5mattnewport
This is largely the basis of the whole online sub-community of 'Game' and the 'Seduction Community'. It may well fall under what Eliezer refers to as 'the dark arts' but many participants are fairly explicit about applying a rational/scientific approach to success with women.

I am highly familiar with the seduction community, and I've learned a lot from it. It's like extra-systemized folk psychology. It has certain elements of a scientific community, yet it is vulnerable to ideologies developing out of:

(a) bastardized versions of evolutionary psychology being thrown around like the proven truth, often leading to cynical and overgeneralized views of female behavior and preferences and/or overly narrow views of what works,

(b) financial biases,

(c) lack of rigor, because controlled experiments are not yet possible in this field (though I would never suggest that people wait until science catches up and gives us rigorous empirical knowledge before trying to improve their dating lives... who knows how long we will have to wait).

Yet there is promise for the community, because it's beholden to real-world results. Its descriptions and prescriptions seem to have been improving, and it has gone through a couple of paradigm shifts since the mid-'80s.

6mattnewport
I've also learned some useful things from my more limited familiarity with the community. I'd tend to agree with your criticisms but I think the emphasis on rigorous 'field testing' and on 'doing what works' in much of the community shows some common ground with general efforts at rationality. As you say, this is an area (like many areas of day to day life) that is not easily amenable to controlled scientific experiment for a number of reasons but one of the lessons of Bayesian thinking/'x-rationality' that I've found useful is the emphasis on being comfortable with uncertainty, fuzzy evidence and making the best decisions given limited information. It's treacherous terrain for anyone seeking truth since, like investment or financial advice or healthcare, there is a lot of noise along with the signal. It's certainly an interesting area with many cross-currents to those interested in applying rationality though.
3AnnaSalamon
Do you think it would benefit from knowing some of the OB/LW rationality techniques? Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?
13pjeby

Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?

One problem with interfacing formal/mathematical rationality with any "art that works", whether it's self-help or dating, is that when people are involved, there are feed-forward and feed-back effects, similar to Newcomb's problem, in a sense. What you predict will happen makes a difference to the outcome.

One of the recent paradigm shifts that's been happening in the last few years in the "seduction community" is the realization that using routines and patterns leads to state-dependence: that is, to a guy's self-esteem depending on the reactions of the women he's talked to on a given night. This has led to the rise of the "natural" movement: copying the beliefs and mindsets of guys who are naturally good with women, rather than the external behaviors of guys who are good with women.

Now, I'm not actually involved in the community; I'm quite happily married. However, I pay attention to developments in that field because it has huge over... (read more)

3AnnaSalamon
Experimenting, implementing, tracking results, etc. is totally compatible with the OB/LW picture. We haven't built cultural supports for this all that much, as a community, but we really should, and, since it resonates pretty well with a rationalist culture and there're obvious reasons to expect it to work, we probably will.

Claiming that a particular general model of the mind is true, just because you expect that claim to yield good results (and not because you have the kind of evidence that would warrant claiming it as "true in general"), is maybe not so compatible. As a culture, we LW-ers are pretty darn careful about what general claims we let into our minds with the label "true" attached.

But is it really so important that your models be labeled "true"? Maybe you could share your models as thinking gimmicks: "I tend to think of the mind in such-and-such a way, and it gives me useful results, and this same model seems to give my clients useful results", and share the evidence about how a given visualization or self-model produces internal or external observables.

I expect LW will be more receptive to your ideas if you: (a) stick really carefully to what you've actually seen, and share data (introspective data counts); (b) label your "believe this and it'll work" models as candidate "believe this and it'll work" models, without claiming the model as the real, fully demonstrated as true, nuts and bolts of the mind/brain.

In other words: (1) hug the data, and share the data with us (we love data); and (2) be alert to a particular sort of cultural collision, where we'll tend to take any claims made without explicit "this is meant as a pragmatically useful working self-model" tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models. If you actually tag your models with their intended use ("I'm not saying these are the ultimate atoms the mind is made of, but I have reasonably compelling evidence that thinking in th
1pjeby
Yeah, I've noticed that, which is why my comment history contains so many posts pointing out that I'm an instrumental rationalist, rather than an epistemic one. ;-)

I'm not sure it's about being an epistemic vs. an instrumental rationalist, vs. about tagging your words so we follow what you mean.

Both people interested in deep truths, and people interested in immediate practical mileage, can make use of both "true models" and "models that are pragmatically useful but that probably aren't fully true".

You know how a map of north America gives you good guidance for inferences about where cities are, and yet you shouldn't interpret its color scheme as implying that the land mass of Canada is uniformly purple? Different kinds of models/maps are built to allow different kinds of conclusions to be drawn. Models come with implicit or explicit use-guidelines. And the use-guidelines of “scientific generalizations that have been established for all humans” are different than the use-guidelines of “pragmatically useful self-models, whose theoretical components haven’t been carefully and separately tested”. Mistake the latter for the former, and you’ll end up concluding that Canada is purple.

When you try to share techniques with LW, and LW balks... part of the problem is that most of us LW-ers aren’t as practiced in contact-with-th... (read more)

1Vladimir_Nesov
Trying to interpret this charitably, I'll suggest a restatement: what you call a "theory" is actually an algorithm that describes the actions that are known to achieve the required results. In the normal use of the words, a theory is an epistemic tool, leading you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals. Terminologically mixing an opaque heuristic with reason and knowledge is a bad idea; in the quotation above, the word "reason", for example, connotes more with rationalization than with anything else.
2pjeby
No, I'm using the term "theory" in the sense of "explanation" and "as opposed to practice". The theory of a self-help school is the explanation(s) it provides that motivate people to carry out whatever procedures that school uses, by providing a model that helps them make sense of what their problems are, and what the appropriate methods for fixing them would be.

I don't see any incompatibility between those concepts; per de Bono (Six Thinking Hats, lateral thinking, etc.) a theory is a "proto-truth" rather than an "absolute truth": something that we treat as if it were true, until something better is found. Ideally, a school of self-help should update its theories as evidence changes.

Generally, when I adopt a technique, I provisionally adopt whatever theory was given by the person who created the technique, unless I already have evidence that the theory is false, or have a simpler explanation based on my existing knowledge. Then, as I get more experience with a technique, I usually find evidence that makes me update my theory for why/how that technique works. (For example, I found that I could discard the "parts" metaphor of Core Transformation and still get it to work, ergo falsifying a portion of its original theoretical model.) Also, I sometimes read about a study that shows a mechanism of mind that could plausibly explain some aspect of a technique.

Recently, for example, I read some papers about "affective asynchrony", and saw that they not only experimentally validated some of what I've been doing, but provided a clearer theoretical model for certain parts of it. (Clearer in the sense of providing a more motivating rationale, and not just because I can point to the papers and say, "see, science!") Similar thing for "reconsolidation" -- it provides a clear explanation for something that I knew was required for certain techniques to work (experiential access to a relevant concrete memory), but had no "theoretical" justification for. (I j

One common theme is recognizing when your theories aren't working and updating in light of new evidence. Many people are so sure that their beliefs about what 'should' work when it comes to dating are correct that they will keep trying and failing without ever considering that maybe their underlying theory is wrong. A common exercise used in the community to break out of these incorrect beliefs is to force yourself to go out and try things that 'can't possibly work' 10 times in a day, and then every day for a week or a month, until the false belief is banished.

I actually think the LW crowd could learn something from this approach - sometimes all the argument in the world is not as convincing as repeated confrontations with real world results. When it comes to changing behaviour (a key aspect of allowing rationality to improve results in our lives), rational argument is not usually the most effective technique. Rational argument may establish the need for change and the pattern for new behaviour but the most effective way to change behavioural habits is to just start consciously doing the new behaviour until it becomes a habit.

10MBlume

In any rational art of dating in which I would be interested, "winning" would be defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued. I don't know enough about these sub-communities to say whether they share that concern -- what is the impression you've gotten?

6mattnewport
Many but by no means all in the community share that concern. I'm finding it interesting to note my own reluctance to link to some of the material since even among those who do share that concern there is discussion of some techniques that might be considered objectionable. One of the cornerstones of much of the material is that people are so conditioned by conventional beliefs about what 'should' work that they are liable to find what actually does work highly counter-intuitive at first. Reactions to the challenging of strongly held beliefs can be equally strong and I've often observed this in comment threads on the material. The most mainstream introduction to the community is probably "The Game" by Neil Strauss. I'm not sure it's the best starting point from the point of view of connections to rationality but it's an entertaining read if nothing else. I certainly believe it's possible to benefit from some of the ideas while maintaining your definition of 'winning' but equally there are some parts of the community which are less appealing.
-2roland
I have extensive knowledge in that matter and I would say that the techniques are value-neutral. To make an analogy, think of Cialdini's science of influence and persuasion (http://en.wikipedia.org/wiki/Robert_Cialdini). What evolutionary psychology, Cialdini and others showed is that we humans can be quite primitive and react in certain predetermined ways to certain stimuli. The dating community has investigated the right stimuli for women and figured out the way to "get" her. You have to push the right buttons in the right order, and we males are no different (although the type of buttons is different).

In other words, what you learn in the dating community will teach you how to win the hearts of women. It's up to you how to use this skillset (yes, it's a skillset) IF you manage to acquire it, which btw is not easy at all. It's just a technique; you can use it for good or bad, although admittedly it lends itself more to selfish purposes IMHO. Btw, women are also very selfish creatures, so don't make the mistake of holding yourself to too high a moral standard.

I also think that you might be misguided in that you start with the wrong assumption of what dating is all about. Evolutionarily speaking, dating, alias mating, is not about making the other people better off. On the contrary, having kids is mostly a disadvantage for the parents, but most people do it anyway because we have this desire to have kids. Rationally speaking, we would all probably be better off without them. Of course, if you factor in emotions it becomes more complicated.

Also, there is a fundamental difference between males and females. Males don't get pregnant; they want to have as much sex (pleasure) with as many partners as possible. Women get pregnant (at least before birth control was invented), and so their emotional circuitry is designed to be extremely selective about which males they will have sex with. Also they want their males to stick around as long as possible (to help them take care of the
8HughRistik
In general, I would agree that the teachings are value-neutral. Yet some of these tools are more conducive towards negative uses, while others are more conducive towards positive uses.

It's true that people are not adapted to necessarily make each other optimally happy. Yet in spite of this, our skills give us the capability to find solutions that make both people at least somewhat happy. So in my case, winning is "defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued," as MBlume puts it.

Yes, but the description in your post is contaminated by the oversimplified presumptions about evolutionary psychology in the community. I think you would get a lot out of reading more of real evolutionary psychologists, not just reading popularizations, or what the community says evolutionary psychologists are saying. I can find some cites when I'm at home. Typically, males are more oriented towards seeking multiple partners than women, yet that doesn't mean that they want "as many partners as possible." Some males are wired for short-term mating strategies, and other males are more wired for long-term mating strategies.

Yes, and this is well-demonstrated experimentally. I don't have the citations on hand because I'm not at home, but a guy named Fisman has done some interesting work in this area. Yet this is again oversimplified, because some present-day females follow short-term mating strategies and do not necessarily want males to stick around.

True, though pretty good compromises exist. In a lot of cases, dating is like a Prisoner's Dilemma (though many other payoff matrices are possible). Personally, what I like the most about the community is that it gives me the tools to play C while simultaneously raising the chance that the other person will play C. Even when happiness for both people can't be achieved, it's at least possible for both people to treat each other with respect, even if someone can't give the other p
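The Prisoner's Dilemma framing in the comment above can be made concrete with a standard payoff matrix. The numbers below are the conventional textbook values, not anything from the thread; the point is that mutual cooperation (C, C) beats mutual defection (D, D) for both players, which is why tools that raise the odds of the other side playing C are valuable.

```python
# (my move, their move) -> (my payoff, their payoff); conventional PD values.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do well
    ("C", "D"): (0, 5),  # I cooperate, they defect: I'm exploited
    ("D", "C"): (5, 0),  # I defect, they cooperate: I exploit
    ("D", "D"): (1, 1),  # mutual defection: both do poorly
}

mutual_c = PAYOFFS[("C", "C")]
mutual_d = PAYOFFS[("D", "D")]
print(mutual_c, mutual_d)  # -> (3, 3) (1, 1)
```

Each player is individually tempted to defect (5 > 3 and 1 > 0), yet both end up worse off if both do; hence the appeal of anything that makes mutual cooperation more likely.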
3moshez
I'm not really sure how you can claim "techniques are value-neutral" without assuming what values are. For example, if my values contain a term for someone else's self-esteem, a technique that lowers their self-esteem is not value-neutral. If my values contain a term for "respecting someone else's requests", techniques for overcoming LMR are not value-neutral. Since I've only limited knowledge of the seduction techniques advanced by the community, I did not offer more -- after seeing some of the techniques, I decided that they are decidedly not value neutral, and therefore chose to not engage in them.
3Paul Crowley
A top-level post would be very welcome, I don't want to take this one too far off track. I've slept (and continue to sleep) with a lot of people, and my experience very much contradicts what you say here.
10pjeby

roland:

So you have to be aware that there is a fundamental difference in the objectives of the two which will make it extremely difficult or impossible to make BOTH happy at the same time.

ciphergoth:

my experience very much contradicts what you say here.

That's because it's a great example of theory being used to persuade people to take a certain set of "actions that work". There are other theories that contradict those theories, that are used to get other people to take action... even though the specific actions taken may be quite similar!

People self-select their schools of dating and self-help based on what theories appeal to them, not on the actual actions those schools recommend taking. ;-)

In this case, the theory roland is talking about isn't theory at all: it's a sales pitch that attracts people who feel that dating is an unfair situation. They like what they hear, and they want to hear more. So they read more and maybe buy a product. The writer or speaker then gradually moves from this ev-psych "hook" to other theories that guide the reader to take the actions the author recommends.

That people confuse these sales pitches with actual theory is... (read more)

1roland
What exactly would you like to know? The subject is very broad, it would be easier if you made me a list of questions that are relevant to LW. There are already TONS of sites about this topic so please don't ask me to write another post about seduction in general.
3Paul Crowley
I think a post tailored to the particular interests and language of LW/OB readers would be fairly different from the ones already out there, but if you have a pointer that you think would be particularly appealing to us lot I'm interested.
4AnnaSalamon
I would personally love to see more cross-fertilization between that sub-community and LW, "dark arts" or no. (At least, I think I would; I don't know the community well and might be mistaken.) We need to make contact between abstract techniques for thinking through difficult issues, and on the ground practical strategicness. Importing people who've developed skilled strategicness in any domain that involves actual actions and observable success/failure, including dating (or sales, or start-ups, or ... ?), would be a good way to do this. If you could link to specific articles, or could create discussion threads that both communities might want to participate in, mattnewport, that would be good.
6Hans
I second that. Here in the LW/OB/sci-fi/atheism/cryonics/AI... community, many of us fit quite a few stereotypes. I'll summarize them in one word that everybody understands: we're all nerds*. This means our lives and personalities introduce many biases into our way of thinking, and these often preclude discussions about acting rationally in interpersonal situations such as sales, dating etc., because we don't have much experience in these fields. Anything that bridges this gap would be extremely useful.

*This is not a value judgment. And not everybody conforms to this stereotype. I know, I know, but this is not the point. I'm talking averages here.
-3PhilosophyTutor
I would say that it is largely the ostensible basis of the seduction community. As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable. If their theories are unsupported it doesn't matter, because they can disclaim the theories as just being a psychological trick to get you to take "correct" actions. However, they've got no rigorous evidence that their "correct" actions actually lead to any more mating success than spending an equivalent amount of time on personal grooming and talking to women without using any seduction-community rituals.

They also have such a wide variety of conflicting doctrines and gurus that they can dismiss almost any critique as being based on ignorance, because they can always point to something written somewhere which will contradict any attempt to characterise the seduction community - not that this ever stops them making claims about the community themselves.

They'll claim that they develop such evidence by going out and picking up women, but since they don't do any controlled tests this cannot even in theory produce evidence that the techniques they advocate change their success rate, and even if they did conduct controlled studies their sample sizes are tiny given the claimed success rates. I believe one "guru" claims to obtain sex in one out of thirty-three approaches. I do not believe that anyone's intuitive grasp of statistics is so refined that they can spot variations in such an infrequent outcome and determine whether a given technique increases or decreases that success rate. To do science on such a phenomenon would take a very big sample size. Ergo anyone claiming to have scientific evidence without having done a study with a very big sample size is a fool or a knave.

The mythology of the seduction community is highly splintered and constantly changes over time, which increases the subjective likelihood that we are looking at folklore and scams rather than an
1wedrifid
This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect, on average, to achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them have even actually been falsified. They call it "field testing".

Your depiction of the seduction community is a ridiculous straw man, and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you, they probably wouldn't bother doing so: the usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.
0PhilosophyTutor
If they conducted tests of X versus Y with large sample sizes and with blinded observers scoring the tests, then they might have a basis to say "I know that if I do X I can expect on average to achieve a better outcome with women than if I do Y". They don't do such tests, though. They especially don't do such tests where X is browsing seduction-community sites and trying the techniques they recommend, and Y is putting an equal amount of time and effort into personal grooming and socialising with women without using seduction-community techniques. Scientific methodology isn't just a good idea, it's the law. If you don't set up your tests correctly, you have weak or meaningless evidence.

Or as the Bible says, "But if any place refuses to welcome you or listen to you, shake its dust from your feet as you leave to show that you have abandoned those people to their fate". That's good advice for door-to-door salespersons, Jehovah's Witnesses, and similar people in the business of selling: if you run into a tough customer, don't waste your time trying to convince them - just walk away and look for an easier mark. In science, however, that's not how you do things. In science, if someone disputes your claim, you show them the evidence that led you to fix your claim in the first place.

Are you sure you meant to describe my post as a "shaming attempt"? As pejoratives go this seems ill-chosen, since my critique was strictly epistemological. It seems at least possible that you are posting a standard talking point which is deployed by seduction-community members to dismiss ethical critiques, but which makes no sense in response to an epistemological critique. (There are certainly concerns to be raised about the ethics of the seduction community, but that would be a different post.)
-2wedrifid
Your claim was: Are you familiar with the technical meaning of 'unfalsifiable'? It does not mean 'has not been scientifically tested'. It means 'cannot be scientifically tested even in principle'. I would like it if scientists did more study of this subject, but that is not relevant to whether the claims are falsifiable.

I'd be surprised. I've never heard such a reply, certainly not in response to subject matter which many wouldn't understand (unfalsifiability).

I used the term 'shaming' because the inferred motive (and, regardless of motive, one of the practical social meanings) of falsely accusing the enemy of behavior that looks pathetic is to provide some small degree of humiliation. This can, the motive implicitly hopes, make people ashamed of doing the behaviors that have been misrepresented. I am happy to concede that this point is more distracting than useful. I would have been best served by sticking purely to the (more conventionally expressed) "NOT UNFALSIFIABLE! LIES!"

I assert that the "act like JWs" approach is not taken by the seduction community in general either. For the most part they do present evidence. That evidence is seldom of the standard accepted in science, except when they are presenting claims taken from scientific findings - usually popularizations thereof; Cialdini references abound. I again agree that the seduction community could use more scientific rigor. Shame on science for not engaging in (much) research in what is a rather important area!

Yes, I agree that you didn't get into ethics and that your claim was epistemological in nature. I do believe that the act of making epistemological claims is not always neutral with respect to other kinds of implication. As another tangential aside, I note that if an exemplar of the seduction community were said to be sensitive to public opinion, he would be far more sensitive to things that make him look pathetic than to things that make him look unethical!
-3PhilosophyTutor
In the case of Sagan's Dragon, the dragon is unfalsifiable because there is always a way for the believer to explain away every possible experimental result. My view is that the mythology of the seduction community functions similarly. You can't attack their theories, because they can respond by saying that the theory is merely a trick to elicit specific behaviour. You can't attack their claims that specific behaviours are effective, because they will say that there is proof, but it only exists in their personal recollections, so you have to take their word for it. You can't attack their attitudes, assumptions or claims, because they can respond by pointing at one guru or another and saying that particular guru does not share the attitude, assumption or claim you are critiquing.

Their claim could theoretically be falsified, for example by a controlled test with a large sample size which showed that persons who had spent N hours studying and practicing seduction-community doctrine/rituals (for some value of N which the seduction-community members were prepared to agree was sufficient to show an effect) were no more likely to obtain sex than persons who had spent N hours on things like grooming, socialising with women without using seduction-community rituals, reading interesting books they could talk about, taking dancing lessons and whatnot. I suspect, but cannot prove, that if we conducted such a test, those people who have made the seduction community a large part of their life would find some way to explain the result away, just as the believer in Sagan's dragon comes up with ways to explain away results that would falsify their dragon.

Of course it's not the skeptic's job to falsify the claims of the seduction community. Members of that community very clearly have a large number of beliefs about how best to obtain sex, even if those beliefs are not totally homogenous within that community, and it's their job to present the evidence that led them to the belief
2wedrifid
It is a dramatically different thing to say "people who are in the seduction community are the kind of people who would make up excuses if their claims were falsified" than to say "the beliefs of those in the seduction community are unfalsifiable". While I may disagree mildly with the former claim, the latter I object to as an absurd straw man.

I don't accept the role of a skeptic. I take the role of someone who wishes to have correct beliefs, within the scope of rather dire human limitations. That means I must either look for and process the evidence to whatever extent possible or, if a field is considered of insufficient expected value, remain in a state of significant uncertainty to the extent determined by information I have picked up in passing. I reject the skeptic's role of thrusting the burden of proof around, implying "You've got to prove it to me or it ain't so!" That's just the opposite stupidity to that of a true believer. It is a higher-status role within intellectual communities, but it is by no means rational.

No, it's their job to go ahead and get laid and have fulfilling relationships. It is no skin off their nose if you don't agree with them. In fact, the more people who don't believe them, the less competition they have. Unless they are teachers, people are not responsible for forcing correct epistemic states upon others. They are responsible for their beliefs; you are responsible for yours.
-3PhilosophyTutor
I'm content to use the term "unfalsifiable" to refer to the beliefs of homeopaths, for example, even though by conventional scientific standards their beliefs are both falsifiable and falsified. Homeopaths have a belief system in which their practices cannot be shown not to work; hence their beliefs are unfalsifiable in the sense that no evidence you can find will ever make them let go of their belief. The seduction community has a well-developed set of excuses for why their recollections count as evidence for their beliefs (even though they probably shouldn't), and for why nothing counts as evidence against their beliefs.

It is not the opposite of stupidity at all to see a person professing belief Y and say to them, "Please tell me the facts which led you to fix your belief in Y". If their belief is rational then they will be able to tell you those facts, and barring significantly differing priors, you too will then believe in Y. I suspect we differ in our priors when it comes to the proposition that the rituals of the seduction community perform better than comparable efforts to improve one's attractiveness and social skills that are not informed by seduction-community doctrine, but not so much that I would withhold agreement if some proper evidence were forthcoming.

However, if the local seduction-community members instead respond with defensive accusations, downvotes and so forth, but never get around to stating the facts which led them to fix their belief in Y, then observers should update their own beliefs to increase the probability that the beliefs of the seduction community do not have rational bases. Can you see that, from my perspective, responses which consist of excuses as to why supporters of the seduction-community doctrine(s) should not be expected to state the facts which inform their beliefs are not persuasive? If they have a rational basis for their belief they can just state it. I struggle to envisage probable s
-1wedrifid
On Less Wrong, insisting a claim is unfalsifiable while simultaneously explaining how that claim can be falsified is more than sufficient cause to downvote. This holds even if - and especially obviously when - that claim is false. Further, in general, downvotes of comments by the PhilosophyTutor account - at least those by myself - have usually been for the consistent use of straw men and the insulting misrepresentation of a group of people you are opposed to. Declaring downvotes of one's own comments to be evidence in favor of one's position is seldom a useful approach.

They should not be persuasive, and are not intended as such. Instead, in this case, it was an explicit rejection of the "My side is the default position and the burden of proof is on the other!" debating tactic. The subject of how to think correctly (vs. debate effectively) is one of greater interest to me than seduction.

I also reject the tactic used in the immediate parent. It seems to be of the form "You are trying to refute my arguments. You are being defensive. That means you must be wrong. I am right!" It is a tactic which, rather conveniently, becomes more effective the worse your arguments are!
-4PhilosophyTutor
That's rather sad, if the community here thinks that the word "unfalsifiable" refers only to beliefs which are unfalsifiable in principle from the perspective of a competent rationalist, and that the word is not also used to refer to belief systems held by irrational people which are unfalsifiable from the insider/irrational perspective. The fundamental epistemological sin is the same in each case, since both categories of belief are irrational in the sense that there is no good reason to favour the particular beliefs held over the unbounded number of other, equally unfalsifiable beliefs which explain the data equally well.

That said, I do find it curious that such misunderstandings seem to crop up exclusively in those posts where I criticise the beliefs of the seduction community. Those posts get massively downvoted compared to posts I make on any other topic, and from my insider perspective there is no difference in quality of posting.

There's a philosophical joke that goes like this: "Zabludowski has insinuated that my thesis that p is false, on the basis of alleged counterexamples. But these so-called "counterexamples" depend on construing my thesis that p in a way that it was obviously not intended - for I intended my thesis to have no counterexamples. Therefore p". Source

It's not clear to me at all that I have used straw men or misrepresented a group, and from my perspective it seems that it's impossible to criticise any aspect of the seduction community or its beliefs without being accused of attacking a straw man.

Perhaps we should drop this subtopic then, since it seems solely to be about your views of what you see as a particular debating tactic, and get back to the issue of what exactly the evidence is for the beliefs of the seduction community. If we can agree that how to think correctly is the more interesting topic, then possibly we can agree to explore whether or not the seduction community are thinking correctly by means of examining their
-1wedrifid
Then you should indeed be sad. An unfalsifiable claim is a claim that cannot be falsified. Not only is it right there in the word, it is a basic scientific principle. That the people who present a claim happen to be irrational would be a separate issue. Just say that the seduction community is universally or overwhelmingly irrational when it comes to handling counterevidence to their claims - and we can merrily disagree about the state of the universe. But unfalsifiable things can't be falsified.
-1wedrifid
I would update only slightly from the prior for "non-rationalists are dedicated to achieving a goal through training and practice". EDIT: In case the meaning isn't clear - this translates to "They're probably about the same as most folks are when they do stuff. Haven't seen much to think they are better or worse."
-2PhilosophyTutor
That seems to be a poorly-chosen prior. An obvious improvement would be to instead use "non-rationalists are dedicated to achieving a goal through training and practice, and find a system for doing so which is significantly superior to alternative, existing systems". It is no great praise of an exercise regime, for example, to say that those who follow it get fitter. The interesting question is whether that particular regime is better or worse than alternative exercise regimes.

However, the problem with that question is that there are multiple competing strands of seduction theory, which is why any critic can be accused of attacking a straw man regardless of the points they make. So you need to specify multiple sub-questions of the form "Group A of non-rationalists were dedicated to achieving a goal through training and practice, and found a system for doing so which is significantly superior to alternative, existing systems", "Group B of non-rationalists...", and so on for as many sub-types of seduction doctrine as you are prepared to acknowledge, where the truth of some groups' doctrines precludes the truth of some other groups' doctrines. As musical rationalists Dire Straits pointed out, if two guys say they're Jesus then at least one of them must be wrong.

So then, ideally, we ask all of these people what evidence led them to fix the belief they hold that the methods of their group perform better than alternative, existing ways of improving your attractiveness. That way we could figure out which if any of them are right, or whether they are all wrong. However, I don't seem to be able to get to that point. Since you position yourself as outside the seduction community, and hence immune to requests for evidence, but as thoroughly informed about the seduction community, and hence entitled to pass judgment on whether my comments are directed at straw men, there's no way to explore the interesting question by engaging with you.

Edit to add: I see one of the ancestor p
1taelor
I actually mainly agree with you, but am downvoting both sides on the principle that I'm tired of listening to people argue back and forth about PUAs/seduction communities.
-1wedrifid
I have Hugh in my RSS feed for this reason!
6AnnaSalamon
It sounds as though you have data and experiences that our community should chew on. Please do share specific stories, anecdotes, strategies or habits for thinking strategically about practical domains, techniques you've found useful within "creative rationality", etc. Perhaps in a top-level post?
2HughRistik
Thanks, Anna. Getting more specific is definitely on my list.
2Lethalmud
I'm curious, how did you use rationality to develop fashion sense?

If in 1660 you'd asked the first members of the Royal Society to list the ways in which natural philosophy had tangibly improved their lives, you probably wouldn't have gotten a very impressive list.

Looking over history, you would not have found any tendency for successful people to have made a formal study of natural philosophy.

It would be overconfident for me to say rationality could never become useful. My point is just that we are acting like it's practically useful right now, without very much evidence for this beyond our hopes and dreams. Thus my last sentence - that "crossing the Pacific" isn't impossible, but it's going to take a different level of effort.

If in 1660, Robert Boyle had gone around saying that, now that we knew Boyle's Law of gas behavior, we should be able to predict the weather, and that that was the only point of discovering Boyle's Law and that furthermore we should never trust a so-called chemist or physicist except insofar as he successfully predicted the weather - then I think the Royal Society would be making the same mistake we are.

Boyle's Law is sort of helpful in understanding the weather, sort of. But it's step one of ten million steps, used alone it doesn't work nearly as well as just eyeballing the weather and looking for patterns, and any attempt to judge applicants to the Royal Society on their weather prediction abilities would have excluded some excellent scientists. Any attempt to restrict gas physics itself to things that were directly helpful in predicti... (read more)

[-]badger110

I'm confused about this article. I agree with most of what you've said, but I'm not sure exactly what the point is. I thought the entire premise of this community was that more is possible, but that we're only "less wrong" at the moment. I didn't think there was any promise of results from the current state of the art. Is this post a warning, or am I overlooking a trend?

I agree we shouldn't see x-rationality as practically useful now. You don't rule out rationality becoming the superpower Eliezer portrays in his fiction, but that is certainly a long way off. Boyle's Law and weather prediction is an apt analogy. Just trying harder to apply our current knowledge won't go very far, but there should be some productive avenues.

I think I'd understand your purpose better if you could answer these questions: In your mind, how likely is it that x-rationality could be practically useful in, say, 50 years? What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?

Just trying to understand.

The above would be component 1 of my own reply.

Component 2 would be (to say it again) that I developed the particular techniques that are to be found in my essays, in the course of solving my problem. And if you were to try to attack that or a similar problem you would suddenly find many more OB posts to be of immensely greater use and indeed necessity. The Eliezer of 2000 and earlier was not remotely capable of getting his job done.

What you're seeing here is the backwash of techniques that seem like they ought to have some general applicability (e.g. Crisis of Faith) but which are not really a whole developed rationalist art, nor made for the purpose of optimizing everyday life.

Someone faced with the epic Challenge Of Changing Their Mind may use the full-fledged Crisis of Faith technique once that year. How much benefit is this really? That's the question, but I'm not sure the cynical answer is the right one.

What I am hoping to see here is others, having been given a piece of the art, taking that art and extending it to cover their own problems, then coming back and describing what they've learned in a sufficiently general sense (informed by relevant science) that I can actually absorb it. As for what has been developed outside the rationalist line to address e.g. akrasia, I have found myself unable to absorb it.

But you're not a good test case to see whether rationality is useful in everyday life. Your job description is to fully understand and then create a rational and moral agent. This is the exceptional case where the fuzzy philosophical benefits of rationality suddenly become practical.

One of the fundamental lessons of Overcoming Bias was "All this stuff philosophers have been debating fruitlessly for centuries actually becomes a whole lot clearer when we consider it in terms of actually designing a mind." This isn't surprising; you're the first person who's really gotten to use Near Mode thought on a problem previously considered only in Far Mode. So you've been thinking "Here's this nice practical stuff about thinking that's completely applicable to my goal of building a thinking machine", and we've been thinking, "Oh, wow, this helps solve all of these complicated philosophical issues we've been worrying about for so long."

But in other fields, the rationality is domain-specific and already exists, albeit without the same thunderbolt of enlightenment and awesomeness. Doctors, for example, have a tremendous literature on evidence and decision-making as t... (read more)

An x-rationalist who becomes a doctor would not, I think, necessarily be a significantly better doctor than the rest of the medical world, because the rest of the medical world already has an overabundance of great rationality techniques and methods of improving care that the majority of doctors just don't use.

Evidence-based medicine was developed by x-rationalists. And to this day, many doctors ignore it because they are not x-rationalists.

...huh. That comment was probably more helpful than you expected it to be. I'm pretty sure I've identified part of my problem as having too high a standard for what makes an x-rationalist. If you let the doctors who developed evidence-based medicine in...yes, that clears a few things up.

One thinks particularly of Robyn Dawes - I don't know him from "evidence-based medicine" per se, but I know he was fighting the battle to get doctors to acknowledge that their "clinical experience" wasn't better than simple linear models, and he was on the front lines against psychotherapy shown to perform no better than talking to any bright person.

If you read "Rational Choice in an Uncertain World" you will see that Dawes is pretty definitely on the level of "integrate Bayes into everyday life", not just Traditional Rationality. I don't know about the historical origins of evidence-based medicine, so it's possible that a bunch of Traditional Rationalists invented it; but one does get the impression that probability theorists trying to get people to listen to the research about the limits of their own minds, were involved.