Reply to "Extreme Rationality: It's Not That Great", "Extreme Rationality: It Could Be Great", "The Craft and the Community", and "Why Don't Rationalists Win?"

I’m going to say something which might be extremely obvious in hindsight:

If LessWrong had originally been targeted at and introduced to an audience of competent business people and self-improvement health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.

Right now, rationalists aren’t winning. Rationality helps us choose which charities to donate to, and as Scott Alexander pointed out in 2009 it gives clarity of mind benefits. However, as he also pointed out in the same article, rationality doesn't seem to be helping us win in individual career or interpersonal/social areas of life.

It’s been nearly ten years since then, and I have yet to see any sign that this has changed. I considered the possibility that I just hadn’t heard about other rationalists’ practical success, either because I didn't become a rationalist until around 2015 or simply because no one was talking about their success. Then I realized that was silly: if rationalists had started winning, at least one person would have posted about it here on LessWrong. I recently spoke to Scott Alexander, and he said he still agreed with everything he said in his article.

So rationalists aren’t winning. Why not? The Bayesian Conspiracy podcast (if I recall correctly) proposed the following explanation in one of their episodes: rationality can only help us improve a limited amount relative to where we started out. On this view, people who started out at a lower level of life success, cognitive functioning, or talent cannot outperform non-rationalists who started out at a sufficiently high level.

This argument is fundamentally a cop-out. When others win where we fail, it makes sense to ask, “How? What knowledge, skills, qualities, or experience do they have that we don't? And how might we obtain the same knowledge, skills, qualities, or experience?” To say that others are simply more innately talented than we are, and leave it at that, doesn't explain the mechanism behind their hypothesized greater rate of improvement after learning rationality. It tells us why but not how. And if there were such a mechanism, could we not replicate it and improve more anyway?

So why aren't we winning? What’s the actual mechanism behind our failure?

It's because we lack some of the skills we need to win - not because we don't want to win, and not because we're lazy.

Rationalists are very good at epistemic rationality. But there's this thing that we've been referring to as "instrumental rationality" which we're not so good at. I wouldn’t say it’s just one thing, though. Instrumental rationality seems like many different arts that we're lumping together.

It's more than that, though. We're not just lumping together many different arts of rationality. As anyone who's read the sequence A Human’s Guide to Words would know, categorization and labeling are not neutral actions for a human. By classifying all rationality as one of two types, epistemic or instrumental, we limit our thinking about rationality. As a result of this classification, we fail to acknowledge the true shape of rationality’s similarity cluster.

The cluster’s true shape is that of instrumental rationality: it is the art of winning, a.k.a. achieving your values. All rationality is instrumental, and epistemic rationality is merely one example of it. The art of epistemic rationality is how you achieve the value of truth. Up until now, "instrumental rationality" has been a catch-all term we've been using for the arts of winning at every other value.

While achieving the value of truth is extremely useful for achieving every other value, truth is still only one value among many. The skills needed to achieve other values are not the same as the skills needed to achieve the value of truth. That is to say, epistemic rationality includes the skill sets that are useful for obtaining truth and “instrumental rationality” includes all other skill sets.

Truth is a precious and valuable thing. It's just not enough by itself to win in other areas of life.

That might seem obvious at face value. However, I'm not sure we understand that on a gut level.

I have the impression that many of us assume that so long as we have enough truth, everything else will simply fall into place - that we’ll do everything else right automatically without needing to really develop or practice any other skills.

Perhaps that would be the case with enough computing power. An artificial superintelligence could perhaps play baseball extremely well with the following method:

1. Use math to calculate where the particles in the bat, the ball, the air, and all the players are moving.

2. Predict which particles have to be moved to and from what positions in order to cause a chain reaction that results in the goal state. In this case, the goal state would be a particle configuration that humans would identify as a won game of baseball.

3. Move the key particles to the key positions. If you fail to reach the goal state, adjust your priors accordingly and repeat the process.
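Step 3's "adjust your priors" is ordinary Bayesian updating. As a minimal numeric sketch (the hypothesis and numbers here are invented purely for illustration):

```python
# One round of "adjust your priors": Bayes' rule for a binary hypothesis H.
# P(H|E) = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|~H) * P(~H))
def update(prior, likelihood_if_true, likelihood_if_false):
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Start 30% confident that a given swing strategy wins the at-bat; observe a
# hit that the strategy makes 80% likely and the alternative makes 20% likely:
posterior = update(0.3, 0.8, 0.2)
print(round(posterior, 3))  # → 0.632
```

The superintelligence in the thought experiment would be running something like this loop over particle configurations rather than over a single toy hypothesis.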

An artificial superintelligence could perhaps navigate relationships, or discover important scientific truths, or really anything else, all by this same method, provided that it had enough time and computing power to do so.

But humans are not artificial superintelligences. Our brains compress information into caches for easier storage. We will not succeed at life just by understanding particle physics, no matter how much reductionism we do. As humans, our beliefs are organized into categorical levels. Even if we know that reality itself is really all just one level, our brains don't have the space to contain enough particle-level knowledge to succeed at life (assuming that particles really are the base level, but we’ll leave that aside for now). We need that knowledge compressed into different categorical levels or we can't use it.

This includes procedural knowledge like "which particles need to be moved to and from what positions to win a game of baseball". If our brains were big enough to be capable of knowing that, then all we would need to do to win is obtain that knowledge and then output the correct choice.

For an artificial superintelligence, once it has enough relevant knowledge, it would have all that it needs to make optimal decisions according to its values.

For a human, given the limits of human brains, having enough relevant knowledge isn't the only thing needed to make better decisions. Having more knowledge can be extremely useful for achieving one's other goals besides just knowledge for knowledge’s sake, but only if one has the motivation, skills and experience to leverage that knowledge.

Current rationalists are really good at obtaining knowledge, at least when we manage to apply ourselves. But we're failing to leverage that knowledge. For instance, we ought to be dominating prediction markets and stock markets and producing a disproportionately high number of superforecasters, to the point where other people notice and take an interest in how we managed it.

In fact, betting in prediction markets and stock markets provides an external criterion for measuring epistemic rationality - just as martial arts skill can be measured by the external criterion of hitting your opponent.
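For the forecasting half of that criterion there is even a standard scoring rule. A minimal sketch (the forecasts below are made up for illustration):

```python
# Brier score: mean squared error between stated probabilities and what
# actually happened (1 = event occurred, 0 = it didn't). Lower is better;
# always answering 50% scores exactly 0.25 on binary questions.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1]
calibrated = brier_score([0.9, 0.2, 0.8, 0.7], outcomes)
coin_flipper = brier_score([0.5, 0.5, 0.5, 0.5], outcomes)
print(round(calibrated, 3), coin_flipper)  # → 0.045 0.25
```

This is (roughly) the criterion forecasting tournaments score people on, so "are rationalists winning here?" is directly measurable.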

So why haven't we been dominating prediction and stock markets? Why aren't we dominating them right now?

In my own case, I'm still an undergraduate college student living largely off of my parents' income. I can't afford to bet: I don't have enough money of my own, my income is highly irregular and hard to predict, which makes budgeting difficult, and I would need to explain the expense to my mother if I started betting. If I did have more money of my own, though, I definitely would spend some of it on this. But do a lot of other people here have such extenuating circumstances? That would feel like too much of a coincidence.

It's more likely to be because many of us haven't learned the instrumental skills needed to get ourselves to go out and bet. Such skills might include time management to set aside time to go bet, or interpersonal/communication skills to make sure the terms of the bets are clear and that we're only betting against those who will abide by the terms once they're set.

Prediction markets and stock markets aren't the only opportunity that rationalists are failing to take advantage of. For example, our community almost entirely neglects public relations, despite its potential as a way to significantly increase staff and funds for the causes we care about by raising the sanity waterline. We need better interpersonal/communication skills for interacting with the general public, and we need to learn to be more pragmatic so we will actually be able to get ourselves to do that instead of succumbing to an irrational deep-seated fear of appearing cultish.

Competent business people and self-improvement health buffs do have those skills. We don’t. That’s why we’re not winning.

In short, we need arts of rationality for the pursuit of values beyond mere truth. One of my friends who has read the Sequences has been spending years working on beginning to map out those other arts, and he recently presented his work to me. It's really interesting. I hope you find it useful.

(Note: Said friend will be introducing himself on here and writing a sequence about his work later. When he does I will add the links here.)

90 comments

Humans who won typically just choose harder goals and don't spend a lot of time patting themselves on the back online. Fwiw superforecasters were disproportionately ssc readers, I interviewed four of them. Also, lw, like most self help communities, attracts the walking wounded. See mental health incidence in the ssc survey. Going from well below average in several metrics to slightly above doesn't look impressive from the outside but is very large from the inside.

I support the opposite perspective - it was wrong to ever focus on individual winning and we should drop the slogan.

"Rationalists should win" was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more "rational".

But this got caught up in excitement around "instrumental rationality" - the idea that the "epistemic rationality" skills of figuring out what was true, were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, ie became a self-help group.

I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can't deny this makes sense. I can just point out that it doesn't resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss... (read more)

There's a lot in the word "woo".

One of my favorite examples is Roy Baumeister's book Willpower, which he published in 2011. He's a professor who two years later received the highest award given by the Association for Psychological Science, the William James Fellow Award.

The book builds on a bunch of non-replicable science and goes on to recommend that people eat sugar to improve their willpower, in a way that maps well onto what Feynman describes as cargo cult science. We know the bad effects of sugar on the human body.

Here we have a distinguished psychologist who, in this decade, wrote a book that does the equivalent of recommending bloodletting. That's not a community with high epistemic norms.

Scott, you recently wrote a post where you were surprised that neuroscience as a field messed up a question such as neurogenesis. Given the track record of the community, that should be no surprise, as they are largely doing the thing Feynman called cargo cult science. They even publish papers that constantly claim to predict things better than is theoretically possible.

Everybody tries to succeed at life. It feels to me like "not do self-help" b... (read more)

Is the better argument not that the wise ladies were onto something? Traditional medicines are a mixed bag, but some herbal remedies are truly effective and have since been integrated into scientific medicinal practice. Rather than inventing his own theoretical framework, Hippocrates would have been better served by investigating the existing herbal practices and trying to separate the truly effective from the placebo. Trial and error is a form of empiricism, after all - and this seems to be how cultural knowledge like herbal medicine came to be.

I have… issues with this comment; it is not without flaws. That said, Scott, I want to focus on a point which you make and with which I don’t so much disagree as think that you don’t take it far enough.

I will be the first to agree that becoming a self-help community was, for Less Wrong, a terrible, terrible mistake. But I would go further.

I would say that becoming a community, at all, was a mistake. There was never a need for it. Despite even Eliezer’s approval and encouragement, it was a bad idea—because it predictably led to all the problems, and the problem-generating dynamics, which we have seen and which we continue to see.

It always was, and remains, a superior strategy, to be a, shall we say, “project group”—a collective of individuals, who do not constitute a community, who do not fulfill each other’s needs for companionship, friendship, etc., who do not provide a “primary social circle” to its members, but who are united by their collective interest in, and commitment to, a certain pursuit. A club, in other words.

In short, “bonding” was at least as bad an idea as I initially suspected. Probably much worse.

Very much agree. Some people want the well-being benefits of belonging to a substitute church, and will get these benefits somewhere anyway, but I think productive projects should avoid that association. (And accept the risk of fizzling out, like IAFF and Arbital did when trying to grow independently from LW.) Here's hoping that Abram, Paul and Rohin with their daily posts can make LW a more project-focused place.

Said Achmiz, 5y:
I’ve never even heard of IAFF! What is that?

Edit: oops, cousin_it beat me to it.

The "Intelligent Agent Foundations Forum" at

It was a platform MIRI built for discussing their research, that required an invite to post/comment. There's lots of really interesting stuff there - I remember enjoying reading Jessica Taylor's many posts summarising intuitions behind different research agendas.

It was a bit hard and confusing to use, and noticing that we might be able to do better was one of the things that led us to build the AI Alignment Forum.

As the new forum is a space for discussion of all alignment research, and all of the old IAFF stuff is a subset of that, we (with MIRI's blessing) imported all the old content. At some point we'll set all the old links to redirect to the AI Alignment Forum too.

cousin_it, 5y: lots of good stuff there, but most of it gets very few responses. The recently launched is an attempt to do the same thing, but with crossposting to LW.

Very much disagree - but this is as someone not in the middle of the Bay area, where the main part of this is happening. Still, I don't think rationality works without some community.

First, I don't think that the alternative communities that people engage with are epistemically healthy enough to allow people to do what they need to reinforce good norms for themselves.

Second, I don't think that epistemic rationality is something that a non-community can do a good job with, because there is far too little personal reinforcement and too few positive vibes to get people to stick with it if everyone is going it alone.

Are you saying that epistemic rationality didn't exist before the LW community, or that (for instance) academia is an adequate community?
Academia in general is certainly not an adequate community from an epistemic-standards point of view; while small pockets are relatively healthy, none are great. And yes, the various threads of epistemic rationality certainly predated LessWrong, and there were people and even small groups that noted the theoretical importance of pursuing it, but I don't think there were places that actively advocated that members follow those epistemic standards. To get back to the main point: while I don't think that it is necessary for the community to "fulfill each other's needs for companionship, friendship, etc.," I don't think that there is a good way to reinforce norms without something at least as strongly affiliated as a club. There is a fine line between club and community, and I understand why people feel there are dangers of going too far, but before LW, few groups seem to have gone nearly far enough in building even a project group with those norms.
By whose epistemic standards? And what's the evidence for the claim?
Mine, and my experience working in academia. But (with the very unusual exceptions of FHI, GMU's economics department, and possibly the new center at Georgetown) I don't think you'd find much disagreement among LWers who interact with academics that academia sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals.
I think your comment is unnecessarily hedged -- do you think that you'd find much disagreement among LWers who interact with FHI/GMU-Econ over whether people there sometimes (vs never) fail to do level-one things? I think I understand the connotation of your statement, but it'd be easier to understand if you strengthened "sometimes" to a stronger statement about academia's inadequacy. Certainly the rationality community also sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals -- what is the actual claim that distinguishes the communities?
That's a very good point; I was definitely unclear. I think the critical difference is that in epistemically healthy communities, when such a failure is pointed out, some effort is spent on identifying and fixing the problem, instead of pointedly ignoring it or actively defending the inadequate status quo from even Pareto-improving changes.
Oh, I see, your complaint is about instrumental rationality. Well, naturally they're bad at that. Most people are. You don't get good at doing things by studying rationality in the abstract. EY couldn't succeed in spending $300k of free money on producing software to his exact specifications. I was thinking more of epistemic rationality, having given up on instrumental rationality.

I don't think they get epistemic rationality anywhere near correct either. As a clear and simple example, there are academics currently vigorously defending their right not to pre-register empirical studies.

And even of those who do preregister, nobody puts down their credence for the likelihood that there's an effect.
Said Achmiz, 5y:
This is a fine desire to have. I share it. And herein lies your problem. You have, I surmise, encountered many terrible people in your life. This sucks. The solution to this is simple, and it’s one I have advocated in the past and continue to advocate: Be friends with people who are awesome. Avoid people who suck.

Let me assure you, in the strongest terms, that “rationalists” are not the only people in the world who are awesome. I, for one, have a wonderful group of friends, who are “interested in rationality” in, perhaps, the most tangential way; and some, not at all. My friends are never “rude or condescending” to me; I can be, and am, as “rational” around them as I wish; and the idea that my close friends would not be ok with curiosity and skepticism is inconceivable.

It is even perfectly fine and well if you select your personal friends on the basis of some combination of intelligence, curiosity, skepticism, “rationality”, etc. But this is not at all the same thing as making a “rationality community” out of Less Wrong & co - not even close.

Finally: For God’s sake, get out of the Bay Area. Seriously. Things are not like this in the rest of the world.
Said Achmiz, 5y:
Alright. Well, generalize my advice, then: leave the social circles where that sort of thing is at all commonplace. If that requires physically moving cities, do that. (For example, I live in New York City. I don’t recall ever hearing the term “tech-bro” used in a real conversation, except perhaps as an ironic mention… maybe not even that.)
The thing is -- and here I disagree with your initial comment thread as well -- peer pressure is useful. It is spectacularly useful and spectacularly powerful. How can I make myself a more X person, for almost any value of X, even values that we would assume entirely inherent or immutable? Find a crowd of X people that are trying to be more X, shove myself in the middle, and stay there. If I want to be a better rationalist, I want friends that are better rationalists than me. If I want to be a better forecaster, I want friends that are better forecasters than me. If I want to be a more effective altruist, earn more to give more, learn more about Y academic topic, or any other similar goal, the single most powerful tool in my toolbox -- or at least the most powerful tool that generalizes so easily -- is to make more friends that already have those traits. Can this go bad places? Of course it can. It's a positive feedback cycle with no brakes save the ones we give it. But... ... well, to use very familiar logic: certainly, it could end the world. But if we could harness and align it, it could save the world, too. (And 'crowds of humans', while kind of a pain to herd, are still much much easier than AI.)
Said Achmiz, 5y:
You’re equivocating between the following:
1. To become more X, find a crowd of people who are more X.
2. To become more X, find a crowd of people who are trying to be more X.
Perhaps #1 works. But what is actually happening is #2. … or at least, that’s what we might charitably hope is happening. But actually instead what often happens is:
3. To become more X, find a crowd of people who are pretending to try to be more X.
And that definitely doesn’t work.
Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn't help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up. And sure, 3 is indeed what often happens. ... First of all, part of the whole point of all of this is to be able to do things that often fail, and succeed at them anyway; being able to do the difficult is something of a prerequisite to doing the impossible. Secondly, all shounen quips aside, it's actually not that hard to tell when someone is merely pretending to be more X. It's easy enough that random faux-philosophical teenagers can do it, after all :V. The hard part isn't staying away from the affective death spiral, it's trying to find the people who are actually trying among them -- the ones who, almost definitionally, are not talking nearly as much about it, because "slay the Buddha" is actually surprisingly general advice.
Said Achmiz, 5y:
What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X. EDIT: Empirically, it seems rather hard, in fact. Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…
Right—they call it the "principle of charity."
Fair. Nevertheless, if the average of the group is around my own level, that's good enough for me if they're also actively trying. (Pretty much by definition of the average, really...) ... Okay, sorry, two-place function. I don't seem to have much trouble distinguishing. (And yes, you can reasonably ask how I know I'm right, and whether or not I myself am good enough at the relevant Xs to tell, etc etc, but... well, at some point that all turns into wasted motions. Let's just say that I am good enough at distinguishing to arrive at the extremely obvious answers, so I'm fairly confident I'll at least not be easily misled.)
I think it's possible (and important) to analyze this phenomenon and see what's going on. But the point is that this will involve analyzing a phenomenon - ie truth-seeking, ie epistemic rationality, ie the thing we're good at and which is our comparative advantage - and not winning immediately.

I mostly agree with this, but want to point at something that your comment didn't really cover, that "whether to go to the homeopath or the doctor" is a question that I expect epistemic rationality to be helpful for. (This is, in large part, a question that Inadequate Equilibria was pointed towards.) [This is sort of the fundamental question of self-help, once you've separated it into "what advice should I follow?" and "what advice is out there?"]

But this requires that the question of how to evaluate strategies be framed more in terms of "I used my judgment to weigh evidence" and less in terms of "I followed the prestige" or "I compared the lengths of their articulated justifications" or similar things. A layperson in 1820 who is using the latter will wrongly pick the doctors, and a confused layperson in 200... (read more)

Eli Tyre, 5y:
It sure seems like you should be able to do better than spending literally two thousand years. There are much better existing methodologies now than there were then.
I would be very wary of using him as an example, because the public image of him is very much determined by the media. He did succeed at getting a degree at Wharton Business School. Peter Thiel, who has actually met him in person, considers him to be exceptional at understanding how the individual people he deals with tick.
Eli Tyre, 5y:
How do you know this?
One of the interviews on YouTube. I unfortunately don't have a link right now.

I think you are not looking in the right places, as the groups of rationalists I know are doing incredibly well for themselves - tenure-track positions at major universities, promotions to senior positions in US government agencies, incredibly well paid jobs doing EA-aligned research in machine learning and AI, huge amounts of money being sent to the rationalist-sphere AI risk research agendas that people were routinely dismissing a few years ago, etc.

To evaluate this more dispassionately, however, I'd suggest looking at the people who posted high-karma posts in 2009, and seeing what those posters are doing now. I'll try that here, but I don't know what some of these people are doing now. They seem to be an overall high-achieving group. (But we don't have a baseline.) - Page 1: I'm seeing Eliezer (he seems to have done well), Hal Finney (unfortunately deceased, but had he lived a bit longer he would have been a multi-multi-millionaire for being an early bitcoin holder/developer), Scott Alexander (I think his blog is doing well enough), Phil Goetz - ?, Anna Salamon (helping run CFAR), "Liron" - (?, but he... (read more)

Jim Babcock is working on the LW team with Oli, Ray, and me :-)

My hot take response to OP is that the general question should be how well teams of rationalists are doing at their actual goals, not whether they seem superficially successful on common, easy to measure metrics (i.e. lotsa money and popularity).

To pick one of the top goals, how much better is the world doing on its long-term trajectory due to the work of teams around here? There's many key object-level insights (e.g. logical inductors and other core research), and noticeably more global coordination around superintelligence as x-risk (discussion of Bostrom's book, several full-time and thoughtful funders in the space - OpenPhil, BERI, etc, - highly competent research teams at DeepMind and OpenAI and UC Berkeley, focused tech teams building software for the research community *cough*, a few major conferences, and more). Naturally a bunch of stuff is behind the scenes too.

Perhaps you expected all of this to happen by default, but I've been repeatedly surprised by the magnitude of positive events that have occurred. If I compare this to a few bloggers talking about the problem details just 10 years ago, it see... (read more)

Unfortunately, or perhaps fortunately, the really huge problems get all the attention from the really motivated smart people who get convinced by the rational arguments. Perhaps the way forward on the "improve general rationality" front is to try hiring educational experts from outside the rationality community to build curricula and training based on the Sequences while getting feedback from CFAR, instead of having CFAR work on building such curricula (which they are no longer doing, AFAICT).
Ben Pace, 5y:
Yep, certainly much of the founding CFAR team has become busy with other projects in recent years. I'm not sure I expect hiring people solely based on their educational expertise to work out well. I agree that one of the things CFAR has done expertly, to a much higher level than any other institution I've interacted with (and I did CS at Oxford), is learn how to teach. But while pedagogy is core to the product, the goal is rationality. And I'm not sure someone good at pedagogy but not rationality will be able to make a lot of progress working on rationality without heavy guidance; they can only (do something like) streamline the existing product.

Also: I think you're implying that AI is a really huge problem and rationality less so. I'm not sure I agree; I mostly think that AI alignment research has seen an increase in tractability lately. I think that sanity is basically still very rare and super important. For example, people haven't become more sane re: AGI; it's just that the amount of sanity required to not bounce off the problem entirely has decreased (e.g. your local social incentives push less against recognising the problem now that Superintelligence + Musk caused it to be a bit more mainstream, and there are some open technical problems to work on and orders of magnitude more funding available). If there's another problem as hard and important as noticing AI alignment research as important, it's not obvious we've gotten much better at noticing it in the past 5 years.
I'm not sure I expect hiring people solely based on their educational expertise to work out well.

Yes, there needs to be some screening other than pedagogy, but money to find the best people can fix lots of problems. And yes, typical teaching at good universities sucks, but that's largely because it optimizes for research. (You'd likely have had better professors as an undergrad if you had gone to a worse university - or at least that was my experience.)

...they can only (do something like) streamline the existing product.

My thought was that streamlining the existing product and turning it into useable and testably effective modules would be a really huge thing.

Also: I think you're implying that AI is a really huge deal problem and rationality is less.

If that was the implication, I apologize - I view safe AI as only near-impossible, while making actual humans rational is a problem that is fundamentally impossible. But raising the sanity waterline has some low-hanging fruit - not getting most people to CFAR-expert levels, but getting high schools to teach some of the basics in ways that potentially have significant leverage in improving social decision-making in general. (And if the top 1% of people in high schools also take those classes, there might be indirect benefits leading to an increase in the number of CFAR-expert-level people in a decade.)

Just to fill in the slot: in 2009 I was living in Moscow and mostly just partying and enjoying life, and in 2018 I'm living in Zurich with my wife and five kids. Was very happy with my life then, and am very happy now. Doing nicely in terms of money, but no big accomplishments if that's what you're asking about. And no, I wouldn't attribute it to LW, just normal life going on.

Holy shit. Psychohistorian taught my AP Calc BC class. I am in shock.
Update: I messaged Dr. Pechenick on LinkedIn, and I regret to report that he is not in fact Psychohistorian on LessWrong, but Psychohistorian on Twitter. Still, hell of a coincidence.
If LessWrong had originally been targeted at and introduced to an audience of competent business people and self-improvement health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.

This sounds like "the best way to make sure your readers are successful is to write for people who are already successful". It makes sense if you want to brag about how successful your readers are. But if your goal is to improve the world, how much change would that bring? There is already a ton of material for business people and self-help fans; what would yet another website for them accomplish? If people are already competent, self-improving businessmen, what motive would they have to learn about rationality?

The Bayesian Conspiracy podcast ... proposed ... that rationality can only help us improve a limited amount relative to where we started out. They predicted that people who started out at a lower level of life success/cognitive functioning/talent cannot outperform non-rationalists who started out at a sufficiently high level.

The past matters, because changing things takes time. O... (read more)

That's a simple and plausible point, but it's also rather devastating to the claim that you can get significant value out of learning generic rationality skills.
I'm confused; it seems like evidence against the claim that you can get arbitrary amounts of value out of learning generic rationality skills, but I don't see it as "devastating" to the claim you can get significant value, unless you're claiming that "spent years learning all that stuff, and now do it as a day job; some of them 16 hours a day" should imply only a less-than-significant improvement. Or am I missing something here?
Yeah, this. It is a mistake -- and I suspect a popular one -- to think that rationality trumps any amount of domain-specific knowledge or resources. Ceteris paribus, a rational person playing the stock market should have an advantage over an irrational one with the same amount of skill, experience, time spent, etc. The question is whether this advantage creates a big difference or a rounding error.

Another question is whether playing the stock market is actually a winning move: how much is skill, how much is luck, and whether the part that is skill is adequately rewarded... compared to using the same amount of skill somewhere else, and putting your savings into a passively managed index fund. If you invest your own money, even if you do everything right, your profit will be 1000 times smaller compared with a person who invests 1000× more money equally well. So, even if you make a profit, it may be less than your potential salary somewhere else, because you are producing a multiplier only for a moderate amount of money (unless you started as a millionaire).

On the other hand, if you invest other people's money, it depends on the structure of the market: how much other people's money is there to be invested, and how many people are competing for this opportunity. Maybe there are thousands of wannabe investors competing for the opportunity to manage a few dozen funds. Then, even if the smartest ones make a big profit, their personal reward may be quite small. Because the absolute value of your skill is not relevant here; what matters is the relative value of employing you versus the other guy who would love to take your position -- and the other guy is pretty smart, too.
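The 1000× point can be sketched numerically. All figures here are illustrative assumptions, not data from the thread: the same skill edge produces absolute profits that scale with capital under management, so a small investor's skill may earn less than an ordinary salary.

```python
# Sketch (assumed numbers): equal skill, unequal capital.
# The same annual edge scales linearly with the money invested.

edge = 0.05                  # assumed annual outperformance from skill
own_capital = 100_000        # investing your own savings
fund_capital = 100_000_000   # someone investing 1000x more, equally well

own_profit = own_capital * edge    # $5,000/year -- below a typical salary
fund_profit = fund_capital * edge  # $5,000,000/year from the same skill

print(own_profit, fund_profit)  # 5000.0 5000000.0
```

Under these assumptions, doing everything right with your own modest capital still pays less than employing the same skill elsewhere for a salary.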

I broadly agree with your main points. However,

If rationalists had started winning, at least one person would have posted about it here on LessWrong.

I did post about this, and the benefits have continued to accrue. Compared to my past self, I perceive myself to be winning dramatically harder on almost all metrics I care about.

What does winning look like to you? Lots of rationalists have pretty successful careers as programmers, which, depending on what they are going for, could be considered winning. Is it that they aren't "winning" by your definition, or theirs?

Can you describe the thing you think rationalists are failing at, tabooing "winning"?

Not the author, but my guess would be this:

On various metrics, there can be differences in quantity, e.g. "a job that pays $10k" vs "a job that pays $20k", and differences in quality, e.g. "a job" vs "early retirement". Merely improving quantity does not make a good story. And perhaps it is foolish, but I imagine "winning" as a qualitative improvement, instead of merely 30% or 300% more of something.

And maybe this is wrong, because a qualitative improvement brings qualitative improvements as a side effect. A change from "$X income" to "$Y income" can also mean a change from "worrying about survival" to "not worrying about survival", a change from "cannot afford X" to "bought the X", or even a change from "the future is dark" to "I am going to retire early in 10 years, but as of today, I am not there yet". Maybe we insufficiently emphasize these qualitative changes, because... uhm, illusion of transparency?

I went from borderline nonfunctional to pretty functional. This is not at all obvious even to those who knew me because I had been masking the growing problems really well using just raw intellectual brute force. More "attracts the walking wounded" anecdote.

Further, I kind of expect that Really Winning in the sense you're talking about is far more likely when (a) you get lucky, and/or (b) you're willing to stomp on other people. The first is not increased and the second is actively decreased by LWing (I think and hope).

Also, we have funded, active research into the not-so-covert true goal of original LW.

Can we do better? Yeah, definitely. Is it really so bleak? I don't think so.

I don't see the 'why aren't you winning?' critique as that powerful, and I'm someone who tends to be critical of rationality writ large.

High-IQ societies and superforecasters select for demonstrable performance at being smart/epistemically rational. Yet on surveying these groups you see things like, "People generally do better than average by commonsense metrics; some are doing great, but it isn't like everyone is a millionaire". Given that the barrier to entry to the rationalist community is more "sincere interest" than "top X-percentile of the population", it would be remarkable if they exhibited even better outcomes as a cohort.

There are also going to be messy causal-inference worries that cut either way. If there is in some sense 'adverse selection' (perhaps as in IQ societies), with rationalists tending to have less aptitude at social communication, greater prevalence of mental illness (or whatever else), then these people enjoying modest-to-good success in their lives reflects extremely well on the rationalist community. Contrariwise, there's plausible confounding where smart creative people will naturally gravita... (read more)

Within a narrow field, where data is plentiful, learning rationality is much less powerful than learning from piles of data. Imagine three people: A, B, and C. A doesn't know any chess or rationality. B has studied game theory, Bayes' theorem, principles of decision theory, and all-round rationality, but has never played chess before and has just been told the rules. C has been playing chess for years.

I would expect C to win easily. It's much easier to learn from experience, and to remember your teacher's experience, than it is to deduce what good chess strategies are from first principles. The only time I would expect B to win is if they were playing Nim, or some other game with a simple winning strategy, and C had an intuition for this strategy but sometimes made mistakes. I would expect B to beat A, however.

Rationality is learning to squeeze every last drop of usefulness out of your data, and doing this is less effective than just grabbing more data when data is plentiful. Financial markets are another plentiful-data domain. Many hedge fundies already know game theory, and they also have detailed knowledge of financial minutiae. Wannabe rationalists: if you want to be a banker, go a... (read more)

Eli Tyre:
I think I would add to this: "domains where there is lots of confusing/conflicting data, where you have to filter the signal from the noise". I'm thinking of fields with many competing academic positions, like macroeconomics, or nutrition, or (of highest practical relevance) medicine. Many of Scott Alexander's posts, for instance, involve wading into a confusing morass of academic papers and then using principles of good reasoning to figure out, as best he/we can, what's actually going on.
Eli Tyre:
This is a very important point, and I think it is worthy of being its own, titled, top-level post.
In fact, betting in prediction markets and stock markets provides an external criteria for measuring epistemic rationality [...]
So why haven't we been dominating prediction and stock markets? Why aren't we dominating them right now?

Trying to 'dominate' the stock market is a very bad idea, roughly analogous to your AI baseball example. The generally accepted best approach is to passively accumulate index funds, which I imagine is exactly what many people here are already doing. For individuals, winning is mostly about not-losing, which tends to be invisible; if you succeed, nothing happens.

cf. Antigravity Investments (investment advisor service for EAs), which recommends a passive index fund approach.

My son is winning. Although only 13, he received a 5 (the highest score) on the Calculus BC and the Java programming AP exams. He is currently taking a college-level course in programming at Stanford Online High School (Data Structures and Algorithms), and he works with a programming mentor I found through SSC. He reads SSC, and has read much of the Sequences. His life goal is to help program a friendly superintelligence. I've been reading SSC, Overcoming Bias, and LessWrong since the beginning.


Yeah, but which way is the arrow of causality here? Like, was he already a geeky intellectual, and that's why he's both good at calculus/programming and he reads SSC/OB/LW? Or was he "pretty average", started reading SSC/OB/LW, and then that made him become good at calculus/programming?

Yes, genetics + randomness determines most variation in human behavior, but the SSC/LW stuff has helped provide some direction and motivation.

Have you been using turbocharging training with him?
Is there an actual description of turbocharging training beyond "deliberate practice but where you think hard about not Goodhart-ing and practicing the wrong thing"?
Ben Pace:
I don't think there's been a write-up of it anywhere.
Eli Tyre:
Val started (didn't finish) a sequence once, but it looks like he removed the sequence-index from his blog. In any case, I (who am not Val) would endorse that description.
It was taught at CFAR during the period I think James attended.
What is it? I don't remember turbocharging from CFAR.
It's one of the things Val taught. I honestly don't remember much of the details, but "deliberate practice but where you think hard about not Goodhart-ing and practicing the wrong thing" actually sounds about right.

"Rationalists are very good at epistemic rationality."

As people very good at _epistemic_ rationality, I am sure you realize that the relevant comparison is success after one has been exposed to rationality versus hypothetical success had one not been exposed.

That -- compared to what? -- needed saying

The winningest rationalist I know of is Dominic Cummings, who was the lead strategist behind the Brexit pro-Leave movement. While the majority of LWers may not agree with his goals, he did seem to be effective, and he frequently makes references to rationalist concepts (including, IIRC, some references to the work of Eliezer Yudkowsky) on his blog.

How much of a difference do you think he made? Was there strong pro-Remain sentiment before he got started?

I followed his work, and I estimate the difference he made to be very high relative to other individuals working on the issue (on either side). According to his own estimation, his contribution consisted of assembling highly competent people and then minimizing interference from incompetent ones.

Some context: he had previously worked on the campaign to reject the Euro, and so had more experience with the question of 'how people in the UK feel about the EU' than most, which is why there was a push to recruit him for a campaign. Their campaign took a series of basic steps, like trying to determine what voters actually thought, which none of the other campaigns did. Then they tested a bunch of different methods to communicate with the voters effectively (the other campaigns went with old strategies and did not check whether they worked), and focused on driving voter turnout.

In a nutshell what he did was: determine to solve the actual problem, find other like-minded people, and then set about actually trying to solve the problem using basic tools like measurement and experiment. You can find the list of posts on his blog relevant to the campaign here, but I think the real meat is in #20-22. He does not claim responsibility for success, placing most of the credit with the team and most of the blame with an incompetently run Remain campaign.

On what basis? The polling was close before the referendum, and the result of the referendum was close. I am not seeing evidence that he made something happen that would not have happened otherwise. Are you saying that he must have got results because he was using the right methods? How come we still don't know?
What he made happen that would not have happened is voters turning out to vote for Brexit at a higher rate. When the campaign began, polling was not close. Here is one, from a company which Cummings referred to frequently, that showed a 10 point lead for staying in as of August 2015. The rest of that post is here, wherein he discusses the state of things as the campaign was beginning.

Further, I point you to the expected outcomes, which were heavily in favor of the UK remaining. On page 4 here you see the odds Betfair was putting on the question. This is only over the 10 week span of the official campaign immediately before the referendum.

Using the right methods, their team was able to determine that the actual level of support for leaving was higher than the other campaigns or the media expected. Investigating what the voters thought (via focus groups) helped them identify what people's concerns were, for and against. Then they tested different ways of communicating with voters, such that their communication resonated with leave voters and minimized antagonism of remain voters. As a result, the turnout for leave voters was higher than expected before the campaign. At the same time, the other campaigns made assumptions both about the real state of opinion and about methods for communicating with voters. These assumptions were wrong, and they did not test them. As a result, turnout for remain voters was mediocre. I'm not sure if this was expected or not; the remain campaign was pretty much business as usual, so I suspect it was.

What does winning look like?

I think I might be a winner. In the past five years: I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (PhD Applied Physics from Stanford) and have held a couple of prestigious high-paying jobs (first as a management consultant at BCG, and now an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to keep learning about areas outside my expertise. I floss and exercise and generally am satisfied with my health.

I think I could be considered both a rationalist and a winner.

But I post rarely to LessWrong because my rational perception is that it takes effort but does not provide return. Generally I think my shortcomings are shortcomings of execution rather than irrationality, and those are the areas I aim to improve. My arena for self-improvement is my workplace and my life, not a website. As a result, stories like mine might be underrepresented in your sampling.

If rationalists were winning, how would we know? What would winning look like?

In other words, people who win at offline life spend less time on the internet because they're devoting more time to offline pursuits. And since rationalists are largely an online community rather than an offline one, at least outside of the Bay Area, this results in rationalists dropping out of the conversation when they start winning. That's a surprisingly plausible alternative explanation. I'll have to think about this.


The people I know IRL who identify as rationalists are doing great. Not a lot of people bet on prediction markets since the ones that exist are small and hard to use. Not a lot of people bet on stock markets since making money doing so is a boring full-time job.

I presume that the reason people don't post about how they are "winning" is because it's tactless to write a post about how great you are.

If there's a hedge fund out there that leverages superforecasting-style reasoning and makes billions with it, I doubt it would be rational for them to openly speak about their secret sauce. It might also be rational for them to currently reinvest all their money if they are getting a great return on it.

This is broadly the pitch of Bridgewater, with the caveat that what they are doing is largely not losing billions. As far as I can tell there is no direct relationship between Tetlock's methods and Dalio's, but they seem to have drawn similar conclusions.
Eli Tyre:
This is relevant to my interests. Do you have a particular source that describes their "pitch"?

I'm quite late (the post was made 4 years ago), and I'm also new to LessWrong, so it's entirely possible that other, more experienced members will find flaws in my argument.

That being said, I have a very simple, short and straightforward explanation of why rationalists aren't winning.

Domain-specific knowledge is king.

That's it.

If you are a programmer and your code keeps throwing errors at you, then no matter how many logical fallacies and cognitive biases you can identify and name, posting your code on stackoverflow is going to provide orders of magnitude... (read more)

Sometimes winning is evidence of non-rationality. For example, if one plays a lottery and wins a million dollars, it was still irrational for him to play, as most lotteries have negative total expected utility. The same goes for becoming very rich: most who try, fail.

Imagine the following game: you are put into a bath where a) you will be dissolved with acid with 99 percent probability, or b) you will become a billionaire with 1 percent probability. Would you agree to play?

I would say that playing the game is very irrational, and any winner was likely not able to correctly calculate the odds. So extreme winning is a signal of some form of irrationality.

It's quite possible for a lottery to have positive expected utility for an individual, and this was one of the cases that prompted the development of utility as a concept separate from monetary value.
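The distinction in this exchange can be made concrete with a small sketch. The numbers below are illustrative assumptions (in particular the disutility assigned to death, which the thread does not quantify): the acid-bath gamble has a positive expected *monetary* value, yet a hugely negative expected *utility* once death is priced in, which is why playing it is irrational despite the positive dollar EV.

```python
# Acid-bath gamble: 1% chance of $1B, 99% chance of death.
P_WIN = 0.01
PRIZE = 1_000_000_000  # become a billionaire

# Expected monetary value: 0.01 * $1e9 = $10M, so a naive
# dollar-EV maximizer would play.
ev_dollars = P_WIN * PRIZE

# Expected utility, with an ASSUMED disutility of -1e12
# "dollar-equivalents" for dying. Any plausibly large
# disutility of death gives the same sign.
U_DEATH = -1e12
eu = P_WIN * PRIZE + (1 - P_WIN) * U_DEATH

print(f"expected dollars: {ev_dollars:,.0f}")  # positive ($10M)
print(f"expected utility: {eu:,.0f}")          # very negative
```

The sign flip between the two quantities is the whole point: value and utility come apart, so extreme winnings can coexist with an irrational decision to play.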

(Note: Said friend will be introducing himself on here and writing a sequence about his work later. When he does I will add the links here.)


Did you forget to add the links?

It could be that people don't use their rationality skills at their "bottlenecks". You could improve many things, but if they aren't your bottlenecks the result will be negligible. I've seen people train to recognize their biases but not use this for strategic planning, instead just doing "the safe thing" everyone does.

I don't think it's accurate to say that our rationality techniques are only about pursuing truth. It might be true that the sequences are mostly about this but a lot has happened since the sequences were written.

If you look at the recent CFAR handbook, there are plenty of techniques that are useful for getting things done.

Humans are only in small part pliable reasoning. Most of what makes us us is genetic, subconscious, and not available to introspection. We have more blind spots than sighted ones, and we actively resist correcting those blind spots. LW-style rationality tends to appeal to people who on average are at or below the mean in interpersonal skills, so you start with a huge handicap, and learning about biases and how to deal with them only gives you some marginal advantage over those like you, not a magic bullet for achieving your goals. Speaking of the goals, hu... (read more)

Did your friend ever finish that sequence? I'd still be quite interested in seeing it. After reading Chinese Businessmen: Superstition Doesn't Count, I've become very interested in becoming more instrumental.

If you want to know more about really winning vs. theoretically winning, you might be interested in what Aristotle taught about baseball.