Like Eliezer, I "do my best thinking into a keyboard." It starts with a burning itch to figure something out. I collect ideas and arguments and evidence and sources. I arrange them, tweak them, criticize them. I explain it all in my own words so I can understand it better. By then it is nearly something that others would want to read, so I clean it up and publish, say, How to Beat Procrastination. I write essays in the original sense of the word: "attempts."

This time, I'm trying to figure out something we might call "tacit rationality" (cf. tacit knowledge).

I tried and failed to write a good post about tacit rationality, so I wrote a bad post instead — one that is basically a patchwork of somewhat-related musings on explicit and tacit rationality. Therefore I'm posting this article to LW Discussion. I hope the ensuing discussion ends up leading somewhere with more clarity and usefulness.


Three methods for training rationality

Which of these three options do you think will train rationality (i.e. systematized winning, or "winning-rationality") most effectively?

  1. Spend one year reading and re-reading The Sequences, studying the math and cognitive science of rationality, and discussing rationality online and at Less Wrong meetups.
  2. Attend a CFAR workshop, then spend the next year practicing those skills and other rationality habits every week.
  3. Run a startup or small business for one year.

Option 1 seems to be pretty effective at training people to talk intelligently about rationality (let's call that "talking-rationality"), and it seems to inoculate people against some common philosophical mistakes.

We don't yet have any examples of someone doing Option 2 (the first CFAR workshop was May 2012), but I'd expect Option 2 — if actually executed — to result in more winning-rationality than Option 1, and also a modicum of talking-rationality.

What about Option 3? Unlike Option 2, and especially unlike Option 1, I'd expect it to train almost no ability to talk intelligently about rationality. But I would expect it to result in relatively good winning-rationality, due to its tight feedback loops.


Talking-rationality and winning-rationality can come apart

I've come to believe... that the best way to succeed is to discover what you love and then find a way to offer it to others in the form of service, working hard, and also allowing the energy of the universe to lead you.

Oprah Winfrey

Oprah isn't known for being a rational thinker. She is a known peddler of pseudoscience, and she attributes her success (in part) to allowing "the energy of the universe" to lead her.

Yet she must be doing something right. Oprah is a true rags-to-riches story. Born in Mississippi to an unwed teenage housemaid, she was so poor she wore dresses made of potato sacks. She was molested by a cousin, an uncle, and a family friend. She became pregnant at age 14.

But in high school she became an honors student, won oratory contests and a beauty pageant, and was hired by a local radio station to report the news. She became the youngest-ever news anchor at Nashville's WLAC-TV, then hosted several shows in Baltimore, then moved to Chicago and within months her own talk show shot from last place to first place in the ratings there. Shortly afterward her show went national. She also produced and starred in several TV shows, was nominated for an Oscar for her role in a Steven Spielberg movie, launched her own TV cable network and her own magazine (the "most successful startup ever in the [magazine] industry" according to Fortune), and became the world's first female black billionaire.

I'd like to suggest that Oprah's climb probably didn't come merely through inborn talent, hard work, and luck. To get from potato sack dresses to the Forbes billionaire list, Oprah had to make thousands of pretty good decisions. She had to make pretty accurate guesses about the likely consequences of various actions she could take. When she was wrong, she had to correct course fairly quickly. In short, she had to be fairly rational, at least in some domains of her life.

Similarly, I know plenty of business managers and entrepreneurs who have a steady track record of good decisions and wise judgments, and yet they are religious, or they commit basic errors in logic and probability when they talk about non-business subjects.

What's going on here? My guess is that successful entrepreneurs and business managers and other people must have pretty good tacit rationality, even if they aren't very proficient with the "rationality" concepts that Less Wrongers tend to discuss on a daily basis. Stated another way, successful businesspeople make fairly rational decisions and judgments, even though they may confabulate rather silly explanations for their success, and even though they don't understand the math or science of rationality well.

LWers can probably outperform Mark Zuckerberg on the CRT (Cognitive Reflection Test) and the Berlin Numeracy Test, but Zuckerberg is laughing at them from atop a huge pile of utility.


Explicit and tacit rationality

Patri Friedman, in Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality, reminded us that skill acquisition comes from deliberate practice, and reading LW is a "shiny distraction," not deliberate practice. He said a real rationality practice would look more like... well, what Patri describes is basically CFAR, though CFAR didn't exist at the time.

In response, and again long before CFAR existed, Anna Salamon wrote Goals for which Less Wrong does (and doesn't) help. Summary: Some domains provide rich, cheap feedback, so you don't need much LW-style rationality to become successful in those domains. But many of us have goals in domains that don't offer rapid feedback: e.g. whether to buy cryonics, which 40-year investments are safe, which metaethics to endorse. For this kind of thing you need LW-style rationality. (We could also state this as "Domains with rapid feedback train tacit rationality with respect to those domains, but for domains without rapid feedback you've got to do the best you can with LW-style 'explicit rationality'.")

The good news is that you should be able to combine explicit and tacit rationality. Explicit rationality can help you realize that you should force tight feedback loops into whichever domains you want to succeed in, so that you can develop good intuitions about how to succeed in those domains. (See also: Lean Startup or Lean Nonprofit methods.)

Explicit rationality could also help you realize that the cognitive biases most-discussed in the literature aren't necessarily the ones you should focus on ameliorating, as Aaron Swartz wrote:

Cognitive biases cause people to make choices that are most obviously irrational, but not most importantly irrational... Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them... LW readers tend to be fairly good at avoiding cognitive biases... But there's a whole series of much more important irrationalities that LWers suffer from. (Let's call them "practical biases" as opposed to "cognitive biases," even though both are ultimately practical and cognitive.)

...Rationality, properly understood, is in fact a predictor of success. Perhaps if LWers used success as their metric (as opposed to getting better at avoiding obvious mistakes), they might focus on their most important irrationalities (instead of their most obvious ones), which would lead them to be more rational and more successful.

Final scattered thoughts

  • If someone is consistently winning, and not just because they have tons of wealth or fame, then maybe you should conclude they have pretty good tacit rationality even if their explicit rationality is terrible.
  • The positive effects of tight feedback loops might trump the effects of explicit rationality training.
  • Still, I suspect explicit rationality plus tight feedback loops could lead to the best results of all.
  • I really hope we can develop a real rationality dojo.
  • If you're reading this post, you're probably spending too much time reading Less Wrong, and too little time hacking your motivation system, learning social skills, and learning how to inject tight feedback loops into everything you can.

Comments

  1. I'd particularly expect many people to have good tacit rationality without having good explicit rationality in domains where success is strongly determined by "people skills." This is the kind of thing I expect LWers to be particularly bad at (being neurotypical helps immensely here) and is not the kind of thing that most people can explain how they do (I think it takes place almost entirely in System 1).

  2. When evaluating the relationship between success and rationality it seems worth keeping in mind survivorship bias. For example, a small number of people can be wildly successful in finance through sheer luck due to the large number of people in finance and the randomness of finance. Those people don't necessarily have any rationality, explicit or otherwise, but you're more likely to have heard of them than a random person in finance. But I don't know enough about Oprah to say anything about how much of her being promoted to our collective attention constitutes survivorship bias and how much is genuine evidence of her competence.

  3. One setting where explicit rationality seems instrumentally more useful than tight feedback loops is in determining which tight feedback loop...

When evaluating the relationship between success and rationality it seems worth keeping in mind survivorship bias.

An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about -- he'll plan and reflect on various movie-related strategies so that he can get progressively better roles and box office receipts.

For instance, before he started acting in movies, he and his agent thought about what top-grossing movies all had in common, and then he focused on getting roles in those kinds of movies.

An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about

In the same vein, I've been impressed by Greene's account of 50 Cent in the book "The 50th Law". If that's really 50's way of thinking, it's brutally rational and impressively strategic.

Do you want to get more specific about what you mean by "tight feedback loops"? I spent a few years focusing on startup things, and I don't think "tight feedback loops" are a good characterization. It can take a lot of work to figure out whether a startup idea is viable. That's why it's so valuable to gather advance data when possible (hence the lean startup movement). If you want "tight feedback loops", it seems like trying to master some flash game would offer a much better opportunity.

As far as I can tell, what actual entrepreneurs have that wannabe entrepreneurs don't is the ability to translate their ideas into action. They're bold enough to punch through unendorsed aversions, they're not afraid to make fools of themselves, they don't procrastinate, they actually try stuff out, and they push on without getting easily discouraged. You could think of these skills as being multipliers on rationality... if your ability to act on your ideas is 0, it doesn't matter how good your ideas are, and you should focus on improving your ability to act, not improving your ideas. It might help to start distrusting yourself whenever you say "I'll do X...

For what it's worth, I'm a pretty successful entrepreneur and I'd say this more like: (Your version scans better.) I'm commenting mostly against a characterisation of this stuff being easy for successful entrepreneurs. If you try something entrepreneurial and find that it's hard, that's not very useful information and it doesn't mean that you're not one of the elect and should give up - it's bloody hard for many successful people, but you can keep working on your own systems until they work (if you try to just keep working I think you'll fail - go meta and work on both what's not working to make it work better and on what is working to get more of it).
Thanks! Yes, I agree that it's possible to get better at most of those things through deliberate effort, which includes system-building, and it's a good point that people shouldn't be dissuaded just 'cause it doesn't seem to come to them naturally.
Here's something I heard about Oprah which is consistent with the wikipedia article but not included in it. People had been talking about wanting more positive talk shows, so Oprah decided to have one. This is a rationality skill because she explored giving people what they said they wanted instead of being offended that they didn't like what she was already doing. It's possible that her gimmick was the result of some thought about the question of how to do a positive talk show while keeping it interesting.
Seems plausible. "Figure out what people want and give it to them" is a widely repeated success principle for salespeople and entrepreneurs. See Paul Graham on making something people want.

What's the evidence that rationality leads to winning? I don't think that claim has been demonstrated.

This whole post seems very circular to me. You believe that rationality leads to winning, in fact you seem to believe that rationality is a necessary condition for winning, so when you see someone win, you conclude that therefore they are rational. And by extension whatever they are doing is rational. And if what the winners do doesn't look rational to us at first glance, we look deeper and rationalize as necessary until we can claim they are rational.

This reminds me not a little of classical economists who believe that people are rational consumers, and therefore treat anything consumers do, no matter how ridiculous, as expressing hidden preferences. I.e. they believe that rational consumers maximize their utility, so they contort utility functions such that they are maximized by whatever consumers choose, rather than recognizing the fact that consumers are often irrational. For instance, if consumers are willing to pay $50 for a bottle of bourbon they won't pay $10 for, they assert that consumers are buying a status symbol rather than a bottle of bourbon. You're proposing an e...

As with anything. But if you unpack "rationality" to "having the right model of the world", the opposite of "rationality leads to winning" is "the world is made of an explicitly anti-rational force", that is, "magic". I would assign a very low probability to that: counter-examples like Oprah seem to raise the probability of "people compartmentalize" far more than that of "the universe guides her energy".

Also, we should not neglect the base rates. If more than 99% of people on this planet are irrational by LW standards, then we should not be surprised by seeing irrational people among the most successful ones, even if rationality increases the probability of success.

In other words, if you found that (pulling the numbers out of a hat) 99% of all people are irrational, but "only" 90% of millionaires are irrational, that would be evidence that rationality does lead to (increased probability of) winning. (A worked version of these numbers appears just below.)

Also, in real humans, rationality isn't all-or-nothing. Compare Oprah with an average person from her reference group (before she became famous). Is she really less rational? I doubt it.
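To make the hat-numbers concrete, here is a quick Bayes calculation (a sketch in Python; the 1% and 10% "rational" figures are just the complements of the made-up 99% and 90% above):

    p_rational = 0.01              # 1% of people are rational
    p_rational_given_mill = 0.10   # but 10% of millionaires are

    # By Bayes' theorem, P(millionaire | rational) / P(millionaire | irrational)
    # = [P(rational | mill.) / P(rational)] / [P(irrational | mill.) / P(irrational)]
    lift_rational = p_rational_given_mill / p_rational                # 10.0
    lift_irrational = (1 - p_rational_given_mill) / (1 - p_rational)  # ~0.91
    print(lift_rational / lift_irrational)                            # 11.0

Under these numbers a rational person would be about 11 times as likely to end up a millionaire. That quantifies the association; whether it is causal is a separate question, taken up a few comments below.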

That seems entirely possible. Consider the old chestnut that entrepreneurs are systematically overoptimistic about their chances of success and that startups and similar risks are negative expected value. Rational people may well avoid such risks precisely because they do not pay, but of the group of people irrational enough to try, a few will become billionaires. Voila!

Another example: any smart rational kid will look at career odds and payoffs to things like being a musician or a talk show host, and go 'screw that! I'm going to become a doctor or economist!', and so when we look at mega-millionaire musicians like Michael Jackson or billionaire talk show hosts like Oprah... (We are ignoring all the less rational kids who wanted to become an NFL quarterback or a rap star and wind up working at McDonald's.)

Another point I've made in the past is that since marginal utility seems to diminish with wealth, you have to seriously question the rationality of anyone who does not diversify out of whatever made them wealthy, and instead goes double or nothing. Did Mark Zuckerberg really make the rational choice to hold onto Facebook ownership percentages as much as possible even when he was receiving offers of hundreds of millions? Yes, he's currently a billionaire because he held onto it and its worth increased some orders of magnitude, but social networks have often died - as he ought to know, having crushed more than his fair share of social networks under his heel! In retrospect, we know that no one (like Google+) has killed Facebook the way Facebook killed Myspace. But only in retrospect.

Or, since these past examples may not be convincing (it's too easy to think "obviously holding onto Facebook was rational, gwern, don't you remember how inevitable it looked back in 2006?" - no, I don't, but I'm not sure how I could convince you otherwise), let's use a more current example... Bitcoin. At least one LWer currently holds something like >500 bitcoins, w
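To put toy numbers on the diminishing-marginal-utility point (a sketch; the sure $500M offer, the 10% survival odds, and log utility are all made-up assumptions, not gwern's figures):

    import math

    # Hypothetical founder choice: sell for a sure $500M, or hold equity that
    # pays $10B if the network survives (p = 0.1) and $100M if it dies (p = 0.9).
    ev_hold = 0.1 * 10e9 + 0.9 * 100e6   # $1.09B: holding has the higher EV
    u_hold = 0.1 * math.log(10e9) + 0.9 * math.log(100e6)  # ~18.88
    u_sell = math.log(500e6)                                # ~20.03
    print(ev_hold, u_hold, u_sell)       # selling wins for a log-utility agent

The certainty equivalent of holding here is exp(18.88), roughly $160M, well below the sure $500M, even though holding has more than twice the expected dollar value.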
Do I think people will regard his decision, or would I regard his decision? Are these people general population, or LW? How much do they know about his reasoning process?
I intended general people, and I don't think they would much care. If you want more detailed scenarios and hypotheticals, feel free to reply to my comment with your preferred poll questions.
I wrote she is probably more rational than an average person from her reference group (before she became famous); by which I meant: a poor black woman pregnant at age 14. Being overoptimistic does not contradict that.
No, but it does put pressure on your claim. You have to be very optimistic or very risk-seeking to ride your risky career all the way up past instant-retirement/fuck-you money levels (a few millions) to the billions point, and not sell out at every point before then to enjoy your gains. What fraction of the general population ever founds a startup or new company or takes an equivalent risk? Her career pushes Oprah way out onto the tail. Now, maybe the average black pregnant teenager is so irrational in so many ways that their average problems make Oprah on net more rational even though she's lunatically optimistic or risk-seeking (although here we should question how irrational having a kid is, given issues like welfare and local cultures and issues discussed in Promises I Can Keep and marriage gambits and that sort of thing), but it's going to be much harder to establish that about an Oprah-with-lunatic-risk-appetite rather than what we started with, the Oprah-who-is-otherwise-looking-pretty-darn-rational.
Is retiring relatively young a more rational choice than continuing to work at something you like?
It seems like pretty remarkable luck if the thing you want to do most in the world is also what you're currently being paid to do.
On the other hand, how good are people who retire at finding what they want most to do? A person who's more rational than average (especially about introspection) might do well to retire, but most people might be rationally concerned that they'd just drift.
I don't know what population-wide aggregates might look like. At least in Silicon Valley, there apparently are many people who have retired early and have the ability and inclination to express any dissatisfaction online in places where I might read them, but I can't think of any who have said things like "My life has been miserable since I cashed out my millions of dollars of Google shares and I have nothing to do with myself." Retiring early means you have the money for doing a great many things, and you are still in physical & mental shape to enjoy it; Twain: And what factors enabled this early retirement in the first place? A motivated intelligent person (albeit with a bad appetite for risk and inability to cash out) can find plenty of rewarding things to occupy themselves with, like charity or education. Steve Wozniak and Cliff Stoll immediately come to mind, but I'm sure you can name others.
There's a selection effect on such wishes, though. Only a small fraction of humans ① survive to such an age and ② retire with "privileges and accumulations"; many who would desire such a goal do not achieve it.
I don't follow. Everyone has wishes, the people who retire without privileges and accumulations tend to have started without privileges and accumulations but the opposite is not true (the elderly are wealthier than the young).
So, I said he'd be considered rational in all cases except hold/fail. That's because people will take his success as evidence that he knows what he's doing, and if he sells then he's doing what 'everyone else' (i.e. > 99.9% of the world) would do, so even if it doesn't work out that way they'd probably give him some slack. Also, I think it's rational for him to diversify, but it's not a bad idea for him to maintain significant holdings.
Why is buying and selling binary? He should clearly rebalance.
Expanding on RomeoStevens' comment... Maths time! Suppose that he now has 10,000 dollars and 500 bitcoins, each bitcoin now costs $100, and that by the end of the year a bitcoin will cost $10 with probability 1/3, $100 with probability 1/3, and $1000 with probability 1/3. Suppose also that his utility function is the logarithm of his net worth in dollars by the end of the year. How many bitcoins should he sell to maximize his expected utility? Hint: the answer isn't close to 0 or to 500. And I don't think that a more realistic model would change it by that much.
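For reference, here is a brute-force solution to the exercise as stated (a sketch; it simply grid-searches over the number of coins sold):

    import math

    # $10,000 cash plus 500 BTC at $100 each (net worth $60,000). Selling `s`
    # coins now leaves 10,000 + 100*s dollars and 500 - s coins; the year-end
    # price is $10, $100, or $1000 with probability 1/3 each.
    def expected_log_utility(s):
        cash, held = 10_000 + 100 * s, 500 - s
        return sum(math.log(cash + held * price) for price in (10, 100, 1000)) / 3

    print(max(range(501), key=expected_log_utility))  # 200: sell 200, keep 300

Selling 200 coins leaves $30,000 in cash and $30,000 in bitcoins, i.e. half the net worth in each, consistent with the two replies below.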
Khoth suggests modeling it as starting with an endowment of $60k and considering the sum of the 3 equally probable outcomes plus or minus the difference between the original price and the closing price, in which case the optimal number of coins to hold seems to be 300:

    last $ sort $ map (\x -> (log(60000 - 90*x) + log(60000) + log(60000 + 900*x), x)) [0..500]
    -- (34.11321061509552,300.0)

Of course, your specific payoffs and probabilities imply that one should be buying bitcoins, since in 1/3 of the outcomes the price is unchanged, in 1/3 one loses 90% of the invested money, and in the remaining 1/3 one instead gains 1000% of the invested money...
I've fiddled around a bit, and ISTM that so long as the probability distribution of the logarithm of the eventual value of bitcoins is symmetric around the current value (and your utility function is logarithm), you should buy or sell so that half of your current net worth is in dollars and half is in bitcoins.
Nevermind, Gwern posted it before me.
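A quick numeric check of the half-and-half claim above (a sketch, again assuming log utility; it grid-searches the fraction of net worth held in bitcoins):

    import math

    # `m` is the year-end price multiplier; `f` is the fraction of net worth
    # held in bitcoins, so final wealth is proportional to (1 - f) + f*m.
    def optimal_fraction(multipliers, probs):
        return max((i / 1000 for i in range(1001)),
                   key=lambda f: sum(p * math.log((1 - f) + f * m)
                                     for m, p in zip(multipliers, probs)))

    # Two distributions whose log-multipliers are symmetric around 0:
    print(optimal_fraction([0.1, 1, 10], [1/3, 1/3, 1/3]))  # 0.5
    print(optimal_fraction([0.5, 2], [0.5, 0.5]))           # 0.5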
I come from the future. Do I try to compensate for hindsight bias, or do I abstain from answering the polls altogether?
Even after the 'crash', the equivalent figure is still like $50k and so the question remains germane. If you want to answer it, feel free. (The raw poll data includes timestamps, so if anyone thinks that answers after time X are corrupting the results, they can always drop such entries.)
Okay. I answered the questions except the first (per RomeoStevens) and the last (I'd expect people to be roughly equally split in that situation).
I'd think that the latter would result in less expected pollution.
Basic statistics question: if we find that 99% of all people are irrational, but "only" 90% of millionaires are irrational, is that evidence that rationality leads to (increased probability of) winning, or is it only evidence that rationality is correlated with winning? For instance, how do I know that millionaires aren't more rational simply because they can afford to go to CFAR workshops and have more free time to read LessWrong? I.e. knowing only that 99% of all people are A but "only" 90% of millionaires are A, how do I adjust my respective probabilities that

  1. A --> millionaires
  2. Millionaires --> A
  3. Unknown factor C causes both A and millionaires

It feels like I ought to assign some additional likelihood to each of these 3 cases, but I'm not sure how to split it up. Maybe the answer is simply, "gather more evidence to attempt to tease out the proper causal relationship".
This is a causal question, not a statistical question. You answer by implementing the relevant intervention, usually by randomization, or maybe you find a natural experiment, or maybe [lots of other ways people thought of]. You can't in general use observational data (e.g. what you call "evidence") to figure out causal relationships. You need causal assumptions somewhere.
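As a minimal illustration of why observational data alone can't orient an edge (a sketch using numpy; the linear-Gaussian case is the classic unidentifiable one, which is why the additive-noise results mentioned below require non-Gaussian noise):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    # Model A: X -> Y
    x_a = rng.normal(0, 1, n)
    y_a = 0.5 * x_a + rng.normal(0, np.sqrt(0.75), n)
    # Model B: Y -> X, parameters chosen to induce the same joint distribution
    y_b = rng.normal(0, 1, n)
    x_b = 0.5 * y_b + rng.normal(0, np.sqrt(0.75), n)
    # Both models yield (approximately) the same covariance matrix, so no
    # amount of data from p(x, y) alone can distinguish the two causal stories.
    print(np.cov(x_a, y_a))
    print(np.cov(x_b, y_b))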
What do you think of this challenge, to detect causality from nothing but a set of pairs of values of unnamed variables?
You can do it with enough causal assumptions (e.g. not "from nothing"). There is a series of magical papers, e.g. this: which show you can use additive noise assumptions to orient edges.

I have a series of papers: which show you don't even need conditional independences to orient edges. For example if the true dag is this: 1 -> 2 -> 3 -> 4, 1 <- u1 -> 3, 1 <- u2 -> 4, and we observe p(1, 2, 3, 4) (no conditional independences in this marginal), I can recover the graph exactly with enough data. (The graph would be causal if we assume the underlying true graph is, otherwise it's just a statistical model.)

People's intuitions about what's possible in causal discovery aren't very good.

It would be good if statisticians and machine learning / comp. sci. people came together to hash out their differences regarding causal inference.
Gelman seems skeptical.
I saw that, but I didn't see much substance to his remarks, nor in the comments. Here is a paper surveying methods of causal analysis for such non-interventional data, and summarising the causal assumptions that they make: "New methods for separating causes from effects in genomics data" Alexander Statnikov, Mikael Henaff, Nikita I Lytkin, Constantin F Aliferis
Two things:

1) Your prior probabilities. If before getting your evidence you expect that hypothesis H1 is twice as likely as H2, and the new evidence is equally likely under both H1 and H2, you should update so that the new H1 remains twice as likely as H2.

2) Conditional probabilities of the evidence under different hypotheses. Let's suppose that hypothesis H1 predicts a specific evidence E with probability 10%, and hypothesis H2 predicts E with probability 30%. After seeing E, the ratio between H1 and H2 should be multiplied by 1:3.

The first part means simply: Before the (fictional) research about rationality among millionaires was made, which probability would you assign to your hypotheses?

The second part means: If we know that 99% of all people are irrational, what would be your expectation about the % of irrational millionaires, if you assume that e.g. the first hypothesis "rationality causes millionaires" is true? Would you expect to see 95% or 90% or 80% or 50% or 10% or 1% of irrational millionaires? Make your probability distribution. Now do the same thing for each one of the remaining hypotheses.

Ta-da, the research is over and we know that the % of irrational millionaires is 90%, not more, not less. How good were the individual hypotheses at predicting this specific outcome?

(I don't mean to imply that doing either of these estimates is easy. It is just the way it should be done.) Gathering more evidence is always good (ignoring the costs of gathering the evidence), but sometimes we need to make an estimate based on data we already have.
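Mechanically, that update looks like this (a sketch; the priors and likelihoods are made-up numbers, with E = "90% of millionaires are irrational" and A = irrationality, matching the three hypotheses from the question above):

    # Made-up uniform priors over the three candidate hypotheses:
    priors = {"A -> millionaire": 1/3, "millionaire -> A": 1/3, "C -> both": 1/3}
    # Made-up P(E | hypothesis) estimates:
    likelihoods = {"A -> millionaire": 0.10, "millionaire -> A": 0.30, "C -> both": 0.20}

    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    posterior = {h: round(v / total, 2) for h, v in unnormalized.items()}
    print(posterior)  # {'A -> millionaire': 0.17, 'millionaire -> A': 0.5, 'C -> both': 0.33}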
It does not follow that the opposite of "rationality leads to winning" is "the world is made of an explicitly anti-rational force". Were I to discover that rationality does not lead to winning, or worse yet that irrationality leads to winning, I would find it much more likely that incorrect beliefs enable people to take actions that lead them to winning than that the world is made of an explicitly irrational force. For example, if people believe they are average or below, they may be less aggressive and settle for less than if, by virtue of Dunning-Kruger, they believe they are exceptional and try for more. I don't know that this is true -- I might not assign it even a 50% probability of being true -- but it's not self-evidently false. The question of whether rationality leads to winning, or which parts of rationality lead to winning, is an empirical question, not a logical question.

Another example: it depends on where you set the bar for winning. For example, suppose we set the bar for winning at a billion dollars. Rational people, acting to maximize their own utility, may well choose to plug away in a Fortune 500 company for 40 years, putting away a nice chunk of change in a 401K, investing in index funds, spending no more than 30% of take-home pay on housing, and retiring at 60 with a few million dollars in the bank. But by this standard they haven't "won", even though they maximized their expected utility at low risk. Irrational people may sink their savings into a startup, and just might hit it big and "win". Then again they may lose it all and die alone in a hole. But the winners will still be composed of irrational people. I.e. playing the lottery is almost as irrational as you can get, and every single lottery winner is irrational.

Perhaps the mistake here is in looking at winners individually. Dare I say it, could this be selection bias? What we need to do to figure out if rationality helps us win or not, is not look at the winners and ask what they
The only way I could see this working is if there's some force that looks at my model of the world and makes it systematically wrong, no matter how updated it is. That is, only if there's some anti-Bayesian principle at work. But I think there's a difference here in what we understood to be rationality: indeed you write Let it be clear that I do not conflate being rational with being cautious, being reasonable or even having common sense: I intend it to have the pure meaning of "having the correct model of the world". If in some endeavour those who try more (aggressively) achieve more, it means that the probability of success is low but not impossibly low, and it follows that the rational thing is to try more. Those who try less are maybe being prudent, but in so doing they are underestimating their probability of success (or overestimating the probability of failure), that is: they do not have the correct model of the world, and this leads to irrational behaviour.

The second example (hitting the "big idea" or winning at the lottery) is a case in which the winning strategy is uncomputable, but by sheer brute forcing there is someone who will hit it. That's admittedly a case in which winning was not due to rationality, but note that it wasn't due to irrationality either: it was due to the pure luck of finding oneself in the only global optimum.

I'll specify my position better: let's conflate luck with a resource of some kind (it's a sort of better positioning in a potential space). There are domains in which having the correct model of the world leads to a better chance of winning, and there are other domains in which this is indifferent (an impartial beauty contest, a lottery). But there are never domains in which having the correct model of the world leads to a more probable loss. So rationality always leads to a better or equal probability of winning.

This, as I agreed, is an empirical question, but one which, if defeated, will imply the existence of the irrational

I'd love to hear a story (or maybe stories) of someone becoming an LW regular, improving their "rationality skills" and going on to "win" (define and achieve their personal goals), thanks to those skills.

Here I explicitly exclude LW-related goals, such as understanding the sequences, attending a CFAR workshop or being hired by CFAR, signing up for cryonics or figuring out how to donate more to GiveWell. Instead, I'd love to hear how people applied what they learned on this site to start a business, make money, improve their love life (hooking up with an LW poly does not count for this purpose), or maybe to take over the world.

Hopefully some of the stories have already been posted here, so links would be appreciated.


When I found LW, I was confused and nonambitious; my goal was to survive on as little money as possible (to ironically humiliate the people who say $17/hr is the minimum living wage), and maybe make a few video games or something, and I spent most of my free time on 4chan and arguing about radical politics on the internet.

Since coming to LW, I've used LW-far-epistemic rationality to figure out a great deal of philosophical confusions and understand a great deal more about those big questions. (this doesn't count, but it should be mentioned)

More specifically and interestingly: it took explicit LW rationality for me to:

  • Think rationally about balancing my resources (time, money) and marginal utility, to great productivity benefit.

  • Step up to run the Vancouver LW meetup.

  • Make and maintain a few really valuable friends (mostly through the meetup).

  • Respond positively to criticism at work, so that I've become much more valuable than I was 6 months ago, in a way that has been recognized and pointed out.

  • Achieve lightness and other rationality virtues in exploring design concepts at work, taking design criticism, not getting caught in dead ends. I explicitly apply much of what I've lea...
Impressive! I think CFAR could use a testimonial like this.
Then again, it may be a lucky draw that finding LW occurred almost exactly at the lowest point in my historical ambition; for 4 or 5 years before that, my goal was to take over the world and dismantle civilization for the good of mankind. It wasn't just an idle "goal" either; I really worked at it. Still, I'd be much more effective at such post-LW than I was then.
Drat, is it? Hmm, I'd be interested in your thoughts if this is a serious goal of yours, but OTOH broadcasting this sort of thing is pretty obviously a Bad Idea. On the other hand, more people could sort of accelerate things. I mean, it would take [REDACTED] [REDACTED] [REDACTED] ... EY could probably [REDACTED] [REDACTED] but he seems busy (with FAI; might be easier with a world, though) ... I'd say LW would be an ideal recruiting ground for help creating a singleton. ETA: edited for [REDACTED].
Is it? Did you take me seriously? It's only a bad idea if you take me seriously enough to try to stop me. Especially considering the following sentences: Also, I meant that the thing that is hard is to switch your goals to the highest value thing available. I just used "taking over the world" as an ironic example. (also, causing FAI to happen is basically taking over the world, with "world" defined a little bit more broadly than "human society").
Fair enough. Incidentally, I meant broadcasting methods to take over the world, rather than the fact that it's a good idea.
lincolnquirk, six months after Rationality Mega-Camp.
The Motivation Hacker is one such story, though it focuses on the relevance of the stuff in my procrastination post rather than on the relevance of the rest of LW.
Is it possible to post anonymously but link it to my account? Some of the things I'd like to say aren't things I want the general public to link directly to me, even though LessWrong played a possibly significant positive role.
Not sure what you are trying to achieve. And what you mean by link. You can certainly create another account and mention the original one in the profile.

If Rationality is Winning, or perhaps more explicitly Making The Decisions That Best Accomplish Whatever Your Goals Happen to Be, then Rationality is so large that it swallows everything. Like anything else, spergy LW-style rationality is a small part of this, but it seems to me that anything which one can meaningfully discuss is going to be one such small portion. One could of course discuss Winning In General at a sufficiently high level of abstraction, but then you'd be discussing spergy LW stuff by definition - decision theory, utility, and so on.

If bu...

I never did find out if any sizable fraction of Less Wrongers would bite this bullet. That is to say, to affirm the claim that, all else equal, a person with more physical strength is necessarily more rational.
I don't see a bullet. Obviously, other things matter as well as rationality. Rationality, even instrumental rationality, is not defined as winning. Those who speak as if it was are simply wrong, and your example is the obvious refutation of such silliness.
Well, the assumption here is that a better knowledge of the world gives you a better chance of achieving your goal, so rationality equals more winning only in strategical domains. Which I suspect are the majority in today's environment, but still being better looking / stronger / better armed etc. counts in certain other domains.
That sentence should end at the comma. Rationality never "equals" more winning. It is, or should be, a cause (among others) of more winning. That is not a relationship that can be called "equals".
It's an incorrect translation of a figure of speech that exists in Italian but apparently not in English: the correct formulation is "rationality never decreases your probability of winning".
I'm curious to know what the literal Italian would be. In English, people do often say "X is Y", "X equals Y", "X is objectively Y" (political historians will recognise that one), X means Y, etc. when X and Y are different things that the speaker is rhetorically asserting to be so closely connected as to be the same thing. For example, the extreme environmental slogan, "a rat is a pig is a dog is a boy". I believe it is a figure of speech better avoided.
Well, if you're curious: "essere razionali equivale a vincere maggiormente solo nei domini strategici". 'equivale' I translated as 'equals', but a more precise meaning would be along the lines of 'implies' or 'leads to'. It's used most often when listing the components of a process: "A equivale a B che equivale a C" usually means "A -> B -> C" rather than "A = B = C".
I think your thought experiment illustrates well that the "Rationality is Winning" meme often doesn't carve the space very well. Here, rationality is using the right tactics or, if only one is available, spending the right amount of time on the right tasks in proportion to how much they value their goals and how achievable they are.

If we resurrect Alice and Bob as hypothetical monovalue agents who exclusively value deadlifting X, and who have only one method of attempting/training to deadlift X, then the game tree is skewed: Bob wins faster, Alice is screwed and wins slower. Both are fully rational if they spend all available resources on this goal (since it's all these hypothetical agents value), even though Alice spends more resources for longer before achieving the goal.

For more game theory mumbo-jumbo: I view "rationality" more in terms of how you build and navigate the game tree, rather than a post-hoc analysis of who ended up in the best cell of the payoff matrices. Or, to put it differently, rationality is ending up at the best cell of your payoff matrix, regardless of whether someone else just has +5 on all cells of their matrix or has more options or whatever.

So if my understanding is right that you were making a critique of the "Rationality is Winning" meme, I agree that it's a bit misleading and simplistic, but it still is "taking the best course of action with the resources and options available to you, reflectively and recursively including how much you spend figuring out which courses of action are better" - Expected Winning Within Available Possible Futures.
This is a really good point and it is also related to Manfred's comment that I don't personally know how to reconcile with some of the points in the article. On one hand, I would like to have a lot of money because a lot of inconvenient things would suddenly become much easier. On the other hand, I would have to do other inconvenient things, like manage a lot of money. Also, I don't think I would be happy doing Oprah's job, even if it resulted in a lot of money. Basically, I would not mind lots of money but it is not currently a priority. So I don't know if I'm actually winning or not, oops. Therefore, a poll! How successful are you? [pollid:426] From a fame, money or bragging rights perspective, how ambitious are your current goals?[pollid:427]

You say that, "I know plenty of business managers and entrepreneurs who have a steady track record of good decisions and wise judgments, and yet they are religious, or they commit basic errors in logic and probability when they talk about non-business subjects."

You must know different business managers and entrepreneurs than I do. I can think of few if any business managers and entrepreneurs who have a steady track record of good decisions and wise judgments. There are some common positive characteristics I see in the business managers I know, an...

How do you know their success rate isn't much higher than that of those who aren't successful, but the base success rate is so low that even those who do better are still below 50%?
Alternately, what if the business equivalent of rapid prototyping is the optimal strategy? Giving up early enough that you can move on to something else can be the best option.

In short, she had to be fairly rational, at least in some domains of her life.

Not a comment about Oprah. Repeat, not a comment about Oprah. Once more: not a comment about Oprah.

But a comment about the idea that rationality leads to success. Deception and violence also lead to success. These problem solvers are systemized winning: IF (application of fraud) THEN (goal met) ELSE (blame others). "Violence isn't the only answer, but it is the final answer." - Jack Donovan. Violence and deception are social skills. When talking rationality co...

Martial-Art-Of-Rationality-Wise, this reminds me of people in epistemically vicious arts who say that western boxers couldn't beat them "on the street," because they could just gouge their eyes, bite them, and kick them in the cojones. It turns out that, if a strategy is available to everyone, it gets exploited until it's no longer an overwhelming advantage. Whether that's because everyone's increased their use of violence and deception, or because they've coordinated to lower the marginal effectiveness of an additional unit of violence, is immaterial. Either way, violence and deception aren't a $20 bill lying on the ground, waiting for someone to pick it up. That wouldn't be a Nash equilibrium.

Which of these three options do you think will train rationality (i.e. systematized winning, or "winning-rationality") most effectively?

One of these things is not like the others. I can read the Sequences for free, and I can attend a workshop relatively cheaply, but the time and money investments into a startup are quite significant. Most people cannot afford them. Eating cake is not an option for them; they can barely afford bread.

One simplification I think you're making that raises some problems is money. Why Oprah? Why not Charles Wuorinen, who makes excellent musical decisions and has many learned skills? Who has access to tight feedback loops as soon as someone else listens to what he writes? Who is really good at what he does? Because Oprah's skills are better for collecting slips of green paper.

Now, one can collect quite a few slips of green paper and still, say, suffer from depression, or just generally be unhappy. Perhaps we could even claim that Wuorinen is happier than...

Explicit rationality can help you realize that you should force tight feedback loops into whichever domains you want to succeed in, so that you can develop good intuitions about how to succeed in those domains.

A realization:

PUA, or at least what seems to me to be the core concept of some schools of PUA, makes a ton of sense when viewed in this light. Trying to pick up a stranger in a bar is probably the tightest feedback loop possible for social skills, and like the OP says, social skills are massively important for success and happiness. Therefore...

Another interesting example of the utility of tight feedback loops, this time as applied to education, is extreme apprenticeship. I've been taking one math class built around the XA method, and it has felt considerably more useful and rewarding than ordinary math classes.

Among other things, XA employs bidirectional feedback loops - student-to-teacher and teacher-to-student. Students are given a lot of exercises to do from day one, but the exercises are broken into small chunks so that the students can get a constant sense of making progress, and so that th...

Hypothesis for what tacit rationality might be: glomming onto accurate premises about what actions are likely to achieve one's goals without having a conscious process for how one chooses premises.

(the first CFAR workshop was May 2013)

May 2012*

Fixed, thanks.

I think it would probably be worth going into a bit more about what delineates tacit rationality from tacit knowledge. Rationality seems to me to apply to things that you can reflect about, and so the concept of things that you can reflect about but can't necessarily articulate seems weird.

For instance, at first it wasn't clear to me that working at a startup would give you any rationality-related skills except insofar as it gives you instrumental rationality skills, which could possibly just be explained as better tacit knowledge -- you know a bajillion m...

I take objection to the points about Mark Zuckerberg and Oprah:

I would ascribe Oprah's success more to her being a charismatic personality/communicator, and Zuckerberg didn't even make FB; he paid some guys to make it. The idea of FB wasn't new (there were Orkut and Myspace); it's just the one that ended up taking off. And the latter was due to good implementation: FB was always simple, fast and snappy. I don't know if Zuckerberg was the one who drove this point or if he just had good guys on the technical side who understood what matters. The same can be said for Google, it...