Recently there have been a couple of articles on the discussion page asking whether rationalists should do action A. Such questions are not uninteresting, but phrasing them in terms of what a "rationalist" should do makes them poorly posed.

The rational decision at any time is the decision that a human with a specific utility function B and information C should make to maximise B, given their knowledge (and knowledge about their knowledge) of C. It's not a decision a rationalist should make; it's a decision any human should make. If Omega popped into existence and carefully explained why action A is the best thing for this human to do given their function B and their information C, then said human should agree.
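To put that compactly (a sketch in notation of my own choosing, not anything the post commits to): the rational action is the one that maximises expected utility under the agent's information,

    a^* = \operatorname{arg\,max}_{a} \; \mathbb{E}\left[\, B \mid a,\, C \,\right],

where the expectation is taken over the agent's beliefs given C, including their uncertainty about C itself.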

The important question is not what a rationalist should do, but what your utility function and current information are. This is a more difficult question. Humans are often wrong about what they want in the long term, and it's questionable how much we should value happiness now over happiness in the future (in particular, I suspect current and future me might disagree on this point). Quantifying our current information is also rather hard: we are going to make bad probability estimates, if we can make them at all, and those estimates lead us into incorrect decisions simply because we haven't considered the evidence carefully enough.

Why is this an important semantic difference? Well, it's important for the cause of refining rationality that we don't get caught up in associating the notion of rationality with certain goals. Some rationalists believe that they want to save the world, and that the best way to do it is by creating friendly AI. This is because they have certain utility functions and certain beliefs about the probability of the singularity. Not all rationalists have these utility functions. Some just want to have a happy home life, meet someone nice, and raise a family. These are different goals, and they can all be helped by rationality, because rationality IS the art of winning. Being able to clearly state one's goals and work out the best way to achieve them is useful pretty much no matter what those goals are. (The "pretty much" is there to forestall silly counterexamples!)

Zed:

I don't think this kind of nitpicking over the different definitions of "rational" is very productive. Yes, rationality is the art of making the best decisions, and yes, that means rational people should be "winning".

Unfortunately, I don't get the impression that people who identify with LessWrong or with Bayesian reasoning are all that successful in life. I don't get the impression that when people are exposed to LessWrong their lives improve significantly.

It takes at least 1000 hours to learn the Art of Rationality (probably many more), and if after all that effort people are not noticeably more successful, how can we possibly call rationality winning? The opportunity cost of studying rationality is immense, so we should expect a large return on that investment!

From my point of view it looks like the rational choice is not to study rationality. Instead learn to play the guitar, or take cooking classes, or do any other activity that makes you a more well-rounded human being. People often attribute increases in happiness and life satisfaction to those sorts of activities.

Alternatively, just look at the people who live the kind of life you want to live and see how they got there and follow their path. This isn't Bayesian reasoning with a prior based on the aggregated life experience of your heroes (although technically it is); this is just common sense. It's the same conclusion you'd reach without studying rationality in depth.

It's fun to refer to rationality as the art of winning, but let's not forget that we say this with tongue firmly planted in cheek.

PS: if making a decision is difficult because you have trouble quantifying information then this is probably the solution.

Alternatively, just look at the people who live the kind of life you want to live and see how they got there and follow their path

When I was in high school, I wrote and ran a prisoner's dilemma simulation where strategies reproduced themselves via this mechanism. After every cell played several rounds against its neighbors, each examined itself and its neighbors to see how many points were accumulated, then either mutated randomly or copied its most successful neighbor.

I was trying to experiment in the fashion of vaguely described other simulations I'd read of, and maybe replicate their interesting result: reportedly initial random strategies were soon beaten by always-defect, which would then eventually be beaten out by tit-for-tat, which would then be beaten by always-cooperate, which would in turn be beaten when always-defect reappeared. Psychological/sociological/historical analogies are an interesting exercise for the reader.

But what did I get instead? Overwhelming victory of a strategy I eventually called "gangster". IIRC it was something like "start with a random high probability of cooperating; then, if your opponent cooperates, you start always defecting, but if your opponent defects you start always cooperating".

Sounds like a pretty awful strategy, right? And overall, it was: its resulting scores at each iteration were a sea of always-cooperated-against-defectors losers, punctuated by dots of lucky always-defected-against-many-cooperators winners. But the losers and the winners were using the same strategy! And because each new generation looked at that strategy's great peak performance rather than its lousy average performance, there was no likelihood of switching to anything better.
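The original BASIC is long gone, so here is a minimal Python sketch of the mechanism as described above. The grid size, payoff values, round count, and mutation rate are all my guesses, and the three-slot strategy encoding is just one plausible reading of the "gangster" rule:

    import random

    # Payoffs for one prisoner's dilemma round, from the first player's view.
    # True = cooperate, False = defect.
    PAYOFF = {(True, True): 3, (True, False): 0, (False, True): 5, (False, False): 1}

    SIZE = 20        # grid dimension (a guess; I don't recall the original)
    ROUNDS = 10      # rounds per pairing per generation (a guess)
    MUTATE_P = 0.05  # chance a cell mutates instead of copying (a guess)

    def random_strategy():
        """(p_cooperate_first, lock-in reply if they cooperated, reply if they defected)."""
        return (random.random(), random.random() < 0.5, random.random() < 0.5)

    def play(s1, s2):
        """Play ROUNDS rounds between strategies s1 and s2; return their total scores."""
        m1 = random.random() < s1[0]   # first moves are probabilistic
        m2 = random.random() < s2[0]
        score1 = PAYOFF[(m1, m2)]
        score2 = PAYOFF[(m2, m1)]
        # After the first exchange each player locks in a fixed move for good.
        # "Gangster" is roughly (high p, defect-after-cooperation, cooperate-after-defection).
        l1 = s1[1] if m2 else s1[2]
        l2 = s2[1] if m1 else s2[2]
        for _ in range(ROUNDS - 1):
            score1 += PAYOFF[(l1, l2)]
            score2 += PAYOFF[(l2, l1)]
        return score1, score2

    def step(grid):
        """One generation: every cell plays its neighbors, then copies or mutates."""
        size = len(grid)
        scores = [[0.0] * size for _ in range(size)]
        for x in range(size):
            for y in range(size):
                for dx, dy in ((1, 0), (0, 1)):   # each adjacent pair plays once
                    nx, ny = (x + dx) % size, (y + dy) % size
                    a, b = play(grid[x][y], grid[nx][ny])
                    scores[x][y] += a
                    scores[nx][ny] += b
        new_grid = [row[:] for row in grid]
        for x in range(size):
            for y in range(size):
                if random.random() < MUTATE_P:
                    new_grid[x][y] = random_strategy()
                else:
                    # Copy the highest scorer in the 3x3 neighborhood (self included):
                    # emulate whoever is visibly doing best (peak, not average).
                    nbhd = [((x + dx) % size, (y + dy) % size)
                            for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
                    bx, by = max(nbhd, key=lambda c: scores[c[0]][c[1]])
                    new_grid[x][y] = grid[bx][by]
        return new_grid

    grid = [[random_strategy() for _ in range(SIZE)] for _ in range(SIZE)]
    for _ in range(200):
        grid = step(grid)

The line that drives the result is the copy rule near the end: each cell imitates the peak scorer in its neighborhood, so a strategy with a lousy average but a few spectacular lucky winners can sweep the grid anyway.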

Here I'll make some of the sociological analogies explicit: looking at people who live the kind of life you want to live is a lousy way to pick a life path. It's how gangsters are born, because every little hoodlum imagines themselves as one of the rich dealers' dealers' dealers at the top of the pyramid, not as one of the bottom-rung dealers risking jail and death for near minimum wage. It's how kids waste their time aiming at star entertainer and athlete careers, because they all imagine themselves as part of the 99.9th-percentile millionaire superstars rather than as one of the mere 99th-percentile B-listers or the 90th-percentile waitstaff. It's even how people waste their salaries gambling: who doesn't want to live a life as a multimillionaire who didn't have to work for any of it? Other people did it, and all we have to do is follow their path...

This is a nitpicking digression, but I think it's an important nitpick. "Pick a life path whose average results you prefer" is a great metastrategy, but following it means examining the entire lives of the 50th-percentile schlubs. Instead, emulating your "heroes", chosen based on the peak of their fame, is just common sense, which commonly fails.

Post this as a top level post.

I just want to thank you for this great post, one of the best I've seen in a while.

I very nearly missed this, which would have been sad. Might you edit it a little and post it to Less Wrong main? It'd be a great post. A few hints at where psychological/sociological/historical analogies might be drawn would also be cool.

I'd like to have my references in better shape than "vague recollection of code I wrote a decade or two ago" and "vague recollection of decades-old potentially-distorted pop science magazine summary of someone else's unpublished code" first.

Hmmm... that might be doable. My own code I could probably reconstruct with a rewrite (it was a simple algorithm, and even the original BASIC version was a few hundred lines at worst). And as for the latter reference, I found one of the articles which inspired me here:

http://www.nature.com/scientificamerican/journal/v272/n6/pdf/scientificamerican0695-76.pdf

Looks like this is behind a paywall, though? I seem to be able to access it from work computers (the UTexas library has a subscription to this swath of Scientific American archives) but not home computers.

One other point of reluctance occurs to me: there are conditions under which imagining yourself to be a superstar, while still bad from a selfish viewpoint, might be good for society as a whole: when you're considering becoming a scientist or inventor. Finding a working tungsten light-bulb filament was more than worth wasting hundreds or thousands of failed filaments in Edison's experiments, both from society's point of view and from Edison's... but what if you look at harder scientific problems, for which each world-changing breakthrough might cost hundreds or thousands of less-successful scientists who would have been happier and wealthier in finance or law or medicine or software or...? Maybe it's a good thing that lots of smart kids imagine being the next Einstein, then pick a career which is likely to be suboptimal in terms of personal utility but optimal in terms of global utility.

On the gripping hand, maybe the world would be better in the long run if science were seen as inglorious, (relatively) impoverishing, and low status... but very altruistic. "Less science" might be a tolerable price to pay for "less science in the wrong hands".

Much as good code is measured in lines unwritten, good rationality is measured in goals unachieved.

Assuming you're not trying to level up your personal epistemic practices in large part because it's fun, and a more interesting hobby than flower arranging (or whatever), my general model for the best that serious rationality training can be expected to deliver is a boost in your exponent that may be worth the up-front cost of time spent studying.

With that in mind, it would not surprise me if a few people from this community ended up as billionaires 40 years from now (or maybe winning a Nobel in 25 years based on work done 10 years from now?), but I wouldn't expect dramatic impacts right away, and I wouldn't entirely attribute the long-term positive outcomes to the effect of the community so much as to the dramatic filtering that participation in the community represents. The fact that a number of people here use LW as their "procrastinating distraction" indicates a lot about their character that the site isn't causing so much as revealing.

Learning to read the right books at twice your previous rate using half the time doesn't change much in the short term, but it means that 20 years from now you're much more likely to be exceptionally knowledgeable in your chosen subjects, and the subjects are likely to be important to whatever it is that you actually care about at that time. Also, a lot of what pragmatic clear thinking does is simply make disasters less common, so that personal growth trajectories (net wealth, learning opportunities, personal mental health, social networks, etc.) hit fewer speed bumps.

Learning to read the right books at twice your previous rate using half the time doesn't change much in the short term, but it means that 20 years from now you're much more likely to be exceptionally knowledgeable in your chosen subjects, and the subjects are likely to be important to whatever it is that you actually care about at that time.

This reminds me of that famous essay, which I first read a few weeks ago: You and Your Research. Specifically, the part where Hamming talks about how a little additional study per day adds up over time.
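Hamming's compound-interest point is easy to make concrete; the numbers below are mine, not his. If a steady fraction \epsilon of extra learning compounds yearly over an n-year career, the relative payoff is roughly

    (1 + \epsilon)^{n}, \qquad \text{e.g.} \quad (1.1)^{30} \approx 17.4,

so a consistent 10% edge leaves you roughly seventeen times ahead after 30 years, not 10% ahead.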

I don't get the impression that when people are exposed to LessWrong their lives improve significantly.

LessWrong has been a steady stream of encouragement for me. I'm new around here, so maybe my shock and recoil from this statement come from a honeymoon mentality. Still, I'll go on record to disagree and say that I expect significant improvements in my life. A better epistemology and knowledge of the world is bound to change me. I hope I'll look back someday and judge those changes significant.

From my point of view it looks like the rational choice is not to study rationality. Instead learn to play the guitar ...

I'm surprised by this too. Sinking time into this site has not been a chore. I listen to the sequences and visit here because I enjoy my time doing that. I might not stay around if the fun factor goes down, but I'm already looking for ways to keep this fun. I want to want to be rational, and LessWrong is the best resource I have found so far.

It's fun to refer to rationality as the art of winning, but let's not forget that we say this with tongue firmly planted in cheek.

Why is that tongue in cheek? If I can avoid even a few poor decisions each week, that is a real win for me. If I give to the right charities, that is a win for the world.

Zed:

For a second I thought your first paragraph said you were shocked to read that because LessWrong did have a major positive impact on your life. Then I realized it said you were only expecting a significant improvement. Like everybody else. It's only natural, right?

We come into contact with LessWrong. Of course conservation of expected evidence holds! Of course group dynamics are insignificant for evolution! Of course the Everett interpretation does not privilege the hypothesis like the Copenhagen interpretation does! We find bugs in our wetware and realize, wait a second, I can't believe I've been such an idiot! So we leap to the conclusion that fixing these bugs will help us become better and more efficient desire-satisfying agents. Absence of evidence be damned.

To make matters worse, before we started reading LessWrong we knew exactly what we had to do:

  • Research shows that poor sleeping habits have a significant negative impact on your mental health and lifespan.
  • Research shows that poor nutrition and eating habits have a significant negative impact on your mental health and lifespan.
  • Research shows that a sedentary lifestyle has a significant negative impact on your mental health and lifespan.

This isn't new research; we've known it for years. If we're so rational, why are we not fixing these things in our lives right away? Like NOW. If you know what you have to do to improve your life and health, and you choose to read about the many-worlds interpretation of Quantum Mechanics because it's more fun, then that's fine. But to believe that LessWrong is a better investment in your future than eating right and exercising is utterly delusional[1].

The worst thing is that you even spelled out that you read LessWrong because you want to want to be rational. You believe in the belief that rationality ought to lead to a better life. The cognitive dissonance here is mind-boggling. It would be quite the coincidence if what you want to do because it's fun also happened to be exactly what you ought to do to become a happier and more successful person. How convenient! No hard work necessary! The Answer was "Sit on your ass and read stuff on the internet" all along! If only I'd known that 5 years ago!

If I can avoid even a few poor decisions each week, that is a real win for me. If I give to the right charities, that is a win for the world.

Utterly, utterly, utterly delusional.

[1] A straw-man argument, but applicable to many of us. If these examples don't work for you, substitute something that you know you have to fix in your life.

It would be quite the coincidence if what you want to do because it's fun also happened to be exactly what you ought to do to become a happier and more successful person. How convenient! No hard work necessary! The Answer was "Sit on your ass and read stuff on the internet" all along! If only I'd known that 5 years ago!

Seriously? Your rhetoric is thick and off the mark. There are lots of fun things we decide not to do because we see a problem with whether we ought to do them. Having fun on this site is no barrier to the happiness or success I am seeking, even though I happen to be in front of a computer right now. That could change, but the same is true for any hobby that gets to be a problem.

Utterly, utterly, utterly delusional.

What is delusional? Thinking I can make better decisions and that LessWrong can actually help me do that? Please elaborate.

Well, it's important for the cause of refining rationality that we don't get caught up in associating the notion of rationality with certain goals.

A central idea floating around here (which I'm not sure anyone has made explicit before) is that we should take a broad view of rationality and try to extend it to as many applications as possible, for example to morality, meta-ethics, or philosophy in general. This contrasts with the view economists tend to take, in which rationality is just about updating beliefs about the state of the world given a prior and empirical evidence, and choosing decisions given beliefs and a utility function. The broad view stems from the idea that humans have cognitive biases (or just often make mistakes) which we can try to correct, and these mistakes must occur in more areas than just updating empirical beliefs or choosing decisions.

So I think it's not necessarily wrong, in principle, to say that certain goals are more rational than others, even if in practice people might be overconfident in making such declarations or implicit assumptions.

Also, I guess some people might phrase their questions as "Should rationalists do X?" without intending to associate rationality with certain goals. What they probably mean is, "What advice (and the rationale behind that) would you give to someone about X, given that they already accept the basic principles of rationality?" but that is a bit too long to put in a title.

I propose saying "We" instead, because that's what they actually mean, even if it sounds bad.

Better yet would be "I".

I agree with your description of why 'should a rationalist do X' is poorly phrased.

I think not wanting to associate rationality with specific goals is good, but I would stress different points: it's good to avoid the phrase because doing so will help avoid 1) fostering an Us-vs.-Them mentality and 2) looking weird. I get an embarrassment reaction whenever I read that phrase.

"Given goal A, is action B moving me closer, farther away, orthogonally, or keeping static?"

This is basically the nameless virtue.

given their function B

I think I would rephrase to "given their desires B"

I agree, sans the burdensome detail about "utility functions", "information", etc. There are some decisions that require an environment of being a rationalist, or in a rationalist group, but "rationalists should X" is rarely motivated by that condition.