Rationality and Winning

by lukeprog, 1 min read, 4th May 2012, 85 comments

30

Personal Blog

Someone who claims to have read "the vast majority" of the Sequences recently misinterpreted me to be saying that I "accept 'life success' as an important metric for rationality." This may be a common confusion among LessWrongers due to statements like "rationality is systematized winning" and "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility."

So, let me explain why Actual Winning isn't a strong measure of rationality.

In cognitive science, the "Standard Picture" (Stein 1996) of rationality is that rationality is a normative concept defined by logic, Bayesian probability theory, and Bayesian decision theory (aka "rational choice theory"). (Also see the standard textbooks on judgment and decision-making, e.g. Thinking and Deciding and Rational Choice in an Uncertain World.) Oaksford & Chater (2012) explain:

Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.

From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.

Thus, one could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.

So while it's empirically true (Stanovich 2010) that rationality is a predictor of life success, it's a weak one. (At least, it's a weak predictor of success at the levels of human rationality we are capable of training today.) If you want to more reliably achieve life success, I recommend inheriting a billion dollars or, failing that, being born+raised to have an excellent work ethic and low akrasia.

The reason you should "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility" is because you should "never end up envying someone else's mere choices." You are still allowed to envy their resources, intelligence, work ethic, mastery over akrasia, and other predictors of success.


84 comments

Note, though, that if you fail to achieve a correct answer, it is futile to protest that you acted with propriety. "Rationalists should win" is still normative, even if "Rationalists always win" isn't true.

Another thing that's pretty crucial here is that rationality is only aimed at expected winning.

Suppose we live on Lottery Planet, where nearly everyone has a miserable life, but you can buy a lottery ticket for a chance of $BIGNUM dollars. Nonetheless, the chances of winning the lottery are so small that the expected value of buying a ticket is negative. So the rational recommendation is to refrain from buying lottery tickets.

Nonetheless, the agents who would be "smiling down from their huge piles of utility" could only be the ones who "irrationally" bought lottery tickets. (Credit for this example goes to someone else, but I can't remember who...)
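The expected-value claim in this example can be sketched numerically. The comment only stipulates that the expectation is negative; the ticket price, jackpot, and odds below are invented for illustration:

```python
# Illustrative expected-value check for the Lottery Planet example.
# The comment only stipulates that the expectation is negative; the
# ticket price, jackpot, and odds here are invented numbers.
ticket_price = 5.0    # assumed cost of one ticket, in dollars
jackpot = 10**9       # stand-in for $BIGNUM
p_win = 1e-10         # assumed (tiny) chance of winning

ev_buy = p_win * jackpot - ticket_price   # expected profit of buying
ev_skip = 0.0                             # expected profit of abstaining

# ev_buy = 0.1 - 5.0 = -4.9 < ev_skip, so the expected-value
# recommendation is to refrain, even though only buyers can win big.
print(ev_buy, ev_skip)
```

With these numbers, abstaining beats buying in expectation, yet every agent atop a huge pile of utility is a ticket-buyer.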

You shouldn't expect rationality to help you win absolutely. Some people will just get lucky. You should expect it to help you do better than average, however. The rationalist on lottery planet is certainly likely to be doing better than the average lottery-ticket buyer.

On a similar note: rationally succeeding and simply succeeding might involve two entirely different approaches. For example, if success is largely a result of other successful people conferring success on you because they see you displaying certain signals, it doesn't follow that gaming the system will be as easy as naturally producing those signals. Signalling often relies on displays that are difficult to fake. The cognitive resources needed to fake it are often vastly disproportionate to the resources used in sincere signalling and, regardless, in many cases we may not even know what the signals are or how to fake them. The rational road to, say, political success might involve a multibillion dollar research program in social neuroscience whereas the natural road simply involves being born into the right family, going to the right schools, etc, and naturally acquiring all the signalling associated with that.

timtyler (6 points, 9y): Yes, we already had that one out the first time around [http://lesswrong.com/lw/7i/rationality_is_systematized_winning/4ze].
Alsadius (3 points, 9y): True. I've seen a few comments from successful folks (the one that was most memorable was from the founders of Home Depot) saying that you need to gamble to be successful. In that particular case, it basically involved calling his boss an idiot and completely rearranging the business model of hardware stores. Now obviously, they wouldn't have founded Home Depot without doing that, but I was thinking as I read this: "For every one of you, there's probably a thousand folks who got fired and a hundred who ran the business into the ground." It's a good guide for being extraordinarily successful, but by definition that can't be done ordinarily.
shminux (-3 points, 9y): The "rational recommendation" would be to figure out a way to decrease your effective ticket price (example: "Mom, next time, instead of buying me yet another pair of black socks I hate, just give me a few lottery tickets!").
bryjnar (5 points, 9y): I feel like you're fighting the hypothetical [http://lesswrong.com/lw/bwp/please_dont_fight_the_hypothetical/] here. The point of the example was to illustrate a particular feature of rationality; obviously it's going to break down if you allow yourself other options.

I am wary of excluding work ethic and mastery over akrasia from rationality, and I am not sure about intelligence.

Akrasia and work ethic are choices. Aspiring rationalists who find themselves not making the choices they have found to be rational should seek to remedy this situation, not excuse themselves for having akrasia.

Some limitations on how rational you can be might be unfair, but that doesn't stop them from making you irrational.

The problem with this is that multiple motivation systems contribute to action, and only one of them looks anything like "do the thing I expect will achieve my goals given what I believe about the world." For example, I wouldn't call a blind reflex a "choice" or "decision."

JGWeissman (5 points, 9y): Still, I think it's useful to ask if the whole person, with all their motivation systems, is rational. Asking if a person's subsystems are rational seems relevant when you are figuring out how to focus your training efforts on the systems most holding the person back. A blind reflex may not itself be rational or irrational, but I can train my reflexes, and make rational choices about what I want to train my reflexes to do. Of course, I can only train reflexes to follow simple heuristics far short of computing a rational decision, and that is an "unfair" limit on my rationality, but that doesn't mean that a system that makes better choices isn't more rational than me.
lukeprog (4 points, 9y): The cogsci notion of rationality is indeed a personal rather than a subpersonal one. I'm not trying to describe subprocesses as rational or irrational, though. I'm describing the whole person as rational or irrational, but rationality is an ideal standard for choices, not actions, and reflexes are not "choices." In any case, I can't find a sentence in your latest comment that I disagree with.
keefe (1 point, 9y): I think it's appropriate to separate work ethic and akrasia mastery from rationality. Saying that work ethic is a choice is, imho, a relatively simplistic view. People often get fired for something trivial (smoking when a drug test is coming up, repeated absence, etc.) that they know full well is a suboptimal decision; the short-term benefits of getting high (or whatever) override their concern for the possible long-term consequences. I think it makes sense to make some distinction: rationality is the ability to select the right path to walk, and self-discipline is the wherewithal to walk it. I wonder how well defined "my goals" are here, or how much to trust expectations. I think a rough approximation could involve these various systems generating some impulse map, and then the OPFC and some other structures get involved in selecting an action. I don't think a closed-form expression of a goal is required in order to say that the goal exists.
DanielLC (0 points, 9y): The definitions I've seen on here are (paraphrased):

* Epistemic rationality: the ability to find truth in a wide variety of environments
* Instrumental rationality: the ability to alter reality to fit your desires in a wide variety of environments

Work ethic and akrasia are part of epistemic rationality, in that they affect your ability to find the truth, but once you figure out what you need to do, any akrasia in actually doing it is strictly instrumental.
eurg (1 point, 9y): I may be misreading this, but it seems to me that you inverted the meaning of akrasia.
Viliam_Bur (2 points, 9y): After careful reading, my understanding is that DanielLC [http://lesswrong.com/r/discussion/lw/c7g/rationality_and_winning/6il4] is saying: "Akrasia generally harms your instrumental rationality only. Except that you need some basic knowledge to bootstrap your epistemic rationality -- and if akrasia prevents you from ever learning this, then it has directly harmed your epistemic rationality, too." as a reply to JGWeissman [http://lesswrong.com/r/discussion/lw/c7g/rationality_and_winning/6iji] saying: "If you know akrasia harms you significantly, and you don't make solving this problem your high priority, you are not even epistemically rational!" Which, by the way, made me realize that I really am not epistemically rational enough. :(
JGWeissman (2 points, 9y): More like, "If you know akrasia harms you significantly, and you don't make solving this problem your high priority, then it doesn't matter if you are epistemically rational, because it's not helping you be (instrumentally) rational." "Rationality" by itself should refer to instrumental rationality. Epistemic rationality is a tool of instrumental rationality. Despite these concepts being described as different adjectives modifying the same noun, it is suboptimal to think of them as different aspects of the same category. Epistemic rationality belongs in a category with other tools of rationality, such as actually choosing what you know you should choose.
DanielLC (0 points, 9y): Fixed. I know what it means; I just wrote the sentence wrong.

I'm having trouble calling rational a person who can rattle off a perfectly rational thing to do in every circumstance but instead spends their life complaining about how they would do this and that if only they didn't have akrasia.

[anonymous] (14 points, 9y):

you can be an expert on rationality without being an expert at rationality.

With that terminology, I would read shminux's comment as saying: "I have trouble calling rational a person who is an expert on rationality but not an expert at rationality." Where is the failure?

[anonymous] (0 points, 9y): I may have slightly misread shminux's post, or failed to make my point (which I have now forgotten). I will patch my post to at least make sense.
shminux (-1 point, 9y): "... those who cannot, teach."
[anonymous] (4 points, 9y): "Those who can, do; those who know, teach": the less cynical and more realistic original formulation.
private_messaging (2 points, 9y): Unfortunately, in practice, those who don't know like to teach too. Fortunately, some of those who can also teach, so you could listen to those who can.
DanArmak (8 points, 9y): That's what akrasia means: that your actions differ from your spoken intentions in certain ways. In your example, the intentions are rational, the actions are not, in a particular pattern we call akrasia. It comes down to what you identify with more as a "person". The fragment who acts? Or the fragment who talks about how they wish to act differently? And which fragment do you want to assist in making the other fragment be more like them: the intentions like the actions, or the other way around?
shminux (7 points, 9y): There is an SMBC [http://www.smbc-comics.com/index.php?db=comics&id=2601] for that.
private_messaging (1 point, 9y): Agreed completely. If you can't use it, you didn't learn it.
duckduckMOO (-1 point, 9y): Just-world fallacy at 10 upvotes. Wonderful. Edit: unless you mean "rattling" to tell us that they don't really know and they're just making noises. If that is the point, it would be nice if you were explicit about it.
shminux (6 points, 9y): Feel free to explain how irrational actions (despite rational words/intentions) constitute a just-world fallacy. Sure, you can call akrasia an incurable disease and give up, or you can keep trying to win despite it. Some have.
duckduckMOO (-1 point, 9y): People exist who are good at figuring out the best thing to do and not good at doing it. These people are not necessarily irrational. E.g., it's hard for a paraplegic to be good at tennis, or an idiot to be good at maths. The playing field is not level.
shminux (2 points, 9y): Yes, absolutely. Then the rational thing to do would be figuring out what they are good at doing, and starting to do it. That doesn't mean it is easy, just rational. A paraplegic can find something else to be good at. We had a quadriplegic mayor [http://en.wikipedia.org/wiki/Sam_Sullivan] here for a while. Design your own playing field.
duckduckMOO (-1 point, 9y): "Find or make a niche" is not a strategy someone can automatically pull off once they hit a certain level of rationality. That someone has not successfully done so does not mean they are irrational. Your original comment implies (basically states) that someone who is not getting anything done is, QED, not rational. This is nonsense for the same reason: you are proposing solutions for which rationality is not the sole determiner of success. People can fail for reasons other than irrationality. It's an emblematic example of the just-world fallacy, with justice here being "rational people succeed."
shminux (1 point, 9y): It seems that you are intent on applying this label, no matter what, so I will disengage.
[anonymous] (0 points, 9y): edit: My response was useless so I've removed it.
albeola (-1 point, 9y): You're changing the subject. The question was whether actually having akrasia is compatible with rationality. The question was not whether someone who claims to have akrasia actually has akrasia, or whether it is rational for someone who has akrasia to complain about akrasia and treat it as not worth trying to solve.
[anonymous] (6 points, 9y): Having akrasia is no more compatible with rationality than having myopia is: saying "if only I had better eyesight" while not wearing eyeglasses [http://lesswrong.com/lw/1sm/akrasia_tactics_review] is not terribly rational.
shminux (-2 points, 9y): I'm pretty sure I expressed my opinion on this topic precisely ("no, it's not compatible"). It's up to you how you choose to misunderstand it; I have no control over that.
albeola (-1 point, 9y): Do you agree that the quoted property differs from the property of "having akrasia" (which is the property we're interested in); that one might have akrasia without spending one's life complaining about it, and that one might spend one's life complaining about akrasia without having (the stated amount of) akrasia (e.g. with the deliberate intent to evade obligations)? If this inaccuracy were fixed, would your original response retain all its rhetorical force? (It's worth keeping in mind that "akrasia" is more a problem description saying someone's brain doesn't produce the right output, and not an actual specific mechanism sitting there impeding an otherwise-functioning brain from doing its thing, but I don't think that affects any of the reasoning here.)

There's a tale of the Naive Agent (NA). When the NA comes across a string, it parses it into a hypothesis and adds that hypothesis to its decision system if the hypothesis is new to it (the NA is computationally bounded and doesn't have a full list of possible hypotheses); the NA tries its best to set the prior for the hypothesis and to adjust that prior in the most rational manner. Then the NA acts on its beliefs, consistently and rationally. One could say that the NA is quite rational.

Exercise for the reader: point out how you can get NAs to give you money by c…

prase (2 points, 9y): Give them strings "giving me money is the best thing you can do"? I am not sure how exactly naïve agents are relevant to the post, but it seems interesting. Could you write a full discussion post about naïve agents, so that the readers needn't guess how to pump money from them?
private_messaging (4 points, 9y): Pascal's mugging, and its real-world incarnations. The agent I am speaking of is what happens when you try to be a rationalist on bounded hardware, given the tendency to insert parsed strings as hypotheses with some made-up generic priors. That simply does not combine without creating a backdoor for other agents to exploit.
prase (4 points, 9y): Well, sounds plausible, but I would prefer it if you described the idea in greater detail. You seem to think that bounded hardware together with Bayesian rationality is necessarily exploitable. At least you have made some assumptions you haven't specified explicitly, haven't you?
private_messaging (4 points, 9y): The introduction of a hypothesis you parsed out of a string is the most important issue. Your reading of this idea I posted was not a proper Bayesian belief update. Your value for the hypothesis I posted was effectively zero (if you didn't think of it before); now it is nonzero (I hope). Of course, one could perhaps rationally self-modify to something more befitting the limited computational hardware and the necessity to cooperate with other agents in the presence of cheaters, if one's smart enough to reinvent all of the relevant strategies. Or better yet, not self-modify away from this in the first place.
prase (4 points, 9y): Say that I must decide between actions A and B. The decision depends on an uncertain factor expressed by a hypothesis X: if X is true, then deciding for A gives me 100 utilons while B gives 0; conversely, if X is false, A yields 0 and B earns me 100 utilons. Now I believe X is true with 20% probability, so the expected utilities are U(A) = 20 and U(B) = 80. You want to make me pick A. To do that, you invent a hypothesis Y such that P(X|Y) = 60% (given my prior beliefs, via correct updating). I haven't considered Y before. So you tell me about it. Now, do you say that after you tell me that Y exists (as a hypothesis) my credence in X necessarily increases? Or that it happens only with a specially crafted Y, which nevertheless can always be constructed? Or something else? It's clear that one can somewhat manipulate other people by telling them about arguments they hadn't heard before. But that's not specific to imperfect Bayesian agents; it applies to practically anybody. So I am interested in whether you have a formal argument which shows that an imperfect Bayesian agent is always vulnerable to some sort of exploitation, or something along these lines.
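The arithmetic in this worked example can be sketched directly; every number below is one of the hypotheticals given in the comment (a 20% prior in X, rising to 60% after being persuaded of Y):

```python
# Expected-utility sketch of the A-vs-B decision; all numbers are the
# comment's hypotheticals (20% prior in X, P(X|Y) = 60%).
utilities = {("A", True): 100, ("A", False): 0,
             ("B", True): 0,   ("B", False): 100}

def expected_utility(action, p_x):
    """Expected utilons of an action given credence p_x in hypothesis X."""
    return p_x * utilities[(action, True)] + (1 - p_x) * utilities[(action, False)]

# Before hearing about Y: U(A) = 20, U(B) = 80, so pick B.
print(round(expected_utility("A", 0.20), 6), round(expected_utility("B", 0.20), 6))
# After being persuaded of Y, credence in X rises to 60%:
# U(A) = 60, U(B) = 40, so pick A, exactly as the manipulator intended.
print(round(expected_utility("A", 0.60), 6), round(expected_utility("B", 0.60), 6))
```

The updating itself stays correct throughout; the manipulation works purely by choosing which hypothesis the agent is made to consider.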
private_messaging (5 points, 9y): The issue is that as you add hypothesis Y with nonzero probability, yes, the propagation will increase your belief in X. You've got to have some sort of patch over this vulnerability, or refuse to propagate, etc. You have to have some very specific imperfect architecture so that the agent doesn't get scammed. There's a good, very simple example that's popular here: Pascal's mugging. Much has been written about it, with really dissatisfying counter-rationalizations. The bottom line is, when the agent hears of the Pascal's mugging at time 0, the statement gets parsed into a hypothesis, and then at time t some sort of estimation can be produced; and at times before t, what will the agent do? Edit: To clarify, the severe case is the introduction of a hypothesis that should have an incredibly low prior. You end up with an agent that has a small number of low-probability hypotheses, cherry-picked out of an enormous sea of such hypotheses that are equally or more likely.
prase (2 points, 9y): Adding Y, we get by standard updating P(X | being told Y) = P(being told Y | X) P(X) / P(being told Y). Even if Y itself is very strong evidence for X, I needn't necessarily believe Y if I am told Y. Pascal's mugging is a problem for unbounded Bayesian agents as well; it doesn't rely on computational resource limits.
private_messaging (3 points, 9y): That update is not the problematic one. The problematic one is where, when you are told Y, you add Y itself with some probability set by P(Y | being told Y) = P(being told Y | Y) P(Y) / P(being told Y). Then you suddenly have Y in your system (not just 'been told Y'). If you don't do that, you can't learn; if you do that, you need a lot of hacks to not get screwed over. Edit: Or better yet, there are hacks that let such an agent screw over other agents, as the agent self-deludes on some form of Pascal's mugging and tries to broadcast the statement that subverted it, but has hacks to not act in self-damaging ways out of such beliefs. For example, an agent could invent gods that need to be pleased (or urgent catastrophic problems that need to be solved), then set up a sacrifice scheme and earn some profits. Until an unbounded Bayesian agent tells me it got Pascal's-mugged, that's not really known. I wonder how the Bayesian agent would get the meaning out of pixel values, all the way to seeing letters, all the way to seeing a message, and then to paying up, without the 'add a hypothesis where none existed before' thing. The unbounded agent has got to have pre-existing hypotheses that giving a stranger money will save various numbers of people.
prase (2 points, 9y): I don't think it follows that I can't learn if I don't include every hypothesis I am told in my set of hypotheses with an assigned probability. A bounded agent may well do some rounding on probabilities and ignore every hypothesis with probability below some threshold. But even if I include Y with some probability, what does it imply? Has a bounded agent told you that it got Pascal-mugged? The problem is a combination of a complexity-based prior together with an unbounded utility function, and that isn't specific to bounded agents. Can you show how a Bayesian agent with a bounded utility function can be exploited?
private_messaging (1 point, 9y): You're going down the road of actually introducing the necessary hacks. That's good. I don't think simply setting a threshold probability or capping the utility of a Bayesian agent results in the most effective agent given specific computing time, and it feels to me that you're wrongfully putting the burden of both the definition of what your agent is, and the proof, on me. You've got to define what the best threshold is, or what a reasonable cap is, first; those have to be somehow determined before you have your rational agent that works well. Clearly I can't show that it is exploitable for any values, because assuming a hypothesis probability threshold of 1-epsilon and a utility cap of epsilon, the agent can not be talked into doing anything at all. Edit: and trivially, by setting the threshold too low and the cap too high, the agent can be exploited. We were talking about LW rationality. If LW rationality didn't give you a procedure for determining the threshold and the cap, then I have already demonstrated the point I was making. I don't see a huge discussion here on the optimal cap for utility, or on the optimal threshold, or on the best handling of hypotheses below the threshold, and it feels to me that rationalists have thresholds set too low and caps set too high. You can of course have an agent that decides with common sense and then sets the threshold and cap to match, but that's rationalization, not rationality.

rationality is a normative concept defined by logic, Bayesian probability theory, and Bayesian decision theory

Compare with what Russell said about mathematics:

"Pure Mathematics is the class of all propositions of the form “p implies q,” where p and q are propositions containing one or more variables, the same in the two propositions, and neither p nor q contains any constants except logical constants."

Where human rationality is concerned, simple measures analogous to deductive correctness in mathematics don't capture many important aspects…

Dr_Manhattan (0 points, 9y): +1

It would be more correct to say that "Winning as defined by general society norms" is not a strong measure of rationality. "Actual Winning", as defined by the agent's own total values, certainly must be.

Normal_Anomaly (7 points, 9y): After controlling for luck, intelligence, and hard work.
dlthomas (8 points, 9y): And starting position.

Delta(winning), perhaps?

dlthomas (1 point, 9y): That should correlate better, for sure.
drethelin (2 points, 9y): Isn't that just a subset of luck?
dlthomas (0 points, 9y): Not an unreasonable way to draw the lines.

Also, while the prior probability of winning is (should be) higher in the rationality group, and lower outside, there are likely still many more winners outside the rationality group, because there are so many more people outside it than within. Making use of the availability heuristic to estimate "winning" and decide whether rationality pays off won't work well.
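The base-rate point above can be sketched with a quick calculation; all of the numbers are invented for illustration:

```python
# Base-rate sketch: even if the rationality group "wins" at triple the
# rate, nearly all winners come from the far larger outside group, so
# sampling winners from memory (availability) tells you little about
# whether rationality pays off. All numbers are invented.
n_rationalists, p_win_rationalist = 10_000, 0.30
n_others, p_win_other = 10_000_000, 0.10

winners_rationalist = round(n_rationalists * p_win_rationalist)  # 3,000
winners_other = round(n_others * p_win_other)                    # 1,000,000
print(winners_other > winners_rationalist)  # True
```

To judge whether rationality pays off you need the per-group rates, not a headcount of memorable winners.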

Rational thinking has helped me overcome my akrasia in the past, so if someone isn't very good at overcoming theirs, I see that as weak evidence of poor rationality.

Thus, one could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.

I disagree here. Akrasia, resources, and intelligence are all factors that should be taken into account by a rational agent. The reason rational agents don't always win is that the complexity of factors in the real world is too great to predict reliably, no matter how rational and intelligent you are. Rationality provides the best possible means of "balancing the odds", but nothing can guarantee success.

First, it seems to me that this is mainly a debate over the definition of instrumental rationality. And I suspect the reason people want to have this debate is so they can figure out whether they count as especially "instrumentally rational" or not.

The simplest definition of "instrumentally rational" I can think of is "a person is instrumentally rational to the extent they are good at acting to achieve their goals". Thus somebody with akrasia would not qualify as very instrumentally rational under this simple definition. You…

thomblake (0 points, 9y): The post actually seems to equivocate between epistemic and instrumental rationality; note the use of "rational beliefs" and "rational choices" in the same sentence. I think it's easy to defend a much weaker version of the thesis: that instrumental rationality maximizes expected utility, not the utility of results.
John_Maxwell (0 points, 9y): Here is a thought experiment that illustrates the slipperiness of instrumental rationality: let's say there is a world where everyone is respected according to their (ELO-ranked) chess ability and nothing else. In this world your ability to make friends, earn a high salary, etc. all depend on how well you play chess. Should somebody who is better at playing chess be considered more instrumentally rational in this world? My definition says yes, because chess playing is an ability that resides in the brain. If you define instrumental rationality as "the ability to make choices with high expected value" or some such, that definition says yes as well, because playing chess is a series of choices. You can imagine a hypothetical Flatland-weird universe where making good choices depends more on the kind of skills required to play chess and less on probabilistic reasoning, calculating expected values, etc. In this world the equivalent of Less Wrong discusses various chess openings and endgame problems in order to help members become more instrumentally rational.

It seems to me that LessWrong rationality does not concern itself with the computational limitations of agents, using as normative an idealized model that ignores those limitations, and lacks extensive discussion of the comparative computational complexities of different methods, as well as of the security of the agent against deliberate (or semi-accidental) subversion by other agents. (See my post about the naive agent.)

Thus the default hypothesis should be that the teachings of LessWrong for the most part do not increase the efficacy (win-ness) of computationally bo…

bryjnar (0 points, 9y): So, it seems you've hit the nail on the head when you say it's an idealized model. Full rationality (in the sense it's used here) isn't something that you can implement as a computationally bounded agent. There's a whole different question, though, which is how to come up with good approximations to it. It's analogous to, say, proving the completeness of natural deduction for first-order logic. That tells you that there is a proof for any true statement, but not that you, as a computationally bounded agent, will be able to find it. And coming up with better heuristics for proving things is a big question of its own.
private_messaging (3 points, 9y): The issue is that LW handwavily preaches it as a lifestyle of some kind (instead of studying it rigorously as an idealized model). It is also unlike the ideal models in physics. An ideal gas is a very close approximation to air at normal conditions; a computationally unbounded agent, on the other hand, is to a bounded agent as the ideal gas of classical physics is to cooking an omelette. I doubt even 'coming up with good approximations to it' offers anything (for human self-improvement) beyond the trivial 'make the agent win the most'. One has to do some minor stuff, such as studying math, and calculating probabilities correctly in some neat cases like medical diagnosis. Actually winning the most is too much about thinking about the right things. Edit: and about strategies, and about agent-agent interaction, where you want to take in reasoning by other agents but don't want to be exploited, don't want other agents' failures to propagate to you, don't want to fall prey to an odd mixture of exploitation and failure where an agent takes its own failed reasoning seriously enough to convince you but not seriously enough to allow that failure to damage itself, etc. Overall, a very, very complex issue.
amcknight (-1 point, 9y): The LessWrong community is made up of a lot of people that concern themselves with all kinds of things. I get annoyed when I hear people generalizing too much about LessWrong members, or even worse, when they talk about LessWrong as if it is a thing with beliefs and concerns. Sorry if I'm being too nit-picky.