Taylor & Brown (1988) argued that several kinds of irrationality are good for you — for example that overconfidence, including the planning fallacy, protects you from depression and gives you greater motivation because your expectancy of success is higher.

One can imagine other examples. Perhaps the sunk cost fallacy is useful because without it you're prone to switch projects as soon as a higher-value project comes along, leaving an ever-growing heap of abandoned projects behind you.

This may be one reason that many people's lives aren't much improved by rationality training. Perhaps the benefits of having more accurate models of the world and making better decisions are swamped by the costs of giving up overconfidence, the sunk cost fallacy, and other "positive illusions." Yes, I read "Less Wrong Probably Doesn't Cause Akrasia," but I think that study had too many methodological weaknesses to be given much weight.

Others have argued against Taylor & Brown's conclusion, and at least one recent study suggests that biases are not inherently positive or negative for mental health and motivation because the effect depends on the context in which they occur. There seems to be no expert consensus on the matter.

(Inspired by a conversation with Louie.)

Perhaps the sunk cost fallacy is useful because without it you're prone to switch projects as soon as a higher-value project comes along, leaving an ever-growing heap of abandoned projects behind you.

Sunk costs might in fact be a useful heuristic for many people. But if rationality gets you in trouble in this area, use more rationality! After all, once you know about this switching tendency, you can consciously counteract it with strategies (precommitment, etc.).

What you're pointing to is that a little rationality can be a dangerous thing.

If someone believes in X, they will like the general argument: "If X causes a problem, the solution is to use more X." But can it be somehow proved, for X = rationality?

Perhaps this is all just one big sunk cost fallacy. Some of us have already invested too many resources into learning rationality -- whether so-called traditional rationality, x-rationality, or whatever your preferred flavor is. Maybe we just refuse to update...

One view is that people are able to vary their biases, so that they're more biased when the bias is helpful and less when it gets in the way. For instance, Armor & Sackett (2006) found that people were more optimistic about how they'd perform on hypothetical tasks but relatively accurate in predicting how they'd perform on real tasks that they were actually about to do (when they'd have to face the reality of their actual performance). That's consistent with near/far theory - in far mode people can afford to have more biased, flattering images of themselves.

With sunk costs, one of the most relevant theories is Gollwitzer's theory about deliberative and implemental mindsets. He argues that when people are faced with a decision, they tend to go into a deliberative mindset which allows them to view the options relatively accurately so that they can make a good decision. But once they've made a decision, their focus shifts to how to implement the chosen course of action - planning how to get it done & figuring out how to overcome any obstacles. Thoughts about whether the course of action really is desirable (or feasible) are distractions and potential demotivators, so people in an implemental mindset mostly don't think about that, and are overly optimistic when they do.

If rationality training de-compartmentalizes people and makes it harder to switch between different mindsets then that could be a disadvantage. It could be harder to carry out a project if you keep putting yourself back in the mindset of deciding whether or not it's worth doing.

But it doesn't have to happen that way - rationality training could train you to respond appropriately to these changes in circumstances. For example, one alternative way of thinking in situations where people are prone to the sunk cost fallacy is to ask yourself if now is a good time to re-evaluate your course of action. Maybe the answer is no, it's not a good time - you're in the middle of doing things and it would be a distraction, or you put a lot of thought into your original decision and wouldn't have the perspective to re-evaluate it now that you're immersed in the project. In that case you can keep going with the project and trust your original decision, no need to justify it. But sometimes the answer is yes, it is a good time to re-evaluate - circumstances have changed, or you have new information which might have changed your decision if you knew it from the start, or you're considering whether to make some new commitment which you hadn't expected. If it is a good time to re-evaluate, then take a step back and take a fresh look at the decision, taking into account all that you've learned - no need to be beholden to your old decision.
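
To make that "is now a good time to re-evaluate?" question concrete, here is a minimal sketch of it as a checklist; the predicate names are just hypothetical labels for the conditions described above, not anything from the literature.

```python
# A rough checklist version of "is now a good time to re-evaluate?" -
# the inputs are hypothetical labels for the conditions described above.
def good_time_to_reevaluate(circumstances_changed: bool,
                            new_information: bool,
                            new_commitment_pending: bool) -> bool:
    # Re-open the decision only when something decision-relevant has changed;
    # otherwise keep going and trust the original decision, no justification needed.
    return circumstances_changed or new_information or new_commitment_pending

# Mid-project with nothing new: don't re-litigate the decision.
print(good_time_to_reevaluate(False, False, False))  # False
# New information that might have changed the original decision: step back and re-decide.
print(good_time_to_reevaluate(False, True, False))   # True
```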

Armor, D. A., & Sackett, A. M. (2006). Accuracy, error, and bias in predictions for real versus hypothetical events. Journal of Personality and Social Psychology, 91, 583-600.

Our brains obviously have things like the sunk cost fallacy and all the various heuristics built in for a reason. We don't have an explicit utility function in our brains, and so we have all kinds of hacked-on solutions.

For example, the sunk cost fallacy in our brains is there to prevent us from giving up on tasks prematurely. Our brains need this tendency because we also have a boredom tendency built in. An AI would simply calculate the expected value of working on each of the things it could be doing. The boredom tendency would be countered by an understanding of diminishing marginal returns (or not at all, for an unconscious AI), and the sunk cost fallacy would be unnecessary.
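
As a rough illustration of that point, here is a toy sketch (with made-up numbers and a made-up decay factor) of an agent that just re-picks the task with the highest expected marginal value, letting diminishing returns handle switching instead of sunk costs or boredom:

```python
# Toy task chooser: pick whichever task has the highest marginal value right now.
# The decay factor is an assumed stand-in for diminishing marginal returns.
def marginal_value(base_rate: float, hours_done: float, decay: float = 0.9) -> float:
    # Each additional hour on a task is worth a bit less than the last one.
    return base_rate * (decay ** hours_done)

def pick_task(tasks: dict) -> str:
    # tasks maps name -> (base hourly value, hours already invested).
    # Hours already spent only matter through diminishing returns,
    # never as a "we've come too far to quit" term.
    return max(tasks, key=lambda name: marginal_value(*tasks[name]))

print(pick_task({"polish essay": (40.0, 6.0), "answer email": (25.0, 0.0)}))
# -> "answer email": the essay's marginal value has decayed below the email's.
```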

You need to be explicit that all these heuristics and biases are not overall good things, but simply hacked-on aspects of human psychology.

I personally feel that learning about rationality has greatly improved my life, but 1. so do New Age advocates and 2. that's just one data point.

...many people's lives aren't much improved by rationality training.

The outlook for rationality training may be brighter than you think. I've been looking at a number of materials aimed at businessmen who want greater success, and a lot of the techniques described and used by successful businessmen bear a strong resemblance to the sort of things you read on Less Wrong. For example, consider this passage in Peter Drucker's book The Effective Executive, describing Alfred P. Sloan, a legendary CEO of GM:

Alfred P. Sloan is reported to have said at a meeting of one of his top committees: "Gentlemen, I take it we are all in complete agreement on the decision here." Everyone around the table nodded assent. "Then," continued Mr. Sloan, "I propose we postpone further discussion of this matter until our next meeting to give ourselves time to develop disagreement and perhaps gain some understanding of what the decision is all about."

Sloan was anything but an "intuitive" decision-maker. He always emphasized the need to test opinions against facts and the need to make absolutely sure that one did not start out with the conclusion and then look for the facts that would support it. But he knew that the right decision demands adequate disagreement.

So here we have talk of the need to consider multiple alternatives, the importance of not just getting one side of a story, the need to test hypotheses against reality, and the importance of not starting with the bottom line... all familiar topics in our community.

I think this is really a discussion about the best order in which debiasing should occur.

Project management seems to be an implementation of debiasing strategies: they first teach you to make more accurate predictions, and only later teach you how to keep the sunk cost fallacy from preventing you from cancelling a failing project.

Because of this I think debiasing should occur in a kind of logical order, one that prevents someone from cancelling all their projects due to a good grasp of sunk costs and a bad grasp of utility calculations.

Another potential negative is a reduced capacity for strategic self-deception.

It seems like I can engage in strategic self-deception while acknowledging it as such, in order to reduce negative thoughts or tolerate unpleasantness in situations where it's beneficial. Rationality practice seems to be a benefit inasmuch as it lets me better understand the situations in which self-deception leads to negative versus positive outcomes.

To the extent that a brain baptized in rationality is magically consistency-enforcing, yes.

I get a giddy thrill out of believing things like "if I tell myself X, it will (without changing anything I know about the world) affect the way this part of my brain functions". Perhaps that's a vice I need to subdue, but I doubt it.

I'm not sure I grasp what your point is - could you try stating it again directly, rather than via sarcasm please? Thanks!

I'm surprised you thought that was sarcastic. It's really stating exactly my partial agreement with you. Doing it again, I'd replace "magically" by "perfectly". I don't think what you said is stupid.

I realize you only said "potential", but I don't think any training will impair the efficacy of strategic self-deception at all, except to the extent that a rational person isn't going to tolerate "afraid to think about that" feelings.

The only value I see in self-deception is in modifying affect toward people/things/plans - altering our decision making and our direct-line physical responses (consciously faking some things is costly - facial expression, tone of voice, etc).

The only obstacle I've noticed in myself to consciously self-deceiving/affirming like this is an "I don't do that" self-embarrassed hesitation to try - part of my self-identification as "rational".

My intuition in support of believing that total consistency enforcement isn't trainable (beyond personal experience) is that coordination amongst brain submodules is limited, and any training probably reaches only some of the submodules.

Ah yes, it was the "magically" that threw me - thanks!

Perhaps the sunk cost fallacy is useful because without it...

This sounds like a fake justification. For every justification of a thing that points to its positive consequences, one can ask how much better that thing is than the alternatives would be.

I expect evolution to produce beings at local optima according to its criteria, which often results in a solution similar to what would be best according to human criteria. But it's often significantly different, and rarely the same.

For every systematic tendency to deviate from the truth I have, I can ask myself the leading question "How does this help me?" and I should expect to find a good evolutionary answer. More than that, I would expect to actually be prone to justifying the status quo according to my personal criteria, rather than evolution's. Each time I discover that what I already do habitually is best, according to my criteria, in the modern (not evolutionary) environment, I count it as an amazing coincidence, as the algorithms that produced my behavior are not optimized for that.

Perhaps the sunk cost fallacy is useful because without it you're prone to switch projects as soon as a higher-value project comes along, leaving an ever-growing heap of abandoned projects behind you.

There's actually some literature on justifying the sunk cost fallacy, pointing to the foregone learning of switching. (I should finish my essay on the topic; one of my examples was going to be 'imagine a simple AI which avoids sunk cost fallacy by constantly switching tasks...')

EDIT: you can see my essay at http://www.gwern.net/Sunk%20cost

'imagine a simple AI which avoids sunk cost fallacy by constantly switching tasks...')

Why would an AI have the sunk cost fallacy at all? Aren't you anthropomorphizing?

No, his example points out what an AI that specifically does not have the sunk cost fallacy is like.

The thing is, an AI wouldn't need to feel a sunk cost effect. It would act optimally simply by maximising expected utility.

For example, say I decide to work on Task A, which will take me five hours and will earn me $200. After two hours of work, I discover Task B, which will award me $300 after five hours. At this point, I can behave like a human and feel bored and annoyed, but the sunk cost effect will make me continue, maybe. Or I can calculate the expected return: I'll get $200 after 3 more hours of work on Task A, which is about $67 per hour, whereas I'll get $300 after 5 hours on Task B, which is $60 per hour. So the rational thing to do is to avoid switching.

The sunk cost fallacy reflects that after putting work into something, the wage for continuing work decreases. An AI wouldn't need that to act optimally.
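
Spelling out the arithmetic in that example (using the numbers assumed above):

```python
# Compare forward-looking return per hour; the two hours already sunk into Task A
# don't appear anywhere in the calculation.
remaining_rate_a = 200 / 3   # $200 for the 3 hours left on Task A, about $66.67/hour
rate_b           = 300 / 5   # $300 for 5 hours on Task B, $60.00/hour

best = "continue Task A" if remaining_rate_a > rate_b else "switch to Task B"
print(f"A: ${remaining_rate_a:.2f}/h  B: ${rate_b:.2f}/h  ->  {best}")
```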

One of my points is that you bury a great deal of hidden complexity and intelligence in 'simply maximize expected utility'; it is true sunk cost is a fallacy in many simple fully-specified models and any simple AI can be rescued just by saying 'give it a longer horizon! more computing power! more data!', but do these simple models correspond to the real world?

(See also the question of whether exponential discounting rather than hyperbolic discounting is appropriate, if returns follow various random walks rather than remain constant in each time period.)
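
For reference, a small sketch of the two discount functions being contrasted (standard textbook forms, with arbitrary parameter values): exponential discounting weighs a delay of t periods by δ^t and is time-consistent, while hyperbolic discounting weighs it by 1/(1 + kt) and produces preference reversals as rewards get close.

```python
# Exponential vs. hyperbolic discounting, with arbitrary example parameters.
def d_exp(t, delta=0.9):
    return delta ** t          # D(t) = delta^t

def d_hyp(t, k=1.0):
    return 1 / (1 + k * t)     # D(t) = 1 / (1 + k*t)

small_soon, large_late = 50, 100   # smaller-sooner vs. larger-later reward
for delay in (10, 0):              # the same choice seen from far away, then up close
    exp_pick = "small" if small_soon * d_exp(delay) > large_late * d_exp(delay + 2) else "large"
    hyp_pick = "small" if small_soon * d_hyp(delay) > large_late * d_hyp(delay + 2) else "large"
    print(f"delay={delay:>2}: exponential picks {exp_pick}, hyperbolic picks {hyp_pick}")
# The exponential discounter picks "large" both times; the hyperbolic one
# reverses to "small" once the smaller reward is immediately available.
```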

You neglected the part where the AI may stand to learn something from the task, which may have a large expected value relative to the tasks themselves.

Yeah, but that comes under expected utility.

What else are you optimising besides utility? Doing the calculations with the money can tell you the expected money value of the tasks, but unless your utility function is U=$$$, you need to take other things into account.

Off-topic, but...

I like how you have the sections on the side of your pages. Looks good (and works reasonably well)!

Thanks. It was a distressing amount of work, but I hoped it'd make up for it by keeping readers oriented.

Yep, it seems to. :)

(Bug report: the sausages overlap the comments (e.g. here), maybe just a margin-right declaration in the CSS for that div?)

I don't see it. When I halve my screen, the max-width declaration kicks in and the sausages aren't visible at all.

Hmm, peculiar...

Here is what I see: 1 2 (the last word of the comment is cut off).

First image link is broken; I see what you mean in the second. Could it be your browser doesn't accept CSS3 at all? Do the sausages ever disappear as you keep narrowing the window width?

(Not sure what happened to that link, sorry. It didn't show anything particularly different to the other one though)

Those screenshots are Firefox nightly (so bleeding edge CSS3 support) but chrome stable shows a similar thing (both on Linux).

Yes, the sausages do disappear if the window is thin enough.

I would think existing irrationality's superiority to rational hacks would be covered by Algernon's Law.

It occurs to me that I have internalized the value of a group that I identify with: the value of truth, from my association with the physicists' and engineers' guild. Interestingly, I accept this value uncritically; at this point in my life, it is an axiom.

I probably suffer as much as anybody from making myself more rational in pursuit of this value I accept uncritically. I am prone to depression and akrasia, and I constantly wonder why I should take any of my values seriously, since I know the truth is that they are imposed on me by evolution; they are not really "mine." All except the value of truth, which I seem to accept uncritically.

Would I be better off if I accepted a value that was more in line with human nature uncritically? The value of thinking I was sexy (and therefore working on being sexier), or the value of controlling people around me? You know, I probably would.

But that would go against my most deeply (and uncritically) held value of truth, so I am very unlikely to move in that direction. Hmmm.

Rationalists look at the winners and do what they do. Rationalists do not bemoan their new rationality because giving up the sunk-cost fallacy leaves abandoned projects behind.

That said, it may be rational to remember that finishing something has huge experience value, the next time you cite the sunk cost fallacy as a reason to defect from a project.

I think the argument above is that the instrumental rationalist might decide not to become an epistemic rationalist? If the 'winners' (say, 'happy people' or 'productive people') tend to believe in a load of codswallop, then should I attempt to brainwash myself to believe the same?

then should I attempt to brainwash myself to believe the same?

No. You should find out what value it is that they get from that, and do the optimal thing to get that value.

Think, by analogy, of pulling the Bayes structure out of a neural network or something. Once you know how it works, you can do better.

Likewise with this; once you know why these particular bad beliefs have high instrumental value, you can optimise for that without the downsides of those particular beliefs.

Once you know how it works, you can do better.

Well, maybe. Or maybe not. The "irrational" behavior might be optimal as it is. (Probably not, but we can't know until we look to see, right?) One hopes that successful, but irrational-looking behaviors have rational cores. But it might be that the benefit (the rational part) only comes with the associated costs (the irrational-looking part).

The probability of random irrational behavior being optimal is extremely low. The space of behaviors is super huge.

Also, if you can show me even one case where the irrationality is necessary, I will be very surprised. My current model of the mind has no room for such dependencies.

If, as I believe, no such dependency is possible, then observing that some behavior has irrational components is pretty strong evidence for non-optimality.

Interesting. What kind of possibility do you have in mind when you say that no such dependency is possible? I can definitely imagine simplistic, abstract worlds in which there is a cost that necessarily has to be paid in order to get a larger benefit.

Example. Consider a world in which there are two kinds of game that you play against Omega. Games of type A have huge payouts. Games of type B have modest payouts. Both kinds of game come up with roughly equal regularity. Omega has rigged the games so that you win at type A games iff you implement an algorithm that loses at type B games. From the perspective of any type B game, it will look like you are behaving irrationally. And in some sense, you are. But nonetheless, given the way the world is, it is rational to lose at type B games.
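
To make the toy world concrete, here is a minimal sketch with made-up payoffs; the point is just that the policy which deliberately loses every type-B game comes out ahead overall.

```python
# Made-up payoffs for the Omega thought experiment above.
A_WIN, B_WIN = 1000, 10      # huge payout for type-A games, modest for type-B
N_A = N_B = 100              # both game types come up about equally often

# Omega's rigging: you win type-A games iff your algorithm loses type-B games.
total_if_you_win_at_B  = N_B * B_WIN   # and win nothing from type A
total_if_you_lose_at_B = N_A * A_WIN   # and win nothing from type B

print(total_if_you_win_at_B, total_if_you_lose_at_B)   # -> 1000 100000
```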

Anyway, I can see why you might want to ignore such fanciful cases. But I would really like to know where you confidence comes from that nothing like this happens in the actual world.

What kind of possibility do you have in mind when you say that no such dependency is possible

This universe, things you might actually run into (including Omega's usual tricks, though ve could certainly come up with something to break my assumption). I know of no reason that there are gains that are lost once you become a rationalist. And I have no reason to believe that there might be.

I can't explain why. I don't have introspective access to my algorithms, sorry.

I can definitely imagine simplistic, abstract worlds in which there is a cost that necessarily has to be paid in order to get a larger benefit.

But a cost that can't be analyzed rationally and paid by a rationalist who knows what they are doing? I don't buy it.

Game A, Game B, Omega

You may be behaving irrationally in Game B, but that's OK, because Game B isn't the game you are winning.

You can take almost any rationally planned behavior out of context such that it looks irrational. The proof is that locally optimal/greedy algorithms are not always globally optimal.

If you look at the context where your strategy is winning, it looks rational, so this example does not apply.

If you look at the context where your strategy is winning, it looks rational, so this example does not apply.

I think maybe we're talking past each other, then. I thought the idea was to imagine cases where the algorithm or collection of behaviors generated by the algorithm is rational even though it has sub-parts that do not look rational. You are absolutely right when you say that in-context, the play on Game B is rational. But that's the whole point I was making. It is possible to have games where optimal play globally requires sub-optimal play locally.

That is why I put "irrational" in those scare quotes in my first comment. If a behavior really is optimal, then any appearance of irrationality that it has must come from a failure to see the right context.

The probability of random irrational behavior being optimal is extremely low. The space of behaviors is super huge.

Our behaviors based on System 1 judgments aren't "random"; they are likely psychological adaptations that were ESSs (evolutionarily stable strategies) in the ancestral environment.

OK, good point. It's still not optimal, though. And if it does work (not optimal, just works), that means it's got some rational core to it that we can discover and purify.

The probability of random irrational behavior being optimal is extremely low. The space of behaviors is super huge.

Is the space of surviving behaviors super huge?

Yup. I imagine that there are some cases where you can get the benefits without the costs, but I doubt that it's always the case. It seems perfectly plausible that sometimes happiness/success relies on genuinely not knowing or not thinking about the truth.

Perhaps the sunk cost fallacy is useful because without it you're prone to switch projects as soon as a higher-value project comes along, leaving an ever-growing heap of abandoned projects behind you.

That sounds an awful lot like irrational behavior. If only we had some group who liked thinking about ways to overcome exactly that sort of thing, and who even had enough success (by hypothesis) that they'd already eliminated some parts of human irrationality.

:D

I think there is a valid issue in our biases and irrational behaviors being dependent on each other. My typical example is the planning fallacy combined with the fact that we feel loss more strongly than gain (and that pain can quickly get stronger than pleasure). Since it's much easier to feel pain than pleasure, in trying to maximize pleasure we should be tempted not to take risks. But if in addition we underestimate risks, then we'll end up taking some risks after all.

More generally, I think it comes from evolution not being a perfect designer. It grows brains full of bugs. But then another bug that lowers the impact of a previous bug will have a net advantage, and will spread. But once you have the two bugs, a fix for just one of them will have a net disadvantage, and will not spread.

It's something that occurs in computer programs too - programmers among us will remember cases in which removing a bug revealed another bug that had previously been compensating for the first. Like a bug that reversed a sort order, and another bug in another part of the program that reversed it again, leading to the correct order. Or an "off by one" in one direction at one point, compensated by an "off by one" in the other direction at another.
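
A contrived illustration of that pattern (not from any real codebase): two sort-order bugs that cancel, so the output looks correct until someone fixes only one of them.

```python
def rank_scores(scores):
    # Bug 1: meant to sort ascending, accidentally sorts descending.
    return sorted(scores, reverse=True)

def report(scores):
    ranked = rank_scores(scores)
    # Bug 2: walks the list backwards by mistake, which undoes bug 1.
    return list(reversed(ranked))

print(report([5, 1, 9, 3]))  # [1, 3, 5, 9] - correct output from two wrong steps
```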

But it follows from all that that debiasing, or rationality, is not harmful in itself; it's the intermediate state, where you're only partly debiased, which is unstable. Of course, that creates lots of practical issues in trying to debias yourself (or someone else).

To quote Final Fantasy VI: "How will we know unless we try for ourselves?"

Among other things, it is less painful to learn from other people's mistakes, wherever possible.

Two problems with this: 1) the arguments suggest that this could be actually harmful, and it's not clear that you can reverse the effect - not to mention the opportunity costs. If someone told me that cutting off all contact with everyone I knew to go and live in an abandoned bus in Alaska would be great for my personal development, I'd consider the potentially harmful aspects and the time it would take, rather than applying the maxim you give. 2) That principle can be applied to anything: you'd have to try every programme of study and every life-improvement scheme on the same basis. And aside from any risks (above), you literally would die before you'd finished trying everything for yourself.

You obviously don't need to be certain that rationality training is good before you do it. But you should have good reasons to think it has a positive net expected value. And a higher one than other schemes (or potentially much higher, and you're able to move on to other options if it fails, etc.).

Research on human subjects is like that. That's why they invented human research ethics and IRBs.

Honestly, I didn't say half of the things in your comment, so I'm not sure how (or even if it's productive) to respond.

Apologies if I've misread you. You seemed to be responding to the claim 'rationality training might have a negative effect' with 'the only solution is to try for ourselves'. I was saying the same principle could be applied to any number of things. If you were simply saying that whether something is good needs to be tested in practice, I agree. But I thought you were saying that we should try rationality training specifically, rather than other methods some think improve your life, (e.g. Scientology). If you're claiming that, then you need some reason to expect better results from rationality training than alternatives.