I just finished the first draft of my essay, "Are Sunk Costs Fallacies?"; there is still material I need to go through, but the bulk of the material is now there. The formatting is too gnarly to post here, so I ask everyone's forgiveness in clicking through.

To summarize:

  1. sunk costs are probably issues in big organizations
    • but maybe not ones that can be helped
  2. sunk costs are not issues in animals
  3. they appear to be in children & adults
    • but many apparent problems can be explained as part of a learning strategy
  4. there are few clear indications sunk costs are genuine problems
  5. much of what we call 'sunk cost' looks like simple carelessness & thoughtlessness

(If any of that seems unlikely or absurd to you, click through. I've worked very hard to provide multiple citations where possible, and fulltext for practically everything.)

I started this a while ago; but Luke/SIAI paid for much of the work, and that motivation plus academic library access made this essay more comprehensive than it would otherwise have been, and got it finished months in advance.


81 comments

There are interesting examples of this in Go, where pro play commentary often discusses tensions between "cutting your losses" and "being strategically consistent".

If things in Go aren't as clear-cut as the classic utilitarian example of "teleporting into the present situation" (which is typically the way Go programs are written, and they nevertheless lose to top human players), then maybe we can expect that they aren't clear-cut in complex life situations either.

This doesn't detract from the value of teaching people the sunk-cost fallacy: novice Go players do things such as adding stones to an already dead group which are clearly identifiable as instances of the sunk cost fallacy, and improvement reliably follows from helping them identify this as thinking that leads to lost games. Similarly, improvement at life reliably results from improving your ability to tell it's time to cut your losses.

I don't think this is correct. Novice players keep adding stones because they don't realize the group is dead, not because they can't give up on it.
That's probably right at higher kyu levels, when you really have no good grasp of group status. When you ask a novice "what is the status of this group", though, there is typically a time when they can correctly answer "dead" in exercise settings, but fail to draw the appropriate conclusion in a game by cutting their losses, and that's where I want to draw a parallel with the sunk cost fallacy. This is similar to life situations where, if you'd just ask yourself the question "is this a sunk cost, and should I abandon it", you'd answer yes in the abstract - but you fail to ask that question.

In high-pressure or blitz games this even happens to higher-level novice players: you strongly suspect the group is dead, but you keep adding stones to it, playing the situation out. The underlying reasoning is that your opponent has to respond to any move that might save the group, so you're no worse off - you've played one more move and they've played one more. This is in fact wrong: by making the situation more settled you're wasting the potential to use these plays later as ko threats.

Any idea whether Go beginners' tendency to "throw good stones after bad" results from sunk cost fallacy in particular, or from wishful thinking in general?

Like, is the thought "I don't want my stones to have been wasted" or "I really want to have that corner of the board"?

I'd have to look at actual evidence to answer that question with any degree of authority, and that would take more time than I have right now, but I can sketch an answer... My source of empirical evidence would be the Go Teaching Ladder, where you get a chance to see higher level players commenting on the inferred thought processes of more novice players. (And more rarely, novice players providing direct evidence of their own thought processes.) Higher level players tend to recommend "light" play, over "heavy" play: a typical expression is "treat this stone lightly". Unpacked, this means something like "don't treat this stone as an investment that you must then protect by playing further moves reinforcing your conception of this stone as a living group that must be defended; instead, treat this stone as bait that you gladly abandon to your opponent while you consolidate your strength elsewhere". "Heavy" play sounds a lot like treating a sunk cost as a commitment to a less valuable course of action. It is play that overlooks the strategic value of sacrifice. See here for some discussion. However, this is usually expressed from an outside perspective - a better player commenting on the style of a more novice player. I don't know for sure what goes on in the mind of a novice player when making a heavy play - it might well be a mixture of defending sunk costs, wishful thinking, heuristic-inspired play, etc.
It may be an example of a different bias at play, specifically confirmation bias: they don't realize that the stones are being wasted and can't be retrieved. For example, chess masters commit confirmation bias less than weaker players. (It's not that the players explicitly realize that there are better moves elsewhere but decide to keep playing the suboptimal moves anyway, because of sunk costs which would be sunk cost bias; it's that they don't think of what the opponent might do - which is closer to 'thoughtlessness'.)
That's more a fact about Go programs, I think; reading the Riis material recently on the Rybka case, I had the strong impression that modern top-tier chess programs do not do anything at all like building a model or examining the game history, but instead do very fine-tuned evaluations of individual board positions as they evaluate plies deep into the game tree. So you could teleport a copy of Shredder into a game against Kramnik played up to that point by Shredder, and expect the performance to be identical. (If there were any research on sunk cost in Go, I'd expect it to follow the learning pattern: high initially followed by steady decline with feedback. I looked in Google Scholar for '("wei qi" OR "weiqi" OR "wei-chi" OR "igo" OR "baduk" OR "baeduk") "sunk cost" game' but didn't turn up anything. GS doesn't respect capitalization so "Go" is useless to search for.)

Two remarks :

  1. Be careful with the Concorde example. As a French citizen, I was told that the goal of the Concorde was never to be profitable as a passenger service; it served two goals: public relations/advertising to demonstrate to the world the technical ability of French engineering and thereby sell French-made technology (civilian and military planes, for example, but also, through a halo effect, trains or cars or nuclear power plants), and stimulating research and development that could then lead to other benefits (a bit like military research or spac

...
IMO, the Concorde justifications are transparent rationalizations - if you want research, buy research. It'd be pretty odd if you could buy more research by not buying research but commercial products... In any case, I mention Concorde because it's such a famous example and because a bunch of papers call it the Concorde effect.

I'm not terribly confident in that claim; it might be that one suffers them both simultaneously.

I had to resort to anecdotes and speculation for that section; it's intuitively appealing, but we all know that means little without hard data.

Yeah. I was quite surprised when I ran into Arkes's claim - it certainly didn't match my memories of being a kid! - and kept a close eye out thenceforth for studies which might bear on it.
Focusing money too closely on the research itself runs the risk that you'll end up paying for a lot of hot air dressed up to look like research. Cool-but-useless real-world applications are the costly signalling mechanism which demonstrates an underlying theory's validity to nonspecialists. You can't fly to the moon by tacking more and more epicycles onto the crystalline-sphere theory of celestial mechanics.
If you want to fly to the moon, buy flying to the moon. X-prizes etc. You still haven't shown that indirect mechanisms which happen to coincide with the status quo are the optimal way of achieving goals.
"Modern-day best-practices industrial engineering works pretty well at its stated goals, and motivates theoretical progress as a result of subgoals" is not a particularly controversial claim. If you think there's a way to do more with less, or somehow immunize the market for pure research against adverse selection due to frauds and crackpots, feel free to prove it.
I disagree. I don't think there's any consensus on this. The success of prizes/contests for motivating research shows that grand follies like the Concorde or Apollo project are far from the only effective funding mechanism, and most of the arguments for grand follies come from those with highly vested interests in them or conflicts of interest - the US government and affiliated academics are certainly happy to make 'the Tang argument' but I don't see why one would trust them.
I didn't say it was the only effective funding mechanism. I didn't say it was the best. Please respond to the argument I actually made.
You haven't made an argument that indirect funding is the best way to go and you've made baseless claims. There's nothing to respond to: the burden of proof is on anyone who claims that bizarrely indirect mechanisms through flawed actors with considerable incentive to overstate efficacy and do said indirect mechanism (suppose funding the Apollo Project was an almost complete waste of money compared to the normal grant process; would NASA ever under any circumstances admit this?) is the best or even a good way to go compared to directly incentivizing the goal through contests or grants.
On this point we are in agreement. I'm not making any assertions about what the absolute best way is to fund research.

Please be more specific. All humans are flawed. Were you perhaps under the impression that research grant applications get approved or denied by a gleaming crystalline logic-engine handed down to us by the Precursors?

Here is the 'bizarrely indirect' mechanism by which I am claiming industrial engineering motivates basic research. First, somebody approaches some engineers with a set of requirements that, at a glance, to someone familiar with the current state of the art, seems impossible or at least unreasonably difficult. Money is piled up, made available to the engineers conditional on them solving the problem, until they grudgingly admit that it might be possible after all. The problem is broken down into smaller pieces: for example, to put a man on the moon, we need some machinery to keep him alive, and a big rocket to get him and the machinery back to Earth, and an even bigger rocket to send the man and the machinery and the return rocket out there in the first place. The Tsiolkovsky rocket equation puts some heavy constraints on the design in terms of mass ratios, so minimizing the mass of the life-support machinery is important. To minimize life-support mass while fulfilling the original requirement of actually keeping the man alive, the engineers need to understand what exactly the man might otherwise die of. No previous studies on the subject have been done, so they take a batch of laboratory-grade hamsters, pay someone to expose the hamsters to cosmic radiation in a systematic and controlled way, and carefully observe how sick or dead the hamsters become as a result. Basic research, in other words, but focused on a specific goal.

They seem to be capable of acknowledging errors, yes. Are you? http://www.newscientist.com/article/dn11326
That's like asking "If homeopathy worked and all the doctors were wrong, would they admit it?" You can't just flip a bit in the world setting Homeopathy_Works to TRUE and keep everything else the same. If homeopathy worked and yet doctors still didn't accept it, that would imply that doctors are very different than they are now, and that difference would manifest itself in lots of other ways than just doctors' opinion on homeopathy. If funding the Apollo Project was a complete waste of money compared to the normal grant process, the world would be a different place, because that would require levels of incompetency on NASA's part so great that it would get noticed. Or for another example: if psi was real, would James Randi believe it?
No; it's like asking "If homeopathy didn't work and all the homeopaths were wrong, would they admit it?" You can find plenty of critics of Big Science and/or government spending on prestige projects, just like you can find plenty of critics of homeopathy. If homeopathy was a complete waste of money compared to normal medicine implying 'great' levels of incompetency on homeopaths, how would the world look different than it does?
Those people generally claim that Apollo was a waste of money period, not that Apollo was a waste of money compared to going to the moon via the normal grant process.
You can look at cases like chiropractors. For a long time there was a general belief that chiropractors didn't provide any benefit to patients, because the theory on which chiropractors base their practice is in substantial conflict with the theories used by Western medicine. Then suddenly in 2008 Cochrane comes out with the claim that chiropractors actually do provide health benefits for patients with back pain comparable to conventional treatment for back pain. A lot of the opposition to homeopathy is based on the fact that the theory base of homeopathy is in conflict with standard Western knowledge about how things are supposed to work. People often fail to notice things for bad reasons.
There are very good reasons why finding that one set of studies shows an unusual result is not taken as proof by either doctors or scientists. (It is also routine for pseudoscientists to latch onto that one or few studies when they happen.) In other words, chiropractic is not such a case. I hope you're not suggesting that the theories used by Western medicine are likely to be wrong here.
Cochrane meta-studies are the gold standard; in general they do get taken as proof. The main point is that you don't need to have a valid theory to be able to produce empirical results. I also don't believe that the issues surrounding back pain are very well understood by today's Western medicine.
As a matter of simple Bayesianism, P(result is correct|result is unusual) depends on the frequency at which conventional wisdom is wrong, compared to the frequency at which other things (errors and statistical anomalies) exist that produce unusual results. The probability that the result of a study (or meta-study) is correct given that it produces an unusual result is not equivalent to the overall probability that studies from that source are correct, so "Cochrane meta-studies are the gold standard" is not the controlling factor. (Imagine that 0.2% of their studies are erroneous, but conventional wisdom is wrong only 0.1% of the time. Then the probability that a study is right given that it produces a result contrary to conventional wisdom is only 1/3, even though the probability that studies in general are right is 99.8%.) That's why we have maxims like "extraordinary claims require extraordinary evidence".
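The toy numbers in that parenthetical can be checked in a few lines (a sketch only; the 0.2%/0.1% figures are the commenter's hypotheticals, not real error rates):

```python
# Hypothetical base rates from the comment above.
p_wisdom_wrong = 0.001  # conventional wisdom is wrong 0.1% of the time
p_study_error = 0.002   # 0.2% of meta-studies are erroneous

# A result contrary to conventional wisdom arises in (roughly) two ways:
# the wisdom really is wrong, or the study itself erred.
p_contrary = p_wisdom_wrong + p_study_error

# P(study correct | result contradicts conventional wisdom)
p_correct_given_contrary = p_wisdom_wrong / p_contrary

print(round(p_correct_given_contrary, 3))  # 0.333, i.e. 1/3
```

So even a source that is right 99.8% of the time overall is only right a third of the time conditional on contradicting a more reliable prior.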
FYI it isn't even clear the review he mentions says what he thinks it says, not to mention the reviewers noted most of the studies had high risk of bias. "Other therapies" as controls in the studies doesn't necessarily mean therapies that are considered to be effective.
The evidence for chiropractic intervention for lower back pain is good enough that RationalWiki, which is full of people who don't like chiropractic, writes: "There is evidence that chiropractic can help alleviate symptoms of low back pain." RationalWiki then adds that the cost and risks still suggest that it's good to stay away from chiropractors. Conventional wisdom among people who care about evidence for medical treatment is these days that chiropractic interventions can alleviate symptoms of low back pain. That makes it a good test to identify people who pretend to care about evidence-based medicine but who actually care about medicine being motivated by orthodox theory instead of empirical evidence.
Of course they'll write that. After all, there is evidence. You were implying that there's good evidence. In other words, the evidence isn't all that good. This is a no true Scotsman fallacy. You're asserting that anyone who seems to be part of conventional wisdom but doesn't agree doesn't count because he doesn't care about evidence.
No. Saying that costs and side effects aren't worth something is very different from saying it doesn't work and produces no effect. Conventional treatment is often cheaper than chiropractic; dismissing it on those grounds is very different from dismissing it on the grounds that it produces no effect. Given that they don't like it, they need to make some argument against it ;) Not being able to argue that it doesn't work makes them go for risks and cost-effectiveness. Cochrane meta-studies have a reputation that's good enough that even venues like RationalWiki accept them when it comes to conclusions they don't like. There is no meta-study published after the Cochrane results that argues the Cochrane analysis gets things wrong. Conventional evidence-based medicine then suggests using the Cochrane results as the best source of evidence. And it's not only RationalWiki: any good evidence-based source that has a writeup about chiropractic will these days tell you that the evidence suggests it works for back pain, for a value of 'works' that means it works as well as other conventional treatments for back pain.
No, they're not very different at all. In fact they are directly related. Saying that costs and side effects are too great means that costs and side effects are too great for the benefit you get. If there is some probability that the study is bad and there is no benefit, that gets factored into this comparison; the greater the probability that the study is bad, the more the costs and side effects tip the balance against getting the treatment. You didn't say that everyone accepts it. You said that everyone who cares about evidence accepts it. This is equivalent to "the people who don't accept it don't count because their opinions are not really based on evidence". Likewise, now you're claiming "any good evidence-based source" will say that it works. Again, this is a No True Scotsman fallacy; you're saying that anyone who disagrees can't really be an evidence-based source.
It's only a No True Scotsman if you can point to an actual citizen of Scotland who doesn't meet the 'true Scotsman' standard. You are conflating two claims here. One is that chiropractic is more expensive than conventional treatments for lower back pain, and the other is that chiropractic is less effective than conventional treatments for lower back pain. What support do you have for the latter claim?
I covered that:
If there was some non-negligible probability that the study was bad, RationalWiki would, given their dislike for chiropractics, have seized upon that and discussed it explicitly, would they not?
They describe the Cochrane study as "weak evidence" that chiropractic is as effective as other therapy. This implicitly includes some non-negligible probability that the benefit is less than the study seems to say it is.
"works pretty well" is not a controversial claim, but "motivates theoretical progress" is more iffy. Offhand, I would say that it motivates incremental progress and applied aspects. I don't think it motivates attempts at breakthroughs and basic science.
'Breakthroughs and basic science' seem to be running in to diminishing returns lately. As a policy matter, I think we (human civilization) should focus more on applying what we already know about the basics, to do what we're already doing more efficiently.
It really depends on your view of academics. If you think that if you hand them a pile of money they just invest it in playing status games with each other, then giving them a clear measurable outcome that provides feedback, around which they have to structure their research, could be helpful.

Is Sunk Cost Fallacy a Fallacy?

Yes, it is. Roughly speaking it is when you reason that you should persist in following a choice of actions that doesn't give the best expected payoff because you (mistakenly) treat already spent resources as if they are a future cost of abandoning the path. If your essay is about "Is the sunk cost fallacy a problem in humans?" then the answer is not so trivial.
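That definition can be made concrete with a minimal sketch using invented numbers (the $50 ticket and the payoffs below are purely illustrative, in the spirit of Thaler's blizzard example discussed elsewhere in this thread):

```python
# A ticket is already paid for and unrecoverable whether you go or stay.
ticket = 50          # sunk cost (hypothetical figure)
value_of_game = 20   # enjoyment of attending in a blizzard
cost_of_trip = 35    # misery of travelling through the blizzard

# Correct comparison: only future payoffs differ between the options.
go = value_of_game - cost_of_trip  # -15
stay = 0

# The fallacy: treating the already-spent ticket as a future cost
# incurred only by the "abandon" branch, which flips the decision.
stay_fallacious = -ticket  # -50, so "go" wrongly looks better

best = "go" if go > stay else "stay"
print(best)  # stay: the sunk ticket never enters the correct comparison
```

The point is purely definitional: the $50 cancels out of any comparison of futures, so a decision that changes because of it is the fallacy; whether humans actually make that mistake is the empirical question the essay addresses.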

It is not clever or deep to title things as though you are overturning a basic principle when you are not. As far as I am concerned a (connotatively) fals...


If your essay is about "Is the sunk cost fallacy a problem in humans?" then the answer is not so trivial.

And if it isn't, as I conclude (after an introduction discussing the difference between being valid in a simplified artificial model and the real world!), then it's perfectly legitimate to ask whether accusations of sunk cost fallacy - which are endemic and received wisdom - are themselves fallacious. Sheesh. I feel as if I were discussing someone's credibility and someone said 'but that's an ad hominem!'. Yes. Yes, it is.

(Notice your Wikipedia link is full of hypotheticals and description, and not real world evidence.)

It is not clever or deep to title things as though you are overturning a basic principle when you are not.

People do not discuss sunk cost because it is a theorem in some mathematical model or a theoretical way possible agents might fail to maximize utility; they discuss it because they think it is real and serious. If I conclude that it isn't serious, then in what sense am I not trying to overturn a basic principle?

Finally, your criticism of the title or what overreaching you perceive in it aside, did you have any actual criticism like missing refs or anything?

But none of this changes the fact that the title is still misleading. Even if accusations of sunk cost fallacy are themselves often fallacious, this doesn't change the fact that you are arguing that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is actually valid. Claiming that it is not serious may indeed be overturning a basic principle, but it is not the basic principle the title claims you may be overturning. Sensationalize if you like, but there's no need to be unclear.
I don't know how you got that from the essay. To quote, with added emphasis:
I believe Sniffnoy, like myself, gave the author the benefit of the doubt and assumed that he was not actually trying to argue against a fundamental principle of logic and decision theory but rather claiming that the principle applies to humans far less than often assumed. If this interpretation is not valid then it would suggest that the body of the post is outright false (and logically incoherent) rather than merely non-sequitur with respect to the title and implied conclusion.
Sniffnoy claims that gwern has argued "that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is valid." Actually, what gwern has argued is that while the sunk cost fallacy is often used as an heuristic there is little evidence that it is sound to do so in real world situations. This also seems to be what you've said, but it is not what Sniffnoy has said. Hence my confusion. On a side note, I don't really understand your qualms with the title, but that's less important to me.
The qualms are similar in nature to if I encountered an article: Is 7+6 = 16 not an arithmetic error? followed by an article explaining that it doesn't matter because humans only have 10 fingers, it's not like anyone counts on their toes and besides, sometimes it's healthier to believe the answer is 16 anyway because you were probably going to make a mistake later in the calculation and you need to cancel it out.
Precisely. The Wikipedia article set out to explain what the Sunk Cost Fallacy is, and did it. It did not set out to answer any of the dozens of questions which would make sense as titles to your post (such as "Is the sunk cost fallacy a problem in humans?"), and so real-world 'evidence' wouldn't make much sense. Just like filling up the article on No True Scotsman with evidence about whether true Scotsmen actually do like haggis would be rather missing the point! (The hypothetical is built right into the name of the informal fallacy!) And with a slight tweak that is another thing that you could make your post about that wouldn't necessitate dismissing it out of hand. Please consider renaming along these lines:

  • Are most accusations of the Sunk Cost Fallacy fallacious?
  • Fallacious thinking about Sunk Costs
  • Sunk Costs - not a big deal
  • Accusations of Sunk Cost Fallacy Often Fallacious?
  • Fallacious thinking about Sunk Costs - a problem in the real world?

Without implicitly accepting the connotations here by responding - no, your article seems to be quite thorough with making references. In particular, all the dot points in the summary seem to be supported by at least one academic source.

Wait a second:

Arkes & Ayton cite 2 studies finding that committing sunk cost bias increases with age - as in, children do not commit it.

Information is worth most to those who have the least: as we previously saw, the young commit sunk cost more than the old

These are in direct contradiction with each other. What gives?

They are in contradiction, but the latter claim is supported by the large second paragraph in the children section (the section that 'previously saw' was linking to), where I quote the criticism of the 2 studies and then list 5 studies which find either that children do commit it on questions or that avoidance increases over lifetimes, which to me seem to override the 2 studies.
Ah. Can I suggest you re-write that section to make it clearer? I admit I wasn't reading closely, but I assumed that a two-line statement before a quote from a paper was going to be the conclusion of the section. Also, given that the evidence there is far from unidirectional, I'd rather you didn't cite it as the first piece of supporting evidence for the "gaining information" hypothesis. I expect an argument to start with its strongest pieces of evidence first. P.S. I'm not sure I agree with your argument, but thanks for putting this together!
I already modified it; hopefully the new version is clearer. I was going in what I thought was logical implication order of the learning hypothesis.

when one engages in spring-cleaning, one may wind up throwing or giving away a great many things which one has owned for months or years but had not disposed of before; is this an instance of sunk cost where you over-valued them simply because you had held onto them for X months, or is this an instance of you simply never before devoting a few seconds to pondering whether you genuinely liked that checkered scarf?

If (during spring cleaning) you balk at throwing away something simply because it's sat so long in your basement, you are tempted to justify ho...

Balking at getting rid of things you own may sometimes be more about the endowment effect than the sunk cost fallacy.

A few brief comments:

The study in footnote 6 seems to show the opposite of what you say about it. The study found that diffusion of responsibility reduced the effect of sunk costs while you say "responsibility is diffused, which encourages sunk cost."

In the "subtleties" section, it's unclear what is meant by saying that "trying to still prove themselves right" is "an understandable and rational choice." After someone has made a decision and it is either right or wrong, it does not seem rational to try to prove it r...

Good point. My interpretation of what you're saying is that the error is actually failure to re-plan at all, not bad math while re-planning.

I find that a very helpful formulation. I could not tell where Gwern was drawing distinctions.

About the “Learning” section:

I think I understand the basic argument here: sometimes an escalation of commitment can be rational as a way to learn more from a project by continuing it for longer. But it seems like this only applies to some cases of sunk cost thinking and not others. Take Thaler's example: I don't see why a desire to learn would motivate someone to go to a football game in a blizzard (or, more specifically, how you'd learn more if you had paid for your ticket than if you hadn't).

And in some cases it seems like an escalation of commitment ...

You'd learn more about what it's like to go in a blizzard - maybe it's not so bad. (Personally, I've gone to football games in non-blizzards and learned that it is bad.) If you knew in this specific instance, drawn from all the incidents in your life, that you wouldn't learn anything, then you've already learned what you can and sunk cost oughtn't enter into it.

It's hard to conclude very much from answers to hypothetical questions. Any result is consistent with an indefinite number of theories, as we all know. The results fit very neatly with a learning theory, and much more uncomfortably with things like self-justification.

I had serious trouble understanding the paragraph "COUNTERING HYPERBOLIC DISCOUNTING?" beyond "sunk costs probably counter other biases".

Also, I'd like to point out that, if sunk costs are indeed a significant problem in large organizations, they are indeed a significant problem; large organizations are (unfortunately?) rather important to modern life.

What's not clear about it? That's the idea. Only if there are better equilibriums which can be moved to by attacking sunk cost - otherwise they are simply the price of doing business. (I only found two studies bearing on it, neither of which were optimistic: the study finding sunk costs encouraged coordination and the bank study finding attacking sunk cost resulted in deception and falsification of internal metrics.)

Content aside, you should generally avoid the first person as well as qualifiers and you should definitely avoid both, e.g. "I think it is interesting." Where some qualifiers are appropriate, you often phrase them too informally, e.g. "perhaps it is more like," would read much better as, "It is possible that," or, "a possible explanation is." Some first person pronouns are acceptable, but they should really only be used when the only alternative is an awkward or passive sentence.

The beginning paragraph of each subsec...

I think how important these criticisms are depends on who the intended audience of the essay is -- which Gwern doesn't really make clear. If it's basically for SIAI's internal research use (as you might think, since they paid for it), tone probably hardly matters at all. The same is largely the case if the intended audience is LW users -- our preference for accessibly, informally written scholarly essays is revealed by our being LW readers. If it's meant as a more outward-facing thing, and meant to impress academics who aren't familiar with SIAI or LW and who judge writings based on their adherence to their own disciplinary norms, then sure. (Incidentally, I do think this would be a worthwhile thing to do, so I'm not disagreeing.) Perhaps Gwern or Luke would care to say who the intended beneficiaries of this article are. For myself, I prefer scholarly writing that's as full of first-person statements as the writer cares to make it. I feel like this tends to provide the clearest picture of the writer's actual thought process, and makes it easier to spot where any errors in thinking actually occurred. I rarely think the accuracy of an article would be improved if the writer went back after writing it and edited out all the first-person statements to make them sound more neutral or universal.
Well, style wasn't really what I had in mind since it's already so non-academic in style, but your points are well taken. I've fixed some of that.

I prefer the way the Beeminder cofounder explains this on this page of his blog.

I'm impressed with the thoroughness that went into this review, and with its objectivity and lack of premature commitment to an answer.

I would like to argue that it is less important to determine IF it is a fallacy than what kind it is.

One view is that this is a "deliberation" fallacy, along the lines of a failed thought experiment; e.g. 'something went wrong because conditions weren't met.' Another view is that this fallacy, which (if I am correct) relates to "resource shortages" or "debt crises", is in fact a more serious 'systems error', such as a method fallacy involving recursivity or logic gates.

To some extent at this point I am prone to take the view that... (read more)

Linked in a footnote, BTW.
Also related: Sunk Cost Fallacy by Zachary M. Davis

Well, I always thought it was obvious that "sunk cost" has one advantage going for it.

Placing a single incident of a "sunk cost" in a larger context, "sunk costs" can serve as a deterrent against abandoning projects. I wonder if the virtue of persistence isn't maligned. After all, as limited rationality machines, 1) we hardly ever can look at the full space of possible alternatives, and 2) probably underestimate the virtue of persistence. Pretty much every success story I've ever read is of someone who persisted beyond what yo... (read more)

Sure, but why do you expect people to systematically err in judging when it is time to abandon a project? Unless you have a reason for this, this is buck-passing. ('Why do people need sunk cost as a deterrent? Well, it's because they abandon projects too easily.' 'But why do they abandon projects too easily?' 'Heck, I dunno. Same way opium produces sleep, maybe: by virtue of a dormitive fallacy.')

This line of thought is why I was looking into hyperbolic discounting, which seems like a perfect candidate for causing that sort of easy-abandonment behavior. Which doesn't necessarily prove anything; we could just be seeing the winner's curse writ large. To win any auction is easy, you just need to be willing to bid more than anyone else...

Persistence beyond 'the frustration barrier' may lead to outcomes like 'I am the Japanese Pog-collecting champion of the world.' Well, OK, but don't tell me that's something I should aspire to as a model of rationality!
"Sure, but why do you expect people to systematically err in judging when it is time to abandon a project? Unless you have a reason for this, this is buck-passing."

Because we aren't psychic and can only guess expected payoffs. Why would I hypothesize that we underestimate expected payoffs for persistence rather than the reverse? Two reasons -- or assumptions, I suppose:

1. Most skills compound: the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown.
2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionate to absolute skill level (a separate compounding effect).

"Persistence beyond 'the frustration barrier' may lead to outcomes like 'I am the Japanese Pog-collecting champion of the world.'"

Yes, but the activity one persists in is a completely separate issue, so I feel you can just assume 'for activities that reasonably seem likely to yield large benefit'.

On a separate note, the sunk cost fallacy may not be a fallacy because it fails to take into account the social stigma of leaving projects incomplete versus completing them. Oh, sure, if you're extra careful, you would take that into account in your utility function. You can always define your utility function to include everything relevant, but in real-life estimations of utility, some things just don't occur to us.

I mean, consider morality. It's so easy to say that moral rules have plenty of exceptions and so arrive at a decision that breaks one or more of these rules (and not for simple reasons of internal inconsistency). But this may be bad overall for society. You might arrive at a local maximum of overall good, but a global maximum would require strict adherence to moral rules. I believe this is the common "objection" to utilitarianism and why hardly anyone (other than a LWer) professes to be utilitarian.
The first is not true at all; graphs of expertise follow what look like logarithmic curves, because it's a lot easier to master the basics than to become an expert. (Question: did Kasparov's chess skill increase faster from novice to master status, or from grandmaster to world champion?)

#2 may be true, but everyone can see that effect, so I don't see how it could possibly cause systematic underestimation and a compensating sunk cost bias.

Mentioned in the essay.

One objection, and why variants like rule utilitarianism exist and act utilitarians emphasize prudence, since we are bounded rational agents and not logically omniscient utility maximizers.
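The disagreement here - skills compounding versus logarithmic expertise curves - is easy to see in a toy sketch. Both growth models below are hypothetical illustrations with made-up parameters, not fitted to any data; the point is only that compounding implies accelerating marginal gains, while a logarithmic curve implies diminishing ones.

```python
import math

def compound_skill(t, rate=0.05):
    # Hypothetical "skills compound" model: a fixed fractional gain per period.
    return (1 + rate) ** t

def log_skill(t, scale=10.0):
    # Hypothetical diminishing-returns model: expertise grows logarithmically.
    return scale * math.log(1 + t)

# Marginal gain per period under each model:
compound_gains = [compound_skill(t + 1) - compound_skill(t) for t in range(40)]
log_gains = [log_skill(t + 1) - log_skill(t) for t in range(40)]

# Compounding accelerates (each period's gain exceeds the last);
# logarithmic growth decelerates (each period's gain shrinks).
assert all(b > a for a, b in zip(compound_gains, compound_gains[1:]))
assert all(b < a for a, b in zip(log_gains, log_gains[1:]))
```

Which model fits real skill acquisition is the empirical question at issue; the sketch only shows the two predictions diverge.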

I came up with an example of how the sunk cost fallacy could help increase the income of two competing agents.

Consider two corporations that each sank a considerable sum of money into two interchangeable competing IP-heavy products - digital cameras, for example. They need to recover that cost, which they would be unable to do if they started price-cutting each other while ignoring the sunk costs. If they both act so as not to price-cut beyond the point where the sunk costs are not recovered, they settle at a price that permits them to recover the software development costs. If t... (read more)
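The pricing story here can be made concrete with a toy model. All the numbers below (sunk cost, marginal cost, demand) are hypothetical, chosen only to contrast the two outcomes: both firms "honoring" sunk costs versus both ignoring them and competing down to marginal cost.

```python
# Toy duopoly sketch (all numbers hypothetical): two firms sell an
# interchangeable product and split the market evenly at any common price.
SUNK = 100.0      # sunk development cost per firm
MARGINAL = 10.0   # marginal cost per unit
DEMAND = 20       # total units sold, split between the two firms

def profit(price):
    """Per-firm profit when both firms charge the same price."""
    units = DEMAND / 2
    return (price - MARGINAL) * units - SUNK

# If both firms refuse to cut below the cost-recovering price, each breaks even:
assert profit(20.0) == 0.0
# If both ignore sunk costs, price competition drives price to marginal cost,
# and each firm eats its sunk cost as a pure loss:
assert profit(10.0) == -100.0
```

This is just the commitment point restated: the sunk-cost "mistake" functions here as a tacit commitment device that sustains the higher-price equilibrium.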

There are better ways of making credible commitments than having a tendency to commit sunk cost fallacy.
For ideal agents, absolutely. For things like humans... Have you looked at the models in "Do Sunk Costs Matter?", McAfee et al 2007? EDIT: I've incorporated all the relevant bits of McAfee now, and there are one or two other papers looking at sunk cost-like models where the behavior is useful or leads to better equilibria.
While that may be true, I don't see how it has any consequences.
Of course. But what works, works; you'd cripple an agent by dispelling its fallacies without providing alternatives.

Is the sunk cost fallacy a fallacy?

I ask myself about many statements: would this have the same meaning if the word "really" were inserted? As far as my imagination can project, any sentence that can have "really" inserted into it without changing the sentence's meaning is at least somewhat a wrong question, one based on an unnatural category or an argument by definition.

If a tree falls in the forest, does it make a sound? --> If a tree falls in the forest, does it really make a sound?

Is Terry Schiavo alive? --> Is Terry Schiav... (read more)

Did you really mean “that can have” rather than “that can't have”?
As far as I can tell you can do that with any sentence.
Can you really do that with any sentence?
"Really" in this context means that an answer has already been provided by someone but you object to the rationale given for this provided answer, particularly because it's too shallow. In other words, it's not a description of the problem the question asks you to solve, it's a description of the context in which the problem is to be solved. So the fact that it can be done with any sentence doesn't mean that it provides no information, just like "Like I was discussing with Joe last week, is the sunk cost fallacy a fallacy?" doesn't provide no information.
Do you really ask yourself that about many statements? Would this really have the same meaning if the word "really" were inserted? Is any sentence that can have "really" inserted into it without changing the sentence's meaning really at least somewhat a wrong question, one based on an unnatural category or an argument by definition?