Edit: I recommend reading Scott's response to this essay in addition to the essay itself.
I've been tracking the replication crisis and how it affects behavioral economics for a while. This post was a useful read for a particularly negative take. Some key quotes:
It sure does look alive... but it's a zombie—inside and out.
Why do I say this?
Two primary reasons:
- Core behavioral economics findings have been failing to replicate for several years, and *the* core finding of behavioral economics, loss aversion, is on ever more shaky ground.
- Its interventions are surprisingly weak in practice.
Because of these two things, I don't think that behavioral economics will be a respected and widely used field 10-15 years from now.
[...]
It turns out that loss aversion does exist, but only for large losses. This makes sense. We *should* be particularly wary of decisions that can wipe us out. That's not a so-called "cognitive bias". It's not irrational. In fact, it's completely sensical. If a decision can destroy you and/or your family, it's sane to be cautious.
"So when did we discover that loss aversion exists only for large losses?"
Well, actually, it looks like Kahneman and Tversky, winners of the Nobel Prize in Economics, knew about this unfortunate fact when they were developing Prospect Theory—their grand theory with loss aversion at its center. Unfortunately, the findings rebutting their view of loss aversion were carefully omitted from their papers, and other findings that went against their model were misrepresented so that they would instead support their pet theory. In short: any data that didn't fit Prospect Theory was dismissed or distorted.
I don't know what you'd call this behavior... but it's not science.
This shady behavior by the two titans of the field was brought to light in a paper published in 2018: "Acceptable Losses: The Debatable Origins of Loss Aversion".
I encourage you to read the paper. It's shocking. This line from the abstract sums things up pretty well: "...the early studies of utility functions have shown that while very large losses are overweighted, smaller losses are often not. In addition, the findings of some of these studies have been systematically misrepresented to reflect loss aversion, though they did not find it."
Even with Kaj's highlight comments, which are helpful, I don't feel educated enough in the economics area (one semester of high school Econ) to tell whether this is
1) vigorous academic debate or
2) damning evidence of academic fraud by Kahneman and Tversky.
Given how central K&T are to parts of the Sequences, and that Judgment Under Uncertainty is at the top of my book pile--can someone with some expertise in this area give their take? I would donate $20 to GiveWell for 500+ words that helped me to understand this situation.
Academic here, it's (1). Loss aversion is so popular that people think it underpins everything. Although loss aversion doesn't show up in every dataset, it does show up (https://doi.org/10.1002/jcpy.1156) - even the "second paper" shared by Kaj just says it appears sometimes. But does that mean it explains all these other findings? No! But some reviewers or authors think "isn't that just loss aversion?" and it seems authors take the easy route to publication (or just aren't well-read enough) instead of probing the psychological source of their findings more seriously. For example, loss aversion was the classic explanation for the endowment effect, but research in the last couple decades has generated results that loss aversion cannot really explain and that other theories readily explain, yet LA is sometimes still cited as the explanation the authors endorse.
I would also be interested in an explanation of how the replication crisis affects the Sequences, and I'm willing to put in $10 to GiveWell.
I note that I am confused. I am confused mostly because the claim "loss aversion exists only for large losses" seems to be completely disharmonious with my anecdotal experience, and I tend to view anecdotal experience as an often semi-reliable guide to the accuracy of social science. If the strong version of this claim is true, how would you explain the following facts?
My guess is that you could come up with ad-hoc explanations for all of these things without reference to loss aversion, but that doesn't seem very elegant to me. A proclivity for loss aversion present in 50+% of humans appears like the most natural, simple explanation.
[ETA: However, after reading the link from Kaj Sotala I'm starting to feel my mind being changed.]
How the heck do I update on this?
I don't feel like I have a graceful way to de-weight something when it turns out poorly in this fashion. I feel comfortable with unwinding an update I previously made, but in this case it amounts to just throwing out everything I have head-chunked as behavioral economics.
This feels wrong-ish, in the sense that it isn't as though all the research was a complete fiction; a more correct operation would be to adjust my priors in such a way as to capture what the research actually shows, rather than what I thought it showed.
Trouble is, this is even more work than making the initial updates, because the whole failure mode is an inability to have confidence in any existing distillation of the ideas. This means tackling the relevant studies one at a time, with only a few newer review or meta papers to help.
On the upside, it occurs to me that I integrated virtually none of the mentioned results well enough that it met the anticipated experiences standard; maybe that means I never really updated in the first place and this costs nothing to lose.
There's also a second paper linked from that article which is quite interesting (some excerpts in child comments).
Risky choice
Gain seeking (the opposite of loss aversion) in the stock market
Self-rated losses vs. gains
The endowment effect and loss aversion
Status quo bias and loss aversion
So the one that still stands is confirmation bias?
It sounds like much of loss aversion is just an intuitive use of the Kelly Criterion?
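For intuition on that connection, here is a minimal sketch (the win probability and odds below are illustrative, not from the thread): a Kelly bettor maximizes expected log growth of the bankroll, which assigns negative-infinite value to losing everything, so ruinous bets are avoided even when their expected value is positive.

```python
import math

def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a bet that
    pays b:1 with win probability p (0.0 means: don't bet at all)."""
    return max(0.0, p - (1 - p) / b)

def expected_log_growth(f, p, b):
    """Expected log growth per bet when staking fraction f of bankroll."""
    if f >= 1.0:
        return float("-inf")  # losing the whole bankroll: log(0)
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

# A favorable coin flip (60% to double the stake) still only merits
# staking 20% of the bankroll, and staking everything is ruinous:
f_star = kelly_fraction(0.6, 1.0)          # 0.2
ruin = expected_log_growth(1.0, 0.6, 1.0)  # -inf
```

So an agent simply maximizing long-run growth is extremely cautious about bets that can wipe it out, which looks a lot like "loss aversion only for large losses."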
I'm glad for this article because it sparked the conversation about the relevance of behavioral economics. I also agree with Scott's criticism of it (which unfortunately isn't part of the review). But together they made for a great update on the state of behavioral economics.
I checked whether anything new had appeared in the literature since these articles were published, and found this paper by three of the authors of the 2020 article Scott discussed. They conclude that "the evidence of loss aversion that we report in this paper and in Mrkva et al. (2020) reject the idea that loss aversion is a 'fallacy'", as the 2018 paper Hreha cited called it. The experimental design seems very thoughtful and careful, but I found the paper hard to follow and would have to invest much more time to really understand and judge it. Perhaps someone more in the know can do that.
I gave this post +4 because I think the discussion (including the responses) is important, even though I think the article itself was quite lacking. Not sure how to reconcile that. But I sure wouldn't put it in a book or best-of sequence.
It's hard to believe that scientists would deliberately manipulate their findings. The risk of getting caught and discredited is just too high – oh wait.
Link to pre-publication (but presumably near-identical) version of the 2018 paper: https://ie.technion.ac.il/~yeldad/Y2018.pdf.
Scott Alexander has written an in-depth article about Hreha's article:
See also Alex Imas's and Chris Blattman's criticisms of Hreha (on Twitter).
I'm not sure it's any more dead than other fields of social science. Which, maybe they're all actually zombies, but that sounds excessively strong. For example, take the effect sizes of nudges. I believe that "opt out" policies for organ donation have absolutely massive effects (see https://sparq.stanford.edu/solutions/opt-out-policies-increase-organ-donation ). So is the problem that the field is dead, or that it's just sick with the same diseases as psychology, and better work needs to be done to separate wheat from chaff? Discarding hypotheses that turn out not to hold up, doing more replications, etc. For example, I believe hindsight bias has held up as being real, having significant effects, and being difficult to overcome.
Does this suggest they don't hold to loss aversion in any sense? I'm taking the claim of selective analysis and data presentation at face value here. If true, that seems like it would mean a very significant loss to their current and future status, as well as to their positions and potential future positions.
I'd like to recommend a book called Radical Uncertainty; it does a great job of criticizing behavioral economics (among other things) and argues that we should take many of its results with a pinch of salt. I think this community in particular could benefit greatly from it.
I would be interested to read a review of it on LessWrong. (I have not read the book and don't own it either.) The only review I found that was not just a summary of the book described the authors' recommendations as "Their alternative to probability models seems to be, roughly, experienced judgment informed by credible and consistent 'narratives' in a collaborative process." That sounds to me like dressing up the non-apple of "not using probability models" as a banana.
(surprised) No way!! I bought that book three months ago, at no one's recommendation. I haven't read it yet, but it's good to see that I made a good investment based on my own judgment.
Here is a little detail I learned in a behavioral finance class: you don't need behavioral finance/econ to discover loss aversion. All you need is a rational utility-maximizing agent in a standard neoclassical framework with a concave utility function (such as log, which is commonly assumed to model diminishing marginal utility). From this you can see that the rational agent has more to lose from a one-unit negative change than it gains from a one-unit positive change, i.e., loss aversion.
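The point above can be checked numerically: with log utility, a one-unit loss reduces utility by more than a one-unit gain increases it. A minimal sketch (the starting wealth of 10 is an arbitrary choice for illustration):

```python
import math

def log_utility(wealth):
    """Concave utility: diminishing marginal utility of wealth."""
    return math.log(wealth)

wealth = 10.0
# Utility gained from a one-unit gain vs. utility lost from a one-unit loss
gain = log_utility(wealth + 1) - log_utility(wealth)  # ln(11/10) ~ 0.0953
loss = log_utility(wealth) - log_utility(wealth - 1)  # ln(10/9)  ~ 0.1054
# loss > gain: the same one-unit change hurts more than it helps
```

The asymmetry grows as the stake gets larger relative to wealth, which is consistent with the thread's point that aversion is strongest for losses big enough to threaten ruin.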
More relevant to AI than you think.