Gell-Mann amnesia refers to "the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible". Here I use "Gell-Mann effect" to mean just the first part: experts finding popular press articles on their topics to be error-ridden and fundamentally wrong. I'll also only consider non-political topics and the higher tier of the popular press: think the New York Times and the Wall Street Journal, not TikTok influencers.
I have not experienced the Gell-Mann effect. Articles within my expertise in the top popular press are accurate. Am I bizarrely fortunate? Are my areas of expertise strangely easy to understand? Let's see.
Examples:
1. My PhD was in geometry, so there isn't a whole lot of popular writing on the topic, but surprisingly the New York Times published a VR explainer of hyperbolic geometry. It's great! The caveat is that it came from an academic group's work and was not exactly written by the NYT itself.
2. I now work in biomedical research alongside many circadian biologists, and some years ago the NYT ran a big feature piece on circadian rhythms. All my colleagues raved about it, and I don't recall anyone pointing out any errors. (I'm no longer certain which article this was, since they've written on the topic multiple times and I don't have a subscription to read them all.)
3. During a talk, the speaker complained about the popular press reporting on his findings regarding the regulation of circadian rhythms by gene variants that originated in Neanderthals. I didn't understand what his complaint was, so I wrote to him afterwards for clarification. His response was: "Essentially, many headlines said things like "Thank Neanderthals if you are an early riser!" While we found that some Neanderthal variants contribute to this phenotype and that they likely helped modern humans adapt to higher latitudes, the amount of overall variation in chronotype (a very genetically complex trait) today that they explain is relatively small. I agree it is fairly subtle point and eventually have come to peace with it!" Now, his own talk title was 'Are neanderthals keeping me up at night?', which is just as click-baity and oversimplified as his example popular press headline, despite being written for an academic audience. Moreover, his title suggests that Neanderthal-derived variants are responsible for staying up late, when in fact his work showed the opposite direction ("the strongest introgressed effects on chronotype increase morningness"). So the popular press articles were more accurate than his own title. Overall, I don't consider the popular press headlines to be inaccurate.
But then why is the Gell-Mann effect so popular?
Gell-Mann amnesia is pretty widely cited in some circles, including LessWrong-adjacent ones. I can think of a few reasons why my personal experience contradicts the assumption it's built on.
I'm just lucky. My fields of expertise don't often get written about by the popular press, and when they do come up, the writers might rely heavily on experts, leaving little room for journalists to insert errors. And they're non-political, so there's little room for overt bias.
People love to show off their knowledge. One-upping supposedly trustworthy journalists feels great, and you bet we'll brag about it if we can, or claim to do it even if we can't. When journalists make even small mistakes, we'll pounce on them and claim that this shows they fundamentally misunderstand the entire field. So when they get details of fictional ponies wrong, we triumphantly announce our superiority and declare that the lamestream media is a bunch of idiots (until we turn the page to read about something else, apparently).
Maybe I'm giving the popular press too much of an out by placing the blame on the interviewed experts when the experts originated the mistake or exaggeration. Journalists ought to fact-check and improve upon the reliability of their sources, not simply pass the buck.
I suspect all three are true to some extent, but the extent matters.
What does the research say?
One 2004 study compared scientific articles to their popular press coverage, concluding that: "Our data suggest that the majority of newspaper articles accurately convey the results of and reflect the claims made in scientific journal articles. Our study also highlights an overemphasis on benefits and under-representation of risks in both scientific and newspaper articles."
A 2011 study used graduate students to rate claims from both press releases (produced by the researchers and their PR departments) and popular press articles (often based on those press releases) in cancer genetics. They find: "Raters judged claims within the press release as being more representative of the material within the original science journal article [than claims just made in the popular press]." I find this study design unintuitive due to the way it categorizes claims, so I'm not certain whether it can be interpreted the way it's presented. They don't seem to report the number of claims in each category, for example, so it's unclear whether this is a large or small problem.
A 2012 study compared press releases and popular press articles on randomized controlled trials. They find: "News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as those identified in the press release and article abstract conclusion."
A 2015 study had both ratings of basic objective facts (like researcher names and institutions) and scientist ratings of the subjective accuracy of popular press articles. It's hard to summarize, but the subjective inaccuracy prevalence was about 30-35% for most categories of inaccuracies.
Overall, I'm not too impressed by the research quality here, and I don't think these studies directly address my hypothesis above that people are overly critical of minor details and then interpret them as the reporting missing the entire point (as in my example #3). They do make it clear that a reasonable amount of hype originates from press releases rather than from the journalists per se. I should note that I have not exhausted the literature on this at all, and I specifically avoided research from after 2020, since there was a massive influx of hand-wringing about misinformation after COVID. No doubt I'm inviting the gods of irony to reveal that I've misinterpreted some of these studies.
It could be interesting to see whether LLMs can be used as 'objective' ('consistent' would be more accurate) raters to compare, en masse, popular press articles with their original scientific publications for accuracy.
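To make that concrete, here is a minimal sketch of what such a pipeline might look like. Everything here is hypothetical: the `llm_rate` function is a deterministic stand-in for a real LLM call (it just checks how many key terms from the paper survive into the article), and the rubric, pairs, and scoring scale are invented for illustration.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical rubric you would hand to a real LLM rater.
RUBRIC = (
    "Score 1-5 how faithfully the press article conveys the paper's "
    "main claim, effect size, and caveats. Return only the number."
)

@dataclass
class Pair:
    paper_abstract: str
    press_article: str

def llm_rate(paper: str, article: str) -> int:
    """Placeholder for an LLM call. As a deterministic stand-in, count how
    many long key terms from the paper survive into the article, and map
    the overlap fraction onto the 1-5 rubric scale."""
    key_terms = {w for w in paper.lower().split() if len(w) > 6}
    if not key_terms:
        return 3  # no signal either way
    overlap = sum(1 for w in key_terms if w in article.lower())
    return 1 + min(4, round(4 * overlap / len(key_terms)))

def accuracy_survey(pairs: list[Pair]) -> float:
    """Rate every paper/article pair and return the mean score."""
    return mean(llm_rate(p.paper_abstract, p.press_article) for p in pairs)

# Toy examples (invented): one faithful article, one hyped one.
pairs = [
    Pair("Neanderthal-derived variants modestly increase morningness.",
         "Thank Neanderthals if you are an early riser! "
         "Variants modestly increase morningness."),
    Pair("Circadian disruption correlates with metabolic outcomes.",
         "Scientists say staying up late will ruin your life."),
]
print(accuracy_survey(pairs))
```

The interesting methodological questions are the ones a stub can't answer: whether the rater is consistent across rephrasings, and whether its rubric penalizes the minor-detail errors that (per my hypothesis above) humans over-weight.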
Conclusion
I think the popular press is not as bad as often claimed when it comes to the factuality of non-political topics, but that still leaves a lot of room for significant errors, and I'm not confident in any estimate of how serious the problem is. Readers should know that errors often originate from the source experts rather than the journalists. This is not to let journalism off the hook, but we should be aware that problems are often already present in the source.
Disclaimer
A close personal relation is a journalist and I am biased in favor of journalism due to that.