I suggest looking at, not articles about scientific topics, but articles that are about something else but where the author invokes scientific topics.
Good suggestion, though I don't know how to systematically assess that. I can't even think of what topics would be most likely to have this come up in.
Anecdotally, as someone who works on non-AGI-targeting AI research, I find pop-sci articles on AI research to be horribly misrepresentative.
A paper that introduces a new algorithm that guides drones around a simulator by creating sub-tasks might be presented as "AI researchers create a new kind of digital brain - and it has its own goals". That's obviously a click-bait headline, but the article itself usually does little to clean things up.
However, I would imagine that AI is currently among the worst fields for this kind of thing due to manufactured hype, culture wars, and the age-old anthropomorphization of AI algorithms.
Thanks for this example. I definitely see ridiculous headlines like that from less reputable places. Do you also have examples from the type of news media I'm talking about like WSJ? For example, searching "Washington Post AI robotics" I get headlines:
(I realize now that "robotics" wasn't really in your original statement, I guess I extrapolated that from your drone example.)
I wonder if Gell-Mann amnesia might be more historically contingent than people assume.
When Crichton coined the term in 2002, (scientific) information was a lot less accessible, because the internet was more niche, and social media did not exist. Traditional media (including print) was also a much larger field than today, in part because you couldn't just check twitter to learn about current events. People had no independent means of fact-checking a claim made in their dailies. Journalists, in turn, had access to much sparser resources on any given topic and much less oversight for accuracy.
I suspect that the press was generally less accurate in Crichton's time than today. The New York Times and the Wall Street Journal survived because they were top-tier newspapers, more accurate than the rest of the press. They could rely on this reputation to survive the broader press crisis. But the many mid-size newspapers were less accurate and simply didn't survive.
I don't read newspapers, so I don't have much data. Perhaps I notice the bad things more, because I do not have the good things to balance it with? (Kinda like if neither you nor your friends have a dog, so the typical moment when you notice a dog is when some stranger's dog threatens you. So your model of a dog is that dogs attack strangers, and you miss all the nice moments when they play or relax, which is what their owners see.)
I was interviewed by a journalist twice in my life; both times the journalist wrote totally made-up things unrelated to what I said, and I suspect that the story had already been written long before they talked to me; they just wanted a name to attach to their fictional person.
Once I participated in a small peaceful protest (imagine a group of fewer than ten people standing on a street with banners for 30 minutes, then going home), and a TV station commented on it while showing videos of looting (which had happened a few months earlier, on the opposite side of the country, in a situation related to neither our cause nor our organization). When we called them by phone to complain, they just laughed at us, said that tiny letters marked the videos as "illustrations" so it was legally okay, and that if we had any complaints we should address them to their well-paid legal department. (We didn't do anything about it.)
A few years ago (I don't remember when exactly) there were "scientific" articles approximately every month about how the theory of relativity had been experimentally debunked; people shared them on Hacker News and social networks. And always, a few weeks later, there was a blog post somewhere explaining how it was just a mistake in the calculation, because someone had forgotten to use the proper relativistic equation somewhere. Of course, these blog posts were not shared as much. -- Later, I guess, this topic went out of fashion. (Perhaps because the newspapers switched to stronger clickbait?)
My very first blog post was a response to a popular journalist, basically just a long list of factual mistakes he made in a popular article. (And I mean factual mistakes in a very literal sense, like how many countries were members of a specific organization, what year the organization started, etc. That is, not something that could be explained by different people having a different political opinion.)
Uhm, Gamergate. A situation where a bunch of nerds complain about the way journalists report on their hobby, and the journalists decide to go nuclear on them: holding ranks, posting absurd fabrications, refusing to even mention the other side's talking points, then doubling down repeatedly until the topic gets debated at the UN.
Which reminds me of how journalists treated James Damore. The "original memo" that practically all newspapers referred to was actually heavily redacted (all links to scientific papers removed). They even changed the font to random sizes to make it appear unhinged.
...all these things considered, why should I even read newspapers?
The clear reason to pay for news is that you can buy higher quality news than what your social media shows you. But I did definitely carve out politically sensitive areas in my discussion for a reason.
> They even changed font to random sizes to have it appear unhinged
This caught my eye, but appears to be false: https://web.archive.org/web/20170805210606/https://gizmodo.com/exclusive-heres-the-full-10-page-anti-diversity-screed-1797564320 It has some weird formatting, presumably from copying it in from a Google Doc, which is presumably also why it lost the figures and URLs. The formatting doesn't look unhinged at all, just a bit awkward, though summarizing the changes as removing "several" hyperlinks is terrible (it looks more like a couple dozen links in the original to me). Then again, I would never have thought of Gizmodo as high-tier journalism in the first place.
Weird. I think I remember seeing a different version. Not sure how that happened...
...maybe some of my ad-blocking programs interacted with the website's CSS in a bad way?
Uhm, if that's the case, I apologize for spreading misinformation.
Off topic, but Jesus, in the comment section: "people [...] go to better schools [...] to increase their IQs [...] Not like anyone is born with a 170"
I used the oldest version available in the Wayback machine so presumably it was how it was published, but it does include an "update" note as if it's undergone at least one revision. It's not impossible that the wayback machine is missing the earliest version. I still think that "copy and paste into a janky content management system interface" is probably the cause of whatever bad formatting it had rather than outright malice, but it may have been worse then than we see now (they state that formatting was changed though it's not clear when).
To actually read, probably not, but to buy or pay for subscriptions to them might be worth doing: it's probably the best way to sustainably ensure the existence of journalism as an industry, which you might be incentivized to do if you think it'll hurt society at large more than you, so you're relatively better off, much like how you would pump raw sewage into the city's water supply after securing your own independent sources of freshwater, or iocaine powder into the air vents after building up immunity.
Explanation 1 seems largely correct to me. Circadian biology as a field is not conducive to the Gell-Mann effect; there's pop psychology and pop philosophy, but I have yet to encounter pop circadian biology. The Venn diagram of "can write coherently about X" and "can't write accurately about X" has much more overlap in more popularly saturated fields.
This is definitely a leading hypothesis but I think it's also the case that going to the experts directly will lead you more astray in psychology than in some other fields because the quality of the work there has been lower. It makes sense that journalism is low quality if the experts are also low quality, though of course we would hope that journalists would be able to improve upon what they're given (by e.g. consulting multiple experts). I guess one of my points is: if you don't believe the traditional press media, who do you believe? I'm not convinced there's an answer that improves upon the media (Wikipedia?). In fact, a fair number of the articles you might be thinking of could be authored by psychologists: at least my local paper often includes articles written by local researchers, physicians, etc. on the topics in their field, under the Opinion heading.
Not sure what articles would count as pop philosophy, though.
For what it's worth, circadian biology is quite open to popification. Eliezer Yudkowsky has written about finding the right timing to take melatonin for his sleep timing disorder. And practically all of us struggle with jet lag, daylight savings time changes (this part even being quite politicized!), or work schedules.
Gell-Mann amnesia refers to "the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible". Here I use "Gell-Mann effect" to mean just the first part: experts finding popular press articles on their topics to be error-ridden and fundamentally wrong. I'll also only consider non-political topics and the higher tier of popular press: think New York Times and Wall Street Journal, not TikTok influencers.
I have not experienced the Gell-Mann effect. Articles within my expertise in the top popular press are accurate. Am I bizarrely fortunate? Are my areas of expertise strangely easy to understand? Let's see.
Examples:
Now, his own talk title was 'Are neanderthals keeping me up at night?', which is just as click-baity and oversimplified as his example popular press headline, despite being written for an academic audience. Moreover, his title suggests that neanderthal-derived variants are responsible for staying up late, when in fact his work showed the opposite direction ("the strongest introgressed effects on chronotype increase morningness"). So the popular press articles were more accurate than his own title. Overall, I don't consider the popular press headlines to be inaccurate.
But then why would the Gell-Mann effect be so popular?
Gell-Mann amnesia is pretty widely cited in some circles, including LessWrong-adjacent ones. I can think of a couple of reasons why my personal experience contradicts the assumption it's built on.
I suspect all are true to some extent, but the extent matters.
What does the research say?
One 2004 study compared scientific articles to their popular press coverage, concluding that: "Our data suggest that the majority of newspaper articles accurately convey the results of and reflect the claims made in scientific journal articles. Our study also highlights an overemphasis on benefits and under-representation of risks in both scientific and newspaper articles."
A 2011 study used graduate students to rate claims from both press releases (produced by the researchers and their PR departments) and popular press articles (often based on those press releases) in cancer genetics. They find: "Raters judged claims within the press release as being more representative of the material within the original science journal article [than claims just made in the popular press]." I find this study design unintuitive due to the way it categorizes claims, so I'm not certain whether it can be interpreted the way it's presented. They don't seem to report the number of claims in each category, for example, so it's unclear whether this is a large or small problem.
A 2012 study compared press releases and popular press articles on randomized controlled trials. They find: "News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as those identified in the press release and article abstract conclusion."
A 2015 study included both ratings of basic objective facts (like researcher names and institutions) and scientists' ratings of the subjective accuracy of popular press articles. It's hard to summarize, but the prevalence of subjective inaccuracy was about 30–35% for most categories of inaccuracies.
Overall, I'm not too excited by the research quality here, and I don't think the studies directly address my hypothesis above that people are overly critical of minor details, which they then interpret as the reporting missing the entire point (as shown in my example #3). They do make it clear that a reasonable amount of hype originates from press releases rather than from the journalists per se. However, it should be noted that I have not exhausted the literature on this at all, and I specifically avoided looking at post-2020 research, since there was a massive influx of hand-wringing about misinformation after COVID. No doubt I'm inviting the gods of irony to find out that I've misinterpreted some of these studies.
It could be interesting to see whether LLMs could serve as 'objective' (or at least 'consistent') raters, comparing popular press articles en masse with their original scientific publications for accuracy.
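A minimal sketch of what such a rating pipeline might look like, in Python. Everything here is hypothetical: `llm_rate_accuracy` is a stub standing in for whatever model call you would actually use, with a crude word-overlap score substituted so the example runs end to end.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class ArticlePair:
    """A popular-press article matched to the scientific paper it covers."""
    press_text: str
    paper_abstract: str


def llm_rate_accuracy(press_text: str, paper_abstract: str) -> float:
    """Placeholder for an LLM call returning an accuracy score in [0, 1].

    A real implementation would prompt a model with both texts and a fixed
    rubric and ask for a structured rating; this stub just measures word
    overlap so the pipeline is runnable.
    """
    press_words = set(press_text.lower().split())
    paper_words = set(paper_abstract.lower().split())
    if not paper_words:
        return 0.0
    return len(press_words & paper_words) / len(paper_words)


def rate_corpus(pairs: list[ArticlePair]) -> float:
    """Mean accuracy score across a corpus of matched article pairs."""
    return mean(llm_rate_accuracy(p.press_text, p.paper_abstract) for p in pairs)
```

The design choice that matters is applying the same prompt and rubric to every pair: that is what buys consistency, even if true objectivity remains out of reach.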
Conclusion
I think that the popular press is not as bad as often claimed when it comes to factuality of non-political topics, but that still leaves a lot of room for significant errors in the press and I'm not confident in any numbers of how serious the problem is. Readers should know that errors can often originate from the original source experts instead of journalists. This is not to let journalism off the hook, but we should be aware that problems are often already present in the source.
Disclaimer
A close personal relation is a journalist and I am biased in favor of journalism due to that.