This is a very interesting personal account; thanks for sharing it. I imagine, and am curious to know, whether this kind of issue crops up with any number of other economics research topics, such as research on environmental impacts, unethical technologies more generally, excessive (or outright corrupt) military spending, and so on.
There are perhaps (good-faith) questions to be asked about the funding sources and political persuasions of the editors of these journals, or the journal businesses themselves, and why they might be incentivized to stay clear of such topics. Of course, we are actively seeing a chill in the US right now on research into many other areas of social science. One can imagine how you might be seeing something related.
So, I do imagine things like the psychological phenomenon of denial of mortality might be at play here, and that's an interesting insight. But I would also guess there are many other phenomena at work, frankly of a more unsavory nature.
I am a professor of economics. Throughout my career I have mostly worked on economic growth theory, which eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology.
Recently, Klaus Prettner and I wrote a paper on “The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI”. We have presented it at multiple conferences and seminars, and it was always well received. We didn’t get any real pushback; instead, our research prompted a lot of interest and reflection (as was reported back to me, also from conversations in which I wasn’t involved).
But our experience with publishing this paper in a journal has been the polar opposite. To date, the paper has been desk-rejected (without peer review) 7 times. For example, Futures—a journal “for the interdisciplinary study of futures, visioning, anticipation and foresight”—justified its negative decision by writing: “while your results are of potential interest, the topic of your manuscript falls outside of the scope of this journal”.
Finally, to our excitement, the paper was for once sent out for review. But then came the reviews… and they sure delivered. The key arguments for the paper’s rejection were the following:
1/ As regards the core concept of p(doom), Referee 1 complained that “the assignment of probabilities is highly subjective, and it lacks empirical support”. Referee 2 backed this up with: “there is a lack of substantive basic factual support”. Well, yes, precisely. These probabilities are subjective by design, because an empirical measurement of p(doom) would have to involve going through all the past cases where humanity lost control of a superhuman AI and consequently became extinct. And hey, sarcasm aside, our central argument doesn’t actually rely on any specific probabilities. We find that in most circumstances even a very small probability of human extinction suffices to justify a call for more investment in existential risk reduction (see the sketch after this list).
2/ Referee 1—the one whose review was longer than four short bashing sentences—also complained that “the definitions of "TAI alignment" and "correctability" [we actually wrote “corrigibility”—JG] are overly abstract, lacking actionable technical or institutional implementation pathways.” Well again, yes, precisely: TAI alignment has not been solved yet, so of course there are no “actionable technical or institutional implementation pathways”.
3/ We also enjoyed the comment that “the assumption that takeover, once occurring, is irreversible, is overly absolute.” Apparently, we must have missed the fact that in reality John Connor or Ethan Hunt may actually win.
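To make the expected-value logic behind point 1/ concrete, here is a minimal back-of-the-envelope sketch. It is not the model from the paper, and every number in it is a hypothetical placeholder; it only illustrates why even a very small annual extinction probability can justify giving up some consumption for risk reduction.

```python
# Minimal illustrative calculation (not the paper's model; all parameter
# values below are hypothetical placeholders). We compare expected discounted
# utility under a constant annual extinction hazard, with and without
# spending a share s of consumption on mitigation that scales the hazard
# by a factor (1 - delta).

def expected_welfare(hazard, consumption, discount=0.02, horizon=500):
    """Discounted sum of per-period utility, weighted by survival probability."""
    welfare, survival = 0.0, 1.0
    for t in range(horizon):
        welfare += survival * consumption / (1 + discount) ** t
        survival *= 1 - hazard  # extinction forfeits all future utility
    return welfare

p = 0.001      # a "very small" annual extinction probability (0.1%)
s = 0.01       # give up 1% of consumption...
delta = 0.5    # ...to halve the hazard (hypothetical effectiveness)

baseline  = expected_welfare(p, 1.0)
mitigated = expected_welfare(p * (1 - delta), 1.0 - s)
print(f"baseline: {baseline:.2f}  with mitigation: {mitigated:.2f}")
# Even at a 0.1% annual hazard, the mitigated path yields higher expected
# welfare here, because extinction cuts off the entire future utility stream.
```

The toy numbers only show the mechanism: because extinction destroys the whole future stream of utility, even a modest reduction of a small hazard can be worth a non-trivial consumption cost. The exact thresholds of course depend on the model and parameters one assumes.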
You may think that I am sour and frustrated because the paper was rejected. I sure am, but there’s a much broader point here.
My point is that theoretical papers on transformative AI scenarios, covering both their promises and (particularly) their risks, are extremely hard to publish. You can see this in the résumés of essentially all authors who have pivoted to this topic.
First, journals prefer empirical studies. In most other contexts this would be understandable—that’s how the scientific method works, after all. With AI, however, the problem is that the technology is developing so quickly that any data empirical researchers get their hands on is instantly obsolete. This means that all empirical research, no matter how brilliant and insightful, is also necessarily backward-looking. We may only begin to understand the economic consequences of GPT-3 while already using GPT-5.
At the same time, if we want to take a proactive stance and at least attempt to guide policy so that it steers the future towards desirable states—for example, ones in which we don’t become an extinct species—we’d better also publish and discuss the various AI scenarios that could potentially unfold, including the less conservative ones, and not only those predicting, e.g., “no more than a 0.66% increase in total factor productivity (TFP) over 10 years”. And research journals should support this debate; otherwise the public and policymakers will get the impression that the entire economics community believes that TAI/AGI/ASI will surely never arrive and that AI existential risk does not exist, which is clearly not the consensus view.
Second, the problem seems to go beyond the preference for empirical papers. It seems that, on top of that, the very notion of AI existential risk scares editors away. Denial of one’s own mortality is a documented psychological phenomenon, and acknowledging extinction risk is probably even scarier. Also, editors may be tempted to think their journals have nothing to gain by publishing doom scenarios: even if they turn out to be true, there will be no-one left to capitalize on the correct prediction anyway. But citations don’t only reward those whose predictions ultimately prove correct; they come to wherever the debate is—and that includes scenarios and viewpoints we may or may not agree with.
Peer review, for all its flaws, is the best tool we have to ensure the integrity and rigor of scientific discourse about any important issue—and the future of humanity, faced with the imminent threats (and promises) of transformative AI, certainly qualifies as such. And since, according to many, transformative AI may arrive within the next 10 years, the matter is also urgent. If research journals continue to desk-reject this entire debate, our future will be decided based solely on arguments developed in blogposts, videos, and (at best) arXiv papers. Without peer review, this debate risks becoming less and less scientifically sound, driven more by controversy and clickbait than by logic and rigor.
Against this unfortunate background, I am happy to point out the few publications that do exist in official channels, such as the invited volume by the NBER. I am also happy that Professor Chad Jones of Stanford GSB used his stellar reputation to warn economists about AI existential risk in a top-tier scholarly journal. But given the stakes at hand, this forward-looking literature needs to be much, much larger, and much more mainstream.
After all, we are living in very uncertain times, and the possible emergence of transformative AI is a prime source of this uncertainty. In such circumstances, we don’t have time to wait idly until evidence-based policies are established. Instead, we need to quickly introduce basic prudent policies, motivated by forward-looking scenario-based analysis, which could at least minimize the expected downsides and—at the bare minimum—allow us to live on and keep thinking about good futures for humanity.