Publishing academic papers on transformative AI is a nightmare
I am a professor of economics. Throughout my career I have mostly worked on economic growth theory, which eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology. Recently, Klaus Prettner and I wrote a paper on “The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI”.

We have presented it at multiple conferences and seminars, and it was always well received. We didn’t get any real pushback; instead, our research prompted a lot of interest and reflection (reportedly also in conversations I wasn’t part of). But our experience with publishing this paper in a journal has been the polar opposite. To date, the paper has been desk-rejected (without peer review) 7 times. For example, Futures, a journal “for the interdisciplinary study of futures, visioning, anticipation and foresight”, justified its negative decision by writing: “while your results are of potential interest, the topic of your manuscript falls outside of the scope of this journal”.

Finally, to our excitement, the paper was for once sent out for review. But then came the reviews… and they sure delivered. The key arguments for the paper’s rejection were the following:

1/ Regarding the core concept of p(doom), Referee 1 complained that “the assignment of probabilities is highly subjective, and it lacks empirical support”. Referee 2 backed this up with: “there is a lack of substantive basic factual support”. Well, yes, precisely. These probabilities are subjective by design, because an empirical measurement of p(doom) would have to involve going through all the past cases where humanity lost control of a superhuman AI and consequently became extinct. And hey, sarcasm aside, our central argument doesn’t actually rely on any specific probabilities. We find that in most circumstances even a very small probability of
I think we essentially agree. The only difference seems to be the word "alien" in "AI is an extinction risk because it would have unpredictable alien values that are indifferent to human survival". In my opinion, those values may indeed be alien, but they may just as well be the familiar power-oriented values implied by instrumental convergence, which coincidentally also constitute an important subset of "human values".