Today the Center for AI Safety released the AI Extinction Statement, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.
Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs (Sam Altman, Demis Hassabis, and Dario Amodei), as well as executives from Microsoft and Google (but notably not Meta).
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
We hope this statement will bring AI x-risk further into the Overton window and open up discussion of AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.
If aging were solved, or looked like it would be solved within the next few decades, efforts to stop or slow down AI development would be less problematic, both practically and ethically. I think some AI accelerationists might be motivated directly by the prospect of dying or deteriorating from old age, and/or view the lack of interest or progress on that front as a sign of human inadequacy or stagnation (contributing to their antipathy toward humans). At the same time, the fact that pausing AI development has a large cost in the lives of current people means that you have to have a high p(doom) or a strong credence in utilitarianism/longtermism to support it (and you risk committing a kind of moral atrocity if you turn out to be wrong).
2 is important because as tech/AI capabilities increase, the possibilities to "make serious irreversible mistakes due to having incorrect answers to important philosophical questions" seem to open up exponentially. Some examples:
If your point is that we could delegate solving these problems to aligned AI once we have it, my worry is that AI, including aligned AI, will be much better at creating new philosophical problems (opportunities to make mistakes) than at solving them. The task of reducing this risk (e.g., by solving metaphilosophy, or otherwise making sure AIs' philosophical abilities keep up with or outpace their other intellectual abilities) seems super neglected, in part because very few people seem to acknowledge the importance of avoiding errors like the ones listed above.
(BTW, I was surprised to see your skepticism about 2, since it feels like I've been talking about it on LW like a broken record, and I don't recall seeing any objections from you before. I'd be curious to know whether anything I said above is new to you, or whether you've seen me say similar things before but weren't convinced.)