Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Today, the AI Extinction Statement was released by the Center for AI Safety: a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.

Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs (Sam Altman, Demis Hassabis, and Dario Amodei), as well as executives from Microsoft and Google (but notably not Meta).

The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

We hope this statement will bring AI x-risk further into the Overton window and open up discussion of AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.

[-]trevor11mo10378

For those who might not have noticed, this actually is historic; they're not just saying that. The top 350 people have effectively "come clean" about this, at once, in a Schelling-point kind of way.

The long years of staying quiet about this and avoiding telling other people your thoughts about AI potentially ending the world, because you're worried that you're crazy or that you take science fiction too seriously: those days might have just ended.

This was a credible signal: none of these 350 high-level people can go back and say "no, I never actually said that AI could cause extinction and AI safety should be a top global priority", and from now on you and anyone else can cite this announcement to back up your views (instead of saying "Bill Gates, Elon Musk, and Stephen Hawking have all endorsed...") and go straight to AI timelines (I like sending people Epoch's literature review).

EDIT: For the record, this might not be true, or it might not stick, and signatories retain ways of backing out or minimizing their past involvement. I do not endorse unilaterally turning this into more of a Schelling point than it was originally intended to be.

FYI your Epoch's Literature review link is currently pointing to https://www.lesswrong.com/tag/ai-timelines

[-]Kaj_Sotala11moΩ124415

Some notable/famous signatories that I noted: Geoffrey Hinton, Yoshua Bengio, Demis Hassabis (DeepMind CEO), Sam Altman (OpenAI CEO), Dario Amodei (Anthropic CEO), Stuart Russell, Peter Norvig, Eric Horvitz (Chief Scientific Officer at Microsoft), David Chalmers, Daniel Dennett, Bruce Schneier, Andy Clark (the guy who wrote Surfing Uncertainty), Emad Mostaque (Stability AI CEO), Lex Fridman, Sam Harris.

Edited to add: a more detailed listing from this post:

Signatories include notable philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists. [...]

Signatories of the statement include:

  • The authors of the standard textbook on Artificial Intelligence (Stuart Russell and Peter Norvig)
  • Two authors of the standard textbook on Deep Learning (Ian Goodfellow and Yoshua Bengio)
  • An author of the standard textbook on Reinforcement Learning (Andrew Barto)
  • Three Turing Award winners (Geoffrey Hinton, Yoshua Bengio, and Martin Hellman)
  • CEOs of top AI labs: Sam Altman, Demis Hassabis, and Dario Amodei
  • Executives from Microsoft, OpenAI, Google, Google DeepMind, and Anthropic
  • AI professors from Chinese universities
  • T
... (read more)

Bruce Schneier has posted something like a retraction on his blog, saying he focused on the comparisons to pandemics and nuclear war and not on the word "extinction".

1jeffreycaruso1mo
That's a good example of my point. Instead of a petition, a more impactful document would be a survey of risks and their probability of occurring in the opinion of these notable public figures.  In addition, there should be a disclaimer regarding who has accepted money from Open Philanthropy or any other EA-affiliated non-profit for research. 
7Joel Burget11mo
Though the statement doesn't say much, the list of signatories is impressively comprehensive. The only conspicuously missing names that immediately come to mind are Dean and LeCun (I don't know if they were asked to sign).

The statement not saying much is essential for getting an impressively comprehensive list of signatories: the more you say, the more likely it is that someone whom you want to sign will disagree.

Relatedly, when we made DontDoxScottAlexander.com, we tried not to wade into a bigger fight about the NYT and other news sites, nor to make it an endorsement of Scott and everything he's ever written/done. It just focused on the issue of not deanonymizing bloggers when revealing their identity is a threat to their careers or personal safety and there isn't a strong ethical reason to do so. I know more high-profile people signed it because the wording was conservative in this manner.

IMO,  Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him and I think there are many others for which that is true.

4Stephen Fowler11mo
For anyone who wasn't aware, both Ng and LeCun have strongly indicated that they don't believe existential risks from AI are a priority. Summary here. You can also check out Yann's Twitter. Ng believes the problem is "50 years" down the track, and Yann believes that many concerns AI safety researchers have are not legitimate. Both of them view talk about existential risks as distracting and believe we should address problems that can be seen to harm people in today's world.
2zchuang11mo
He posted on Twitter a request to talk to people who feel strongly here.
5trevor11mo
I'd say the absence of names from Facebook, Amazon, and Apple in general are worrying, as well as that there were only two from Microsoft. Apple's absence, in particular, is what keeps me up at night.
5mako yass11mo
Does anyone see any hardware names? What is it about hardware? I've never seen anyone from there express concern. I wonder if it's this: for anyone else in AI, their research is either fairly neutral (not accelerating towards AGI), or, if it is in AGI, it could be repurposed towards alignment. But if your identity is rooted in hardware and you admit to any amount of extinction risk, there's no way for you to keep your job and stay sane?
3dr_s11mo
Yann LeCun at least is very, very, loudly and repeatedly open on Twitter about considering X-risk a bunch of doomerist nonsense, so we know where he (and thus, Facebook) stands.
2Joel Burget11mo
We don't hear much about Apple in AI -- curious why you rank them so important.
4Sherrinford11mo
Here is the coverage on the "most frequently quoted online media product in Germany": Spiegel.de. I mention this mainly to note that even if you get close to a consensus among experts, a newspaper website may still write a paragraph about it that gives the impression that the distribution of expert opinion is completely unclear: "However, there is also disagreement in the research community. Meta's AI chief scientist Yann LeCun, for example, who received the Turing Award together with Hinton and Bengio, has not wanted to sign any of the appeals so far. He sometimes describes the warnings as “AI doomism”" (linking to a Twitter thread by LeCun). To be clear, the statement and its coverage are very impressive.
1Nathan Young11mo
Seems extremely likely (90%) that either someone asked them to sign or people thought it very unlikely they would. I'd guess the second: LeCun doesn't look like he wants to sign something like this.
4Leon Lang11mo
https://twitter.com/ai_risks/status/1664323278796898306?s=46&t=umU0Z29c0UEkNxkJx-0kaQ Apparently Bill Gates signed. Stating the obvious: do we expect that Bill Gates will donate money to prevent extinction from AI?
4Daniel_Eth11mo
Gates has been publicly concerned about AI X-risk since at least 2015, and he hasn't yet funded anything to try to address it (at least that I'm aware of), so I think it's unlikely that he's going to start now (though who knows – this whole thing could add a sense of respectability to the endeavor that pushes him to do it).
[-]Wei Dai11moΩ204031

Is it just me or is it nuts that a statement this obvious could have gone outside the Overton window, and is now worth celebrating when it finally (re?)enters?

How is it possible to build a superintelligence at acceptable risk while this kind of thing can happen? What if there are other truths important to safely building a superintelligence, that nobody (or very few) acknowledges because they are outside the overton window?

Now that AI x-risk is finally in the Overton window, what's your vote for the most important and obviously true statement that is still outside it (i.e., that almost nobody is willing to say or is interested in saying)? Here are my top candidates:

  1. Dying of old age, as well as physical and mental deterioration from it, are bad and worth substantial coordinated effort to prevent.
  2. It's possible to make serious irreversible mistakes due to having incorrect answers to important philosophical questions. In fact, this is likely, considering how much confusion and disagreement there is on many philosophical questions that seem obviously important.
7Daniel Kokotajlo11mo
Why is 1 important? It seems like something we can defer discussion of until after (if ever) alignment is solved, no? 2 is arguably in that category also, though idk.
[-]Wei Dai11moΩ122813

Why is 1 important? It seems like something we can defer discussion of until after (if ever) alignment is solved, no?

If aging were solved, or looked like it would be solved within the next few decades, it would make efforts to stop or slow down AI development less problematic, both practically and ethically. I think some AI accelerationists might be motivated directly by the prospect of dying/deterioration from old age, and/or view lack of interest/progress on that front as a sign of human inadequacy/stagnation (contributing to their antipathy towards humans). At the same time, the fact that pausing AI development has a large cost in lives of current people means that you have to have a high p(doom) or credence in utilitarianism/longtermism to support it (and risk committing a kind of moral atrocity if you turn out to be wrong).

2 is arguably in that category also, though idk.

2 is important because as tech/AI capabilities increase, the possibilities to "make serious irreversible mistakes due to having incorrect answers to important philosophical questions" seem to open up exponentially. Some examples:

  • premature value lock-in
  • value drift
  • handing over too much control/resources to a
... (read more)
5Daniel Kokotajlo11mo
Something like 2% of people die every year, right? So even if we ignore the value of future people and all sorts of other concerns and just focus on whether currently living people get to live or die, it would be worth delaying a year if we could thereby decrease p(doom) by 2 percentage points. My p(doom) is currently 70%, so it is very easy to achieve that. Even at 10% p(doom), which I consider to be unreasonably low, it would probably be worth delaying a few years. Re: 2: Yeah, I basically agree. I'm just not as confident as you are, I guess. Like, maybe the answers to the problems you describe are fairly objective, fairly easy for smart AIs to see, and so all we need to do is make smart AIs that are honest and then proceed cautiously and ask them the right questions. I'm not confident in this skepticism and could imagine becoming much more convinced simply by thinking or hearing about the topic more.
6Wei Dai11mo
Someone with 10% p(doom) may worry that if they got into a coalition with others to delay AI, they couldn't control the delay precisely, and it could easily become more than a few years. Maybe it would be better not to take that risk, from their perspective. And lots of people have p(doom) < 10%. Scott Aaronson just gave 2%, for example, and he's probably taken AI risk more seriously than most (he's currently working on AI safety at OpenAI), so the median p(doom) (or effective p(doom) for people who haven't thought about it explicitly) among the whole population is probably even lower.

I think I've tried to take into account uncertainties like this. It seems that in order for my position (that the topic is important and too neglected) to be wrong, one has to reach high confidence that these kinds of problems will be easy for AIs (or humans or AI-human teams) to solve, and I don't see how that kind of conclusion could be reached today. I do have some specific arguments for why the AIs we'll build may be bad at philosophy, but I think those are not very strong arguments, so I'm mostly relying on a prior that says we should be worried about and thinking about this until we see good reasons not to. (It seems hard to have strong arguments either way today, given our current state of knowledge about metaphilosophy and future AIs.)

Another argument for my position is that humans have already created a bunch of opportunities for ourselves to make serious philosophical mistakes, like around nuclear weapons, farmed animals, and AI, and we can't solve those problems by just asking smart honest humans the right questions, as there is a lot of disagreement between philosophers on many important questions.

What's stopping you from doing this, if anything? (BTW, beyond the general societal level of neglect, I'm especially puzzled by the lack of interest/engagement on this topic from the many people in EA with formal philosophy backgrounds. If you're already interested in AI and x-risks a
2Daniel Kokotajlo11mo
I guess I just think it's pretty unreasonable to have p(doom) of 10% or less at this point, if you are familiar with the field, timelines, etc.  I totally agree the topic is important and neglected. I only said "arguably" deferrable, I have less than 50% credence that it is deferrable. As for why I'm not working on it myself, well, aaaah I'm busy idk what to do aaaaaaah! There's a lot going on that seems important. I think I've gotten wrapped up in more OAI-specific things since coming to OpenAI, and maybe that's bad & I should be stepping back and trying to go where I'm most needed even if that means leaving OpenAI. But yeah. I'm open to being convinced!
4Wei Dai11mo
I guess part of the problem is that the people who are currently most receptive to my message are already deeply enmeshed in other x-risk work, and I don't know how to reach others for whom the message might be helpful (such as academic philosophers just starting to think about AI?). If on reflection you think it would be worth spending some of your time on this, one particularly useful thing might be to do some sort of outreach/field-building, like writing a post or paper describing the problem, presenting it at conferences, and otherwise attracting more attention to it. (One worry I have about this is, if someone is just starting to think about AI at this late stage, maybe their thinking process just isn't very good, and I don't want them to be working on this topic! But then again maybe there's a bunch of philosophers who have been worried about AI for a while, but have stayed away due to the overton window thing?)
2Daniel Kokotajlo11mo
Somehow there are 4 copies of this post
4[comment deleted]11mo
2[comment deleted]11mo
2[comment deleted]11mo
3dr_s11mo
1 is an obvious one that many would deny out of sheer copium. Though of course "not dying" has to go hand in hand with "not aging" or it would rightly be seen as torture. 2 seems vague enough that I don't think people would vehemently disagree. If you specify, such as suggesting that there are absolutely correct or wrong answers to ethical questions, for example, then you'll get disagreement (including mine, for that matter, on that specific hypothetical claim).
[-]Algon11mo3727

Disclaimer: I've never been to an academic conference
EDIT: Also, I'm just thinking out loud here. Not stating my desire to start a conference, just thinking about what can make academics feel like researching alignment is normal.

Those are some big names. I wonder if arranging a big AI safety conference w/ these people would make worrying about alignment feel more socially acceptable to a lot of researchers. It feels to me like a big part of making thinking about alignment socially acceptable is to visibly think about alignment in socially acceptable ways. In my imagination, you have conferences on important problems in academia. 

You talk about the topic there with your colleagues and impressive people. You also go there to catch up with friends, and have a good time. You network. You listen to big names talking about X, and you wonder which of your other colleagues will also talk about X in the open. Dismissing it no longer feels like it will go uncontested. Maybe you should take care when talking about X? Maybe even wonder if it could be true. 

Or on the flip side, you wonder if you can talk about X without your colleagues laughing at you. Maybe other people will back you up when you say X is important. At least, you can imply the big names will. Oh look, a big name X-thinker is coming round the corner. Maybe you can start up a conversation with them in the open. 

There have been some strong criticisms of this statement, notably by Jeremy Howard et al here. I've written a detailed response to the criticisms here:

https://www.soroushjp.com/2023/06/01/yes-avoiding-extinction-from-ai-is-an-urgent-priority-a-response-to-seth-lazar-jeremy-howard-and-arvind-narayanan/

Please feel free to share with others who may find it valuable (e.g. skeptics of AGI x-risk).

[-]Jan_Kulveit11moΩ11165

I feel somewhat frustrated by the execution of this initiative. As far as I can tell, no new signatures have been published since at least one day before the public announcement. This means that even if I asked someone famous (at least in some subfield or circles) to sign, and the person signed, their name is not on the list, leading to their understandable frustration. (I already got a piece of feedback in the direction of "the signatories are impressive, but the organization running it seems untrustworthy".)

Also, if the statement is intended to serve as a beacon, allowing people who have previously been quiet about AI risk to connect with each other, it's essential for signatures to be published. It's nice that Hinton et al. signed, but for many people in academia it would be practically useful to know who from their institution signed; it's unlikely that most people will find collaborators in Hinton, Russell, or Hassabis.

I feel even more frustrated because this is the second time a similar effort has been executed by the x-risk community while lacking the basic operational competence to accept and verify signatures. So, I make this humble appeal and o... (read more)

[-]ThomasW11mo2315

Hi Jan, I appreciate your feedback.

I've been helping out with this and I can say that the organizers are working as quickly as possible to verify and publish new signatures. New signatures have been published since the launch, and additional signatures will continue to be published as they are verified. There is a team of people working on it right now and has been since launch.

The main obstacles to extremely swift publication are:

  • First, determining who meets our bar for name publication. We think the letter will have greater authority (and coordination value) if all names are above a certain bar, and so some effort needs to be put into determining whether signatories meet that bar.
  • Second, as you mention, verification. Prior to launch, CAIS built an email verification system that ensures signatories must verify their work emails in order for their signature to be valid. However, this has required some tweaks, such as making the emails more attention-grabbing and adding some language on the form itself making clear that people should expect an email (before these tweaks, some people weren't verifying their emails).
  • Lastly, even with verification, some submissions are still po
... (read more)
5Jan_Kulveit11mo
Thanks for the reply. Also for the work; it's great that signatures are being added. Earlier I checked the bottom of the list and it seemed to be either the same or with very few additions. I do understand that verification of signatures requires some amount of work. In my view, having more people (could be volunteers) to quickly process the initial expected surge of signatures would have been better; attention spent on this will drop fast.

Any explanations for why Nick Bostrom has been absent, arguably notably, from recent public alignment conversations (particularly since ChatGPT)?

He's not on this list (yet other FHI members, like Toby Ord, are). He wasn't on the FLI open letter either, but I could understand why he might've avoided endorsing that letter given its much wider scope.

[-]habryka11mo4231

Almost certainly related to that email controversy from a few months ago. My sense is people have told him (or he has himself decided) to take a step back from public engagement. 

I think I disagree with this, but it's not a totally crazy call, IMO.

3ROM11mo
I think this explains his absence from this and the FLI letter. He still seems to be doing public outreach, though: see his interview with the NY Times, interview with RTE, Big Think video, and interview with Analytics India Magazine. None of these interviews have discussed the email.
1dr_s11mo
Yeah, beyond that honestly I would worry that his politics in general might do even more to polarize the issue in an undesirable way. I think it's not necessarily a bad call in the current atmosphere.
1Vishrut Arya11mo
Aha. Ugh, what an unfortunate sequence of events.

It's a step, likely one that couldn't be skipped. Still, it falls just short of actually acknowledging a nontrivial probability of AI-caused human extinction, and the distinction between extinction and lesser global risks, which leave second chances at doing better next time. Nuclear war can't cause extinction, so it's not properly alongside AI x-risk. Engineered pandemics might eventually become extinction-worthy, but even that real risk is less urgent.

3dr_s11mo
Eh, I think this is really splitting hairs. I have already seen multiple people using the lack of reference to climate change to dismiss the whole thing. Not every system of values places extinction on its own special pedestal (though I think in this case "biological omnicide" might be more apt: unlike pandemics, AI could also kill the rest of non-human life). But in terms of expected loss of life, AI could be on par with those other things if you consider them more likely.
2Vladimir_Nesov11mo
Well, this is wrong, and I'm not feeling any sympathy for a view that it's not. An eternity of posthuman growth after recovering from a civilization-spanning catastrophe really is much better than lights out, for everyone, forever. I agree that there are a lot of people who don't see this, and who will dismiss a claim that expresses this kind of thing clearly.

In mainstream comments on the statement, I've seen frequent claims that this is about controlling the narrative and ensuring regulatory lock-in for the big players. From the worldview where AI x-risk is undoubtedly pure fiction, the statement sounds like Very Serious People expressing Concern for the Children. Whereas if the object-level claims were stated more plainly, this interpretation would crumble, and the same worldview would be forced to admit that the people signing the claim are either insane, or have a reason for saying these things that is not Controlling the Narrative. It's the same thing as with AI NotKillEveryoneism vs. AI Safety.
1dr_s11mo
You can't really say anything is objectively wrong when it comes to morals, but also, I generally think that evaluating the well-being of potential entities-to-be leads to completely nonsensical moral imperatives like the Repugnant Conclusion. Since no one experiences all of the utility at the same time, I think an "expected utility probability distribution" is a much more sensible metric (as in: suppose you were born as a random sentient in a given time and place, would you be willing to take the bet?).

That said, I do think extinction is worse than just a lot of death, but that's as a function of the people who are about to witness it and know they are the last. In addition, I think omnicide is worse than human extinction alone because I think animals and the rest of life have moral worth too. But I wouldn't blame people for simply considering extinction as 8 billion deaths, which is still A LOT of deaths anyway.

It's a small point that's not worth arguing. We have wide enough uncertainties on the probability of these risks anyway that we can't really put fixed numbers to the expected harms, just vague orders of magnitude. While we may describe them as if they were numerical formulas, these evaluations really are mostly qualitative; enough uncertainty makes numbers almost pointless. Suffice to say, I think if someone considers, say, a 5% chance of nuclear war a bigger worry than a 1% chance of AI catastrophe, then I don't think I can make a strong argument for them being dead wrong.

I agree this makes no sense, but it's a completely different issue.

That said, I think the biggest uncertainty re: x-risk remains whether AGI is really as close as some estimate it is at all. But this aspect is IMO irrelevant when judging the opportunity of actively trying to build AGI. Either it's possible, and then it's dangerous, or it's still way far off, and then it's a waste of money and precious resources and ingenuity.
1Vladimir_Nesov11mo
It's not that complicated. There is a sense in which these claims are objective (even as the words we use to make them are 2-place words), to the same extent as factual claims, both are seen through my own mind and reified as platonic models. Though morality is an entity that wouldn't be channeled in the physical world without people, it actually is channeled, the same as the Moon actually is occasionally visible in the sky. My point is not about anyone's near term subjective experience, but about what actually happens in the distant future. It's their resources and ingenuity. If there is no risk, it's not our business to tell them not to waste them.
2dr_s11mo
I really, really don't care about what happens in the distant future compared to what happens now, to humans that actually exist and feel. I especially don't care about there being an arbitrarily high number of humans. I don't think a trillion humans is any better than a million as long as: 1. they are happy, and 2. whatever trajectory led to those numbers didn't include any violent or otherwise painful mass death event, or other torturous state.

There really is nothing objective about total sum utilitarianism; in fact, as far as moral intuitions go, it's not what most people follow at all. With things like "actually, death is bad" you can make a very cogent case: people, day to day, usually don't want to die, therefore there never is a "right moment" in which death is not a violation; if there were, people could still commit suicide anyway, thus death by old age or whatever else is just bad. That's a case where you can invoke the "it's not that complicated" argument, IMO. Total sum utilitarianism is not; I find it a fairly absurd ethical system, ripe for exploits so ridiculous and consequences so blatantly repugnant that it really isn't very useful at all.
2Vladimir_Nesov11mo
I agree, amount of humans and a lot of other utilitarian aims is goodharting for bad proxies. The distinction I was gesturing at is not about amount of what happens, but about perception vs. reality. And a million humans is very different from zero anyone, even if the end was not anticipated nor perceived.
1dr_s11mo
OK, let's consider two scenarios: 1. humanity goes extinct gradually and voluntarily via a last generation that simply doesn't want to reproduce and is cared for by robots to the end, so no one suffers particularly in the process; 2. humanity is locked in a torturous future of trillions in inescapable torture, until the heat death of the universe. Which is better? I would say 1 is. There are things worse than extinction (and some of them are on the table with AI too, theoretically). And anyway, you should consider that with how many "low hanging fruit" resources we've used, there's a fair chance that if we're knocked back into the millions by a pandemic or nuclear war now, we may never pick ourselves back up again. Stasis is better than immediate extinction, but if you care about the long-term future it's also bad (and implies a lot more suffering, because it's a return to the past).
2Lichdar11mo
As a normie, I would say 1 is. Depending on how some people see things, 2 is the past, which I disagree with; at any rate, I would say the past was the generator of an immense quantity of joy, love, and courage along with the opposite qualities of pain, mourning, and so on. So for me, I would indeed say that my morality puts extinction on a higher pedestal than anything else (and I am also fully against mind uploading or leaving humans with nothing to do). Just a perspective from a small-brained normie.
1dr_s11mo
I mean, 2 is not the past in a purely numerical sense (I wouldn't say we ever hit trillions of total humans). But the problem is also the inescapable part, which assumes e.g. permanent disempowerment. That's not a feature of the past. I'm not sure what your "I would say 1 is" meant: I asked which was better, but then you said you think extinction is its own special thing. Anyway, I don't disagree that extinction is a special kind of bad, but IMO it is still so in relation to people living today. I'd rather not die, but if I have to die, I'd rather die knowing the world goes on and the things I did still have purpose for someone. Extinction puts an end to that.

I want to root that morality in the feelings of present people, because I feel like assigning moral worth to not-yet-existing ones completely breaks any theory. For example, however many actual people exist, there's always an infinity of potential people that don't. In addition, it allows for justifying making existing people suffer for the sake of creating more people later (e.g. intensive factory farming of humans until we reach a population of 1 trillion ASAP, or however many is needed to justify the suffering inflicted via created utility), which is just absurd.
2Lichdar11mo
I would just say, as a normie, that these extensive thought experiments about factory humans mostly don't concern me, though I could see a lot of justification of suffering to allow humanity to exist for, say, another 200 billion years. People have always suffered to some extent to do anything, and certainly having children entails some trade-offs, but existence itself is worth it.

But mostly, the idea of a future without humanity, or even one without our biology, just strikes me with such abject horror that it can't be countenanced. I have children myself and I do wonder if this is a major difference. To imagine a world where they have no purpose leaves me quite aghast, and I feel this would reflect the thinking of the majority of humans. And as such, hopefully drive policy which will, in my best futures, drive humanity forward.

I see a good end as humanity spreading out into the stars and becoming inexhaustible, perhaps turning into multiple different species but ultimately still with the struggles, suffering, and triumphs of who we are. I've seen arguments here and there about how the values drift from, say, a hunter-gatherer to us would horrify us, but I don't see that. I see a hunter-gatherer and relate to him on a basic level. He wants food, he will compete for a mate, and one day he will die and his family will seek comfort from each other. My work will be different from his, but I comprehend him, and as writings like Still A Pygmy show, they comprehend us.

Descriptions of things like mind uploading or accepting the extinction of humanity strike me with such wildness that it's akin to a vast, terrifying revulsion. It's Lovecraftian horror and, I think, very far from any moral goodness to inflict upon the majority of humanity.
3dr_s11mo
My point isn't that extinction is a-ok, but rather that you could "price it" as the total sum of all human deaths (which is the lower bound, really) and there would still be a case against it. It still remains something very much to avoid! I think it's worse than that, but I also don't think it's worse than anything. If the choice was between going extinct now or condemning future generations to lives of torture, I'd pick extinction as the lesser evil. And conversely, I am also very sceptical of extremely long term reasoning, especially if used to justify present suffering. You bring up children, but those are still very much real and present. You wouldn't want them to suffer for the sake of hypothetical 40th century humans, I assume.
3Lichdar11mo
Depends on the degree of suffering, to be totally honest - obviously I'm fine with them suffering to some extent, which is why we drive them to behave, etc., so they can have better futures, and sometimes enjoin them to have children so that we can continue the family line. I think my answer actually is yes: if hypothetically their suffering allows the existence of 40th century humans, it's pretty noble, and yes, I'd be fine with it.
1dr_s11mo
So supposing everything goes all right, for every additional human born today there might be millions of descendants in the far future. Does that mean we have a moral duty to procreate as much as possible? I mean, the increased stress or financial toll surely doesn't hold a candle to the increased future utility experienced by so many more humans! To me it seems this sort of reasoning is bunk. Extinction is an extreme case, of course, but every generation must look first and foremost after the people under its own direct care, and their values and interests. Potential future humans are for now just that: potential. They make no sense as moral subjects of any kind. I think this extends to extinction, which is only worse than the cumulative death of all humans insofar as current humans wish for there to be a future - not because of the opportunity cost of how non-existing humans will not get to experience non-existing pleasures.
1Lichdar11mo
I apologize for being a normie, but I can't accept anything that involves the non-existence of humanity, and would indeed accept an enormous amount of suffering if those were the options.
2Vladimir_Nesov11mo
Humanity went from Göbekli Tepe to today in 11K years. I doubt that, even after forgetting all modern learning, it would take even a million years to generate knowledge and technologies for new circumstances. I hear the biosphere can last about a billion years more. (One specific path is to use low-tech animal husbandry to produce smarter humans. This might even solve AI x-risk by making humanity saner.)
1dr_s11mo
I disagree that it's that easy. It's not a long trajectory of inevitability; like with evolution, there are constraints. Each step generally has to be aligned on its own with the economic incentives of the time. See how, for example, steam power was first developed to fuel pumps removing water from coal mines; the engines were so inefficient that they were only cost effective if you didn't also need to transport the coal. Now that we've used up all the surface coal and oil, not to mention screwed up the climate quite a bit for the next few millennia, conditions are different. I think technology is less a uniform progression and more a mix of "easy" and "hard" events (as in the grabby aliens paper, if you've read it), and by exhausting those resources we've made things harder. I don't think climbing back up would be guaranteed.

This, IMO, even if it were possible, would solve nothing while potentially causing an inordinate amount of suffering. And it's also one of those super long term investments that don't align with almost any short term incentive. I say it solves nothing because intelligence wouldn't be the bottleneck; if they had any books left lying around they'd have a road map to tech, and I really don't think we've missed some obvious low-tech trick that would be relevant to them. The problem is having the materials to do those things and having immediate returns.
3Vladimir_Nesov11mo
Intelligence is also a thing that enables perceiving returns that are not immediate, as well as maintenance of more complicated institutions that align current incentives towards long term goals.
1dr_s11mo
This isn't a simple marshmallow challenge scenario. If you have a society that has needs and limited resources, it's not inherently "smart" to sacrifice those significantly for the sake of a long term project that might e.g. not benefit anyone who's currently living. It's a difference in values at that point; even if you're smart enough, you can still not believe it's right. For example, suppose in 1860 everyone had known and accepted global warming as a risk. Should they, or would they, have stopped using coal and natural gas in order to save us this problem? Even if it meant lower living standards for themselves, and possibly more death?
2Vladimir_Nesov11mo
Your argument was that this hopeless trap might happen after a catastrophe and is so terrible that maybe it's as bad as, or worse than, everyone dying quickly. If it's so terrible, in any decision-relevant sense, then it's also smart to plot towards projects that dig humanity out of the trap.
3dr_s11mo
No, sorry, I may have conveyed that wrong and mixed up two arguments. I don't think stasis is straight-up worse than extinction. For good or bad, people lived in the Middle Ages too. My point was more that if your guiding principle is "can we recover", then there are more things than extinction to worry about. If you aspire to some kind of future in which humans grow exponentially, you won't get it if we're knocked back to preindustrial levels and can't recover. I don't personally think that's a great metric or goal to adopt; I'm just following the logic to its endpoint. And I also expect that many smart people in the stasis wouldn't plot with only that sort of long term benefit in mind. They'd seek relatively short term returns.
2Vladimir_Nesov11mo
I see. Referring back to your argument was more an illustration that this motivation exists. If a society forms around the motivation, at any one time in the billion years, and selects for intelligence to enable nontrivial long term institution design, that seems sufficient to escape stasis.

skeptical reaction with one expression of support: https://statmodeling.stat.columbia.edu/2023/05/31/jurassic-ai-extinction/

I have to wonder what people — both the signatories and all the people suddenly taking this seriously — have in mind by "risk of extinction". The discussions I've seen have mentioned things like deepfakes, autonomous weapons, designer pathogens, AI leaving us nothing to do, and algorithmic bias. No-one I have heard is talking about Eliezer's "you are made of atoms that the AI wants to use for something else".

2dr_s11mo
I honestly think that's for the best, because I don't believe super-fast-takeoff FOOM scenarios are actually realistic. And in any slower takeoff, existential risk looks more like a muddled mix of human- and AI-driven processes than "nanomachines, everyone dies".
3Vladimir_Nesov11mo
When a claim is wrong, ignoring its wrongness and replacing it in your own perception with a corrected steelman of completely different literal meaning is not for the best. The sane thing would be to call out the signatories for saying something wildly incorrect, not pretending that they are saying something they aren't. The sane thing for the signatories would be to mean what they sign, not something others would hear when they read it, despite it contradicting the literal meaning of the words in the statement.
1dr_s11mo
My point is that I don't think they're incorrect. All those things are ALSO problems, and many are even paths to X-risk, which I'd consider more likely (in a slow takeoff scenario) than FOOM. A few possible scenarios:

1. Designer pathogens are the obvious example, because there are so many ways they can cause targeted human extinction without risking the AI's integrity in any way, so they're obvious candidates both for a misuse of AI by malicious humans and for a rogue AI that is actively trying to kill us for whatever reason. Plus there's also the related risk of designer microorganisms that alter the Earth's biosphere as a whole, which would be even deadlier (think something like the cyanobacteria that caused the Great Oxidation Event, except now it'd be something that grows out of control and makes the atmosphere deadly).
2. Autonomous weapons are obviously dangerous in the hands of an AI because that's really handing it our defenses on a silver platter, and it makes the AI's turning against us much more likely to succeed (and thus a potentially attractive strategy for seeking power).
3. Deepfakes, or even more sophisticated forms of deception, could be exploited by an AI to manipulate humans towards carrying out actions that benefit it or allow it to escape confinement, and so on. Being able to see through deepfakes and defend against them would be key to the security of important strategic resources.
4. We don't know what happens socially and economically if AGI really takes everyone's job. We go into unexplored territory, and more so, we go there from a place where the AGI will probably be owned by a few to begin with. We might hope for glorious post-scarcity abundance, but that's not the only road nor, I fear, the most likely. If the transition goes badly, it can weaken our species enough that all AGI needs to do to get rid of us for good is give a little push.

All these things of course are only relevant with relatively weak-ish AGI. If it is possible to have a self-impro
2Vladimir_Nesov11mo
Misconstruing an incorrect statement with a correct steelman is incorrect. If I say "I've discovered a truly marvelous proof that 2+2=3000 that this margin is too small to contain," and you reply, "Ah, so you are saying 2+2=4, quite right," then the fact of your inexplicable discussion of a different and correct statement doesn't make your interpretation of my original incorrect statement correct.
1dr_s11mo
I explained in the rest of the comment why I don't think they're incorrect, literally. The signed statement anyway is just: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." So that seems to me like it's succinct enough to include all the categories discussed here. I don't see the issue.
3Vladimir_Nesov11mo
"Extinction from AI" really doesn't refer to deepfakes, AI leaving us nothing to do, and algorithmic bias. It doesn't include any of these categories. There is nothing correct about interpreting "extinction from AI" as referring to either of those things. This holds even if extinction from AI is absolutely impossible, and those other things are both real/imminent and extremely important. Words have meaning even when they refer to fictional ideas.
1dr_s11mo
Deepfakes as in "hey, my uncle posted an AI-generated video of Biden eating a baby on FB", no (though that doesn't help our readiness as a species). But the general ability of AI to deceive, impersonate, and pretty much break any assumption about who we may believe we are talking to is a prominent detail that often features in extinction scenarios (e.g. how the AI starts making its own money or manipulating humans into producing things it needs). I would say "extinction scenarios" include everything that features extinction and AI in the event chain, which doesn't even strictly need to be a takeover by agentic AI. Anyway, the actual signed statement is very general. I can guess that some of these people don't worry specifically about the "you are made of atoms" scenario, but that's just arguing against something that isn't in the statement.

What aspect of AI risk is deemed existential by these signatories? I doubt that they all agree on that point. Your publication "An Overview of Catastrophic AI Risks" lists quite a few but doesn't differentiate between theoretical and actual. 

Perhaps if you were to create a spreadsheet with a list of each of the risks mentioned in your paper but with the further identification of each as actual or theoretical, and ask each of those 300 luminaries to rate them in terms of probability, then you'd have something a lot more useful. 

2RHollerith1mo
The statement does not mention existential risk, but rather "the risk of extinction from AI".
1jeffreycaruso1mo
Which makes it an existential risk.  "An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population." - FLI