What’s the longest timeline that you could still consider a short timeline by your own metric, and therefore a world “we might just want to grieve over”? I ask because, in your original comment, you mentioned 2037 as a reasonably short timeline, and personally, if we had an extra decade, I’d be a lot less worried.
Most of that comes from me sharing the same so-called pessimistic (I would say realistic) expectations as some LWers (e.g. Yudkowsky's AGI Ruin: A List of Lethalities) that the default outcome of AI progress is unaligned AGI -> unaligned ASI -> extinction, that we're fully on track for that scenario, and that it's very hard to imagine how we'd get off that track.
Ok, but I don’t see those LWers also saying >99%, so what do you know that they don’t which allows you to justifiably hold that kind of confidence?
...That's a disbelief in superintelligence
I think your defense of the >99% claim is in your first comment, where you provided a list of things that cause doom to be “overdetermined”, meaning you believe that any one of those things is sufficient to ensure doom on its own (which seems nowhere near obviously true to me?).
Ruby says you make a good case, but considering what you’re trying to prove (i.e. that near-term “technological extinction” is our nigh-inescapable destiny), I don’t think it’s a sufficient case, nor does it tread any new ground. Like yeah, the chances don’t look go...
Do you really think p(everyone dies) is >99%?
I admit, it’s pretty disheartening to hear that, even if we had until 2040 (which seems less and less likely to me anyway), you’d still think there’s not much we could do but grieve in advance.