Does it matter if AI destroys the world?
Lots of (virtual) ink has been spilled on AGI x-risk. The median opinion on this forum is that when AGI is birthed, it will have terminal values that are unaligned with humanity's; it will therefore pursue those terminal values at the expense of humanity, and we will be powerless to stop it, resulting in our complete destruction.
But as far as I can tell, there hasn't been much discussion of whether we should care if this is the ultimate (or near-term) fate of humanity. Presumably everyone interested in the question is interested because they do care.
I share this belief too. But I think the AGI x-risk discussion actually assumes...
I'd recommend reading Stephen Wolfram on this question. For instance: https://www.wolframscience.com/nks/p315--the-intrinsic-generation-of-randomness/