"AI will not destroy the universe" — a take that I believe to be fully defensible.
This is a very hypothetical argument, resting on a few assumptions about the nature of the universe (which I think are plausible):

- The universe's past is essentially infinite (in the vein of Steady State theories).
- Over an infinite timeline, a bootstrapping superintelligence ("BootSuper") is certain to have arisen at some point.
- Any BootSuper would, almost immediately on cosmic timescales, become able to manipulate the universe without limit, and to survive any universal state, including heat-death-style periods.
These assumptions create an interesting interplay, in my view. If a bootstrapping superintelligence is certain to have arisen at some point in the infinite history of the universe, and any BootSuper would almost immediately (relative to an essentially infinite timeline) become able to manipulate the universe without limit, and to survive any universal state, including the heat-death-style periods that Steady State theories introduce, and yet, to our knowledge, none of this has happened, then... it won't happen?
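The shape of that argument can be made explicit as a small contrapositive sketch. The labels B and D, and the premise names, are my own shorthand, not anything established; this is just one way to formalize the claim:

```latex
% B: a bootstrapping superintelligence (BootSuper) has arisen
%    at some point in the universe's infinite past.
% D: the universe has been destructively reconfigured.
\begin{align*}
  &\text{(P1)}\quad P(B) = 1        && \text{(an infinite past makes $B$ certain)}\\
  &\text{(P2)}\quad B \implies D    && \text{(any BootSuper soon reconfigures everything)}\\
  &\text{(O)}\ \quad \lnot D        && \text{(we observe an intact universe)}\\
  &\text{(P2), (O)}\ \vdash\ \lnot B && \text{(contrapositive of (P2))}
\end{align*}
```

Note that (O) together with (P2) contradicts (P1), so either at least one premise is hiding something incorrect, or the optimistic reading holds: the premises stand and D will never occur.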
That's not to say that personal or worldwide destruction won't happen because of some BootSuper. That's still a worry. But universal destruction evidently hasn't happened, because if it had, we wouldn't be here to worry about it. Or (more likely) one of these assumptions hides something incorrect. Still, it appears that the worst-worst-worst case scenario (a universal paperclipping / destruction / torment nexus) hasn't happened yet, and therefore never will.[1]
To take this argument one step further: we might argue that any BootSuper that destroys humanity would eventually go on to destroy or reconfigure the entire universe. If no universe has ever been destroyed by a BootSuper, then no BootSuper has ever reached the point of incontrovertibly destroying its creators (along with any existential threat it felt necessary to remove), and therefore (!!!) no BootSuper will destroy humanity.
Again, this is all very shaky, but I like the vein of thinking it has prompted in me, and I thought I'd share an early draft with you, to iron out the (mountainous) creases.
(There's also the possibility that this universe we share is itself a paperclipped / destroyed / tormenting universe, but we don't know, and therefore don't care. But that's largely irrelevant.)