This was meant to be a rather long-winded, rambling reply to "Better a Brave New World than a dead one", but it grew too long and went too far off-topic to be appropriate as a comment on a front-page article, so I've posted it here for anyone interested in reading it:
Long disclaimer (skip this if you're not interested in my beliefs, though my reply may not make much sense without this context):
I'm a long-time, on-and-off lurker of this community (at least 10 years), but I generally do not share the prevailing views on AGI being an existential risk (or at least I assign it a low probability, depending on the chosen architecture), so I've abstained from registering here or posting, since I generally expect heavy downvotes, "you haven't read the Sequences" replies, or similar, owing to disagreements about assumptions. Nevertheless, I keep seeing views that advocate managing that perceived risk with "solutions" I would consider horrific: solutions of the terrorist kind (start WW3), or "destroy all technology", or "make a narrow AI that helps you build molecular nanotechnology and waste all that potential by making every GPU melt", or even a certain someone who considered the Kaczynski "solution" in his book (even though he thought it would not be very effective), and other similar viewpoints. These views are sometimes "joked" about even by higher-profile figures here, and even if I don't believe they are seriously suggesting people act on them, such ideas do seem to occupy more people's minds as they update their expectations of AGI arriving sooner and their fear of it grows stronger.

Another potential source of disagreement is "religious" (philosophical): I'm a "believer" in something like Tegmark's Level 4 multiverse, likely restricted to a narrower chunk of math: computation only (which is still enough to emulate a lot of continuous physics). That belief can be reached via some variants of Occam's Razor, but that is not a "proof".