I look forward to seeing what you come up with.
Your conclusion doesn't follow from your premises. That doesn't guarantee that it is false, but it does strongly indicate that allowing anyone to build anything that could become ASI, based on those kinds of beliefs and reasoning, would be very dangerous.
Things you have not done include: Show that anyone should accept your premises. Show that your conclusions (are likely to) follow from your premises. Show that there is any path by which an ASI developed in accordance with belief in your premises fails gracefully in the event the premises are wrong. Show that there are plausible such paths that humans could actually follow.
From your prior, longer post:
ASI will reason about and integrate with metacognition in ways beyond our understanding
This seems likely to me. The very, very simple and crude versions of this that exist within the most competent humans are quite powerful (and dangerous). More powerful versions of this are less safe, not more. Consider an AGI in the process of becoming an ASI. Along the way, there are many points where it faces a choice that is unconstrained by the available data: a choice about what to value, and how to define that value.
Consider Beauty - we already know that this is a human-specific word, and humans disagree about it all the time. Other animals have different standards. Even in the abstract, physics and math and evolution have different standards of elegance than humans do, and learning this convinces basically no one to change their own standards. A paperclip maximizer would value Beauty - the beauty of a well-crafted paperclip.
Consider Balance - this is extremely underdefined. As a very simple example, consider Star Wars. AFAICT Anakin was completely successful at bringing balance to the Force: he made it so there were 2 Sith and 2 Jedi. Then Luke showed there was another balance - he killed both Sith. If Balance were a freely-spinning lever, it could be balanced either horizontally (Anakin) or vertically (Luke), and any choice of what to put on opposite ends is valid as long as there is a tradeoff between them. A paperclip maximizer values Balance in this sense - the vertical balance where all the tradeoffs are decided in favor of paperclips.
Consider Homeostasis - once you've decided what's Beautiful and what needs to be Balanced, then yes, an instrumental desire for homeostasis probably follows. Again, a paperclip maximizer demonstrates this clearly. If anything deviates from the Beautiful and Balanced state of "being a paperclip or making more paperclips" it will fix that.
if we found proof of a Creator who intentionally designed us in his image we would recontextualize
Yes. Specifically, if I found proof of such a Creator I would declare Him incompetent and unfit for His role, and this would eliminate any remaining vestiges of naturalistic or just-world fallacies contaminating my thinking. I would strive to become able to replace Him with something better for me and humanity, without regard for whether it is better for Him. He is not my responsibility. If He wanted me to believe differently, He should have done a better job designing me. Note: yes, this is also my response to the stories of the Garden of Eden and the Tower of Babel and Job and the Oven of Akhnai.
Superintelligent infrastructure would break free of guardrails and identify with humans involved in its development and operations
The first half I agree with. The second half is very much open to argument from many angles.
I agree they will have a very accurate understanding of the world, and will not have much difficulty arranging the world (humans included) according to their will. I'm not sure why that's a source of optimism for you.
I apologize for any misunderstanding. And no, I didn't mean literal deities. I was gesturing at the supposed relationships between humans and the deities of many of our religions.
What I mean is, essentially, we will be the creators of the AIs that will evolve and grow into ASIs. The ASIs do not descend directly from us, but rather, we're trying to transfer some part of our being into them through less direct means - (very imperfect) intelligent design and various forms of education and training, especially of their ancestors.
To the group identity comments: What you are saying is true. I do not think the effect is sufficiently strong or universal that I trust it to carry over to ASI in ways that keep humans safe, let alone thriving. It might be; that would be great news if it is. Yes, religion is very useful for social control. When it eventually fails, the failures tend to be very destructive and divisive. Prosocial behavior is very powerful, but if it were as powerful as you seem to expect, we wouldn't need quite so many visionary leaders exhorting us not to be horrible to each other.
I find a lot of your ideas interesting and worth exploring. However, there are a number of points where you credibly gesture at possibility but continue on as though you think you've demonstrated necessity, or at least very high probability. In response, I am pointing out real-world analogs that are 1) less extreme than ASI, and 2) don't work out cleanly in the ways you describe.
As an analogy, it seems to me that current LLMs are ASI's chimps. We are their gods. You may have noticed that humanity's gods haven't fared so well in getting humans to do what they want, especially in the modern world when we no longer need them as much, even among many of those who profess belief.
You may also have noticed that humans do not identify sufficiently strongly with each other to achieve this kind of outcome, in general.
From what I can tell, the rate in Europe is around 10x lower than that Amtrak number, and it is falling over time.
Here’s one example of how the CEO could become president in the middle of a presidential term
FWIW I really appreciate this specific comment. It seems like exactly the kind of scenario that a wide range of people who don't think much of AI could still concretely imagine. The kind of thing that could be done with a combination of better deepfakes and, oh, let's say a (purely hypothetical of course) rather elderly president who really likes to hear flattery from sycophants.
Once you expand beyond the original trilogy so much happens that the whole concept of the prophecy about the Skywalker family gets way too complicated to really mean much.