My P(doom) is 100% - ε.

The 100% - ε: humanity has already destroyed the solar system (and the AGI is rapidly expanding into the rest of the galaxy) on at least some of the quantum branches, due to an independence-gaining AGI accident. I am fairly certain this was possible by 2010, and possibly as early as around 1985.

The ε: at least some of the quantum branches where independence-gaining AGI happens will have a sufficiently benign AGI that we survive the experience, or we will actually restrict computers enough to stop accidentally creating AGI.

It is worth noting that if AGI has a very high probability of killing people, then what this looks like in a world with quantum branching is periodic AI winters whenever computers and techniques approach being capable of AGI: many of the "successful" uses of AI result in deadly AGI, so we just don't live long enough to observe those branches (a toy simulation of this selection effect is sketched below).
And if I am just talking with an average person, I say > 1%, and make the side comment that if a passenger plane had a 1% probability of crashing on its next flight, it would not be allowed to take off.
Edit: Really explaining this would take a much longer post, so I apologize for that. That said, if I had to choose, then to prevent accidental AGI I would be willing to restrict myself and everyone else to computers with on the order of 512 KiB of RAM and 2 MiB of disk space, as part of a comprehensive program to prevent existential risk.
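As a rough illustration of the selection effect mentioned above, here is a small Python sketch. It is my own toy model rather than anything from the original comment, and all of its parameters (P_AGI_PER_ERA, P_DEADLY, ERAS) are made-up assumptions; it only shows that if deadly AGI removes the observers on the branches where it is created, then most surviving observers remember AI winters rather than AGI.

```python
import random

# Toy anthropic-selection model (illustrative only; all parameters are made up).
# Each "branch" goes through several eras in which hardware and techniques
# approach being capable of AGI. In each era, AGI may be created; if it is
# created and deadly, that branch has no surviving observers.

P_AGI_PER_ERA = 0.5   # assumed chance AGI is created when capability is approached
P_DEADLY = 0.99       # assumed chance a created AGI kills everyone
ERAS = 5              # number of times capability is approached on a branch
BRANCHES = 1_000_000

surviving = 0
survivors_who_saw_agi = 0

for _ in range(BRANCHES):
    dead = False
    saw_benign_agi = False
    for _ in range(ERAS):
        if random.random() < P_AGI_PER_ERA:
            if random.random() < P_DEADLY:
                dead = True            # deadly AGI: no observers left on this branch
            else:
                saw_benign_agi = True  # benign AGI: observers survive and saw it
            break                      # either way, no further approaches to AGI
    if not dead:
        surviving += 1
        if saw_benign_agi:
            survivors_who_saw_agi += 1

print(f"branches with surviving observers: {surviving / BRANCHES:.1%}")
print(f"of those, fraction that ever saw an AGI: "
      f"{survivors_who_saw_agi / max(surviving, 1):.1%}")
# With these numbers, AGI gets created on most branches, but most surviving
# observers saw only "AI winters", because the branches where AGI was created
# mostly have no one left to observe it.
```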
Thank you for this; I still read it periodically.
First of all, I do agree with you that the question of why other civilizations haven't created AGIs that have then spread far enough to reach Earth is a really interesting one as well, and I would be happy to see a discussion of that question.
For that question, I think you are missing a fourth possibility: AGI is almost always deadly, so on quantum branches where it develops anywhere in the light cone, no one observes it (at least not for long). We don't see other civilizations' AGI because we simply are not alive on those quantum branches.
My first attempt to turn this into a paper: http://jjc.freeshell.org/writings/hardware_limits_for_agi_d1.pdf
For what it is worth, I did write a programming language over the course of about two years of my life ( https://github.com/jrincayc/rust_pr7rs/ ). I do agree that there are better and worse ways to spend time, and it is probably worth thinking about this. I think that the fact that you "recoiled from the thought of actually analyzing whether there was anything better I could have been doing" is a good hint that maybe writing a programming language wasn't the best thing for you. I wish you good skill and good luck in finding ways to spend your time.
Sounds interesting.
For somewhat related reasons, I made a two-column version of Rationality: From AI to Zombies ( https://github.com/jrincayc/rationality-ai-zombies ), which is easier to print than the original version, and have printed multiple copies in the hope that some survive a global catastrophe.
Hm, I don't think I want the Human-Descended Ideal Agent and the AI-Descended Ideal Agent to be in complete agreement. I want them to be compatible, as in able to live in the same universe. I want the AI to not make humans go extinct, and to be ethical in a way that it can explain to me and (in a non-manipulative way) convince me is ethical. But in some sense, I hope that the AI can come up with something better than just what humans would want in a CEV way. (And what about the opinions of the other vertebrates and cephalopods on this planet, and the small furry creatures from Alpha Centauri?)
I don't think it is okay to do unethical things for music; music is not that important. But I hope that the AIs are doing some things that are as incomprehensible and pointless to us as music would be to evolution (or to a being that was purely maximizing genetic fitness).
As a slightly different point, I think that the Ideal Agent is somewhat path-dependent, and that there are multiple different Ideal Agents that I would consider ethical and would be happy to share the same galaxy with.