Hi everyone,
I've recently discovered lesswrong and love it! So first, let me thank you all for fostering such a wonderful community.
I've been reading a lot of the AI material and I find myself asking a question you all have surely considered, so I wanted to pose it.
If I believe that human beings are evolutionarily descended from apes, and I ask myself whether apes -- if they had control over allowing human evolution to happen -- should have allowed it or stopped it, I'm honestly not sure what the answer should be.
It seems like apes would in all likelihood be better off without humans around, so from the perspective of apes, they probably should not have allowed it to happen. However, looked at from a different frame of reference, such as what is good for the earth or for the universe, maybe the evolution of humans from apes was a good thing. Certainly from the perspective of humans, most of us would believe that allowing it to happen was a good thing.
Do we find ourselves in a similar scenario with humans and AI? Are there benefits from other frames of reference besides humanity to allow the development of AI, even if that AI may pose existential threats to human civilization? And if so, are those perspectives being taken into full enough account when we think about AI risk assessment?
I do think so, yeah. Here are some links to things I and others have said. It's taken me long enough to choose this list of links that I'm not going to spend much time summarizing them unless your reply indicates that you're stuck on understanding them; feel free to say so if that's the case and I'll see what I can do. Note that for all of these, I browsed and skimmed enough to feel that I was linking something relevant, but consider them human-grade semantic search results (I actually used several semantic search engines as a seed), so they're not necessarily guaranteed to be what you seek.
Some comments where I claim without evidence things that seem related to your question