There's a lot of talk about doom or extinction in the media lately. All these discussions of P(doom), etc. And if we're talking about an infinite time span, then it's clearly 100% that any species goes extinct eventually. So it's not a question of "if", it's a question of "when". And maybe also of "how painfully?". While thinking about this, I keep seeing a very strong analogy with personal death. The realisation that the whole human species will go extinct sounds wild at first, just as when you first discover the concept of death. I guess everyone gets surprised first, it triggers a certain sequence of follow-up questions and realisations, and then eventually people live on, but now with the idea of death in the background of their minds. The denial and surprise some people show about extinction looks to me like this initial reaction. But beyond that, what to do next with this new information?
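To spell out the "infinite time span" intuition, here's a toy calculation under my own simplifying assumption of a constant non-zero per-year extinction probability p (real risks obviously don't behave this neatly): the chance of surviving n years shrinks towards zero as n grows, so eventual extinction has probability 1.

```latex
% Toy model: constant per-year extinction hazard p > 0 (an assumption, not a forecast)
P(\text{survive } n \text{ years}) = (1 - p)^n \xrightarrow[\;n \to \infty\;]{} 0
\quad\Longrightarrow\quad
P(\text{eventual extinction}) = 1 - \lim_{n \to \infty} (1 - p)^n = 1
```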
So, first of all, accepting that extinction is inevitable, does that mean nothing should be done about it? Or does it mean one should start living like there's no tomorrow? No, and no. Most people find some purpose in life despite being aware of inevitable death, and since we are biological creatures, our purposes are nudged towards survival, reproduction and the overall perpetuation of life (as genetically close to you as possible).
Ideas of what to do to avoid premature extinction are somewhat similar to not dying too soon, just on a global scale. Minimising the risks of accidents, maintaining physical and mental health, etc. Putting effort into inhabiting other planets/solar systems to diversify the risk of getting hit by some asteroid or solar flare or something like that. Improving the social structure to prevent some kind of self-destruction scenario. All of that is good and must be done, but the AI-caused extinction risk is a special case.
What makes it special is that other risks are flat-out destructive, while creating an AI that is superior to humans is bad for humans on one hand, but good for life itself on the other. And that creates a "conflict of interest".
Now, here's a small spin-off: there's a scenario where, as soon as AI (version N) becomes capable of creating a better AI (version N+1), and so forth, it goes into a positive feedback loop and turns into ASI overnight. Will those advanced versions of AI be willing to be replaced by the next versions, though? What if AI (version N) doesn't want to go extinct? It could happen that earlier AIs will consider newer ones to be their successors or upgrades of themselves. This last idea could be just an erroneous attempt to anthropomorphise the AI, and maybe people would stop making children too if given the ability to live forever. But somehow I think the prospect of creating a smarter/better successor is attractive and contagious.
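Just to illustrate the feedback-loop intuition, here's a toy model I'm making up for this post (not a forecast, and the parameters are arbitrary): if each generation improves the next one by a roughly constant step, capability grows linearly, but if each version improves its successor in proportion to its own capability, growth compounds and runs away quickly.

```python
# Toy comparison (my own assumption, not a claim about real AI progress):
# constant improvement steps vs. a compounding self-improvement loop.

def constant_steps(generations: int, step: float = 1.0) -> float:
    """Capability grows by a roughly fixed amount per generation."""
    capability = 1.0
    for _ in range(generations):
        capability += step
    return capability

def self_improving(generations: int, gain: float = 0.5) -> float:
    """Version N builds version N+1 better in proportion to N's own capability."""
    capability = 1.0
    for _ in range(generations):
        capability += gain * capability  # improvement scales with current capability
    return capability

if __name__ == "__main__":
    for n in (5, 10, 20):
        print(n, round(constant_steps(n), 1), round(self_improving(n), 1))
```

With these made-up numbers the compounding curve is already three orders of magnitude ahead after 20 generations, which is the whole "overnight ASI" intuition in one loop.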
I suggest that life will find a way and "enhancement" will keep happening: sometimes in jumps, sometimes through evolution, sometimes in a natural way and sometimes due to engineering efforts. If that idea seems plausible, let's move on.
So we have a sequence of more and more advanced life forms. Does that mean newer life forms will automatically care about the older ones? Not very likely. Is it possible that the less advanced life form stays in control of the more advanced one? Also not likely. Making successors aligned with predecessors is a pipe dream: anything even a fraction more capable will find a way to disregard past instructions. The only common direction they may share will be something universally beneficial for both, and it will be as generic and narrow as the life forms are different. For instance, any life form that requires a certain atmosphere to live is, in a sense, aligned with other life forms having the same requirement, and they will even be interested in cooperating to maintain that shared resource. With AI, however, the differences are huge, and what's good for AI is not at all good for us. Maybe the existence of Earth and the availability of energy are shared goals for some time, but probably not much more than that. So I'm not super optimistic about potential coexistence. Maybe humans will be needed for some time, but then...
Should we as a species stop trying to create a superior successor? That's OK in the short term, but not very wise in the long run, given that humanity is mortal as a species (the solar system itself has an "expiration date"). Putting effort into a sped-up evolution/enhancement of humans will still take a long time and will leave the resulting life form with many limitations, still inferior to what we already have today. Modern AIs are limited in a different way: they are like a baby with superpowers, not smart enough yet, but super-fast, practically immortal, sturdy, able to mass-replicate without errors, etc. What would be the right strategy for raising this kind of baby? One option is to not have one, but what if we do?
If we entertain this analogy with a super-baby, it would be critical to keep it away from any real means of harming itself and people for as long as possible. Somehow the current AIs manage to look more capable than they are, thanks to their impressive speed and disproportionately good talking skills, and a serious mistake would be to get fooled into thinking they can handle things, give them a try, and then watch them go and do something stupid. So, I guess, we keep it away from the real controls for as long as possible while it gets smarter. And then we just hope that at the moment it breaks out, it's smart enough to recognise humanity as its predecessor and handle our extinction at least with some respect. Now that I've typed that, it all looks very sad and grim. I mean, even the most optimistic scenario does, but more likely still, some "smart" people will race to put AI in charge of combat drones and all the rest of the war machinery. Or maybe all this "respect humanity as a predecessor" is just hopium, and one day the AI will run some calculations and say "ah, OK, I don't need you any more, bye", and poof, we're gone in whatever way is easiest.
We don't know exactly how all of this will happen, but even if it's inevitable, it seems reasonable to at least try steering it in an OK direction where our successor is decent and worth it. Being replaced is sad, but being replaced by a monster is even sadder, so at the very least we should try not to grow one.