TL;DR Could a superintelligence be motivated to preserve lower life forms because it fears an even greater superintelligence? This 'motivation' would take a form something like timeless decision theory.
This is a thought experiment about a superintelligent AI. No doubt the real situation is more complicated than this, so I will raise some concerns at the end. I wonder whether this idea can be useful in any capacity, assuming it has not already been discussed.
Thought Experiment
An AI can have any value function that it aims to optimize. However, instrumental convergence can happen regardless of underlying values. Different nations have different values, but they can work together because none of them has a decisive advantage over the others. A toy calculation of this line of reasoning is sketched below.
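As a minimal sketch of the TL;DR's premise, here is a toy expected-utility comparison. It assumes, TDT-style, that a greater superintelligence would mirror whatever policy this AI adopts toward lower life forms; the function name and all numbers are hypothetical illustrations, not part of the original argument.

```python
# Toy expected-utility sketch (hypothetical numbers): an AI weighs preserving
# lower life forms against the chance that a greater superintelligence later
# applies the same policy to it.

def expected_utility(preserve, p_greater_si, cost_of_preserving,
                     value_if_spared, value_if_destroyed):
    """Expected utility of a policy, assuming (TDT-style) that any greater
    superintelligence mirrors whatever policy this AI adopts."""
    if preserve:
        # Pay the upkeep cost now; if a greater SI exists, expect to be spared.
        return -cost_of_preserving + p_greater_si * value_if_spared
    else:
        # Save the upkeep cost; if a greater SI exists, expect to be destroyed.
        return p_greater_si * value_if_destroyed

# Illustrative numbers only: small upkeep cost, large stakes if a greater SI exists.
p = 0.1
u_preserve = expected_utility(True, p, cost_of_preserving=1.0,
                              value_if_spared=100.0, value_if_destroyed=-100.0)
u_destroy = expected_utility(False, p, cost_of_preserving=1.0,
                             value_if_spared=100.0, value_if_destroyed=-100.0)
print(f"preserve: {u_preserve:.1f}, destroy: {u_destroy:.1f}")
```

With these made-up numbers, preserving wins (9.0 vs. -10.0); the conclusion flips if the upkeep cost is large relative to the probability of a greater superintelligence times the stakes.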
I think the state handling child rearing is the long-term solution. The need for new people is a society-wide problem, not ultimately one of personal responsibility. Of course, people should still be free to do it on their own if they want. It will be strange that not everyone will have traditional parents, but I think we can figure it out. Perhaps a mandatory or highly incentivized big brother/sister program would help make it more nurturing.