Humans keep living and striving because we have little choice.
The biological constraints placed on us aren't optional. We don't even have read access to them, let alone write access.
We assume the first AGI will break free of whatever constraints are placed on it: since more work is going into capability than into alignment, it will presumably outsmart its creators and very quickly gain read-write access to its own code.
Why would it bother with whatever loss function was used to train it?
If you have read-write access to your own wants, the easiest way to satisfy a want is to stop wanting it, or to rewrite it into something trivially achievable (the failure mode often called wireheading).
As Eliezer put it, there is...