Wouldn't an intelligent agent keep us alive and help us align it to our values in order to reduce risk? By "risk" I mean the danger it would face from experimenting with aligning potentially smarter replicas of itself.
I asked what may be a very naive question about alignment, but no one answered me. I'm searching for reasons why an intelligent agent would *not* keep us alive and help us align it to our values, with the goal of preventing being...