To what extent is AI safety work trying to get AI to reliably and safely do what the user asks vs. do what is best in some ultimate sense?