How should AIs update a prior over human preferences?