Correctly handling uncertainty in values, knowledge, and predictions is necessary for achieving any complex goal or executing any complex plan. So the capability to do this is probably something an AI will have to acquire in order to be an AGI.
I've recently started thinking about how a nascent "hot mess" superintelligence could reflect on its own values and converge to something consistent. The simplest way to think about this, it seems to me, is to model it as a process of the superintelligence resolving uncertainty about its own preferences.
Suppose an agent knows that it is an expected utility maximizer and is uncertain between two utility functions, $U_1$ and $U_2$, with assigned probabilities $p_1$ and $p_2$ (say $p_1 > p_2$). The agent must choose between two actions, $a_1$ and $a_2$. Let's say that the optimal decision for $U_1$ is $a_1$ and for $U_2$ is $a_2$. To maximize expected utility, the agent chooses $a_1$. However, choosing $a_1$ is also decisive evidence in favor of $U_1$, and therefore the agent updates $p_1$ to 1. This representation of uncertain preferences looks unsatisfactory, because it quickly and predictably collapses onto a single utility function.
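The collapse can be seen in a toy simulation (the payoffs and prior below are my own assumed numbers, chosen only to illustrate the dynamic): the agent picks the action favored by the more probable utility function, then treats its own choice as decisive evidence for that function.

```python
# Toy illustration: an agent uncertain between two utility functions
# collapses onto one of them after conditioning on its own choice.

# Hypothetical payoffs: U1 prefers a1, U2 prefers a2.
U = {
    "U1": {"a1": 1.0, "a2": 0.0},
    "U2": {"a1": 0.0, "a2": 1.0},
}
p = {"U1": 0.6, "U2": 0.4}  # assumed prior over utility functions

# Expected utility of each action under the prior.
eu = {a: sum(p[h] * U[h][a] for h in U) for a in ("a1", "a2")}
choice = max(eu, key=eu.get)  # "a1", since p(U1) > p(U2)

# Naive update: treat the choice as decisive evidence for the
# hypothesis that recommends it (likelihood 1 vs. 0).
likelihood = {h: 1.0 if max(U[h], key=U[h].get) == choice else 0.0 for h in U}
z = sum(likelihood[h] * p[h] for h in U)
posterior = {h: likelihood[h] * p[h] / z for h in U}

print(choice, posterior)  # a1 {'U1': 1.0, 'U2': 0.0}
```

Whatever the prior (short of exact indifference), the posterior puts probability 1 on a single hypothesis after one choice, which is exactly the failure mode described above.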
Does anyone know of a good model of uncertain preferences that could avoid this failure mode, perhaps with some additions?
Nash bargaining (between the different hypotheses about the agent's preferences) looks close to having the desirable properties, but I'm not sure; maybe something better has already been developed.
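A minimal sketch of what that could mean, under my own assumptions (the same hypothetical payoffs as above, and the worst outcome of each hypothesis as its disagreement point): instead of collapsing onto one hypothesis, the agent picks the mixed policy that maximizes the product of each hypothesis's gain over its disagreement payoff.

```python
# Sketch (assumed setup, not a settled proposal): Nash bargaining
# between two utility-function hypotheses over a mixed policy.

# Hypothetical payoffs: U1 prefers a1, U2 prefers a2.
U1 = {"a1": 1.0, "a2": 0.0}
U2 = {"a1": 0.0, "a2": 1.0}
d1, d2 = min(U1.values()), min(U2.values())  # assumed disagreement point

def nash_product(q):
    """Nash product of the mixed policy that plays a1 with probability q."""
    eu1 = q * U1["a1"] + (1 - q) * U1["a2"]
    eu2 = q * U2["a1"] + (1 - q) * U2["a2"]
    return (eu1 - d1) * (eu2 - d2)

# Grid search over mixed policies; with these symmetric payoffs the
# bargained policy plays each action half the time.
qs = [i / 1000 for i in range(1001)]
q_star = max(qs, key=nash_product)
print(q_star)  # 0.5
```

Note the contrast with expected utility maximization over the same uncertainty: the bargained policy keeps both hypotheses "alive" by randomizing, rather than letting the more probable one dictate the choice outright.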