AGI is likely to be cautious
According to Professor Stuart Russell, expressing a sentiment I have often seen restated in the AI safety community:

> A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; ...
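A toy illustration of this effect (my own sketch, not Russell's): the objective below depends only on the first of three variables, and the optimizer, being indifferent about the other two, leaves them at an extreme corner of the search space rather than anywhere "sensible".

```python
import itertools

def objective(x):
    # The objective depends only on x[0] (k=1 of n=3 variables);
    # x[1] and x[2] are unconstrained "don't care" variables.
    return -(x[0] - 3) ** 2

# Brute-force search over a bounded grid; max() keeps the first
# maximizer it encounters, so ties are broken arbitrarily.
grid = range(-10, 11)
best = max(itertools.product(grid, repeat=3), key=objective)

print(best)  # -> (3, -10, -10): the unconstrained variables sit at an extreme
```

Nothing penalizes extreme values of the unconstrained variables, so the optimizer has no reason to avoid them; here the arbitrary tie-break happens to land them at the boundary, which is the failure mode the quote warns about.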
Feb 23, 2023