Omohundro presents two sets of values: one for self-improving artificial intelligences [12], and another that he argues will emerge in any sufficiently advanced AGI system [23]. The former set comprises four main drives:
Bostrom argues for an orthogonality thesis, according to which values and intelligence are independent. But he also argues that, despite this independence, any recursively self-improving intelligence would likely possess a particular set of instrumental values that are useful for achieving any kind of terminal value [4].3 On his view, those values are:
Yudkowsky echoes Omohundro's point that the convergence thesis is consistent with the possibility of Friendly AI. However, he also notes that the convergence thesis implies that most AIs will be extremely dangerous merely by being indifferent to one or more human values [45]: