TL;DR
Contrary to what Bostrom claims in his presentation of the orthogonality thesis, I think a large chunk of the goal-intelligence plane would be ruled out if moral truths are self-motivating.
Intro
In the seminal paper The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents, Nick Bostrom introduces his Orthogonality Thesis, proposing the independence of goal-content and intelligence level:
The Orthogonality Thesis
Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.
Bostrom then goes on to address various objections that might be raised and provides counterarguments.