I am not sure what 'accurate moral beliefs' means. By analogy with 'accurate scientific beliefs', it seems as if Mr Danaher is saying there are true morals out there in reality, which I had not thought to be the case, so I am probably confused. Can anyone clarify my understanding with a brief explanation of what he means?

Well, I suppose I had in mind the fact that any cognitivist metaethics holds that moral propositions have truth values, i.e. are capable of being true or false. And if cognitivism is correct, then it would be possible for one's moral beliefs to be more or less accurate (i.e. to be more or less representative of the actual truth values of sets of moral propositions).

While moral cognitivism is most at home with moral realism - the view that moral facts are observer-independent - it is also compatible with some versions of anti-realism, such as the constructiv... (read more)

Jayson_Virissimo: And yet, several high-status Less Wrongers continue to affirm utilitarianism (specifically, with equal weight for each person in the social welfare function). I have criticized these beliefs in the past (as not, in any way, constraining experience), but have not received a satisfactory response.

John Danaher on 'The Superintelligent Will'

by lukeprog · 1 min read · 3rd Apr 2012 · 12 comments

Philosopher John Danaher has written an explication and critique of Bostrom's "orthogonality thesis" from "The Superintelligent Will." To quote the conclusion:


Summing up, in this post I’ve considered Bostrom’s discussion of the orthogonality thesis. According to this thesis, any level of intelligence is, within certain weak constraints, compatible with any type of final goal. If true, the thesis might provide support for those who think it possible to create a benign superintelligence. But, as I have pointed out, Bostrom’s defence of the orthogonality thesis is lacking in certain respects, particularly in his somewhat opaque and cavalier dismissal of normatively thick theories of rationality.

As it happens, none of this may affect what Bostrom has to say about unfriendly superintelligences. His defence of that argument relies on the convergence thesis, not the orthogonality thesis. If the orthogonality thesis turns out to be false, then all that happens is that the kind of convergence Bostrom alludes to simply occurs at a higher level in the AI’s goal architecture. 

What might, however, be significant is whether the higher-level convergence is a convergence towards certain moral beliefs or a convergence towards nihilistic beliefs. If it is the former, then friendliness might be necessitated, not simply possible. If it is the latter, then all bets are off. A nihilistic agent could do pretty much anything, since no goals would be rationally entailed.