Tired: can humans solve artificial intelligence alignment?

Wired: can artificial intelligence solve human alignment?

“Philosophers have recently put forward the possibility of achieving moral enhancement through artificial intelligence… proposing various forms of ‘artificial moral advisor’ (AMA) to help people make moral decisions without the drawbacks of human cognitive limitations. In this paper, we provide a new perspective on the AMA, drawing on empirical evidence from moral psychology to point out several challenges to these proposals that have been largely neglected by AI ethicists.”



Apologies that I haven't read the article (I'm not an academic), but I just wanted to cast my one little vote that I enjoy this point, and the clever way you put it.

Briefly, it's my sense that most of the self-inflicted problems which plague humanity (war, for example) arise out of the nature of thought, that which we are all made of psychologically. They're built in.

I can see how AI, like computing and the Internet, could have a significant impact on the content of thought, but not on the nature of thought.

Genetic engineering seems a more likely candidate for editing the nature of thought, but I'm not at all optimistic that this could happen any time soon, or perhaps ever.