With regard to your (and Eliezer's) quest, I think Oppenheimer's Maxim is relevant:

It is a profound and necessary truth that the deep things in science are not found because they are useful; they are found because it was possible to find them.

A theory of machine ethics may very well be the most useful concept ever discovered by humanity. But as far as I can see, there is no reason to believe that such a theory can be found.

Daniel_Burfoot,

I share your pessimism. When superintelligence arrives, humanity is almost certainly fucked. But we can try.

A Brief Overview of Machine Ethics

by lukeprog · 1 min read · 5th Mar 2011 · 91 comments

Earlier, I lamented that even though Eliezer named scholarship as one of the Twelve Virtues of Rationality, there is surprisingly little interest in (or citing of) the academic literature on some of Less Wrong's central discussion topics.

Previously, I provided an overview of formal epistemology, the field of philosophy that deals with (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.

Now, I've written Machine Ethics is the Future, an introduction to machine ethics, the academic field that studies the problem of how to design artificial moral agents that act ethically (along with a few related problems). There, you will find PDFs of a dozen papers on the subject.

Enjoy!