From my perspective, it seems inaccurate to claim that I ignored your argument, since I dealt with it pretty explicitly in my paragraph about EMACS.

I certainly put a lot more effort into addressing your points than you just put into addressing mine.

I said that public access to an AI under development would be bad, because if it wasn't safe to run - that is, if running it might cause it to foom and destroy the world - then no one would be able to make that judgment and keep others from running it. You responded with an analogy to EMACS, which no one believes or has ever believed to be dangerous, and which has no potential to do disastrous things that its operators did not intend. So that analogy is really a non sequitur.

"Dangerous" in this context does not mean "powerful", it means "volatile", as in "reacts explosively with Pentiums".

A Brief Overview of Machine Ethics

by lukeprog · 1 min read · 5th Mar 2011 · 91 comments

Earlier, I lamented that even though Eliezer named scholarship as one of the Twelve Virtues of Rationality, there is surprisingly little interest in (or citing of) the academic literature on some of Less Wrong's central discussion topics.

Previously, I provided an overview of formal epistemology, the field of philosophy that deals with (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.

Now, I've written Machine Ethics is the Future, an introduction to machine ethics, the academic field that studies the problem of how to design artificial moral agents that act ethically (along with a few related problems). There, you will find PDFs of a dozen papers on the subject.