A Brief Overview of Machine Ethics

by lukeprog · 1 min read · 5th Mar 2011 · 91 comments


Earlier, I lamented that even though Eliezer named scholarship as one of the Twelve Virtues of Rationality, there is surprisingly little interest in (or citation of) the academic literature on some of Less Wrong's central discussion topics.

Previously, I provided an overview of formal epistemology, the field of philosophy that deals with (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.

Now, I've written Machine Ethics is the Future, an introduction to machine ethics, the academic field that studies the problem of how to design artificial moral agents that act ethically (along with a few related problems). There, you will find PDFs of a dozen papers on the subject.