Consider three cases in which someone is asking you about morality: a clever child, your guru (and/or Socrates, if you're more comfortable with that tradition), or an about-to-FOOM AI of indeterminate friendliness. For each of them, you want your thoughts to be as clear as possible; the other entity is clever enough to point out flaws (or powerful enough that your flaws might be deadly), and for none of them can you assume that their prior or posterior morality will be very similar to your own. (As Thomas Sowell puts it, children are barbarians who need to be civilized before it is too late; your guru will seem willing to lead you anywhere, and the AI probably doesn't think the way you do.)

I suggest that all three can be approached in the same way: by attempting to construct an amoral approach to morality. At first glance, this approach gives a significant benefit: circular reasoning is headed off at the pass, because you need to explain morality (as best you can) to someone who does not understand or feel it.

Interested in what comes next?

The main concern I have is that there is already a rather extensive Metaethics sequence, and this seems very similar to The Moral Void and The Meaning of Right. The benefit of this post, if there is one, is a different approach to the issue (I think I can get a useful sketch of it in one post) and probably a different conclusion. At the moment, I don't buy Eliezer's approach to the Is-Ought gap (Right is a 1-place function... why?), and I think a redefinition of the question may make for somewhat better answers.

(The inspirations for this post, if you're interested in me tackling them directly instead, are the criticisms of utilitarianism obliquely raised in a huge tree in the Luminosity discussion thread (the two interesting dimensions there are questioning assumptions and scope errors, of which I suspect scope errors is the more profitable) and the discussion around what shokwave calls the Really Scary Idea.)

4 comments

I'd be interested in seeing you try, and I would almost certainly provide feedback and criticism, mainly because I consider the Meta-ethics sequence to be by far the worst of the Sequences that I have read.

Right is a 1-place function... why?

What Eliezer calls Right others might call Right_human. It's pure semantics, but my impression is that for most people Eliezer's framing is the less intuitive of the two.

He says

Here we are treating morality as a 1-place function. It does not accept a person as an argument, spit out whatever cognitive algorithm they use to choose between actions, and then apply that algorithm to the situation at hand.

but also says

If I define rightness to include the space of arguments that move me, then when you and I argue about what is right, we are arguing our approximations to what we would come to believe if we knew all empirical facts and had a million years to think about it - and that might be a lot closer than the present and heated argument. Or it might not.

The thrust of his argument is that for any given being, 'right' corresponds to some implicit function, which does not depend on who is performing the action. That function, however, may differ for different beings. So Right_human is not guaranteed to be well-defined, but Right_topynate is.
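If it helps to see the arity point concretely, here is a rough sketch in Python. Everything in it (Situation, fix_criterion, the toy string-matching criterion) is purely illustrative, my own stand-in rather than anything from Eliezer's posts:

    # Illustrative sketch only: these names and the toy criterion are
    # hypothetical stand-ins, not anything defined in the metaethics posts.
    from typing import Callable

    Situation = str  # stand-in for a full description of an act and its context

    # 2-place rightness: takes the evaluator's criterion as an argument, so
    # "right" varies with whoever you plug in.
    def right_2place(criterion: Callable[[Situation], bool], situation: Situation) -> bool:
        return criterion(situation)

    # 1-place rightness: one particular criterion is fixed in advance (curried
    # in), so the resulting function no longer takes an agent as an argument.
    def fix_criterion(criterion: Callable[[Situation], bool]) -> Callable[[Situation], bool]:
        def right(situation: Situation) -> bool:
            return criterion(situation)
        return right

    # "Right_topynate", in this sketch, is just the 1-place function you get by
    # fixing one agent's implicit criterion once and for all.
    right_topynate = fix_criterion(lambda s: "gratuitous harm" not in s)

    print(right_topynate("telling a hard truth kindly"))       # True
    print(right_topynate("an act involving gratuitous harm"))  # False

Once the criterion is fixed, arguing about what is right is arguing about the output of that one function, not about whose function to consult.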

I don't disagree. I would only add that CEV requires a large degree of agreement among individuals' implicit Right_x functions. Hence me saying Right_human.

Well, I might disagree. Right_topynate isn't guaranteed to be well-defined either, but it's more likely to be than Right_human.