What is Eliezer Yudkowsky's meta-ethical theory?

Eliezer's metaethics might be clarified in terms of the distinctions between sense, reference, and reference-fixing descriptions. I take it that Eliezer wants to use 'right' as a rigid designator to denote some particular set of terminal values, but this reference fact is fixed by means of a seemingly 'relative' procedure (namely, whatever terminal values the speaker happens to hold, on some appropriate [if somewhat mysterious] idealization). Confusions arise when people mistakenly read this metasemantic subjectivism into the first-order semantics or mea… (Read more)

Showing 3 of 11 replies

What is objection (1) saying? That asserting there are moral facts is incompatible with the fact that people disagree about what they are? Specifically, that when people agree there is such a thing as a reason that applies to both of them, they disagree about how that reason is grounded in reality?

Do we not then say they are both wrong about there being one "reason"?

I speak English(LD). You speak English(RC). The difference between our languages is of the same character as that between a speaker of Spanish and a speaker of French. I say "I"… (Read more)

4 · Wei_Dai · 8y

This summary of Eliezer's position seems to ignore the central part about computation [http://lesswrong.com/lw/sw/morality_as_fixed_computation/]. That is, Eliezer does not say that 'right' means 'promotes external goods X, Y and Z', but rather that it means a specific computation, one that can be roughly characterized as 'renormalizing intuition [http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/]' and that eventually outputs something like 'promotes external goods X, Y and Z'. I think Eliezer would argue that at least some of the objections listed here are not valid once we add the part about computation. (Specifically, disagreements and fallibility can result from lack of logical omniscience regarding the output of the 'morality' computation.) Is the reason for skipping over this part of Eliezer's idea that standard (Montague) semantic theory treats all logically equivalent language as having the same intension? (I believe this is known as "the logical omniscience problem" in linguistics and philosophy of language.)
1 · lukeprog · 9y

Thinking more about this, it may have been better if Eliezer had not framed his meta-ethics sequence around "the meaning of right." If we play rationalist's taboo with our moral terms and thus avoid moral terms altogether, what Eliezer seems to be arguing is that what we really care about is not (a) the realization of whatever states of affairs our brains are wired to send reward signals in response to, but (b) that we experience peace and love and harmony and discovery and so on.

His motivation for thinking this way is a thought experiment, one which might become real in the relatively near future, about what would happen if a superintelligent machine could rewire our brains. If what we really care about is (a), then we shouldn't object if the superintelligent machine rewires our brains to send reward signals only when we are sitting in a jar. But we would object to that scenario. Thus, what we care about seems not to be (a) but (b). In a meta-ethicist's terms, we could interpret Eliezer not as making an argument about the meaning of moral terms, but instead as making an argument that (b) is what gives us Reasons, not (a).

Now, all this meta-babble might not matter much. I'm pretty sure that even if I were persuaded that the correct meta-ethical theory states I should be okay with releasing a superintelligence that would rewire me to enjoy sitting in a jar, I would do whatever I could to prevent such a scenario and instead promote a superintelligence that would bring peace and joy and harmony and discovery and so on.


by lukeprog · 1 min read · 29th Jan 2011 · 375 comments


In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at one point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.

Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.

If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!

Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.