In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. And at least at this point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates like cognitivism, motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.
An unusual number of the comments here feel unnecessary to me, so let me see if I understand this.
I have a utility function which assigns an amount of utility (positive or negative) to different qualities of world-states. (Just to be clear, ‘me being exhausted’ is a quality of a world-state, and so is ‘humans have mastered Fun Theory and apply it in a fun-maximizing fashion to humankind.’) Other humans have their own utility functions, so they may assign a different amount of utility to different qualities of world-states.
I have a place in my utility function for the utility functions of other people. As a result, if enough other people (of sufficient significance to me) attach high utility to X being a quality of a future world-state, I may work to make X a quality of the future world-state even if my utility function attaches a higher utility to Y than to X before considering other people’s utility functions (given that X and Y are mutually exclusive). Other rational agents will do something similar, depending on the weight that other people’s utility functions carry in their own. Of course, if I gain new information, I may act differently in order to maximize my utility function and/or, more relevantly to this discussion, I may change my utility function itself because I feel differently.
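To make the arithmetic concrete, here is a minimal sketch of the aggregation described above. This is my own toy model, not anything Eliezer endorses: the world-states, qualities, utility numbers, and the weight are all made up for illustration. Each agent assigns utility to qualities of world-states, and my "effective" utility for a world-state folds in other agents' utilities at weights I choose.

```python
# Hypothetical world-states, described as sets of qualities.
X = frozenset({"humans master Fun Theory", "I am exhausted"})
Y = frozenset({"I am well rested"})

# My raw utility over qualities (illustrative numbers only).
my_utility = {
    "humans master Fun Theory": 4,
    "I am exhausted": -2,
    "I am well rested": 3,
}

# Another agent's utility over the same qualities.
their_utility = {
    "humans master Fun Theory": 8,
    "I am well rested": 0,
}

def utility(table, state):
    """Sum the utilities a table assigns to the qualities of a world-state."""
    return sum(table.get(q, 0) for q in state)

def effective_utility(state, others, weights):
    """My utility for a state, plus each other agent's utility for it,
    scaled by the weight their preferences carry in my utility function."""
    total = utility(my_utility, state)
    for table, w in zip(others, weights):
        total += w * utility(table, state)
    return total

# Before considering the other agent, I prefer Y (3) to X (4 - 2 = 2).
# Giving their preferences weight 0.5 flips the ordering:
# X scores 2 + 0.5 * 8 = 6, while Y stays at 3, so I work toward X.
```

Nothing here depends on the preference sets being consistent enough to deserve the name "utility function"; the point is only that weighting other people's preferences inside my own can reverse which world-state I pursue.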
Do I grok? If so, I don’t really understand why people are trying to formulate a definition of the word ‘should,’ because it doesn’t seem to have any use in maximizing a utility function. ‘Gary should Y’ and ‘Gary should_Gary Y’ seem to be statements that people would make before learning to reduce desires to parts of a utility function.
Dorikka,
If that's what Eliezer means, then this looks like standard practical rationality theory. You have reasons to act (preferences) so as to maximize your utility function (except that it may not be right to call it a "utility function," because there's no guarantee that each person's preference set is logically consistent). The fact that you want other people to satisfy their preferences, too, means that if enough other people want world-state X, your utility function will assign higher utility to world-state X than to world-state Y even if w...