I think we are simply having a definitional dispute. As the term is used generally, moral realism doesn't mean that each agent has a morality, but that there are facts about morality that are external to the agent (i.e. objective). Now, "objective" is not identical to "universal," but in practice, objective facts tend to cause convergence of beliefs. So I think what I am calling "moral realism" is something like what you are calling "Friendliness realism."

Lengthening the inferential distance further is that realism is a two-place word. As you noted, there is a distinction between realism(Friendliness, agents) and realism(Friendliness, humans).

That said, I do think that "people would perceive an AI implementing objective morals as Friendly" if I believed that objective morals exist. I'm not sure why you think that's a stronger claim than "people who are sufficiently educated and exposed to the right knowledge will come to agree with certain universal objective morals." If you believed that there were objective moral facts and knew the content of those facts, wouldn't you try to adjust your beliefs and actions to conform to those facts, in the same way that you would adjust your physical-world beliefs to conform to objective physical facts?

I think we are simply having a definitional dispute.

That seems likely. If moral realists think that morality is a one-place word, and anti-realists think it's a two-place word, we would be better served by using two distinct words.

It is somewhat unclear to me what moral realists are thinking of, or claiming, about whatever it is they call morality. (Even after taking into account that different people identified as moral realists do not all agree on the subject.)


Stupid Questions Open Thread Round 3

by OpenThreadGuy · 7th Jul 2012 · 209 comments
From the last thread:

From Costanza's original thread (entire text):

"This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well.  Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent.  If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant."

Meta:

  • How often should these be made? I think one every three months is the correct frequency.
  • Costanza made the original thread, but I am OpenThreadGuy. I am therefore not only entitled but required to post this in his stead. But I got his permission anyway.

Meta:

  • I still haven't figured out a satisfactory answer to the previous meta question of how often these should be made. It was requested that I make a new one, so I did.
  • I promise I won't quote the entire previous threads from now on. Blockquoting in articles only goes one level deep, anyway.