All of Jorterder's Comments + Replies

Use a sufficiently intelligent AI to find objective morality, if it exists and if it makes sense. It will have a better understanding of it than us. Of course, that assumes the sufficiently intelligent AI doesn't kill us all first.

Isn’t morality a human construct? Eliezer’s point is that morality is defined by us, not by an algorithm or a rule or something similar. If it were defined by something else, it wouldn’t be our morality.

You can both believe that there is no objective morality and think that if objective morality existed, then you should follow it. My criticism is this: he is saying that if objective morality existed and you didn't like it, you should ignore it. That's not logical.

Said Achmiz (8mo)
Sorry, no, you’ve misunderstood. Eliezer is saying that if “objective morality” “existed”, objective morality still wouldn’t exist. He’s not saying that there’s no such thing—he’s saying that there can’t be any such thing; that the concept is incoherent; he’s illustrating that the hypothetical scenario where “objective morality exists” can’t be consistently constructed. It is, in a sense, a proof by contradiction. So in Eliezer’s hypothetical, where “objective morality exists”, of course you should ignore “objective morality”, because there actually isn’t any such thing—because there can’t be any such thing.

This is not addressing my criticism. He is saying that if objective morality existed and you didn't like it, you should ignore it. I am not saying that objective morality exists or not; I am addressing the logic in the hypothetical world where it does exist.

Walker Vargas (8mo)
If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet. It doesn't care what the "right" thing is. Humans probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" references. The argument is more of a trick to get people to recognize that than a formal argument.

AI should find the true target using rationality. It might not exist, which would mean there is no real positive utility. But it is rational to search for it, because searching does no harm, and it might have benefit in case it exists.

[This comment is no longer endorsed by its author]
Harm and benefit for whom? If humans are the problem according to objective morality, that's bad news for us. If a powerful AI discovers that objective morality is egoism... that's also bad news for us.