https://www.lesswrong.com/posts/K9JSM7d7bLJguMxEp/the-moral-void

"If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"?  What then?

Maybe you should hope that morality isn't written into the structure of the universe.  What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that.  No, instead I ask:  What would you have wished for the external objective morality to be instead?  What's the best news you could have gotten, reading that stone tablet?

Go ahead.  Indulge your fantasy.  Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted?  If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean... if an external objective morality tells you to kill people, why should you even listen?"

How is this logical? Eliezer here is calling for you to abandon reasoning and objective truth, and instead to make up truths that are pleasant to your mind, rather than accepting what is actually true.

Truth has no obligation to be pleasant. Something can be true and unpleasant. Ignoring truth because it is unpleasant is as irrational and ignorant as believing in god.

If the Universe has an "objective should", and it says that pain is good, then it would be rational and logical to inflict pain.

We know why we inherently don't want pain: it is because of evolution. There is nothing divine about it. But putting your evolutionary instinct above actual truth is irrational and ignorant.


Eliezer is pointing out that the concept of “objective morality” (in the “the referent of ‘should’ is written on a stone tablet somewhere” sense) is not just false but incoherent. In other words, he’s making an argument against moral realism.

As such, your criticism does not apply. (Though there are different criticisms one might make, which may or may not be valid.)

[-]TAG10mo20

His argument is that if something is bad in any sense, then it is morally bad. Pain is bad in the sense that one wishes to avoid it, but then things like exercise, hard work, and formal education are also immediately unappealing, whilst being morally positive. Even if his argument worked, it would only show that moral realism is false, not incoherent.

You can both believe that there is no objective morality and think that if objective morality existed, then you should follow it. My criticism is: he is saying that if objective morality existed and you don't like it, you should ignore it. That's not logical.

Sorry, no, you’ve misunderstood.

Eliezer is saying that if “objective morality” “existed”, objective morality still wouldn’t exist. He’s not saying that there’s no such thing—he’s saying that there can’t be any such thing; that the concept is incoherent; he’s illustrating that the hypothetical scenario where “objective morality exists” can’t be consistently constructed. It is, in a sense, a proof by contradiction.

So in Eliezer’s hypothetical, where “objective morality exists”, of course you should ignore “objective morality”, because there actually isn’t any such thing—because there can’t be any such thing.

[-]TAG10mo20

thing—he’s saying that there can’t be any such thing; that the concept is incoherent;

He might well be claiming it, but he isn't validly arguing for it. The claim about pain might be

A) begging the question in favour of subjective morality

B) semantic confusion about the meaning of "bad"

Or

C) insistence that if moral facts aren't motivating, they don't exist.

But none of those arguments is particularly strong.

That’s as may be, but it’s not pertinent to the question of whether the OP’s criticism is valid.

[-]TAG10mo20

You made a claim about EY's correctness ....

Eliezer is saying that if “objective morality” “existed”, objective morality still wouldn’t exist. He’s not saying that there’s no such thing—he’s saying that there can’t be any such thing; that the concept is incoherent; he’s illustrating that the hypothetical scenario where “objective morality exists” can’t be consistently constructed. It is, in a sense, a proof by contradiction.

...as well as one about Jorterder's wrongness.

Sorry, no, you’ve misunderstood.

Note how they are interconnected. If EY hasn't claimed that objective morality is incoherent, then his claim that he wouldn't follow it must be based on something other than "I can't follow what doesn't exist". In fact what he says is just that "pain is bad" is obviously true.

Incidentally, the word "incoherent" doesn't appear in the linked posting by EY.

How would you define objective morality? What would make it objective? If it did exist, how would you possibly be able to find it?

[-]TAG7mo20

There are various theories of moral realism , which you can find in various reference works.

Use a sufficiently intelligent AI to find objective morality, if it exists and if it makes sense. It will have a better understanding of it than we do. Of course, that's assuming the sufficiently intelligent AI doesn't kill us all first.

Isn’t morality a human construct? Eliezer’s point is that morality is defined by us, not by an algorithm or a rule or something similar. If it was defined by something else, it wouldn’t be our morality.

[-]TAG7mo20

Isn’t morality a human construct?

He doesn't have a proof that it is, because he doesn't have an argument against the existence of objective morality, only an argument against its motivatingness.

If it was defined by something else, it wouldn’t be our morality.

And "our morality" wouldn't be morality if it departs from the moral facts.

Strongly upvoted because this was sitting at -15 post karma. You raise a valid point that there is a logical error here.

I think the point is that people try to point to things like God's will in order to appear like they have a source of authority. Eliezer is trying to lead them to conclude that any such tablet being authoritative just by nature is absurd, and only seems right because they expect the tablet to agree with them. Another method is asking why the tablet says what it does: ask whether God's decrees are arbitrary or whether there is a good reason for them, and if there is a good reason, why not just follow those reasons directly?

[-]TAG10mo20

Then it isn't an argument that moral realism is incoherent, and it isn't an argument that moral realism in general is false either. It's an argument against divine command theory. It might be successful as such, but it's a more modest target. (Also, not original: it would be the Euthyphro.)

This is not addressing my criticism. He is saying that if objective morality existed and you don't like it, you should ignore it. I am not saying whether objective morality exists or not, but addressing the logic in the hypothetical world where it does exist.

If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet. It doesn't care what the "right" thing is. Humans also probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" references. The argument is more of a trick to get people to recognize that than a formal argument.

[-]TAG10mo3-1

there not being any universally compelling arguments.

That was always a confused argument. A universally compelling argument is supposed to compel any epistemically rational agent. The fact that it doesn't compel a paperclipper, or a rock, is irrelevant.

Eliezer used “universally compelling argument” to illustrate a hypothetical argument that could persuade anything, even a paper clip maximiser. He didn’t use it to refer to your definition of the word.

You can say that the fact it doesn’t persuade a paper clip maximiser is irrelevant, but that has no bearing on the definition of the word as commonly used in LessWrong.

[-]TAG7mo31

...which in turn has no bearing on the wider philosophical issue. Moral realism only requires moral facts to exist, not to be motivating. There's a valid argument that unmotivating facts can't align an AI, but it doesn't need such an elaborate defense.

This is sort of restating the same argument in a different way, but:

it is not in the interests of humans to be Asmodeus's slaves.

From there I would state: does assigning the value [True] to [Asmodeus], via [Objective Logic], prove that humans should serve Asmodeus, or does it prove that humans should ignore objective logic? And if we had just proven that humans should ignore objective logic, were we ever really following objective logic to begin with? Isn't it more likely that this thing we called [Objective Logic] was, in fact, not objective logic to begin with, and the entire structure should be thrown out, and something else should instead be called [Objective Logic] which is not that, and doesn't appear to say humans should serve Asmodeus?