Today's post, No License To Be Human, was originally published on 20 August 2008. A summary (taken from the LW wiki):

 

We don't pull children off of train tracks because that's what humans do, or because we can consistently hold that it's better to pull children off of train tracks. We pull children off of train tracks because it's right.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was You Provably Can't Trust Yourself, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

13 comments

Now all you need is a system that you believe cannot prove anything which is false. It isn't permitted to be that way merely by definition, and it needs to be able to prove a significant number of things.

The first step to finding that system is being able to tell if many moral statements are false, without referencing our morality. Unless we create a morality oracle, I don't see a way to do that.

Start by considering the class of statements "In situation S, it is immoral to take an action in set A" and their complements, "In situation S, it is immoral to refrain from all actions in set A". If immorality is always avoidable, then one of those two statements is false, and any system which can prove both of them is therefore excluded.
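One way to make that exclusion precise, using notation introduced here purely for illustration (I(S, a) for "action a is immoral in situation S", T for a candidate proof system; none of this is from the original comment):

P_1 \;\equiv\; \forall a \in A,\ I(S, a) \quad \text{(every action in A is immoral in S)}
P_2 \;\equiv\; I(S, \text{refraining from every } a \in A)
\text{Avoidability of immorality: } \neg (P_1 \wedge P_2)
\text{Exclusion: } (T \vdash P_1) \wedge (T \vdash P_2) \;\Rightarrow\; T \text{ proves a false statement.}

Under that reading, any system that proves both a statement and its complement for some situation proves something false, and so fails the requirement above.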

What are the odds that what is 'right-right' will be a system compatible with a system which developed with competing pressures, and has as a major characteristic that 'people who have this system successfully convince other people to adopt it'?


Well, this is the case for mathematics and science.

Decius:

How? Can mathematics or science know if two moralities are compatible with each other?

What I meant was that what is true is indeed "a system compatible with a system which developed with competing pressures, and has as a major characteristic that 'people who have this system successfully convince other people to adopt it'". Sorry if that was unclear.

On what basis do you make that assertion?

Also, I don't think that 'true' is a correct descriptor for the One Correct Morality. 'Right' is the best word I think we have for what it is.

Me:

What I meant was that what is true is indeed "a system compatible with a system which developed with competing pressures, and has as a major characteristic that 'people who have this system successfully convince other people to adopt it'".

Decius:

On what basis do you make that assertion?

The two examples of such systems of what is true that I mentioned in the great-grandparent: mathematics and science.

Neither of those are systems of morality.

I never said they were. My point was that the statement you were implying to be extremely unlikely is in fact valid for non-moral truths.

That's because we have a physical truth oracle that we can use to test the validity of physical truths. If we could objectively observe the morality of an action, then we could have a system of morality as accurate, relative to the perfect one, as our system of science is relative to physical truth.

Right now I would figure that our system of morality (being roughly as developed as Aristotle's) bears as much relation to the proposed right system as Aristotle's science bears to the actual laws of physics. For the same reasons.

Further, I don't think that, according to my current morality, I should switch my beliefs to what is right, even if I could figure out what that is. I'd have to know how to test acts for morality first, but if that test went against my current judgement I would likely judge the test flawed, in the same way that I would call a calculator wrong if it consistently told me that 2+2=5.

Can anybody help me out by suggesting an alternate word for "license"? Or describe what it's supposed to mean? I'm really having trouble understanding what this post is saying.

On a related note, does anybody want to take a stab at writing a summary for this post?

[anonymous]:

A license is a kind of credential that allows one the moral/ethical/legal right to perform a certain kind of action.

In this case, I believe he's saying that merely doing "the human thing to do" is not a credential that makes "doing the human thing" moral in and of itself.

Unfortunately his analogy is with some complicated mathematical logic.

Thank you. That's very helpful.

"License" merely means "permission" or "justification".