Today's post, Fighting a Rearguard Action Against the Truth, was originally published on 24 September 2008. A summary (taken from the LW wiki):


When Eliezer started to consider the possibility of Friendly AI as a contingency plan, he permitted himself a line of retreat. He was now able to start slowly reconsidering positions in his metaethics, and to move gradually towards better ideas.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was That Tiny Note of Discord, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

7 comments

This post says "Comments (2)" and yet no comments are showing up. Why? EDIT: Now "Comments (3)" but I can only see my own comment.

There are two banned nonsense comments which are still counted. See this bug report.

For me, it says "Comments (5)", but I see 3. This will (theoretically) be the sixth/fourth.

Yeah, so the number of comments visible is two less than the number claimed. Where are the missing comments?

[anonymous]

Testing.

[This comment is no longer endorsed by its author]
[anonymous]

"Eliezer2000 is starting to think inside the black box. His reasons for pursuing this course of action—those don't matter at all." (link)

"When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI. His reasons for doing this don't matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism." (link)

That's two instances of Eliezer placing no moral value "at all" on his own motives in his pursuit of AI morality. Not necessarily a contradiction, but less elegant than it might be.

I don't think he's saying that motives are morally irrelevant - I think he's saying that they are irrelevant to the point he is trying to make with that blog post.