[SEQ RERUN] Fighting a Rearguard Action Against the Truth

by MinibearRex
6th Sep 2012

7 comments, sorted by top scoring
Oscar_Cunningham · 13y · 7 points

This post says "Comments (2)" and yet no comments are showing up. Why? EDIT: Now "Comments (3)" but I can only see my own comment.

Vladimir_Nesov · 13y · 2 points

There are two banned nonsense comments which are still counted. See this bug report.

MinibearRex · 13y · 0 points

For me, it says "Comments (5)", but I see 3. This will (theoretically) be the sixth/fourth.

Oscar_Cunningham · 13y · 0 points

Yeah, so the number of comments visible is two less than the number claimed. Where are the missing comments?

[anonymous] · 13y · 0 points

Testing.

[This comment is no longer endorsed by its author]
[anonymous] · 13y · 0 points

"Eliezer2000 is starting to think inside the black box. His reasons for pursuing this course of action—those don't matter at all." (link)

"When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI. His reasons for doing this don't matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism." (link)

That's two instances of Eliezer placing no moral value "at all" on his own motives in his pursuit of AI morality. Not necessarily a contradiction, but less elegant than it might be.

KPier · 13y · 2 points

I don't think he's saying that motives are morally irrelevant - I think he's saying that they are irrelevant to the point he is trying to make with that blog post.


Today's post, Fighting a Rearguard Action Against the Truth, was originally published on 24 September 2008. A summary (taken from the LW wiki):


When Eliezer started to consider the possibility of Friendly AI as a contingency plan, he permitted himself a line of retreat. He was now able to slowly start to reconsider positions in his metaethics, and move gradually towards better ideas.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was That Tiny Note of Discord, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Mentioned in
[SEQ RERUN] My Naturalistic Awakening