Today's post, Extenuating Circumstances, was originally published on 06 April 2009. A summary (taken from the LW wiki):


You can excuse other people's shortcomings on the basis of extenuating circumstances, but you shouldn't do that with yourself.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Real-Life Anthropic Weirdness, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


It was interesting to read the conversation between Eliezer and PJEby in the comments, as they are two very smart people whom I admire.

PJEby puts emphasis on the "growth mindset". I agree with that. I just think that working for years to save the world, building a superintelligent Friendly AI, raising the sanity waterline, and expanding one's harem :) seems like a textbook example of a growth mindset. I guess even Steve Jobs didn't have ambitions that high; he seemed to be satisfied with making a few shiny toys.

I suspect the difference is that for Eliezer, the "mind" he expects to grow is not limited to a human mind, but extends to other people and ultimately to artificial intelligence. For a typical self-improvement fan, the goal is to expand their own human mind to achieve all the great goals. For Eliezer, the proper way seems to be to expand the whole "Eliezer's brain + rationalist community + Friendly AI" system, until all the great goals are achieved. It all happens in a causally connected universe, and for a consequentialist the important outcome is that things get done, no matter who specifically does them. If it is great to do X, it is equally great to start a movement or to design a machine that does X, and one should rationally choose the best path. There is no need to emphasise the human brain part, except when it really is the best part for the job.

Saying "I don't believe I can fly by merely waving my hands" is not contradictory to a growth mindset, if the person has already started an airplane construction project.

(That said, I also think Eliezer underestimates PJEby's expertise.)