Well, like I said. How's that careful avoidance of any phrasing that potentially smacks of egotism working out for you in terms of producing world-saving actions?

You seem to believe that it is good to encourage a lot of actions. That would be true if the effects of the actions were limited to increasing human rationality. Actually, even that is not true: if you increase the rationality of a destructive patent lawyer or politician (note that I do not want to get into a discussion of whether patent lawyers or politicians are harmful on average; I just needed some likely suspects to keep my prose from getting too abstract), you simply enable him to be more effective at undeservedly harming people -- and I humbly suggest that for the purposes of this discussion, "harm" can be defined as "decreasing the rationality of". But in general I will grant that what I just said is probably just a quibble and that raising the sanity waterline is a good thing.

In general, though, I am sceptical that "producing world-saving actions" is what we should be aiming for. Maybe I am biased by the fact that I am a cautious person, but I think that if only we could make everyone a lot more cautious (about the right things, namely, about effects on the global situation, not effects on one's personal situation) we'd be in much better shape than we actually are.

In great-great-grandparent (GGGP) I talked of egotism, but now I am talking of caution. That is not a change of subject, because an egotist is significantly more likely than a non-egotist to cause harm through lack of caution. Egotists tend to have higher self-esteem and status, and both arguments from evolutionary psychology and my observation of people lead me to believe that higher self-esteem and status make people less cautious. (Nor is it the case that low-self-esteem types are necessarily ineffectual.)

Note also that in GGGP I wasn't asking you to eschew incautious people; I was merely asking you to avoid using language that actively repels cautious people because it might be nice to keep some around.

Also, I do not think teaching incautious people rationality skills is an effective response to human lack of caution. Some of them (particularly those with the best control over their motivational architecture) will be made more cautious that way, but some of them will simply be made more effective in pursuing their incautious ends.

I almost did not publish this because the probability that it will sway you in any significant way is so low. In fact it might be wise for you to consider this as simply a notification, and a brief description, of a longer conversation it might be worthwhile to have with you some day about my worries that SIAI is paying insufficient attention to a large class of potential contributors. SIAI understands altruists well because SIAI leaders are altruists. And they seem to understand egoists well -- "egoist" meaning someone whose values and terminal goals are largely selfish, Hopefully Anonymous and Roko 2008 being salient examples. (I say "Roko 2008" instead of "Roko" because he might become or have become much less aligned with the egoists.) But it's not all just altruists and egoists. I'm talking about motivations here: which natural human positive reinforcer (fancy word for desire) motivates a person's x-risk or philanthropic work.

"I think that if only we could make everyone a lot more cautious (about the right things, namely, about effects on the global situation, not effects on one's personal situation) we'd be in much better shape than we actually are."

I think the precautionary principle is useless. It's easy to see why when reading books such as We Wish to Inform You That Tomorrow We Will Be Killed with Our Families, which describes the 1994 Rwandan genocide. My motto is, "The only way out is through."

Eliezer Yudkowsky: Aaaand not to put too fine a point on it, but how much research is that caution getting done, exactly? Philanthropic donations produced by this philosophy? Anything?

People who want to save the world

by Giles, 15th May 2011 (247 comments)


atucker wants to save the world.
ciphergoth wants to save the world.
Dorikka wants to save the world.
Eliezer_Yudkowsky wants to save the world.
I want to save the world.
Kaj_Sotala wants to save the world.
lincolnquirk wants to save the world.
Louie wants to save the world.
paulfchristiano wants to save the world.
Psy-Kosh wants to save the world.

Clearly the list I've given is incomplete. I imagine most members of the Singularity Institute belong here; otherwise their motives are pretty baffling. But equally clearly, the list will not include everyone.

What's my point? My point is that these people should be cooperating. But we can't cooperate unless we know who we are. If you feel your name belongs on this list, then add a top-level comment to this thread, and feel free to add any information about what this means to you personally or what plans you have. Or it's enough just to say, "I want to save the world".

This time, no-one's signing up for anything. I'm just doing this to let you know that you're not alone. But maybe some of us can find somewhere to talk that's a little quieter.