atucker wants to save the world.
ciphergoth wants to save the world.
Dorikka wants to save the world.
Eliezer_Yudkowsky wants to save the world.
I want to save the world.
Kaj_Sotala wants to save the world.
lincolnquirk wants to save the world.
Louie wants to save the world.
paulfchristiano wants to save the world.
Psy-Kosh wants to save the world.

Clearly the list I've given is incomplete. I imagine most members of the Singularity Institute belong here; otherwise their motives are pretty baffling. But equally clearly, the list will not include everyone.

What's my point? My point is that these people should be cooperating. But we can't cooperate unless we know who we are. If you feel your name belongs on this list then add a top-level comment to this thread, and feel free to add any information about what this means to you personally or what plans you have. Or it's enough just to say, "I want to save the world".

This time, no-one's signing up for anything. I'm just doing this to let you know that you're not alone. But maybe some of us can find somewhere to talk that's a little quieter.


I want the world to be saved.

I agree Alicorn's phrasing is better. My own position would literally be: "I want to act so as to maximize the degree to which the world is saved". In practice this is more likely to be "helping other people to save the world", but that's a strategy not a goal. I'm indifferent to personal glory etc. I want to maximize something rather like a utility function, so I want my degree of ambition to naturally scale with the opportunities available. If I only have the opportunity to do a very little good, I want to do a very little good. If I have the opportunity to do a lot (even very indirectly), I want to do a lot. From my point of view, I'm always at the site of the action (or at least, at the site of my own decisions, which is all I can directly control). Finally, I don't think I'm a consequentialist. What I'm describing is my volition, not my ethical system. I haven't quite decided my metaethics - I need to do some more thinking on that, and maybe wait for more of lukeprog's sequence.
David Althaus (13y):
Wow, maybe I'm stupid, but why did this comment get so much karma? I'm really just curious...

Alicorn, a deontologist, wishes that a certain consequence (the salvation of the world) obtain. Whether she is involved in producing that consequence or not.

Giles, presumably a consequentialist, phrases his own wish so as to egoistically place himself at the site of the action.

The juxtaposition carries a certain irony.

Before seeing this subthread, I interpreted it almost exactly the opposite way. I thought of "I want the world to be saved" as just that, but "I want to save the world" as meaning "I want the world to be saved, and I am willing to work toward this goal myself." Sort of along the lines of this exchange from Terry Pratchett's The Wee Free Men:

‘Ah. Something bad is happening.’

Tiffany looked worried.

‘Can I stop it?’

‘And now I’m slightly impressed,’ said Miss Tick. ‘You said, “Can I stop it?”, not “Can anyone stop it?” or “Can we stop it?” That’s good. You accept responsibility. That’s a good start.’

When I say that I want to save the world, that's what I try to mean.

I personally think it is a horrible start. That is the kind of start that leads to young men with boxcutters boarding airplanes, with the Crusades as one intermediate step in the causal chain. It is the kind of start that leads to brave little fellows in kilts bashing everyone around them with clubs just to demonstrate their manhood. The kind of start I would prefer Tiffany to make would begin with a different question: "Oh, what is happening? And how do we know it is bad?" I would prefer that the Ravenclaws figure out what it is that needs to be done, before the Gryffindors and Hufflepuffs start chanting "I want to do something!" and begin to look around for a Slytherin to suggest something for them to do.

Don't take this personally. I don't think that you or anyone else reading this blog is a potential terrorist. But I came of age in the sixties and knew quite a few people who were involved in radical politics. And quite a few more people in the military. The slogan back then was "By whatever means necessary." And it still amazes me how many horrible things got done just because people were unwilling to show lack of commitment to the cause. Because when you commit to action in the abstract, and believe that the end justifies the means, it becomes a contest to find the means that most conclusively demonstrates one's allegiance to the end.

So "I want the world to be saved, and I am willing to work toward this goal myself" is not something I like to hear. Nor is "I, as an individual, accept responsibility for the fate of the world." I would much rather hear, "Here is what is wrong and here is how we can fix it. Won't you help me convince enough other people of this?"
"Phrases his own wish so as to egoistically place himself at the site of the action," is an apt summary of my problem with the phrase, "I want to save the world".

How's that philosophy working out for you in terms of producing world-saving actions?

Heh. OK, good point. Would it help if I said that shortly after publishing grandparent I considered appending words to the effect of, This is more of a gut reaction than a conclusion informed by my experiences with social reality, and I am very willing to change my mind. In other words, if I really got to know more of the people who define themselves as "world savers," good chance I'd change my mind. But would it really hurt your plans to use the phrase, "improve the world," rather than, "save the world"? If the world needs saving (and I definitely believe it does need saving from irresponsible AGI researchers) then aren't people unlikely enough to overlook the fact that improving the world entails saving the world?
Eliezer Yudkowsky (13y):
Well, like I said. How's that careful avoidance of any phrasing that potentially smacks of egotism, working out for you in terms of producing world-saving actions?
You seem to believe that it is good to encourage a lot of actions. That is true if the effects of the actions are limited to increasing human rationality. Well, even that is not true, because if you increase the rationality of a destructive patent lawyer or politician (note that I do not want to get into a discussion of whether patent lawyers or politicians are harmful on average: I just needed to grab some likely suspects to keep my prose from getting too abstract) you simply enable him to be more effective at undeservedly harming people -- and I humbly suggest that for the purposes of this discussion, "harm" can be defined as "decrease the rationality of". But in general I will grant that what I just said is mostly probably just a quibble and that increasing the sanity waterline is a good thing.

In general, though, I am sceptical that "producing world-saving actions" is what we should be aiming for. Maybe I am biased by the fact that I am a cautious person, but I think that if only we could make everyone a lot more cautious (about the right things, namely, about effects on the global situation, not effects on one's personal situation) we'd be in much better shape than we actually are.

In great great grandparent (GGGP) I talk of egotism, but now I am talking of caution. The reason that that is not changing the subject is that an egotist is significantly more likely to cause harm through lack of caution than a non-egotist is. Egotists tend to have higher self-esteem and status, and both arguments from evolutionary psychology and observation of people lead me to believe that higher self-esteem and status make people less cautious. (Nor is it the case that low-self-esteem types are necessarily ineffectual.) Note also that in GGGP I wasn't asking you to eschew incautious people; I was merely asking you to avoid using language that actively repels cautious people because it might be nice to keep some around. Also, I do not think teaching incautious people rationality…

In general, though, I am sceptical that "producing world-saving actions" is what we should be aiming for. Maybe I am biased by the fact that I am a cautious person, but I think that if only we could make everyone a lot more cautious

Aaaand not to put too fine a point on it, but how much research is that caution getting done, exactly? Philanthropic donations produced by this philosophy? Anything?

I think the precautionary principle is useless. It's easy to see why when reading books such as, We Wish to Inform You That Tomorrow We Will Be Killed with Our Families, which describes the 1994 Rwandan genocide. My motto is, "The only way out is through."
David Althaus (13y):
AHAA! I got it, at least I hope so. For me "I want to save the world" and "I want the world to be saved" meant exactly the same thing, i.e. I didn't realize that the sentence "I, person P, want to save the world" meant that P had to be involved in this whole save-the-world business. Now "I want to save the world" evokes rather egoistic and self-aggrandizing characters in my mind. Strange world...
It may be worth noting - again - that my non-moral reasons for action (prudential considerations) work more or less consequentialistically.
Hey guys, how about we debate who's being egoistic about saving the world and who isn't? That sounds like a really good way to use LessWrong and knowledge of world-saving.
We do seem to love accusing people of being altruistic only for signalling.
Props for precision.
Does this mean that you don't want to be involved in doing it? And if so, why? Or is it just you want it to happen, which may or may not involve you?

I don't actively want to be involved in doing it. I would be quite happy to be among the masses of the saved by someone else's hand. I'm willing to help when ways to do that present themselves, since ignoring ways to make things I want to happen happen would be pretty dumb.

"I'm willing to help when ways to do that present themselves" And if they don't, will you sit back and wait for them, or will you look for them? (Not passing judgment, just trying to tease out more details of your position.)
I worked for Singinst for a while. I'm not really dedicating my life to diligently ferreting out more things to do, but I do put myself in the way of such information should it come to light (e.g. I hang out here, I'm on the Singinst mailing list).

Where can I exchange units of applause for units of world-saving?

For every non-duplicate comment replying to this one praising me for my right action, I will donate $10 to SIAI, up to a cap of $1010, with the count ending on 1 June 2011. Also accepting private messages.

Edit: The cap was met on 30 May. Donation of $1010 made.

This comment inspired me to make a donation to Village Reach. Your right action just got $350 worth of preventative medical care for kids, plus this praising comment.


I just made the donation of $1010. Thanks to all those who commented!

I will extol thee, my fellow LessWronger, O SIAI donor, and I will bless thy name until June 1. Every day I will bless thee; and I will praise thy name until June 1. Great is Rain, and greatly to be praised; and eir greatness is searchable and indexed by Google.

Your action is particularly right in not requiring that every user must limit the amount of praise to one comment.

I do a virtual Rain dance to honor this right action.

Further, I compound this by donating an additional $30 myself to SIAI right now.

I'll pat myself on the back for coming up with this idea, which has promised $340 to SIAI as of me submitting this comment.

This way of doing things is pretty cool, because now not only do you get to feel good for taking a right action, others get to feel good for getting you to do it, and you get to feel good for getting others to get you to do it.
The total is now $740.
l33t pr41z Ph0R R41N. j00 r0X0R!
Public commitment is a great way to improve one's chances of right action. And the "praise me" part of the set-up lets you potentially get even more warm fuzzies than the donation would alone! Nice job of community-usage and self-manipulation to get something productive done. Seriously.
I momentarily stopped to think about a way to make this praise-comment clever. When I couldn't, on the spot, come up with anything clever enough, I considered waiting until I would. But then I realized that that might make me forget to comment entirely! So let me now praise you for your wonderful deed, which provides SIAI money, acts of creativity to us, and great well-being in the form of positive emotions and group bonding to everyone! Huzzah!
All my praise are belong to you.
Congratulations on doing a thing closer to the best thing than many other relevant alternatives!
I hereby praise ya. Make it rain for the singularity!
Excellent! You've made my day better and done something good at the same time! ETA: To be clear, the word "Excellent!" is praise.
Kudos for a right action!
For your act of righteousness, this comment praises you.
Scott Alexander (13y):
This is an excellent action! Commendations and praise be to you!
* Rationalist!Yoda
I praise you for having the wisdom of using a long enough deadline. When I first read your comment, it felt like you were exploiting me, as if you were forcing me to share my limited praise resources. But because I had enough time, I got over myself, realized that this is not a zero-sum game, that this is not an attack on my status and that what you are doing is clever and good. Well done, I praise you for your right action.
I commend anyone who donates to SIAI unless the donor acquired the assets by stealing, defrauding or otherwise imposing undeserved harm on another -- and based on his writings here, the latter seems very unlikely in Rain's case.
And unto the ten thousandth generation, they sing Rain's praises for he saves 80 of them for each donation. Thank you very much for doing this.
I can't find your source for that number. I'm interested.
Here Anna Salamon calculates that a dollar donated to SI saves on average 8 human lives.
I praise you for your right action. Also, here is a random string of integers to prove the non-duplicate nature of my comment: 5224818730
How did you generate those integers? Are they really random?!
Here is the link I used.
I praise you for this specific right action, and for the virtuous character and skills that it signals (honestly, based on other available info).
Your right action is most excellent!
I praise this act of taking a snarky comment literally and turning it into something wonderful. If this idea takes hold, we'll either see less snarkiness or more wonder.
Praised be this commitment of action by Rain.
I praise you for acting less wrong.
Thanks and compliments for your right action.
You've my sincerest praise for this right and good action.
Such an action is worthy of the praise it received!
Praise and blessings be upon thy name!
I praise you for acting rightly.
Praise for right action! Thanks for doing this!
Eliezer Yudkowsky (13y):
I praise this right action.
I hereby praise you for your right action. My username is arundelo.
Yes, a somewhat munchkinish way of fulfilling the non-duplicate requirement. Thanks for the free $10!
Upvoted and replied. Kudos to you for the right action!
My own donations to SIAI are currently limited by the peer pressure not to donate, rather than my actual available funds. As such, replying to your comment gives me an excellent way to donate by stealth. Praise for your weird brilliance!
Praise for right action. Attempting to do good should have positive EV, let's encourage that.
*hugs* for donations to reduce x-risk!
David Althaus (13y):
You are awesome and your action is praiseworthy!
Props for your righteous action! clenched fist salutes
Oh Rain, I praise thou so that your status may soar (temporarily) for your right action!
I praise you for your right action.
This is good and right of you. I approve.
I hereby declare your action to be praised by me.
Aaaand, you also get a 2X Praise Bonus! Thanks to Nesov for the suggestion.
Felicidades a Rain por su buena acción.
*praises Rain*
Our multitude of voices exalting Rain's donation rebound off the faster-approaching towers of the Singularity!
Praise be unto Rain and the right action he is to undertake!
Praise + action
Wow. That's really awesome for you to do. Praise for Rain, and Rain's right action!
Yet more praises rain on Rain.
I too praise this right action.
Sweet. A free (for me) way to donate money. Thank you very much for providing this opportunity (i.e. I praise you for your right action.)
Cunya praises your right action!
Thanks for your right action! I sincerely praise you.
For your right action, I praise you.
I could probably come up with some contrarian rationalization not to praise you, but I'll just not do that. Praise to you for making this minute more useful to the world than my last minute.
Usually, donating conditionally would be less right than unconditionally and asking for praise later. Yet in this context, knock-on effects make it righter. Major props.
I praise you for this right action.
I praise you for your right action. Not only does your action have recursive beauty, but it also, like a socio-volitional whirlpool, a decision-theoretic attractor, guides me by example. Edit: Ah, so that's what you meant by duplicate.
I hereby extend my praise for: * Your right action. * Its contextual awesomeness. * Setting up a utility gradient that basically forces me to reply to your comment, itself a novel experience.
Thanks for doing such a great thing! :D
I praise you for your right action, Rain. I honestly do.
The Knights Who Say Ni salute your noble undertaking, provided that you first build a working cello out of toothpicks.
Praise be to you for your right action! May you be blessed by the gods.
Thank you ^_^ I really appreciate you supporting a path towards an FAI singularity.
I praise your right action.
Congratulations for raising the expected utility of the future!
Good for you. Allowing other people to force you to do what you should be doing anyway is a great way to increase utility!
I certainly hope you mean non-duplicate per-user, since I'm not going to read through every one of the comments to ensure that my response is non-duplicate. In any case, I sing your praise on high.
I praise your right action, and accept the minor karma hit. Hmm, I wish I knew how to avoid this post polluting the "Recent Comments" list.
Rain accepts PMs.
It seems most people want their praise to be public, in which case avoiding the recent comments list would be counterproductive.
Praise for your right action.
Thank you for your right action.

Where can I exchange snide remarks for constructive criticism?

In all seriousness, I don't think Giles is trying to get much applause here, so much as make it easier for people to coordinate their efforts.

I think (correct me if I'm wrong) that he knows that he doesn't know the specific steps to take in order to accomplish his goals. Which is why he wants to talk to these people.

I think that he has done a pretty bad job of PR, and should have more concrete ideas and plans before he continues posting on the subject. Furthermore, he's continuing to use the heavily loaded phrase "save the world" in ways which probably discredit it, and this site.

That being said, I think that this comment is almost entirely destructive, and makes no progress towards anything other than continuing to tear Giles down. Which the current karma system is already doing.

If I'm going to be torn down, I appreciate information as to why. A snide remark is a lot more useful for this than a plain downvote.
Right here. Following Rain's example: Reply to this comment with snide remarks, about a linked comment/post on a topic which a LW reader could be expected to have some familiarity with. I will attempt, within my ability and within reason, to turn each snide remark into a constructive criticism. Up to a limit of 101 comments. I won't respond if someone else does a satisfactory job first. Duplicates are allowed but will yield duplicate responses. There is no per-user limit but please play nice and don't hog them all for yourself. (EDIT: time limit - end of 2011)
I'll kick it off with Vladimir_Nesov's example:
I praise you for your wry incisiveness.

I'm planning to save the world by accumulating a large amount of money and donating it to the most effective charity that I can find.

Two reasons why I currently think this path is best for me:

1) I think that my mind is much better suited to accumulating money than directly working on really hard problems. Decision theory just makes my head hurt.

2) If I change my mind about which charity I consider effective, being a donor allows me to immediately act on my updated beliefs without wasting my past learning. Ex: If I became an FAI researcher and then (after I had spent years learning how to be an effective FAI researcher) decided that life-extension technologies were more effective, I would have to study a bunch of new stuff. If I'm donating, I just send the money to a different place. Curious note: The influence of this factor on my final decision is inversely related to my confidence level in my current judgement.

Edit: I may be wrong about #2; the instrumental utility granted from such may be smaller than I am estimating it to be. However, I think that I have enough of a comparative advantage in making money that even if #2 grants me only a small amount of utility, my decision is...

I would very strongly advise that you donate something while you're trying to accumulate money. Otherwise I would bet against a generic person in your situation ever following through (Outside View).

Your statement makes intuitive sense, but do you have any data that you think would be a more persuasive argument?
I hadn't considered this one as an argument against the "entrepreneur now, donate later" strategy. It works from the inside view too - I don't want to expose myself to influences that might make me strongly modify my utility function in the direction of selfishness, and surrounding myself with go-getting business types might do just that. Speaking of which, I still owe you money. I have personal issues which currently prevent me from making a significant SIAI donation, but I'm trying to strategize my way around them.
Maybe we could set up a donation matching system that, while not as amazing as the ones by the big donors, could add up to something interesting and fruitful. The logistics seem a bit difficult to set up, but I know that I would be willing to match funds with someone in a similar position as myself.
I'm interested. What is it that means people want to pair up, rather than just individually giving as much as they can? If there's a pool of potential donors who are limited by "akrasia" then yes, that's a totally awesome idea. I'll see if anyone at SIAI is interested and maybe discuss with you how it could be implemented.
I think you're right, that akrasia would be one of the biggest reasons. There's also the possibility that there aren't enough applause lights for giving, and that thus giving to the SIAI just doesn't feel as good as it should. And since LW doesn't press my superstimulus buttons the way a video game does, it hurts more to pay for 20 hours of entertainment here than it does to pay for, say, Portal 2, which didn't even provide 20 hours of fun (but oh what fun...). I've tried to set up donation matching with friends before. Most are just not interested. The one that was willing has recently decided to buy a house and get married, so he can't play any more. But for a while, I was a part of a superorganism that had twice the donating power as just me alone, and that felt pretty cool. I'll start thinking of how it could be implemented, just in case the SIAI is interested.
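The matching arrangement discussed above can be sketched concretely. This is purely illustrative: the function name, amounts, and cap are invented for the example, and this is not a description of any actual matching program.

```python
# Illustrative sketch of a two-party donation-matching pledge:
# each partner agrees to match the other's gifts, up to a cap.
# All figures here are hypothetical.

def matched_total(donations_a, donations_b, cap):
    """Return (a_total, b_total) after each partner matches the
    other's donations, with matching limited to `cap` per side."""
    base_a, base_b = sum(donations_a), sum(donations_b)
    match_a = min(base_b, cap)  # A matches B's gifts, capped
    match_b = min(base_a, cap)  # B matches A's gifts, capped
    return base_a + match_a, base_b + match_b

# A gave $100 and $50; B gave $200; each matches up to $150.
totals = matched_total([100, 50], [200], cap=150)
```

The point of the "superorganism" framing is visible in the numbers: each side's effective giving roughly doubles as long as both stay under the cap.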
Until I explicitly asked for it, this was certainly true for me. The Red Cross thanks me and provides gifts or status boosts more than 12 times, in person, on each individual visit to donate blood, sometimes doing so in a public forum. SIAI doesn't even send an automated email any more.
I find it annoying when the Red Cross calls me, even when it's just with thanks, but part of why I've given blood in the past is that there's a plaque on the wall in my grandma's house of a newspaper clipping in which my grandfather is praised for exceeding the (I think) 10-gallon mark of blood donation.
Human blood has very low iron content by weight -- it is measured in micrograms per deciliter.
I was also disappointed when I learned that the process of extracting the iron is nontrivial.
David Althaus (13y):
What do you think is the best strategy to earn money?
I think that it's opening a business, though I don't yet know in what industry nor in which country such would be most profitable.
My strategy as explained in this LW comment has accumulated 351k USD in almost 7 years; I'm almost 28 years old. It may not be optimal, and it's definitely not universally applicable, but I suspect that it would work for many people. Its virtues are that it's not risky, and (most importantly!) it's devoid of magic tricks. It just requires hard work (but not that hard) over many years (but not that many). I've been thinking about writing a top-level post (which would be my first) along these lines.
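As a rough sanity check on the arithmetic in the comment above — a sketch with assumed numbers, since the ~$45k/year savings rate and 3% return are guesses chosen to illustrate, not the commenter's actual figures:

```python
# Rough "accumulate then donate" arithmetic: save a fixed amount
# each year and let it compound at a modest return. The 7-year /
# $351k endpoint comes from the comment; the savings rate and
# return below are illustrative assumptions.

def accumulate(annual_savings, years, annual_return):
    total = 0.0
    for _ in range(years):
        # deposit at the start of each year, then grow for the year
        total = (total + annual_savings) * (1 + annual_return)
    return total

# ~$45k/year at 3% compounds to roughly $355k over 7 years,
# in the same ballpark as the $351k figure.
result = accumulate(45_000, 7, 0.03)
```

So the reported total is consistent with steady high savings and no "magic tricks," which is exactly the comment's claim.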
Out of curiosity, what percentage of that amount have you donated? I would encourage you to write this post.
This is my default strategy (I'm getting a degree in Chemical Engineering) if I can't get a better one to come to fruition. If you have any additional insights beyond those in your linked comment, a top-level post might be useful.
A blog I follow with a similar life strategy is Get Rich Slowly.

I want to have saved the world.

If I have a choice between actions, and one of them is more likely to save the world than the other, I will take the one that is more likely to save the world.

Even I don't live up to that every time, not even close, but it sure sounds a lot scarier than "wanting to save the world", doesn't it?

Much less scary. "Save the world" is a very high-level goal, and I don't know how to do it. Your procedure is straightforward. I just need to invoke it when I recognize there's a choice. Resisting temptation is much easier (not just simpler, easier) than deciding whether you're being tempted. Also, that's not actually your goal. You don't rob banks.
Dunno - to me they sound almost equivalent (except that you have no other motivations at all, and I'm not sure I can honestly say that about myself). In any case, I'm not sure what sounds scary. It's all the people who don't seem to want to improve the world in any way at all that scare me.
There are certain actions for me such that the impact that they have on the probability that the world will be saved is insignificant enough that such impact is overwhelmed by the amount of immediate fun that they will generate, so an action that generates lots of immediate fun may be more desirable than one which increases the chance that the world will be saved by a really-super-small amount. Are you saying that for EV_Eliezer, there is no increase small enough in the chance that the world will be saved such that a huge amount of immediate fun is of greater terminal utility?
I think you quoted the wrong part to answer your question. It appears he takes many actions which he thinks are less likely to save the world than the known alternative.
Approximately what proportion of your actions (or time spent, if that's easier to compute) have a clear chance of contributing to saving the world?

Yesterday I went on vacation from LW, but today I thought I'd see how this post was going, since it had the potential to produce something new... Alas, in about 12 hours, it has sunk from -1 to -6, as the mob decides it is about nothing but "applause lights" and votes it down. This is a failure of imagination and it's about to become a lost opportunity. It is not every day that someone shows up wanting to organize the world-savers, and in this case, I see definite potential. Or is it really the case that all those altruists have no need for support? End of lecture, back to vacation.

I'm glad I have your support, but from my point of view none of this really works as an excuse. I'm trying to win here, and this post was clearly not a win (although it generated some interesting discussion, so maybe not quite so clearly). There are things I want to change about LW culture too, but I know that I won't achieve that by whingeing. If LW culture is to change, then my own attitude really has to change first. The lost opportunity may not be as great as you think. I'm committed to this, and I'm not going to stop trying to organize and support rational altruists just because of a few failed attempts.

Yes, I strongly prefer that earth-originating humane life survive and thrive and spread throughout the universe and make it much more fun and awesome to the fullest extent of what the laws of physics will allow, and I intend to use my life for this purpose.

(Though I'm curious, what kind of cooperation are you talking about, beyond what's already facilitated by entities like SIAI, LW, FHI, and the Existential Risk Reduction Career Network?)

I am nauseated by the very thought of being included in your list, despite my own practical plans in that direction. What is it with empty applause-generating exhortations these days? Ick. Double ick.

PS: Being put on a list of people with Dorikka's line of thought would not be psychologically distressing to me in the least. It is not nearly so creepy sounding.

Being "creepy sounding" seems like a very bad reason to be opposed to something. Cryonics is creepy sounding. The mission of the SIAI is creepy sounding (for exactly the same reason as this post, I would say). I don't even see how this differs from aversion to anything strange, which seems horribly destructive in the aggregate. There may be plenty of other reasons to downvote or criticize this post (empty applause being the main one), but I don't see any legitimate cause for psychological distress. Of course, you may fear that a reader will draw incorrect conclusions about your motivations/beliefs. I don't see why that in particular would nauseate, though--just prompt (actionable) concern.
Making a public declaration is a social act, as is making your own identity be visibly attached to something. When considering the consequences of such actions the 'creepy' vibe or 'ick' aversion provides critically important information about the effects that can be expected.
This seems perfectly fair (and if anything is a particular concern for me, given how easily my online/offline identities are connected). But my response would be more along the lines of "I am concerned that this statement feels extreme and arrogant even if technically accurate; I really don't want my identity so publicly associated with this position. Could you either remove my name from the list, or clarify my position inline?" Alternatively, "My gut reaction to this is that it feels creepy, and while I wouldn't use this gut reaction to support a normative judgment, I am concerned that others might." Neither of these sounds much like your position as you've expressed it.
Well, gee. Look at all the applause wedrifid has garnered. Applause lights still work around here, especially if you know your audience.
When was that ever in doubt?
Disapproval, not surprise.
I think you meant "would not be".

Looks like if you want to save the world, you've gotta accept that you're going to lose some karma.

Seems like the stakes have lessened somewhat. Socrates lost his life doing similar things.

A call to action should come with a definite goal IMHO. This call to action comes with not much more than a collection of vague motherhood statements.

I want the world to not need to be saved, but will settle for it being saved. The reality of existential risk is such an inconvenience. I want to help, but probably won't have, recognize, and successfully act on the opportunity to do so.

The scenarios I can imagine where a list like this would be useful are farfetched.


I have a high preference for the world staying around.

Well, I think most of us want to save the world, or at least help to save it. The BIG problem is to find an efficient strategy to do so. We should make concrete proposals, not merely profess our altruism. ...and so as not to be too hypocritical, here are my naive proposals:

  1. If your IQ is enormous -> FAI-research
  2. If you have money-making skills-> donate millions to SIAI, FHI, or other charities
  3. If your IQ is really high-> do some research ( maybe SENS, computer science, nanotech, etc...)
  4. Or if you're not that clever, or you suffer from akrasia: get a useful,
...
I would slightly modify step 1 as follows: if you think there's a chance you might be useful to SIAI, send them a letter. If they don't accept you, continue to step 2.
This isn't exactly what I did. Instead I'm signing up as a volunteer. But in either case the SIAI is the closest thing I know of to a group of rational do-gooders who are actually cooperating. So I want to try and get involved.
David Althaus · 13y
I agree. IMO pursuing FAI means first contacting and consulting the smartest guys in the field, which is presumably the Yudkowsky gang.
Good post, but this gave me pause. Does LW / do you really think that IQ is the relevant factor here?
Is everyone at SIAI past the triple-nines cutoff? (IQ in the 99.9th percentile)
For these eleven ... maybe. Much more likely than 10^-33 for eleven average or random people. My guess is yes, but they may just be good at presenting their credentials.
Sure, it's a higher chance, but I'd still say it's pretty improbable - my understanding is that IQ isn't that great a predictor.
David Althaus · 13y
To be clear, with IQ I mean intelligence, or abstract, analytical reasoning. But what else should you need? Maybe self-confidence?
Motivation, energy and persistence. The best smarts in the world don't help much if actually studying all the requisite subjects feels like too much work. Many people with high IQs are at a disadvantage, since they get used to all schoolwork being easy and not requiring any effort. When things start to actually get hard, they give up. This is one of the main reasons why I concluded that it isn't worth it for me to try to get into machine learning or other high-mathy fields, after beating my head against a rock wall for a couple of years.
David Althaus · 13y
Right. I am a poster child for akrasia and laziness. But, wow, I never thought this was a problem for you. Your output is impressive. (At least to me; I've published one sentence on my blog in a year....) Enough flattery. What kind of research do you focus on instead? You've mentioned cognitive science somewhere, at least I think so. In which fields do you think people with akrasia problems and an IQ of around 120 can have the most impact for reducing existential risk? Hopefully not "make money and donate"; I have some emotional, maybe irrational, concerns with capitalism. Sorry if this comment is too personal - the lesswrong-culture seems to punish this kind of comment - but I would value your advice!
I don't mind. :) I think there should be more of this kind of discussion.

I don't think there is a general answer to this. There are many forms of akrasia. They have several different causes and also several different effects. Where you can have the most impact depends on what your akrasia allows you to do, and which parts of your akrasia you can beat. It also depends on what your natural talents are otherwise, what you're intrinsically motivated to do, and what you can motivate yourself to do even though you have no intrinsic interest in it.

For instance, you were surprised to hear about my issues because you've seen me write a lot. The thing is, I find writing-related akrasia relatively easy to beat. However, when it comes to learning new math, my akrasia gets a lot worse. Overcoming it usually requires that I see interesting applications for which I can use the math at once. I'm also not intrinsically curious about most math: the best math folks are the ones who get a lot of practice at it because they keep playing with fun math problems all the time. I certainly play with math problems every now and then as well, but nowhere near to the degree that some people do. I still haven't read most of the decision theory discussion here.

My advice would be to figure out where your comparative advantage is. Look at the things you're good at and which come easily to you. Then try to figure out whether there's something x-risk-related that could benefit from those skills. Personally, I finally figured out that my comparative advantage is probably in writing and the social sciences. I just finished a BA in cognitive science, and I'm now taking a three-month sabbatical to concentrate on a) getting practice in writing b) improving my mental health by various means, particularly meditation. My current long-term goal involves honing my writing to such a point that I can make a living with it and become an influential enough writer/public figure to significantly raise support for
David Althaus · 13y
Thanks for the advice! I really appreciate it.
That I would agree with (maybe some sort of "intellectual creativity" if that's not included, though I guess it should be). Generally, though, I see IQ used to refer to the thing measured by IQ tests instead of to intelligence.
David Althaus · 13y
Ah, sorry. For me IQ is just an abbreviation for intelligence. (In its broadest meaning; I can't define it. But you know, Einstein, Russell, Bostrom, Yudkowsky etc. have something in common, which I would say is intelligence.) But you're right, in reality IQ means something different; I guess I should change my use of the word.
Well, cheers then! Confusion: solved.
Here, my dear Giles, have a written downvote in the form of supporting this comment. This. Is. Applause Lights.
I'm willing to accept and update on criticism that this post was trollish or otherwise inappropriate. But I'm not sure I agree with the applause light criticism in particular. If I understood it correctly, Eliezer described an applause light as a statement that was vacuous because its negation was obviously unacceptable. But there have been people here who stated that they don't want to save the world (not just that they disagree with how it's phrased or presented). And they didn't get demolished for it.

This seems more appropriate for the Discussion section than for the main page.

OK you're right. Technical note: I appear to have the ability to move it to the discussion area, but do you know what will happen if I do? I don't want to end up with a duplicate post, or accidentally losing all my hard-earned negative karma by scaling all the downvotes back to 1 point each.
Moved it for you (also wanted to see how/whether that works when I do it). Your negative Karma looks intact. :-)
Yep, that seems to have worked as intended. Thanks.

I want to save the world.

I want to save the world. Specifically by means of satisfying SIAI's mission.

Awesome. I've signed up to SIAI as a volunteer, as they seem to be an example of what I'm interested in - a community of genuine rational altruists. I hope it'll work out well.

I want the world to be saved, and am willing to take action to make that happen so long as the actions I take to make it happen don't make me feel like a victim. I tend to feel like a victim if I take an action that reduces my standard of living, I contribute to a lost cause, or a few other scenarios that don't seem relevant here.

I presently feel that SIAI is blocking itself by apparently believing that solving the FAI problem is blocked on any or all of the following:

  • Newcomb's problem
  • Dealing with people who have non-instrumental concerns about what is
...
Wei Dai · 13y
In the past I've made the opposite argument to SIAI, which seemed to be well received, that there were more philosophical problems that need to be solved for FAI than they may have realized. Obviously it would be great news if that turns out not to be the case, so I would be really interested to hear your arguments.
I thought SIAI consensus was that Newcomb's problem was solved, and not a block at all? It's not so much that they feel they have to deal with those people as that they are those people. (Haven't read further yet.)

"I am concerned that this statement feels extreme and arrogant even if technically accurate; I really don't want my identity so publicly associated with this position." Could you remove my name from the list please?

ETA: Thanks!

I want mankind to be saved, and reach the stars.

I want the world to be saved. If that means I have to do something about it, then I have to do something about it.

There's probably something you can do to make it a little more saved, or saved with a slightly higher probability. Does the expected payoff mean it's not worth looking into when compared to your other motivations?
It's much too important not to look into. But I think I need to become better and more powerful (which I am working on!) before I can really be of service.

I want to save the world.

I want to increase the probability of world survival. This I intend to do by choosing a career which has some impact on existential risk and by donating money to SIAI. I also believe that promoting cryonics decreases existential risk indirectly - if you expect to be around 1000 years from now, that tends to give a longer-term view on matters.

The effort it takes to keep up with the amount of analysis and meta analysis done here is quite exhausting.

I, too, want to save the world.

The Lifeboat Foundation has built a list of people, some high-status, who have said that they want the world saved. They have done nothing else, but this list is a good thing to have.

I want to live forever.

And I can't do that if the world ends, now can I?

I want to help save the world just as much as I want the world to be saved but either would be amazing from my perspective.

I want the world (i.e., civilization) to survive. I would choose a lower standard of living for myself and a lower probability of personal survival to increase the probability of global survival.

Except for rather minor exertions (such as devoting a minor fraction of my time and energy over a couple of years to making sure that my rather strange set of values had at least one advocate in the singularitarian conversation -- something I stopped doing about Apr 2009) I have not actually done anything for my civilization because I am so ridiculously disabled by chronic illness that with p=.95 I must allocate almost all of my resources into solving that bitch of a problem before I can be any significant use to myself or the world.

Hope you get well soon!

I want to participate in saving the world in an important way.

But equally clearly, the list [of people who want to save the world] will not include everyone.

What are you basing this claim on?

Obviousness? Exposure to at least one person who has declared their disinclination to save the world?
Point taken. The list likely won't include everyone. :-) I interpreted the original statement as "the list won't include a significant majority", because of the context it was given in. Perhaps Giles can chip in and say whether I was mistaken.
I meant "the list won't include a significant majority". (Possibly weak) evidence for this is the underfunding of organizations which actually appear to be trying to save the world (specifically GiveWell's charities and the SIAI). I say possibly weak because this funding gap comes about as a result of people's behaviour, not their stated preferences. So this could be seen as a failure of rationality rather than of motivation. As mentioned on this site before, people lack a window on the back of the neck that would let you read their volition, so it's difficult to distinguish between these two cases from the outside. Also note the apparent lack of a thriving support community for people with these ambitions.
A Google search for "save the world" yields 11,000,000 results. A search for "harm the world" yields 242,000. Also, the top results for the latter are framed as cautionary tales, rather than normative instructions or communities for how to accomplish the malignant goal. Saving the world is a very commonly expressed sentiment, which is why compiling a list of people who want to save the world seems a little redundant to me. A list of people who have saved the world might be a tad more useful.

As far as I know, an infinitesimal fraction of the world population consciously sets out to be evil, or to do harm to the world. It's more a case of the road to hell being paved with good intentions. I'm pretty sure there have been many studies about this, though I'd have to dig for them again. Perhaps someone else can post them.

Neither the stated desire nor the action implies donating to charities. Even you have admitted to this in the past. I thought your claim might be based on the replies to your "HELP! I want to do good" thread. In that case, I thought I should point out that no equivalent "HELP! I want to do bad" or "HELP! I want to be completely benign" threads were ever created.

One could easily verify your claim by making such posts, and counting the replies. If one wanted to be really accurate about it, one could also go through the post history of the respondents, to be sure they're not just being contentious, but truly ill-intentioned. Extending the survey to the population at large would be similarly trivial. One could tell people on the street about a one-question survey, and if they decide to participate, alternate between: "Do you want to save/improve the world?" and "Do you want to harm the world?"

(This might be a fun exercise for the Toronto LW group, now that I think about it. Both to find the answer out for ourselves, and to get people thinking about the subject. Because thinking often precedes action. Or at least it should...)
Stanislav Petrov, for one.
Only Disney villains want to harm the world. The alternative to "wanting to save the world" is "using world quality as a free variable when optimizing for other purposes" (that is, not caring). There's no reason for a "HELP! I want to do something unrelated to saving the world" thread.
A Google search for "using world quality as a free variable when optimizing for other purposes" yields... 0 results. Though a search for "I don't care about the world" yields a respectable 58,600,000. If -cup is introduced in the search query, the result drops by 10,000,000 or so. In somewhat related news, I'm starting to doubt my own heuristic.
Searching for "i want * more than anything in the world" -"to save the world" yields 17,700,000 results.
I'd say that the reason for the underfunding is more the fact that the organizations are rather unknown, not that most people wouldn't prefer saving the world. E.g. when walking to the university I almost daily meet Greenpeace and Amnesty representatives harvesting new members, but no-one representing SIAI or GiveWell. What are the latter two doing to make themselves more known to the public?

I want to save the world.

[This comment is no longer endorsed by its author]

"Save the world" is a subset of "improve the world" where saving is improving by a lot in a way that the world really needs it. "Improving the world" can mean settling for a smaller improvement, but probably doesn't mean "improving in every way so it will include saving the world". If people stop wanting to "save the world" because they weighted their desire to improve it in lesser ways anywhere near their desire to save it, to sound less egotistical, to avoid the applause light, or to dissociate from peopl...

It's enough just to say, "I want to save the world".

I want to save the world.

But maybe some of us can find somewhere to talk that's a little quieter.

I guess we could have an irc meetup or something? To talk about what specifically we're doing, and what we can help each other with.

OK - I'll be hanging around on #rationaltruism on freenode. As soon as I find out when I'm not going to be busy, I'll suggest a time for a meetup there.

Comparing a disliked belief to a religious one has all the universal applicability of repeating what they say in a high-pitched tone of voice.

I think ciphergoth is right in that argument-by-reference-class should be avoided if possible. I think that timtyler is onto something with the superstimulus thing - there are mundane, reductionist reasons why I might have ended up with the motivations that I do. I had pictured it more as "the result of a peculiar mix of social conditioning and rationalist memes". In evolutionary terms it definitely feels like a "mistake", which is why I wouldn't expect all that many people to be motivated the same way I am (maybe 0.1% of people, and I'm not even sure what to do with those people if they're hostile to rationalist ideas). But even if I knew the exact cause of my motivations, I wouldn't want to change them.
In terms of DNA-genes, yes. However, the SAVE THE WORLD meme gets quite a good deal out of it. Budding world-savers often proselytise - resulting in more brains hijacked by the meme. It seems to be a case of meme-evolution outstripping the defenses of the natural memetic immune system.
I think religions have by far the most extensive set of prior claims relating to trying to save large numbers of people - or the world. Comparisons seem inevitable. In the past, most with such beliefs have been delusional - suffering from hubris - and have subsequently been proclaimed false messiahs. This raises the issue of how best to avoid that fate.

I second Alicorn's wording.

Post any "meta" (i.e. anything that's not "I want to save the world") under here to keep things tidy. Thanks.

"Save the world" has icky connotations for me. I also suspect that it's too vague for there to be much benefit to people announcing that they would like to do so. Better to discuss concrete problems, and then ask who is interested/concerned with those problems and who would like to try to work on them.

I hate to say it, but the icky connotations are sort of the point. I'm interested in people who want to save the world enough to overcome the icky factor. I realise that "Lonely Dissent" is essentially a troll's manifesto, and I apologise. But I'm publicly committing to stop writing trollish LW posts.
I'll start with a quick clarification:

  • Yes, "saving the world" is deliberately vague. It will mean different things to different people.
  • Saving the world isn't a yes/no thing. Some good outcomes can be better than others. Think of it as a rough utility function.
  • This doesn't imply total altruism; you can want to save the world within the constraints that the rest of your life will allow.
  • To help save the world, you need to be rational. Mainly because it's a really, really hard problem.
Being irrational doesn't prevent one from stumbling upon some technique necessary for world-saving. It just doesn't concentrate the likelihood of finding it in that direction. See for instance the irrationalist list, or Buckminster Fuller.

Well I just want to rule the world. To want to abstractly "save the world" seems rather absurd, particularly when it's not clear that the world needs saving. I suspect that the "I want to save the world" impulse is really the "I want to rule the world" impulse in disguise, and I prefer to be up front about my motives...

I'm being upfront about my motives. By committing to them publicly I add social pressure to keep me on my desired track. As to what my unconscious motives might be, well I love my unconscious mind dearly but there are times when it can just go screw itself.

Everyone wants to save something, don't you think?

(ETA: I've realized that my comment isn't helpful.)

As someone who accepts both the doomsday argument and EDT (as opposed to TDT), I don't think the world can be saved.

I want to improve the world.

I'm not sure of the predictive value of the doomsday argument, but my own thought experiments seem to give a fairly high probability that we're all ultimately doomed (and long before thermodynamics wins out). So I'm with you: if the world can't be "saved" then I want to achieve some tradeoff between prolonging our existence as much as possible and improving the condition of the world in the remaining time.
I am sure of the predictive value of the doomsday argument, but I'm not sure of the predictive value of virtually anything else. Exactly how sure can you be that your thought experiments aren't biased? The galaxy can support about 10^40 people. If there's only a one in ten billion chance of being wrong, it's an expected 10^30 people. And that's not even getting into the fact that the laws of thermodynamics might be wrong.
What about the smoking lesion problem?
I suggest arguing about the smoking lesion problem on the article about that problem, or discussing EDT on an article about it.
Okay; if you reply to a post about the smoking lesion problem or if you know of a post defending EDT then I will discuss it with you there.

Saved from what exactly?