Most of us want to make the world a better place. But what should we do if we want to generate the most positive impact possible? It’s definitely not an easy problem. Lots of smart, talented people with the best of intentions have tried to end war, eliminate poverty, cure disease, stop hunger, prevent animal suffering, and save the environment. As you may have noticed, we’re still working on all of those. So the track record of people trying to permanently solve the world's biggest problems isn’t that spectacular. This isn’t just a “look to your left, look to your right, one of you won’t be here next year”-kind of thing, this is more like “behold the trail of dead and dying who line the path before you, and despair”. So how can you make your attempt to save the world turn out significantly better than the generations of others who've tried this already?

It turns out there actually are a number of things we can do to substantially increase our odds of doing the most good. Here's a brief summary of some of the most crucial considerations that one needs to take into account when soberly approaching the task of doing the most good possible (aka "saving the world").

1. Patch your moral intuition (with math!) - Human moral intuition is really useful. But it tends to fail us at precisely the wrong times -- like when a problem gets too big [“millions of people dying? *yawn*”] or when it involves uncertainty [“you can only save 60% of them? call me when you can save everyone!”]. Unfortunately, these happen to be the defining characteristics of the world’s most difficult problems. Think about it. If your standard moral intuition were enough to confront the world’s biggest challenges, they wouldn’t be the world’s biggest challenges anymore... they’d be “those problems we solved already cause they were natural for us to understand”. If you’re trying to do things that have never been done before, use all the tools available to you. That means setting aside your emotional numbness by using math to feel what your moral intuition can’t. You can also do better by acquainting yourself with some of the more common human biases. It turns out your brain isn't always right. Yes, even your brain. So knowing the ways in which it systematically gets things wrong is a good way to avoid making the most obvious errors when setting out to help save the world.
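One way to let the math do the feeling for you is a plain expected-value calculation. This is just a toy sketch with made-up numbers -- the probabilities, payoffs, and function name are mine, not anything from a real charity evaluation:

```python
# Toy expected-value comparison -- all numbers are invented for illustration.

def expected_lives_saved(prob_success: float, lives_if_success: float) -> float:
    """Expected value = chance the intervention works * payoff if it does."""
    return prob_success * lives_if_success

# Intervention A: certain to save 10 people.
a = expected_lives_saved(1.0, 10)      # 10.0

# Intervention B: only a 60% chance of working, but saves 1,000 people if it does.
b = expected_lives_saved(0.6, 1000)    # 600.0

# Intuition balks at the uncertainty; the math says B is 60x better in expectation.
print(a, b)
```

The "call me when you can save everyone" instinct would pick A; multiplying through shows why that instinct fails at scale.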

2. Identify a cause with lots of leverage - It’s noble to try and save the world, but it’s ineffective and unrealistic to try and do it all on your own. So let’s start out by joining forces with an established organization that’s already working on what you care about. Seriously, unless you’re already ridiculously rich + brilliant or ludicrously influential, going solo or further fragmenting the philanthropic world by creating US-Charity#1,238,202 is almost certainly a mistake. Now that we’re all working together here, let's keep in mind that only a few charitable organizations are truly great investments -- and the vast majority just aren’t. So maximize your leverage by investing your time and money into supporting the best non-profits with the largest expected pay-offs.

3. Don’t confuse what “feels good” with what actually helps the most - Wanna know something that feels good? I fund micro-loans on Kiva. It’s a ridiculously cheap way to feel good about helping people. It totally plays into this romantic story I have in my mind about helping business owners help themselves. And there are lots of shiny pictures of people I can identify with. But does loaning $25 to someone on the other side of the planet really make the biggest impact possible? Definitely not. So I fund a few Kiva loans a month because it fulfills a deep-seated psychological need of mine -- a need that doesn’t go away by ignoring it or pretending it doesn’t exist. But once that’s out of the way, I devote the vast majority of my time and resources to contributing to other non-profits with staggeringly higher pay-offs.

4. Don’t be a “cause snob” - This one's tough. The more you begin to care about a cause, the more difficult it becomes not to be self-righteous about it.  The problem doesn’t go away just because you really do have a great cause... it only gets worse. Resist the temptation to kick dirt in the faces of others who are doing something different. There are always other ways to help no matter what philanthropic cause you're involved with. And everyone starts out somewhere. 15 years ago, I was optimizing for anarchy. Things change. And even if they don't, people deserve your respect regardless of whether they want to help save the world or not. We're entitled to nothing and no one. Our fortunes will rise and fall based on our abilities, including the ability to be nice -- not the intrinsic goodness of our causes.

5. Be more effective - You know how sometimes you get stuck in motivational holes, end up sick all the time, and have trouble getting things done? That’s gonna happen to everyone, every now and then. But if it’s an everyday kind of thing for you, check out some helpful resources that can get you unstuck. This is incredibly important because the steps up until now only depended on what you believed and what your priorities were. But your beliefs and priorities won’t even get you through the day, much less help you save the world. You're gonna need to formulate goals and be able to act on them. Becoming more capable, more organized, more well-connected, and more motivated is an essential part of saving the world. Your goals aren’t going to just accomplish themselves the first time you “try”. If you want to succeed, you’ll likely have to fail a bunch first, and then try harder.

6. Spread awareness - This is a necessary meta-strategy no matter what you’re trying to accomplish. Remember, deep down, most people really do want to find a way to help others or save the world. They just might not be looking for it all the time. So tell people what you’re up to and if they want to know more, tell them that too. You shouldn’t expect everyone to join you, but you should at least give people a chance to surprise you. And there are other less obvious things you can do, like join networking groups for your cause or link to the website of your favorite cause a lot from your blog and other sites where they might not be mentioned quite so much. That way, they can consistently turn up higher in Google searches. Or post this article on Facebook. Some of your friends will be happy you shared it with them. Just saying.

7. Give money - Spreading awareness can only accomplish so much. Money is still the ultimate meta-tool for accomplishing everything. There are millions of excuses not to give, but at the end of the day, this is the highest-leverage way for you to contribute to that already high-leverage cause that you identified. And don’t feel like you’re alone in finding it difficult to give. Most people find it incredibly difficult to give money -- even to a cause they deeply support. But even if it’s a heroically difficult task, we should still aspire to achieve it... we’re trying to save the world here, remember? If this were easy, someone else (besides Petrov) would have done it already.

8. Give now (rather than later) - I’ve seen fascinating arguments that it might be possible to do more good by investing your money in the stock market for a long time and then giving all the proceeds to charity later. It’s an interesting strategy, but it has a number of limitations. To name just two: 1) Not contributing to charity each year prevents you from taking advantage of the best tax planning strategy available to you. That tax break is free money. You should take free money. Not taking the free money is implicitly agreeing that your government knows how to spend your money better than you do. Do you think your government’s judgment and preferences are superior to yours? And 2) Non-profit organizations can have endowments, and those endowments can invest in securities just like individuals. So if long-term investment in the stock market were really a superior strategy, the charity you’re intending to give your money to could do the exact same thing. They could tuck all your annual contributions away in a big, fat, tax-free fund to earn market returns until they were ready to unleash a massive bundle of money just like you would have. If they aren’t doing this already, it’s probably because the problem they’re trying to solve is compounding faster than the stock market compounds returns. Diseases spread, poverty is passed down, existential risk increases. At the very least, don’t try to out-think the non-profit you support without talking to them -- they probably wish you were donating now, not just later.
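The endowment argument in 2) can be sketched with a toy compounding model. The growth rates below are pure assumptions for illustration, not estimates of real market returns or real problem growth:

```python
# Give-now vs. invest-then-give, with purely illustrative annual rates.

def grow(amount: float, rate: float, years: int) -> float:
    """Compound an amount at a fixed annual rate."""
    return amount * (1 + rate) ** years

donation = 10_000.0
years = 20
market_rate = 0.07    # assumed investment return
problem_rate = 0.10   # assumed rate at which the problem gets costlier to fix

# Invest-then-give: your money grows at the market rate...
future_donation = grow(donation, market_rate, years)

# ...but if the problem compounds faster, each future dollar fixes less of it.
deflator = grow(1.0, problem_rate, years)
problem_adjusted_value = future_donation / deflator

# Under these assumptions, waiting leaves you with less problem-solving power
# than the 10,000 you could have given today.
print(round(problem_adjusted_value))
```

Flip the two rates and waiting wins, which is exactly why the question turns on how fast the problem compounds relative to the market.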

9. Optimize your income - Do you know how much you should be earning? Information on salaries in your industry / job market could help you negotiate a pay raise. And if you’re still in school, why not spend 2 hours to compare the salaries of the different careers you’re interested in? Careers can last decades. Degrees take 4-6 years to complete. Make sure you really want the kind of salaries you’ll be getting and you know what it will be like to work in your chosen industry. Even if you’re a few years into a degree program, changing course now is still better than regretting not having explored other options later. Saving the world is hard enough. Don’t make it harder on yourself by earning below market wages or choosing the wrong career to begin with.
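A two-minute spreadsheet (or two lines of code) makes the point concrete. These salary figures are invented for illustration:

```python
# Back-of-the-envelope cost of earning below market rate over a career.
market_salary = 85_000     # assumed market rate for the role
current_salary = 70_000    # assumed below-market salary
career_years = 30

lifetime_gap = (market_salary - current_salary) * career_years
print(lifetime_gap)  # 450000 -- nearly half a million in forgone earnings
```

Even a modest pay gap, left uncorrected, dwarfs most people's lifetime charitable giving.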

10. Optimize your outlays - Cost of living can vary drastically across different tax districts, real estate markets, commuting methods, and other daily spending habits. It’s unlikely you ended up with an optimal configuration. For starters, if you don’t currently track your spending, I highly recommend you at least try out something light-weight so you can figure out where all your money is going. Remember, you don’t have to scrimp and sacrifice your quality of life to save money -- a lot of things can be less expensive just by planning ahead a little and avoiding those unnecessary “gotcha” fees. No matter what you want to do to improve the world, having more money to do it makes things easier.

11. Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. I've done this before and know others who have too. Thousands of employers will match donations to qualified non-profits. When you get free money -- you should take it.

12. Have fun! - Don’t get so wrapped up trying to save the world that you sacrifice your own humanity. Having a rich, fulfilling personal life is a well-spring of passion that will only boost your ability to contribute -- not distract you. Trust me: you won’t be sucked into the veil of Maya and forget about your vow to save the world. So have a beer. Call up your best friend. Watch a movie that has absolutely no world-saving side-benefits whatsoever! You should do whatever it is that connects to that essential joy of being human and you should do it as often as you need; without apologies. Enough people sacrifice their lives without even realizing it -- don’t sacrifice your own on purpose.


This is an awesome post! Thanks, Louie :)

some obvious suggestions:

  • Make friends with other world-savers.
  • Spend less time with your current friends, if it's obvious that they are causing you to be significantly less effective at world-saving, and the situation isn't likely to improve any time soon. But don't break contact with any of your current friends entirely, just because they aren't world-savers.
  • Find other world-savers who can significantly benefit from skills or other resources that you have, and offer to help them for free.
  • Find other people who are willing to help you for free, with things that you especially need help with.
  • Look for opportunities to share resources with other world-savers. Share a house, share an apartment, share a car... There's lots of synergy among the people living at the SIAI house.
  • Join the x-risks career network
  • If you know of an important cause that currently doesn't have a group dedicated to that cause, consider starting a group. For example, the x-risks career network didn't exist a year ago.
  • Check out the Rationality Power Tools
  • really, anything that will help make your life more efficient will help you be more efficient at world-saving.
A great list, although the first two points seem distinctly cult-like. I think it's important for world-savers as a group to maintain very broad connections to the greater social network.
Good point, thanks. But I think it would still be a very bad idea to avoid having any friends who are world-savers, just to avoid seeming cult-like. And I should mention that I think it would also be a bad idea to avoid being friends with anyone who currently isn't a world-saver, because of a mistaken belief that only world-savers are worthy of friendship. Also, even the cults know that making friends with non-cult-members can be an effective recruitment strategy. I rephrased the second point as "Spend less time with your current friends, if it's obvious that they are causing you to be significantly less effective at world-saving, and the situation isn't likely to improve any time soon. But don't break contact with any of your current friends entirely, just because they aren't world-savers." The original version was "Spend less time with your current friends, if it's obvious that they have no interest in world-saving, and they aren't helping you be more effective at world-saving, and you're not likely to make them any more interested in world-saving." Or maybe I should just drop the second point entirely...
That depends whether you are optimising for world saving advice or social signalling. In the current form it doesn't seem cultish so much as it seems blatantly obvious. To be honest the part about synergy and sharing actually struck me as more cultish.
In my geographical area, I know only about 10 people who might be described as trying to save the world. I would hate to have that small a pool of potential friends. Also, I think spending time on non-world-saving activities is essential to my mental health. Some of that off time happens with friends who aren't interested in saving the world. That's fine.
It is fine. Just so long as it is not intended as any more than tangentially relevant to the grandparent.
Not if maintaining my mental health (via social connections) is important to my ability to save the world.
Thoughts on the 1st and 2nd points: To the extent that you are or can be someone others look up to and are inspired by, stay friends with as many non-world-savers as possible. If you assess yourself as unable to exert a possible influence in this way, have less non-world-saver friendships. Or at least keep your two worlds from colliding, so the positive one isn't hampered by the recreational one. Having friends with shared interests is critical for many people -- I can't tell you how little I care about IT (my job) when I don't have other enthusiastic people to discuss the tech with. Or, wait, I guess I just did. Jordan - When Ben Franklin started the Junto, and later the American Philosophical Society, was he being cultish?
Great ideas! I incorporated a not so subtle mention of the x-risks career network into #6 based on your suggestion. My goal here was to keep things general in tone and only deeply permeate the subtext + links with my own value judgments. It's a kind of overt neutrality with a strong undercurrent of things you can look into if you're interested. But if you never click on a link, you could just as easily be a member of any current activist set and still get a lot of value out of my writing. Actually I think I'll write up a new section like "Become more generally capable" which seems like something I didn't specifically cover but almost certainly should. Anyone have suggestions for "must have" items to go in that summary section? What other Less Wrong posts are good for that? EDIT: Added as the new point #5 now -- it's general if you just read it but rich in specific examples if you follow up on the resources linked from it
"Become more generally capable" is an applause sign; it's too generic, not actionable. Although you can mitigate this by including as many specific actions as possible. Maybe stress the importance of proper diet (Paleo) and movement and sufficient sleep on general capability. Not sure what else would count without it turning into a list of how to become more specifically capable, contra "generally".
A rather weak one if it is. I don't associate it with strong affect of any kind. Possibly. More specificity could be helpful. Sure it is. Search your brain, the internet or lesswrong for personal development techniques and practices. There are posts here on self improvement, including some specifically for developing capabilities for 'world saving'. (One way to be less general would be to link to one of them.)
Perhaps I'm using the term "applause sign" incorrectly. My intended meaning there is that it is obvious, it provides no new information to anyone, everyone will nod their heads as though it is wisdom, but it is not specific enough to make it easy for people to do. Much like "lose weight" is a bad goal, but "get to 190 lbs, 10% body fat by April 15th" is a better goal, and is even better as "get to 190 lbs, 10% body fat by April 15th by limiting intake to 1000-1500 calories, 90% Paleo/primal foods, heavy lifting 3 days a week, daily yoga and mobility work, and 5 nature hikes a week for at least 30 minutes." Pardon if the "applause sign" term was misappropriated. "Sounds like wisdom, but is not informative enough to be helpful" is probably closer.
"Not even an applause light" reflects well on your point. ;)

I don't really like this post. It reads like one of those fake advice websites set-up by companies selling products that target those advice seekers. Like "How To Get Rid of Acne" with not-so-subtle links to an order page for Clearasil. After I get over my exasperation at the tone, feel, and SIAI pitch I don't see anything new here to get excited about. Good collection of links I guess. Everyone else seems to love it though, so I suppose it just rubbed me the wrong way.

Thanks for your thoughtful criticism. Could you point out the worst abuses of my tone? I'm happy to modify it to improve things if anyone has specific suggestions from the text. Also, you're incredibly fortunate to have learned nothing from my summary. I suggest that in your case (and probably others who agreed with you), you're a Less Wrong legend. Heck, you're #6 all-time in comment karma! For reference, Yvain is #8. Anyone who's been here long enough to be that right, that often, will find (almost) nothing new in this article. But if you had counter-factually never seen Less Wrong and arrived here in the past month or two, amazing as it may seem, you likely wouldn't know the majority of this "basic" information. Did you at least get a little out of points #8 and #10? Those were the two bits that were actually my own original contributions and not generally part of the Less Wrong canon. Also, several of the links in #5 are unique to me including the heading link which didn't exist before I posted my friend Dennis' presentation online. Also, did anyone who read this actually sign up for any of the services I linked? I would be tremendously less effective without each of those. They help on different time frames (daily, monthly, and yearly respectively). Again, I'm sorry if this post is mostly repetitive and unnecessary for those of us who have been here awhile. But as FormallyknownasRoko points out, this article somehow didn't exist. Just like Roko, I needed to point a smart friend with no background in this stuff to something about optimal philanthropy. I felt like linking them straight to Anna's "Back of the Envelope" talk from 2009 or Eliezer's "Money the unit of caring" were both "too zoomed in" a spot to dump someone who didn't have an overview of why they might want to be an optimal philanthropist to begin with. Anyway, I think this article is actually really important to get right. So your issue with the tone is very important and

Heck, you're #6 all-time in comment karma!

Wait. I am? Yikes! Where is this information available? I think I probably just make a lot of comments. You're right though, I've been around here a while I should adjust for that.

Re: Style and tone

I have pattern-match aversions that are stronger than I'd like sometimes (though at other times this is extremely helpful). It's possible that I'm reading things into your post that you and the people who liked it didn't.

Just to start with, your post includes lots of links to pages that explain your point in detail -- but it is so overhyperlinked that the signal/noise ratio is greatly diminished. I don't understand why you linked to the wikipedia page on Gandhi, Code Pink, the Red Cross, Oxfam, PETA and Greenpeace, the entire Metaethics sequence, wikipedia on axiology, the Gates Foundation, the Clinton Global Initiative... and that's only halfway through point number two! People will be a lot more likely to click the links you think are important if they're the only links on the page.

Numbers 8 and 10 included some decent, new points.

I think the main issue though was that if you just look at points 2, 3, 6, 7 and 8 (half the post... (read more)

Thanks for your suggestions. They're very helpful. I removed six of the less relevant links. Mostly from the beginning. The signal to noise ratio in them was indeed too low. Thanks again for pointing that out. I also removed a link to SIAI from point #6 based on your suggestions. I left the links to other charities in the first paragraph for now because I feel like they are similar to the list of below-average charities I link to in #2 -- I mention them in the context of failure. So I think most people will realize I am not recommending them as helpful resources but just citing them as well-known examples. Although maybe I should remove the links just to deny those groups PageRank juice... especially since I mention them so high up in the article. I'm gonna go "nofollow" them now. I don't want to quibble too much because my intent isn't to be right, but to make the best article I'm capable of making that people can link their friends to. So if you still have objections, could you elaborate on how I'm being partisan in #7 and #8? Here's how it looks to me now that I've made updates: 3 - Guilty. I'm definitely being partisan here. I make a direct link to SIAI. Although I then immediately link to a video which goes a long way to support my claim that SIAI is in fact a high leverage charity. I think scope insensitivity prevents most people (including me) from imagining that a cause with something approaching existential risk reduction's potential to create value could even be possible. That video which I link to for support has been out for a year now. There were hundreds of people who saw that presentation. And over a thousand have watched it online. I've never seen anyone make any counter-claims or better estimates. I'm sure a refinement must be possible -- one which I'd love to see if anyone's up to the challenge. But I feel like it's a solid argument in a broad sense and justifies me linking to SIAI at least once directly. 6 - Link to SIAI removed. 7 - I
First, can you tell me how you know about comment karma? Do you have admin powers or talk to someone who does? It is a little creepy. Second, I'm not sure at this point what your goal is with this piece. Is it merely to provide general advice that will help people become more effective at saving the world? Or are you trying to get people to give money to SIAI, by convincing them this is what they should do to save the world? I think there is some inferential distance between our positions as the result of you considering those questions more closely related than I do. There is so much SIAI-cluster stuff in here that it seems like your goal is the latter. I ask because while you've been more than admirable in responding to my individual criticisms (I upvoted the above comment) it's begun to feel like what you want this article to be just isn't something I would upvote even if we kept going through iterations of criticism and revision. Less Wrong is a fine place to test run articles promoting rationality generally, I'm not crazy about it as a place to run test articles promoting SIAI. If you just want to drop in an endorsement of SIAI I recommend doing it in first person and possibly in a parenthetical. Instead of: say And then leave it there! Number two has the exact same problem as number three (and the exact same Anna Salamon video, incidentally). The links in number six still aren't about spreading awareness generally, the x-risk career network isn't going to be a helpful for most people who read your article; same goes for the thing about Less Wrong search engine optimization. Unless your goal is to get people involved with the SIAI/FHI cluster of organizations it doesn't make sense to link to them unless they are accompanied by a bunch of other examples for other kinds of charity. Seven and eight aren't problematic on their own they're problematic after you endorse a charity. They're particularly problematic after I get curious, do ten seconds of googling
I have access to a copy of the LW database because I'm coordinating the addition of new features to the site between SIAI and Tricycle. I don't have any admin privileges on the live site or promotion powers or anything else that anyone else doesn't have though. I've been trying to think of more site stats to add for people. I think a top commenter list might be nice... or at least having it appear in people's profiles so everyone can check their own stats. If there's interest, I can work on that or get someone else to add it.
I would love to be able to sort my own contributions by (Popular, New, Old, Controversial) the same way we can sort comments on a post. I'd also like Unpopular as a sort key there. Basically, I use comment karma as a way of getting feedback on what people think is good and bad, but having to page through all of my comments looking for items with a high absolute value is awkward. The current arrangement seems to assume that comment karma scores don't vary much after a few days, and that just isn't true. Less valuably but still interestingly, I'd like to be able to do the same with other people's contributions... e.g., find the most popular comments someone else has made.
I'd also like to be able to do this, especially for other people. When I'm checking someone's profile and wondering "who is this person?", being able to see their highest karma posts/comments would be a quick way to get some information about them.
"I've been trying to think of more site stats to add for people." I'd like to see average score per comment. I.e., karma from comments divided by number of comments made. Hacker News puts this number in the profile. (Actually, I prefer karma divided by words posted but karma divided by comments posted conforms better to people's expectations because other sites like HN use it.)
Ah.... That's not what it means to me (it sounds like the karma gotten from comments rather than posts). Which would be much better evidence of my having been around a while than a poll I once did. But I am #6 there, so I guess that's what he was talking about. ETA: Actually it looks like that comment is still getting upvotes. It has no business being on that page as it is meta and no longer useful. If people want to downvote it off the page I would totally endorse that (just find a comment of mine you like, or upvote the karma dump that is attached to it). I'll edit the comment accordingly.
Woah! Font hurts my eyes!
Yeah, that was weird. Not sure how that happened. Fixed.
That's tracked somewhere? Where?
Bottom of the right column - just above the sitemeter.
That's total karma, which I do not believe was being referred-to above.
I did, this weekend. So far the act of writing down some things I wanted to do has been good enough to spark action on a couple of random things I'd been procrastinating on (unsubscribing from after exporting my queue, buying a new mattress, and signing up for a cashback credit card). We'll see if they do anything long-term.
I think this may be an entry to my competition.
I have a similar opinion.

Can someone give or link to a convincing argument, possibly in the form of a lesswrong post, that having fun is beneficial? It seems intuitive, but that intuition doesn't answer:

How much fun should one have? What kind of fun should one have? etc.

if one wants to save the world.

What kind of fun should one have?

Sex is probably the ideal form. It encompasses social connectedness, physical exertion, flow and physical coordination. Each of those are important, in approximately that order.

One reason is that if you attempt to be an optimized world-saving robot, your mental health will deteriorate. Mine did, at least. Now I'm in therapy. Take your mental health seriously, don't think you can sweep it under the rug.
What fun would you say is optimal for preserving mental health? Seems like that would be social contact, but it's not clear whether that intuition is correct.
Yeah, profound isolation is definitely my #1 problem. Apart from that, I couldn't say. I have the problem, not the solution.
This intuition is correct if you take mental health to highly correlate with health in general. Except for ageing and tobacco smoking (also called slow-motion suicide), not having a deeply rewarding and intricate social life is the most important factor determining your health. "People with strong social relationships were 50 percent less likely to die early than people without such support, the team at Brigham Young University in Utah found. They suggest that policymakers look at ways to help people maintain social relationships as a way of keeping the population healthy. "A lack of social relationships was equivalent to smoking up to 15 cigarettes a day," psychologist Julianne Holt-Lunstad, who led the study, said in a telephone interview. Her team conducted a meta-analysis of studies that examine social relationships and their effects on health. They looked at 148 studies that covered more than 308,000 people. Having low levels of social interaction was equivalent to being an alcoholic, was more harmful than not exercising and was twice as harmful as obesity. Social relationships had a bigger impact on premature death than getting an adult vaccine to prevent pneumonia, than taking drugs for high blood pressure and far more important than exposure to air pollution, they found. Paper is here:
We use markdown. Click the "Help" link at the bottom right of the comment box. Use either asterisks or underscores around a word or phrase that you wish to emphasise. Two of the same on each side for bold.
I made this same mistake, and ended up being significantly less optimized at world-saving as a result.
Yes, Will's intuition is right. The literature is clear on how important social connectedness is to human health and well-being.
I thought I specified this with "as often as you need". Although, after reading your comment it now occurs to me that it could be possible that others might not know how much fun they need. Is that true? If so, I recommend you explore having an "unlimited" amount of fun without ceasing for days on end (think cliches of "spring break") until you can naturally feel the inflection point at which adding more hedonistic experiences on top of your current pleasure no longer improves your happiness and you long for "relief from recreation". If I'm remembering correctly, this is how I actually calibrated how much fun I need. Once you know this point, you can more naturally feel the bend of your own hedonistic pleasure curve and keep yourself in a state of content, disciplined happiness or slide yourself up towards bliss or down towards more subdued states depending on what's appropriate for the situation. Sex is indeed the correct answer. In some ways, I feel like a chicken-shit for not finding the right way to say this directly in my article. I guess I didn't want to point out sex as an ideal form of recreation since, based on reading comments here on LW, I perceive it as being relatively scarce among some readers. Now that I think of it, my mind actually estimates it as so low that it effectively rounds it down to zero unless I think it through consciously and realize that it can't possibly be that divergent from any other community. Still, I know the pain of being someone who has had sex before, and then being reminded of how awesome sex is without having an outlet for it at the time, and having it leave me feeling unbelievably miserable. I didn't want to leave even a single person reading my article in a place like that. [ OTOH, if they're down here reading the comments, sorry about that. ] I guess I normally just avoid mentioning sex to people unless I know they're in an abundant sexual situation in life. This heuristic is probably overkill. How do other people deal
I'm not interested in sex, what's the next best thing?
Cuddling with people who are willing to accept it as just cuddling might be a good place to start. Another avenue to explore is, imagine you had enough resources that neither physical health and safety nor status contests presented any challenge to you, but not quite enough for world-shattering extravagances. Beloved king of a small, peaceful city-state, maybe, with a staff of wise and dutiful ministers who can handle all the routine administrative drudgery. What would you do with your time? Appeal to the senses with fine food and music, perhaps, or explore mathematics? Once you have a list of things, you'll probably notice at least a few of those diversions don't actually require regal-level assets to dabble in and enjoy, so try the practical stuff for real and explore different variations until you find something you like. Then, for each of the 'sweet spots,' go find a community of hundreds of people on the internet who have been obsessing about that particular sort of enjoyment for longer than you've been alive, and mine them for ideas, bearing in mind always that the only wrong ways to have fun are the ones that either a) have unacceptable long-term consequences for yourself and/or people you care about or b) don't actually result in fun.
Exercise, socialise and do things that are challenging without being frustrating. If you can combine more than one of these into one activity then so much the better.
Cocaine? For me, being the center of (positive) attention. For example, performing for an audience gives me a HUGE rush.
For me, it's sugar. YMMV.
Dancing is mine.
This thought is very much appreciated.
Everyone is equipped (Buss 2004) with a model of the listener when we speak. This model prevents us from saying immoral things in the presence of people whose concept of morality we are well acquainted with. For speaking, I'd say just remind people of sex as much as your mind naturally allows. In the case of writing, where readers are many and you have no model of them in mind, shut up and calculate. That is, only talk about the pleasure of sex if you are using it in an argument about something else. This also helps avoid Status Promotion bias, the bias whereby you pretend to have an awesome sex life so that people become attracted to you. There are so many kinds of fun to be had that I suggest sex is overrated. Take great movies, roller coasters, conversations with friends, swimming, watching fire burn, picnics, and hiking as prime examples that do not last very long (as opposed to videogames, which simply exhaust your minutes away).
Well, if you believe in a utilitarian theory of morality, then the most ethical thing to do is to maximize utility (happiness) for everyone, including yourself. So basically, you should have as much fun as you can, except in cases when you could devote that same effort to increase someone else's happiness by a greater value.
That's not relevant. The claim being made is that the best way to increase other's happiness is to have fun yourself, at least some.
Fair enough. In that case, I would just mention that if you improve your own mood, that is likely to improve the mood of people close to you and in your social network in general. Both happiness and sadness are contagious. Also, maintaining a positive mood is likely to make you more efficient at other tasks.
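The utilitarian trade-off discussed a few comments up can be sketched formally (my notation, not the commenters'): devote a unit of effort to your own fun exactly when the happiness it buys you exceeds the happiness the same effort could buy someone else.

```latex
% Sketch: spend effort e on your own fun iff the marginal utility to
% yourself exceeds the marginal utility you could create for others.
\text{have fun with effort } e
\quad\iff\quad
\frac{\partial U_{\text{self}}}{\partial e} \;>\; \frac{\partial U_{\text{other}}}{\partial e}
```

This is only a first-order statement of the rule; in practice both marginal utilities are hard to estimate, which is much of what this thread is about.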

We're entitled to nothing and no one.

What does this mean? I understand the intended affect, but not the denotation.

In such cases, the topic is often the existence of moral arguments against a position. "What? He did S? How could he?" is raw material for constructing moral arguments that allow you to have less of S done, by affecting either the person in question, or others with influence over that person. But in this particular case, it's not apparent to me what kind of moral argument is to be constructed (apart from using empathy of others t...

I actually thought this was a very useful transition between the two sentences it abuts because it summarizes and repeats their ideas in another way. Isn't it clear that the underlying subtext of my sentence is more like "The way the world currently works, we're entitled to no material support and no one's a priori support for our causes"? Since we're not "entitled to [some]one [...] based on [...] the intrinsic goodness of our cause", it explains why you shouldn't disrespect them (or at least not "net disrespect them") over their failure to join. This is less of a moral argument and more just a description of how the world currently works. We can't expect support from people in proportion to the obvious (to us) value that each cause actually contains. I suppose it might be different if the world had more rationalists. Perhaps another way to think of this is that we shouldn't doubly disrespect a person who isn't a rationalist and isn't saving the world. If they're not on board with the idea of following chains of logic to their conclusions and then accepting them, it's a bit like beating a blind dog for walking into the wrong room of your house. They might figure things out eventually by some random cue, but it's cruel and ignores their disability in a thoughtless sort of way. Better to wait for them to regain their eyesight before expecting them to really understand... and hopefully, at that point, you haven't been so heavy-handed with them in the past that they run away in fear of you.

So, this is a fantastic exposition of how to be a rational altruist -- but it still left me a little disappointed, because the title suggests that you will teach us how to "save the world," i.e., how to accomplish some really epic-level quest like ending hunger or disease. You don't actually do that here.

Instead, you argue that the most good we can realistically hope to accomplish is to educate people and to donate to efficient charities on a modest scale and to have fun, and so you set about teaching us how to do that.

Even assuming that you're c...

Things change. And even if they don't, people deserve your respect regardless of whether they want to help save the world or not.

Can you unpack this sense of "respect"? It seems to me that it must necessarily be influenced by properties like this; I don't know how to define the word so that it isn't.

(Of course, the sign of the influence is not a given; depending on one's epistemic situation, respect could well go down if you learn that the person believes X, even if you're pretty certain X is the correct thing to believe. And the extent of the influence could well be small in most cases, but again depending on what other things the person knows.)

I think this was my way of saying that it makes sense as an instrumental rationality technique to afford people at least some positive level of respect (as opposed to negative respect levels, or overall disrespect) regardless of their current world saving position. I could say all that in the article, but it sounds mealy-mouthed that way. So my advice is that if you're really a "respect-Bayesian" and you have to account for evidence (so you're duty bound to adjust downward), try not to update others' total respect value below zero over this. Or move your zero-floor down so that almost everyone has a positive value both a priori and in practice.
At this point, when you start discussing "positive/negative respect", I'd need to ask what that means in even more detail. What defines the "zero point", why would you have a total order ("levels"), why is this an interesting concept. Again, I see the affect, the surface promise of meaning, but not any straightforward way of discerning what's actually meant. (I agree that with any reasonable guesses at the concepts, "respect" going into the "negative" because of not saving the world in the case of not being aware of the arguments is incorrect, but I don't appreciate the abundance of apparently arbitrary detail in your explanation.)
One possible definition of a zero point would be signaling (or being perceived to signal) neither a raising nor a lowering of the status of the person in question. So the imperative could be reformulated as "don't make moves to lower other people's status in interactions with them".
(It isn't your imperative but...) High status people will often take that as disrespectful.
I understand treating higher status people like you would treat equal status people as signaling a lowering of their status so I think that's already taken into account.
Not necessarily. Status is transactional and dynamic. High status people (of a certain kind) demand a constant stream of 'status raising' behaviors in the same way governments demand taxes.
I would add that the advice would seem better replaced with "for the purpose of social signalling don't be a respect-Bayesian". Now it seems to be "bias your bayesian updating such that your posterior respect gives desirable signals". (Although in the absence of the unpacking I can only infer.)

Very good post Louie! I agree with all the points, pretty much.

Number 11 seems especially important - it seems like a common trap for people in our crowd to try to over-optimize, so for me having an enjoyable life is a very high priority. A way of thinking that seems to work personally is to work on the margin rather than trying to reorganize my life top-down - to try to continually be a bit more awesome, work with more interesting people, get a bit more money, invest a bit more energy, etc, than yesterday.

In contrast, if I started out trying to allocate the resources I had access to / could gain access to in an optimal manner I suspect I would be paralyzed.

Don’t try to out-think the non-profits you support -

I take issue with this. Many nonprofits are not so smart. Some are idiotic. Always do due diligence.

You're right. I don't want to leave people thinking that non-profits are always right and should never be questioned or given outside advice. Perhaps that language was overly strong. Is there a modification I could make to improve it? I thought the way I phrased the whole sentence made it context-specific, but I could see how it could be construed as making a broader statement than I intended. Maybe it could be: EDIT: Changed in article. What do you think?
These can be times when contributing one's time can actually be more useful than the equivalent amount of money. Case by case, of course.

I think this is missing the primary advice of "work on instrumental rationality." The art of accomplishing goals is useful for the goal of saving the world - and still useful if you change your goal later! (say, to destroying the world, or moving to a new one :) )

So while this is a great list of ways to be instrumentally rational specifically for philanthropy, I think the general tools of instrumental rationality are also useful (like: have concrete goals, hypothesize how to achieve them, try methods, evaluate them and change based on resul...

Agreed. I'm surprised I managed to write this whole list without remembering to add that. I think it's one of those fish in water kind of things. I was going out of my way to summarize the points in my mind that I attribute somewhat to LW and instrumental rationality didn't naturally fall into that category when I plumbed my brain for "important things less wrong can teach you about saving the world". I get the feeling that I already absorbed a high enough level of instrumental rationality before I ever made it here that I didn't actually get any additional mileage out of the relevant material on LW about it. In fact, it's so yesterday's news to me that I often forget that others don't have similar predispositions or that others are still developing here and can use pointers to helpful material on the subject. Thanks for reminding me not to take this for granted! I'll add a new section in a few hours. EDIT: Added as the new point #5.

Good post, though I thought it was a little too focused on money. It could say (more explicitly) what types of charity are best, and what types of action... and other ways to help that aren't money.

In my opinion, some of the most efficient ways to achieve a positive difference are, foremost: (these are strategic priorities with more positive potential than all the rest) human genetic engineering and intelligence augmentation, artificial intelligence, and reduction of existential risks. In second order of importance: (these are ways to increase utility...

I should add that a lot of people here agree with your stance, except that they think there is a bigger risk from AI than there is benefit. That is, we'll have to work on AI, but first we should figure out how to make it friendly. That is what the SIAI is working on. By the way, welcome to Less Wrong. You know me as Alexander Kruel on Facebook.
There seems to be a significant "risk" of making a much better world with much smarter agents and a lot less insanity and stupidity. A lot of people see that as a bad thing, however. Looking at history, this sort of thing is fairly common. Most kinds of progress face resistance from various kinds of luddites, who would rather things stayed the way they were.
What? I don't follow. Are you saying it would be a much better world if an unfriendly AI replaced humanity? I don't think it's luddite-ish to say I'd rather not die so something else can take my place.
I'd agree to AI "unfriendly" (whatever this means... it shouldn't reason emotionally, it should just be sufficiently intelligent) replacing humanity... since we are the problem that we're trying to solve. We feel pain, we suffer, we are stupid, susceptible to countless diseases, we aren't very happy and fulfilled, etc. Eventually we'll all need to be either corrected or replaced. An old computer can only take so many software updates before it becomes incompatible with newer operating systems, and this is our eventual fate. It is not logical to be against our own demise, in my viewpoint.
Welcome to Less Wrong! Hey, have you read this paper about cognitive enhancement? If not, you might like it. Anyway, a lot of people have thought about this for years. This piece is a summary of that analysis. If you check the links in this article like these two videos and then read just these two articles, you might see more clearly why my article is organized the way it is and focuses heavily on donating while more or less ignoring other strategies. And I agree with you that most efforts to make a positive change are overrated and short-sighted. That was kind of my point in #2 and #3. Most causes are inefficient at creating good outcomes or optimized for making you feel good, not creating good. I'm working on solutions versus maintenance, but if other people are determined to work on maintenance activities, it's better if they do them wisely. It sounds like you already know a good deal about existential risk and the potential of AI. If you want to help out SIAI, I'm the remote volunteer coordinator. You can email me at I can always use more help.
The problem on Less Wrong is that there exists an unidentified subgroup who believes that 1) the best you can do is support the SIAI and 2) most people can best support the SIAI by donating money. This view might not be the general consensus here, yet the most influential people certainly believe it. What is necessary is a paper or article sequence that outlines a decision procedure and exemplifies rational choice by dissolving the question of the best (most effective) possible action(s) one can take to benefit humanity and possibly help save the world. This hasn't been done. Supposedly you should be able to conclude an answer here by reading the sequences. That might be the case, but it isn't very effective, as the issue is at best treated as a marginal one. How to save the world is not an explicit conclusion of the current sequences.
A less collapsed summary of the view you describe is: 1) Saving lives is good 2) X-risk reduction is a surprisingly high leverage way to save lives 3) Using money gives you more options for how to contribute to a cause, not less So I think it's a decidedly uncharitable framing to imply that our analysis reduces everyone's options down to the singular option of donating. An equally valid interpretation is that everyone now has unlimited options for how to save the world. You can be a computer programmer or an online poker player or a circus performer or anything else you love doing. Then you can turn the thing you love doing the most or what you're best at (usually the same thing) into a vehicle for saving the world. Characterizing all the millions of different ways to earn money for saving the world as just "donating" is like characterizing all books as "just paper" or all software as "just bits". The means of transmission (paper, bits, donating) isn't the important part for any of these. What we care about in practice is the content of those transmission mechanisms: the functioning of the software, the content of the book, or the career / economic activity that allows you to save the world. I know you don't argue explicitly against this, so I apologize for laying all this out in response to your comment. I hope you don't mind me expounding on this here to try and develop a more helpful framing.
As a point of detail that isn't the kind of question you dissolve, just one you answer! :)
Does this phrase actually add clarifying detail to your premise? How are we unidentified? It seems to me like the majority of posters on Less Wrong who strongly advocate a view along the lines of what you're describing post under our real names. What more could we do to identify ourselves? This phrase explicitly accuses the people you disagree with (or pretend to disagree with?? I can never tell with you) of being sinister and shadowy. It's probably not warranted in the case of clearly identified people who share their views openly and honestly.
Since I am not sure who, and therefore how many, people here share that opinion, but know that some do, I referred to them as an unidentified subgroup. That labeling was solely reflective of my current state of knowledge and not supposed to be judgemental. I often use a translator which outputs many different English words for a German concept. I suppose that might be one of the reasons that what I am writing sometimes appears weird or inept.
As a very unrelated side note, I usually read your username as Chinese, where "xi du" is "to smoke/take drugs" so "xi xi du" would be something like "to casually try drugs" (the verb is doubled to reduce emphasis). I have no idea if that's how you meant it.

I came up with that nickname at the age of 16 (in the year 2000). It is supposed to be a random sequence of letters that is pronounceable in German. A search gave no results, hence I naively suspected it to be unique. Only much later did I learn that many sequences of letters humans are able to pronounce also bear a meaning in some language. Last year I learnt that xixi means piss in Portuguese. Some native English speakers also asked me if it is supposed to mean sexy dude. But I can assure you that I never intended my nickname to signal a sexy dude who takes a piss and casually tries drugs. I was rather annoyed that many nicknames were already taken when I tried to register with various services. I also wanted to be uniquely identifiable. It pretty much worked, as almost all of the 46,100 results of a Google search for xixidu are related to myself.

How do you pronounce your nickname? I'd vaguely assumed the name was Chinese, with some presupposition that you were, too.
Ksicksiduh - but I prefer to go by my real name (Alexander Kruel) when it comes to vocal communication. I've never been Chinese. It wasn't my intention at all to sound Asiatic. I looked at an instant messenger avatar of a Rubber duck when that particular sequence occurred to me.
Thanks-- I pronounce people's names in my head when I'm reading. "Xi" is a letter combination that shows up in English transliterations of Chinese. That, plus your saying that English isn't your first language, was what gave me the false impression.
I've always pronounced your nickname in my head as if it were a Pinyin transliteration of Chinese (much like the English words "she she do"), even though I had no idea what it might mean. Making every other letter uppercase also gives the impression of Chinese (where there can be disagreement between transliterations for words made of several characters, such as "pinyin" vs "pin-yin" vs "pin yin", to take an example from my comment), even though nobody actually transliterates Chinese quite like that. But now I'll do German instead.
Doubled to reduce emphasis? Now that is unintuitive!
Oh, sorry to make such a big deal out of this then. Your English is good enough that I didn't realize you were a non-native speaker/writer. I'll take that into account when reading your comments in the future.
Now I'm feeling username envy. I'm Cameron Taylor, from Melbourne! I can't think of a better world saving option than the one in question, even if my advocacy is of the form 'least bad'. (I don't know what the 'unidentified subgroup' idea was supposed to be. It makes no sense.)
Hey Cameron! You're from Melbourne?? I'm American but I've been traveling in Australia the past few months. I'm in Byron Bay now. Do you know Patrick Robotham? I met him when I first got back here and stopped in Melbourne. He organizes the Less Wrong Meet-up at Don Tojo in Melbourne near the University. You guys actually have a surprisingly good number of LW rationalists there. Perhaps the most anywhere outside of the California Bay Area, New York, or London.
I haven't made it to one of the meet ups yet. I must at some stage. I didn't realize they were so well attended!
They're brand new. Patrick only started organizing them after meeting with me and realizing how many other LWers there were in Melbourne. I think there has only been 1 or 2 of them but there is a large critical mass of attendants from what I heard from Patrick.

Considering my recent personal experience (which I mentioned here) with removing a huge hidden negative motivation from my life I'd say that the absolutely most critical thing is to find out why you want to save the world.

If you find out that it's actually because you feel some kind of SASS threat if you don't try to save the world, I'd strongly suggest trying to directly remove that feeling anyway. The risk here is of course that after you've done it, you might find out that you never actually wanted to save the world to begin with. However, considering h...

Excellent point (I think). What's a SASS threat?
Sorry for being late with my answer. SASS is PJ's terminology; it stands for Significance, Affiliation, Stability, and Stimulation. The exact categories aren't that critical; the important idea is that they represent the terminal values all humans seem to have hard-wired into them, so to speak. So what I meant is that it's important to know why you're motivated to do action X. If it is because you've learned that you'll gain SASS by doing X, then everything is fine. That's operating under what PJ calls "positive motivation", and you'll feel as if you really want to do it and can pursue X without feeling stressed out, naturally selecting the best course of action, among other things. If you're operating under a SASS threat, on the other hand, which you do if you've learned that you'll lose SASS if you don't do X, then your mental state will be completely different. This is what he calls "negative motivation", and there you'll feel like you should and ought to do X without really feeling like you genuinely want to. It's usually accompanied by doing only as much of X as necessary to remove the immediate feeling of threat and then mostly feeling bad about not doing more, even though you feel like you "could", "should", "ought", and similar.

Love almost all of this. I worry that (3) is making the common rationalist mistake of basing a strategy on the type of person you wish you were rather than the type you are. (Striding toward Unhappiness, we might call it).

So, you wish that your passion for a cause were more strongly correlated with the utilitarian benefit of that cause, and game the instinct to work on what feels good with small gifts while putting most of your effort towards what you think is optimal. But if the result is working on something you aren't as passionate and excited about,...

A random thought:

If you donate less than 10% of your income to a cause you believe in, or you spend less than one hour per week learning how to be more effective at helping a cause you believe in, or you spend less than half an hour per week socializing with other people who support the cause... then you are less instrumentally rational than the average christian.

edit: shokwave points out that the above claim is missing a critical inferential step: "if one of your goals is to be charitable"

edit: Nick_Tarleton points out that the average christia...

First off, strongly agreed that community matters and is worth investing in. You may be less something, but rational targeting of effort (both doing something besides converting people, and being strategic at whatever you're doing) utterly swamps quantity of effort here. Being charitable ≠ doing good. Source, or are you just assuming people do what they're supposed to? This (first search result) says the mean is 2.9%. (I would also bet that most Christians don't know what they nominally should give.) (ETA: I read your comment after you deleted the paragraph acknowledging this.) I feel obligated to point out (outgroup homogeneity bias, etc.) that far from all Christians see this as their goal.
After some math, 2.9% still feels like more than most people donate to their non-religious causes. 2.9% of the average annual expenditure is more than 1400 dollars! I am willing to accept that Christians are doing more for their cause than I am for mine. Mine is more effective, but unless I can say that Christianity is a net negative (I can't), when you multiply it through the effectiveness, I still come out below Christians.
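As a sanity check on the arithmetic above (the average-expenditure figure is my assumption, roughly US household spending around 2010, not something stated in the thread):

```python
# Back-of-the-envelope check: 2.9% of an assumed average annual
# household expenditure, to see whether it clears $1,400.
AVG_ANNUAL_EXPENDITURE = 49_000  # assumed figure, not from the thread
GIVING_RATE = 0.029              # mean charitable giving rate cited above

annual_donation = GIVING_RATE * AVG_ANNUAL_EXPENDITURE
print(f"Implied annual donation: ${annual_donation:,.0f}")  # prints $1,421
```

So the "more than 1400 dollars" claim holds under any expenditure baseline above about $48,300.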
good points, thanks. I made some more edits. I added a note mentioning that the mean is 2.9%, and that comment "Being charitable ≠ doing good." I replaced "their mission of converting the whole world to christianity" with "their vaguely defined mission"
People are going to balk at your use of "instrumentally rational". I would suggest explicating the chain of inference: If you donate less than 10% ... then you are less charitable than the average christian; and if one of your goals is to be charitable, then you are less instrumentally rational than them too.
you're right. thanks. I updated the comment to include your change.
It seems you are assuming that donating to a church = donating to a good cause, which I am not sure is always, or even most of the time, right.
sorry, I should have stated explicitly that I'm NOT assuming that "donating to a church = donating to a good cause". What I am assuming is that the christians think that "donating to a church = donating to a good cause"
(blink) So, I think you just said that the average Christian does X, but doesn't do X, and therefore I should do X. I can't quite figure out if there's a typo in there somewhere, or whether I'm just misunderstanding radically. In any case, I agree with you that contributing resources to causes I support and training myself to understand them better and support them more effectively, and socializing with other supporters are all good things to spend some resources on. Incidentally, most of the Christians I know who do this in their capacities as Christians are not actually devoting those efforts to converting the world to Christianity, but rather to things like aiding the needy. Then again, the Christians I know well enough to know how they practice their religion are a pretty self-selecting bunch, and generalizing from them probably isn't safe.
You're right, thanks, the previous wording was confusing. I removed the paragraph that said "I suspect that the average christian actually gives significantly less than 10% of their income to the church, and doesn't go to church every sunday, but I haven't actually looked up the statistics yet." The point of that paragraph was that I'm admitting that I'm probably overestimating the contributions of the average christian.

Can anyone offer a single example of a major, longstanding problem that has been solved by this kind of approach?

Solved by what kind of approach? Organized charity? How about the polio vaccine? March of Dimes. Admittedly, solved problems are rare. Quite a few charities at least alleviate problems. Even though it is faith-based, I think the Salvation Army does some real good. Red Cross. Big Brothers. DWB. League of Women Voters.
The original article holds up charitable organizations as a means to make the world a better place. But all the examples I can think of where the human condition improved significantly were due to new technology (birth control, antibiotics), sweeping cultural changes (religious tolerance), or increasing wealth (sanitation, literacy). Charities, on the other hand, typically focus on handouts and lobbying, which may benefit individual aid recipients and rent seekers but rarely seems to do anything about the underlying problem. So my question is, what is the evidence that such organizations can actually deal with large issues like hunger, disease, poverty, oppression, genocide, and so on? And if there is no track record of success, why do we continue to pin our hopes on them?
This is the question I'd love to see answered. I appreciate the original article's analysis if you've already decided that giving resources (money, work, whatever) to non-profits is a desirable and rational use of those resources. Maybe it is, but I'd love to see someone really tackle that issue. I once heard Rush Limbaugh say something like "In 200 years, capitalism has saved more lives than thousands of years of charity." I generally dislike the man, but I found it hard to disagree with him there (actually I assume he was probably paraphrasing someone else). It seems to me that the wealth created through market economies has massively improved living standards unlike anything else. The technological, medical, social, and education advances that contribute to improved health and welfare are greatly accelerated by competition and increased wealth. Maybe there are areas where charity is more effective than markets, but I'd like to see someone make the argument. Even well-run and well-intentioned charities run the risk of creating dependency and inhibiting local markets. Could it be that the best use of your time and money is to create as much wealth as possible and keep that wealth circulating through the market (investing and spending)? (And perhaps contributing to lobbying and advocacy efforts that work to spread open markets.) If anyone knows of a good discussion of this question, please let me know.
So, the Anti-Corn Law League destroyed grain tariffs in Britain and permanently altered the public perception of tariffs in that country compared to the rest of the world (more of the British public correctly see tariffs as a way to screw over customers than as a way to protect domestic jobs). Abolition groups also seem like they should be mentioned, here. Those are just the two off the top of my head, but I'm not sure they fit "this kind" of approach. The first one suggests a "do the math" approach to helping people, but also a strong deontologist "this isn't fair!", and the second one seems mostly along the same lines. I don't think SIAI and such are that comparable to Garrison, but perhaps they are. I guess my questions in response are "can you be more specific by "this kind of approach" and "what are your standards for a 'major' problem?"

Another obvious suggestion:

  • If there isn't already a wiki for the cause that you are interested in helping, then consider starting one.

Most people reading this are probably well aware of the awesome power of wikis. LW's own wiki is a great example, and LW would be a whole lot less useful without it.

What we need is a wiki that lists all the people and groups who are working towards saving the world, what projects they are working on, and what resources they need in order to complete these projects. And each user of the wiki could create a page for thems... (read more)

Some people, when faced with a problem, say, "I know: I'll start a wiki!" Now they have two problems. I said something similar yesterday, and I have a short essay, Wikipedia And Other Wikis, about why forking off WP is a bad idea (a related bad idea). tl;dr: network effects are a bitch
If this is the answer, then the SIAI should simply conclude this in a paper. Or EY should write a new sequence concluding that supporting the SIAI is the rational choice if you want to save the world. I believe a wiki would just add to the confusion. A wiki is good as a work of reference or as a collaborative focal point for people working on a certain project. But when it comes to answering a specific question, a wiki might lead people astray. I'm still puzzled by the fact that saving the world is not much dealt with on Less Wrong. What would be a better way to exemplify rational choice than working out what to do if you want to save the world? On Less Wrong, rationality is an abstract concept that is seldom used to tackle real-life decisions.
Can we quantify that? What has it achieved?
The LW wiki has made it approximately one order of magnitude easier to find the best content from LW. You could try to quantify that by:

  • the time it takes to find a specific thing you're looking for
  • the probability of giving up before finding it
  • the probability that you wouldn't even have bothered looking if the information wasn't organized in a wiki
  • maybe more
Yeah, but in terms of actually having achieved more downstream subgoals, like getting more people familiar with rationality?

Thank you for this post! One thing:

  1. Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. Thousands of employers will match donations to qualified non-profits. When you can get free money, you should take it.

If GiveWell's cost-benefit calculations are remotely right, you should downplay matching donations even more than just making this item second-last. I fear that matching donations are so easy to think about that they will distract people from picking good cha... (read more)

I think you and Louie may be talking about two different kinds of matching donations. The GiveWell post is about an employer matching donations only to a specific charity. Some employers will hold this sort of pledge drive, particularly in the wake of an especially harmful natural disaster. However, many employers will match donations, up to a certain level, to any qualified (e.g., 501(c)(3)) charity; I believe one can find such employers by searching the database linked by Louie.
Upvoted for pointing out why people who I agree with were disagreeing with me.
I think if people are already here, it's more than safe to mention matching donation programs. It could actually really help motivate people. I know it helped me a lot in the past.

I once donated $3k (the limit of my previous employer's matching program) to local service charities in Austin, TX. The only reason I started investigating charitable giving in the first place was because I found the info about the matching program buried in the packet of info I got from HR when I was hired (which I got around to looking through 6 months after starting).

My goal at the time was barely altruistic. It was some mix of "Cool, I can get $3,000 in extra money! I just need to find something else besides myself that I care about." and "Wow, I work for a government defense contractor. I know what they will spend that $3,000 on if I don't find something better!". I don't think Less Wrong or Give Well existed at the time.

My search for a good cause probably ended prematurely, but it still marked the beginning of a search for something outside of myself that I cared about. Also, even though searching through information about giving to charity and strongly considering giving did almost nothing for me, actually giving that $6,000 changed everything about how I saw myself.
Oh, oops, we were talking about different things. I think you're right to mention matching donations (especially after hearing your anecdote), but I wonder if there's room for a warning like, "It's more important to pick the right charity than to get someone to match your donation. (Do both if you can, of course.)"
Sorry, I have to disagree on matching donations. By switching a matching donation from third-world aid to existential risk mitigation, you do double your impact.
Oh, we agree, I was just unclear about my objection. Fixed.

Don’t confuse what “feels good” with what actually helps the most

This. I can't overstate how often I find myself going with what feels good instead of actually doing the most to help. It's a horribly addictive habit.

Does this count as an entry to the $100 efficient charity challenge?

Enjoyed most of this, some worries about how far you're getting with point 8 (on giving now rather than later).

Give now (rather than later) - I’ve seen fascinating arguments that it might be possible to do more good by investing your money in the stock market for a long period of time and then giving all the proceeds to charity later. It’s an interesting strategy but it has a number of limitations. To name just two: 1) Not contributing to charity each year prevents you from taking advantage of the best tax planning strategy available to you. That tax-

... (read more)
I wrote much more about this point but decided to cut it down substantially since it was already disproportionately large compared to its value to my overall rhetorical goals. But here's some other material I wrote that didn't make it into the final draft of #8:

"I do agree that this helps you donate more dollars that you can credibly say came from you. But does it reliably increase total impact? It seems unlikely. For instance, imagine donating to a highly rated GiveWell charity that is vaccinating people against a communicable disease in Africa. The vaccines will be cheaper in the future, and if you invest well, your money should be worth more in the future too. More money, cheaper vaccines -- impact properly increased, right? But preventing the spread of that disease earlier with less money could easily have prevented more total occurrences of that disease. Most problems like disease, lack of education, poverty, environmental damage, or existential risk compound quickly while you sit on the sidelines. Does the particular disease or other problem you want to combat really spread slowly enough that you can overtake it with the power of compound interest? You should do the calculation yourself, but most of the problems I'm aware of become harder to solve faster than that. And this is definitely a bad strategy if the charity you're supporting is actually working on long-term solutions to the problems they're combating and not just producing a series of (noble but ultimately endless) band-aid outcomes. Solving the problem is entirely different from managing outcomes indefinitely, and that can drastically shift the balance in favor of giving less sooner rather than more later."

I also wrote a lot of poorly phrased notes (that I wasn't entirely happy with) to the effect that if you still thought this was a great idea... so much so that you actually planned to do it, you should definitely not execute it silently without communicating your plan to the non-profit you're expecti
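The "do the calculation yourself" advice above can be sketched in a few lines. This is a toy model with made-up rates (a 7% annual investment return vs. a problem whose cost of solving compounds at 10%), not real figures for any charity or disease; the point is only the comparison, not the specific numbers.

```python
def future_value(principal, rate, years):
    """Value of an amount compounding at a fixed annual rate."""
    return principal * (1 + rate) ** years

# Toy assumptions (illustrative only, not real data):
donation = 10_000        # dollars available to give today
market_return = 0.07     # assumed annual investment return
problem_growth = 0.10    # assumed annual growth in the cost of the problem
years = 20

# Dollars you could give after investing for 20 years:
give_later = future_value(donation, market_return, years)

# Dollars it would take in 20 years to buy the impact $10,000 buys today,
# if the problem compounds faster than your portfolio:
cost_later = future_value(donation, problem_growth, years)

# Whenever the problem's growth rate exceeds your return, giving now wins.
print(give_later < cost_later)  # True under these assumed rates
```

Swap in your own estimates for the two rates; the qualitative conclusion flips only when your expected return actually beats the rate at which the problem gets more expensive to solve.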

I'm uncomfortable with patching my moral intuition with math. Wouldn't that imply that we should be willing to use violence to shut down promising AGI labs who don't take friendliness concerns into consideration?

If your moral system leads you to do things that make your moral intuition queasy, you should question your moral system.

Biting bullets is an overly simple solution to moral dilemma. You find yourself making monsters without much effort.

Mmm, depends whether you are using "question" as a euphemism for "reject." Certainly, you should re-examine your explicit reasoning about ethics if the conclusions you reach conflict with many of your moral intuitions. However, you should also re-examine your moral intuitions when they fail to agree with your explicit reasoning about ethics. Otherwise, there would be very little point in conducting ethical analysis -- if your analysis can't ever validly prompt you to discard or ignore a moral intuition, then you may as well stop searching your conscience and just do whatever 'feels right' at any given moment. Sometimes your intuitions give way, and sometimes your formal reasoning gives way -- that's how you reach reflective equilibrium.

Ah, but is "don't make monsters" your most important moral objective? Suppose you had to become a monster in the eyes of your friends in order to save a village full of innocent children. Is it obvious that it would be wrong to become a monster in this sense?

Though I won't be curing AIDS, designing cheaper solar panels, or searching for the Higgs Boson, seeing as I haven't chosen a career in the sciences, I am preparing for law school which should put me in a career that fairly well optimizes my income, while giving me a chance to use some of the rational argument skills on this site. Also, I live in Kansas, which, if I prove good enough at law, could provide me good opportunities to be on the front line against religious ignorance and bigotry here in the states. It would be a dream of mine to be in court ag... (read more)

How sure are you of said optimization?

EDIT: an 11% drop in applications in 2011:
Honestly, I'm not that sure. I knew that there have been issues with law graduates finding jobs, but with the state of the economy the way it is, there are problems for graduates across the board, not just in law.

I'll be graduating this spring with degrees in political science and history. So I can try to find a job now, when the market for college graduates in general is similarly bad, and likely end up working a low-paying hourly office job, like customer support, or do some graduate work, like law school or a master's or PhD program in one of my fields. Though there is a glut of graduates and a paucity of jobs for master's and PhD graduates in my fields as well.

Eventually, the economic situation will sort itself out and jobs will return, and historically, law has been fairly lucrative. Hopefully this will happen in the next three years, but if I have to wait a few more years after graduation to start making big money, that's acceptable to increase the long-term odds that I will have a well-paying job.