There are people out there who want to do good in the world, but don't know how.

Maybe you are one of them.

Maybe you kind of feel that you should be into the "saving the world" stuff but aren't quite sure if it's for you. You'd have to be some kind of saint, right? That doesn't sound like you.

Maybe you really do feel it's you, but don't know where to start. You've read the "How to Save the World" guide and your reaction is, ok, I get it, now where do I start? A plan that starts "first, change your entire life" somehow doesn't sound like a very good plan.

All the guides on how to save the world, all the advice, all the essays on why cooperation is so hard - everything I've read so far has missed one fundamental point.

If I could put it into words, it would be this:

AAAAAAAAAAAGGGHH WTF CRAP WHERE DO I START EEK BLURFBL

If that's your reaction then you're halfway there. That's what you get when you finally grasp how much pointless pain, misery, risk, death there is in the world; just how much good could be done if everyone would get their act together; just how little anyone seems to care.

If you're still reading, then maybe this is you. A little bit.

And I want to help you.

How will I help you? That's the easy part. I'll start a community of aspiring rationalist do-gooders. If I can, I'll start it right here in the comments section of this post. If anything about this post speaks to you, let me know. At this point I just want to know whether there's anybody out there.

And what then? I'll listen to people's opinions, feelings and concerns. I'll post about my worldview and invite people to criticize, attack, tear it apart. Because it's not my worldview I care about. I care about making the world better. I have something to protect.

The posts will mainly be about what I don't see enough of on Less Wrong. About reconciling being rational with being human. Posts that encourage doing rather than thinking. I've had enough ideas that I can commit to writing 20 discussion posts over a reasonable timescale, although some might be quite short - just single ideas.

Someone mentioned there should be a "saving the world wiki". That sounds like a great idea and I'm sure that setting one up would be well within my power if someone else doesn't get around to it first.

But how I intend to help you is not the important part. The important part is why.

To answer that I'll need to take a couple of steps back.

Since basically forever, I've had vague, guilt-motivated feelings that I ought to be good. I ought to work towards making the world the place I wished it would be. I knew that others appeared to do good for greedy or selfish reasons; I wasn't like that. I wasn't going to do it for personal gain.

If everyone did their bit, then things would be great. So I wanted to do my bit.

I wanted to privately, secretively, give a hell of a lot of money to a good charity - so that I would be doing good, and would know I wasn't doing it for status or glory.

I started small. I gave small amounts to some big-name charities, charities I could be fairly sure would be doing something right. That went on for about a year, with not much given in total - I was still building up confidence.

And then I heard about GiveWell. And I stopped giving. Entirely.

WHY??? I can't really give a reason. But something just didn't seem right to me. People who talked about GiveWell also tended to mention that the best policy was to give only to the charity listed at the top. And that didn't seem right either. I couldn't argue with the maths, but it went against what I'd been doing up until that point and something about that didn't seem right.

Also, I hadn't heard of GiveWell or any of the charities they listed. How could I trust any of them? And yet how could I give to anyone else if these charities were so much more effective? Big akrasia time.

It took a while to sink in. But when it did, I realised that my life so far had mostly been a waste of time. I'd earned some money, but I had no real goals or ambitions. And yet, why should I care if my life so far had been wasted? What I had done in the past was irrelevant to what I intended to do in the future. I knew what my goal was now and from that a whole lot became clear.

One thing mattered most of all. If I was to be truly virtuous, altruistic, world-changing, then I shouldn't deny myself status or make financial sacrifices; I should be completely indifferent to those things. And from that the plan became clear: the best way to save the world would be to persuade other people to do it for me. I'm still not entirely sure why they're not already doing it, but I will use the typical mind prior and assume that, for some at least, it's for the same reasons as me. They're confused. And that means that to carry out my plan I won't need to manipulate anyone into carrying out my wishes, but simply help them carry out their own.

I could say a lot more and I will, but for now I just want to know. Who will be my ally?


The problems of the world are perpetuated because the human race is perpetuated. There is a huge difference between those forms of action which consist of rearranging the pieces already on the chessboard of the human condition, and those forms of action which consist of changing the players, the rules, and the battlefield on which the game of life is played; and the way one should approach these two topics is completely different.

With respect to the first type of action, wanting to do good without concern for a personal payoff is only the relevant attitude for small-scale, interpersonal interactions. If you're hoping to save the world, either alone or in the company of your friends and colleagues, then you have a different problem: almost certainly, there is something missing or something unrealistic in your idea of how the world works. You say it yourself: why aren't other people already saving the world for you? So let's think about this.

But first, I want to emphasize: I am not talking about topics like life extension or friendly AI. If you are wondering why the human race hasn't made Ray Kurzweil president of the world and why health care debates don't include national cryonics programs, that belongs in Part Two of this comment.

So, let's consider some of the reasons why people aren't already saving the world (more than they are, that is). First, most adults are trapped in the wage-slave black-hole of working full-time in order to stay alive. Second, human survival requires a division of labor which ensures that most adult lives must revolve around performance of narrow economic functions in this way. Third, people actually enjoy domination and luxury, victory in conflict and selfish sensate pleasure, and these are zero-sum pursuits. Fourth, every human life is blighted by pain, disappointment, ageing, and death, and early hopes turn out to have been just hopes, not facts about what is possible. Fifth, through a combination of limited capacity and natural inclination, people tend to care most about a small subset of all human beings: family, friends, subculture, nation.

Now let us consider some factors peculiar to the situation of the person or culture which does want to save the world. Here we should distinguish between world-saviors who have identified an enemy - an enemy class of people, not an impersonal enemy like "ignorance" or "poverty" - and those who have a spirit of general benevolence which doesn't blame anyone in particular. Human beings are well-adapted for hunting and so fighting a personalized enemy comes naturally. I have little to say about this much more common type of world-savior, except that they are probably part of the problem, especially from the perspective of the would-be purveyor of generalized benevolence. I have already given some reasons why the latter sort of world-savior is rare - why most people don't have the energy, time, attitude or inclination to be like that - so what can be said about the people who do?

First, they are probably better off than most other people, either through material standard of living or perhaps early quality of life. For them, that way of being seems natural, and so it's also natural to imagine that everyone could easily live like that.

Second, here we are often talking about people from the first world who want to make a difference in the third world. Well, there probably is considerable scope for improvement of life in that regard, and it's happening as the mundane innovations which make the difference between medieval daily life and modern daily life percolate throughout the world. (Though let us also remember that specifically modern annoyances and pathologies are thereby spreading as well.) What I see here most of all is massive political naivete of various forms. Altruists want to improve health or human rights in countries that they know almost nothing about, even though their history and their internal politics will be decisive on the big scales where the altruists want to make a difference. If you come from outside, you are likely to be either an ineffectual meddler or a useful tool for someone older, smarter, and more knowledgeable than you. Maybe sometimes that person or faction that uses you would actually be the faction that you would want to back, if you knew all the facts; but most outsiders just won't know those facts.

So I think that, if your aim is to do conventional (non-trans-human) world-improving (I won't call it world-saving), the attitude you should aim for is not one of being indifferent to personal gain; the attitude you should aim for is one of not expecting to make a big difference. Because most likely, you won't be making a big difference, or at least not a difference that is unambiguously an improvement. It's not hard to be part of something bigger - there are plenty of movements and causes in the world - but narrowness of personal experience and the sheer complexity of life tend to make it very difficult to tally up the gains and losses associated with any genuinely big change.

Feel free to criticize or dismiss what I've said, but perhaps you will feel compelled to agree with one point. If it's a mystery to you why the world is so bad, then something is lacking in your own understanding, and that places in doubt the effectiveness of any remedial action you may be planning. You may find that something or other is just so awful that you feel compelled to get involved, take a stand, make a difference, even though you're uncertain about the consequences - that there just isn't time to achieve a cold lucid understanding. That also is true; it's been said that life can only be understood in retrospect, but has to be lived forwards. Nonetheless, if you do rush into something, don't be surprised if you are surprised about what happens (or what doesn't happen), and end up being just another entry in the world's long log of follies and tragedies, rather than being the author of a genuine advance in the human condition.

Now for Part Two. We talk about these topics endlessly, so I will try to be brief. Here there is greater scope for changing the human condition, not just because ultratechnology might give us superlongevity or superintelligence, but because human nature itself here becomes changeable. But it's going to be a lot easier to make something subhuman or inhuman than it is to make something better than human.

The psychology of the individual response to the prospect of a singularity deserves to be more explored. A lot of it has nothing to do with altruism, and is just about an elemental desire for survival and freedom. It can also become a rationale for daydreams and fantasy. One really unexplored psychological aspect has to do with singularity-inspired altruism. Most committed detractors of transhuman futurism can barely acknowledge that there is such a thing - it's all power-tripping and infantile fantasy as far as they are concerned. However, it's clear that quite a few people have had a sublime personal vision of utopia achieved because (for example) no-one has to die any more, or because scarcity has been abolished. Unrealized utopian dreams are as old as history; it's a type of fantasy - an unselfish fantasy rather than a selfish one, but still a fantasy. Historically, most such visions have revolved around the existence of some wholly other reality in which all the negative features of this one are not present and are even made up for. More recently, we also have social utopianisms, which usually contain some element of psychological unrealism.

But now, in this era when natural and psychological causality are being decoded, and when the world and the brain might be reengineered from the atoms on up, it seems like you really could adjust the world to human beings, and human beings to the world, and create something utopian by its own standards. It's this sense of technological power and possibility which is the new factor in the otherwise familiar psychology of utopian hope. If it ever takes a form which is shared by thousands or millions of people, then we might have the interesting spectacle of typical pre-singularity social phenomena - politics, war, nationalism - informed by some of these new futurist notions. But the whole premise of Part One of this comment is that none of these pre-singularity activities can really "create utopia" in themselves. So it's best to view them a little coldly - perhaps sympathetically, if they aren't wrongheaded - but still, they have at best a tactical significance, in the way that they shape how things will play out when we really and truly start to have rejuvenation technology, nanotechnology, and autonomous artificial intelligence. Those (and some others) are the factors which really could change the human condition, easily for the worse, possibly for the radical betterment that altruist utopians have always desired.

I don't feel equal to the task of providing definitive advice to the person who aspires to make the right sort of difference to this change, the truly big change. But it's probably unhealthy to become too dreamy about it, and it's certainly unrealistic to start treating it as a foregone conclusion, no matter how much happier that might make you. Also, it's important to get in touch with your own inner selfishness, and try to understand to what extent you are motivated just by the desire not to die, and not specifically by the desire to optimize the posthuman galactic future. And finally, don't ever expect the majority of pre-singularity humanity to unite behind you as one. Up until the end, however things play out, you and your cultural and political allies will only be one faction amongst many in a society which is still short-sighted by your standards, because it is preoccupied with the day-to-day crisis management arising from the collective political life of a society of millions of people.

This should be a top level post.

One of the best things for the third world has been the cell phone. What could people do to improve the odds of the next cell phone being invented and promulgated?


5 minutes of thinking yielded only one "next cell phone" candidate: an education system that actually works.

I'd need to research this, but my ignorant guess is that most funding for education in the third world has been aimed at making it more closely resemble first world education. If people's pet theories about education being broken are correct then we should instead be researching education systems that work, especially those that work given constrained resources. And then fund those.

Not sure if this is what you were getting at, but it at least seemed worth jotting down.

If you're specifically interested in education, the shortest route might be the Khan Academy-- whose effect is amplified by cell phones.

Note that I didn't say "What's the next cell phone?", I said "what improves the odds of the next cell phone?"

Math and/or science and/or engineering research? Free markets? Invent a lot of stuff rather than trying to identify the right thing in advance?

To my mind, the most interesting thing about cell phones is that they increased individual capacity in first world countries by making communication easier, but they increased individual capacity much more in third world countries because cell phones require less infrastructure than landlines.

Buckminster Fuller's idea of ephemeralization (doing more with much less) might be a useful clue.

The other clue might be that people in third world countries may need to have their own capacities increased more than they need help-- they have enough intelligence and initiative, they just need better tools.

Yes, I realise that I was interpreting your question at the wrong level. My 5 minutes of thinking were fairly unfocused this time.

My answer to "what improves the odds of the next cell phone" would of course be "create a thriving community of rationalists dedicated to self-improvement and making the world better". If you're asking what I'd do other than that then it's a good question I'd need to think about.

If we had a huge community of those rationalists, what more would we need?

5 minutes of thinking yielded only one "next cell phone" candidate: an education system that actually works.

Actually, I think cell phones will do that. They don't yet, but as smartphones get cheaper and infrastructure improves, internet access will become a human universal; and at the same time, there are projects like Khan Academy that turn internet access into education. There is still lots of work to be done - the phones themselves are still too expensive, a lot of countries don't have affordable wireless internet service fast enough for educational videos, and there aren't enough good videos or translations into enough languages. These developments are probably inevitable, but they can also be sped up.

OK I'll answer your actual question this time.

It's difficult to know which idea is the correct one as I don't know what the limiting factor is; why more cell phone-level inventions aren't already occurring.

Hypotheses are: lack of motivation, lack of ideas, lack of engineering competence, too much risk/startup capital.

For "motivation" the obvious suggestion is some sort of X Prize.

For "ideas" my suggestion would be to develop a speculative fiction subculture, with a utopian/awesomeness/practicality focus. See what ideas pop out of the stories that might actually work.

For "engineering competence" we're back to education again.

For "business risk" all I could think of was a sort of simulated market ecosystem, where people are modeled as agents seeking health, status and other goals and where new products aren't introduced into the real marketplace until they show reasonable probability of success in the simulated one.

Another angle about getting in touch with your own inner selfishness.... I doubt that cell phones were invented by people who wanted to make the world a lot better. I suspect they felt personal irritation at being tethered to a location when they wanted to talk on a phone.

There was idealism attached to birth control earlier in the process, but there was still a common human desire to have sex without a high risk of producing children.

Tools for improving the world have to be attractive enough for people to want to use them. What's been getting on your nerves lately? Can you tell what you used to want, but you've gotten resigned to not having it?


The credential problem is one that I think could use some solving, and I have no idea where to start. People spend a tremendous amount on education-- and sometimes on "education", and a lot of it isn't spent on actually becoming more capable, it's spent on signalling that one is knowledgeable and conscientious enough (and possibly capable of learning quickly enough) to be worth hiring.

I'm not saying that everything spent on education is wasted, but a lot of it is, and capable people who can't afford education have their talents wasted.

It's possible that credentials are the wrong end of the problem to attack. People behave as though there's a capital shortage, which could also be expressed as a people surplus relative to capital. Maybe what we need is more capital.

Some of the Thiel fellows are working on combating the credential problem. Dale Stephens is the only one I know off the top of my head, but I think there might be others.

I like your thinking

Sorry, I didn't mean to imply that I was limiting my scope to non-trans-human world improvement. If my initial focus is on "conventional" causes then it's because I believe most of humanity isn't ready to tackle existential risk, AI and transhuman issues until we can tackle the problems we're having here and now.

I was also a little misleading in suggesting that the reasons the world was so bad are somehow mysterious to me. They were, but not so much any more - I may be completely wrong but at least I see the issue as no longer mysterious. This post was more about my personal journey than about where I've ended up. Inferential distance and all that.

If my initial focus is on "conventional" causes then it's because I believe most of humanity isn't ready to tackle existential risk, AI and transhuman issues until we can tackle the problems we're having here and now.

Suppose your objective was, not to do good, but to make discoveries in some branch of science; biology, for example. And suppose you said

"If my initial focus is on 'conventional' biological research then it's because I believe most of humanity isn't ready to research futuristic topics X, Y, Z until we can answer the questions we are already asking in biology."

This response would make no sense, because you don't conduct scientific research by getting "most of humanity" to do it. Specialized research is conducted by the very small minorities who have the means, motive and opportunity to do it.

Similarly, the only reason that the readiness of most of humanity to do something would matter to you, is either if you intend to have them doing it as well, or if you intend to seek their approval before doing it.

Seeking the approval of 7 billion people for something is a formula for never doing it. And trying to get even 1% of 7 billion people to do something ... that's 70 million people. Who ever gets 70 million people to do the same thing? The state, by forcing them to do it; giant corporations, by marketing a product to them; maybe extremely influential cultural institutions, like the Catholic church; that's about it.

Can you explain what logical relationship there is between "the degree to which one should focus on conventional causes" and "the degree to which most of humanity is ready to think about the Singularity"?


Thanks for this thoughtful reply. I'll outline my position briefly.

  1. The biological research example is not analogous because in biology, the necessary institutions and infrastructure already exist. In the field of effective world improvement this isn't the case. (If you convince me that current large charities and aid organizations are near-optimal then I'll update accordingly).

  2. My "most of humanity" comment was misleading and I apologize for this. I merely meant that I would be seeking to reach a lot of people outside the LW & "transhumanist" communities, as I may need to reach beyond these communities in order to find people with the right skills, goals, passions, etc. An organization with a purely transhumanist focus might seem off-putting to such outsiders.

  3. The payoff of creating a very influential cultural institution would be very great and so could be seen as worth the low probability of success.

[EDIT: I'm not sure I can provide a meaningful answer to your question. My current plan is essentially "a bit of the conventional stuff, a bit of the transhuman stuff" and I'm prepared to drop either strand if it turns out not to be useful. That's all.]


It sounds to me like you're suffering from the perfect being the enemy of the good. I guess my advice would be that you're worrying too much about optimizing for the best possible course of action and not simply pursuing one reasonable course of action.

If it makes you feel any better, you're a better person than I am at the moment, as I don't really care about saving the world (right now), just trying to save myself, by which I mean getting into grad school/finding a career.

For me, the personal terror of mediocrity and slacking is far more effective in driving me away from saving-the-world-type-activities than akrasia. I don't know. Maybe in a few years if I'm in a more stable situation I'll worry about saving the world.

[This comment is no longer endorsed by its author]

Isn't "optimizing for the best possible course of action and not simply pursuing one reasonable course of action" what instrumental rationality is all about?

"You're a better person than I am at the moment" doesn't make me feel any better, no. This is something I'll address in a later post.

Given your psychological situation, choosing to pursue one reasonable course of action may well be the best course of action. Maybe over time you'll get over your akrasia issues, but until then, the optimal decision is to do as much good as you can given those issues.

Instrumental rationality is about the best course of action, yes. But stressing over things so much that you can't achieve even the second- or third- or hundredth-best course of action isn't instrumentally rational either.

This is a very good point, and I'll be basing a later post around this theme.

Having said that, at the moment I don't feel that I have akrasia issues. But in my decision-making, I certainly have to consider the hypothesis that I have akrasia issues that I am in denial about.

And from that the plan became clear: the best way to save the world would be to persuade other people to do it for me.

I don't see why evangelism is likely to be your comparative advantage. I'd suggest the better alternative is probably to earn a crapload of money and then pay someone else to do the evangelism. That's certainly what my plan is.

It's worth a try at least - if it works then I've achieved my immediate goal and still have all the money.

Also, don't underestimate the subtle power of signaling: people may respond differently to "mercenary" evangelism versus "genuine" evangelism.

It's worth a try at least

Certainly. I suspect I wouldn't have blinked if you said 'an effective'.

if it works then I've achieved my immediate goal and still have all the money.

That's true. But you don't have the time and energy which constitute the opportunity cost. Those could, of course, have been spent making more money or getting laid. But who knows, often evangelizing a cause helps the latter goal rather effectively as well! Or so Robin would have me believe.

OK - "an effective".

What I meant was that, at the time, the optimal decision seemed to be to go ahead with this plan rather than invest time thinking of more plans. The initial cost was very cheap and even if it doesn't "go to plan" I'll still learn something.

All good thinking. :)

I know it's kind of taboo to say this here, but someone has to say it:

Your best bet is probably to just give as much money as possible to the SIAI.

Not taboo at all, certainly not under one of my posts.

You would be right if the people at SIAI were so much cleverer than me that I would have literally nothing to contribute to their cause except money. I don't believe this is the case.

Also, I trust them, but I don't yet trust them anything like 100%.

I'd say they probably are that much cleverer than you, that you do have things to contribute other than money, and that this fact does not stop you from doing even more if you ALSO donate some amount that doesn't too badly affect your ability to do other stuff.

You would be right if the people at SIAI were so much cleverer than me that I would have literally nothing to contribute to their cause except money. I don't believe this is the case.

I'm not exactly sure what you mean by cleverness, but the folks at SIAI probably have more expertise than you in the "saving the world" domain, at least for now, if your own activities thus far have been limited to donating. Of course, there may be things that you haven't told us yet.

But even if your expertise is currently limited in this particular domain, this does not mean that you won't be able to catch up, or even surpass the SIAI people at some point. But it might take a while. Are you aware of this, and are you ready for that kind of commitment?

Also, I trust them, but I don't yet trust them anything like 100%.

It sounds like you are not ruling out the possibility of trusting them 100% at some point. What are the necessary conditions that must be met for this to happen?

Assume the SIAI are expert at world-saving and OK at fundraising, and they are short of funds. Also assume that I am poor at both world-saving and fundraising. Then I have a comparative advantage in fundraising.

In other words, I should focus on improving my fundraising ability and then putting it into practice, letting SIAI get on with their world-saving speciality.
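
A quick worked example, with entirely made-up productivity numbers, of why this holds even if I'm worse than SIAI at both tasks in absolute terms:

```python
# Hypothetical output per hour of effort, in arbitrary "units of progress".
# These numbers are invented purely to illustrate comparative advantage.
siai = {"world_saving": 10.0, "fundraising": 4.0}  # expert / merely OK
me = {"world_saving": 0.5, "fundraising": 1.0}     # poor at both

# Opportunity cost of an hour spent fundraising, in world-saving units forgone:
siai_cost = siai["world_saving"] / siai["fundraising"]  # 2.5
my_cost = me["world_saving"] / me["fundraising"]        # 0.5

# My opportunity cost is lower, so total output across both of us is higher
# if I do the fundraising and SIAI sticks to world-saving.
assert my_cost < siai_cost
print(f"SIAI gives up {siai_cost} units per fundraising hour; I give up {my_cost}")
```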

Even if there are SIAI fundraising organizations already out there, I'm pretty sure I can apply the comparative advantage thing to them too.

And all this is under the assumption that there is nothing better I could do than raise funds for SIAI.

I think it's dangerous to become trapped in "market" and "specialization" ways of thinking in something as underfunded as this. It's not like there's a thriving, bustling saving-the-world market out there and the SIAI have shown themselves to be market leaders. It's more like they're the only show in town (at least according to LW wisdom; I'd be very interested to know about their "competitors").

Yes, I'm ready for some hard work. It's my life goal after all.

Sorry, I shouldn't have said "trusting 100%". The point I was trying to make is that I see "just give all your money to the SIAI" as a stopsign and I'm trying to see beyond it.

Nothing I've said should be taken as a non-endorsement of the SIAI. I think they're cute and awesome, and very likely to be getting some money from me soon.

I'd like to help!

Basically my current plan is to learn how to build rationalist communities and optimize fun. Fun is instrumental in drawing people into rationalist communities.

I think outreach is helpful because it draws in other people, who can act autonomously, intelligently, and using their own skill sets. On my robotics team, I've found that most vaguely complex tasks become easier (or even possible at all) when you work with other people.

My current strategy for doing that is making myself fun to be around, and finding people with goals that I want accomplished, and helping them with those. I'm less optimistic about the possibility of making people change goals, but it might actually be possible given enough time. Will investigate that further.

So then I start a community in DC, and then another at college.

After college, I'll try to do some mixture of consulting and engineering.

Consulting: Basically, this might make it possible to spread rationality among business people in important fields. I'm not totally sure if there's much marginal impact there, since they might already be rational enough. If that's true, I probably won't spend as much time on this. It also has the useful side effect of teaching me a variety of skills, and having a wider variety of contacts.

Engineering: I think this is where my comparative advantage is, based on my experience in leading my robotics team, and becoming a Dean's List Finalist. I would focus on creating technologies which improve the quality of life, which hopefully will be profitable. This gives me money for other projects, and donating to SIAI, and maybe SENS. On top of that, lots and lots of good things seem to happen as people get richer, such as a lower birth rate (I consider this a good thing until we live in a post-scarcity environment), lower infant mortality rate, longer life expectancy, etc.

Basically, acquire skills, money, and influence in ways that generate lots of positive externalities.

Of course, this is all contingent on my continuing to think this would be effective. Should more effective instrumental goals come to my awareness, I would switch to them.

We should talk some time.

That's a pretty well thought out set of medium-term goals! I'm impressed.

Thanks.

I think that rationality outreach is particularly useful in that it brings more people into helping. Those people can help increase and complement each other's rationality and skills. More people can work on more things, and improve on instrumental goals in order to make things even more effective.

Also, I believe that it can be pursued in ways that exclude other goals.

Just to clarify: do you mean it can be pursued in ways that don't exclude other goals?

A quick bit of meta:

Firstly, I need a name for my group: how about "Altruist Support Network"?

Secondly, I've been trying to answer everyone individually, but only to show that I'm listening. I'm not trying to harass people.

"The Archimedes Institute"

Based on his quote "Give me a lever and a place to stand and I can move the world".

I think this is a good summary of what you're trying to do -- finding levers that actually transform effort into utility and a base which provides enough support to be able to use your leverage.

I like! But it fails the Google test. http://www.permanent.com/ep-archi.htm

You asked, so:

That's what you get when you finally grasp how much pointless pain, misery, risk, death there is in the world; just how much good could be done if everyone would get their act together; just how little anyone seems to care.

Speaks to me greatly. I am often shocked at how little people care about big things, and how much they care about trivial ones.

Thanks for this comment. Some of what I want to write about is "harsh reality" stuff - I don't believe we can start fixing things until we understand how the world really works. But I must be careful to write it in a way that doesn't damage anyone's faith in humanity.

Not willing to fully commit without knowing more, but I'll listen.

There are people out there who want to do good in the world, but don't know how.

The how is what I'm interested in, not really the 'why.' This post may be useful to some people, but it's not really useful to me until you start focusing primarily on this.

Entirely reasonable. There'll be more "how" in later posts.

Why is continuing to donate as you did previously mutually exclusive with your evangelism plan?

They're not mutually exclusive. I just feel at the moment that I gain less utility from spending the money immediately than I expect to gain from receiving donation advice from the community (including "save it - a better cause might come along").

I don't expect others to emulate me here though. I guess I should add a "donate or save?" post to my list.

I'll help you save the world!

Awesome! For now you'll have to be patient: I'm going to be posting a whole bunch of stuff and I guess the main thing I'll want to know is "is this helping?"

In the middle term, one way to do good could be to support consumer advocacy by helping to develop rationalist organizations that try to make it so that people aren't swindled. (This is kind of what the skeptical community is about, though compared to this community they have a lot more to learn.)

"persuade other people to do it for me"? Don't you mean "persuade other people to do it with me"?

Other than that, this is an awesome post! I totally want to be your ally! :)

Congratulations on your altruism! If you really are as altruistic as you claim to be.

I'm the person who mentioned there should be a "saving the world wiki", by the way. The main thing that's stopping me from going ahead and starting it myself is that no one else expressed any interest in actually using this wiki if I created it.

Also, I've already made some previous attempts to do something like this, and they were pretty much complete failures. Costly failures. Costing lots of time and money.

(sorry for not noticing this post until now)

Please upvote this comment if you would have at least some use for a "saving the world wiki"

Downvote this comment if you want to balance out the karma from an upvote to the other comment.

a community of aspiring rationalist do-gooders.

I recommend Felicifia.org, a forum on Utilitarianism.

Thanks for the link! At a glance it looks like another Less Wrong, i.e. more of a talking shop and less geared towards action. But I'll check it out and introduce myself there.

I know this post was like 4 years ago but hey, it's never too late to change the world. Did you do the stuff you were hoping? I'm glad there are still good people in the world. Thanks for sharing, let me know if there's anything I can do to help :)

Declaring your intention to do good is an excellent way to start. However, I'd like to know what "good" means to you, and whether it reconciles with my conception of "good", before I formally declare my allegiance. I'm looking forward to hearing more in subsequent posts.

One possible path towards improving the world may be to identify people who have already accomplished that goal within their lifetimes, examine their approach, and possibly improve on it. Which people would meet this criterion for you?

You're right - I should clarify in my own mind what I mean by "good" and then post about it. But the more important point is that I'm not sure such differences matter. Allies don't have identical utility functions; they just have to be close enough that they benefit from cooperation. That's the really important thing I need to post about.

TBH I'm not sure which people meet your criteria; I'm not good at picking heroes or role models. I think your strategy is an interesting one, but it's certainly not the whole strategy.


Have you considered open source projects?

What are you good at?

I'm trying to develop an open source meta-project which I believe is the path to maximal utility for all (that I can plan). Here: Plan A. I have just recently become ready for collaborators. FAQ

I especially need programmers (Java or alternate), animators / producers of videos, illustrators, and everyone else now or later.

Brief Description. The project is centered around a serverless P2P application which provides a personal assistant offering peer-generated responses to user requests. The intent is to interface with personal manufacturing technologies in order to directly compare various means of meeting user requests, including commercial, home-manufacture, and barter, as well as considering alternative courses of action.
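
As a reader's guess at what that request/response flow might look like as a data model, here is a minimal sketch in Python. All of the names and the cost-based comparison are my own hypothetical illustration, not the project's actual design.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the described flow: a user request fans out to
# peers, who return options (commercial purchase, home manufacture, barter,
# or an alternative course of action) that can then be compared directly.

@dataclass
class UserRequest:
    description: str  # what the user wants, in plain language

@dataclass
class Option:
    kind: str     # e.g. "commercial", "home-manufacture", "barter", "alternative"
    cost: float   # estimated cost to the user, in some common unit
    details: str  # peer-supplied instructions, offer, or suggestion

@dataclass
class PeerResponses:
    request: UserRequest
    options: List[Option] = field(default_factory=list)

    def best_option(self) -> Option:
        # Naive comparison: pick the cheapest way to meet the request.
        # A real assistant would presumably weigh time, quality, and trust too.
        return min(self.options, key=lambda o: o.cost)
```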

The application incorporates project-guided user generated content as entertainment in order to promote open source projects. This content is being developed for additional formats.

I am planning a "formal" announcement but a few pieces are not yet ready.