I have a fairly strong intuition that “if you don’t fund it, somebody else will” is more true than “if you don’t do it, somebody else will” so that this counter-consideration is outweighed. It’s important to note that many projects of high social value are the first of their kind, and that finding somebody else to execute such a project is highly nontrivial. I think that it’s also relevant that 114 billionaires have signed the Giving Pledge, committing to giving 50+% of their wealth away in their lifetimes.

On the other hand, the vast majority of people who want to do good in the world try to "do it" rather than "fund it" (hence why "Earning to Give" is considered a novel, controversial idea), which makes me think that “if you don’t do it, somebody else will” is more true than “if you don’t fund it, somebody else will”. Convincing other people to donate as much as you would have done in an EtG career is also highly nontrivial. And I think that those Giving Pledge stats are about as relevant as the fact that the nonprofit sector employs about 10 million people in the US.

(Still very glad to see this post on LW though, lest any of us should forget that Will's article, and probably many of the articles discussing EtG, were written for audiences quite different from LW! I especially liked the sections on Discrepancy in Earnings and Replaceability.)

"for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work." I would love to see more work done on this. However, I see "wanting to have more altruistic intentions" as part of a broader class of "wanting to act according to my ultimate/rational/long-term desires rather than my immediate desires", and that problem doesn't seem niche enough for members of our community to have a comparative advantage in making progress on it (I hope I am wrong). CFAR's work on Propagating Urges is a start, though I found that particular session relatively useless.

I'd also love to see more work done on historical analogues and more attention given to "Diffusion of Innovations" (h/t Jonas Muller).

On non-obviousness, the arc of history seems to me to bend somewhat towards EA, and it is unsurprising that a society's moral circle would expand, and that it would accept more demanding obligations, as its own circumstances become more cushy and its awareness of and power over the wider world increases. In other words, we've only just reached the point in history where a large group of people are sufficiently well-off, informed and powerful to be able to think about morality on such a massive scale (perhaps they even need to; think Maslow's Hierarchy of Needs), and EA is pretty superfluous until we need to think about morality on such a huge scale. (I would love to hear some thoughts/research on this last paragraph, as I was considering developing it into a blog post.)

The term "EA" is undoubtedly based on a form of total utilitarianism. Whatever the term means today, and whatever Wikipedia says (which, incidentally, weeatquince helped to write, though I can't remember if he wrote the part he is referring to), the motivation behind the creation of the term was the need for a much more palatable and slightly broader term for total utilitarianism.

I'd probably go, especially if people I know are going (went to my first LW meetup in London recently and couldn't shake the feeling that no one wanted to talk to me...)

I might go to the Fringe on Sunday 18th August and I'm busy Friday 23rd August, but otherwise free!

I think that the question that this article is trying to answer should have been made clearer, because there are very different answers to the questions:

  1. What impact has CEA produced so far?

  2. What impact has CEA produced per dollar invested so far?

  3. What impact has CEA produced per dollar invested so far, counting each volunteer hour as costing an average of $x?

  4. What impact would CEA produce if I gave them $1000?

  5. What impact would CEA produce if I gave them £1m?

I think that the most useful question to answer is probably #4 or #5, but many of the criticisms here seem to be along the lines of #1, identifying poor outputs without any reference to what the inputs were. For example, the comments on lack of transparency, poor research outputs and poor chapter growth could all just be because CEA was using its limited resources more effectively elsewhere. Indeed, if you think CEA should be more transparent etc., maybe you should give them some money so they've got resources to put into this ;)

This also goes for digs at Will for not having certain Fermi estimates to hand. In fact, I wouldn't be surprised if Ben (80k Executive Director) had these estimates and Will did not, since Ben is full-time at 80k while Will is trying to put no more than one day a week into CEA as a whole, last I heard. Ben also suggested to me recently that I come up with BOTE estimates for The Life You Can Save's (TLYCS) top potential activities, so he's clearly thinking in the right way.

The second main point I wanted to make is that I don't think LWers are CEA's target audience; it seems more likely to me that CEA wants to create a load of "dilettantes" rather than a few improved LW-types. So GWWC acts like most charities and simply announces on its website how much money has been pledged, rather than giving an estimate of how much money its members will probably actually donate, along with a complicated explanation of where that figure comes from. Come on, even the typical man on the street looks at a figure like that on a website and thinks "I bet much less will actually be donated"; it's not as if GWWC is deliberately trying to deceive people. And maybe 80k is focusing on the best way to market EtG, and is less concerned with coming up with detailed advice for people who already accept EtG as a baseline, because it's trying to bring lots of people on the fringes of the EA movement a bit closer, rather than improving a few existing hardcore EAs (which seems to be more the Leverage Research approach). That would make claims like "There is a strong sense in which they are being graded on their speed, with publication being the mediator of impact" just not the case. Maybe I'm projecting a bit, though, since TLYCS aims to turn a ton of potential EAs into EAs more than to improve the effectiveness of existing EAs.

A couple more little things:

  • The Peter Singer talk got about 150 people, which I think was poor. But then, I organised it, and I'm no longer part of CEA ;)

  • I got the impression that a lot of thought went into both versions of 80k's declaration.

Disclaimer 1: This is my first comment on LW, so apologies if I haven't got the swing of the conventions yet.

Disclaimer 2: I volunteer for TLYCS and have volunteered for GWWC in the past. There is a weak relationship between TLYCS and CEA at the moment, but since this post does not mention TLYCS, and since I know relatively little about GWWC/80k's recent operations, I am treating TLYCS as separate from CEA in this thread; please do not take me to be representing CEA.

Disclaimer 3: Since this article was just about CEA's weaknesses, though of course strengths exist, I in turn have only tried to deny some of the weaknesses, though of course weaknesses exist.