A conversation I had on Facebook last month, saved here so I can link back to it:


Anonymous: I want to give away some money. Who should I give it to? [...]

Note that I am familiar with the effective altruism movement and with givewell.org. (I think “GiveWell’s top charities” might be the right answer, and I am even curious how many people reading this would say that - I just don’t want to get comments starting with “Have you heard of…”.)

[...]

Buck Shlegeris: I think donor lotteries probably have higher EV than anything else you can do from your current epistemic state.

[...]

Rob Bensinger: I came here to recommend donor lotteries, but Buck already did it. I think most people should donate via donor lotteries. If you win the lottery and don't know where else to donate, the EA Funds are usually a better fallback option than GiveWell.

Ben Hoffman: Agreed on donor lotteries but not on EA funds.

Rob: Oh, interesting, why do you think GiveWell beats EA Funds?

Ben: They seem pretty similar in prospect, given the extent to which they seem to funge against each other (especially when you notice the level of coordination between GiveWell, Good Ventures, and Open Philanthropy Project), and the strong overlap in management. Basically it seems a bit odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies.

Ben: Here's my explanation of donor lotteries, which links to a couple others: http://benjaminrosshoffman.com/claim-explainer-returns-to-scale/

[...]

Ben: I would recommend either gifting it directly to individuals you think have done good work in the world, without restrictions, or reaching out to them directly and asking them where they think you should give it.

Facebook queries like these will select for charities that are memetically fit answers to "where should I give?". If you don't have time to track this sort of thing, then just keep rolling it over into donor lotteries until/unless you "win" enough money to make it worth your time to track it.

Ben: Another decent option: giving unsolicited monetary gifts to friends or acquaintances (people known to you personally, not via mass media) who seem like they're basically benevolent and competent, have high marginal value for money, and aren't asking for any.

Ben: Anyhow, the reason I'm not suggesting any specific charities should be pretty obvious. If you want specific recommendations from me for some reason you can reach out to me directly. Likewise for other people you expect to be following similar heuristics.



I tend to use a portion of my money on things like buying people flights to places where they can work on cool projects, helping with the cost of attending a CFAR workshop, and making it easy to run community events (e.g. I'll just buy a lot of good-quality snacks and maybe pizza for the event, so that the event isn't disrupted by food planning). I think that this is pretty good, but I'd be happy if there were a way to find out about more things people could do with money without asking them as much. I like it when people have Patreons I can fund, for example. But overall, I didn't know that most of the people who got money in the first (and only?) round of EA Grants had projects they were excited about.

Added: I don't otherwise donate to any organisations, except if it happens by one of the above methods.

What's the reason to think EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities? My guess would have been that increased donations to GiveWell's recommended charities would not cause many other donors (including Open Phil or Good Ventures) to give instead to orgs like those supported by the Long-Term Future, EA Community, or Animal Welfare EA Funds.

In particular, to me this seems in tension with Open Phil's last public writing on its current thinking about how much to give to GW recommendations versus these other cause areas ("world views" in Holden's terminology). In his January "Update on Cause Prioritization at Open Philanthropy," Holden wrote:

"We will probably recommend that a cluster of 'long-termist' buckets collectively receive the largest allocation: at least 50% of all available capital. . . .
We will likely recommend allocating something like 10% of available capital to a “straightforward charity” bucket (described more below), which will likely correspond to supporting GiveWell recommendations for the near future."

There are some slight complications here but overall it doesn't seem to me that Open Phil/GV's giving to long-termist areas is very sensitive to other donors' decisions about giving to GW's recommended charities. Contra Ben H, I therefore think it does currently make sense for donors to spend attention distinguishing between EA Funds and GW's recommendations.

For what it's worth, there might be a stronger case that EA Funds funges against long-termist/EA community/Animal welfare grants that Open Phil would otherwise make but I think that's actually an effect with substantially different consequences.

[Disclosure - I formerly worked at GiveWell and Open Phil but haven't worked there for over a year and I don't think anything in this comment is based on any specific inside information.]

[Edited to make my disclosure slightly more specific/nuanced.]

I don't think that statements like this are reliable guides to future behavior. GiveWell / Open Phil changes its strategic outlook from time to time in a way that's upstream of particular commitments. Even if Open Phil's claims about its strategic outlook are accurate representations of its current point of view, this point of view seems to change based on considerations not explicitly included. This is basically what I'd expect from a strategic actor following cluster thinking heuristics. In any case, despite Holden's apparent good-faith efforts to disclose the considerations motivating his actions, such disclosures don't really seem like they're high-precision guides to Open Phil's future actions, and in many cases they aren't the best explanation of present actions.

The actual causal factors behind allocation decisions by GiveWell and OpenPhil continue to be opaque to outsiders, including me, even though I too used to work there.

ETA: I too used to work at GW/OpenPhil

The actual causal factors behind allocation decisions by GiveWell and OpenPhil continue to be opaque to outsiders, [...]

You mean something other than the cost-effectiveness process and analysis from their website?

Yeah, I don't think someone doing independent, principled cost-effectiveness analysis would find that the top interventions were anything like the ones GiveWell recommends, or favor anything like the set of actions that GiveWell and OpenPhil take. I went into some detail in the posts I linked. The relevant standard here is whether a coherent model predicts their actions, taken as a whole, as the best actions to take, not whether any particular decision is defensible as better than some other particular proposed alternative. (Basically anything can be justified by the latter standard; it's just a clever arguing contest.)

I'm confused about how you distinguish predicting from clever arguing if all our data is in the past?

Suppose I have a radish, a carrot, and a prune. I eat the carrot. Someone asks me why. I respond that I ate the carrot because I like sweet foods, and the carrot was sweeter than the radish. They might reasonably be skeptical of this explanation, since if they'd tried to predict my behavior using the implied decision rule, they'd have predicted that I'd eat the prune rather than the carrot.
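
(To make that test concrete: a minimal sketch, with sweetness scores assumed purely for illustration, of checking whether the stated decision rule would actually have predicted the observed choice.)

```python
# Assumed, illustrative sweetness scores; the only point is the shape of the test:
# apply the claimed rule predictively and compare it to what was actually done.
sweetness = {"radish": 1, "carrot": 5, "prune": 9}
observed_choice = "carrot"

# Claimed rule: "I ate it because I like sweet foods."
predicted_choice = max(sweetness, key=sweetness.get)

if predicted_choice == observed_choice:
    print("The stated rule predicts the behavior.")
else:
    print(f"The stated rule predicts {predicted_choice!r}, not {observed_choice!r}; "
          "the explanation looks more like clever arguing than a predictive model.")
```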

I see you as arguing that GW/Open Phil might change its strategic outlook in the future and that their disclosures aren't high precision, so we can't rule out that (at some point in the future, or even today) giving to GW recommended charities could lead Open Phil to give more to orgs like those in the EA Funds.

That doesn't strike me as sufficient to argue that GW recommended charities funge so heavily against EA funds that it's "odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies."

Here's a potentially more specific way to get at what I mean.

Let's say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund in expectation have a much better impact on the long-term future than GW recommended charities. In particular, let's say she believes that (absent funging) giving $1 to the EA Long Term Future Fund would be as valuable as giving $100 to GW recommended charities.

You're saying that she should reduce her estimate because Open Phil may change its strategy, or because the blog post may be an imprecise guide to Open Phil's strategy, so there's some probability that giving $1 to GW recommended charities could cause Open Phil to reallocate some money from GW recommended charities toward the orgs funded by the Long Term Future Fund.

In expectation, how much money do you think is reallocated from GW recommended charities toward orgs like those funded by the Long Term Future Fund for every $1 given to GW recommended charities? In other words, by what percent should this person adjust down their estimate of the difference in effectiveness?

Personally, I'd guess it's lower than 15% and I'd be quite surprised to hear you say you think it's as high as 33%. This would still leave a difference that easily clears the bar for "large enough to pay attention to."
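
To make the adjustment arithmetic concrete, here is a minimal sketch; all the numbers are assumed for illustration (the 100:1 ratio from the hypothetical above, plus the 15% and 33% funging rates just mentioned):

```python
# Toy model of the funging adjustment. Values are in arbitrary "impact units"
# per dollar and are assumed for illustration only.
value_ltf = 100.0  # assumed value per $1 to the Long Term Future Fund
value_gw = 1.0     # assumed value per $1 to GW recommended charities

for funging_rate in (0.0, 0.15, 0.33):
    # funging_rate: expected dollars reallocated from GW charities toward
    # LTF-like orgs per extra dollar the donor gives to GW charities.
    effective_gw = value_gw + funging_rate * (value_ltf - value_gw)
    gap = value_ltf - effective_gw  # remaining effectiveness difference
    shrinkage = 1 - gap / (value_ltf - value_gw)
    print(f"funging {funging_rate:.0%}: gap shrinks by {shrinkage:.0%}; "
          f"LTF fund still ~{value_ltf / effective_gw:.1f}x GW per dollar")
```

In this toy model the effectiveness gap shrinks by exactly the funging fraction, so even 33% funging would leave roughly a 3x per-dollar difference, which still clears the bar for "large enough to pay attention to."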

Fwiw, to the extent that donors to GW are getting funged, I think it's much more likely that they are funging with other developing-world interventions (e.g. one recommended org hits diminishing returns, so funding already targeted toward developing-world interventions goes to a different developing-world health org instead).

I'm guessing that you have other objections to EA Funds (some of which I think are expressed in the posts you linked although I haven't had a chance to reread them). Is it possible that funging with GW top charities isn't really your true objection?

Update: Nick's recent comment on the EA Forum sure suggests there is a high level of funging, though maybe not 100%, and that giving a very large amount of money to EA Funds may to some extent cause him to redirect his attention from allocating Open Phil money to allocating EA Funds money. (This seems basically reasonable on Nick's part.) So it's not obvious that an extra dollar of giving to EA Funds corresponds to anything like an extra dollar of spending within that focus area.

Overall I expect *lots* of things like that, not just in the areas where people have asked questions publicly.

I don't understand why this is evidence that "EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities", which was Howie's original question. It seems like evidence that donations to OpenPhil (which afaik cannot be made by individual donors) funge against donations to the long-term future EA fund.

The definitions of and boundaries between Open Phil, GiveWell, and Good Ventures, as financial or decisionmaking entities, are not clear.

I'd say that if you're competent to make a judgement like that, you're already a sufficiently high-information donor that abstractions like "EA Funds" are kind of irrelevant. For instance, by that point you probably know who Nick Beckstead is, and have an opinion about whether he seems like he knows more than you about what to do, and to what extent the intermediation of the "EA Funds" mechanism and need for public accountability might increase or reduce the benefits of his information advantage.

If you use the "EA Funds" abstraction, then you're treating giving money to the EA Long Term Future Fund managed by Nick Beckstead as the same sort of action as giving money to the EA Global Development fund managed by Elie Hassenfeld (which has largely given to GiveWell top charities). This seems obviously ridiculous to me if you have fine-grained enough opinions to have an opinion about which org's priorities make more sense, and insofar as it doesn't to you I'd like to hear why.

This doesn't look to me like an argument that there is so much funging between EA Funds and GiveWell recommended charities that it's odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, placing lots of weight on the well-being of animals) it doesn't seem like there's a decision-relevant amount of funging between GiveWell recommendations and the EA Fund they would choose. Do we disagree about that?

I guess I interpreted Rob's statement that "the EA Funds are usually a better fallback option than GiveWell" as shorthand for "the EA Fund relevant to your values is in expectation a better fallback option than GiveWell." "The EA Fund relevant to your values" does seem like a useful abstraction to me.

I am curious about the reasoning behind "giving unsolicited monetary gifts to friends or acquaintances (people known to you personally, not via mass media) who seem like they're basically benevolent and competent, have high marginal value for money, and aren't asking for any." This doesn't seem like a very EA recommendation -- anybody I know personally is almost certainly among the richest individuals in the world, and even if they have the highest marginal value for money of anybody I know, it's probably still quite low relative to e.g. recipients of GiveDirectly grants. Right?

(I'm asking more to understand the reasoning behind the suggestion, and whether Ben feels this is indeed an "EA" recommendation -- not to challenge that this may well be a great thing to do, either way.)

I'm curious about the reasoning behind that statement, too.

This suggestion would unnecessarily concentrate donations among people with existing social connections to one another, no? I don't expect that I personally know the world's highest-leverage people. Even if I know some of them, I expect that organizations that dedicate resources to finding high-leverage people or opportunities (GiveWell, EA Funds, etc.) will fund opportunities with a better expected value than those that happen to be in front of me.

Is the reasoning here that those organizations are likely to miss the opportunities that happen to be in front of me personally? Or that sharing resources in local social communities strengthens them in a way that has particularly large benefits? Or that you've more carefully selected the people you have social connections to, such that they are likely to be overlooked-yet-high-leverage?

(I think I'm coming from a slightly more sceptical starting point than gwillen, but also feel like I could be missing something important here.)

I'm not sure I understand exactly what Ben's proposing, and I posted Ben's view here as a discussion-starter (because I want to see it evaluated), rather than as an endorsement.

(I should also note explicitly that I'm not writing this on MIRI's behalf or trying to make any statement about MIRI's current room for more funding; and I should mention that Open Phil is MIRI's largest contributor.)

But if I had said something like what Ben said, the version of the claim I'd be making is:

  • The primary goal is still to maximize long-term, large-scale welfare, not to improve your friends' lives as an end in itself. But if your friends are in the EA community, or in some other community that tends to do really important high-value things, then personal financial constraints will overlap a lot with "constraints on my ability to start a new high-altruistic-value project", "constraints on my ability to take 3 months off work to think about what new high-value projects I could start in the future", etc.
  • These personal constraints are often tougher to evaluate for bigger donors like Jaan Tallinn or Dustin Moskovitz (and the organizations they use to disburse funds, like BERI and the Open Philanthropy Project), awkward for those unusually heavily scrutinized donors to justify to onlookers, or demanding of too much evaluation time given opportunity costs. The funding gaps tend to be too small to be worth the time of bigger donors, while smaller donors are in a great position to cover these gaps, particularly if they're gaps affecting high-impact individuals the donor already knows really well.

Larger donors are in a great position to help provide large, stable long-term support to well-established projects; I take Ben to be arguing that the role of smaller donors should largely be to add enough slack to the system that high-altruistic-impact people can afford to do the early-stage work (brainstorming, experimenting with uncertain new ideas, taking time off to skill-build or retrain for a new kind of work, etc.) that will then sometimes spit out a well-established project later in the pipeline.

I take Paul Christiano's recent experiments with impact purchases, prizes, and researcher funding to be a special case of this approach to giving: rather than trying to find a well-established project to support, try to address value that's being lost early in the pipeline, by paying individuals to start new projects or by just giving no-strings donations to people who have a proven track record of doing really valuable things.

One effect of this is that you're incentivizing the good accomplishments/behaviors you're basing your donation decision on. A separate effect can be that you're removing constraints from people who find high-value projects inherently motivating and would spend time on them by default if they could; someone who's already sufficiently motivated by altruistic impact and doesn't need extra financial incentive may still be cash-constrained in what useful things they can spend their time on (or pay others to do, etc.).

This approach does introduce risk of bias. In principle, though, you can try to mitigate bias for this category of decision in the same way you'd try to mitigate bias for a direct donation to a philanthropic organization. E.g., ask third parties to check your reasoning, deliberately ignore opportunities where you're wary of your own motivations, or simply give the money to someone you trust a lot to do the donating on your behalf.

This seems like a good representation of a large portion of my reasons.

I basically expect people without perceived slack to be destroying value whenever they're engaged in sufficiently high-level intellectual work. If you believe that people in the developed world do in fact wield disproportionate power in the form of money (which is the usual justification for wealth transfers to the developing world poor), then improving the decisionmaking slack of those people seems like an extremely high-leverage intervention. This works for the same reason that real tenure was a good idea, and for the same reason Tocqueville was worried about the destruction of hereditary aristocracy and unaccountable institutions more generally.

For more on the incentive effect, see Robin Hanson's argument for prizes over grants, which is related to the argument for impact certificates (and my argument for something simpler).

See Carl Shulman's Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation for a more thorough discussion of a few of these points (though the examples Carl cites to support his conclusion look more like "provide very early funding to new organizations" than like Ben's particular description).