CEA does not seem to be credibly high impact
I am grateful to Alexey Morgunov and Adam Casey for reviewing and commenting on an earlier draft of this post, and for pestering me into migrating the content from many emails into a somewhat coherent post.

Will Crouch has posted about the Centre for Effective Altruism (CEA) and, in a follow-up post, discussed questions in more detail. The general sense of the discussion on that post was that the arguments were convincing and that donating to CEA is a good idea. Recently, he visited Cambridge, primarily to discuss 80,000 Hours, and several Cambridge LWers spoke with him. These discussions caused a number of us to substantially downgrade our estimates of the effectiveness of CEA, and made our concerns more concrete.

We're aware that our kind often don't cooperate well, but we are concerned that at present CEA's projects are unlikely to cash out into large numbers of people changing their behaviour. Ultimately, we are concerned that the space for high-impact meta-charity is limited, and that if CEA is suboptimal this will have large opportunity costs. We want CEA to change the world, and would prefer that this happens quickly.

The key argument in favour of donating money to CEA, as presented by Will, was that by donating $1 to CEA you produce more than $1 in donations to the most effective charities. We present some apparent difficulties with this remaining true on the margin. We also present more general worries with CEA as an organisation under these headings:

- Transparency
- Cost effectiveness estimates
- Research
- 80,000 hours
  - Impact of 80,000 hours advice
  - Content of 80,000 hours advice
  - The 80,000 hours pledge
- Scope and Goals
  - Speed of growth
  - Ambition

Transparency

It is worrying how little of the key information about CEA is publicly available. This makes assessment hard. By contrast to GiveWell, CEA programs are not particularly open about where their money is spent, what their marginal goals are, or what they are doing internally. As pr