You need to add in the endowments of the colleges as well. The richest college at Cambridge (Trinity) has an endowment of about $1.5bn, whereas the richest college at Oxford has only about $300m.

What are the chances of becoming a billionaire, or of making $30m+, if you go to Harvard rather than another elite uni?

And then what about HBS rather than Harvard?

Agreed - Glassdoor is mainly designed to appeal to job seekers. They get their data by granting access only if you reveal your salary, so the salary data ends up tilted towards people who are seeking jobs.

There's also a sampling problem. Google has ~10,000 engineers, but probably only ~100 of them earn $1mn+. Even a large company normally only has a hundred or so responses, so even if respondents were sampled randomly, you'd expect only ~1 top earner in the sample.
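To put rough numbers on that, here's a minimal sketch of the sampling arithmetic; the headcounts are the approximate figures above, and the response count is my own assumption:

```python
# Rough sampling arithmetic for Glassdoor-style salary data.
# All figures are illustrative, not real company data.

engineers = 10_000   # assumed engineering headcount at a large company
top_earners = 100    # assumed number earning $1mn+
responses = 100      # assumed number of salary responses (my assumption)

top_earner_share = top_earners / engineers          # ~1%
expected_top_in_sample = responses * top_earner_share
print(expected_top_in_sample)  # -> 1.0, even with perfectly random sampling
```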

Hi Jonah,

Great posts.

I agree these figures show it's plausible that the value of donations from finance is significantly larger than the direct economic contribution of many jobs, though I see it as highly uncertain. When you're working in a highly socially valuable sector like research or some kinds of entrepreneurship, it seems to me that the two are roughly comparable, with big error bars.

However, I don't think this shows that earning to give is likely to be the path towards doing the most good. There are many careers that seem to offer influence over budgets significantly larger than what you could expect to donate. For instance, the average budget per employee at DfID is about $6mn per year, and you get similar figures at the World Bank and many major foundations. It seems possible to move this money into something as effective as cash transfers, or better. We've also just done an estimate of party politics showing that the expected budget influenced towards your preferred causes over a career is $1-80mn if you're an Oxford graduate, and that takes account of the chance of success.
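For concreteness, here's the kind of back-of-the-envelope calculation behind a budget-per-employee figure. Both inputs below are my own rough assumptions chosen to reproduce a ~$6mn number, not figures from the comment:

```python
# Back-of-the-envelope budget-per-employee calculation.
# Budget and headcount are assumed values for illustration only.

total_budget = 12_000_000_000   # assumed annual aid budget, USD
employees = 2_000               # assumed headcount

budget_per_employee = total_budget / employees
print(f"${budget_per_employee:,.0f} per employee per year")  # -> $6,000,000
```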

You'd expect there to be less competition to influence the budgets of foundations for the better than to earn money, so these figures make sense.

(And then there's all the meta things, like persuading people to do earning to give :) )

One point to note with Carl's 30x figure - that's only when comparing the short-run welfare impact of a GDP boost with a transfer to GiveDirectly. If you also care about the long-run effects, then it becomes much less clear.

Glassdoor rarely properly includes the top-paid employees (those people don't fill out the survey). According to Goldman's own figures, mean compensation per employee (across all employees) is ~$400k, and it'll be significantly higher if you're in the front office. Your expected earnings from a Goldman job are roughly the mean earnings multiplied by the expected number of years you'll stay at the firm.
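As a rough sketch of that heuristic: the mean compensation figure is the one quoted above, while the tenure is an assumption of mine for illustration:

```python
# Expected earnings ~= mean compensation per employee x expected years at the firm.
# Mean comp (~$400k) is the figure quoted above; tenure is an assumed value.

mean_compensation = 400_000   # USD per year, across all employees
expected_years = 5            # hypothetical expected tenure at the firm

expected_total = mean_compensation * expected_years
print(f"${expected_total:,}")  # -> $2,000,000
```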

I think both research and advocacy (both to governments and among individuals) are highly important, and it's very unclear which is more important at the margin.

It's too simple to say basic research is more important, because advocacy could lead to hugely increased funding for basic research.

We've collated a list of all the approaches that seem to be on the table in the effective altruism community for improving the long-run future. There are some other options, including funding GiveWell and GCRI. This doc also explains a little more of the reasoning behind the approaches. If you'd like more detail on how 80k might help reduce the risk of extinction, drop me an email at ben@80000hours.org.

In general, I think the question of how best to improve the long-run future is highly uncertain but has high value of information, so the most important activities are: (i) more prioritisation research, and (ii) building flexible capacity that can act on whatever turns out to be best in the future.

MIRI, FHI, GW, 80k, CEA, CFAR, GCRI all aim to further these causes, and are mutually supporting, so are particularly hard to disentangle. My guess is that if you buy the basic picture, the key issues will be things like 'which organisation has the most pressing room for more funding at the moment?' rather than questions about the value of the particular strategies.

Another option would be to fund research into which org can best use donations. There's a chance this could be commissioned through CEA, though we'll need to think of some ways to reduce bias!

Disclaimer: I'm the Executive Director of 80,000 Hours, which is part of CEA.

Note that Toby is a trustee of CEA and did most of his government consulting through GWWC, not the FHI, so it's not clear that FHI wins out in terms of influence over government.

Moreover, if your concern is influence over government, CEA could still beat FHI (even if FHI is doing very high-level advocacy) by acting as a multiplier on the efforts of FHI and similar orgs: $1 donated to CEA could lead to more than $1 of financial or human capital delivered to the FHI or similar. I'm not claiming this is happening, just pointing out that it's too simple to say FHI wins out because they're doing some really good advocacy.

Disclaimer: I'm the Executive Director of 80,000 Hours, which is part of CEA.

Read the response to 'poor cause choices' and 'inconsistent attitude toward rigor' as: "while some EAs might be donating without enough thought, lots of others are investing most of their resources in doing more research".

At 80k, we often think about how to fix the monoculture problem. We haven't come up with great solutions yet, though.

I also argued that the decline in quality of the FB group is not obviously important. And if such a decline is difficult to avoid, yet many movements started by a small group of smart people nevertheless go on to achieve a lot, that's further evidence that it's not important.

Hi Ben,

Thanks for the post. I think this is an important discussion, though I'm also sympathetic to Nick's comment that a significant amount of extra self-reflection is not the most important thing for EA's success.

I just wanted to flag that I think there are attempts to deal with some of these issues, and explain why I think some of these issues are not a problem.

Philosophical difficulties

Effective altruism was founded by philosophers, so I think there's enough effort going into this, including population ethics (see Nick's comment).

Poor cause choices

There's a lot being done on this front:

  • GiveWell is running GiveWell Labs, and Holden has said he expects to find better donation opportunities outside of global health in the next few years
  • CEA is an advocate of further cause prioritisation research, and is about to hire Owen Cotton-Barratt to work full-time on it.
  • 80k is about to release a list of recommended causes, which will not have global health at the top.

Non-obviousness

I think the more useful framing of this problem is 'what's the competitive advantage that has let us come up with these ideas rather than anyone else?' I think more work on this question would be useful. This also deals with the efficient markets problem. If you don't have an answer to this question, I agree you should be worried.

I've thought about it in the context of 80k, and have some ideas (unfortunately I haven't had time to write about them publicly). I now think the bigger priority is just to try out 80k and see how well it works. More generally, we try to take our disagreements with elite common sense very seriously.

I don't think recency is a problem. It seems reasonable that EA could only have developed after we had things like the internet, good-quality trial data on different interventions, and Singer's pond argument (which required a certain level of global inequality and globalisation), all of which are relatively recent.

Inconsistent attitude toward rigor

I think this is mainly because people use the best analysis that's out there, and the best analysis for charity is currently much more in-depth than it is for these other issues. We're trying to make progress on the other issues at 80k and CEA.

Poor psychological understanding

My impression is that people at CEA have worried about these problems quite a bit. At 80k, we try to work on this problem by highlighting members who are really trying rather than rationalising what they want, which we hope will encourage good norms. We'll also consider calling people out, but it can be a delicate issue!

Monoculture

I'm worried about this, but it's difficult to change. All we can do is make an active effort to reach out to new groups.

Community problems

I don't see the decline in quality of the FB group as a problem. EA was started by some of the smartest, most well-meaning people I have ever met. It's going to be almost impossible to avoid a decline in the quality of discussion as the circle widens.

I'll also push back against equating the community with the FB group. Other EA groups are making efforts to build better venues for the community, e.g. the EA Summit by Leverage. We don't even need a good FB group so long as there are other ways for people to form projects (e.g. by speaking to 80k's careers coaches) and get good information (e.g. by reading GiveWell's research).
