The current issue of the Oxford Left Review has a debate between socialist Pete Mills and two 80,000 hours people, Ben Todd and Sebastian Farquhar: The Ethical Careers Debate, p4-9. I'm interested in it because I want to understand why people object to the ideas of 80,000 hours. A paraphrase:

Todd and Farquhar
Choose your career to most improve the world. Focus on the consequences of your decisions.
Mills
80,000 hours says you must work in a high-paying but world-destroying career so you can give more money away. Then "your interests are aligned with the interests of capital" and you can't use political means to improve the world because that endangers your career.
Todd and Farquhar
Professional philanthropy is one option, but so are research and advocacy. Even if you go the high-paying career route, you could be a doctor. Even if you take a people-harming but well-paying job you're just replacing someone else who would do it instead. Engels was a professional philanthropist, funding Marx's research.
"80k makes much of replaceability: 'the job will exist whatever you do.' This is stronger than the claim that someone else will become a banker; rather, it states that there will always be bankers, that there will always be exploitation." Engles took on too much by trying to be both a professional philanthropist and an activist which drove him to depression and illness.
Todd and Farquhar
Campaigning might be better than professional philanthropy, though you should consider whether you do better to get a well-paying job and fund multiple people to campaign. Replaceability means that a given job will exist whether you take it or not, but "there might be some things you could do that would cause the job to cease to exist; for instance, by campaigning against banking". "Even if you believe capitalism is one of the world's greatest problems, you shouldn't make the seductive inference that you should devote your energies to fighting it. Rather, you should work on the cause that enables you to make the biggest difference. There may be other very big problems which are more tractable."
"The language of probability will always fail to capture the possibility of system change. What was the expected value of the civil rights movement, or the campaign for universal suffrage, or anticolonial struggles for independence? As we have seen most recently with the Arab Spring, every revolution is impossible, until it is inevitable." I don't like that 80,000 hours uses calculations in their attempt to estimate the good you could do through various potential careers. Stop focusing on the individual when the system is the problem. [Other stuff that doesn't make sense to me.]

As a socialist, Mills really doesn't like the argument that the best way to help the world's poor is probably to work in heavily capitalist industries. He seems to be avoiding engaging with Todd and Farquhar's arguments, especially replaceability. He also really doesn't like looking at things in terms of numbers, I think because numbers suggest certainty. When I calculate that in 50 years of giving away $40K a year you save 1000 lives at $2K each, that's not saying the number is exactly 1000. It's saying 1000 is my best guess, and unless I can come up with a better guess it's the estimate I should use when choosing between this career path and other ones. He also doesn't seem to understand prediction and probability: "every revolution is impossible, until it is inevitable" may be how it feels for those living under an oppressive regime but it's not our best probability estimate. [1]
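The arithmetic behind that best guess is simple, as a quick sketch (the $2K-per-life figure is itself only an estimate, so the output is a best guess, not a promise):

```python
# Back-of-envelope version of the estimate in the post.
years = 50
giving_per_year = 40_000   # dollars donated per year
cost_per_life = 2_000      # estimated dollars per life saved
lives_saved = years * giving_per_year / cost_per_life
print(lives_saved)  # 1000.0
```

The point isn't the precision of 1000; it's that this is the number to use when comparing this career path against others, until a better estimate comes along.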

In a previous discussion a friend was also misled by calculations. When I said "one can avert infant deaths for about $500 each" their response was "What do they do with the 500 dollars? That doesn't seem to make sense. Do they give the infant a $500 anti-death pill? How do you know it really takes a constant stream of $500 for each infant?" Have other people run into this? Bad calculations also tend to be distributed widely, with people saying things like "one pint of blood can save up to three lives" when the expected marginal lives saved is actually tiny. Maybe we should focus less on estimates of effectiveness in smart-giving advocacy? Is there a way to show the huge difference in effect between the best charities and most charities without using these?

Maybe I should have way more of these discussions, enough that I can collect statistics on what arguments and examples work and which don't.

(I also posted this on my blog)

[1] Which is not to say you can't have big jumps in probability estimates. I could put the chance of revolution at 5% somewhere based on historical data but then hear some new information about how one has just started and sounds really promising which bumps my estimate up to 70%. But expected value calculations for jobs can work with numbers like these, it's just "impossible" and "inevitable" that break estimates.
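As a toy illustration of that footnote (the numbers here are invented for the example, not from the debate): an expected-value comparison copes fine with a jump like 5% → 70%; what breaks it is treating an outcome as literally "impossible" or "inevitable", since those leave no room for updating.

```python
# Probabilities as whole-number percentages to keep the arithmetic exact.
def expected_lives(p_percent: int, lives_if_success: int) -> int:
    # Expected lives saved: probability of success times payoff.
    return p_percent * lives_if_success // 100

before_news = expected_lives(5, 10_000)   # base rate from historical data
after_news = expected_lives(70, 10_000)   # after promising new information
print(before_news, after_news)  # 500 7000
```

The estimate jumps by more than an order of magnitude, but both numbers remain usable inputs to a career comparison; 0% and 100% would not be.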

71 comments

Here's an attempted reconstruction of Mills' argument. I'm not endorsing this argument (although there are parts of it with which I sympathize), but I think it is a lot better than the case for Mills as you present it in your post:

If a friend asked me whether she should vote in the upcoming Presidential election, I would advise her not to. It would be an inconvenience, and the chance of her vote making a difference to the outcome in my state is minuscule. From a consequentialist point of view, there is a good argument that it would be (mildly) unethical for her to vote, given the non-negligible cost and the negligible benefit. So if I were her personal ethical adviser, I would advise her not to vote. This analysis applies not just to my friend, but to most people in my state. So I might conclude that I would encourage significant good if I launched a large-scale state-wide media blitz discouraging voter turn-out. But this would be a bad idea! What is sound ethical advice directed at an individual is irresponsible when directed at the aggregate.

80k strongly encourages professional philanthropism over political activism, based on an individualist analysis. Any individual's chance of …

Responding to your reconstruction: I think 80k are pretty clear about the fact that their advice is only good on the margin. If they get to a position where they can influence a significant fraction of the workers in some sector, then I expect their advice would change.


Thank you for writing this. I think I understand Mills' view better now.

The fewer people that vote, the more influential each vote is. Everyone else, stay home on Election Day! ;)

With respect to why some viscerally reject the idea, I think many see charity as a sort of morally repugnant paternalism that demeans its supposed beneficiaries. (I can sympathize with this, although it seems like a rather less pressing consideration than famine and plague.)

You might actually be able to cut ideologies up - or at least the instinctive attitudes that tend to precede them - according to how comfortable they are with charity and what they see it as encompassing: liberals think charity is great; socialists find charity uncomfortable and think it would be best if the poor took rather than passively received; libertarians either also find charity uncomfortable but extend that feeling to any system that socialists might hope to establish, or think charity is great but that the social democratic stuff liberals like isn't charity.

It might also be possible to view this unease as stemming from formally representing charity as purchasing status. I give you some money, I feel great, you feel crummy (but eat). It's a bit like prostitution: one doesn't have to deny that both parties are on net better off from any given transaction to hold that something exploitative is going on. F…

I think the problem with charity reflects an ethical question: what exactly does it mean that something is "good", and if something is "good", what should be the consequences for our behavior?

The traditional answer is that it is proper to reward doing "good" things socially, but they should not be enforced legally. One will be celebrated as a hero for saving people from a burning house, but one will not be charged with murder for not saving people from a burning house. On the other hand, doing "bad" things should be punished not only socially but also legally. Stealing things from others is punished not only by losing friends, but also by prison. What is the source of this asymmetry? Why is "bad" not the opposite of "good", with all the consequences?

This is especially important for utilitarians, because if we convert everything to utilons, at the end we have a choice between an action A which creates a worldstate with X utilons, and B which creates a worldstate with Y utilons. Knowing that X is greater than Y, should we treat action A as "good", or action B as "bad"?

My guess is that we have some baseline that we consider the standard behavior (minding your own business, neither helping others nor harming them). A "good" action is a change from this baseline towards more utilons; a "bad" action is a change from this baseline towards fewer utilons. Not lowering this baseline is considered more important than raising it. It makes sense to have a long-term Schelling point. The problem is that if you change this baseline, you have redefined the boundary between "good" and "bad". And people disagree about where exactly this baseline should be. If two groups disagree about the baseline, they have a moral disagreement even if they use the same utility function. They disagree about whether choosing worse B instead of better A should be punished. For example people are socially rewarded for giving money to charity, but they are not punished for not giving to charity, because th
Rob Bensinger
You're conflating two different questions here:

1. What interval of quantified goodness (utility) should the Law actively promote, by distributing punishments or rewards to agents? What are the least good good deeds the Law should care about, and what are the most good good deeds?

2. Restricting our attention to deeds the Law actively promotes or discourages, how ungood does an act have to be before the Law should discourage it via positive punishment, as opposed to just discouraging it by withholding a reward or by rewarding a somewhat-less-bad alternative action?

You start off speaking as though you're answering the first question -- when should the state be indifferent to supererogation? -- but then you only list punishment (and extremely harsh punishment, at that!) as the mechanism by which Laws can incentivize behavior. This is confusing. Whether the Law should encourage people (e.g., with economic incentives) to save their neighbors from burning houses is quite a different question from whether the Law should punish people who don't save their neighbors, and that in turn is quite a different question from whether such a punishment should be as harsh as that for, say, manslaughter! A $100 fine is also a punishment. (And a $100 reward is also an incentive.)

I don't agree with this. If two rational and informed people disagree about whether enacting a certain punishment is a good idea, then they don't have the same utility function -- assuming they have utility functions at all.

I think the core problem is that you're conceiving of the Law as a utilometer. You input the goodness or badness of an act's consequences (or its act-type's foreseeable consequences). The Law, programmed with a certain baseline, calculates how far those consequences fall below the baseline, and assigns a punishment proportional to the distance below. (If it is at or above the baseline, the punishment is 0.) The Law acts as a sort of karmic justice system, mirroring the world's distri
This makes sense to me, but then wouldn't Mills be arguing against the charity component instead of the career component?
Possibly. Or possibly he's deciding to go after the weaker claim, or is personally too cowardly to accept the lifestyle consequences of full-on consequentialism, or you should accept at face value his arguments that even on consequentialist grounds high-paying finance jobs are likely to destroy as much as they create. I'm mostly speculating based on my experiences among the kommie krowd and what I like to imagine (though don't we all) is a developed sympathetic understanding of other tribes as well. This shouldn't be read as a strong claim or even really a claim at all about Mills specifically. (From your summary it sounds like you found yourself confused by Mills' arguments, so either it is hopelessly confused, or you might benefit from giving it another go, or there's simply too much inferential distance at this moment.)

Downvoted for the extremely tendentious paraphrase. I'm generally in favor of more discussion of politics on this site, but I think it's a topic we need to be extra careful about. This is not the way to do it.

Also, it's "Engels", not "Engles".


I'm extremely interested in this discussion, but I agree - if you're going to discuss politics, please do so more carefully. A deliberately uncharitable paraphrase is not a good place to start.

"Extremely tendentious" is not what I want. The ideas of 80k make a lot of sense to me and a lot of what Mills was arguing did not, but I tried to paraphrase them as accurately as I could, or leave quotes in when I couldn't. [1] Which parts do you think badly represent their sources? [1] For example, "The language of probability will always fail to capture the possibility of system change. What was the expected value of the civil rights movement, or the campaign for universal suffrage, or anticolonial struggles for independence? As we have seen most recently with the Arab Spring, every revolution is impossible, until it is inevitable." was originally [misunderstands probability] but I tried to be fairer to him and avoid my own biases by using his own words.)

I'm sure your intention was to present an unbiased summary. Unfortunately, this is very difficult to do when you strongly identify with one side of a dispute. It also doesn't help that Mills is not a very clear writer. I've noticed that when I read an argument for a conclusion I do not agree with, and the argument doesn't seem to make much sense, my default is to assume it must be a bad argument, and to attribute the lack of sense to the author's confusion rather than my own. On the other hand, when the conclusion is one with which I agree, and especially if it's a conclusion I think is underappreciated or nonobvious, an unconscious principle of charity comes into play. If I can't make sense of an argument, I think I must be missing something and try harder to interpret what the author is saying.

This is probably a reasonably effective heuristic in general. There's only so much time I can spend trying to parse arguments, and in the absence of other information, using the conclusion as a filter to determine how much credibility (and therefore time) I should assign to the source isn't a terrible strategy. When I'm trying to provide a fair paraphrase of someone's argument, though, the he…

"I refuse to accept replaceability because it conflicts with my politics" is hardly a fair representation of his point, for a start. I think his point here along the lines of that although if you don't become a banker someone else will, if you do become a banker, nobody will become a political activist in your place (and for various reasons it's extremely hard to be both a banker and a socialist activist). And if you're a successful political activist, you increase the chance that society will be reformed so that there aren't a load of bankers.
From the source: Mills doesn't argue against replaceability, he says that he can't accept replaceability because it implies there will always be bankers and exploitation.
His actual quote is saying that replaceability goes away if the whole system can be changed. Your original paraphrase makes it sound like he has an ideological precommitment to the idea that if you don't become a banker, nobody else will.
Ok; I'll replace the paraphrase with the quote.
Regarding your example, I think what Mills is saying is probably a fair point - or rather, it's probably a gesture towards a fair point, muddied by rhetorical constraints and perhaps misunderstanding of probability. It is very difficult to actually get good numbers to predict things outside of our past experience, and so probability as used by humans to decide policy is likely to have significant biases.

When I calculate that in 50 years of giving away $40K a year you save 1000 lives at $2K each, that's not saying the number is exactly 400. It's saying 1000 is my best guess...

Bold added myself. Should that be 1000?

Fixed. Thanks!

I hate to go off on a tangent, but:

Bad calculations also tend to be distributed widely, with people saying things like "one pint of blood can save up to three lives" when the expected marginal lives saved is actually tiny.

Just in the past week I was trying to figure out the math behind that statistic. I couldn't find actual studies on the topic that would let me calculate the expected utility of donating blood. Do you happen to know said information?

I bet it's that 1/3 of a pint is pretty much the minimum amount of blood needed for a lifesaving transfusion (the Red Cross says the average transfusion size is three pints). The expected utility of giving blood is currently very small, at least in the USA, because people are not being refused transfusions due to lack of blood. If that started happening, blood drives would expand hugely and you'd know about it. We do have shortages of other organs, such as kidneys, and with those you have maybe a 50% chance of saving someone's life if, after you offer to donate one, they find a match for you. If you're able to start a chain of kidney swaps that wouldn't happen otherwise, you may be able to get above one expected life saved per kidney.
Upon further thought, 50% may be too high for kidney donation. I was estimating that you'd only be giving your kidney to someone who would die otherwise (there are many more people who need kidney transplants than there are kidneys available, so the replaceability effect should be absent), and that it had a 50% chance of working. While people who get a donated kidney do live longer than ones who stay on dialysis (and get to stop having to go in for dialysis), they tend to only live an extra 10-15 years. As the lives of people with major kidney problems are worse than those without, maybe count each year as only 75% when quality weighting? So 50% chance of working × 10-15 years × 75% gives a very rough estimate of 4-6 QALYs per kidney donation. Still much better than blood, but with deworming at around $100/QALY, donating $1K does more good than donating a kidney.
This is wrong: the 10-15 year estimate already takes into account rejections and other transplant failures. So I'm off by 50% and we get 8-12 QALYs. The data is also from a study on kidneys from cadavers; live donated ones might be better.
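Redoing that rough arithmetic as a sketch (all inputs are the guesses from the comments above, not study figures):

```python
extra_years = (10, 15)  # estimated extra life expectancy vs. dialysis
quality = 0.75          # quality weighting for years with kidney trouble

# First pass: multiply by a 50% chance the transplant works.
first_pass = [0.5 * y * quality for y in extra_years]
print(first_pass)  # [3.75, 5.625] -- the "4-6 QALYs" estimate

# Correction: the 10-15 year figure already accounts for failures,
# so the 50% factor should be dropped.
corrected = [y * quality for y in extra_years]
print(corrected)  # [7.5, 11.25] -- roughly the "8-12 QALYs" estimate
```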
Live ones apparently are better; I heard this recently, and it seems to be right according to a few pages I checked, although they don't cite specific studies.
I have heard statistics (from sources that aren't the red cross) that the supply of blood in the US is nearly always quite low. Running out rarely happens, although I did hear that lack of adequate blood reserves did cause a problem on 9/11. My impression is that the Red Cross typically has enough reserves for a few average days, but if something major happens, that supply can get used up pretty quickly.
Somewhere I heard that a single donation unit of blood can be made into three distinct products that could be given to three distinct patients. I assume that this possibility is the source of the statistic. Wikipedia is suggestive but not definitive. Even if that's right, it's not a very useful statistic, because it ignores the fairly common occurrence that more than one unit of a given blood product will be needed for a single patient.

The current issue of the Oxford Left Review has a debate between socialist Pete Mills and two 80,000 hours people, Ben Todd and Sebastian Farquhar: The Ethical Careers Debate, p4-9

Link to the article (the one in the post is dead)

I find the replaceability assumption very problematic, too. If this wasn't LW, I would simply state the obvious and say that all sorts of evil stuff can be justified by replaceability. But this is LW, so I'll say that replaceability is not true for reflective decision theories.

The other potential bankers aren't using reflective decision theories. It's really that simple.

Added: Actually, it's even simpler: the other potential bankers have different goals. But the point about whether other people are using reflective decision theories is sometimes relevant.

I'm not sure how to parse this. One possible interpretation is, "If the replaceability thesis were true, then it would follow that people should do evil things. But since people shouldn't do evil things, it follows by modus tollens that the replaceability thesis is false." This kind of argument could be correct depending on how the details were fleshed out, but I certainly would not call it obvious. Another interpretation is, "Unscrupulous clever arguers could use the replaceability thesis to persuade people to do evil things." This is more obvious, but it doesn't seem very relevant; sufficiently bad reasoning can be used to argue for any conclusion from any set of premises.
I wasn't trying to say anything deep, really. If the replaceability argument works for investment bankers, then it works for henchmen of an oppressive regime, too. In my country, many people actually used the replaceability argument, without the fancy name. And in hindsight, people in my country agree that they shouldn't have used the argument. So yeah, maybe it's the modus tollens. But maybe it's simpler than that: maybe these people misjudged being completely replaceable. In the eighties, more and more people dared to say no to the Hungarian secret service, with less and less consequence. By the way, the apparently yet-unpublished part 2 of jkaufman's link will deal with this issue.

Well, it kind of does apply to henchmen of an oppressive regime. The classic example is Oskar Schindler: he ran munitions factories for the Nazis in order to help him smuggle Jews out of Germany (and he ran them at under capacity). Schindler is generally regarded as a hero, but that seems to be trading on precisely something like the replaceability argument. If he hadn't done the job, someone else would have, and not only would they not have saved anybody, they would have run the factories better.

Flip the argument around for "being a banker" (or your doubtful career of choice) and it's hard to see what changes.

Sure, I never meant to imply that the issue is clear-cut. Many of the people revealed to be informers argued that they only reported the most innocent things about the people they were tasked to spy on. Tens of thousands of books are written about such moral dilemmas. When people decide that Schindler is a hero, they seem to use a litmus test that is similar but definitely not identical to replaceability. They ask: Did he do more than what can reasonably be expected from him under his circumstances? I don't think focusing on the replaceability part of this very complex question helps clear things up.
Okay, that's pretty fair. I can only really claim that a replaceability argument could be used to argue that Schindler was a hero; there may be other ways of thinking about it, and those may be the ways people actually do think! That said, I've found that example does sometimes make people reconsider their opinion of replaceability arguments, so it certainly appeals to something in the folk morality.
Paul Crowley
Replaceability is also not total. If you decide to be a henchman, on average you slightly increase henchman quality and reduce henchman salary. So refusing to be a henchman does cost the evil regime something.
I'm confused.
The essay you linked to acknowledges the existence of the coordination problems I am talking about, and promises a Part 2 where it deals with them. This Part 2 is not yet published.
I see. You meant the link in this post, not one of the links in the top level post (which was also me).
Can you elaborate for me, please? I don't know what you mean (even though this is LW).
As Douglas_Knight shows, my comment wasn't really well thought out. However, the idea is that a reflective decision theory agent considers the implications of the fact that whatever her decision is, similar agents will reach a similar decision. This makes them cooperate in Prisoner's Dilemma - Tragedy of the Commons situations where "if all of us behaved so selfishly, we would be in big trouble". The thing is sometimes called superrationality.
A more detailed consequentialist argument for replaceability: The replaceability effect: working in unethical industries part 1.

This is the first I heard of 80,000 hours, and their site gives me an instant negative vibe, and it's not just the abundance of weird pink on it. But I have trouble pinpointing quite what it is.

I notice that in their list of high-impact careers, not one of them involves actually doing the work that all this charity pays for. The grunt work is beneath them and the audience they're aiming at. A lot of the careers consist of telling other people what to do: managers, policy advisors, grant writers, sitting on funding bodies. The closest any of them get to boots on the ground is scientific research and development.

Now, I can see the argument for this. If your abilities lie in the direction of a lucrative profession, you should do that and give most of the proceeds to charity. The lawyer and the soup kitchen. But take this a step further (as the 80,000ers do themselves). Wouldn't it be even more effective to persuade other people to do this? If you get 10 people to make substantial donations, that's more effective than just doing it yourself. Or in their words, "There are also many opportunities for forming chains of these activities. For instance, you could campaign for more people to become professional philanthropists, who could spend money paying for more campaigners."

But why stop there? The more money people have, the more they can give, so you should concentrate on persuading the seriously wealthy to donate. And to move in their circles, you will have to cultivate a certain degree of prosperity yourself, or you'll never get access. Just an expense of the job, which will pay off with even more money raised for charity. But then again, governments command vastly more wealth and power than almost any individual, so that is where you could have a truly great impact. Better still, go for the governments of governments, the supra-national organisations. Of course, it will be such a chore to maintain a pied-à-terre in every major capital, charter private jets for your travelling, and dine at the most expensive restaurants with senior politicians and businessmen, but one could probably put up with it. Now, the higher up this pyramid you go, the smaller it g

You know, this is sort of WAD (working as designed). It's much easier to get people to do good if it happens to nearly coincide with something they wanted to do anyway. If you have someone who was already planning to become a banker, then it's much easier to persuade them to keep doing that, but give away money, than it is to persuade them to become a penniless activist. As it happens, this may be hugely effective, and so a massive win from a consequentialist point of view.

I like to think of it more on the positive front: as a white, male, privileged Westerner with a high-status education you basically have economic superpowers, so you can quite easily do a lot of good by doing pretty much what you were going to do anyway. Obviously, most of this is due to your circumstances, but it's still a great opportunity.

The amusement or absurdity value should be irrelevant in evaluating such decisions. I feel really angry when I consider remarks like these (not angry at someone or some action in particular, but more vaguely about the human status quo). The kind of spectacle where tickets are purchased in dead child currency.

I can see that my response to 80,000 Hours could be just as self-serving as they can be seen as being, but see my further response to satt.

This has been my concern. I'm not involved with 80k but I travel in Effective Altruism circles, which extend beyond 80k and include most of their memes.

What is incredibly frustrating is that none of this actually proves anything. It is still true that a wealthy banker is probably able to do more good than a single aid worker. Clearly we DO need to make sure there's an object-level impact somewhere. But for the near future, unless their memes overtake the bulk of the philanthropy world, it is likely that the methods 80k advocates are sound.

Still, the whole thing smells really off to me, and your post sums up exactly why. It is awfully convenient for a movement consisting of mostly upper-middle-class college grads that their "effective" tools for goodness award them the status and wealth that they'd otherwise feel entitled to.

Alternately, humans are badly made and care more about status and wealth than about the poor and sick. They won't listen if you tell them to sacrifice themselves, but they might listen if you tell them to gain status and also help the poor at the same time. The mark of a strong system, in my mind, is one that functions despite the perverse desires of the participants. If 80k can harness people's desire for money and status to their desire to do charity, I think it can go really far.

That is a good point.
This seems like a feature, since it means it is attractive to a MUCH larger subset of the populace than self-sacrifice.
Is this actually meant to be an argument against 80k hours' style of effective altruism or are you just joking around?
I am not joking around, but neither am I arguing that they should shut up shop and go save the world some other way. I have not concluded a view on whether what they are doing is worth while, and my posting is simply to voice a concern. If you think it's an unfounded one, go ahead and say why. I don't know. But I think there's a valid point here, and an Onion piece begging to be written about a bunch of Oxford philosophy students urging people to save the world by earning pots of money as bankers (a target they've painted on themselves with their own press release), and I can't help imagining what Mencius Moldbug would have to say about them.
I don't think I understand your concern. It's that people who go into high-earning careers will lose touch with "real people" (although the people these folks want to help are usually future people or in the developing world, and thus people they would never have met anyway)?
I'm not RichardKennaway, but I read him as basically applying a be-wary-of-convenient-clever-arguments-for-doing-something-you'd-probably-want-to-do-anyway heuristic. I see where he's coming from. The 80,000 Hours argument for getting riches & status instead of becoming (say) an aid worker or a doctor does smell suspiciously self-serving, at least to my nose. However — and it's a big "However" — their argument does appear to be correct, so I try to ignore the smell.
Yes, that's exactly what was in my mind, and Raemon expressed it also. I don't think that's the right way to resolve the conflict. One person's taking beliefs seriously is another's toxic decompartmentalisation, after all. Why should the smell yield to the argument instead of vice versa? Especially when you notice that the part telling you that is the part making the argument, and the cognitive nose is inarticulate. No, what is needed is to resolve the contradiction, not to suppress one side of it and pretend you are no longer confused. And meanwhile, go and be a banker, or whatever the right answer presently seems to be, because there is no such thing as inaction while you resolve the conundrum: to do nothing is itself to do something. If you later conclude it's a false path, at least the money will give you the flexibility to switch to whatever you decide you should have been doing instead.
That's the $64,000(-a-year) question, and I don't have an answer I'm happy with for the general case. Here's roughly what I think for this specific situation.

As you say, my nose can't describe what it smells. It might be a genuine problem or a false alarm. To find out which, I have to consciously poke around for an overlooked counterargument or a weak spot in the original argument, something to corroborate the bad smell. I did that here and couldn't find a killer gap in 80,000 Hours' arguments, nor a strong counterargument for why I should disregard them.

For simplicity, consider a stripped-down version of the decision problem where I have only two options: becoming a rich banker vs. getting a normal job paying the median salary. Suppose I disapprove of the banker option for whatever reason. If I hold my nose and become a banker anyway, it seems very likely to me that (1) I would nonetheless prefer that to having someone with different values in my place instead, and (2) even if taking a banking job worked against my values or goals, I could compensate for that by hiring other people to further them.

I had thought that reference class forecasting might warn against the get-rich-and-give strategy: people with more income give a smaller percentage of it to charity, so by entering banking one might opt into a less generous reference class. But quick Googling reveals that people with higher incomes give more in absolute terms, at least in the UK, the US, and Canada.

Putting aside the chance of my being wrong, what about the disutility of being wrong? Well, I agree with your final paragraph, so that doesn't seem to weigh heavily against the 80,000 Hours point of view either.

All in all my nose seems to have overreacted on this one. Maybe it raised the alarm because 80,000 Hours' conclusion failed a quick universalizability test, namely "would this still be the best choice if everyone else in the same boat made it too?" But that test itself seems to fail here.
That's a part of it. One reason for the lawyer to now and then put in a shift at the soup kitchen is to keep his feet on the ground and observe the actual effect of what he's donating to. Some managers put in a shift on the shop floor now and then for the same reason. Maybe 80,000ers should consider spending their vacations out in the field?
I agree. I'm writing from Ecuador right now. Seeing serious poverty first-hand does hit me in a different place than reading about it. But I still think donating to efficient charities is the best way to help these people, not me volunteering or moving here. I think most of the 80,000 Hours founders are/were philosophy grad students, so they weren't especially likely to wind up as either on-the-ground nonprofit workers or high-flying financiers. And I gather many of them had an ugh field around money, so trying to earn more of it (and being read by other people as someone who loves money) is more of a sacrifice than it might seem.
Some of the inferential links you make aren't sound (lots of people are already trying to get the seriously wealthy to donate, so that might not be where you can have the greatest impact, and there's no good reason to think you would be more effective than the people who currently run the IMF and the World Bank), but the overall idea seems good to me: look for where you can most improve the world and go there.
If you do pinpoint it, I would be curious.
Assuming that we run on corrupted hardware, how much should we trust explanations like: "I should get a lot of money and power, because I am a good person, so this will help the whole society"? Also, if I tried to recruit you into a money-making cult, such as Amway, I would start by describing the good things you could do after you become super-rich. Not because we are at LW where we signal that we care about saving the world, but because this is the standard recruitment tactic. (This does not prove that 80,000 Hours is a bad thing. I am just explaining how it pattern-matches and creates a negative vibe.) EDIT: Also, be wary of convenient clever arguments for doing something you'd probably want to do anyway.
Maybe it reminds him of breast cancer advertising?

Linking to the article seems like it would be significantly better than this sort of paraphrase. I'm not sure whether you can get authorization to do so, but I would find that a lot more useful, especially for controversial political issues like this.

It's linked in the original post. The summary was an attempt to get more people to read and consider the ideas by making it all a lot shorter. That mostly seems to have been a bad idea, primarily because I didn't do a good job of keeping my biases out of it.