With the EA Forum's giving season just behind us, it's a natural moment to look back on your donations over the past year and think about where you'd like to give in the year ahead. We (Tristan and Sergio) rarely spend as much time on these decisions as we'd like. When we tried to dig a bit deeper this year, we realized there are a lot of big questions about personal donations that haven't been crisply put together anywhere else, hence the post.
We've tried to make some of those questions clearer here, to highlight things that you might want to consider if they haven't occurred to you before, and to encourage comments from others as to how they think about these factors. Some of these factors aren’t original to us, and in general we’re aiming to bring together considerations that are scattered across different posts, papers, and conversations, and present them in one place through the lens of personal donation decisions. Happy giving!
TLDR
This post focuses on five considerations that arise as you try to deepen your giving, especially as you give to specific opportunities rather than just to a fund. Those are:
Deference: Funders have far more context than most people on a given funding landscape, and that access to exclusive knowledge, along with knowing the right questions to ask, puts them in a better position to decide who should get funding. But when funders are potentially biased in a given direction, or you have domain-specific knowledge that potentially runs deeper than theirs, it's worth re-evaluating.
Indirect Effects: Many interventions have second-order effects that could rival or exceed their direct impact. For example, saving lives may affect consumption patterns, economic development may alter land use, and so on. Given this complexity, it might make sense for the community to fund more work into the potential indirect effects of common EA interventions.
Moral Uncertainty: If you're uncertain between worldviews or cause priorities, allocating 0% to your minority views isn't necessarily the best choice. Rather than letting your top credence dominate entirely, consider giving each perspective you hold some representation. But also keep in mind that you're part of a community, and as such it might be best to think about balancing the community's allocation rather than your own.
Timing: $1,200 could be given via a Steady Drip (regular donations, e.g. $100 monthly), Reserved for Pivotal Moments (e.g. saving the $1,200 to close a critical funding gap), or allocated through Patient Philanthropy (investing now to give more later). Each has specific strengths.
Moral Seriousness: As a community of do-gooders in the world, it would be bad if all of our bets were speculative, hits-based-type giving. We should use at least part of our resources to demonstrate moral seriousness and a genuine commitment to alleviating present suffering, in ways that are recognizable to the average person.
1. Deference
Early on, it likely makes sense for nearly all of your donating to run through some fund. You're new to a cause area, or new to EA and considering the broad set of potential donation opportunities at hand, and you simply don't have a well-enough-developed view to make it worth staking out your own position.
But eventually, you'll become familiar enough that you've begun to form your own inside view. You'll look at what funders broadly fund in areas that interest you, and start to disagree with certain decisions, or at least feel that some segment of the cause area is being neglected. These start as intuitions: useful indicators, but likely not robust enough to justify deviating from donating to the fund you think is most impactful.
But at some point, you'll likely arrive at a place where you have enough knowledge about some part of a cause (especially if you work on it) that it's worth considering choosing the targets of your donations yourself. Where is that point?
When is it reasonable to deviate
Frankly, it's hard to tell; we've debated this more than once ourselves[1]. But here are some signals that you might be ready to allocate a portion of donations according to your own judgment:
You can articulate specific reasons funds are wrong, not just "I have a different intuition." You've read grant databases, you understand their stated reasoning, and you have a concrete model of what they're missing.
You have domain-specific knowledge that professional grantmakers are less likely to have (e.g., you work closely with a neglected subcommunity, you have technical expertise in a niche area, or you've been tracking a specific bottleneck for months).
Others with similar experience respect your takes. This is an imperfect signal, but if people you consider well-calibrated find your analysis reasonable, that suggests you may be ready to exercise more autonomy in your donation decisions.
You've engaged directly with the orgs/founders you're considering. Brief calls and public materials are a start, but they don't replicate the depth of evaluation that dedicated grantmakers do[2].
Even when you meet some of these signals, we'd suggest an 'earn your autonomy' approach: start with ~20% for inside-view bets while keeping most funds allocated through established grantmakers. Track your reasoning and expected outcomes, then increase autonomy gradually if your bets look good in hindsight.
2. Indirect Effects
We take the meat-eater problem seriously, but we don't at all think that the conclusion is to avoid donating in the Global Health and Development (GHD) space: the effects might actually even out if e.g. further development reduces the total amount of natural space, potentially counterbalancing increased meat consumption by reducing the number of suffering wild animals. But the problem is enough to give us pause, and highlights the general issue that, for anyone with a diverse set of things they care about in the world, they should likely consider the indirect effects of the interventions they're funding.
The cluelessness problem
The meat-eater problem[3] is a specific case of a much broader issue: we are often radically uncertain about the long-run or indirect effects of our actions. That matters enormously, because second-order (and further) effects might be the most important aspect of any given intervention.
This is "complex cluelessness", uncertainty not just about the sign and magnitude of indirect effects, but cases where plausible effects flow in opposite directions and we lack a reliable way to weigh them.
There's much more to say about cluelessness and different people offer different responses. But if you don't want to be paralyzed, sometimes you have to bracket what you can't reliably assess and act on what you can. This doesn't mean ignoring second-order effects — quite the opposite. It means there may be real value in donating to those working to map out the potential unintended consequences of common EA interventions.
3. Moral Uncertainty and Diversification
Probably everyone here is familiar with moral uncertainty, but what does it actually mean for your giving? What would this body of work have to say about how we can donate more wisely? More concretely: if you're uncertain between different moral frameworks or cause priorities, how should you allocate your donations?
The standard answer is to maximize expected value (EV). Donate everything to whatever has the highest expected impact given your credences across different moral views. But donating 100% to what you think is the most important cause is far from the obvious strategy here.[4]
First, the benefits of EV maximization under ordinary empirical uncertainty don't fully apply to philosophical uncertainty. With empirical uncertainty, a portfolio of diversified bets tends to reliably do better in the long run: individual gambles may fail, but the overall strategy works. With philosophical uncertainty, you're not making independent bets that will converge toward truth over time. If you're wrong about hedonistic utilitarianism, you're likely to stay wrong, and all your actions will be systematically misguided.
Second, moral uncertainty can reflect value pluralism rather than confusion. You can genuinely care about multiple ethical perspectives. You might genuinely have utilitarian concerns and deontological ones at the same time, and your donations can reflect that.
If these objections to EV maximization for dealing with moral uncertainty seem relevant to you, an alternative approach might be through frameworks such as moral parliaments, subagents, or Moral Marketplace Theory. While distinct, these approaches share the insight that when genuinely uncertain between moral views, you should give each perspective meaningful representation. If you're 60% longtermist, 25% focused on present human welfare, and 15% focused on animal welfare, you might allocate your donations roughly in those proportions, not because you're hedging, but because you're giving each perspective the representation it deserves given your actual uncertainty.
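As a purely illustrative sketch of what proportional representation could look like in practice, here's a minimal Python snippet that splits a hypothetical annual budget according to your credences. The budget figure and cause labels are assumptions chosen for the example, not recommendations.

```python
# Minimal sketch: split a donation budget in proportion to your credences.
# All figures and cause labels are illustrative assumptions.

credences = {
    "longtermism": 0.60,
    "present_human_welfare": 0.25,
    "animal_welfare": 0.15,
}

annual_budget = 1200  # e.g. $100/month, as in the Timing section below

total = sum(credences.values())
allocation = {cause: annual_budget * c / total for cause, c in credences.items()}

for cause, amount in allocation.items():
    print(f"{cause}: ${amount:,.2f}")
# longtermism: $720.00, present_human_welfare: $300.00, animal_welfare: $180.00
```

In practice you'd probably round these into a small number of concrete grants, or set a minimum per cause so that tiny allocations aren't eaten by transaction costs.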
Career vs donations
The framework becomes especially relevant when thinking about the relationship between your career and your donations. If you work full-time in a cause area, you've already made a massive allocation to that perspective (40-60 hours per week, your professional development, your social capital, your comparative advantage).
It's reasonable to think that 80,000 hours is already enough of an investment, and that unless you're really, really confident in your cause prioritization, you should use your donations to give voice to your other values. If you're 70% confident AIS (AI Safety) is the top priority and 30% confident it's something else (animal welfare, nuclear risk, GHD), allocating both your entire career and all your donations to AIS treats that 70% credence as certainty. Your career might be an indivisible resource that you've allocated to your plurality view, but your donations are divisible: they're an opportunity to give your minority perspectives some voice.
But coordination matters
A potential blind spot of this framework is that it treats you as an individual but you're actually part of a community. If everyone diversifies individually, we lose specialization. If everyone specializes, assuming others will cover minority views, those views will be neglected.
Nevertheless, even if individual diversification is collectively suboptimal, it might still be personally defensible. Maybe you are not just optimizing community output, you could also care about maintaining integrity with your own values.
4. Timing
When you donate can matter as much as where. The right timing strategy could depend on how engaged you are with the funding landscape, whether you can spot time-sensitive opportunities, and how much you expect to learn over time. There are (at least) three possible approaches:
Approach 1: A steady drip of donations
Regularly donating, e.g. monthly, reduces cognitive overhead, helps with self-control around spending, and gives orgs predictable cashflow for planning. A possible downside of this approach is something like the "set-and-forget" bias, where your automated allocations continue unchanged even as your knowledge or the landscape evolves. Using a fund or regrantor mitigates this somewhat (they adapt their grants as the landscape shifts), but doesn't eliminate it completely; the fund itself might be the wrong choice now, or your split between different causes/worldviews may no longer match your current thinking.
Approach 2: Reserve for pivotal moments
Another approach that can potentially generate a lot of value is to keep a buffer to act on time-sensitive opportunities: matching campaigns, bridge funding for quality orgs hit by landscape shifts, key hires, or short policy windows. $12,000 at the right moment can beat $1,000/month when money is genuinely the binding constraint. This strategy works best when you can distinguish a "temporary funding shock" from an "org struggling for good reasons", which requires more engagement and time than the Steady Drip method, and it invites the risk of sloppy evaluation when you're pressed to decide quickly.
Approach 3: Patient philanthropy
There's also the question of patient philanthropy, which used to be a live area of exploration but has since gone under the radar as people have become increasingly convinced that this is The Most Important Century. We, at least, are not totally convinced, and as such we invest part of our current savings so that we might be able to donate more later, which comes with multiple benefits:
Expected financial growth: Historically, investments in the market have delivered positive real returns (see the rough sketch after this list).
Epistemic growth: This connects to the "complex cluelessness" discussion in Section 2: you may not resolve all downstream uncertainty, but you can (hopefully) learn which interventions are more robust and which indirect effects are tractable enough to update on.
Option value: You can always donate later, but you can't un-donate.
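To make the "expected financial growth" point concrete, here's a minimal sketch comparing giving now with investing and giving later. The 4% real return and 15-year horizon are assumptions chosen purely for illustration, and the comparison says nothing about whether comparably good opportunities will still exist later, which is exactly the downside discussed next.

```python
# Minimal sketch: how an invested donation might compound before being given.
# The return rate and horizon are illustrative assumptions, not forecasts.

def future_value(amount: float, real_return: float, years: int) -> float:
    """Value of `amount` after compounding at `real_return` for `years` years."""
    return amount * (1 + real_return) ** years

donation_today = 1200       # give the $1,200 now
assumed_real_return = 0.04  # assumed 4% annual real return
horizon_years = 15

donation_later = future_value(donation_today, assumed_real_return, horizon_years)
print(f"Give now: ${donation_today:,.0f}")
print(f"Give in {horizon_years} years: ~${donation_later:,.0f}")  # ~$2,161
# Bigger in dollar terms, but only better if comparably cost-effective
# opportunities are still around when you eventually give.
```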
But patient philanthropy comes with downsides as well. Even if you just accept the weaker claim that AI is likely to make the world a much weirder place than it is today, that's good reason to think about donating today, while the world is still intelligible and there seem to be clearly good options on the table for improving the world under many worldviews.
5. Moral Seriousness
One of the things that most stuck with us from the 80,000 Hours podcast was a moment in an early episode with Alex Gordon-Brown, where he mentioned that he always puts some of his donations towards interventions in the GHD space, out of what we might call moral seriousness.
Here, moral seriousness means being able to pass scrutiny in the eyes of a skeptic recently acquainted with EA's core ideas. We imagine her saying: "Wait wait, you just spent all this time talking to me about how important donating more effectively is, about what an absolute shame it is what others on this Earth are suffering through right now, at this moment, but you're donating all of your money to prevent abstract potential future harms from AI? Really? Did you ever even care about the children (or animals) to begin with?"
We could explain Longtermism to her, try to convince her of the seriousness of our caring for all these things at once while still deciding to go all in on donating to AIS. We could explain the concept of hits-based giving, and why we think the stakes are high enough that we should focus all our funds there. But then we hear her saying: "Sure sure, I get it, but you aren't even donating a portion of your 10% to them. Are you really okay with dedicating all of your funds, which over the course of your life could have saved tens of thousands of animals and hundreds of humans, to something which might in the end help no one? Do you really endorse the belief that you owe them nothing, not even some small portion?"
Frankly, the skeptic seems right. We're comfortable with longtermism being a significant part of our giving, but neither of us wants it to be 100%. Still, the same questions about coordination arise here too: if the community is still split between these areas, is there any need to personally allocate across them? One reason to think so is that most people will come to EA first through an interaction with a community member, and it seems particularly important for that person to signal that their moral concern is broad and doesn't just include weird, speculative things that are unfamiliar. We want to reserve some portion for GHD and animal welfare, making sure that at least part of what we're working towards is helping others now, actively, today.
Moreover, through the lens of the moral uncertainty framework we discussed earlier, you can think of that skeptic as a subagent who deserves a seat at your decision-making table, your "common-sense representative" demanding a place among your other moral views. Even if your carefully reasoned philosophical views point heavily toward longtermism, there's something to be said for giving your intuitions about present, visible suffering some weight in your actions. Not as a concession to outside perception, but because those intuitions are themselves part of your moral compass.
[1] Up until now, I've (Tristan) made my donations totally out of deference, knowing that funders have a far more in-depth view of the ecosystem than I do, and time to really deeply consider the value of each project. But now I'm at a crossroads, as I believe that funders aren't prioritizing AIS advocacy enough. I really believe that, but I'm still relatively junior (only ~2 years in the AIS space), and am quite wary of entirely shifting my donations based on that belief. So what amount would be appropriate? 50% to organizations based on my inside view, 50% to funds?
Part of the issue here is that, by choosing to donate to a very narrow window of opportunities (AIS advocacy orgs), you lose the benefit of pitting those advocacy orgs against the broader set of organizations working on AIS. You're choosing the most effective AIS advocacy organization, not the most effective organization reducing AI risk. I have abstract arguments for why I think AIS advocacy is potentially really impactful, but I don't have the expertise to even begin to evaluate technical interventions and how they stack up against advocacy.
[2] What's important here is that you've worked out which factors capture the considerations that matter, and have them ready to go as you dig deeper into a given organization. For example, it's not enough to establish that a given organization has been impactful, i.e. has done great work in the past; you also want reason to think they're set to do good work in the future, and more specifically that your contribution will go towards supporting good work. It's important to ask what's being bought with your marginal donation, and to have a sense of the upside of that specific work, beyond the org more generally.
[3] The meat-eater problem refers to the concern that interventions saving human lives, particularly in developing countries, may indirectly increase animal suffering. The logic is that each person saved will consume meat throughout their lifetime, leading to more animals being raised and slaughtered in factory farms. If you value animal welfare, this could potentially offset the positive impact of saving human lives.
[4] How does this work in practice? Suppose you're 95% confident that only humans matter morally, and 5% confident that shrimp can suffer and their welfare counts. In that 5% scenario, you think helping one shrimp matters much less than helping one human, maybe one millionth as much. But there are about a trillion shrimp killed each year in aquaculture. Expected value maximization multiplies your 5% credence by a trillion shrimp, and even dividing by a million for how little each counts, that overwhelms your 95% confidence about humans. The expected value calculation will tell you to donate almost everything to shrimp welfare, and many people find this conclusion troubling or even fanatical.
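To spell out the arithmetic this example gestures at, here's a minimal sketch with round, made-up numbers; the human-side figure in particular is a placeholder we've invented purely for illustration.

```python
# Minimal sketch of the expected-value arithmetic in the shrimp example.
# All numbers are illustrative; the human-side figure is a made-up placeholder.

credence_shrimp_matter = 0.05   # 5% credence that shrimp welfare counts
shrimp_killed_per_year = 1e12   # ~1 trillion farmed shrimp killed per year
shrimp_to_human_weight = 1e-6   # one shrimp counts one-millionth as much

credence_only_humans = 0.95
humans_plausibly_helped = 1e4   # assumed scale of a human-focused intervention

expected_shrimp_stake = (credence_shrimp_matter
                         * shrimp_killed_per_year
                         * shrimp_to_human_weight)
expected_human_stake = credence_only_humans * humans_plausibly_helped

print(expected_shrimp_stake)  # 50,000 "human-equivalents" at stake per year
print(expected_human_stake)   # 9,500
# Even after heavy discounting, the sheer scale tips a naive EV calculation
# toward the 5% view, which is why many people find the result fanatical.
```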