Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.

Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.

A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.

Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person’s identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.

Consider your personal views. I’ve certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I’ve learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180 – and I think this is true of many people:

  • Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?
  • Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?
  • Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?

Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead to people changing their minds on issues where they had previously been very certain, and indeed emotionally invested.

Obviously we don’t need to apply EA principles to everything – we can probably continue to brush our teeth without much reflection. But we probably should apply them to issues which are seen as very important: given the importance of the issues, any implications of EA ideas would probably be important implications.

Moral Uncertainty

In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea ‘maximise expected choice-worthiness’, and if you’re into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.

This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there’s only a 10% chance that animal welfare is morally significant – you’re pretty sure they’re tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake as well as probability of its being correct means paying more respect to ‘minority’ theories.
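The ‘maximise expected choiceworthiness’ rule can be sketched in a few lines of code. The credences and choiceworthiness values below are illustrative stand-ins for the vegetarianism example, not figures from MacAskill’s thesis:

```python
# A sketch of 'maximise expected choiceworthiness' (MEC) applied to the
# vegetarianism example. All credences and choiceworthiness values are
# illustrative assumptions.

# Credence assigned to each moral theory.
credences = {"animals_count": 0.10, "animals_dont_count": 0.90}

# Choiceworthiness of each action under each theory (arbitrary units):
# eating meat is catastrophic if animals count, mildly good otherwise.
choiceworthiness = {
    "eat_meat": {"animals_count": -1000.0, "animals_dont_count": 1.0},
    "abstain":  {"animals_count": 0.0,     "animals_dont_count": 0.0},
}

def expected_choiceworthiness(action):
    # Weight each theory's verdict by our credence in that theory.
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

best = max(choiceworthiness, key=expected_choiceworthiness)
```

Even with only a 10% credence in animals counting, the sheer size of the downside makes abstaining the action with the highest expected choiceworthiness.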

And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premises, like cosmopolitanism, the moral imperative for cost-effectiveness and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.

One issue that Will touches on in his thesis is the issue of whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many such people on each side of the issue. Given the degree of disagreement on the issue, among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.

Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child1. The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it’s morally permissible, it’s merely permissible – it’s not obligatory. She follows the example from Normative Uncertainty and constructs the following table.

abortion table 1

In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.

However, Sarah might not consider this representation as adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.2 She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take into account these preferences.

abortion table 2

Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she considers her credences: she finds the pro-choice arguments slightly more persuasive than the pro-life ones, so she assigns a 70% credence to abortion being morally permissible, and a 30% credence to its being morally impermissible.

Looking at the table with these numbers in mind, intuitively it seems that again it’s not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah is unsatisfied with this unscientific comparison: it doesn’t seem to have much of a theoretical basis, and she distrusts appeals to intuition in cases like this. What is more, Sarah is something of a utilitarian; she doesn’t really believe in things being impermissible.

Fortunately, there’s a standard tool for making interpersonal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now expressed as uncertainty about whether saving fetuses generates QALYs. If it does, then it generates a lot: supposing she’s at the end of her first trimester, if she doesn’t abort, the baby has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 years in the US, for 98% × 78.7 = 77.126 expected QALYs. This calculation assigns no QALYs to the fetus’s 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.

We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which has a disutility weight of 0.494, so let’s conservatively use 0.494. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women’s Health Magazine gives the odds of maternal death during childbirth as 0.03% for 2013; we’ll round up to 0.05% to take into account the risk of non-fatal injury. Women at 25 have a remaining life expectancy of around 58 years, so that’s 0.05% × 58 = 0.029 QALYs. In total that gives us an estimate of 0.276 QALYs. If the baby doesn’t survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.
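This back-of-the-envelope estimate can be reproduced in a few lines; every input below is one of the article’s own figures:

```python
# Reproducing the article's estimate of the QALY cost to Sarah of
# carrying the pregnancy to term.

discomfort_weight = 0.494       # upper-bound disutility weight of pregnancy
months_remaining = 6            # expected months of pregnancy left
discomfort_cost = discomfort_weight * months_remaining / 12   # 0.247

death_risk = 0.0005             # 0.05% chance of death or serious injury
remaining_life_at_25 = 58       # years of remaining life expectancy at 25
mortality_cost = death_risk * remaining_life_at_25            # 0.029

total_cost = discomfort_cost + mortality_cost                 # 0.276
print(round(total_cost, 3))
```

The mortality term is small compared to the discomfort term, so the estimate is not very sensitive to the exact injury adjustment.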

Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they’re plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn’t.

abortion table 3

We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:

  • If she aborts the fetus, our expected QALYs are 70% × 0 + 30% × (−77.126) = −23.138
  • If she carries the baby to term and puts it up for adoption, our expected QALYs are 70% × (−0.247) + 30% × (−0.247) = −0.247
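The same expected-value comparison can be checked in code; the 77.126 figure is the fetus’s counterfactual life expectancy in QALYs (a 98% chance of surviving to birth times a 78.7-year life expectancy at birth):

```python
# Expected QALYs of each action under moral uncertainty.

p_permissible = 0.70   # credence that abortion is morally permissible
p_counts = 0.30        # credence that the fetus morally counts
fetus_qalys = 0.98 * 78.7        # counterfactual QALYs of the fetus: 77.126
pregnancy_cost = 0.247           # direct QALY cost of the pregnancy

# Abortion loses the fetus's QALYs in the worlds where it counts.
ev_abortion = p_permissible * 0 + p_counts * (-fetus_qalys)
# Adoption costs the pregnancy either way.
ev_adoption = p_permissible * (-pregnancy_cost) + p_counts * (-pregnancy_cost)
```

Adoption comes out roughly two orders of magnitude less costly in expectation than abortion.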

Which again suggests that the moral thing to do is to not abort the baby. Indeed, the life expectancy is so long at birth that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.

Indeed, we can show just how confident in the lack of moral significance of the fetuses one would have to be to justify aborting one. Here is a sensitivity table, showing credence in moral significance of fetuses on the y axis, and the direct QALY cost of pregnancy on the x axis for a wide range of possible values. The direct QALY cost of pregnancy is obviously bounded above by its limited duration. As is immediately apparent, one has to be very confident in fetuses lacking moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.
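A minimal version of that sensitivity table can be generated as follows; the grid of credences and pregnancy costs is an illustrative choice, not necessarily the exact values from the original figure:

```python
# Net expected QALYs of aborting rather than adopting, for a grid of
# credences in fetal moral significance (rows) and direct QALY costs of
# pregnancy (columns). Positive entries are the only cases in which
# abortion is QALY-positive.

FETUS_QALYS = 0.98 * 78.7   # counterfactual life expectancy, ≈ 77.126

def net_qalys_of_abortion(credence, pregnancy_cost):
    # E[abort] - E[adopt]: the pregnancy cost avoided, minus the
    # expected loss of the fetus's life.
    return pregnancy_cost - credence * FETUS_QALYS

for credence in (0.001, 0.01, 0.1, 0.3):
    row = [round(net_qalys_of_abortion(credence, cost), 2)
           for cost in (0.1, 0.25, 0.5, 1.0)]
    print(f"credence {credence}: {row}")
```

Only at credences around a tenth of a percent (and non-trivial pregnancy costs) does abortion become marginally QALY-positive; for moderate credences it is strongly negative.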

abortion table 4

Other EA concepts and their applications to this issue

Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we’re overlooking, it would be remiss not to give at least a broad overview of some of the others. Here I don’t intend to judge how persuasive any given argument is – as discussed above, this is a debate that has gone on without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different arguments. This is a section about the directionality of EA concerns, not their overall magnitudes.

Not really people

One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense ‘not really people’. In many ways this argument resembles the anti-animal-rights argument that animals are also ‘not really people’. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it’s also noteworthy that in general the two views seem to be mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an ‘expanding circle’ of moral concern. I’m skeptical of such an argument, but it seems clear that the larger your circle, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a “Defend the Weak; They’re morally valuable too” party faced off against an “Exploit the Weak; They just don’t count” party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.

Not people yet

A slightly different common argument is that while fetuses will eventually be people, they’re not people yet. Since they’re not people right now, we don’t have to pay any attention to their rights or welfare right now. Indeed, many people make short sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don’t assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.

Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.


Replaceability

Another important EA idea is that of replaceability. Typically this arises in contexts of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are equivalently much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn’t make much difference, because their parents adjust their subsequent fertility.

The plausibility behind this idea comes from the idea that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.

If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.

Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it as part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself was the only alternative, but given that adoption services are available, it does not seem to go through.


Autonomy

Sometimes people argue for the permissibility of abortion through autonomy arguments. “It is my body”, such an argument goes, “therefore I may do whatever I want with it.” To a certain extent this argument is addressed by pointing out that one’s bodily rights presumably do not extend to killing others, so if the anti-abortion side is correct, or even has a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, the argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely laudable but actually compulsory. EAs are generally not very impressed with Ayn Rand-style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.

Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.


Murder is simply wrong

An argument often used on the opposite side – that is, to oppose abortion – is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance on never murdering. I’m not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.

I didn’t ask for this

Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If you did not intend to become pregnant – perhaps even took precautions to avoid becoming so – but nonetheless end up pregnant, you’re in some way not responsible for the pregnancy. And since you’re not responsible for it, you have no obligations concerning it – so you may permissibly abort the fetus.

However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world, and many EAs would say we have the obligation too. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.

Infanticide is okay too

A frequent argument against the permissibility of aborting fetuses is the analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you were one of those people, this particular argument would have little sway over you.

Moral Universalism

A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby’s QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by age of the fetus – maybe ending when viability hits – but the same answer will apply to rich and poor, Christian and Jew, etc.

This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn’t saving many lives a year.

I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.

May we discuss this?

Now that we’ve considered these arguments, it appears that applying general EA principles to the issue tends to make abortion look less morally permissible, though there were one or two exceptions. But there is also a second-order issue that we should perhaps address – is it permissible to discuss this issue at all?

Nothing to do with you

A frequently seen argument on this issue is to claim that the speaker has no right to opine on the issue. If it doesn’t personally affect you, you cannot discuss it – especially if you’re privileged. As many (a majority?) of EAs are male, and of the women many are not pregnant, this would curtail dramatically the ability of EAs to discuss abortion. This is not so much an argument on one side or other of the issue as an argument for silence.

Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many opinions on topics that don’t directly affect them:

  • EAs have opinions on disease in Africa, yet most have never been to Africa, and never will
  • EAs have opinions on (non-human) animal suffering, yet most are not non-human animals
  • EAs have opinions on the far future, yet live in the present

Indeed, EAs seem more qualified to comment on abortion – as we all were once fetuses, and many of us will become pregnant. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.

Too controversial

We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I’m somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.

Note that the controversial nature is evidence against abortion’s moral permissibility, due to moral uncertainty.

However, the EA movement is no stranger to controversy.

  • There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.
  • There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.

Not worthy of discussion

Finally, another objection to discussing this is that it simply isn’t an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above are correct, we should simply decline to discuss the issue.

However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011, over 1 million babies were aborted in the US. I’ve seen a wide range of global estimates, ranging from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also cause a higher loss of QALYs due to the young age at which they occur. On the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimable. On the other hand, I have little idea how many dollars of donations it takes to save a fetus – it seems like an excellent example of some low-hanging fruit for research.


People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we considered the implications of common EA beliefs for the permissibility of abortion. Taking into account moral uncertainty makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also significant to the issue, making various standard arguments on each side less plausible.


  1. There doesn’t seem to be any neutral language one can use here, so I’m just going to switch back and forth between ‘fetus’ and ‘child’ or ‘baby’ in a vain attempt at terminological neutrality. 
  2. I chose this reason because it is the most frequently cited main motivation for aborting a fetus according to the Guttmacher Institute. 
Comments (80)

Sorry, haven't read the whole post, only commenting on one part. If your goal is to maximize QALYs, and given that happiness depends only very weakly on living conditions, it is nearly always true that

saving fetuses generates QALYs

and you end up with the repugnant conclusion of populating the Earth to capacity (and maximizing this capacity by any means possible). The standard solution is not a calculational one, but rather agreeing on where the Schelling point lies. Which factors out the EA-related considerations and brings you back to square one, arguing about the fetal personhood.

saving fetuses generates QALYs

Not merely saving them, but creating them and bringing them to term. Every unoccupied womb is an idle QALY factory going to waste.

Pretty much what I was going to comment. I would add that even if he somehow were able to avoid having to accept the more general Repugnant Conclusion, he would certainly have to at least accept that if abortion is wrong in these grounds, not having a child is (nearly) equally wrong on the same grounds.
I haven't read all the comments to this post, and I am new to LW generally, so if I say anything that's been gone over, bear with me.

The argument that abortion is bad due to the QALYs has certain inherent assumptions. First is that there's "room" in the system for additional people. If the addition of a new person subtracts from the quality of life of others, then that has to be factored.

Another aspect that must be factored into this analysis is somewhat more obscure: "moral hazard". "Slippery slope" is a fallacy; however, as noted here, under certain conditions it's a caution to be taken seriously. If abortion is banned because it reduces total QALYs, then the precedent has been set for authoritative intervention in personal choices for the purpose of increasing total QALYs. It would then make sense to, for instance, ban eating beef due to the inefficiency and health consequences of it. And etcetera, etcetera, etcetera, depending on how the adjustment for "quality" is calculated.

And this gets into the more important question when pondering QALYs: what calculus are we using for the "quality adjustment"? What's the coefficient adjustment for depression? How do you factor in personal differences? Is a year of the life of a man with a mean wife worth less than that of a man with a nice wife? Does "pleasure" factor in? How? Fried food is pleasurable to many people; consuming it increases their instantaneous quality of life, but has some long-term costs in the "years".

Additionally, the quality of life hit that is suffered from banning abortion (and taking a commensurate increase in adoptions) is not just to the mother. Every human in that system takes a quality of life hit due to the chilling effect that it will likely have on the sexual climate, the additional concerns that will inevitably be present on every act of sexual congress (a full-term pregnancy is a far greater consequence than an abortion). If our goal is to maximize QALY, then QALY must include all
Yeah I think the repugnant conclusion is not actually very repugnant; it just seems so because of scope insensitivity. But I would stress that the argument I make doesn't rely on your having a goal of maximizing QALYs. You might assign some credence to other moral views that take a stance on aborting fetuses; deontology, for example, or even just 'maximize the QALYs of everyone who is already alive.'
Not very repugnant? Are you saying that you would support impregnating every fertile female, voluntarily or forcibly, if you expect this to maximize QALY? Or do you qualify it by saying "maximize the QALYs of everyone who is already alive"? Then you are back to the definition of when to count fetus as alive, and this is again a Schelling point argument, EA or no EA.
No, but that's not what the repugnant conclusion is. The RC is about the desirability of an end-state - highly populous worlds could be very desirable and yet some methods for achieving such worlds still be morally impermissible. There can be side-constraints, to use Nozick's (?) terminology, or other values at stake. You might find [this article] on population ethics interesting. I think there are many plausible approaches, including a consequentialism-of-rights. I included "maximize the QALYs of everyone who is already alive" because I wanted to show that the argument applied to many different systems, but I do not actually think that system is very plausible. I agree that many arguments can ultimately be reduced to arguments about the moral status of fetuses - in fact I say so in the OP! But here I must disagree. It seems plausible that there is actually a fact of the matter whether one has moral value / how much value one has. I don't think this is particularly controversial, except I guess to some anti-realists.
Doesn't that argument prove too much, namely that murder is acceptable?
Choosing to not create a new person is not the same as killing an existing one.
I agree, but in isolation, in such a population ethics context, it has insufficient elaboration. Some might disagree, at least in theory.
How is this different from a QALY point of view?
At the very least because an already-born person will almost always leave survivors aggrieved and/or materially harmed by the act, while aborted fetuses often do not.
So what about killing hermits?
If they're a truly isolated hermit, that distinction would presumably no longer apply, but the world is pretty short on truly isolated hermits. I think you probably could kill and replace an isolated hermit in a QALY-neutral way (you'd probably need a fairly unhappy person to keep it QALY neutral even,) whereas with social connections in the equation, if you were trying to kill and replace non-hermits in a QALY neutral way, you'd ultimately end up having to do it to everyone.
It's not, and that is why QALY is a too simplistic point of view.
This is not an objection if you happen to accept the "repugnant" conclusion, as I do.

I don't think you did justice to the replaceability argument. If fetuses are replaceable, then the only benefit of banning abortion is that it increases the fertility rate. However, there are far better ways to increase the fertility rate than banning abortion. For example, one could pay people to have children (and maybe give them up for adoption). So your argument is kind of like saying that since we really need farm laborers, we should allow slavery.

Wouldn't the replaceability argument imply that it's ok to kill people who don't have unique skills?
I said that fetuses are replaceable, not that all people are replaceable. OP didn't argue that fetuses weren't replaceable, just that they won't get replaced in practice.
And why aren't people replaceable. It strikes me that they are in fact replaceable in the sense you mean.
So, you can kill a person, create a new person, and raise them to be about equivalent to the original person (on average; this makes a bit more sense if we do it many times so the distribution of people, life outcomes, etc. is similar). I guess your question is: why don't we do this (aside from the cost)? A few reasons come to mind:

1. It would contradict the person's preferences to die more than it contradicts the non-existing people's preferences to never exist.

2. It would cause emotional suffering to people who know the person.

3. If people knew that people were being killed in this way, they would justifiably be scared that they might be killed and work to prevent this.

4. Living in a society requires cooperating with other members of the society by obeying rules such as not killing people (even if you buy murder offsets, which is kind of like what this is). Defection (by murdering people) might temporarily satisfy your values better, but even if this is the case, the usual reasons not to defect in iterated prisoner's dilemma apply here.

5. It would require overriding people's moral heuristics against murder. This is a very strong moral heuristic, and it's not clear that you can do this without causing serious negative consequences.

Anyway, I highly doubt that you are in favor of murder offsets, so you must have your own reasons for this. Perhaps you could look at which ones apply to fetuses and which ones don't.
Correct, I don't favor abortions either.

What credence do you give to the proposition that every sperm is sacred?

Unless your credence is a lot less than one in a billion (which is dubious given overconfidence bias) then this dominates all other concerns

The easiest argument against this is to observe that sperm are not the limiting factor in creating more lives: uterus time and parenting time are.

This is a good point, but it applies to moral uncertainty in general, not just to this particular case
Isn't that kinda the point? It suggests there's probably something wrong with arguments of the form "such-and-such an improbable proposition about moral values would make a huge difference if correct, so we should all drop everything and attend to it". One thing that might be wrong: if moral values are not objective facts about the world but about particular people's (or communities') value systems, then it doesn't make sense to ask "what's the probability that every sperm is sacred?" or "what's the probability that a foetus is about as important morally as an adult human?"; our values are what they are and it's perfectly reasonable to have very little uncertainty about them. It remains reasonable to ask "what's the probability that spermatozoa or foetuses have the properties that I do, in fact, regard as conferring moral significance?" -- but that probability may reasonably be extremely low, e.g. on the grounds that spermatozoa don't have brains.
How would it dominate? What would you do if it were true?
It would dominate (if you buy into the EA maximise QALY assumptions) because the QALY lost would massively outweigh those lost to everything else. If it were true, then I suppose one could freeze sperm so that a future space faring civilisation with room for a larger population can use them. But I'm not suggesting anyone actually bites this bullet.
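The dominance claim above is just an expected-value multiplication. Here is a hedged back-of-the-envelope sketch in Python, where the sperm count, the QALY figure, and the baseline are all rough order-of-magnitude assumptions of mine, not numbers from the thread:

```python
# Hedged back-of-the-envelope: why a tiny credence could still dominate
# an expected-QALY calculation. Every number below is a rough assumption.

credence = 1e-9           # credence that each sperm is morally a person
sperm_per_year = 1e20     # very rough guess at global sperm production/year
qalys_per_life = 70.0     # hypothetical QALYs per potential life

# Expected QALYs at stake under that tiny credence:
expected_at_stake = credence * sperm_per_year * qalys_per_life  # ~7e12

# Loose stand-in for QALYs lived by the whole existing population per year:
baseline = 8e9

# Even at one-in-a-billion credence, the expected stake swamps the baseline.
print(expected_at_stake > baseline)  # True
```

The point of the sketch is only that multiplying a tiny probability by an enormous stake can dominate the total, which is exactly the Pascal's-mugging-style structure the thread is poking at.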

I'm not an EA and I have reservations with the total utilitarian tendencies of the movement, but I think that your argument assumes an extreme form of EA that may not characterize the mainstream position of the movement (though I wouldn't say it's a strawman since some significant part of the movement may endorse it).

A consistent strict total utilitarian must necessarily oppose abortion in the general case, since a strict total utilitarian endorses the "repugnant conclusion" of maximizing the number of humans in existence conditional on their lives being barely worth living, and arguably most fetuses would live lives at least barely worth living if they were born.
A strict total utilitarian might endorse abortion in some special cases, but not in general.

However, typical EAs, as far as I can tell as an outsider watching the movement from the Internet, are not strict total utilitarians. They consider it morally permissible for an agent to give priority to their own selfish preferences over the wellbeing of others.
That's why EAs can consider it morally permissible not to donate all of their disposable income to charity. In fact, EA organizations such as Giving What We Can an... (read more)

Yvain's EA tithing is much like a resource sliced utilitarianism, similar to the time sliced utilitarianism I've often seen around here. There is a natural fit between the two, sliced or not.

a resource sliced utilitarianism, similar to the time sliced utilitarianism I've often seen around here.

Can you explain these terms, please?

I made them up. By time sliced utilitarianism, I mean operating as a utilitarian for some percentage of time. Well, not really, but that's close. In discussions about utilitarianism, some people here call it utilitarianism when for a particular choice, they choose to maximize total utility. Sometimes, they're a utilitarian. Or so they say. That's what I'm referring to by "time sliced utilitarian". If you take a chunk of your money, and say "I want to maximize total utility with this money", that could similarly be called a resource sliced (some chunk of my assets) utilitarian, but it also seems to accurately be EA.
Sure, maybe you think it's not morally obligatory. But EAs who think it's good to give 10% generally think it's better to give 20%, and similarly maybe it is permissible to abort a baby but morally better to not.
And they may also think that it is even better to give 100% minus living expenses, but at the end of the day most of them don't do it.

It’s a nice post with sound argumentation towards a conclusion uncomfortable to many EAs/rationalists. We certainly need more of this. However, this isn't the first time someone has tried to sketch some probability calculus in order to account for moral uncertainty when analysing abortion. In the same way as the previous attempts, yours seems to be surreptitiously sneaking some controversial assumptions into probability estimates and numbers. This is further evidence to me that trying to do the math in cases where we still need more conceptual clarification isn't really as useful as it would seem. Here are a few points you have sneaked in/ignored:

  • You are accepting some sort of Repugnant Conclusion, as mentioned here

  • You are ignoring the real-life circumstances in which abortion takes place. Firstly, putting your kid up for adoption isn't always an option. Additionally, I believe that in practice people are mostly choosing between having an abortion and raising an unwanted child with scarce resources (which probably has negative moral value).

  • You are not accounting for the fact that even if adoption successfully takes place, adopted children have very low quality of life.

Over... (read more)

If you have a utilitarian framework that rejects the "Repugnant Conclusion" without coming to even more repugnant conclusions (of the kill-the-poor variety), I'd love to see it.
Maybe the second paragraph here will help clarify my line of thought.
We are not evaluating ethical systems but intuitions about abortion.
Reaching a repugnant conclusion is not proof that the conclusion is wrong. Dias does make several assumptions along the way (QALY figures look like first-world estimates while most abortions happen in developing countries, no particular psychological impact of producing and giving up a child, etc.) and it's always worthwhile to tweak those assumptions to see how they impact the conclusion, but just getting an answer you don't want isn't a good reason to discount the argument from an EA perspective (if your goal is to justify your own beliefs or discount an opponent's beliefs, then this is actually a fairly effective tool).

Since Dias's argument makes the assumption that adoption is available, you could simply view that as a given for the circumstances under which this decision is correct. Where adoption isn't possible, that row on the table doesn't apply and you're just left with the inconvenience of birthing and raising a child vs. the potential moral value of murdering a human.

From an EA perspective, if raising a child with scarce resources produces negative moral value, then people with scarce resources should be sterilized or otherwise stopped from reproducing, even if they object to it.

Can you provide some sort of source for this? As an adult who was adopted as a baby and has talked with a lot of other adoptees about their experience, your proposition stands in opposition to basically all of my experience. That's not to say that I've never met an adoptee who wishes they'd never been born. I have, but the percentages don't seem so much higher for adoptees than non-adoptees that I'd say that adopted children have very low quality of life in comparison to anyone else.
This paper is already a major update from the long-standing belief that adoptees had lower quality of life, i.e. this is as optimistic as it gets. Given that stress during early childhood has a dramatic impact on an individual's adult life, I think this is something very uncontroversial.
Thank you for linking the study. It seems like most of the adopted children did not have any measurable difference from the natural children. Additionally, the two disorders that were significantly more prevalent (ODD and ADHD) generally aren't considered to cripple people so badly that a life with them should be considered worse than not living at all. It hardly seems like that would justify claiming that "adopted children have very low quality of life" in the context of a debate on the acceptability of abortion. It comes off as though you're arguing that being more susceptible to those disorders is a reason to choose abortion over adoption - that you've got the potential future person's best interests in mind when you decide whether the life should be allowed or not. But to make this argument from this pseudo-utility perspective, you'd need to show that the poor quality of the disordered adoptee's life causes more suffering than the normal quality of the natural children's lives causes enjoyment, and I don't think this study shows that. Or did I misconstrue the general thrust of your argument?
When I made my initial comment I wasn't aware adoptees' quality of life wasn't that bad. I would still argue it should be worse than what could be inferred from that study. Cortisol levels in early childhood are really extremely important and have well-documented long-term effects on one's life. You and your friends might be in the better half, or even be an exception. I can't really say for sure whether reaching the repugnant conclusion is necessarily bad. However, I feel that unless you agree on accepting it as a valid conclusion, you should avoid having your argument independently reach it. That certain ethical systems reach this conclusion is generally regarded as nearly a reductio ad absurdum, therefore something to be avoided. If we end up fixing this issue in these ethical systems then we surely will no longer find acceptable arguments that independently assume/conclude it. Hence, we have some grounds for already finding those arguments unacceptable. I agree we should, ideally, prevent people with scarce resources from reproducing. Except the transition costs for bringing this about are huge, so I don't think we should be moving in that direction right now. It's probably less controversial to just eliminate poverty. Sorry but I don't have the time to continue this discussion right now. I'm sorry also if anything I said caused any sort of negative emotion in you; I can be very curt at times and this might be a sensitive subject.

A slightly different common argument is that while fetuses will eventually be people, they’re not people yet...Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations.

This is a fairly bad case of conflation, made worse by the fact that run-of-the-mill abortion activists make the same error. Caring about the future in the abstract, wanting a future filled with happy people - does not mean that the interests of any specific people who might have been created should be appeased. Else we'd... (read more)

Yeah, I realise they're quite related positions. I often feel guilty for not having had children yet. My guess is that because everything is linear, treating a fetus as having 0.3-person-weight will give you the same answer as treating them as having 1-person-weight with probability 30%.
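That linearity point is easy to check numerically. A minimal sketch with a made-up QALY figure (the 70 is a placeholder of mine, not a number from the post):

```python
# Linearity of expectation: "fetus counts as 0.3 of a person" and
# "fetus counts as a full person with credence 0.3" give the same
# expected value. The 70-QALY figure is a hypothetical placeholder.

qalys_full_life = 70.0

# (a) fractional moral weight, held with certainty
ev_fractional = 0.3 * qalys_full_life

# (b) full moral weight with probability 0.3, zero weight otherwise
credence = 0.3
ev_probabilistic = credence * 1.0 * qalys_full_life + (1 - credence) * 0.0

print(ev_fractional, ev_probabilistic)  # 21.0 21.0
assert ev_fractional == ev_probabilistic
```

The equivalence only holds while everything stays linear; once anything non-linear enters (say, risk aversion over moral stakes), treating credence and weight interchangeably stops being safe.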

The usual question about "permissibility" of abortion is the political one of support for making abortion illegal, i.e., threatening violence to a woman who has an abortion or to people who help a woman have an abortion, thereby attempting to force women to bring to term children who they do not want at the time, or resort to the illegal means of obtaining an abortion that are left when the legal ones are removed.

This is related to the more general question of support for theocracy - of attempting to force people to live by your values.

Trying to preven... (read more)

It seems to me that someone who decides that abortion is wrong could still believe that abortion should be legal, but do pro-life advertising, suggest to pregnant friends that they carry the child to term, and then shame or ostracize their friends that get abortions, just as an animal rights activist might believe that eating animals should be legal but do advertisement and positive and negative social actions to convince their social circles to not eat animals.

Yes I deliberately avoided discussing the law for this reason, and to try to keep down the number of open worm-filled cans.
True. But my guess is that the OP is more the type interested in the question of law, as evidenced by his blithe dismissal of concerns about autonomy.
Your guess would be mistaken! I think I am much more concerned about autonomy than the average EA, which is a large part of the reason I write the only libertarian effective altruist blog I'm aware of. But most EAs do not seem to care about autonomy, hence why I pointed out that autonomy arguments (a classic pro-abortion line) are not available to them. Meta: I think you may have had a negative reaction to my post because you (perhaps reasonably) pattern-matched me as an ideological opponent, which I think is a (perhaps reasonable) mistake. I think some of my other posts, like this one, this one or this one, might be more to your taste.
Would you apply this "argument" to infanticide? How about sati, how about slavery, how about murder? You're completely glossing over the question of whether abortion is moral by assuming it is, then concluding what you've just assumed and pretending this constitutes an argument.
I was commenting on his arguments and his hypothetical Utilitarians, not giving my own arguments. I don't have much of an issue with early term abortions. Having a steak or a hot dog seems more problematic, and I do both, with relish. Literally, with the hot dog. Later term abortions are morally a problem, but so is nationalizing a woman's womb. Moral questions don't always have tidy feel good solutions. All in all, I'll go with letting the woman choose no matter the state of development of the fetus.

This argument is treating "moral worth" like an unknown scientific truth that we are trying to discover, which seems incorrect to me. Moral judgements vary over time (rather than converging on some absolute truth) and are given from society to an individual, or an individual to themselves. The more useful question to ask is not "is a fetus morally significant" (in some absolute sense), but "what are the chances that I will either regret the abortion or be punished for the abortion in the future?". This may give a different answer.

This is a horrendous way to do ethics. It leads to concluding that ethical behavior is whatever I can get away with.
This is just confusing moral anti-realism with egoism. The point is that it makes no sense for anti-realists to worry about the probability of being mistaken about the truth of a moral fact, but it might make sense to worry about the probability of your value system evolving in a direction that causes you to regret prior decisions. Although I suspect that it only makes sense to worry about this when your uncertainty is very high (i.e. you are confused about the issue and are not sure how you will feel after you've had a chance to think it through).
You realize that's an argument against moral anti-realism right?
If it is, it's not a very good one. Regardless, the comment that I replied to above is either confused or disingenuous. It is entirely consistent for anti-realists to agonize over ethical decisions, act with strictly altruistic motivations and all the rest of it.
With a sufficiently long-term view, "what one can get away with" (including considerations of signalling, effects on self, etc.) is not as scary as it sounds. It's basically just near-mode utilitarianism. And to me it's the only ethics that doesn't seem to rely on confused notions like unknown absolute moral worth. Most ethics discussions, even on LW, are more about signalling and bullying people into doing what you want them to do, rather than describing how decision making actually works. I'd prefer that there was somewhere we could stay descriptive instead of prescriptive, since I think there's a lot more insight to be had that way. Separate the game-theoretical negotiation of establishing a society's ethics (which can take place anywhere) from the theoretical basis of how it all works (elucidation of which can only be done by those sufficiently versed in rationality).
Only if you believe there is some universal force that ensures good wins in the end.

As someone who's had a very nuanced view of abortion, as well as a recent EA convert who was thinking about writing about this, I'm glad you wrote this. It's probably a better and more well-constructed post than what I would have been able to put together.

The argument in your post though, seems to assume that we have only two options, either to totally ban or not ban all abortion, when in fact, we can take this much more nuanced approach.

My own, pre-EA views are nuanced to the extent that I view personhood as something that goes from 0 before conception, ... (read more)

Thanks! It took a long time - and was quite stressful. I'm glad you liked it. I actually deliberately avoided discussing legal issues (ban or not ban) because I felt the purely moral issues were complicated enough already. Yeah, if you want to do both you need a joint probability distribution, which seemed a little in-depth for this (already very long!) post.
I had another thought as well. In your calculation, you only factor in the potential person's QALYs. But if we're really dealing with potential people here, what about the potential offspring or descendants of the potential person as well? What I mean by this is, when you kill someone, generally speaking, aren't you also killing all that person's future possible descendants as well? If we care about future people as much as present people, don't we have to account for the arbitrarily high number of possible descendants that anyone could theoretically have? So, wouldn't the actual number of QALYs be more like +/- Infinity, where the sign of the value is based on whether or not the average life has more net happiness than suffering, and as such, is considered worth living? Thus, it seems like the question of abortion can be encompassed in the question of suicide, and whether or not to perpetuate or end life generally.

There are a couple of very good comments on the EA forum cross-post, like this, this and this

Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?

That happened to me.

Congratulations! You are very unusually virtuous.

Another facet of the replaceability argument might come into play in environments where resources are scarce and adoption is not an option. In the case I'm thinking of where there are few enough available resources that the supportable population limit has been reached, it might make more sense for one person to abort an unwanted child to leave resources for a wanted child (who will presumably be treated better and thus be happier than an unwanted child).

Yeah there are many cases where the math I did would produce a different answer. But I think this concern at least remains hypothetical.

It surprises me that you didn't mention at all the harm of overpopulation as a factor. I'm a man who has chosen to not have children because this world can't handle additional people. I'm open to adopting already existing children, though.

What harm? The people warning about such harm have a rather long track record of failed predictions.
Habitat destruction and subsequent loss of ecological diversity (based on a long-term view of several thousand years). Note: I'm ok with abortion, and also with having kids - I endorse the idea of keeping the global population relatively stable until we can figure out how not to screw over the environment while we're busy making more stuff for ourselves. I don't endorse any drastic enforcements to that, because we seem to be doing ok on that front (peak child et al)

I find myself thinking of the differences between a single-shot "Prisoner's Dilemma" and the "Iterated Prisoner's Dilemma"; that is, conclusions about any particular choice may not necessarily apply when dealing with a large number of similar choices. That is, even if, in the listed examples, the QALYs indicate a particular choice is the best one, that does not necessarily imply large-scale social policies should result from that conclusion. Or, put another way, the QALYs of a nation having a strict anti-abortion policy or a more permissive one don't necessarily seem to be closely correlated with the QALYs of an individual facing a choice on the topic.

Assigning abortion a 30% chance of being murder is ridiculous, and depends on a fallacy that is also used to argue for vegetarianism: people who are asked to estimate the probability of something with a low probability will usually pick one that falls in a certain range, regardless of whether it is a good estimate. And other people who look at that number will say "yup, looks like a small probability to me", even if it really isn't small enough. I don't usually see it done with numbers as high as 30%, but even 5% would be far too high. A 5% ch... (read more)

There is a semi-official EA position on immigration

Could you describe what this position is? (or give a link) Thanks!

Full open borders, although Michelle partly disagreed here, and many have concerns about immigration's effects on domestic policy/crime etc.

There's a lot of heavy lifting going on behind the scenes in Will MacAskill's thesis, which I'm glad you linked to.

In particular, it's far from obvious that you can rationally construct an uncertainty table about moral 'facts' in the same way that you could for an empirical uncertainty. Can the objective worth of an action be surprising, independent of its form and consequences? The physical state of the fetus is not in question; the 'surprising discovery' here would be that an abortion has some quality of badness, one which is not implied by a subjectiv... (read more)

The physical state of the fetus is not in question; the 'surprising discovery' here would be that an abortion has some quality of badness, one which is not implied by a subjective observer's desires or a full and complete understanding of the physical system.

I think I have two responses:

  • Firstly, I sometimes am convinced to change my mind on moral issues as a result of purely moral arguments. Something like moral uncertainty seems to be at play.

  • I think there's some danger of equivocation with "full and complete understanding of the physical system." Maybe if I knew the position of each atom, and had all the systems-level understanding that would imply, then there would be no moral uncertainty. But it seems possible that I could have a 'full understanding' in the conventional, more banal sense, and also have moral uncertainty, even if some strong version of physicalism is true.

I think that a correct analysis has to take into account the QALYs associated with the new person whether we consider fetuses to have moral worth or not. If fetuses do have moral worth then the utility cost of abortion is higher than merely the QALYs that disappear, since murder has its own negative utility. On the other hand we also have to take into account the reduced quality of other people's lives due to the existence of the new person: the resources she will consume, the work required to bring her up. Also, giving a child up for adoption carries its own... (read more)

Yep, but it seems plausible these would be outweighed by the value she will create for others, assuming she eventually gets a job, pays taxes, etc. Assuming you think humanity is net positive value, absent some particular reason to think the child will be negative it seems reasonable to assume she will be positive. Yes please!
I mostly agree, however there have to be scenarios in which another person is a net negative, if we're to avoid the repugnant conclusion. Moreover, in the case of abortion we're considering an undesired child which might mean psychological damage both to the child and the unwilling parent. To take an extreme example, consider pregnancy from rape. According to UDT we have to compute the expectation value of the utility function over the whole "a priori" Tegmark IV multiverse (without conditioning on observations). A natural way to model this is considering the Solomonoff measure on the space of infinite sequences of bits (each such sequence is a "universe") . Thus the expectation value is an integral over all such sequences. It is natural to require the utility function to be bounded in order for the integral to converge (avoiding Pascal's mugger). Since each universe enters the integral together with its time translated versions, the resulting time asymptotics is 2^{-Kolmogorov complexity of t} which decays only slightly faster than 1/t. This result doesn't depend on the details of the time discount in the "bare" utility function: it is a universality result. See also this.

The argument doesn't understand what the moral uncertainty is over; it's taking moral uncertainty over whether fetuses are people in the standard quasi-deontological framework and trying to translate it into a total utilitarian framework, which winds up with fairly silly math. (What could the 70% possibly refer to? Not the value of the future person's life years; nobody disputes that once a person is alive, their life has normal, 100% value.)