The following is an excerpt from some comments I wrote to Will MacAskill about a pre-publication draft of What We Owe the Future. It is in response to the chapter on population ethics.

Chapter 8 presented some interesting ideas and did so clearly; I learned a lot from it.

That said, I couldn’t shake the feeling that there was something bizarre about the entire enterprise of trying to rate and rank different worlds and populations. I wonder if the attempt is misguided, and if that’s where some of the paradoxes come from.

When I encounter questions like “is a world where we add X many people with Y level of happiness better or worse?” or “if we flatten the happiness of a population to its average, is that better or worse?”—my reaction is to reject the question.

First, I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds. Second, if I consider realistic, analogous scenarios, there are always major considerations that guide my choices other than an abstract, top-down decision about overall world-values.

For instance, if I choose to bring one more person into the world, by having a child (which, incidentally, we just did!), that decision is primarily about what kind of life I want to have, and what commitments I am willing to make, rather than about whether I think the world, in the abstract, is better or not with one more person in it.

Similarly, if I were to consider whether I should make the lives of some people worse, in order to make the lives of some less-well-off people better, my first thought is: by what means, and what right do I have to do so? If it were by force or conquest, I would reject the idea, not necessarily because of the end, but because I don’t believe that the ends justify the means.

There seems to be an implicit framework to a lot of this along the lines of: “in order to figure out what to do, we need to first decide which worlds are better than which other worlds, and then we can work towards better worlds or avoiding worse worlds.”

This is fairly abstract, centralized, and top-down. World-states are assigned value without considering: to whom, and for what? The world-states are presumed to be universal, the same for everyone. And it provides no guidance about what means are acceptable to work towards world-states.

An approach that makes more sense to me is something like: “The goal of ethics is to guide action. But actions are taken by individuals, who are ultimately sovereign entities. Further, they have differing goals and even unique perspectives and preferences. Ethics should help individuals decide what goals they want to pursue, and should give guidance for how they do so, including principles for how they interact with others in society. This can ultimately include concepts of what kind of society and world we want to live in, but these world-level values must be built bottom-up, grounded in the values and preferences of individuals. Ultimately, world-states must be understood as an emergent property of individuals pursuing their own life-courses, rather than something that we can always evaluate top-down.”

I wonder if, in that framework, a lot of the paradoxes in the book would dissolve. (Although perhaps, of course, new ones would be created!) Rather than asking whether a world-state is desirable or not, we would consider the path by which it came about. Was it the result of a population of individuals pursuing good (if not convergent) goals, according to good principles (like honesty and integrity), in the context of good laws and institutions that respect rights and prohibit oppression? If so, then how can anyone say that a different world-state would have been better, especially without explaining how it might have come about?

I’m not sure that this alternate framework is compatible with EA—indeed, it seems perhaps not even compatible with altruism as such. It’s more of an individualist / enlightened-egoism framework, and I admit that it represents my personal biases and background. It also may be full of holes and problems itself—but I hope it’s useful for you to consider it, if only to throw light on some implicit assumptions.

Incidentally, aside from all this, my intuition about the Repugnant Conclusion is that Non-Anti-Egalitarianism is wrong. The very reason that the Conclusion is repugnant is the idea that there’s some nonlinearity to happiness: a single thriving, flourishing life is better than the same amount of happiness spread thin over many lives. But if that’s the case, then it’s wrong to average out a more-happy population with a less-happy population. I suppose this makes me an anti-egalitarian, which is OK with me. (But again, I prefer to analyze this in terms of the path to the outcome and how it relates to the choices and preferences of the individuals involved.)
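One way to formalize that intuition (a quick sketch of my own, not anything from the book): suppose the value of a life is a convex function $v$ of its happiness $h$, so that concentrated happiness counts for more than the same total spread thin. Then Jensen's inequality gives

$$\sum_{i=1}^{n} v(h_i) \;\ge\; n\, v\!\left(\frac{1}{n}\sum_{i=1}^{n} h_i\right),$$

that is, flattening everyone's happiness to the average can only lower (or at best preserve) total value, which is exactly the anti-egalitarian verdict.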

39 comments

People love to have their cake and eat it too. They want to maintain that they have no preferences about how the future of the universe turns out (and therefore can't be called out on any particular of those preferences), and yet also spend resources affecting the future. As my tone suggests, I think this is wrong, and that arguments for such a position are rationalizations.

Why rationalize? To defend the way you currently make decisions against abstract arguments that you should change how you make decisions. But just because people use rationalizations to oppose those abstract arguments doesn't mean the abstract arguments are right.

I think the assumption that there is one correct population ethics is wrong, and that it's totally fine for each person to have different preferences about the future of the universe just like they have preferences about what ice cream is best, and it's fine if their preferences don't follow simple rules (because human preferences are complicated). But this is a somewhat unpalatable bullet to bite for many, and I think it isn't really the intuitive argument to hit on to defend yourself (realism/universalism is intuitive).

I don't understand which views you are attributing to me, which to population ethicists, and which to people in general? Do you think I am claiming not to have preferences about the future? (That's not right.) Or what exactly? Sorry, I'm confused.

I am accusing the first half of your post of being straight-up bad and agreeing with parts of the second half. To me, it reads like you threw up whatever objections came to hand, making it seem like first you decided to defend your current way of making decisions, and only second did you start listing arguments.

But you did claim to have your own preferences in the second half. My nitpick would be that you pit this against altruism - but instead you should be following something like Egan's law ("It all adds up to normality"). There's stuff we call altruistic in the real world, and also in the real world people have their own preferences. Egan's law says that you should not take this to mean that the stuff we call altruistic is a lie and the world is actually strange. Instead, the stuff we call altruistic is an expression and natural consequence of peoples' own values.

I'm a bit confused about what exactly you mean, and if I attribute to you a view that you do not hold, please correct me.

I think the assumption that there is one correct population ethics is wrong, and that it's totally fine for each person to have different preferences about the future of the universe just like they have preferences about what ice cream is best

This kind of argument has always puzzled me. Your ethical principles are axioms, you define them to be correct, and this should compel you to believe that everybody else's ethics, insofar as they violate those axioms, are wrong. This is where the "objectivity" comes from. It doesn't matter what other people's ethics are, my ethical principles are objectively the way they are, and that is all the objectivity I need.

Imagine there were a group of people who used a set of axioms for counting (Natural Numbers) that violated the Peano axioms in some straightforward way, such that they came to a different conclusion about how much 5+3 is. What do you think the significance of that should be for your mathematical understanding? My guess is "those people are wrong, I don't care what they believe. I don't want to needlessly offend them, but that doesn't change anything about how I view the world, or how we should construct our technological devices."

Likewise, if a deontologist says "Human challenge trials for covid are wrong, because [deontological reason]", my reaction to that (I'm a utilitarian) is pretty much the same.

I understand that there are different kinds of people with vastly different preferences for what we should try to optimize for (or whether we should try to optimize for anything at all), but why should that stop me from being persuaded by arguments that honor the axioms I believe in, or why should I consider arguments that rely on axioms I reject?

I realize I'll never be able to change a deontologist's mind using utilitarian arguments, and that's fine. When the longtermists use utilitarian arguments to argue in favor of longtermism, they assume that the recipient is already a utilitarian, or at least that he can be persuaded to become one.

I'm less firm than that, but basically: yes, replace "one correct" with "one objectively correct."

Good points. People tend to confuse value pluralism or relativism with open-mindedness.

Basically this. I think that the moral anti-realists are right and there's no single correct morality, including population ethics. (Corollary: There's no wrong morals except from perspective or for signalling purposes.)

Corollary: There's no wrong morals except from perspective or for signalling purposes

Surely Future-Tuesday-suffering-indifference is wrong? 

(Corollary: There's no wrong morals except from perspective or for signalling purposes.)

Do you consider perspective something experiential or is it conceptual? If the former, is there a shared perspective of sentient life in some respects? E.g. "suffering feels bad".

I consider it experiential, but I was talking about it in a "these are the true or objective moral values, and all others are false" fashion.

It sounds like you are rejecting utilitarianism and consequentialism, not just population ethics.

Maybe! I am not a utilitarian. And while there are some things that appeal to me about consequentialism, I don't think I'm a consequentialist either.

It sounds like your story is similar to the one that Bernard Williams would tell.

Williams was in critical dialog with Peter Singer and Derek Parfit for much of his career.

This led to a book: Philosophy as a Humanistic Discipline.

If you're curious:

In case you hadn't seen it, there's a post on the EA forum which argues that if you both accept utilitarianism and try to resist scope insensitivity, there's no way to escape stuff like the Repugnant Conclusion.

I hope the reasoning is clear enough from this sketch. If you are committed to the scope of utility mattering, such that you cannot just declare additional utility de facto irrelevant past a certain point, then there is no way for you to formulate a moral theory that can avoid being swamped by utility comparisons. Once the utility stakes get large enough—and, when considering the scale of human or animal suffering or the size of the future, the utility stakes really are quite large—all other factors become essentially irrelevant, supplying no relevant information for our evaluation of actions or outcomes.

The post even includes a recipe for how to construct new paradoxes:

Indeed, in section five Cowen comes close to suggesting a quasi-algorithmic procedure for generating challenges to utilitarianism.[9] You just need a sum over a large number of individually-imperceptible epsilons somewhere in your example, and everything else falls into place. The epsilons can represent tiny amounts of pleasure, or pain, or probability, or something else; the large number can be extended in time, or space, or state-space, or across possible worlds; it can be a one-shot or repeated game. It doesn’t matter. You just need some Σ ε and you can generate a new absurdity: you start with an obvious choice between two options, then keep adding additional epsilons to the worse option until either utility vanishes in importance or utility dominates everything else.

In other words, Cowen can just keep generating more and more absurd examples, and there is no principled way for you to say ‘this far but no further’. As Cowen puts it:

Once values are treated as commensurable, one value may swamp all others in importance and trump their effects… The possibility of value dictatorship, when we must weigh conflicting ends, stands as a fundamental difficulty.

One way to get out of this predicament, at the cost of different problems, is to accept incommensurable values:

If we accept a certain amount of incommensurability between our values, and thus a certain amount of non-systematicity in our ethics, we can avoid the absurdities directly. Different values are just valuable in different ways, and they are not systematically comparable: while sometimes the choices between different values are obvious, often we just have to respond to trade-offs between values with context-specific judgment. On these views, as we add more and more utility to option B, eventually we reach a point where the different goods in A and B are incommensurable and the trade-off is systematically undecidable; as such, we can avoid the problem of utility swallowing all other considerations without arbitrarily declaring it unimportant past a certain point.
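To make the Σ ε recipe concrete, here is a toy numerical sketch of my own (the utility figures and population size are arbitrary placeholders, not anything from Cowen or the linked post):

```python
# Toy version of the "sum of imperceptible epsilons" recipe:
# start with an obviously better option A, then credit the worse
# option B with a tiny benefit to each of a huge number of people.

EPSILON = 1e-6        # an individually imperceptible benefit
utility_A = 100.0     # one obviously good outcome (arbitrary units)
utility_B = 1.0       # a clearly worse outcome to start from
N = 200_000_000       # number of people each receiving one epsilon

total_B = utility_B + N * EPSILON
print(f"A = {utility_A}, B plus {N:,} epsilons = {total_B:.2f}")
print("B now dominates" if total_B > utility_A else "A still wins")
```

Aggregate utility now prefers B even though no individual ever noticed a difference; that is the swamping the quoted passage describes.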

I think you've mistitled this.  "Against universal ethics" or "against objective ethics" might be closer to the position you seem to espouse.  If you don't believe that people are ever morally homogeneous based on definable characteristics, and/or you don't believe there is any non-indexical preference to the state of the universe, you're going to have trouble engaging with most moral philosophers.  "What you like is good for you, what I like is good for me, there's no aggregation or comparison possible" doesn't get papers published.  Or, once you go Crowley, you never go back.  

I may have mistitled it.

But I don't think I'm against universal or objective ethics? Depends how you define those terms.

I'm not sure what it means for people to be “morally homogeneous” or what “non-indexical preference” means.

For instance, if I choose to bring one more person into the world, by having a child (which, incidentally, we just did!), that decision is primarily about what kind of life I want to have, and what commitments I am willing to make, rather than about whether I think the world, in the abstract, is better or not with one more person in it.

Would you take into account the wellbeing of the child you are choosing to have? 

For example, if you knew the child was going to have a devastating illness such that they would live a life full of intense suffering, would you take that into account and perhaps rethink having that child (for the child's sake)? If you think this is a relevant consideration you're essentially engaging in population ethics.

Yes, of course I would take that into account. For the child's sake as well as mine.

I don't see how that constitutes engaging in population ethics. Unless you are using a very broad definition of “population ethics.” (Maybe the definition I was working with was too narrow.)

Population ethics is the philosophical study of the ethical problems arising when our actions affect who is born and how many people are born in the future (see Wikipedia here).

In the example I gave we are judging the ethical permissibility of a change in population (one extra life) taking into account the welfare of the new person. You implied it can sometimes not be ethically permissible to bring into existence an additional life, if that life is of a poor enough quality. This quite clearly seems to me to be engaging in population ethics.

You say:

World-states are assigned value without considering: to whom, and for what?

This isn't usually true. In population ethics it is most common to assign value to world states based on the wellbeing of the individuals that inhabit that world state. So in the case I gave, one might say World A, which has a single tortured life with unimaginable suffering, is worse than World B, which has no lives and therefore no suffering at all. This isn't very abstract - it's what we already implicitly agreed on in our previous comments.

Thanks. So, of course I think we should discuss the ethics of actions that affect who is born. If that's all that population ethics is, then it's hard to be against it. But all the discussion of it I've seen makes some fundamental assumptions that I am questioning.

What assumptions are these specifically?

You're welcome to lay out your own theory of population ethics. The more I read about it, though, the more it seems like a minefield where no theory evades counterintuitive/repugnant results.

Agree with this. I had a related comment here; the brief summary of it would be that 

  • it can be possible to set up population ethical dilemmas in which we reason according to an extremely decontextualized frame and only according to a couple of axioms that have been defined for that frame
  • however, if we want to learn something from this that's relevant for real life, we need to re-contextualize the conclusions of that reasoning and indicate how they interact with all the other considerations we might care about
  • without doing that recontextualization, it's impossible to know whether the results of population-ethical experiments are any more weighty or relevant for anything than the results of much more obviously absurd decontextualized dilemmas, such as "other things being equal, is it better for objects to be green or red". 
  • at the same time, there's a tendency for some people to make a move that goes something like "but there might be situations where you really do have to make the population-ethical choice, and if you refuse to consider the question in isolation you can't decide in cases where population levels are significantly influenced by your decision". which is a reasonable argument for trying out the decontextualization. but often there's a sleight of hand where the results of this decontextualized analysis are then assumed to be generally significant on their own, glossing over the fact that they are really only useful if you also do the re-contextualization step.

Simply put: Utilitarianism is wrong. 

Slightly more wordy: The adherents of utilitarianism take as an article of faith that only these final world states matter. Just about everyone acts as if they don't actually believe only final world states matter. These disagreements you are having here are pretty much entirely with said idea. Basically the entirety of humanity agrees with you.

Technically, that means utilitarians should change that view since humans actually value (and thus have utility for) certain paths, but they don't and they keep writing things that are just wrong even by the coherent version of their own standards. Utilitarians are thus wrong.

Your speculation that your professed beliefs are incompatible with altruism is also wrong, though it's trickier to show. Improving the world as you see it, by your own lights, is in fact the very basis of a large portion of altruism. There's nothing about altruism that is only about world-states.

For instance, suppose someone likes the idea of orphans being taken care of; they are against taxing people to pay for orphanages, but instead found some (good ones) with their own money for that purpose. That is both an improvement to the world by their standards and clearly altruistic.

The adherents of utilitarianism take as an article of faith that only these final world states matter. Just about everyone acts as if they don't actually believe only final world states matter.

What are these "final world states" you're talking about? They're not mentioned in the OP, and utilitarians typically don't privilege a particular point in time as being more important than another. The view and moral treatment of time is actually a defining, unusual feature of longtermism. Longtermism typically views time as linear, and the long-term material future as being valuable, perhaps with some non-hyperbolic discounting function, and influenceable to some degree through individual action.

Note that most people act like only the near-term matters. Buddhists claim something like "there is only this moment." Many religious folks who believe in an afterlife act as if only "eternity" matters.

Utilitarians should change that view since humans actually value (and thus have utility for) certain paths

In addition to its assumptions about what present-day humans value, this statement takes for granted the idea that present-day humans are the only group whose values matter. The longtermist EA-flavored utilitarianism OP is addressing rejects the second claim, holding instead that the values and utilities of future generations should also be taken into account in our decisions. Since we can't ask them, but they will exist, we need some way of modeling what they'd want us, their ancestors, to do, if they were able to give us input.

Note that this is something we do all the time. Parents make decisions for the good of their children based on the expected future utility of that child, even prior to conception. They commonly think about it in exactly these terms. People do this for "future generations" on the level of society.

Longtermist utilitarianism is just about doing this more systematically, thinking farther into the future and about larger and more diverse groups of future people/sentient beings than is conventional. This is obviously an important difference, but the philosophy OP is expounding also has its radical elements. There's no getting away from minority opinions in philosophy, simply because worldviews are extremely diverse.

Final world states is not the terminology used originally, but it's what the discussion is talking about. The complaint is that utilitarianism is only concerned with the state of the world not how it gets there. 'Final world states' are the heart of choosing between better or worse worlds. It's obvious that's both what is being talked about by the original post, and what my reply is referencing. I suspect that you have mistaken what the word 'final' means. Nothing in what I said is about some 'privileged time'. I didn't reference time at all.  'Final' before 'world states' is clearly about paths versus destinations. 

Even if you didn't get that initially, the chosen example is clearly about the path mattering, and not just the state of the world. What I referred to wasn't even vaguely critiquing or opposed to the 'long-termism' you bring up with that statement. I have critiques of 'long-termism' but those are completely separate, and not brought up.

My critique was of Utilitarianism itself, not of any particular sub-variety of it. (And you could easily be a, let's call it 'pathist long-termist', where you care about the paths that will be available to the future rather than a utilitarian calculus.) My critique was effectively that Utilitarians need to pay more attention to the fact that people care a lot about how we get there, and it is directly counter to people's utility functions to ignore that, rendering Utilitarianism as actually practiced (rather than the concept itself) not in accordance with its own values. This isn't the primary reason I subscribe to other ethical and moral systems, but it is a significant problem with how people actually practice it.

You also assume another non-existent time dimension when you try to critique my use of the words 'humans actually value.' This phrase sets up a broad truth about humans, and their nature, not a time-dependent one. Past people cared. Current people care. Future people will care about how we got/get/and will get there in general. You aren't being a good long-termist if you assume away the nature of future people to care about it (unless you are proposing altering them dramatically to get them to not care, which is an ethical and moral nightmare in the making).

I care about what happened in the past, and how we got here. I care about how we are now, and where we choose to go along the path. I care about the path we will take. So does everyone else. Even your narrative about long-termism is about what path we should take toward the future. It's possible you got there by some utility calculation...but the chance is only slightly above nothing.

The OP is calling out trying to control people, and preventing them from taking their own paths. Nowhere does the OP say that they don't care how it turns out...because they do care. They want people to be free. That is their value here.

Side note: I think this is the first time I've personally seen the karma and agreement numbers notably diverge. In a number of ways, it speaks well of this place that an argument against its predominant moral system can still get noticeably positive karma.

This is a minor stylistic point, not a substantive critique, but I don’t agree that your choice of wording was clear and obvious. I think it’s widely agreed that defining your terms, using quotes, giving examples, and writing with a baseline assumption that transparency of meaning is an illusion are key to successful communication.

When you don’t do these things, and then use language like “clearly,” “obvious,” “you mistook the meaning of [word],” it’s a little offputting. NBD, just a piece of feedback you can consider, or not.

I have written a fairly lengthy reply about why the approach I took was necessary. I tend to explain things in detail. My posts would be even longer and thus much harder to read and understand if I was any more specific in my explanations. You could expand every paragraph to something this long. People don't want that level of detail. Here it is.

When you go off on completely unrelated subjects because you misunderstood what a word means in context, am I not supposed to point out that you misunderstood it? Do you not think it is worth pointing out that the entire reply was based on a false premise of what I said? Just about every point in your post was based on a clear misreading of what I wrote, and you needed to read it more carefully.

Words have specific meanings. It is 'clear' that 'final world states' is 'what states the world ends up being in as a result'. Writing can be, and often is, ambiguous, but that was not. I could literally write paragraphs to explain each and every term I used, but that would only make things less clear! It would also be a never-ending task.

In this case 'final' = 'resultant', so 'resultant world states' is clear and just as short, but very awkward. It is important informational content that it was clear, because that determines whether I should try to write that particular point differently in the future, and/or whether you need to interpret things more carefully. While in other contexts final has a relation to time, that is only because of its role in denoting the last thing in a list or sequence (and people often use chronological lists).

It is similar with the word obvious. I am making a strong, and true, statement as to what the content of the OP's post is by using the word 'obvious'. This points out that you should compare that point of my statement to the meaning of the original post, and see that those pieces are the same. This is not a matter of 'maybe this is what was meant' or 'there are multiple interpretations'. Those words are important parts of the message, and removing them would require leaving out large parts of what I was saying.

I do not act like what I write is automatically transparent as to meaning, but as it was, I was very precise and my meaning was clear. There are reasons other than clarity of the writing for whether someone will or won't understand, but I can't control those parts.

People don't have to like that I was sending these messages, of course. That 'of course' is an important part of what I'm saying too. In this case, that I am acknowledging a generally known fact that people can and will dislike the messages I send, at least sometimes.

Around these parts, some people like to talk about levels of epistemic belief. The words 'clear' and 'obvious' clearly and obviously convey that my level of epistemic certainty here is very high. It is 'epistemic certainty' here because of just how high it is. It does not denote complete certainty like I have for 2 + 2 = 4, but more like my certainty that I am not currently conversing with an AI.

If I was trying to persuade rather than inform, I would not send these messages, and I would not say these words. Then I would pay more attention to guessing what tone people would read into the fact I was saying a certain message rather than the content of the message itself, and I might avoid phrases like 'clear', 'obvious', and so on. Their clarity would be a point against them.

I did include an example. It was one of the five paragraphs, no shorter than the others. That whole thing about someone caring about orphans, wanting them taken care of, being against taking taxes from people to do it, and founding orphanages with their own money? There is no interpretation of things where it isn't an example of both paths and states mattering, and about how altruism is clearly compatible with that.

Your part about using quotes is genuinely ambiguous. Are you trying to claim I should have quoted random famous people about my argument, which is wholly unnecessary, or that you wish for me to scour the internet for quotes by Utilitarians proving they think this way, which is an impossibly large endeavor for this sort of post rather than a research paper (when even that wouldn't necessarily be telling)? Or quote the OP, even though I was responding to the whole thing?

Utilitarianism is pretty broad! There are utilitarians who care about the paths taken to reach an outcome.

That's what I was talking about when I said that it meant utilitarians should 'technically change that view' and that it was not doing so that made Utilitarianism incoherent. Do I actually know what percentage of utilitarians explicitly include the path in the calculation? No. 

I could be wrong that they generally don't do that. Whether it is true or not, it is an intuition that both the OP and I share about Utilitarianism. I don't think I've ever seen a utilitarian argument that did care about the paths. My impression is that explicitly including paths is deontological or virtue-ethical in practice. It is the one premise of my argument that could be entirely false. Would you say that a significant portion of utilitarians actually care about the path rather than the results?

Would you say you are one? Assuming you are familiar enough with it and have the time, I would be interested in how you formulate the basics of Utilitarianism in a path-dependent manner, and in whether you would say that differs in actual meaning from how it is usually formulated (assuming you agree that path-dependent isn't the usual form).

Would you say you are one?

Yes, I consider it very likely correct to care about paths. I don’t care what percentage of utilitarians have which kinds of utilitarian views because the most common views have huge problems and are not likely to be right. There isn’t that much that utilitarians have in common other than the general concept of maximizing aggregate utility (that is, maximizing some aggregate of some kind of utility). There are disagreements over what the utility is of (it doesn’t have to be world states), what the maximization is over (doesn’t have to be actions), how the aggregation is done (doesn’t have to be a sum or an average or even to use any cardinal information, and don’t forget negative utilitarianism fits in here too), which utilities are aggregated (doesn’t have to be people’s own preference utilities, nor does it have to be happiness, nor pleasure and suffering, nor does it have to be von Neumann–Morgenstern utilities), or with what weights (if any; and they don’t need to be equal). I find it all pretty confusing. Attempts by some smart people to figure it out in the second half of the 20th century seem to have raised more questions than they have produced answers. I wouldn’t be very surprised if there were people who knew the answers and they were written up somewhere, but if so I haven’t come across that yet.
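For what it's worth, here is a rough schematic of the shared skeleton as I understand it (my own notation, every symbol a placeholder rather than anyone's official formulation): pick the thing $x$ being evaluated so as to maximize some aggregate

$$W(x) \;=\; A\big(u_1(x), \dots, u_n(x)\big),$$

where every slot is contested: what the $u_i$ measure (preferences, happiness, vNM utilities, something else), what $x$ ranges over (actions, rules, world-histories), and what the aggregator $A$ is (a weighted sum $\sum_i w_i u_i$, an average $\tfrac{1}{n}\sum_i u_i$, a purely ordinal rule, a negative-utilitarian variant, and so on).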

You probably don't agree with this, but if I understand what you're saying, utilitarians don't really agree on anything or really have shared categories? Since utility is a nearly meaningless word outside of context due to broadness and vagueness, and they don't agree on anything about it, Utilitarianism shouldn't really be considered a thing itself? Just a collection of people who don't really fit into the other paradigms but don't rely on pure intuitions. Or in other words, pre-paradigmatic?

I don’t see “utility” or “utilitarianism” as meaningless or nearly meaningless words. “Utility” often refers to von Neumann–Morgenstern utilities and always refers to some kind of value assigned to something by some agent from some perspective that they have some reason to find sufficiently interesting to think about. And most ethical theories don’t seem utilitarian, even if perhaps it would be possible to frame them in utilitarian terms.

I can't say I'm surprised a utilitarian doesn't realize how vague it sounds? It is jargon taken from a word that simply means the ability to be used widely? Utility is an extreme abstraction, literally unassignable, and entirely based on guessing. You've straightforwardly admitted that it doesn't have an agreed upon basis. Is it happiness? Avoidance of suffering? Fulfillment of the values of agents? Etc.

Utilitarians constantly talk about monetary situations, because that is one place they can actually use it and get results? But there, it's hardly different than ordinary statistics. Utility there is often treated as a simple function of money but with diminishing returns. Looking up the term for the kind of utility you mentioned, it seems to once again only use monetary situations as examples, and sources claimed it was meant for lotteries and gambling.

Utility as a term makes sense there, but it is the only place on your list where there's general agreement on what utility means? That doesn't mean it is a useless term, but it is a very vague one.

Since you claim there isn't agreement on the other aspects of the theories, that makes them more of an artificial category where the adherents don't really agree on anything. The only real connection seems to be wanting to do math on how good things are?

The only real connection seems to be wanting to do math on how good things are?

Yes, to me utilitarian ethical theories do usually seem more interested in formalizing things. That is probably part of their appeal. Moral philosophy is confusing, so people seek to formalize it in the hope of understanding things better (that's the good reason to do it, at least; often the motivation is instead academic, or signaling, or obfuscation). Consider Tyler Cowen's review of Derek Parfit's arguments in On What Matters:

Parfit at great length discusses optimific principles, namely which specifications of rule consequentialism and Kantian obligations can succeed, given strategic behavior, collective action problems, non-linearities, and other tricks of the trade. The Kantian might feel that the turf is already making too many concessions to the consequentialists, but my concern differs. I am frustrated with this very long and very central part of the book, which cries out for formalization or at the very least citations to formalized game theory.

If you’re analyzing a claim such as — “It is wrong to act in some way unless everyone could rationally will it to be true that everyone believes such acts to be morally permitted” (p.20) — words cannot bring you very far, and I write this as a not-very-mathematically-formal economist.

Parfit is operating in the territory of solution concepts and game-theoretic equilibrium refinements, but with nary a nod in their direction. By the end of his lengthy and indeed exhausting discussions, I do not feel I am up to where game theory was in 1990.

I wouldn't be surprised if there are lots of Utilitarians who do not even consider path-dependence. Typical presentations of utility do not mention path dependence and nearly all the toy examples presented for utility calculations do not involve path dependence.

I do think that most, if prompted, would agree that utility can and in practice will depend upon path and therefore so does aggregated utility.

I'm copy-pasting a comment I made on the EA forum version of this post:

Hey, you might enjoy this post ("Population Ethics Without Axiology") I published just two weeks ago – I think it has some similar themes.

What you describe sounds like a more radical version of my post where, in your account, ethics is all about individuals pursuing their personal life goals while being civil towards one another. I think a part of ethics is about that, but those of us who are motivated to dedicate our lives to helping others can still ask "What would it entail to do what's best for others?" – that's where consequentialism ("care morality") comes into play.

I agree with you that ranking populations according to the value they contain, in a sense that's meant to be independent of the preferences of people within that population (how they want the future to go), seems quite strange. Admittedly, I think there's a reason many people, and effective altruists in particular, are interested in coming up with such rankings. Namely, if we're motivated to go beyond "don't be a jerk" and want to dedicate our lives to altruism or even the ambitious goal of "doing the most moral/altruistic thing," we need to form views on welfare tradeoffs and things like whether it's good to bring people into existence who would be grateful to be alive. That said, I think such rankings (at least for population-ethical contexts where the number of people/beings isn't fixed or where it isn't fixed what types of interests/goals a new mind is going to have) always contain a subjective element. I think it's misguided to assume that there's a correct world ranking, an "objective axiology," for population ethics. Even so, individual people may want to form informed views on the matter because there's a sense in which we can't avoid forming opinions on this. (Not forming an opinion just means "anything goes" / "this whole topic isn't important" – which doesn't seem true, either.)

Anyway, I recommend my post for more thoughts!

Your view here sounds a bit like preference presentism:

For instance, if I choose to bring one more person into the world, by having a child (which, incidentally, we just did!), that decision is primarily about what kind of life I want to have, and what commitments I am willing to make, rather than about whether I think the world, in the abstract, is better or not with one more person in it.

Compare:

Apart from comparativists, we have presentists who draw a distinction between presently existing people and non-existing people (Narveson 1973; Heyd 1988); necessitarians who distinguish between people who exist or will exist irrespective of how we act and people whose existence is contingent on our choices (Singer 1993); and actualists who differentiate between people that have existed, exist or who are going to exist in the actual world, on the one hand, and people who haven’t, don’t, and won’t exist, on the other [...]

https://plato.stanford.edu/entries/repugnant-conclusion/#:~:text=apart from,on the other

You would be in good company. Scott Alexander sympathizes with presentism as well, and according to the latter source also Peter Singer (though not according to the SEP quote above). Eliezer Yudkowsky also flirts with it.

If it were by force or conquest, I would reject the idea, not necessarily because of the end, but because I don’t believe that the ends justify the means.


In ideal utilitarianism, you have to tally up all the effects. Doing something by force usually implies a side effect of people getting hurt, and various other bad things. 

(Although, does this mean you are against all re-distributive taxation? Or just against Robin Hood?)

This is an old dilemma (as I suppose you suspect).

Part of the difficulty is the implicit assumption that all possible world-states are achievable. (I'm probably expressing this poorly; please be charitable with me.)

In other words, suppose we decide that state A, resulting from "make the lives of some people worse, in order to make the lives of some less-well-off people better" is better (by some measure) than state B, where we don't.

If the only way to achieve state A is "by force or conquest" (and B doesn't require that), the harm that results from those means must be taken into account in the evaluation of state A. And so, even if the end-state (A) is "better" than the alternative end-state (B), the harm along the path to A makes the integrated goodness of A in fact worse than B.
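A toy arithmetic version of this (the numbers are mine, purely for illustration): suppose the end-state of A is worth 100 and the end-state of B is worth 90, but reaching A by force does 20 units of harm along the way. Then

$$100 - 20 = 80 \;<\; 90,$$

so the path-inclusive evaluation prefers B even though A's end-state, taken on its own, scores higher.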

In yet other words, liberty + human rights may not lead to an optimal world. But the harm of using force and conquest to create a more-optimal outcome may make things worse, overall.

This is an old argument, and none of it is original with me.