When someone complains that utilitarianism[1] leads to the dust speck paradox or the trolley-car problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.

The utility monster is a creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined. Most people consider sacrificing everyone else's small utilities for the benefits of this monster to be repugnant.

Let's suppose the utility monster is a utility monster because it has a more highly-developed brain capable of making finer discriminations, higher-level abstractions, and more associations than all the lesser minds around it. Does that make it less repugnant? (If so, I lose you here. I invite you to post a comment explaining why utility-monster-by-smartness is an exception.) Suppose we have one utility monster and one million others. Everything we do, we do for the one utility monster. Repugnant?

Multiply by nine billion. We now have nine billion utility monsters and 9×10^15 others. Still repugnant?

Yet these same enlightened, democratic societies whose philosophers decry the utility monster give approximately zero weight to the well-being of non-humans. We might try not to drive a species extinct, but when contemplating a new hydroelectric dam, nobody adds up the disutility to all the squirrels in the valley to be flooded.

If you believe the utility monster is a problem with utilitarianism, how do you take into account the well-being of squirrels? How about ants? Worms? Bacteria? You've gone to 10^15 others just with ants.[2] Maybe 10^20 with nematodes.

"But humans are different!" our anti-utilitarian complains. "They're so much more intelligent and emotionally complex than nematodes that it would be repugnant to wipe out all humans to save any number of nematodes."

Well, that's what a real utility monster looks like.

The same people who believe this then turn around and say there's a problem with utilitarianism because (when unpacked into a plausible real-life example) it might kill all the nematodes to save one human. Given their beliefs, they should complain about the opposite "problem": For a sufficient number of nematodes, an instantiation of utilitarianism might say not to kill all the nematodes to save one human.


1. I use the term in a very general way, meaning any action selection system that uses a utility function — which in practice means any rational, deterministic action selection system in which action preferences are well-ordered.

2. This recent attempt to estimate the number of different living beings of different kinds gives some numbers. The web has many pages claiming there are 10^15 ants, but I haven't found a citation of any original source.
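Footnote 1's very general sense of the term can be made concrete in a few lines. This is a hedged sketch, not anything from the post itself: the action names and utility values are hypothetical, chosen only to illustrate deterministic selection over well-ordered preferences.

```python
# Minimal sketch of an action selection system driven by a utility
# function (footnote 1's general sense of "utilitarianism").
# The actions and utility numbers below are hypothetical.

def select_action(actions, utility):
    # Real-valued utilities give a total order on actions (the
    # footnote's "well-ordered" preferences, loosely), and max()
    # makes selection deterministic given a tie-break rule.
    return max(actions, key=utility)

# Hypothetical utilities: the utility-monster scenario in miniature.
utilities = {"feed_monster": 1000.0, "feed_the_million": 10.0}
chosen = select_action(list(utilities), utilities.get)
```

Any system whose preferences can be ranked this way counts under the footnote's definition, whatever the utility numbers happen to encode.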


So here's a question for anyone who thinks the concept of a utility monster is coherent and/or plausible:

The utility monster allegedly derives more utility from whatever than whoever else, or doesn't experience any diminishing returns, etc. etc.

Those are all facts about the utility monster's utility function.

But why should that affect the value of the utility monster's term in my utility function?

In other words: granting that the utility monster experiences arbitrarily large amounts of utility (and granting the even more problematic thesis that experienced utility is intersubjectively comparable)... why should I care?


I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.

But then the monster isn't a problem, because if there were in fact such an entity, I would indeed actually want to sacrifice a billion other humans to make the monster happy. This is true by definition.

I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.

That's easy. For most people (in general; I don't mean here on lesswrong), this just describes one's family (and/or close friends)... not to mention themselves!

I mean, I don't know exactly how many random people's lives, in e.g. Indonesia, would have to be at stake for me to sacrifice my mother's life to save them, but it'd be more than one. Maybe a lot more.

A billion? I don't know that I'd go that far. But some people might.

Well, whether you really want (in the extrapolated volition sense) to sacrifice 10^{whatever} lives to save your family is a whole big calculation involving interpersonal morality, bounded rationality/virtue ethics, TDT/game theory, etc. The point that I was echoing is that if you really would want to make that trade, there's nothing monstery about your family - you just {love them that much}/{love others that little}.

The utility monster is an objection to the social morality theory called "utilitarianism"; the utility monster becomes gibberish when phrased as an objection to "any set of preferences can in principle be completely specified by a utility function, to be handed to a generic decision process, resulting in optimal decision making". Like, "Oh no, oh no, I found this monster, and it is soooo soooo good to feed it humans! It is even more better every time I feed it another human! Woe is me! Goooood!!".

Now, the utility monster makes perfect sense as an objection to humans actually making decisions purely using explicit quantitative expected utility calculations. But that doesn't say anything about utility as a formalized version of "good". Rather, that's some sort of comment about the capricious quality of bounded reasoning under uncertainty - you always worry about strong conclusions that make you do particularly effective things, because a mistake in your calculations means you are doing particularly effective bad things. One particular sort of dangerously strong conclusion would be concluding that, e.g., the marginal utility of {UMonster eating an additional human} is larger than and grows faster than the marginal utility of {another human gets eaten alive}.

To continue the argument: It could be a problem if you'd want to protect the utility monster once it exists, but would prefer that the utility monster not exist. For example it could be an innocent being who experiences unimaginable suffering when not given five dollars.

Our oldest utility monster is eight years old. (Did you have this example specifically in mind? Seems to fit the description very well.)

If you prefer a happy monster to no monster and no monster to a sad monster, then you prefer a happy monster to a sad monster, and TsviBT's point applies. Whereas if you prefer no monster to a happy monster to a sad monster, why don't you kill the monster?

...sometimes I wonder about the people who find it unintuitive to consider that "Killing X, once X is alive and asking not to be killed" and "Preferring that X not be born, if we have that option in advance" could have widely different utility to me. The converse perspective implies that we should either (1) be spawning as many babies as possible, as fast as possible, or (2) anyone who disagrees with 1 should go on a murder spree, or at best consider such murder sprees ethically unimportant. After all, not spawning babies as fast as possible is as bad as murdering that many existent adults, apparently.

The crucial question is how we want to value the creation of new sentience (aka population ethics). It has been proven impossible to come up with intuitive solutions to it, i.e. solutions that fit some seemingly very conservative adequacy conditions.

The view you outline as an alternative to total hedonistic utilitarianism is often left underdetermined, which hides some underlying difficulties.

In Practical Ethics, Peter Singer advocated a position he called "prior-existence preference utilitarianism". He considered it wrong to kill existing people, but not wrong to not create new people as long as their lives would be worth living. This position is awkward because it leaves you no way of saying that a very happy life (one where almost all preferences are going to be fulfilled) is better than a merely decent life that is worth living. If it were better, and if the latter is equal to non-creation, then denying that the creation of the former life is preferable over non-existence would lead to intransitivity.

If I prefer, but only to a very tiny degree, having a child with a decent life over having one with an awesome life, would it be better if I had the child with the dece... (read more)

In my view population ethics failed at the start by making a false assumption, namely "Personal identity does not matter, all that matters is the total amount of whatever makes life worth living (i.e. utility)." I believe this assumption is wrong. Derek Parfit first made this assumption when discussing the Nonidentity Problem. He believed it was the most plausible solution, but was disturbed by its other implications, like the Repugnant Conclusion. His work is what spawned most of the further debate on population ethics and its disturbing conclusions.

After meditating on the Nonidentity Problem for a while I realized Parfit's proposed solution had a major problem. In the traditional form of the NIP you are given a choice between two individuals who have different capabilities for utility generation (one is injured in utero, the other is not). However, there is another way to change the amount of utility someone gets out of life besides increasing or reducing their capabilities. You could also change the content of their preferences, so that a person has more ambitious preferences that are harder to achieve. I reframed the NIP as giving a choice between having two children with equal capabilities (intelligence, able-bodiedness, etc.) but with different ambitions: one wanted to be a great scientist or artist, while the other just wanted to do heroin all day. It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.

In my view the primary thing that determines whether someone's creation is good or not is their identity (i.e., what sort of preferences they have, their personality, etc.). What constitutes someone having a "morally right" identity is really complicated and fragile, but generally it means that they have the sort of rich, complex values that humans have, and that they are (in certain ways) unique and different from the people who have
Oh? Yes, it is true that it is better to have the ambitious child. I agree, and I think most others will too. But I don't think that's because of some fundamental preference, but rather because the ambitious child has a far greater chance of causing good in the world. (Say, becoming an artist and painting masterpieces that will be admired for centuries to come, or becoming a scientist and developing our understanding of the fundamental nature of the universe.) The druggie will not provide these positive externalities, and may even provide negative ones. (Say, turning to crime in order to feed his addiction, as some druggies do.) I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.
I disagree. I have come to realize that morality isn't just about maximizing utility; it's also about protecting fragile human* values. Creating creatures that have values fundamentally opposed to those values, such as paperclip maximizers, orgasmium, or sociopaths, seems a morally wrong thing to do to me. This was driven home to me by a common criticism of utilitarianism, namely that it advocates that, if possible, we should kill everyone and replace them with creatures whose preferences are easier to satisfy, or who are easier to make happy. I believe this is a bug, not a feature, and that valuing the identity of created creatures is the solution. Eliezer's essays on the fragility and complexity of human values also helped me realize this.

*When I say "human" I mean any creature with a sufficiently humanlike mind, regardless of whether it is biologically human or not.
Perhaps I was unclear. I used utilitarian terminology, but utilitarianism is not necessary for my point. To restate: If I could choose between an ambitious child being born, or a druggie child being born, I (and you, according to your above comment) would choose the ambitious child, all else being equal. Why would we choose that? Well, there are several possible explanations, including the one which you gave. However, yours was complicated and far from trivially true, and so I point out that such massive suppositions are unnecessary, as we already have a certain well known human desire to explain that choice. (Call that desire what you will, perhaps "altruism", or "bettering the world". It's the desire that on the margin, more art, knowledge, and other things-considered-valuable-to-us are created.)
I agree that externalities are the first reason that comes to mind. But when I try to modify the thought experiments to control for this, my preferences remain the same. For instance, if I imagine someone with rather introverted ambitions (say, someone who wants to collect and modify cars, or beat lots of difficult videogames) versus someone with unambitious but harmless preferences (such as looking at porn all day), I still prefer the ambitious person. Incidentally, I'm not saying it's bad that there are people who want to look at porn (or who want to use recreational drugs, for that matter), I'm just saying it's bad that there are people who want to devote their entire life to it and do nothing more ambitious.

To test my ideals even further (and to make sure my intuitions were not biased by the fact that porn and drugs are low-status activities) I imagined two people who both wanted to just look at porn all day. The difference was that one wanted to compare and contrast the porn they watched and develop theories about the patterns he found, while the other just wanted to passively absorb it without really thinking. I preferred the Intellectual Porn Watcher to the Absorber.

I think the strongest reason to value certain identities over others is that otherwise, the most efficient way to create things-considered-valuable-to-us is to change who "us" is. Once we get good at AI or genetics, kill everyone and replace them with creatures who value things that are easier to manufacture than art and knowledge. Or, if we have an aversion to killing, just sterilize everyone and make sure all future creatures born are of this type. The fact that this seems absurdly evil indicates to me that we do value identity over utility to some extent.
Hm. That's actually a pretty good answer. I too find I would prefer the Intellectual Porn Watcher to the Absorber. I will note, however, that the preference is rather weak. If you would give me $10 (or however much) in exchange for letting the Absorber exist rather than the Intellectual Porn Watcher, I'd take that, even for relatively low values of money. (I'm not quite sure what the cutoff is, but it's low.) On the other hand, I think I'd be willing to give up a fair bit of money to have the Ambitious Intellectual exist rather than the Druggie. Thinking about it in these terms is by no means perfect, but it allows me to solidify my view of my preferences. In any case, I'll admit this is a good point.

See, "valuable" is a two-place word: it takes as arguments both an object or state, and a valuer. Now, when I talk about this, I say "us" as the valuer (and you can argue that I really should only be saying "me", as our goal-systems are not necessarily aligned, but we'll put that aside), but that specifically means the "us" that is having this conversation. Or to put it another way, if you ask me "How much do you value thing X?", you can model it as me going to a black box inside my head and getting an answer. Of course, if you take out that black box and replace it with another one, the answer may be different. But, even if I know that tomorrow someone will come and do surgery to swap those "boxes", that doesn't change my answer today.

Sorry for rambling a bit. I'm not sure how best to explain it all. But I value art and knowledge (to use your example). If you replace me with someone who values paperclips, then that other person will go and do the things he values, like making paperclips and not art and knowledge, and I will hate him for that. I don't like the world where he does that, as my utility function does not include terms for paperclips. He would value that world, and would fight tooth and claw to get to that worldstate. Nothing says we have to a
... Oh. Actually, on reading what you wrote over again, I think we are arguing over different things (in the last section; the points about ambition still stand), and are more in agreement than we thought.

You say you value "identity over utility" (to some extent). I think I interpreted that to mean something subtly different from what you meant. By utility, you meant the total utility of everyone (or maybe the average utility of everyone?). Realizing that, of course we value lots of things over "utility", when "utility" is used in that sense. (I will call it ToAU, for "Total or Average Utility", to avoid confusing it with what I will call MPU, "My Personal Utility".)

Yes, you make a good point that ToAU is not what we should be maximizing. I agree. I was arguing that it is nonsensical not to value utility, as by definition, MPU is what we should be maximizing. (Ok, put aside for now, as before, that you and I may have slightly different goal systems, and so I should be using a different pronoun: either "you", if we are talking about what you are maximizing, or "me", if we are talking about me.)

Now, MPU is quite the complex function, and for us, at least, it includes terms for art and science existing, for humans not being killed, and for minimizing not only our (mine, your) personal suffering, but also global suffering. Altruism is a major part of MPU; make no mistake, I am not arguing that others' opinions do not matter, at least for some value of "others", definitely including all humans, and likely including many non-humans. MPU does include a term for the enjoyment, happiness, identity, non-suffering, and so forth of those in this category, but (as you have shown) this category cannot be completely universal.

In fact, in the end, all this boils down to is that you were arguing against utilitarianism, while I was arguing for consequentialism: two very similar, but profoundly different, ethical systems.
Sorry, I tend to carelessly use the word "utility" to mean "the stuff utilitarians want to maximize," forgetting that many people will read it as "von Neumann-Morgenstern utility." You actually aren't the first person on Less Wrong I've done this to. I agree entirely.
Average utilitarianism (which can be either hedonistic or about preferences / utility functions) is another way to avoid the repugnant conclusion. However, average utilitarianism comes with its own conclusions that most consider to be unacceptable. If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring into existence a child whose life will be slightly less miserable? Or similarly, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrong if I brought into existence a lot of beings that constantly experience the peak of current human happiness (without ever having preferences unfulfilled, either), simply because it would lower the overall average?

Another point to bring up against average utilitarianism is that it seems odd that the value of creating a new life should depend on what the rest of the universe looks like. All the conscious experiences remain the same, after all, so where does this "let's just take the average!" come from?
More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.
This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die. This neatly resolves the question you asked Eliezer earlier in the thread, "If you prefer no monster to a happy monster why don't you kill the monster." The answer is that once the monster is created it always exists in a timeless sense. The only way for there to be "no monster" is for it to never exist in the first place.

That still leaves the most repugnant conclusion of naive average utilitarianism, namely that it states that, if the average utility is ultranegative (i.e., everyone is tortured 24/7), creating someone with slightly less negative utility (i.e., they are tortured 23/7) is better than creating nobody.

In my view average utilitarianism is a failed attempt to capture a basic intuition, namely that a small population of high-utility people is sometimes better than a large one of low-utility people, even if the large population's total utility is higher. "Take the average utility of the population" sounds like an easy and mathematically rigorous way to express that intuition at first, but it runs into problems once you figure out "munchkin" ways to manipulate the average, like adding moderately miserable people to a super-miserable world. In my view we should keep the basic intuition (especially the timeless interpretation of it), but figure out a way to express it that isn't as horrible as AU.
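The "munchkin" manipulations of the average are easy to make concrete. A minimal sketch, with assumed utility numbers (only the two scenarios under discussion are taken from the thread):

```python
# Naive average utilitarianism: score a population by mean utility.
def average_utility(population):
    return sum(population) / len(population)

# An ultranegative world: ten people tortured 24/7.
world = [-100.0] * 10

# Adding someone only slightly less miserable (tortured 23/7)
# *raises* the average, so naive averaging calls it an improvement.
improved = world + [-90.0]
assert average_utility(improved) > average_utility(world)

# Likewise, removing the least happy person raises the average,
# which is the "kill the least happy person" objection above.
happy_world = [50.0, 60.0, 70.0]
culled = sorted(happy_world)[1:]  # drop the least happy member
assert average_utility(culled) > average_utility(happy_world)
```

Both "improvements" leave every actual experience the same or worse; only the statistic moves.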
In that view, does someone already count as part of the average even before they are born?
I would think so. Of course, that's not to say we know that they count... my confidence that someone who doesn't exist once existed is likely much higher, all else being equal, than my confidence that someone who doesn't exist is going to exist. This should in no way be understood as endorsing the more general formulation.
Presumably, only if they get born. Although that's tweakable.
Yes and no. Yes in that the timeless view is timeless in both directions. No in that for decisionmaking we can only take into account predictions of the future and not the future itself.

For intuitive purposes, consider the current political issues of climate change and economic bubbles. It might be the case that we who are now alive could have better quality of life if we used up the natural resources and if we had the government propagate a massive economic bubble that wouldn't burst until after we died. If we don't value the welfare of possible future generations, we should do those things. If we do value the welfare of possible future generations, we should not do those things.

For technical purposes, suppose we have an AIXI-bot with a utility function that values human welfare. Examination of the AIXI definition makes it clear that the utility function is evaluated over the (predicted) total future. (Entertaining speculation: If the utility function were additive, such an optimizer might kill off those of us using more than our share of resources to ensure we stay within Earth's carrying capacity, making it able to support a billion years of humanity; or it might enslave us to build space colonies capable of supporting unimaginable throngs of future, happier humans.)

For philosophical purposes, there's an important sense in which my brainstates change so much over the years that I can meaningfully, if not literally, say "I'm not the same person I was a decade ago", and expect that the same will be true a decade from now. So if I want to value my future self, there's a sense in which I necessarily must value the welfare of some only-partly-known set of possible future persons.
If I kill someone in their sleep so they don't experience death, and nobody else is affected by it (maybe it's a hobo or something), is that okay under the timeless view because their prior utility still "counts"?
If we're talking preference utilitarianism, in the "timeless sense" you have drastically reduced the utility of the person, since the person (while still living) would have preferred not to be so killed; and you went against that preference. It's because their prior utility (their preference not to be killed) counts, that killing someone is drastically different from them not being born in the first place.
No, because they'll be deprived of any future utility they might have otherwise received by remaining alive. So if a person is born, has 50 utility of experiences and is then killed, the timeless view says the population had one person of 50 utility added to it by their birth. By contrast, if they were born, have 50 utility of experiences, avoid being killed, and then have an additional 60 utility of experiences before they die of old age, the timeless view says the population had one person of 110 utility added to it by their birth. Obviously, all other things being equal, adding someone with 110 utility is better than adding someone with 50, so killing is still bad.
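The accounting in this reply can be sketched directly. The 50/110 figures are the ones from the comment; the ledger structure itself is just an illustrative assumption:

```python
# "Timeless" population accounting: everyone who ever lives stays in
# the ledger, credited with the utility of the experiences they
# actually had.
def timeless_total(lifetimes):
    # Each lifetime is the list of utilities of its experiences.
    return sum(sum(experiences) for experiences in lifetimes)

# Born, 50 utility of experiences, then killed.
killed = [[50]]
# Born, 50 utility, spared, then 60 more before dying of old age.
spared = [[50, 60]]

# The victim's 50 still counts either way, but killing removes the
# 60 they would otherwise have added, so killing is still bad.
assert timeless_total(spared) - timeless_total(killed) == 60
```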
The obvious way to avoid this is to weight each person by their measure, e.g. the amount of time they spend alive.
I think total utilitarianism already does that.
Yes, that's my point (Maybe my tenses were wrong.) This answer (the weighting) was meant to be the answer to teageegeepea's question of how exactly the timeless view considers the situation.
In real life, this would tend to make the remaining people less happy.
Did you mean to write, "not wrong to create new people..." ?
No, that's Singer's position. He's saying there is no obligation to create new people.
Then what's the qualifier about their lives being worth living there for? Presumably he believes it's also not wrong to not create people whose lives would not be worth living, right?
Huh. Rereading it, your interpretation might make more sense. I was thinking about that as 'even if their lives would be worth living, you don't have an obligation to create new people', which is a position that Peter Singer holds, but so is the position expressed after your correction.
In the case of actual human children in an actual society, there are considerations that don't necessarily apply to hypothetical alien five-dollar-bill-satisficers in a vacuum.
Perhaps you and they are just focusing on different stages of reasoning. The difference in utility that you've described is a temporal asymmetry that sure looks at first glance like a flaw. But that's because it's an unnecessary complexity to add it as a root principle when explaining morality up to now. Each of us desires not to be a victim of murder sprees (when there are too many people) or to have to care for dozens of babies (when there are too few people), and the simplest way for a group of people to organize to enforce satisfaction of that desire is for them to guarantee that the state does not victimize any member of the group.

So on desirist grounds I'd expect the temporal asymmetry to tend to emerge strategically as the conventional morality applying only among the ruling social class of a society: only humans and not animals in a modern democracy, only men when women lack suffrage, only whites when blacks are subjugated, only nobles in aristocratic society, and so on. (I can readily think of supporting examples, but I'm not confident in my inability to think of contrary examples, so I do not yet claim that history bears out desirism's prediction on this matter.)

Of course, if you plan to build an AI capable of acquiring power over all current life, you may have strong reason to incorporate the temporal asymmetry as a root principle. It wouldn't likely emerge out of unbalanced power relations. And similarly, if you plan on bootstrapping yourself as an em into a powerful optimizer, you have strong reason to precommit to the temporal asymmetry so the rest of us don't fear you. :D
If the utility monster is so monstrously sad, why would it be asking not to be killed? Usually, a decent rule of thumb is that if someone doesn't want to die there's a good chance their life is somewhat worth living.

This conclusion is technically incorrect. For new babies, you don't know in advance whether their lives will be worth living. Even if you go with positive expected value (and no negative externalities), you can still have better alternatives, e.g. do science now that makes many more and much better lives much later; "as fast as possible" is logically unnecessary. Also, killing sprees have side-effects on society that omissions of reproduction don't have, e.g. already-born people will take costly measures not to be killed (etc...)
It worries me how many people have come to exactly those conclusions. I mean, it's not very many, but still ...
Said Achmiz (11y):
Only if your preferences are transitive.
If you have any sort of coherent utility system at all, they will be. A better point is that "no monster" just means you're shunting the problem to poor Alternate You in another many-worlds branch, whereas killing a happy monster means actually decreasing the number of universes with the monster in it by one.
I don't get it, how is that different from any old bad thing you want to avoid?

why should I care?

Isn't this an objection to any theory of ethics?

As a lone question, it could be, but the point of his post is that even stipulating utilitarianism it does not follow that you or I should maximize the utils of Mr. Utility Monster.
Said Achmiz (11y):
No, only theories of ethics that say that I should care about things that I do not already care about. And it is, in any case, not an objection but a question. :)
Not necessarily a fatal one.
I believe some famous philosopher already has this point named after him.

This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.

Utilitarianism doesn't say that. Maybe some variant says that, but general utilitarianism merely says that I should have a single self-consistent utility function of my own, which is free to assign whatever weights to others. ETA: PhilGoetz says otherwise. I believe that he is right, he's an expert in the subject matter. I am surprised and confused.

If you're unsure of a question of philosophy, the Stanford Encyclopedia of Philosophy is usually the best place to consult first. Its history of utilitarianism article says that

Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one's own good.

The Classical Utilitarians, Jeremy Bentham and John Stuart Mill, identified the good with pleasure, so, like Epicurus, were hedonists about value. They also held that we ought to maximize the good, that is, bring about ‘the greatest amount of good for the greatest number’.

Utilitarianism is also distinguished by impartiality and agent-neutrality. Everyone's happiness counts the same. When one maximizes the good, it is

...
PhilGoetz is correct, but your confusion is justified; it's bad terminology. Consequentialism is the word for what you thought utilitarianism meant.
I thought a consequentialist is not necessarily a utilitarian. Utilitarianism should mean that all values are comparable and tradeable via utilons (measured in real numbers), and (ideally) that there is a single utility function for measuring the utility of a thing (to someone). The Wikipedia page you link lists "utilitarianism" as only one of many philosophies compatible with consequentialism.
You are correct that utilitarianism is a type of consequentialism, and that you can be a consequentialist without being a utilitarian. Consequentialism says that you should choose actions based on their consequences, which pretty much forces you into the VNM axioms, so consequentialism is roughly what you described as utilitarianism. As I said, it would make sense if that is what utilitarianism meant, but despite my opinions, utilitarianism does not mean that. Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.
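The distinction drawn here can be made concrete with a toy sketch (the agents, actions, and numbers are all made up for illustration; Python is used only as pseudocode with a checker):

```python
# Two hypothetical agents, three possible actions. Each agent has a
# personal utility function over the outcomes of those actions.
utilities = {
    "alice": {"a": 3, "b": 1, "c": 0},
    "bob":   {"a": 0, "b": 2, "c": 5},
}

def egoist_choice(agent):
    """A consequentialist egoist picks the action maximizing their own utility."""
    return max(utilities[agent], key=lambda act: utilities[agent][act])

def utilitarian_choice():
    """Utilitarianism (in the sense above) picks the action maximizing
    utility summed over *all* agents, one-to-one."""
    actions = utilities["alice"]
    return max(actions, key=lambda act: sum(u[act] for u in utilities.values()))

print(egoist_choice("alice"))  # "a": best for Alice alone
print(utilitarian_choice())    # "c": best in aggregate (0 + 5 beats 3 + 0)
```

Both choices are "maximize a utility function"; they differ only in whose function gets maximized, which is exactly the distinction between consequentialism in general and utilitarianism in particular.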
I see. Thank you for clearing up the terminology. Then what would the term be for a VNM-rational, moral anti-realist who explicitly considers others' welfare only because they figure in his utility function, and doesn't intrinsically care about their own utility functions?
"Utilitarian" and all the other labels in normative ethics are labels for what ought to be in an agent's utility function. So I would call this person someone who rightly stopped caring about normative philosophy.
I don't know of a commonly agreed-upon term for that, unfortunately. "Utility maximizer", "VNM-rational agent", and "homo economicus" are similar to what you're looking for, but none of these terms imply that the agent's utility function is necessarily dependent on the welfare of others.
Rational self-interest?
To use an Objectivist term, it's a person who's acting in his "properly understood self-interest".
Not just people but all the beings that serve as "vessels" for whatever it is that matters (to you). According to most common forms of utilitarianism, "utility" consists of happiness and/or (the absence of) suffering or preference satisfaction/frustration.
Thanks, but I tend to define and use my own terminology, because the standard terms are too muddled to use. I am an expert in my own terminology. Leon is talking about utilitarianism as the word is usually, or at least historically, used outside LessWrong, as a computation that everyone can perform and get the same answer, so society can agree on an action.
But that computation is still a two-place function; it depends on the actual utility function used. Surely "classical" utilitarianism doesn't just assume moral-utility realism. But without "utility realism" there is no necessary relation between the monster's utility according to its own utility function, and the monster's utility according to my utility function.

Humans are similar, so they have similar utility functions, so they can trade without too many repugnant outcomes. And because of this we sometimes talk of utility functions colloquially without mentioning whose functions they are. But a utility monster is by definition unlike regular humans, so the usual heuristics don't apply; this is not surprising.

When I thought of a "utility monster" previously, I thought of a problem with the fact that my (and other humans') utility functions are really composed of many shards of value and are bad at trading between them. So a utility monster would be something that forced me to sacrifice a small amount of one value (murder a billion small children) to achieve a huge increase in another value (make all adults transcendently happy). But this would still be a utility monster according to my own utility function.

On the other hand, saying "a utility monster is anything that assigns huge utility to itself - which forces you to assign huge utility to it too, just because it says so" - that's just a misunderstanding of how utility works. I don't know if it's a strawman, but it's definitely wrong. I notice that I am still confused about what different people actually believe.
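A minimal sketch of that two-place function (all names, weights, and welfare numbers here are hypothetical, chosen only to illustrate the point):

```python
# Utility as a two-place function: utility(outcome, according_to_whom).
# There is no evaluator-free "utility"; each evaluator applies their
# own (made-up, illustrative) weights over everyone's welfare.
weights = {
    "me":      {"me": 1.0, "friend": 0.9, "monster": 0.01},
    "monster": {"me": 0.0, "friend": 0.0, "monster": 1.0},
}

def utility(outcome, according_to):
    """Weighted sum of everyone's experienced welfare, using the
    evaluator's own weights."""
    w = weights[according_to]
    return sum(w[who] * welfare for who, welfare in outcome.items())

# An outcome where the monster gains hugely and everyone else loses a little:
feast = {"me": -10.0, "friend": -10.0, "monster": 1000.0}

utility(feast, "monster")  # 1000.0 by the monster's own function
utility(feast, "me")       # about -9 by mine: its self-assignment doesn't bind me
```

The monster's huge self-assigned utility shows up in my evaluation only through the weight my function gives it, which is the "no necessary relation" point above.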
If by "moral-utility realism" you mean the notion that there is one true moral utility function that everyone should use, I think that's what you'll find in the writings of Bentham, and of Nozick. Not explicitly asserted; just assumed, out of lack of awareness that there's any alternative. I haven't read Nozick, just summaries of him.

Historically, utilitarianism was seen as radical for proposing that happiness could by itself be the sole criterion for an ethical system, and for being strictly consequentialist. I don't know when the first person proposed that it makes sense to talk about different people having different utility functions. You could argue it was Nietzsche, but he meant that people could have dramatically opposite value systems that are necessarily at war with each other, which is different from saying that people in a single society can use different utility functions. (What counts as a "different" belief, BTW, depends on the representational system you use, particularly WRT quasi-indexicals.)

Anyway, that's no longer a useful way to define utilitarianism, because we can use "consequentialism" for consequentialism, and happiness turns out to just be a magical word, like "God", that you pretend the answers are hidden inside of.
"Utilitarianism" is sometimes used for both that "variant" (valuing utility) and the meaning you ascribe to it (defining "value" in terms of utility.) The Utility Monster is designed to interfere with the former meaning. Which is the correct meaning ...

In this post, I wrote: "The standard view ... obliterates distinctions between the ethics of that person, the ethics of society, and "true" ethics (whatever they may be). I will call these "personal ethics", "social ethics", and "normative ethics"."

Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.

However, it's still perfectly valid to talk about using utilitarianism to construct social utility functions (e.g., those to encode into a set of community laws), and in that context the utility monster makes sense.

Utilitarianism, and all ethical systems, are usually discussed with the flawed assumption that there is one single proper ethical algorithm, which, once discovered, should be chosen by society and implemented by every individual. (CEV is based on the converse of this assumption: that you can use a personal utility function, or the average of many personal utility functions, as a social utility function.)
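Read as a recipe for a social utility function, the monster problem is just that an unweighted sum is dominated by a single term. A toy sketch, with made-up conversion rates and assuming interpersonal comparability for the sake of the example:

```python
# One unit of resources split between a single utility monster and a
# million ordinary people. Hypothetical conversion rates: the monster
# turns its share into 1e9 utils per unit; everyone else, combined,
# turns theirs into 1 util per unit.
def social_utility(monster_share):
    others_share = 1.0 - monster_share
    return 1e9 * monster_share + 1.0 * others_share

# Straight summation hands the monster the entire pool:
best = max((i / 100 for i in range(101)), key=social_utility)
# best == 1.0
```

Building the social function any other way (capping or down-weighting terms) is no longer "the greatest total good", which is why the monster is a critique of summation specifically rather than of consequentialism in general.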

That's because the mainstream discussion of utilitarianism the normative ethical theory has almost nothing at all to do with the concept of utility in economics.
That flaw is not obvious to me. But the flaw in anything-goes ethics is.
I don't doubt that you're right, but I find that stunning. How can this distinction not be made? In the trivial example Selfish World, everyone assigns greater utility to themselves than to anyone else. That surely doesn't mean utilitarianism is useless - people can still make decisions and trade utilons!
"Utility" refers to a representation of preference over goods and services in economics and decision theory. This usage dates to the late 1940s. It has almost nothing at all to do with the normative theory of utilitarianism, which dates to the late 1780s. As a normative theory is supposed to tell you how you ought to act, saying "oh, everyone ought to follow their own utility function" is completely without content. The entire content of the theory is that my utils and your utils are actually the same kind of thing, such that we can combine them one-to-one in a calculation to determine how to act (we want to maximize total utils). Following your own utility function isn't utilitarianism. It is ethical egoism as described by economists.
The utility monster is a concept created to critique utilitarianism. If you are not a utilitarian, then it is not a criticism of your beliefs. If you need to ask why you should care about another being's utility, and it's a serious rather than a rhetorical question, then you aren't a utilitarian.
So this comment seems straightforwardly confused about what utilitarianism is. Why is it up this high?

I don't know. Patterns of upvotes and downvotes on LessWrong still mystify me.

You are right; I was, when I wrote the grandparent, confused about what utilitarianism is. Having read the other comment threads on this post, I think the reason is that popular usage of the term "utilitarianism" on this site does not match its usage elsewhere. What I thought utilitarianism was before I started commenting on LessWrong, and what I think utilitarianism is now that I've gotten unconfused, are the same thing (the same silly thing, imo); my interim confusion is more or less described in this thread.

My primary objections to utilitarianism remain the same: intersubjective comparability of utility (I am highly dubious about whether it's possible), disagreement about what sorts of things experience utility in a relevant way (animals? nematodes? thermostats?) and thus ought to be considered in the calculation, divergence of utilitarian conclusions from foundational moral intuitions in non-edge cases, various repugnant conclusions.

As far as the utility monster goes, I think the main issue is that I am really not inclined to grant intersubjective comparability of experienced utility. It jus...

Because you care about other agents' utility. Right? That's what the Utility Monster is meant to be an issue with.
In more personal terms: if you fit your utility function to your friends and decide what is best for them based on that, rather than leaving them to their own alien utility functions and helping them get what they really want rather than what you think they should want, you are not a good friend. I say this because if the function you're pushing prohibits me from fulfilling my goals, I will avoid the fuck out of you. I will lie about my intentions. I will not trust you. It doesn't matter if your heart's in the right place.
The definition of want here is ambiguous, and that makes this a little hard to parse. How are you defining "want" with respect to "utility function"? Do you mean to make them equivalent? If by "want" you mean desire in accord with their appropriately calibrated utility functions, then, well, sure. A friend is selfish by any common understanding if he doesn't care about his buddies' needs. But it seems like you might be saying that he's a bad friend for not helping his friends get what they want regardless of what he thinks they need. While this is one view of friendship, it is not nearly as common, and I can make a strong case against it. Such a view would require that you help addicts continue to use, that you help self-destructive people harm themselves, that you never argue with a friend over a toxic relationship you can see, and that you never really try to convince a friend to try anything he or she doesn't think he or she will like. Sadly, this happens. If you're saying you think it should happen more, okay. But I would consider a friend pretty poor if he or she weren't willing to risk a little alienation because of genuine concern.
I meant the former case; what use are people whose wants don't perfectly align with their utility function? I guess whenever the latter case occurs in my life, that's not really what's happening. The dog thinks it's driving away a threat I don't recognise, when really it's driving away an opportunity it's incapable of recognising. Sometimes it might even be the right thing for them to do, even by my standards, given a lack of information. I still have to manage them like a burdensome dog.
Assuming that the utility monster is not, somehow, mistaken regarding its wants...
The utility monster is generally given as opposition to hedonistic or preference utilitarianism in particular. It's not an objection to arbitrary utility functions. There's no monster that can be an increasing number of paperclips.

Most people in time and space have considered it strange to take the well-being of non-humans into account

I think this is wrong in an interesting way: it's an Industrial Age blind spot. Only people who've never hunted or herded and buy their meat wrapped in plastic have never thought about animal welfare. Many indigenous hunting cultures ask forgiveness when taking food animals. Countless cultures have taboos about killing certain animals. Many animal species' names translate to "people of the __." As far as I can tell, all major religions consider wanton cruelty to animals a sin, and have for thousands of years, though obviously, people dispute the definition of cruelty.

I kinda think the opposite is true. It's people who live in cities who join PETA. Country folk get acclimatized to commoditizing animals.

I'd like to see a summary of the evidence that many Native Americans actually prayed for forgiveness to animal spirits. There's been a lot of retrospective "reframing" of Native American culture in the past 100 years--go to a pow-wow today and an earnest Native American elder may tell you stories about their great respect for the Earth, but I don't find these stories in 17th- through 19th-century accounts. Praying for forgiveness makes a great story, but you usually hear about it from somebody like James Fenimore Cooper rather than in an ethnographic account. Do contemporary accounts from the Amazon say that tribespeople there do that?

(Regarding the reliability of contemporary Native American accounts: Once I was researching the Cree Indians, and I read an account, circa 1900, by a Cree, boasting that their written language was their own invention and went back generations before the white man came. The next thing I read was an account from around 1860 of a white missionary who had recently learned Cree and invented the written script for i...

I kinda think the opposite is true. It's people who live in cities who join PETA. Country folk get acclimatized to commoditizing animals.

This sounds right to me. After all, you don't find plantation owners agitating for the rights of slaves. No, it's people who live far away from actual slaves, meeting the occasional lucky black guy who managed to make it in the city and noting that he seems morally worthy.

Um, what about the actual slaves and ex-slaves?
In this analogy, they correspond to non-human animals, who have not yet expressed an opinion on the matter.
You mean, have not yet expressed an opinion in a way that you understand. Anyway, the fact that slaves and ex-slaves did advocate for the rights of slaves indicates that closeness to a problem does not necessarily lead one to ignore it.
They did not benefit from slavery, as the plantation owners did. Sorry, that was meant to be the implication of "plantation owners": "they're biased", not "anyone who actually met slaves was fine with it".
This makes the claim unfalsifiable. People who work closely with animals are the greatest believers in animal rights? Obviously animals should have rights, since they're the ones who know the best. People who work closely with animals believe in animal rights the least? Obviously animals should have rights, since people who work closely with animals are rationalizing it away like slaveholders and the people with the least contact with animals are the most objective. No matter what happens, that "proves" that the people who talk about animal rights are the ones we should listen to.
I could make equally-valid stories up to come to the opposite conclusion: People who work closely with animals are the greatest believers in animal rights? Obviously they are prejudiced by their close association. People who work closely with animals believe in animal rights the least? Obviously they're the ones who know best.
If you can explain everything, you can't explain anything.
There are two axes here - knowledge and bias. Those who own farms are most biased, but also most knowledgeable. Those who own farms but don't work on them are both biased and ignorant, so I would predict they are most in favour of farming. Those who are ignorant, but only benefit indirectly - the city dwellers - I would predict higher variance, since it may prove convenient for various reasons to be against it. And finally, the knowledgeable who benefit only slightly: I would predict that the more knowledge, the more likely that it outweighed the bias. Of course, I already know these to be true in both cases, pretty much. (Can anyone think of a third example to test these predictions on?) But in general, I would expect large amounts of bias to outweigh knowledge - power corrupts - and low amounts of bias to be eventually overcome by the evidence of nastiness. That's just human nature (or my model of it), and slavery is just a handy analogy where stuff lined up much the same way.
This argument doesn't help you. The problem is that the original (implied) claim (that the positions of city-dwellers and farmers happen because vegetarianism is good but people oppose it for irrational reasons) is unfalsifiable: if city-dwellers favor it and farmers oppose it, that happens because vegetarianism is good; if city-dwellers oppose it and farmers favor it, that still happens because vegetarianism is good. Your explanation in terms of two axes is not wrong, but that explanation implies that the positions of farmers and city-dwellers can go either way regardless of whether vegetarianism is good. In other words, your explanation doesn't save the original claim, and in fact demolishes it instead.
What? No. Where are you getting that from? Which original claim? I just pointed out that you have to take bias into account.
No, it goes both ways. It's only people who live in cities who can either completely ignore animal welfare or go to the other wacky extreme, rather than recognizing what using animals as raw material actually involves, understanding that some kind of arrangement has to be made, and trying to make it the best one possible.
FWIW I'll provide some institutional references: The current Catechism of the Catholic Church section 2418 reads, in part: "It is contrary to human dignity to cause animals to suffer or die needlessly." The 1908 Catholic Encyclopedia goes into more detail. I also searched for statements by the largest Protestant denominations. I found nothing by the EKD. The SBC doesn't take official positions but the Humane Society publishes a PDF presenting Baptist thinking that is favorable to animals. The United Synagogue of Conservative Judaism website has lots of minor references to animal welfare. One specific example is that they appear to endorse the Humane Farm Animal Care Standards. The largest Muslim organization that I found reference to, the Nahdlatul Ulama, does not appear to have any official stance on treatment of animals.
The developed world is thoroughly urbanized. Des Moines is as far from animals as Manhattan. I think what you mean is that a certain politique ascendant on both coasts is much more likely to purchase animal rights as an expansion pack. Which is not to pre-judge the add-on, but to say it has very little to do with the size of your skyscrapers. That said, I'm not disputing at all that modern agribusiness commodifies animals and that many of today's farmers and ranchers are pretty insulated from the things they eat.

There are many accounts of prayers to animals. One of the best-attested is of the Ainu prayers to the bears they worship (and kill.)

Well, that does exclude Hinduism, Jainism, and Buddhism, which famously do have animal ethics. But even if we're just talking the western religions, then yeah, they do, too. Without getting into a nasty debate involving proof-texting and what Atheists say the Bible says versus what Theists say the Bible says: if you go ask a few questions in the pertinent parts of Stack Exchange of Muslim, Roman Catholic, Protestant, Eastern Orthodox, and Orthodox Jewish thinkers, I guarantee they will answer back that wanton cruelty to animals is wrong. And the same would be true if you started reading random imams, theologians, patriarchs, and pastors.

Unfortunately, there is no possible answer to this. While the first and loudest opposition to cock-fighting and bear-baiting came from Puritans and Methodists, outside the Church of England's mainstream, these people were indisputably Anglicans at the beginning. And a voice of conscience from the margins of the culture is very common, and usually just means that the center of the culture has been captured by self-interest. Catholic leaders were present at the beginning of the anti-vivisection movement.

If this were true, tribes would be in constant total war, which is actually a foreign concept to most tribal societies. Read Napoleon Chagnon again. They kill out of self-interest, and
Of course there is. Not all statements in religious holy books require the same amount of interpretation. If the various holy books said "thou shalt not be cruel to animals" using fairly direct language, that would be an answer to that. Problem is, they don't.

That doesn't follow. I grant zero weight to the well-being of clothes, but that doesn't mean I go around destroying my clothes and setting department stores on fire. Granting zero weight to something doesn't imply wanting to destroy it, and even granting negative weight to it only means wanting to destroy it insofar as destroying it doesn't make something else worse that you do care about (such as risking death to your own tribesmen in the war.)

Also, I wonder how many of the cultures who pray to the spirit of the animal also pray to the spirit of plants, rocks, the sun, or other things that even vegetarians don't think have any rights.
A minimal investment of time would convince anybody willing to be convinced that at the very least there are many doctrinal authorities on record in every large strain of western monotheism against cruelty to animals, and that these authorities adduce evidence from ancient holy texts to support their pronouncements. Feel free to disagree with Aquinas, eastern patriarchs, a large body of hadiths, and many rabbinical rulings about the faiths they represent.

There is a hermeneutical constellation of belief systems that posits texts speaking for themselves without any interpretation and announces that meanings are clear to the newcomer, or outsider, or even the barely literate, in ways they were never clear to bodies of scholars who gave their lives to the study of the same texts. I'm not sure you want to be in that constellation. That is Constellation Fundamentalism, though to be fair to the actual fundamentalists, they don't seem to be amenable to animal bloodsports at all.

Clothes aren't a threat to ambush you, and aren't eating tapirs you could eat. I assume you would burn them if you feared ambush or starvation.

Total war doesn't mean you can't be tactical in your approach, obviously. Dissembling and biding time are smart. What I mean about the tribes being in constant total war is that since, as was pointed out, they are in competition for resources with neighboring tribes, they would kill neighbors whenever they thought they could get away with it if they attached zero utility to these people's survival. And we see that's not the case, not at all. Hunter-gatherers trade, they intermarry, they feast together, they form friendships and alliances between tribes, they do a bunch of things that would be socially impossible if there were not any empathy at all. Sometimes they betray and murder. But by no means all the time. Napoleon Chagnon's accounts of the Yanomamo, where most of this stuff about violent stone-agers comes from recently, are quite clear that elders
Which means that many doctrinal authorities are capable of making stuff up. While most religions' tenets require some interpretation of their holy books, there are degrees of this. Some claims made by religions come from their holy books in a fairly direct and straightforward way. Others are claimed to come from their holy books but in fact are the result of contrived interpretation. Religious animal cruelty laws fall in the second category. The holy books do not support laws about animal cruelty in the same way that they support "thou shalt not commit adultery". Furthermore, even those contrived laws don't generally claim it's cruel to eat animals. Bringing up the fact that religions oppose animal cruelty is like pointing out that every religion and culture has rules about sexual immorality, and therefore we should oppose some particular type of sexual immorality that you don't like.

During much of history, most cultures that knew Jews attached zero or negative utility to them, but pogroms only happened every so often. They didn't just kill all the Jews until the Nazi era. Anthropomorphizing is also pretty basic to humans; that's why the Eliza program convinces people.

But you're not following the implications of this. The idea that primitive cultures respect the spirit of animals was brought up to show that taking the well-being of animals into account is normal. If the same primitive people respect the spirit of things whose well-being we clearly should not take into account, such as vegetables, it doesn't support the point you brought it up to support.

The holy books do not support laws about animal cruelty in the same way that they support "thou shalt not commit adultery".

IIRC, the requirements for humane slaughter are spelled out in great detail in the Mishnah.

Friend, I'm assuming you believe all/most of religion is made up anyway, right? I mean, you might think some of it was made up sincerely and some was made up cynically. But you know with an extraordinarily high degree of certainty it's all made up. Right? So who cares who made it up. It's there. Some people take it seriously. It doesn't threaten non-theism at all to concede that religions define their own interpretations and belief systems. This concession is actually the bread and butter of non-theism. Really the only person who gets to contest that is the theist with an alternate interpretation, because he can appeal to a higher authority.

Even though I said I didn't want to sling scripture, and I really don't: why are you forbidden to muzzle the ox that treadeth out the grain? Why were the fifth and sixth days of creation declared good? Why was man created on the same day as the beasts of the field? Why was man originally given plants to eat, not flesh? Why was man specifically forbidden to eat "the life" of the animal? Why did you have to rest beasts of burden on the Sabbath? Why couldn't you disturb mother birds on their eggs? Why did fallen beasts of burden have to be helped up? Why were the animals saved with Noah during the flood? Why doesn't God forget sparrows? Why does God feed the birds of the air? Why is it that animals only become carnivorous after the exit from Eden? What does it mean that the lion will lie down with the lamb and that a little child shall lead them? Why are humans constantly portrayed as animals in scriptural metaphor?

Now, I totally believe you have answers for all these questions that acknowledge the scriptural references but manage to discredit their supposed connection to any sort of authorial concern for animal welfare or the environment. The problem is, that's not enough. You have to show that your answers were the ones that audiences have understood and adopted over centuries. That will be difficult. It certainly appears that St. Franci
That's a cheat that is commonly used by creationists who come up with lists of 100 and 200 arguments for creationism. The trick? Make a list containing a lot of very low quality arguments in the knowledge that it's long enough that no one person will have the patience (or sometimes the knowledge) to properly refute every single one. Then latch on to whichever ones got the least thorough response. It's not hard to point out the flaws in your examples. For instance, Noah did save the animals, but he's saving them as resources--because if he doesn't, there won't be any animals--not as an anti-cruelty rule. If God also commanded that he take some seeds, would you then have claimed that he was concerned about cruelty to seeds? And notice that he takes seven pairs of clean animals so that he can make animal sacrifices. But no matter which example I refute, you'd just point to another I haven't refuted. And I'm not going to do every single one.
Like I said, I really am sure you can refute these! That is beside the point. I doubt very much you can show that your refutations are what people actually believe about the texts. I am not arguing the text is true. I am not even arguing that a certain interpretation of the text is correct. I am pointing out that people believe certain interpretations of the text. This is not like arguing with William Lane Craig about creationism. This is like trying to tell William Lane Craig that nobody believes in creationism. We may have reached the point of diminishing returns. Arguments are soldiers. Mine need a vacation. Enjoy your day.
I would be very surprised if any major religion claims that Noah had to take the animals on the ark because not taking them would be cruelty to animals. In other words, yes, my refutation is what people believe about the texts. Except I'm not going to bother going through 13 refutations.
How about, say, three? I could probably do three myself, but they would suck because I'm biased. And I'd be genuinely interested to hear it. (This is completely beside the point, at this stage, so I can understand why you may not want to bother.)
Mmmm. Clicked the wrong reply button. Sorry....
It's not that clear to Swiss politicians. "The dignity of plants". That was written by one of the committee that produced this official Swiss government publication. (PDF)
Actually, he's responding to PG, who claimed that no major religion is against cruelty to animals ... presumably implying that this is a modern aberration? Or something? Regardless, it was he who claimed (in your analogy) that since no religion is against "sexual immorality", then clearly modern dislike of rape is not a part of basic human ethics. They demonized them. That is not the same as attaching "zero or negative utility" except in the most dire of cases (which, admittedly, crop up with some regularity.)
To be fair to this idea, it can be useful to approach things from a fresh perspective. Scholars have had longer to develop the more ... complex misinterpretations. The trouble springs up when you don't check the, y'know, facts. Like the original text your copy was translated from, say. Or the culture it was written in. Or logic. (Or, in the opposite case, declaring that your once-over the text has revealed what believers "really" believe.)
So very much this.
Most Native American cultures felt awesome about killing enemies in battle. I don't know if it's universal, but it was very common for warriors to be highly-respected in tribal cultures, in proportion to how many people they'd killed. I don't think you can assert that it's not constant, either. Look at the conflict between Hopi & Navajo, Cree & Blackfoot. Similar to the Palestinian/Israeli conflict, and I'd call that constant. Modern all-out, extended-duration war is a foreign concept to such groups, but "this tribe is our enemy and we will kill any of them found unprotected" and "let us all get together and annihilate this troublesome neighbor village and take their women" is not.
Weren't you just saying there's a lot of mythologizing of the NA past? Did you know there are specific Navajo rituals designed to cleanse warriors returning from war before they re-enter the community, to prevent their violence from infecting the community? And that these rituals have counterparts in cultures around the world, and are of interest to modern trauma researchers? It is helpful to separate desirable status as a successful warrior from desire for war. It is very common for very successful warriors to prefer peace, in tribal societies as in modern. That's not to say young guys don't want to make their bones and old guys don't see the need to take care of business: it's to say that only a totally deranged person kills without any barriers, and very few people are totally deranged. It's interesting that you adduce the Palestinian/Israeli conflict in this context. I am very certain that the majority of Israelis and Palestinians are capable of empathy for each other. This doesn't mean they wouldn't shell each other or commit atrocities. But you're arguing a hard line: that tribes attach "zero or negative" utility to each other's continued existence.
This needs modifiers: it looks to me that with "always" added this is wrong, but with "sometimes" added this is correct.
Farmers are in contact with animals even more often than hunter gatherers. But have you ever seen the whole "asking for forgiveness" thing in an agricultural society? (not rhetorical)
No, though I've seen small-scale family farms ensure that their stock live pleasantly and are slaughtered humanely, and I myself have tried to make sure food animals I've killed died quickly and painlessly. Mileage will vary. There are a lot of true horror stories about farming and ranching, and they're not all from industrial feedlots.
The asking for forgiveness may indicate that people somehow thought of the act as killing, but that did not change their actions. Humans have had a devastating influence on the local megafauna wherever they showed up. A cynic might write that "humans did not really care about the well-being of ...". We also have taboos against eating dogs and cats, for instance, but the last time I checked it was not because we value their lives, but because they are cute. It's mostly organized lying to feel OK.
What? Of course people care about the lives of dogs and cats. Anecdotal evidence: all the people I've seen cry over the death of a dog. Not just children, either. I've seen grown men and women grieve for months over the death of a beloved dog. Even if their sole reason for caring is that they're cute, that wouldn't invalidate the fact that they care. There's some amount of "organized lying" in most social interactions; that doesn't imply that people don't care about anything. That's silliness, or it sets such a high burden of proof / standard of caring (even though most humans can talk about degrees of caring) as to be both outside the realm of what normal people talk about and totally unfalsifiable.
More because we regularly socialize with them. People are not, generally, in favour of killing just the ugly pets. (And, this is purely anecdotal, but viewing animals more as less-intelligent individuals with a personality and so on and less as fleshy automatons seems to correlate with pets.)
I guess I'm not cynical? People have to eat. It's consistent to feel that animal life has value but to know that your tribe needs meat, and to prioritize the second over the first. The fact that you value an animal life doesn't mean you value it above all else. And the fact that humans wiped out the Giant Sloth/Mammoth/whatever only necessitates that we were really good hunters. It says nothing about our motivations. Also, I think you would find it really hard to disentangle cuteness from empathy, if that's what you're trying to do.
Asking for forgiveness is usually a hunter-gatherer thing. Before agriculture brought starchy grains and dairy on the scene, animal fat was the major calorie source, and vegetarianism would have meant only fruits, nuts, leafy vegetables, and tubers. And you'd need a lot of tubers in order for this to be a sufficiently calorie-rich diet.
You are right, of course. I did not want to imply that a vegan diet would have been feasible until recent advances.
I think "most people in time and space" have lived in the industrial age. Am I wrong?
Most cultures, I understand, base moral worth on a "great chain of being" model, with gods above heroes above mortals, and mortals above those **s in the next village above smart animals above dumb animals ... you probably get the picture.

The actual reality does not have high-level objects such as nematodes or humans.

Before one could even consider the utility of a human's (or a nematode's) existence, one has to have a function that would somehow process a bunch of laws of physics and the state of a region of space, and tell us how happy or unhappy that region of space feels, what its value is, and so on.

What would be the properties of that function? Well, for one thing, the utility of a region of space would not generally be equal to the sum of the utilities of its parts, for the obvious reason that your head has more utility when it hasn't been diced into perfect cubic blocks and then rearranged like a Rubik's cube.

This function could then be applied to a larger region of space containing nematodes and humans, and it would process that region in some way that clearly differs from any variety of arithmetic utilitarianism that adds or averages the utilities of nematodes and humans, because, as established above, the function is not distributive over regions of spacetime, and nematodes and humans are just regions of spacetime with specific stuff inside.

What I imagine that function would do, is identify existence of particular computationa... (read more)

Um... yes, it does. "Reality" doesn't conceptualize them, but I, the agent analyzing the situation, do. I will have some function that looks at the underlying reality and partitions it into objects, and some other function that computes utility over those objects. These functions could be composed to give one big function from physics to utility. But that would be, epistemologically, backwards. No. Utility is a thing agents have. "Utility theory" is a thing you use to compute an agent's desired action; it is therefore a thing that only intelligent agents have. Space doesn't have utility. To quote (perhaps unfortunately) Žižek, space is literally the stupidest thing there is.
'One' in that case refers to an agent who's trying to value the feelings that physical systems have. I think there's some linguistic confusion here. As an agent valuing that there's no enormous torture camp set up in a region of space, I'd need to have a utility function over space, one which gives the utility of that space.
I see what you're doing, then. I'm thinking of a real-life limited agent like me, who has little idea how the inside of a nematode or human works. I have a model of each, and I make a guess at how to weigh them in my utility function based on observations of them. You're thinking of an ideal agent that has a universal utility function that applies to arbitrary reality. Still, though, the function is at least as likely to start its evaluation top-down (partitioning the world into objects) as bottom-up. I don't understand your overall point. It sounds to me like you're taking a long way around to agreeing with me, yet phrasing it as if you disagreed.
I think (and private_messaging should feel free to correct me if I'm wrong) that what private_messaging is saying is, in effect, that before you can assign utilities to objects or worldstates or whatever, you've got to be able to recognize those objects/worldstates/whatever. I may value "humans", but what is a "human"? Since the actual reality doesn't have a "human" as an ontologically fundamental category--it simply computes the behavior of particles according to the laws of physics--the definition of the "human" which I assign utility to must be given by me. I'm not going to get the definition of a "human" from the universe itself.
Okay. I don't understand his point, then. That doesn't seem relevant to what I was saying.
I'm not entirely sure what the point of this comment was, but in that case, surely the problem occurs when said chunks die? I mean, if they magically kept working the same way, linking telepathically with the other chunks and processing information perfecty well, I don't see why they wouldn't be just as valuable, albeit rather grisly looking.

Finding out that the chunks will die (given the laws of physics as they are) is something the function in question has to do. Likewise, finding out that they won't die given some magic, but would die if they weren't rearranged and the magic were applied (portal-ing the blood all over the place).

You just keep jumping to computing a utility from the labels you already assign to the world.

edit: one could also subdivide it into very small regions of space, and note that you can't compute any kind of utility of the whole by going over every piece in isolation and then summing.

edit2: to be exact, I am counter-exampling f(ab) = f(a) + f(b) (where "ab" is a concatenated with b) with f(ab) != f(ba), even though a + b = b + a.

More broadly, mathematics₁ has been very useful in science, and so ethicists try to use mathematics₂, where mathematics₁ is a serious discipline in which one states assumptions and progresses formally, and mathematics₂ is "there must be arithmetical operations involved" or even "it is some kind of Elvish". (Meanwhile mathematics₁ doesn't get you very far here, because we can't make many assumptions.)
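The non-additivity point above can be illustrated with a toy sketch. The state encoding and utility function here are purely hypothetical: a "region" is a string of cells, and utility rewards intact contiguous structure, so the score of a whole is neither the sum of its parts nor independent of how the parts are arranged.

```python
# Toy illustration (hypothetical encoding): a "region of space" is a string
# of cells, and utility rewards intact contiguous structure -- standing in
# for an undiced head vs. one diced into cubes and rearranged.

def utility(region: str) -> int:
    """Score the longest contiguous run of identical cells, squared."""
    best = run = 0
    prev = None
    for cell in region:
        run = run + 1 if cell == prev else 1
        best = max(best, run)
        prev = cell
    return best * best

a, b = "xy", "y"

print(utility(a + b), utility(a) + utility(b))  # 4 vs 2: f(ab) != f(a) + f(b)
print(utility(a + b), utility(b + a))           # 4 vs 1: f(ab) != f(ba)
```

So the function fails to distribute over concatenation even though the underlying "resources" (the cells) commute, which is exactly the counterexample to f(ab) = f(a) + f(b) claimed above.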

I broadly agree - it seems to me a plausible and desirable outcome of FAI that most of the utility of the future comes from a single super-mind made of all the material it can possibly gather in the Universe, rather than from a community of human-sized individuals.

The sort of utility monster I worry about is one that we might weigh more not because it is actually more sophisticated or otherwise of greater intrinsic moral weight, but simply one that feels more strongly.

Well, nematodes might already feel more strongly. If you have a total of 302 neurons, and 15 of them signal "YUM!" when you bite into a really tasty protozoan, that might be pure bliss.

Eliezer Yudkowsky (11y):
I'd bet against this at pretty extreme odds, if only there were some way to settle the bet.
I don't think, in general, there could be a way to compare 'strength of feeling', etc. across two separate systems. For example, all you can do is measure the behavior of the organism, but that organism is always going to do the maximum that it can do to maximize its utility function. All you would be doing is measuring the organism's resources for optimizing its utility function, and determining the strength of its preference for any one thing relative to its other preferences only.
It seems plausible to me that there is more to 'bliss' than one's level of reaction to a stimulus. When my car is low on gas a warning light comes on, and in response to having its tank filled, the light goes off. Despite the ease of analogy, I think it's fair to describe the difference between this and my own feelings of want and satiety as a difference in kind, and not just degree. Not that a machine couldn't experience human-like desires, but to be properly called human-like it would need to have something analogous to our sorts of internal representations of ourselves. I don't think the nematode's 302 neurons encode that.
Yes, I agree with you (and likely this was Eliezer's point) that nematodes likely don't have something that a specialized scientist (sort of like a linguist who compares types of feelings across systems) would identify as analogous to 'bliss'. But this would be because their systems aren't complex enough to have that particular feeling, not because they don't feel strongly enough. ... A car's gas gauge must feel very strongly that it either has enough gas or doesn't have enough gas, but the feeling isn't very interesting. (And I don't mind if the specialist mentioned above wants to put a threshold on how interesting a feeling must be to merit being a 'feeling'.)
Going back and re-reading ciphergoth's comment above, I now see why you're emphasizing strength of feeling. What you said makes sense, point conceded.
I expect that, as we learn enough about neuroscience to begin to answer this, we'll substitute "feels more strongly" with some other criteria on which humans come out definitively on top.
I agree, and not just because it's us deciding the rubric. I believe an objective sentient bystander would agree that there is some (important) measure by which we come out ahead. Meaning our utility needs a greater weight in the equation. That is, if they are global utility maximizers. Incidentally, where does that assumption come from? It seems kind of strange. Are these utility maximizers just so social and empathetic they want everybody to be happy?
You could imagine the perfect global utility maximizer being created by self-modification of beings, or built by beings who desire such a maximizer. Why would they want that in the first place? Prosocial emotions (e.g. caused by cooperation and kin-selection instincts plus altruistic memes) could be a starting point. Another possible path is philosophical self-reflection. A self-modelling agent could model its utility as resulting from the valuation of mental states, e.g. a hedonist who thinks about what value is to him and concludes that what matters is the (un-)pleasantness of his brain states. From there, you only need a few philosophical assumptions to generalize:

1. Mental states are time-local; the psychological present lasts maybe up to three seconds only.

2. Our selves are not immutable metaphysical entities, but physical system states that are being transformed considerably (from fetus to toddler to preteen to adult to mentally disabled).

3. Other beings share the crucial system properties (brains with (un-)pleasantness); we even have common ancestors passing on the blueprints.

4. Hypothetically, though improbably, any being could be transformed into any other being in a gradual process by speculative technology (e.g. nanotechnology could transform me into you, or a human into a chimp, or a pig, etc.) without breaking life functions.

5. An agent might decide that it shouldn't matter how a system state came about, only what properties the system state has; e.g. it shouldn't matter to me whether you are a future version of me transformed by speculative technology starting with my current state, but only what properties your system states have (e.g. (un-)pleasantness).

I'm not claiming this is enough to beat everyday psychological egoism, but it could be enough for a philosopher-system to desire self-modification or the creation of an artificial global utility maximizer.
Come, now, it's hardly untestable. You can pay him if the FAI kills everyone to tile the universe with nematodes.
Rob Bensinger (11y):
That seems doable, if you trick the AI into tearing apart a simulation before it figures out it's in one. But how do you test whether the AI weighted the nematodes so highly because their qualia are extra phenomenologically vivid, and not because their qualia are extra phenomenologically clipperiffic?
I suspect we'd have to know a lot more about neuroscience and consciousness to define "feel more strongly" precisely enough for the question to have an answer. I also suspect that, if the answer doesn't come out the way we want it to, we'll substitute another question in its place that does, in the time-honored practice of claiming that universal, objective agenthood is defined by whatever scale humans win on.
Do you really think that is at all likely that a nematode might be capable of feeling more informed life-satisfaction than a human?

Most people in time and space have considered it strange to take the well-being of non-humans into account.

I don't think this is true. As gwern's The Narrowing Circle argues, major historical exceptions to this include gods and dead ancestors.

dead ancestors may not count as 'non-human', depending on your metric.

Same for most gods, given the degree to which they were anthropomorphized. (In fact, the Bhagavad-Gita talks about how Hindus need to anthropomorphize in order to give "personal loving devotion to Lord Krishna". [Quote from a commentary])

... which would imply that the reality is not anthropomorphic but empathising with it is a good thing.
Yep, ancestors are dead humans, gods are humans in the same way Batman is human. (I mean, Thor is one of the Avengers. I think that gives it away.) I wanted to say "animals" without implying that humans aren't animals. I remember reading about a Native American culture that had a designated Speaker for the Wolves who was supposed to represent them in meetings, but I can't remember any details. Could be bogus.
There are many indigenous cultures (with some hunters still around today) who ask forgiveness upon killing food animals. And history's full of bear cults, and animal species with names that translate into "people of the _," and taboos on harming various animals. I think the notion that humans have mostly only cared for the concerns of humans is the product of an industrial-age blind spot: only people who've never hunted or husbanded, and eat their meat from the slaughterhouse, have never thought about animal welfare.
Dead ancestors are not minds that experience anything.

Ancestor worshippers- who are the people whose opinions we're discussing- would disagree. Wikipedia:

Veneration of the dead or ancestor reverence is based on the belief that the dead have a continued existence...the goal of ancestor veneration is to ensure the ancestors' continued well-being

Sure, but there's a fact of the matter: It's not that we don't value the experiences or well-being of dead ancestors; it's that we hold that they do not have any experiences or well-being — or, at least, none that we can affect with the consequences of our actions. (For instance, Christians who believe in heaven consider their dead ancestors to be beyond suffering and mortal concerns; that's kind of the point of heaven.)

The "expanding circle" thesis notices the increasing concern in Western societies for the experiences had by, e.g., black people. The "narrowing circle" thesis notices the decreasing concern for experiences had by dead ancestors and gods.

The former is a difference of sentiment or values, whereas the latter is a difference of factual belief.

The former is a matter of "ought"; the latter of "is".

Slaveholders did not hold the propositional beliefs, "People's experiences are morally significant, but slaves do not have experiences." They did not value the experiences of all people. Their moral upbringing specifically instructed them to not value the experiences of slaves; or to regard the suffering of slaves as the appointed (and thus morally correct) lot in life of slaves; or to regard the experiences of slaves as less important than the continuity of the social order and economy which were supported by slavery.

You know, I think you're wrong about that. They talked about how savages needed to be ruled by civilised man, and the like, rather than claiming that they were the same as us but who gives a damn?
I am fairly confident that I haven't understood your point, as it doesn't seem to me to address the discussion above. My interpretation of your post is that it claims that people engaged in ancestor worship were factually wrong about whether their dead ancestors still counted as humans- e.g. whether or not they experienced anything. However, this is irrelevant to the question under discussion- of whether or not ancestor worship is a counter-example to the claim that most people throughout history haven't cared about non-humans. All that matters for this claim is whether or not most ancestor-worshippers thought that their ancestors qualified as people.
I think the point that fubarobfusco was trying to make with that was a partial refutation of the "narrowing circle" thesis that says we care less about people not like us today than in the past. S/he was trying to say, "we haven't stopped caring about anyone we used to care about, we've just stopped believing in them. If we still believed our dead ancestors had feelings, we'd still care about them." You're correct that all that matters for the question "did ancestor-worshippers care for non-humans" is whether the ancestor-worshippers thought their ancestors were human.
Therefore, by substitution, we don't experience anything in response to knowledge about things that will happen after we're dead?
What? Sorry, I don't see the connection. (It's my impression that the belief of ancestor-worshipers is not that their actions today fulfill the past living desires of now-dead ancestors, but that their actions today affect the experiences of their dead ancestors today.)
I haven't read the article by gwern that Qiaochu linked, so I didn't know that it referred specifically to ancestor worship rather than the more general (believed) evaporation of respect for ancestors' desires as a terminal value.

I've always believed having an issue with utility monsters is either a lack of imagination or a bad definition of utility (if your definition of utility is "happiness" then a utility monster seems grotesque, but that's because your definition of utility is narrow and lousy).

We don't even need to stretch to create a utility monster. Imagine there's a spacecraft that's been damaged in deep space. There's four survivors, three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day or one human to survive four days. The closest rescue ship is three days away. After assessing the situation and verifying the air supply, the three wounded crewmembers sacrifice themselves so the one is rescued.

To quote Nozick from wikipedia: "Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility." That is exactly what happens on the spaceship, but most people here would find it pretty reasonable. A real utility monster would look more like that than some super-happy alien.
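The air arithmetic in the spaceship scenario can be made explicit. A minimal sketch using only the numbers given above (4 person-days of air, rescue arriving on day 3): sharing the air equally kills everyone, while concentrating it on one crewmember saves a life.

```python
# Minimal sketch of the scenario's arithmetic: 4 person-days of air,
# rescue arrives on day 3.
AIR_PERSON_DAYS = 4
RESCUE_DAY = 3

def survivors(crew_breathing: int) -> int:
    """How many survive if `crew_breathing` people share the air equally."""
    days_of_air = AIR_PERSON_DAYS / crew_breathing
    return crew_breathing if days_of_air >= RESCUE_DAY else 0

print(survivors(4))  # everyone shares: air lasts 1 day, all die -> 0
print(survivors(1))  # three sacrifice themselves: air lasts 4 days -> 1
```

The surviving crewmember gains "enormously greater" utility from the others' sacrifice than they each lose, which is the sense in which this ordinary lifeboat case fits Nozick's definition.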

Not exactly like that... :-) http://en.wikipedia.org/wiki/R_v_Dudley_and_Stephens

When you're talking about the utility of squirrels, what exactly are you calculating? How much you personally value squirrels? How do you measure that? If it is just a thought experiment ("I would pay $1 per squirrel to prevent their deaths") how do you know that you aren't just lying to yourself & if it really came down to it, you wouldn't pay? Maybe we can only really calculate utility after the fact by looking at what people do rather than what they say.

I may not actually want to pay $1 per squirrel, but if I still want to want to, then that's as significant a part of my ethics as my desire to avoid being a wire-head, even though once I tried it I would almost certainly never want to stop.
I would rather observe you & see what you do to avoid becoming a wirehead. I'd put saying you want to avoid becoming a wirehead & saying you want to want to pay to save the squirrels in the same camp -- totally unprovable at this point in time. In the future maybe we can scan your brain & see which of your stated preferences you are likely to act on; that'd be extremely cool, especially if we could scan politicians during their campaigns.
How do you know those people aren't still "lying to themselves"? Humans are not known for being perfect, bias-free reasoners. Maybe we can only really calculate utility after the fact by looking at what perfect Bayesian agents do rather than mere mortals.

I am mildly consequentialist, but not a utilitarian (and not in the closet about it, unlike many pretend-utilitarians here), precisely because any utilitarianism runs into a repugnant conclusion of one form or another. That said, it seems that the utility-monster type RC is addressed by negative utilitarians, who emphasize reduction in suffering over maximizing pleasure.

Isn't there an equivalent negative utility monster, who is really in a ferociously large amount of pain right now?
Perhaps, but if your utility scale can actually become negative (rather than simply hitting zero), the solution of assisted suicide is fairly simple and cheap to implement.
Killing it reduces the overall suffering, since its quality of life is well below the "barely worth living" level, with no hope of improvement.
What if it can't be easily killed?
That doesn't work for preference utilitarians (it would strongly prefer to remain alive).
The purely negative utility monster (whether it is in a ferociously large amount of pain or not), which also by definition has no diminishing returns in its utility function, just hits zero pain at some point. Until it is in pain again, it is simply not part of the equation. The difference is: if your goal is to minimize X, you can't go on forever without diminishing returns (but with diminishing returns, you can), whereas if your goal is to maximize Y, you can go on forever with or without diminishing returns. edit: It depends on how the function is defined. Above, I used allocated resources vs. utility (utility = relief from suffering). But a negative utility monster would be possible if its condition got automatically worse and it had no diminishing returns of (e.g.) suffering per unit of pain, while all the other beings did.

Well, isn't the central end of humanity (nay all sentient life) contentment and ease?

Seems like a strange assumption. Indeed, the reverse is often argued, that the central end of life is to be constantly facing challenges, to never be content, that we should seek out not ease but difficulty.

"How dull it is to pause, to make an end, To rust unburnished, not to shine in use!"

Moreover, even if your assertion were true for humans, and even all mammals, we can imagine non-mammalian sentient life.

So yeah... all mammals do not avoid painful situations and seek contented ones? If one kicks a dog, does the dog actually like it, or would it not eventually fight back? Isn't that part of the definition of sentience? Your point essentially validates moving outside of one's comfort zone. However, I doubt many who advocate doing that would say humans don't by design seek situations of ease over situations of discomfort. Moving outside one's comfort zone by, say, learning to ride a bike is different from avoiding a stressful work or home environment. As for non-mammals, well, as humans are mammals, I'm using our taxonomical order as a base. I don't know if the same applies to birds, reptiles or amphibians.

Saying that a utility monster means a "creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined" is vague, because it doesn't mean a creature that's just more capable, it's a creature that's a specific kind of "more capable". Just because human beings can experience more utility from the same actions than nematodes can doesn't make humans into utility monsters, because that's the wrong kind of "more capable". According to your own link, a utility monster is not susceptible to diminishing marginal returns, which doesn't seem to describe humans and certainly isn't a distinction between humans and nematodes.

The qualification that a utility monster is not susceptible to diminishing marginal returns is made only because they're still assuming utility is measured in something like dollars, which has diminishing marginal returns, rather than units of utility, which do not. Removing that qualification doesn't banish the utility monster. The important point is that the utility monster's utility is much larger than anybody else's.

Removing that qualification does banish the utility monster. If the utility monster gets greater utility from dollars than someone else (let's say nematodes), but is still subject to diminishing marginal returns (at a slower rate than nematodes), then the utilitarian result is to start giving dollars to the utility monster until its utility-per-dollar has diminished enough to match the starting utility-per-dollar of the nematodes, and then to give to both the utility monster and the nematodes in a proportion which keeps them at the same rate. The "utility monster" has ceased to be a utility monster because it no longer gets everything. It still gets more, of course, but that's the equivalent of deciding that the starving person gets the food before the full person.
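The allocation rule described above is just greedy allocation by marginal utility. A minimal sketch with made-up diminishing-returns curves (the weights and curve shape are illustrative assumptions, not anyone's actual numbers): the "monster" gets dollars first, but once its marginal utility diminishes to the nematodes' starting level, the nematodes get dollars too.

```python
# Greedy dollar-by-dollar allocation by marginal utility.
# Illustrative assumption: the marginal utility of a party's next dollar
# is weight / (dollars_so_far + 1) -- diminishing returns for everyone,
# just starting from different levels.

def marginal_utility(weight: float, dollars_so_far: int) -> float:
    return weight / (dollars_so_far + 1)

# The "monster" gets 5x the utility per dollar, but still diminishes.
weights = {"monster": 5.0, "nematodes": 1.0}
allocated = {"monster": 0, "nematodes": 0}

for _ in range(200):  # hand out 200 dollars, one at a time
    best = max(weights, key=lambda k: marginal_utility(weights[k], allocated[k]))
    allocated[best] += 1

# The monster gets most of the dollars, but not all of them: once its
# marginal utility falls to the nematodes' level, they split the rest
# in the proportion that keeps their marginal utilities equal.
print(allocated)
```

With these curves the monster ends up with the lion's share but the nematodes get a nonzero allocation, which is the "starving person gets the food first" outcome rather than the classic monster-gets-everything one.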

The "utility monster" has ceased to be a utility monster because it no longer gets everything. It still gets more, of course, but that's the equivalent of deciding that the starving person gets the food before the full person.

This sounds like it could be almost as repugnant as a utility monster that gets literally everything, depending on precisely how much "more" we're talking about.

Edit: if I were the kind of person who found utility monsters repugnant, that is. I'd already dissolved the "OMG what if utility monsters??" problem in my own mind by reasoning that the repugnant feeling comes from representing utility monsters as black boxes, stripping away all of the features of theirs that make it intuitively obvious why they generate more utility from the same inputs. Put another way, the things that make real-life utility monsters "utility monsters" are exactly the things that make us fail to recognize them as utility monsters. When a parent values their child's continued existence far more than their own, we don't call the child a "utility monster" if the parent sacrifices themselves to save their child, even though that's exactly the child's role in that situation.

Re. "black box", nice way of putting it. This post just gives an example where we can look inside the black box.
Can this be resolved by adding more monsters? I.e., instead of having just one utility monster on Earth, we could have a million or even 6 billion monsters (as many as there are humans). This would allow the monsters to fully benefit from consuming "everything" or at least close enough to "everything" to raise the dilemma.
Definitionally speaking, "making each human into a utility monster" is the same as not having any utility monsters at all; utility-monsterdom is a relative property of one agent with respect to the other agents in the population.
There are other agents in the population than humans. (I apologize for the late reply. I didn't check my notifications.)
I want to criticise either the idea that diminishing returns is important, or, at least, that dollar values make sense for talking about them. Suppose we have a monster who likes to eat. Each serving of food is just as tasty as the previous, but he still gets diminishing returns on the dollar, because the marginal cost of the servings goes up. We also have nematodes, who like to eat, but not as much. They never get a look in, because as the monster eats, they also suffer diminished utilons per dollar. So the monster is serving the 'purpose' of the utility monster, but still has diminishing returns on the dollar. If we redefine diminishing returns to be on something else, I'm not sure it could be well justified or immune to this issue. And, although humans are not an example of this sort of monster, the human race certainly is.
Presumably, that's diminishing marginal returns relative to dollars input. In other words, "You can only drink 30 or 40 glasses of beer a day, no matter how rich you are."
Units of utility are non-fungible, right?
They surely are fungible. The whole point of using utility functions in the first place, is that I can't convert apples into children saved, but I can convert utilons gained from eating apples into utilons gained from saving children, because both are just real numbers.
But you can't take utilons from the apple tree and give them to the children. I guess I meant 'transferable' instead of 'fungible', or perhaps something else. The utility monster being associated with more utility does not require that the rest of the world be associated with less.
Right; I can't give you one of my utilons directly. If the world is already in a Pareto-optimal state, then changing it to benefit the utility monster would require making someone else worse off.
What does the Pareto-optimal state look like if a Utility Monster exists?
Pareto-optimal means that no one can be made better off without making someone else worse off. It doesn't care about how much better off anyone can be made, so the existence of a Utility Monster makes no difference to which states are Pareto-optimal; they could range all the way from giving all the resources to the Utility Monster to giving it nothing. So my comment was fairly trivial, given the definition of Pareto-optimal: I was just trying to emphasize that there is generally a wide range of Pareto-optimal states. You can't just increase the utility for one person arbitrarily high without trading it off against someone else's utility; you can start, but eventually you hit a Pareto-optimal state, and then you've got tradeoffs to make.
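The point that Pareto-optimality doesn't single out any allocation can be checked directly with a tiny sketch (hypothetical payoffs: the "monster" gets 10 utilons per resource unit, the other agent gets 1):

```python
# Two agents share 4 indivisible resource units; every full allocation
# is Pareto-optimal, from "monster gets everything" to "monster gets nothing".
def utilities(monster_share, total=4):
    return (10 * monster_share, 1 * (total - monster_share))

allocations = [utilities(k) for k in range(5)]

def pareto_optimal(u, others):
    # u is Pareto-optimal if no other allocation is at least as good
    # for both agents (and different from u).
    return not any(v[0] >= u[0] and v[1] >= u[1] and v != u for v in others)

assert all(pareto_optimal(u, allocations) for u in allocations)
```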
It looks like you are taking some kind of sum across all agents as the utility of the world; that is incompatible with the basic assumption of the utility monster as I understand it. The utility monster is something such that as it controls scarce resources, the marginal utility that it contributes to the world as a whole (per additional resource that it controls/consumes) increases. (With everything else having a decreasing marginal return). The argument is that such a creature would receive all of the resources, and that is bad; the counterargument is that given the described setup, giving the utility monster all of the resources is good, and the fact that we intuit that it is bad is a problem with our intuition and not the math.
As far as I can tell, the definition involving increasing marginal returns was invented by some wikipedian. Wikipedia does not cite a source for that definition. According to every other source, a utility monster is an agent who gets more utility from having resources than anyone else gets from having resources, regardless of how the utility monster's marginal value of resources changes with the amount of resources already controlled. Either way, the argument for giving the utility monster all the resources comes from maximizing the sum of the utilities of each agent. I'm not sure what you mean by this being incompatible with the assumption of the utility monster. Edit: Also, rereading my previous comment, I notice that I was actually not taking a sum across the utilities of all agents. Pareto-optimal does not mean maximizing such a sum. It means a state such that it is impossible to make anyone better off without making anyone else worse off.
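Under the standard (constant-advantage) definition, the argument for giving the monster everything falls straight out of sum-maximization. A minimal sketch with made-up utility functions:

```python
# One monster and ten ordinary agents split 100 units of a resource.
# The monster gets a constant 1000 utilons per unit; each ordinary agent
# gets sqrt(x) utilons from x units (decreasing marginal returns).
# All numbers are hypothetical.
from math import sqrt

def total_utility(monster_units, total=100, n_others=10):
    per_other = (total - monster_units) / n_others
    return 1000 * monster_units + n_others * sqrt(per_other)

best = max(range(101), key=total_utility)
assert best == 100  # the sum is maximized by giving the monster everything
```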
A +utility outcome for one agent is incomparable to a -utility outcome for a different agent on the object layer. It is impossible to compare how much the utility monster gains from security to how much the peasant loses from lack of autonomy without taking a third viewpoint; this third viewpoint becomes the only agent at the meta level (or, if there are multiple agents at the first meta level, it goes up again, until there is only one agent at some level of meta).
This is true; there is no canonical way to aggregate utilities. An agent can only be a utility monster with respect to some scheme for comparing utilities between agents.
Such a scheme is only measuring its own utility of different states of the universe; a utility monster is not a problem for such a scheme/agent, any more than preventing 3^^^3 people being tortured for a million years at zero cost would be a problem.
I'm not quite sure what you mean. If you mean that any agent that cares disproportionately about a utility monster would not regret that it cares disproportionately about a utility monster, then that is true. However, if humans propose some method of aggregating their utilities, and then they notice that in practice, their procedure disproportionately favors one of them at the expense of the others, the others would likely complain that it was not a fair aggregation. So a utility monster could be a problem.
If humans propose some method of aggregating their utilities, and later notice that following that method is non-optimal, it is because the method they proposed does not match their actual values. That's a characteristic of the method, not of the world.
That's right; being a utility monster is only with respect to an aggregation. However, the concept was invented and first talked about by people who thought there was a canonical aggregation, and as an unfortunate result, the dependency on the aggregation is typically not mentioned in the definition.
I can't resolve paradoxes that come up with regard to people who have internally inconsistent value systems; were they afraid that the canonical aggregation was such that they personally were left out, in a manner that proved they were bad (because they preferred outcomes where they did better than they did at the global maximum of the canonical aggregation)?
'Fungible' means you don't care where you get your utilons from, as long as it's the same number of utilons.
Yes, I used the wrong term. For 'fungible' to be cogent in reference to a utility monster, utilons would have to be transferable.
Wikipedia does not cite a source for its claim that utility monsters have anything to do with non-decreasing marginal utility, nor does the claim make any sense at all. Does anyone know if some wikipedian just made this up, or whether it was published somewhere previously? I've also asked about this on the wikipedia article's talk page. If no one can find any prior source for the statement, I will edit it.

I discussed this recently elsewhere: https://utilitarian.quora.com/Utility-monsters-arent-we-all I'm glad I'm not the only one who's thought of this.

[This comment is no longer endorsed by its author]

Nice post.

I disagree with the premise that humans are utility monsters, but I see what you are getting at.

I'm a little wary of the concept of a utility monster: it is easy to imagine and debate, but I don't think it is immediately realistic.

I want my considerations of utility to be aware of possible future outcomes. If we imagine a concrete scenario like Zach's fantastic slave pyramid builders for an increasingly happy man, it seems obvious that there is something psychotic about an individual who could be made more happy by the senseless toil of other...


This is fucking brilliant.


One man's utility monster is another man's neighbour down the street named Bob, whom you see when you go for walks sometimes.

The human vs animal issue makes more sense if we focus not on "utility" but "asskicking".

I do not see a contradiction in claiming that a) utility monsters do not exist and b) under utilitarianism, it is correct to kill an arbitrarily large number of nematodes to save one human.

The solution to this issue is to reject the idea of a continuous scale of "utility capability", under which nematodes can feel a tiny amount of utility, humans can feel a moderate amount, and some superhuman utility monster can feel a tremendous amount. Rather, we can (and, I believe, should) reduce it to two classes: agents and objects.

An agent, such as a hum...

If I contract a neurodegenerative illness, which will gradually reduce my cognitive function, until I end up in a vegetative state, do I retain agent-ness throughout, or at some point lose equal footing with healthy me in one go? Neither seems a good description of my slow slide from fully human to vegetable. What is an "average day"? My average day probably has greater utility than that of a captive of a sadistic gang...
That looks like a great foundation for a set of laws, but a poor foundation for a set of ethics.
How so? I view this as an implementation of equality among agents. What makes it ethically repugnant?
This is a particular instance of the general approach, "I have to assign a number to each of these items, but it's hard and contentious to do, so instead I will give them all zeroes (objects) or ones (agents)." It always increases the total error. The world is not divided into agents and objects, and this approach would still increase total error, or at best leave it unchanged, even if they were, since errors in classification give a larger total error when they are thresholded instead of just left as, say, probabilities. You should also consider that, when AI is developed, you will become an "object". This approach doesn't work well even for humans. Very intelligent humans, armed with and experienced with mathematics, large computerized databases, regression analysis, probability and statistics, information theory, dimension reduction, data mining, machine learning, stability analysis, optimization techniques, and a good background in cog sci, biology, & physics, think more differently from average humans around 1600 AD, than average humans in 1600 AD did from dogs. So where do you draw the line?
Reference? The prior that humans in 1600 thought more like dogs than like modern humans (despite Euarchontoglires and Laurasiatheria diverging about 90 MYA, while humans in 1600 diverged from modern humans only 400 years ago) I might estimate at 90M:400 against, if I had to do so very quickly. Why do you think that more than half of the change in thinking over the last 90 million years has occurred in the last 400?
Obviously I cannot cite a reference. This is an opinion. I take it you think less than half of the sum total of what has been discovered or learned was learned in the past 400 years? Your priors suggest you assume a linear advance in thinking, but hominid cranial enlargement began only 1-2 million years ago. So you must also expect, as a prior, that the difference between humans and chimps is 1/90th to 1/45th of the difference between chimps and dogs. In that case, why exclude chimps from our society? The maximum speed at which humans have travelled is about 7 miles per second. Assuming a travel rate of 0 miles per second 4 billion years ago, we do not conclude that bacteria were able to propel themselves at 3.5 miles per second 2 billion years ago. I don't really think there's been a change in humans. I think there are new tools available that help us think better, much like the new machines available that let us move fast.
You don't believe that hominid cranial enlargement is responsible for more than half of the difference between modern humans and dogs, so why does it matter when it happened? Suppose that dogs are 50-100 times further away from humans than chimps are. Further suppose that bacteria are more than 100 times further away from humans than dogs are. Why is one of those a reason to include chimps, and the other not a reason to include dogs? (Rocks are more than 100 times as different from humans as fungi are, right?) Rather than use relative closeness, I'm going to assert that absolute distance is important. (If that means that a typical human 400 years ago would not qualify now, I think it says more about them than it does about me; but I don't think that is the case.) I also danced around and didn't actually say that 90M:400 was the best prior; I said that if I needed one quickly, it's the one I would use. To refine that number first requires refining the question.
That it is an implementation of equality among unequal agents. Why is an average day of Agent Alpha worth the same as an average day of Agent Beta, and how does Agent Beta determine how much utility Agent Alpha gains from something, other than via the reference economy? If we allow the agents to determine their own utility derived from, say, fiat currency, we have instead of a utility economy a financial economy. Everyone gains instrumental value from participating (or they stop participating). Allow precommitment and assume rational, well-informed agents, and the economic system maximizes each individual's utility within the possible parameters.
Said Achmiz:
This seems like a pretty sensible account to me. (Does anyone see any obvious flaws?) Could you explain this a bit more? I'm not sure I understand. (FYI, I know almost nothing about currency exchange.)
I don't know much about it either, but the basic principles I'm trying to transfer are: a) N different nations have N different currencies. Agents 1 through N have Agent1-Utils, Agent2-Utils...AgentN-Utils. b) They are able to interact in an international market by setting an exchange rate between their currencies. In this case, we propose the extra step of creating a single societal currency, which would be analogous to a "World Dollar", so that we need only N different conversions (Agent i to Society, i = 1..N) rather than N(N-1)/2 (Agent i to Agent j, i = 1..N, j = i+1..N), and the responsibility to set conversion rates is a societal, rather than individual, responsibility. Admittedly, this analogy has its own "utility monster" - a nation which is economically powerful enough to manipulate exchange rates. However, that doesn't quite exist in the "Utility Economy" unless one agent is powerful enough to bend society to their whim, in which case it's not so much a utilitarian society as a dictatorship.
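The saving from the hub scheme can be counted directly (the exchange rates below are hypothetical, just to show the conversion-via-hub mechanics):

```python
from itertools import combinations

# N agents each have their own utilon "currency". Pairwise exchange rates
# require N*(N-1)/2 conversions; a single societal currency needs only N.
N = 5
pairwise = len(list(combinations(range(N), 2)))
assert pairwise == N * (N - 1) // 2  # 10 pairwise conversions for N = 5

# Hub scheme: one societally-set rate per agent (hypothetical rates).
to_society = {0: 1.0, 1: 0.5, 2: 2.0, 3: 0.25, 4: 4.0}

def convert(amount, i, j):
    # Convert agent i's utilons to agent j's via the societal currency.
    return amount * to_society[i] / to_society[j]
```

Note that setting the N hub rates is exactly the "societal responsibility" the comment describes; the code only shows why the bookkeeping shrinks from quadratic to linear.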