Some moral questions I’ve seen discussed here:

  • A trolley is about to run over five people, and the only way to prevent that is to push a fat bystander in front of the trolley to stop it. Should I?
  • Is it better to allow 3^^^3 people to get a dust speck in their eye, or one man to be tortured for 50 years?
  • Whom should I save, if I have to pick between one very talented artist and five random nobodies?
  • Do I identify as a utilitarian? a consequentialist? a deontologist? a virtue ethicist?

Yet I spend time and money on my children and parents that might be “better” spent elsewhere under many moral systems. And if I only cared as much about my parents and children as I do about random strangers, many people would see me as somewhat of a monster.

In other words, “commonsense moral judgements” find it normal to care differently about different groups; in roughly decreasing order:

  • immediate family
  • friends, pets, distant family
  • neighbors, acquaintances, coworkers
  • fellow citizens
  • foreigners
  • sometimes, animals
  • (possibly, plants...)
… and sometimes, we’re even perceived as having a *duty* to care more about one group than another (if someone saved three strangers instead of two of his children, how would he be seen?).

In consequentialist / utilitarian discussions, a recurring question is “who counts as an agent worthy of moral concern?” (humans? sentient beings? intelligent beings? those who feel pain? how about unborn beings?), which covers the later part of the spectrum above. However, I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help with the rest).

Let’s consider two rough categories of decisions:

  • impersonal decisions: What should government policy be? By what standard should we judge moral systems? On which cause is charity money best spent? Whom should I hire?
  • personal decisions: Where should I go on holiday this summer? Should I lend money to an unreliable friend? Should I take a part-time job so I can better take care of my children and/or parents? How much of my money should I devote to charity? In which country should I live?

Impartial utilitarianism and consequentialism (like the questions at the head of this post) make sense for impersonal decisions (including when an individual is acting in a role that requires impartiality - a ruler, a hiring manager, a judge), but clash with our usual intuitions for personal decisions. Is this because, under those moral systems, we should apply the same impartial standards to our personal decisions, or because those systems are only meant for discussing impersonal decisions, and personal decisions require additional standards?

I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist (not that I mind much apart from confusion during the yearly survey; not knowing my values would be a problem, but not knowing which label I should stick on them? eh, who cares).

I also have similar ambivalence about Effective Altruism:

  • If it means that I should care as much about poor people in third-world countries as I do about my family and friends, then it’s a bit hard to swallow.
  • However, if it means that, assuming one is going to spend money to help people, one had better make sure that money helps them in the most effective way possible, then I have no objection.

Scott’s “give ten percent” seems like a good compromise on the first point.

So what do you think? How does "caring for your friends and family" fit in a consequentialist/utilitarian framework?

Other places this has been discussed:

  • This was a big debate in ancient China: the Confucians considered it normal to have “care with distinctions” (愛有差等), whereas Mozi preached “universal love” (兼愛) in opposition, claiming that care with distinctions was a source of conflict and injustice.
  • “Impartiality” is a big debate in philosophy - the question of whether partiality is acceptable or even required.
  • The philosophical debate between “egoism and altruism” seems like it should cover this, but it feels a bit like a false dichotomy to me (it’s not even clear whether “care only for one’s friends and family” counts as altruism or egoism)
  • “Special obligations” (towards friends and family, or those one has made a promise to) is a common objection to impartial, impersonal moral theories
  • The Ethics of Care seems to cover some of what I’m talking about.
  • A middle part of the spectrum - fellow citizens versus foreigners - is discussed under Cosmopolitanism.
  • Peter Singer’s “expanding circle of concern” presents moral progress as caring for a wider and wider group of people (counterpoint: Gwern's Narrowing Circle) (I haven't read it, so can't say much)

Other related points:

  • The use of “care” here hides an important distinction between “how one feels” (my dog dying makes me feel worse than hearing about a schoolbus in China falling off a cliff) and “how one is motivated to act” (I would sacrifice my dog to save a schoolbus in China from falling off a cliff). Yet I think we have these gradations on both criteria.
  • Hanson’s “far mode vs. near mode” seems pretty relevant here.

18 comments
[anonymous] · 9y

One of the major problems I have with classical "greatest good for the greatest number" utilitarianism, the kind that most people think of when they hear the word, is that people act as if its prescriptions are rules handed to them from on high. When given the trolley problem, for example, people think you should save the five people rather than the one for "shut up and calculate" reasons, and that they are just supposed to count all humans exactly the same because those are "the rules".

I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea. The only way to get moral weights is from your personal preferences. Do you find that you assign more moral weight to friends and family than to complete strangers? That's perfectly fine. If someone else says they assign all humans equal weight, well, that's their decision. But when people start telling you that your weights are assigned wrong, then that's a sign that they still think morality comes from some outside source.

Morality is (or, at least, should be) just the calculus of maximizing personal utility. That we consider strangers to have moral weight is just a happy accident of social psychology and evolution.

I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea.

Suppose I get my weights from outside of me, and you get your weights from outside of you. Then it's possible that we could coordinate and get them from the same source, and then agree and cooperate.

Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.

Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.

In practice people with different values manage to coordinate perfectly fine via trade; I agree an external source of morality would be sufficient for cooperation, but it's not necessary (also having all humans really take an external source as the real basis for all their choices would require some pretty heavy rewriting of human nature).

[anonymous] · 9y

But that presupposes that I value cooperation with you. I don't think it's possible to get moral weights from an outside source even in principle; you have to decide that the outside source in question is worth it, which implies you are weighing it against your actual, internal values.

It's like how selfless action is impossible; if I want to save someone's life, it's because I value that person's life in my own utility function. Even if I sacrifice my own life to save someone, I'm still doing it for some internal reason; I'm satisfying my own, personal values, and they happen to say that the other person's life is worth more.

But that presupposes that I value cooperation with you. I don't think it's possible to get moral weights from an outside source even in principle; you have to decide that the outside source in question is worth it, which implies you are weighing it against your actual, internal values.

I think you're mixing up levels, here. You have your internal values, by which you decide that you like being alive and doing your thing, and I have my internal values, by which I decide that I like being alive and doing my thing. Then there's the local king, who decides that if we don't play by his rules, his servants will imprison or kill us. You and I both look at our values and decide that it's better to play by the king's rules than not play by the king's rules.

If one of those rules is "enforce my rules," then when the two of us meet we each expect the other to be playing by the king's rules and to be willing to punish us for not doing so. This is way better than not having any expectations about the other person.

Moral talk is basically "what are the rules that we are both playing by? What should they be?" It would be bad if I pulled the lever to save five people, thinking that this would make me a hero, and then got shamed or arrested for causing the death of the one person. The reasons to play by the rules at all are personal: appreciating following the rules in an internal way, appreciating other people's appreciation of you, and fearing other people's reprisal if you violate the rules badly enough.

[anonymous] · 9y

If the king was a dictator and forced everyone to torture innocent people, it would still be against my morals to torture people, regardless of whether I had to do it or not. I can't decide to adopt the king's moral weights, no matter how much it may assuage my guilt. This is what I mean when I say it is not possible to get moral weights from an outside source. I may be playing by the king's rules, but only because I value my life above all else, and it's drowning out the rest of my utility function.

On a related note, is this an example of an intrapersonal utility monster? All my goals are being thrown under the bus except for one, which I value most highly.

Your example of the king who wants you to torture is extreme, and doesn't generalize ... you have set up not torturing as a non-negotiable absolute imperative. A more steelmanned case would be compromising on negotiable principles at the behest of society at large.

From what little I know about EA, they tend to mix together two issues: one is "Whom to care about?" and the other is "How best to care about those you care about?" Probably in part owing to the word "care" in English having multiple meanings, but certainly not entirely so.

However, I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help with the rest).

I think, like you point out, this gets into near / far issues. How I behave around my family is tied into a lot of near mode things, and how I direct my charitable dollars is tied into a lot of far mode things. It's easier to talk far mode in an abstract way (Is it better to donate to ease German suffering or Somali suffering?) than it is to talk near mode in an abstract way (What is the optimal period for calling your mother?).

This was a big debate in ancient China: the Confucians considered it normal to have “care with distinctions” (愛有差等), whereas Mozi preached “universal love” (兼愛) in opposition, claiming that care with distinctions was a source of conflict and injustice.

The Spring and Autumn period definitely seems relevant, and I think someone could get a lot of interesting posts out of it.

The Spring and Autumn period definitely seems relevant, and I think someone could get a lot of interesting posts out of it.

Yep, I've been reading a fair amount about it recently; I had considered first making a "prequel" post talking about that period and about how studying ancient China can be fairly interesting, in that it shows us a pretty alien society that still had similar debates.

I had heard from various sources how Confucius said it was normal to care more about some than others, and it took me a bit of work to dig up what that notion was called exactly.

[anonymous] · 9y

How does "caring for your friends and family" fit in a consequentialist/utilitarian framework?

If you have a desert-adjusted moral system, especially if combined with risk aversion, then it might make sense to care for friends and family more than others.

You want to spend your “caring units” on those who deserve them, you know enough about your friends and family to determine they deserve caring units, and you are willing to accept a lower expected return on your caring units to reduce the risk of giving to a stranger who doesn’t deserve them.
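To make the risk-aversion point concrete (a toy example; the numbers are invented for illustration, not anything from the post): suppose you are risk-averse, valuing an outcome of x desert-adjusted units of good at √x. A caring unit given to a well-known family member reliably produces 0.8 units of good, worth √0.8 ≈ 0.89 to you. The same unit given to a stranger produces 1.2 units if they turn out to deserve it (say with probability 0.75) and 0 otherwise: a higher expected return of 0.9, but only 0.75 · √1.2 ≈ 0.82 to a risk-averse giver. So the lower-but-surer family option wins, which is exactly the trade-off described above.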

Now to debate myself…

What about that unbearable cousin? A family member, but not deserving of your caring units.

Also, babies. If an infant family member and a poor Third World infant both have unknown levels of desert, shouldn’t you give to the poor Third World infant, assuming this will have a greater impact?

The impression I get when reading posts like these is that people should read up on the morality of self-care. If I'm not "allowed" to care for my friends, family, or myself, not only would my quality of life decrease, it would decrease in such a way that it would be harder and less efficient for me to actively care about (e.g. donate to) people I don't know.

But is caring for yourself and your friends and family an instrumental value that helps you stay sane so that you can help others more efficiently, or is it a terminal value? It sure feels like a terminal value, and your "morality of self-care" sounds like a roundabout way of explaining why people care so much about it by making it instrumental.

I don't know. I also don't know if terminal values for utility maximizers and terminal values for fallible human beings perfectly line up, even if humans might strive to be perfectly selfless utility maximizers.

What I do know is that for a lot of people the practical utility increase they can manage goes up when they have friends and family they can care about. If you forbid people from self-care, you create a net decrease of utility in the world.

I think ultimately, we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother. What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?

What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?

If that were the real reason you would treat your brother better than one kid in Africa, then you would be willing to sacrifice a good relationship with your brother in exchange for saving two good brother-relationships between poor kids in Africa.

I agree you could evaluate impersonally how much good the institution of the family (and other similar things, like marriages, promises, friendship, nation-states, etc.) creates, and thus how "good" our natural inclinations to help our family are (on the plus side: it sustains the family, an efficient form of organization and child-rearing; on the down side: it can cause nepotism). But we humans aren't moved by that kind of abstract consideration nearly as much as we are by a desire to care for our family.

we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother.

We have the moral imperative to have the same care for them, but not to act in accordance with equal care? This is a common meme, if rarely spelled out so clearly. A "morality" that consists of moral imperatives to have the "proper feelings" instead of the "proper doings" isn't much of a morality.

I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist

Consequentialism just means the rightness of behaviour is determined by its result. (The World's Most Reliable Encyclopaedia™ confirms this.) So you can be a partial (as in not impartial) consequentialist, a consequentialist who thinks good results for kith & kin are better than good results for distant strangers.

As for utilitarianism, it depends on which definition of utilitarianism one chooses. Partiality is compatible with what I call utilityfunctionarianism (and with additively-separable-utility-function-arianism), but contradicts egalitarian utility maximization.
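One way to make that contrast explicit (my notation, not a standard definition): write the overall goodness of an action a as a weighted sum over everyone affected,

U(a) = Σ_i w_i · u_i(a)

where u_i is person i's welfare and w_i ≥ 0. Egalitarian utility maximization fixes w_i = 1 for everyone; a partial but still additively separable utility function allows w_family > w_friends > w_strangers > 0. Both are consequentialist in the sense above (only results matter), but only the second matches the commonsense gradient of care from the top of the post.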