TLDR: Words are complicated. I understand why someone would consider an action 100% altruistic if they define action solely as a thing someone physically does. But I doubt my decision-making process ever consists of 100% altruistic thoughts. And a “100% altruistic action” would still be a “self-interested” (but not a “selfish”) action too.

After reading my post, Long-term Short-term Happiness, my friend Dima objected to my statement, “I don’t take any 100% altruistic action.” He said he thinks these types of actions exist.

I defended myself by saying that I was only talking about myself. But I do lean, with somewhat less confidence, towards believing that what I said in that post is true for almost every present human and many other beings.[1]

So I took Dima’s suggestion to look at Fundamentals of Ethics by Russ Shafer-Landau to understand why he thought purely altruistic actions exist.[2] I skimmed its chapter on psychological egoism (egoism), the idea that everything we do is motivated by a desire to benefit ourselves.

I’m not sure I agree with psychological egoism. It would depend on what someone means by a desire to benefit themself.

Helga and Horace

Suppose Helga dives in front of a bullet, fully expecting to die, to save her friend Horace. Is that a purely altruistic action?

One defense of egoism would be that Helga sacrificed herself because her reputation would’ve been damaged otherwise. If that’s true, I agree that’s not an altruistic action. But I think it’s plausible that Helga’s reputation wouldn’t be hurt. I’d imagine plenty of people don’t feel society expects them to take a bullet for a friend.

Helga is hypothetical. So I guess someone could speculate about a lot of things along those lines. 

But it seems plausible that Helga only receives pleasure from sacrificing her life because she’d be sad if Horace died. So sad that, in the moment she sacrifices herself, she’d rather be dead than live knowing she could’ve saved him. That could still be seen as egoistic. It would mean she took the bullet to maximize her immediate happiness.

So I’d agree that Helga’s sacrifice is a self-interested action.[3] Does that mean it can’t be altruistic too? When googling altruism, the first definition I found claimed that to be altruistic, someone must be selfless. I initially interpreted selfless as a complete lack of self-interest. But that same dictionary defined selfless as “unselfish.”[4]

And Helga would only be sad if Horace died because she cares about Horace. If Helga only genuinely cared about herself, she wouldn’t help Horace. The only way her sacrifice maximizes her happiness is through helping Horace. It doesn’t sound like she’s being “selfish” to me.[5]

Therefore, I still think every action I take is self-interested. But that self-interested action could be 100% altruistic.

It depends on how action (and every word) is defined too.

I could take the term action somewhat literally, to refer to Helga physically diving in front of the bullet. However, I don’t know how I’d draw a clear line between what’s an action and what’s decision-making. Aren’t we always thinking as we take actions?[6] And I’m skeptical that anyone’s decision-making process ever consists of 100% altruistic thoughts. As I’ve said, I feel like the “moral part” of me is constantly fighting with my selfish desires.[7] That means my perception of whether an action is moral feels like one of many variables that affect my happiness.

Impending Doom

So I’d guess that when anyone sacrifices themself, their subconscious thought process wouldn’t be asking, “Should I save that being’s life?” They’d ask themselves, potentially with a split second to answer if they’re considering diving in front of a bullet, “Should I sacrifice my life to save that being's life because the happiness that would bring me outweighs the amount of happiness I think I’d have in the rest of my life without that being?” 

I don’t mean to suggest they’re vividly imagining their lives as a senior citizen. It’s hard for me to do that now, even though I’m not trying to make a split-second decision. Based on my hazy memory of my experience irrationally[8] feeling like I was going to die when I went skydiving 4 years ago, I’d guess someone considering diving in front of a bullet would judge their future happiness based on their current level of happiness. And if they have a little more time to think (skydiving was a few minutes), I wouldn’t be surprised if people think about how their looming death would prevent them from achieving their wildest dreams.

Ultimately, I doubt anyone’s purely altruistic. But their selfish desires may never be reflected by their “actions.”

(cross-posted from my blog: https://utilitymonster.substack.com/p/pure-altruism)

  1. ^

    So, if a mutation led you to not fundamentally value happiness, this post doesn’t apply to you. Theoretically, a mutation could've led me to value happiness more than others. But other people seem to really like being happy too.

  2. ^

    I skimmed the 4th edition of the book.

  3. ^

    Lexico, the dictionary Google features results from, defines self-interest as “One's personal interest or advantage, especially when pursued without regard for others.” I’m interpreting that to mean self-interested actions are often selfish.

  4. ^

    Other dictionaries also tend to define altruism as acting unselfishly or as acting at a cost to ourselves.

  5. ^
  6. ^

    Lexico and Cambridge define action as both a thing that’s done and the process of doing something. The Merriam Webster definitions are similar too.

  7. ^

    So by selfish desires, I mean the self-interested desires the moral part of me doesn’t consider to be altruistic.

  8. ^

    Granted, the United States Parachute Association is probably a biased source. But they seem to have the most detailed data. And my two minutes of googling why skydiving is dangerous didn’t turn up any reasons skydiving would be dangerous for the average adult.

4 comments

Words that need to be erased from the dictionary: "altruism".

I do not remember any situation where using this word resulted in anything useful.

It seems to me that in practice the word is used as a wannabe-sophisticated synonym for "good" with the extra connotation that something is truly good only if you hurt yourself in the process. Doing win/win things is not altruistic; it's dirty, low-status. Even feeling good about helping others is suspicious; maybe your true reason is not wanting to help others, but wanting to feel good about helping others, you selfish asshole!

When this word joins the debate, the attention is taken away from the amount of good that is produced, as that is completely irrelevant for deciding whether an act is "altruistic" or not. The important thing is to signal that you are not seeking profit, that your intentions are pure. If you create 1000 utilons for others and 1 utilon for yourself (such as "feeling happy that the world is a nice place"), it is not altruistic. If you create 1 utilon for others and -1 or -2 utilons for yourself, it is altruistic. Therefore, somehow, the latter is better than the former.

Ask yourself whether you would rather live in a society where people regularly create 1000 utilons for others and 1 utilon for themselves (i.e. people who are not altruistic), or in a society where people regularly create 1 utilon for others and -2 utilons for themselves.

I am not saying that if you can create 1000 utilons for others and -1 for yourself, you shouldn't do it. It would be nice if you do. Especially if this is a social norm, and others do it too, so at the end of the day you usually still end up with positive total utilons. But the focus is on the +1000 utilons; that is the goal. That -1 utilon for yourself is just an unfortunate cost; if you could achieve the same outcome without paying the cost, that would be better (but no longer altruistic). Optimizing for altruism is the wrong goal. You should optimize to do good.
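The utilon comparison above can be made explicit with a small sketch. The numbers are the comment's own hypothetical ones, and `total_welfare` is a made-up helper, not anything from a real welfare model:

```python
def total_welfare(utilons_to_others, utilons_to_self, num_acts):
    """Sum the utilons produced by num_acts identical acts."""
    return num_acts * (utilons_to_others + utilons_to_self)

# Society A: "non-altruistic" win/win acts (+1000 to others, +1 to self)
society_a = total_welfare(1000, 1, num_acts=10)

# Society B: "altruistic" costly acts (+1 to others, -2 to self)
society_b = total_welfare(1, -2, num_acts=10)

# The "non-altruistic" society produces vastly more total good,
# which is the comment's point: optimizing for the altruism label
# is not the same as optimizing for good done.
assert society_a > society_b
print(society_a, society_b)
```

Under these toy numbers, ten win/win acts yield 10,010 total utilons while ten "altruistic" acts yield −10, even though only the latter would earn the label.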

"Pure altruism" is one of those concepts that are stretched beyond its domain of applicability. An altruistic person is someone who derives more satisfaction from improving the welfare of others than the society's average, without expecting an extrinsic reward. There is no more to it than that. By definition, any society will have the whole spectrum of altruism and selfishness. Going any further makes the concept less useful, so don't do it, invent a different concept for the domain you are interested in.

Note also that it's very hard to distinguish (especially in others) between someone who's altruistic because they derive satisfaction from improving welfare, and someone who merely wants to appear altruistic so that people will treat them as if they want to improve welfare. Fortunately, it doesn't matter - actions, even with poor motivations, are often the important part of actual welfare improvements.

What do you want? What will you do to make it so?

When you know the answers to those questions, the questions of this post are dissolved.