TLDR: I think morality is subjective. My ideal society would maximize total happiness while minimizing happiness inequality for as many beings as possible. My morals could change, and I don’t always do what I feel is moral.

I don’t think there is an objective morality. 

I can’t prove that slavery is wrong. I can’t prove child porn is wrong. I can’t prove anything is morally right or wrong. 

I’m not 100% certain what the correct morality for me is either. At times, I struggle to determine what I believe. 

But, overall, I’ve formed many opinions. Some are more strongly held than others.

And I encourage others to agree with my beliefs. Generally, the more values people share with me, the more inclined we’ll be to work together. We can help each other make the world better to us.

If morality is subjective, why do I form moral opinions and try to act on them? I think I do that for the same reason I think I do anything else. To be happy.

My Moral Code

I think everyone matters equally. As much as I love myself, I can’t bring myself to believe I deserve more happiness than others.

I didn’t control my genes. I could’ve had a mental or physical disability. I could’ve inherited genes that made me more likely to have the “dark triad” traits of narcissism, Machiavellianism, and psychopathy. There may be genes that lead to pedophilia too.

I didn’t control the environment I was born into. I could’ve been born into slavery. I could’ve been born as an animal on a factory farm. I could’ve been born into a dystopian future. 

I could’ve been anyone. I’m fortunate.

To me, the ideal society would maximize total happiness while minimizing happiness inequality for as many beings as possible.[1]

Morality Isn’t That Simple For Me

While everyone matters equally to me, some people make more of an impact than others. Imagine a hypothetical scenario: you go back in time to 1920 and have to kill either Mahatma Gandhi or five random people. My gut instinct is to save Gandhi. I’d bet he did more to maximize and equalize happiness than the average five people.[2]

I’d feel more confident about my answer if I could know what would’ve happened if Gandhi died in 1920. If other leaders would’ve stepped up to make the same impact as Gandhi, I’d probably choose to kill Gandhi.[3]

Equality

Equality (of happiness) matters to me. I’m not sure how much. I couldn’t tell you if I’d prefer all beings to have 1% more total happiness if that increased happiness inequality by 5%.

My uncertainty about how much to value equality rarely makes decisions difficult. So I'm not planning to determine how much equality matters to me anytime soon.[4]

Uncertainty

My values have changed many times in the past. They’ll probably change again.[5] If I’d been born a Christian white man in 1600s Europe, I probably would’ve been racist, sexist, and intolerant of other religions.[6] I opposed gay marriage until 2004.

If I could live a few hundred more years, I’d bet my beliefs would change significantly. So I won’t advocate for anything that leads to significant value lock-in.

I don’t think that future people’s morals are necessarily better. As I said, I don’t think morality is objective. My point is that I’ve been happy with how moral views have evolved. I’m cautiously optimistic that won’t change.[7]

Why I Don’t Always Follow My Morality

I can’t scientifically explain my behavior.[8] I often feel like there are different parts of me fighting each other.[9] Sometimes I feel like a “moral part” of me loses control to another part of me. For example, a fearful part of me could push me to try to please someone. Other times, I look back and feel like one part of me has deluded my “moral part.” That’s how I’ve convinced myself it was productive to play One Night Ultimate Werewolf to help me develop my idea for a reality show.[10] I don’t think that’ll help anymore.[11]

I suspect the “part of me” that always wins out is the one that brings me the most immediate happiness.

The strongest part of me right now is writing this post. I don’t know if that’s a moral part of me, a part of me that wants to fulfill my potential as a writer, or a part of me that wants people to like me. It’s probably some combination of all of them and more.

But the strongest moral part of me right now reminds me that I didn’t have to be me. I could’ve been anyone. It hopes I remember that more.

(cross-posted from my blog: https://utilitymonster.substack.com/p/my-morality)
 

  1. ^

    I count clones (and copies of sentient AIs) as beings. Humans are 99.9% the same anyway. I’d feel bad for someone who wasn’t valued because they have the same genes as someone else born earlier.

    And Joscha Bach theorizes that beings could merge together in the future. (Search the word substrate to see when he alludes to merging.) In a vacuum, if the merging beings are an equal part of the merged being, and the merged being is as happy as the average of the beings combining, I’d support this.

  2. ^

    I used Gandhi in this example because I thought he represented an uncontroversial “good” figure. Since publishing this post, I’ve learned that he isn’t as well regarded as I’d thought. 

  3. ^

    I’d decide based on the amount of happiness Gandhi and the average person in 1920 had.

  4. ^

    To determine how much equality matters to me, I’d pretend I could quantify happiness. Next, I’d ask myself hypotheticals such as "Would I rather give 1 happiness point to person A, who has -1 million happiness points, or give 1 million happiness points to person B, who has 1 million happiness points?" I’d use these answers to help me determine a mathematical formula that expresses the tradeoffs I’d make in any situation. (A rough sketch of what such a formula could look like follows these footnotes.)

  5. ^

    My morals have already changed since I published this post. Originally, I’d said I wanted to maximize total utility while minimizing utility inequality for as many beings as possible. I’ve now replaced the term utility (i.e., what anyone fundamentally wants) with happiness (i.e., positive emotional states, good feelings). At the time, I said I used utility instead of happiness because people have told me their desires don’t reduce to happiness. And if anyone ultimately wanted other things or feelings besides happiness, I wanted them to have that.

    I no longer feel that way. If someone fundamentally wants freedom, justice, dignity, or whatever they claim to value, and none of those things make them happy, I don’t care if they get them.

  6. ^

    This article’s similar claim inspired this thought. It seems reasonable. But I can’t find any surveys on racism, sexism, or religious intolerance from 1600s Europe.

  7. ^

    Over the long term. At some point, I'd bet I'll think something like "my values align more with people in 2022 than with people in 2023."

  8. ^

    I think there’s ultimately a scientific way to explain my behavior. But I don’t know enough science to do that. So instead I use mumbo jumbo.

  9. ^

    The yearning octopus from this article describes these feelings well.

  10. ^

    If someone who shares enough of my values wants to produce a reality show, I’d be excited to explain my idea to them. I think it has some promise, but it’s complicated and unpolished.

  11. ^

    If I use the original cards.
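
As a concrete illustration of the calibration described in footnote 4, here is a minimal sketch in Python. It is only one possible shape for such a formula: the "total happiness minus a weighted spread" objective, the inequality_weight value, and the use of standard deviation as the inequality measure are assumptions chosen for illustration, not anything this post commits to.

```python
# Illustrative sketch only: one possible way to turn footnote 4's
# hypotheticals into a formula. The inequality_weight and the choice of
# standard deviation as the inequality measure are assumptions.
from statistics import pstdev

def social_score(happiness_points, inequality_weight=0.5):
    """Total happiness minus a penalty proportional to its spread."""
    total = sum(happiness_points)
    spread = pstdev(happiness_points)  # population standard deviation
    # Scale the penalty by the number of beings so it stays comparable
    # to the total as the population grows.
    return total - inequality_weight * len(happiness_points) * spread

# Footnote 4's hypothetical: person A at -1,000,000 points, person B at +1,000,000.
baseline = [-1_000_000, 1_000_000]
give_a_one_point = [-999_999, 1_000_000]    # give 1 point to worse-off A
give_b_a_million = [-1_000_000, 2_000_000]  # give 1,000,000 points to better-off B

for label, world in [("baseline", baseline),
                     ("give A 1 point", give_a_one_point),
                     ("give B 1,000,000 points", give_b_a_million)]:
    print(label, social_score(world))
```

With these assumed numbers and weight, giving person B the million points scores higher. Someone who answers the hypothetical the other way would need a larger weight or a different penalty (a prioritarian one, say), and pinning that down is exactly the calibration the footnote describes.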

Comments

If morality is subjective, why do I form moral opinions and try to act on them? I think I do that for the same reason I think I do anything else. To be happy.

What makes you happy is objective, so if that’s how you ground your theory of morality, it is objective in that sense. It’s subjective only in that it depends on what makes you happy rather than what makes other possible beings happy.

If morality is a thing we have some reason to be interested in and care about, it’s going to have to be grounded in our preferences. Our preferences, not any possible intelligent being’s preferences—so it’s subjective in that sense. But we can’t make up anything, either. We already have a complete theory of how we should act, given by our preferences & our decision theory. Morality needs to be part of or implied by that in some way.

To figure out what’s moral, there is real work that needs to be done: evolutionary psychology, game theoretic arguments, revealed preferences, social science experiments, etc. Stuff needs to be justified. Any aggregation procedure we choose to use, any weights we choose to use in said aggregation procedure, need to be grounded—there has to be a reason we are interested in that aggregation procedure and these weights.

There are multiple kinds of utilities that have moral import for different reasons, some of them interpersonally comparable and others not. Preference utilities are not interpersonally comparable and we care about them for game theoretic reasons that would apply just as well to many agents very different from us (who would use different weights however); what weights and aggregation procedure to use must be grounded in these game theoretic reasons. However they are to be aggregated, it can’t be weighted-sum utilitarianism, since the utilities aren’t interpersonally comparable (which doesn’t mean they can’t be aggregated by other means). But pleasure utilities (dependent on any positive mental or emotional state) often are interpersonally comparable:

An [individual’s] inability to weigh between pleasures is an epistemic problem. [Some] pleasures are greater than others. The pleasure of eating food one really enjoys is greater than that of eating food one doesn’t really enjoy. We can make similar interpersonal comparisons. We know that one person being tortured causes more suffering than another stubbing their toe. (HT: Bentham’s bulldog)

At least it should be the case that some mental states can be biologically quantified in ways that should be interpersonally comparable. And they can have moral import. Why not? It all depends on what evolution did or didn’t do. We need to know in what ways people care about other beings (which state or thing related to these beings they care about), which ones of the beings and to what degrees (and there can be multiple true answers to these questions).

How do we know? Well, there are things like ultimatum game experiments, dictator games, kin altruism, and so on. The details matter, and there seems to be much controversy over interpretation.

Can we just know through introspection? It would be awfully convenient if so, but that requires that evolution has given us a way to introspect on our preferences that regard other people and reliably get the real answers instead of getting social desirability bias. How do we know if that’s the case? Two ways.

Way one: by comparing the answers people claim to get through introspection with their actual behavior. If introspection is reliable, the two should probably match to a high degree.

Way two: by seeing how much variation there is in the answers people claim to get through introspection. We still need to interpret that variation. Is it more plausible that people have very different moralities than that their answers are very different for other reasons (which ones?)?

This fog is too thick for me to see through. Many smart people have tried, probably much harder than me, and sometimes have said a few smart things: [1] [2] [3]. There must be people who have figured much more out and if so I would highly appreciate links.

TAG:

If morality is a thing we have some reason to be interested in and care about, it’s going to have to be grounded in our preferences.

To some extent. Minimally, it can be grounded in our preference not to be punished. Less minimally, but not maximally, it can be grounded in negative preferences, like "I don't want to be killed", without being grounded in positive preferences like "I prefer Tutti Frutti". In either case, you don't need a detailed picture of human preference to solve morality, if you haven't first shown that all preferences are relevant.

TAG:

I don’t think there is an objective morality.

I think morality is subjective.

The validity of subjective morality doesn't follow from the invalidity of objective morality...because both could be wrong, and because there are other options. Admittedly, you didn't argue that explicitly...but you didn't argue any other way. Other options include societal definitions. Societies put people in jail for breaking laws which delimit bad behaviour from good behaviour, so something like deontology is going on under your nose. If the jailing and executing isn't justifiable by your morality, then it is a gross injustice.

Your two principal goals - maximize total utility and minimize utility inequality - are in conflict, as is well-known. (If for no other reason, because incentives matter.) You can't have both.

A more reasonable goal would be Pareto efficiency-limited utility inequality.

But it’s not literally impossible to achieve both goals. And I think there are practical ways to improve total utility and reduce utility inequality at the same time. For example, anything that helps make a sad being happy. 

As I said, I don’t know how I’d make tradeoffs between total utility and utility inequality yet. If I did know, I would want society’s existing utility to be distributed in a Pareto efficient way.
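
As a quick check of the claim that total happiness and happiness equality can improve together, here is a minimal sketch with made-up numbers; the specific values and the use of standard deviation as the inequality measure are assumptions for illustration only.

```python
# Minimal check with assumed numbers: raising one sad being's happiness
# while leaving everyone else unchanged is a Pareto improvement that
# raises the total AND shrinks the spread.
from statistics import pstdev

before = [-10, 4, 6]   # one sad being, two happier ones
after = [-2, 4, 6]     # only the sad being is helped

assert all(a >= b for a, b in zip(after, before))  # no one is worse off (Pareto improvement)
assert sum(after) > sum(before)                    # total happiness goes up
assert pstdev(after) < pstdev(before)              # happiness inequality goes down
print("Both improve at once:", sum(before), "->", sum(after))
```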