Reworked version of a shortform comment. This is still not the optimal version of this post but it's the one I had time to re-publish.

I've spent the past few years trying to get a handle on what it means to be moral. In particular, to be moral in a robust way that holds up in different circumstances.

A year-ish ago, while arguing about what standards scientists should hold themselves to, I casually noted that I wasn't sure, if I were a scientist, and if the field of science were rife with dishonesty, whether it would be better for me to focus on becoming more honest than the average scientist, or focus on Some Other Cause, such as avoiding eating meat.

A bunch of arguments ensued, and elucidating my current position on the entire discourse would take a lot of time. But, I do think there was something important I was missing when I first wondered about that. I think a lot of Effective Altruism types miss this, and it's important.

The folk morality I was raised with would generally rank the following crimes in ascending order of badness:

  • Lying
  • Stealing
  • Killing
  • Torturing people to death (I'm not sure if torture-without-death is generally considered better/worse/about-the-same-as killing)

But this conflates a few different things. One axis I was ignoring was "morality as coordination tool" vs "morality as 'doing the right thing because I think it's right'." And these are actually quite different. And, importantly, you don't get to spend many resources on morality-as-doing-the-right-thing unless you have a solid foundation of morality-as-coordination-tool. (This seems true whether "doing the right thing" looks like helping the needy, or "doing God's work", or whatever.)

There's actually a 4x3 matrix here: you can plot lying/stealing/killing/torture-killing against three classes of potential victim:

  • Harming the ingroup
  • Harming the outgroup (who you may benefit from trading with)
  • Harming powerless people who can't trade or collaborate with you

And I think you need to tackle these mostly in this order. If you live in a world where even people in your tribe backstab each other all the time, you won't have spare resources to spend on the outgroup or the powerless until your tribe has gotten its basic shit together and figured out that lying/stealing/killing each other sucks.

If your tribe has its basic shit together, then maybe you have the slack to ask the question: "hey, that outgroup over there, whose sheep and stuff we regularly raid and steal, maybe it'd be better if we traded with them instead of stealing their sheep?" and then begin to develop cosmopolitan norms.

If you eventually become a powerful empire, you may notice that you're going around exploiting or conquering and... maybe you just don't actually want to do that anymore? Or maybe, within your empire, there's an underclass of people who are slaves or slave-like instead of being formally traded with. And maybe this is locally beneficial. But... you just don't want to do that anymore, because of empathy, or because you've come to believe in principles that say to treat all humans with dignity. Sometimes this is because the powerless people would actually be more productive if they were free builders/traders, but sometimes it just seems like the right thing to do.

Avoiding harming the ingroup and the productive outgroup are things that you're locally incentivized to do, because cooperation is very valuable. In an iterated strategy game, these are things you're incentivized to do all the way along.

Avoiding harming the powerless is something that you are limited in your ability to do, until the point where it starts making sense to cash in your victory points.
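A minimal sketch of that incentive gradient, using a toy iterated prisoner's dilemma (the payoff numbers and the tit-for-tat partner are just illustrative assumptions):

```python
# Toy iterated prisoner's dilemma: cooperation with agents who can
# retaliate pays off over repeated rounds; exploiting agents who
# cannot retaliate carries no in-game penalty.

COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs for the row player (illustrative).
PAYOFF = {
    (COOPERATE, COOPERATE): 3,
    (COOPERATE, DEFECT): 0,
    (DEFECT, COOPERATE): 5,
    (DEFECT, DEFECT): 1,
}

def iterated_payoff(my_strategy, rounds=20):
    """Total payoff against a tit-for-tat partner who copies your last move."""
    partner_move = COOPERATE
    total = 0
    for _ in range(rounds):
        my_move = my_strategy(partner_move)
        total += PAYOFF[(my_move, partner_move)]
        partner_move = my_move  # tit-for-tat: partner mirrors you next round
    return total

always_cooperate = lambda _: COOPERATE
always_defect = lambda _: DEFECT

print("vs. a partner who can retaliate:")
print("  always cooperate:", iterated_payoff(always_cooperate))  # 60
print("  always defect:   ", iterated_payoff(always_defect))     # 24

# Against a "powerless" target who can't withhold future cooperation,
# exploitation is never punished within the game:
print("vs. a target with no way to retaliate:")
print("  exploit every round:", 20 * PAYOFF[(DEFECT, COOPERATE)])  # 100
```

The point of the sketch is just that the incentive to restrain yourself toward the powerless has to come from somewhere outside the repeated game itself.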

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because they ruin the ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I'm not making a claim about exactly how all of this should influence your decisionmaking. The world is complex. Cause prioritization is hard. But, while you're cause-prioritizing, and while you are deciding on strategy, make sure you keep this distinction in mind. 

15 comments

I've had similar thoughts; the working title that I jotted down at some point is "Two Aspects of Morality: Do-Gooding and Coordination." A quick summary of those thoughts:

Do-gooding is about seeing some worlds as better than others, and steering towards the better ones. Consequentialism, basically. A widely held view is that what makes some worlds better than others is how good they are for the beings in those worlds, and so people often contrast do-gooding with selfishness because do-gooding requires recognizing that the world is full of moral patients.

Coordination is about recognizing that the world is full of other agents, who are trying to steer towards (at least somewhat) different worlds. It's about finding ways to arrange the efforts of many agents so that they add up to more than the sum of their parts, rather than less. In other words, try for: many agents combine their efforts to get to worlds that are better (according to each agent) than the world that that agent would have reached without working together. And try to avoid: agents stepping on each other's toes, devoting lots of their efforts to undoing what other agents have done, or otherwise undermining each other's efforts. Related: game theory, Moloch, decision theory, contractualism.

These both seem like aspects of morality because:

  • "moral emotions", "moral intuitions", and other places where people use words like "moral" arise from both sorts of situations
  • both aspects involve some deep structure related to being an agent in the world, neither seems like just messy implementation details for the other
  • a person who is trying to cultivate virtues or become a more effective agent will work on both
TAG:

both aspects involve some deep structure related to being an agent in the world, neither seems like just messy implementation details for the other

Indeed. Specifically, "right" and "good" are not synonyms.

"Right" and "wrong", that is praisweorthiness and blameability are concepts that belong to deontology. A good outcome in the consequentialist sense, one that is a generally desired, is a different concept from a deontologically right action.

Consider a case where someone dies in an industrial accident, although all rules were followed: if you think the plant manager should be exonerated because he followed the rules, you are siding with deontology, whereas if you think he should be punished because a death occurred under his supervision, you are siding with consequentialism.

Consider a case where someone dies in an industrial accident, although all rules were followed: if you think the plant manager should be exonerated because he followed the rules, you are siding with deontology, whereas if you think he should be punished because a death occurred under his supervision, you are siding with consequentialism.

That's not how consequentialism works. The consequentialist answer would be to punish the plant manager if and only if doing so would cause the world to become a better place.

I think I like "Do Gooding" in place of where I currently have "altruism" in my title. I used "altruism" despite it actually being more specific than I wanted because I couldn't think of a succinct enough word-phrase.

Thanks for writing this. I've started to shift away from utilitarianism to something that is more a combination of utilitarianism and contract theory, with the utilitarianism being about altruism and the contract theory being about building co-operation. I haven't thought out the specifics of how to make this work in detail yet, only the vague outline.

I guess the way you've justified focusing on co-operation in the above seems to be in terms of consequences. However, people are often reluctant to co-operate with people who will use consequentialist justifications to break co-operation, so I think it's necessary to place some intrinsic value on co-operation.

I understand the direction, but it's VERY hard to mix the two without it being the case that the contractualism is just a part of consequentialism.  Being known as a promise-keeper is instrumentally desirable, and in VERY MANY cases leads to less-short-term-optimal behaviors.  But this is just longer-term consequentialist optimization.

And, of course, there can be a divergence between your public and private beliefs.  It's quite likely that, even if you're a pure consequentialist in truth (and acknowledge the instrumental value of contracts and the heuristic/calculation value of deontological-sounding rules), you'll get BETTER consequences if you signal extra strength to the less-flexible aspects of your beliefs.

I already tried to address this, although maybe I could have been clearer. If you are just calculating the utility of defecting versus the utility of forgoing that opportunity in order to co-operate and build/maintain trust, then people will see you as manipulative and won't trust you. So you need to value co-operation more than that.

But then, maybe your point is that you can include this in the utility calculation too? If so, it would be useful for you to confirm.

you can include this in the utility calculation too?

Exactly.  Not only can, but must.  Naive consequentialism (looking at short-term easily-considered factors ONLY and ignoring the complex and longer-term impact) is just dumb.  Sane consequentialism includes all changes to the world conditional on an action.  In many (perhaps the vast majority) cases, the impact on others' trust and ease of modeling you is much bigger than the immediate visible consequence of an action.

And, since more distant and compounded consequences are MUCH harder to calculate, it's quite often better to follow the heuristics of a deontological ruleset rather than trying to calculate everything. It's still consequentialism under the covers, and there may be a few cases in one's life where it IS better (for one's terminal goals) to break a rule or oath, but those are extremely rare. Rare enough that they may mostly be of interest to intellectuals and researchers trying to find universal mechanisms, rather than to people just living good lives.

This is what rule and virtue (and global) consequentialism are for. You don't need to be calculating all the time, and as you point out, that might be counterproductive. But every now and then, you should (re)evaluate what rules to follow and what kind of character you want to cultivate.

And I don't mean this as saying rule or virtue consequentialism is the correct moral theory; I just mean that you should use rules and virtues, as a practical matter, since it leads to better consequences.

Sometimes you will want to break a rule. This can be okay, but should not be taken lightly, and it would be better if your rule included its exceptions. A rule can be something like a very strong prior towards/against certain kinds of acts.

But... you just don't want to do that anymore, because of empathy, or because you've come to believe in principles that say to treat all humans with dignity.

On the one hand, I think the history of abolition in Britain and the US is inspiring and many of the people involved laudable, and many of the actions taken (like the West Africa Squadron) net good for the world and worth memorializing. On the other hand, when I look around the present, I see a lot of things that (cynically) look like a culture war between elites, where the posturing is more important than the positions, or the fact that it allows one to put down other elites is more important than the fact that it raises up the less fortunate. And so when I turn that cynical view on abolition, it makes me wonder how much the anti-slavery efforts were attempts by the rich and powerful of type A to knock down the rich and powerful of type B, as opposed to genuine concern (as a probable example of the latter, John Laurens, made famous by Hamilton, was an abolitionist from South Carolina and son of a prominent slave trader and plantation owner, so abolition was probably going to be bad for him personally).

Another example of this is the temperance movement; one can much more easily make the empathetic case for banning alcohol than allowing it, I think (especially in a much poorer time, when many more children were going hungry because their father chose to spend limited earnings on alcohol instead), and yet as far as I can tell the political victory of the temperance movement was largely due to the shifting tides of fortune for various large groups, some of which were more pro-alcohol than others, rather than a sense that "this is beneath us now."

I agree. I think of myself as a utilitarian in the same subjective sense that I think of myself as (kind of) identifying with voting Democrats (not that I'm a US citizen). I disagree with Republican values, but it wouldn't even occur to me to poison a Republican neighbor's tea so they can't go voting. Sure, there's a sense in which one could interpret "Democrat values" fanatically, so they might imply that I prefer worlds where the neighbor doesn't vote, where then we're tempted to wonder whether ends do justify the means in certain situations. But thinking like that seems like a category error if the sense in which I consider myself a Democrat is just one part of my larger political views, where I also think of things in terms of respecting the political process. So, it's the same with morality and my negative utilitarianism. Utilitarianism is my altruism-inspired life goal, the reason I get up in the morning, the thing I'd vote for and put efforts towards. But it's not what I think is the universal law for everyone. Contractualism is how I deal with the fact that other people have life goals different from mine. Nowadays, whenever I see discussions like "Is classical utilitarianism right or is it negative utilitarianism after all?" – I cringe. 

Hmm, you cross-posted to EA forum, so I guess I'll reply both places since each might be seen by different folks.

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because they ruin the ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I think it's worth challenging the idea that this conflation is actually an issue with ethics.

It's true that things like coordination mechanisms and compassion are not literally the same thing, and that they can have expressions that try to isolate themselves from each other (cf. market economies and prayer), so things that are bad because they break coordination mechanisms and things that are bad because they don't express compassion are not bad for exactly the same reasons. But that need not mean there is not something deeper going on that ties them together.

I think this is why there tends to be a focus on meta-ethics among philosophers of ethics rather than directly trying to figure out what people should do, even when setting meta-ethical uncertainty aside. There's some notion of badness or undesirableness (and conversely goodness or desirableness) that powers both of these, so they are both different expressions of the same underlying phenomenon. We can reasonably tie these two approaches together by looking at the question of what makes something seem good or bad to us, and simply treat these as different domains over which we consider how to make good or bad things happen.

As to what good and bad mean, well, that's a larger discussion. My best theory is that in humans it's rooted in prediction error plus some evolved affinities, but this is an area where folks are still trying to figure out what good and bad mean beyond our intuitive sense that something is good or bad.

Crossposted on EA forum (I think this particular convo is more valuable over there)

The issue isn't just the conflation, but missing a gear about how the two relate.

The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.

Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it's also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.

In particular, I was concretely assuming "torturing people to death is generally worse than lying." But that's specifically comparing within the same circle. It is now quite plausible to me that lying (or even exaggeration/filtered evidence) among the groups of people I actually have to coordinate with might actually be worse than allowing the torture-killing of others who I don't have the ability to coordinate with. (Or it might not – it depends a lot on the weightings. But it is not the straightforward question I assumed at first.)

Thanks. (I think honestly the EA forum needs to see this more than LessWrong does so I appreciate some commenting there. I'll probably reply in both places for lack of a better option)

This is worth exploring, and I think there's another aspect of it that relates: the distinction between edge and level.  Whether you're improving something, or maintaining a standard.  Your comment about 

if the field of science were rife with dishonesty, whether it would be better for me to focus on becoming more honest than the average scientist, or focus on Some Other Cause, such as avoiding eating meat.

gets to this - why does it matter that the field is rife with dishonesty (also, why not do both, but that's another discussion)?