Inspired by seeing Morality as "Coordination", vs "Altruism".

  • Coordination: a) To do more win-win stuff. b) To band together against some outgroup (win-win-lose).
  • Selfish genes: E.g., your genes make you care about your family directly, since family members carry copies of your genes.
  • Empathy: E.g., just thinking about how much being tortured sucks makes someone want to stop others from being tortured.
  • Signaling and reputation: E.g., having a reputation for being fair can give you power and status. Conversely, having a reputation for dishonesty can close a lot of opportunities.
  • Insurance: Reducing variance of outcomes (e.g., vampire bats sharing blood meals with roost-mates that failed to feed; see the sketch after this list).
  • Hardwired assumptions: E.g., the instinctive aversion to incest, since inbreeding raises the risk of genetic defects in offspring.
  • Fear of punishment: E.g., murder probably won't go unanswered.
  • Rewards of subservience: E.g., respecting high-status people is not without benefits.
  • Power/status games: Enforcing norms can increase your own status and decrease others'.
  • Optimizing trade-offs for personal benefits: E.g., net neutrality is good for middle-class people, bad for poor people. "Bravery debates" might fall under this umbrella as well.
  • Instinctual game-theoretic strategies: E.g., people like having more control and agency ("freedom"). Note that this is broader than coordination (coordination is a subset of it); a lot of these strategies are win-lose.
  • Abdicating responsibility: E.g., the current Covid-vaccine fiasco. People prefer passive harm to risky interventions, because taking action brings responsibility.
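To make the "insurance" item concrete, here is a minimal sketch (my own toy model with made-up parameters, not something taken from the post or from the biology literature): pooling food leaves the expected nightly intake unchanged but shrinks its variance, which is what matters when a streak of bad nights means starvation.

```python
import random
import statistics

random.seed(0)

N_AGENTS = 10       # hypothetical group size
N_NIGHTS = 10_000   # hypothetical number of foraging nights
P_SUCCESS = 0.5     # hypothetical chance that any one forager finds food

solo_intake, pooled_intake = [], []
for _ in range(N_NIGHTS):
    # Each agent independently finds one unit of food or nothing.
    catches = [1 if random.random() < P_SUCCESS else 0 for _ in range(N_AGENTS)]
    solo_intake.append(catches[0])                 # agent 0 keeps only what it caught
    pooled_intake.append(sum(catches) / N_AGENTS)  # agent 0's share when the group pools food

print("mean  solo: %.3f  pooled: %.3f" % (statistics.mean(solo_intake), statistics.mean(pooled_intake)))
print("stdev solo: %.3f  pooled: %.3f" % (statistics.stdev(solo_intake), statistics.stdev(pooled_intake)))
```

With these assumed numbers the mean intake is about 0.5 either way, while the standard deviation drops from roughly 0.5 to about 0.16 (about 1/sqrt(10) of the solo value), which is why a sharing norm can be a win-win.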

What more can you think of? (Of course, a lot of these have some overlap.)

TAG:

Optimizing trade-offs for personal benefits: E.g., net-neutrality is good for middle-class people, bad for poor people. “Bravery debates” might fall under this umbrella as well.

What you are talking about is optimising trade-offs for group benefits. You can't usually get personal benefits unless you wield absolute power.

Jockeying for group benefits is very much a thing, but it's the thing we generally call "politics".

As agents embedded and evolving within our (ancestral) environment of interaction, we form concepts of "morality" that tend toward choices which, in principle, exploited synergies and thus tended to persist for our ancestors.

For an individual agent, isolated from ongoing or anticipated interaction, there is no "moral", but only "good" relative to the agent's present values.

For agents interacting within groups (and groups of groups, …), actions perceived as "moral", or right-in-principle, are those assessed as (1) promoting an increasing context of increasingly coherent values (hierarchical and fine-grained), (2) via instrumental methods increasingly effective, in principle, over increasing scope of consequences. These orthogonal planes of (1) values and (2) methods form a space of meaningful action tending to select for increasing coherence over increasing context. Lather, rinse, repeat (two steps forward, one step back), tending to select for persistent, positive-sum outcomes.

For agents embedded in their environment of interaction, there can be no "objective" morality, because their knowledge of their (1) values and (2) methods is ultimately ungrounded, and thus subjective or perspectival. However, this knowledge of values and methods is far from arbitrary, since it emerges at great expense of testing within the common environment of interaction.

Metaphorically, the search for moral agreement can be envisioned as individual agents like leaves growing at the tips of a tree exploring the adjacent possible, and as they traverse the thickening and increasingly probable branches toward the trunk shared by all, rooted in the mists of "fundamental reality", they must find agreement upon arrival at the level of those branches which support them all.

The Arrow of Morality points not in any specific direction, but tends always outward, with increasing coherence over increasing context of meaning-making.

The practical application of this "moral" understanding is that we should strive to promote increasing awareness of (1) our present but evolving values, increasingly coherent over increasing context of meaning-making, and (2) our instrumental methods for their promotion, increasingly effective over increasing scope of interaction and consequences, within an evolving intentional framework for effective decision-making at a level of complexity exceeding individual human faculties.