In this post I present a Hansonian view of morality, and tease out its consequences for advocacy. Beware, all of this is the result of armchair speculation.

The simplified model goes something like this:

  • Human morality is mostly a justification mechanism, trying to give coherence to the actions we were going to do anyway.
  • Concretely, when espousing a belief or its opposite is cheap (it fulfills no social function and will not contradict our future actions), we will prefer to stick to the position that better fits our current plan.
  • In general our plans follow a law of minimal effort, so we will stick to whichever side of the belief is more convenient.
  • The other side of the coin: when we receive new information about the world that changes what we will do in the future, it has profound effects on our morality.

Example: Consider this study, which concludes that a higher minimum wage reduced the average earnings of a low-income worker by $125. This is not a moral claim, yet to some it will feel like one. Often we find that factual claims are processed by our brains as moral claims.

This explains why humans have so much difficulty with the "is" vs. "ought" question: for the explicit models in the human brain, what is and what ought to be are both grounded in factual information.

This model makes a bold prediction: philosophical arguments such as "The Drowning Child" do not derive their strength from their moral validity, but from the factual revelation that it is possible to save a child's life for a rather modest sum of money.

This has direct relevance to advocacy. It suggests that to change somebody's moral opinion on, e.g., animal welfare, we should not focus on making moral arguments, but instead on giving people new data they previously didn't have, data that increases the perceived convenience of taking a particular action, e.g. objective figures on how much it costs to save an animal's life.

You can also go one meta level up and change how convenient it factually is to help with your cause of choice, by for example developing clean, affordable meat.

This contrasts starkly with approaches that rely on emotional appeals or philosophically grounded arguments, which this model predicts will have small long-term effects.

Questions: What other predictions does this model make? Where does it fail? How can it be refined? How can we apply it to specific causes such as AI Safety research?


UPDATE AFTER A YEAR: Since most people believe that lives in the developing world are cheaper to save than they actually are, I think that pretty much invalidates my argument.

My current best hypothesis is that the Drowning Child argument derives its strength from creating a cheap opportunity to buy status.

Human morality is mostly a justification mechanism, trying to give coherence to the actions we were going to do anyway.

Here is an even more Hansonian view that I think makes better predictions: the side-taking hypothesis says that morality is a social tool for deciding which side to support in a conflict between groups. Extended quote:

Here is a distinctive human problem that just might explain our distinctive moral condemnation: Humans, more than any other species, support each other in fights, whether fistfights, yelling matches, or gossip campaigns. In most animal species, fights are mano-a-mano or between fixed groups. Humans, however, face complicated conflicts in which bystanders are pressured to choose sides in other people’s fights, and it’s unclear who will take which side. Think about the intrigues of family feuds, office politics, or international relations.
One side-taking strategy is supporting the higher-status fighter like a boss against a coworker or parent against child. However, this encourages bullies because higher-ups can exploit their position. Another strategy is to form alliances with friends and loyally support them. Alliances deflate bullies but create another problem: When everyone sides with their own friend, the group tends to split into evenly matched sides and fights escalate. This is costly for bystanders because they get scuffed up fighting their friends’ battles.
Moral condemnation offers a third strategy for choosing sides. People can use moral judgment to assess the wrongness of fighters’ actions and then choose sides against whoever was most immoral. When all bystanders use this strategy, they all take the same side and avoid the costs of escalated fighting. That is, moral condemnation functions to synchronize people’s side-taking decisions. This moral strategy is, of course, mostly unconscious just like other evolved programs for vision, movement, language, and so on.
For moral side-taking to work, the group needs to invent and debate moral rules to cover the most common fights—rules about violence, sex, resources, etc. Humans are quite motivated to do just this. Once moral rules are established, people can use accusations of wrongdoing as coercive threats to turn the group, including your family and friends, against you [emphasis mine].

What the side-taking hypothesis suggests is that making the moral case for e.g. vegetarianism is a matter of convincing people to gang up against non-vegetarians in various ways, or rather of convincing people that other people will do this. Insofar as you think this is bad, you might want to spread vegetarianism through a conduit other than morality.

Worth meditating on the side-taking hypothesis as it applies to the recent debacle around the vegan blogger who bought ice cream for a kid and got shamed by other vegans over it.


I think that providing people with additional actions / alternatives seems good, especially if you can then feel better about yourself because it ended up being easier than you thought to act on a certain moral view.

I think this might be valid mainly in places where there exist easy ways to switch or to make the action easier. EX: Providing public transportation credits for commuters, or redirecting their existing charity payments to another cause.

I think the right question to frame it as, then, is something like "How much additional work are we asking of you to do X, and how can we make it easier for you to take action X?"

When reading this I'm thinking of Kegan's stages of development. A person at Kegan's stage 3 will optimize for social cohesion and pick their morals to further those goals.

A person at stage 4, however, will actually act according to a system. That person will do things because the system in which they think calls for certain actions, even when there are no specific benefits from a given action.

I think one thing this post fails to take into account is the difference between endorsed, professed, conscious beliefs and unconscious aliefs. I suspect the "morals as a convenience" theory is actually talking about the latter type of belief, while the "factual advocacy" approach is more focused on the former.

While it is true that factual advocacy can affect unconscious aliefs, there are much more effective ways to do so, many pioneered and tested in the field of marketing, which in many ways can be seen as the study of how to change people's aliefs such that they change their actions.

I am going to make a bold claim: traditional marketing strategies are successful due to poorly understood rational incentives they create.

In other words: they are successful because they give factual knowledge of cheap opportunities to purchase status or other social commodities, not because they change our aliefs.

Seen in another light, the evidence from marketing's success supports the morality-as-Schelling-point-selector view in Qiaochu's comment above.