In this post I present a Hansonian view of morality, and tease out its consequences for advocacy. Beware, all of this is the result of armchair speculation.
The simplified model goes something like this:
- Human morality is mostly a justification mechanism, trying to give coherence to the actions we were going to take anyway.
- Concretely, when espousing a belief or its opposite is cheap (it neither fulfills a social function nor contradicts our future actions), we will prefer the position that better fits our current plan.
- In general our plans follow a law of minimal effort, so we will stick to whichever side of the belief is more convenient.
- The other side of the coin is that when we receive new information about the world that changes what we will do in the future, it has profound effects on our morality.
Example: Consider this study, which reaches the conclusion that a higher minimum wage reduced the average payroll of a low-income worker by $125. This is not a moral claim, yet to some it will feel like one. Often we find that factual claims are processed by our brains as moral claims.
This explains why humans have so much difficulty with the "is" vs "ought" question: for the explicit models in the human brain, what is and what ought to be are both grounded in factual information.
This model makes a bold prediction: philosophical arguments such as "The Drowning Child" do not derive their strength from their moral validity, but from the factual revelation that a child's life can be saved for a rather modest sum of money.
This has direct relevance to advocacy: it suggests that to change somebody's moral opinion on, e.g., animal welfare, we should not focus on making moral arguments, but on giving people new data they previously lacked, data that increases the perceived convenience of taking a particular action, e.g. objective figures on how much it costs to save an animal's life.
You can also go one meta level up and change how convenient it factually is to help your cause of choice, for example by developing clean, affordable meat.
This contrasts starkly with attempts to use emotional appeals or philosophically grounded arguments, which this model predicts will have small long-term effects.
Questions: What other predictions does this model make? Where does it fail? How can it be refined? How can we apply it to specific causes such as AI Safety research?