Was recently thinking about this question and thought it would be interesting to get the opinions of the LW community.

You are God and you are creating a new world full of humans. You can encode a theory of morality into each human you create, and you have a guarantee that every human you create will follow the morality you encode into them at least to the extent of a top-5%-moral person in our current world (but not perfectly). You are able to encode different moralities into different people if you so choose. You can also include randomness or mixed strategies if you so choose. What theories of morality do you encode into these humans, and, if you can articulate it, why?


Interesting question, and one I have thought a lot about. I hold a moral anti-realist view and think most people speaking about "morality" are discussing normative views. For this reason, a virtue ethics framework would translate really well into a better world under your scenario. A framework designed to optimize people being better rather than doing better would hopefully improve things a step earlier in the process of living. I am not familiar with much criticism of virtue ethics, but I'm open to reconsidering. Additionally, a moral anti-realist position isn't necessary for virtue ethics to be an ideal framework in my mind.

Interesting! I can see where you are coming from with this idea. The question gets me to think about what the optimal framework would be based on how the whole system would behave and evolve, as opposed to the usual individualistic view of morality.

I'd be a bad god.  I'd probably encode some mix of kindness and responsibility, and likely a more static enjoyment of what is than a striving for change.  I presume they'd never get out of hunter/gatherer mode.

And now I'm wondering exactly what my limits are.  I don't think "a theory of morality" is something that stands alone in people.  I translated that in my mind to a set of personality traits and behaviors that would automatically be enforced somehow and not evolve over time (in individuals or across generations).  But if you mean more constrained cognitive moral theories that most of 'em don't actually follow very well, I'm not sure what I'd choose.

Note that none of this applies to real humans, nor perhaps any agents in this universe.  

As evolved—and evolving—agents, we would benefit from increasing awareness of (1) our values, hierarchical and fine-grained, and (2) our methods for promoting those present but evolving values in the world around us, with perceived consequences feeding back and selected for increasing coherence over increasing context of meaning-making (values) and increasing scope of instrumental effectiveness (methods). Lather, rinse, repeat…

As inherently perspectival agents acting to express our present but evolving nature within the bounds of our presently perceived environment of interaction, we can find moral agreement as if we were (metaphorically) individual leaves on the tips of the growing branches of a tree, and by traversing the (increasingly probable) branches of that tree toward the (most probable) trunk, rooted in what we know as the physics of our world, finding agreement at the level(s) of those branches supporting our values-in-common.

I am not a god, but this is the advice I would provide to the next one I happen to meet, and thereby hope to expedite our current haphazard progress (2.71828 steps forward, 1 step back) in the domain of social decision-making assessed as increasingly "moral", or right in principle.

The arrow of morality points not toward any imagined goal, but rather, outward, with increasing coherence over increasing context.

Foundational questions to ponder: am I really God, or do I just think I'm God? How would I test this premise? I'd take a very long time to figure this out. Do I (or the humans) incur any penalty for a delay in encoding morality?

Also, are the humans in question subject to forces of evolution, or are we talking about a static landscape? If we mean literal Homo sapiens, then whatever we encode applies only to a finite window, as the creatures we manipulate will eventually evolve into something else.

You're able to set everyone's moral framework and create new humans. However, once they are created, you cannot undo it or try again. You also cannot rely on being able to influence the world post-creation.

Assume humans will be placed on the planet in an evolved state (like current Homo sapiens), and that they can continue evolving but will possess a pretty strong drive to follow the framework you embed (akin to the disgust response that humans have to seeing gore or decay).

I apologize for the simplistic response: if we're talking about a version of current Homo sapiens, then they already have a perfectly functional meta-ethical system encoded into them. Otherwise they would not have evolved into humans. The quality of being human must necessarily include all the iterative development that got the creatures there.

I must therefore conclude that if I indeed had the power of God and felt the need to intervene in a disruptive manner to rewire the poor humans' ethical systems, I must actually be the Devil... and any action taken along this path would therefore be inherently evil. Evil actors typically think they are gods and cannot tell the difference.

Fair enough! I like the spirit of this answer and probably broadly agree, although it makes me think "surely I'd want to modify some people's moral beliefs"…

Of course you do. Me too! Humans are compelled by a need for mutual domestication. It's what sustains our bonds and long-term survival. In many ways, culture and society are a kind of marketplace of morality modification.
