Value Pluralism and AI

Sixteen years ago, in a text entitled "Fake Utility Functions", Eliezer Yudkowsky expressed an essentially pluralist view of value, though without using the word "pluralism"; instead, he spoke of the complexity of value.

Philosophers are still quite uncertain as to whether there is any one most foundational value, or whether there is a plurality of values, none of which is much more foundational than the others.

The Encyclopedia of World Problems and Human Potential spoke a few decades ago of "an ecosystem of values", but did not give that ecosystem any topology or identify any more central values within it.

Survival should obviously be fundamental to every ecosystem, but humans just as obviously desire more than sheer survival. We have many desires, and anything that matches one of our desires could be seen as a value, especially if it enhances our survival prospects rather than lessening them.

Some thinkers believe that an informed satisfaction of desires is the highest good, while others believe that it is the desire for pleasure or happiness that should be satisfied, and that all other desires are really unimportant in comparison.

I would say that it is very necessary that our minds (not least our frontal lobes) be engaged by reality, which for many of us requires far more than a few neuro-drugs and emotions. Our minds love well-organized, and preferably also dynamic, complexity, which is an important part of a good and enduring life.

These different kinds of desires and values are best brought together under the label of value pluralism. We desire and need a rich plurality of experiences more than anything else, and we get interesting, fun and pleasant experiences through the rich variety of life and the world, rather than by dwelling very hard on a few single things.

A pluralism of values centered on survival, knowledge, competence, beauty, love and rich experiences (among many other things) will promote human well-being and will also be helpful for our future AIs. I propose that international law stipulate that every AI, and every owner of an AI, should digest and adopt a document explaining these basic value matters. In addition, we may need to create a Guardian AI that watches over the diversity of values in the world.

I have a longer text (26 300 characters) that explains these things more fully, but it has not been translated into English yet. This is a starter. There are hardly any really good texts on value pluralism, but you can read Yudkowsky's text, the Wikipedia article on value pluralism, the entry on value pluralism in the Stanford Encyclopedia of Philosophy, the text on an ecosystem of values from the Encyclopedia of World Problems and Human Potential, a discussion held in Washington, and a talk by Isaiah Berlin.

Value pluralism may not have maximum sex appeal, but it is, it seems to me, the only viable solution for the world. I believe in a secular, Christian, ecological, social, liberal and democratic value pluralism, and in the enjoyment of diversity and plurality.


I notice myself feeling hesitant to upvote due to a bunch of unsourced claims. I do still find unsourced claims useful, and I often upvote them, so I'm not sure if this is reasonable; I might just be finding myself disagreeing with the claims, but I'd appreciate a version of this with more explicit sourcing and description of where the ideas originated. I have upvoted it myself, but I share this comment as a guess as to why the average upvote might not be higher.

edit: actually, after some pondering, I came back and strong upvoted.