I don’t know if this is trivial/obvious or absurd or anything in between, so maybe you guys would like to help me judge?

The idea is simple. I’m confused by the term “values” in the moral and ethical sense. The way I often see it used, it makes personal and societal “values” seem hard to define/operationalize, incommensurable, uncountable, frequently in conflict with each other and usually in unclear relationships to each other and to those who hold them. But the word “values” is everywhere in normative discussions, and normative discussions are important and interesting, so I wish they weren’t being muddled by that word.

Is it just me, too stupid to get some perfectly clear meaning? Or is “values” really as vague and mysterious as it seems to me?

I think all of the useful work that the word “values” does can also be done by the word “priorities”. Priorities tell you what to do; they help you decide between alternatives. They are a language for describing agreement and disagreement on normative questions.

And all of us, including people who think in terms of values, already think in terms of priorities when we’re working on projects and going about everyday life. The confusion around “values” is in the more abstract, longer-term regions of our thought. I think it is better to extend our thinking about priorities into those regions, rather than use a completely different set of terms and operations there.

“Priorities” can also more obviously build on, or be derived from, each other. A priority can be strictly subordinate to another, as a means to an end. We’re used to having “higher” and “lower” and “overarching” priorities, so we can use those qualifiers rather than needing to invent subcategories of “values” like “instrumental values” and “terminal values”.

Example: I have a priority to finish this post. That's a means to my higher priority, which is to find out if this idea is useful, and float it in the rationality cluster if it is. And that in turn is a means to my next higher priority, to contribute usefully to the memeplex that makes our species increasingly powerful and helps accelerate the colonization of the galaxy. You might call colonization of the galaxy my terminal value, but I prefer to call it my overarching priority.

And “priorities” implies commensurability. Which is great, because everything is commensurable. The priorities are still in competition over resources, of course, but since they’re commensurable and have means-ends relationships and “higher” and “lower”, the competition can probably be quantified. This helps against what seems to me the most grating aspect of “values”: the relationship between competing values often remains undefined, which implies the possibility of irreconcilability and presents itself as an unsolvable problem.
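
To make “the competition can probably be quantified” concrete, here is a minimal toy sketch (my own illustration; the class, the weights and the allocation rule are all invented for the example, not a worked-out theory) of priorities as a means-ends hierarchy on one scale:

```python
# Toy sketch: priorities as a means-ends hierarchy with commensurable weights.
# Everything here (names, weights, the proportional allocation rule) is made up
# purely to illustrate the idea in the post.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Priority:
    name: str
    weight: float                        # importance relative to the end it serves
    parent: Optional["Priority"] = None  # the higher priority this one is a means to

    def effective_weight(self) -> float:
        # A lower priority inherits importance from the chain of ends it serves.
        if self.parent is None:
            return self.weight
        return self.weight * self.parent.effective_weight()

# The hierarchy from the example above, with made-up weights.
colonize = Priority("help colonize the galaxy", 1.0)
test_idea = Priority("find out if this idea is useful", 0.3, parent=colonize)
finish_post = Priority("finish this post", 0.8, parent=test_idea)
day_job = Priority("do my day job well", 0.5, parent=colonize)

def allocate(budget_hours: float, leaves: list["Priority"]) -> dict[str, float]:
    # Because all priorities sit on one scale, competition over a shared
    # resource can be settled in proportion to effective weight.
    total = sum(p.effective_weight() for p in leaves)
    return {p.name: round(budget_hours * p.effective_weight() / total, 2) for p in leaves}

print(allocate(10.0, [finish_post, day_job]))
# {'finish this post': 3.24, 'do my day job well': 6.76}
```

The only point is that once priorities share a scale and a means-ends structure, “how much of a resource does each one get?” becomes an ordinary calculation rather than an irreconcilable clash.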

(Utilitarianism already has utility, which also replaces “values” and achieves roughly the same results. But non-utilitarians would still be better off with an alternative to the word “values”.)

11 comments

FWIW, there seems to be a trend in my corner of the AI safety community (CHAI) to move away from the term 'values' and towards the term 'preferences', I think for similar reasons.

My sense is that this is just language co-opting useful ideas in favor of useful-sounding ideas. The term 'value' is quantitative and objective, as it is used, say, in economics or finance. It's not inherently vague, and goals can be measured against each other as valued higher or lower. I imagine this commensurable sense was shared in early philosophical use as well, as in debates about whether liberty or equality is the higher value.

If the language in vogue changed to 'priorities', I doubt it would take long before mission statements said things like "prioritizing strengthened community through live art". This seems no more lucid or operationalizable to me.

In my understanding of language, the word "value" refers to something that's deeper and more stable than the word "priorities".

When it comes to words that describe internal mental states, it's often not trivial to operationalize them. I think it's mistaken to say that we should ignore internal mental states because they're hard to introspect on.

At the last LessWrong Community event, a person presented Leverage Research's belief reporting, which is a very useful tool for introspection.

According to that framework, I get a different resonance when I say "I value learning" than when I say "I prioritize learning", and it seems that they don't map to the same mental substance.

I don't like it. To me, very broadly, "values" are what I want, "priorities" are what I will go after given that I have limited resources and it costs things to go after what I want and succeed at getting it.

One may say they value X, but not be working to increase X. It may be a preference, and yet it may also not be a priority. (A relevant quote: "Never give up what you want the most, for what you want the most at the moment.")

"I have limited resources and it costs things to go after what I want and succeed at getting it."

This is always true. Time is a constrained resource. Manpower is a constrained resource.

"it makes personal and societal “values” seem hard to define/operationalize, incommensurable, uncountable, frequently in conflict with each other and usually in unclear relationships to each other and to those who hold them."

I'm fairly sure that's how human terminal values are. If you wanted to formalize a single human's values, and you had the utility function be a sum of unaligned modules that actually change over time in response to the moral apologia the human encounters and the hardships they experience, that would get you something pretty believable.
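
One toy way to write that down (a sketch of the idea as stated, with all symbols invented for illustration rather than anyone's actual model):

$$U_t(x) = \sum_i w_{i,t}\, u_i(x), \qquad w_{i,t+1} = w_{i,t} + \Delta_i(\text{arguments and hardships encountered at } t)$$

where the $u_i$ are the unaligned modules and the drift term $\Delta_i$ captures how moral apologia and hardship reshape their relative weights over time.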

"value" has connotations about identity and long-term worldview. "priority" could include long-term tradeoff choices, but is more often about current circumstances.

Not everyone believes that everything is commensurable, and people often wish to be able to talk about these issues without implicitly presuming that it is.

Moreover, "values" suggests something that is desirable because it is a moral good. A priority can be something I just happen to selfishly want. For instance, I might hold diminishing suffering as a value, yet my highest current priority might be torturing someone to death because they killed a loved one of mine (having that priority is a moral failing on my part, but that doesn't make it impossible).

I break this down in a slightly different way. I think "value" is a coherent concept, but it's a much broader concept than we normally give it credit for. I've sometimes used the word "axia" in place of "value" (note "axia" is singular; it's a Greek word and just looks like a plural would in Latin; the Greek plural is "axies", but "axias" is probably acceptable in English) to avoid association with the confused specificity of "value". So instead I go in the direction of considering anything that is the value of a thought to be sensible as an axia/value. This is a different direction than most people pick, because I want to cover the entire space of value-like things rather than exclude some value-like things to get certain properties that make it easier to do math. I wrote some words about this in the final post of a multi-part series I did about a year ago, if you want to read more.


There's an old song that says "language is a virus" -- the meaning of which has changed for me over time and the phrase itself offers multiple interpretations.

Might I suggest that your values are what help define how you prioritize your alternatives and efforts within that means-ends framework?