I've read through some of the Sequences, but I'm still unclear on a few basic concepts around LW rationality. This is in part due to my learning style, which benefits from social engagement (i.e. discussions) rather than just reading. One of those concepts I'm unclear on: Is there an inherent value to human (or sentient) life?

It appears to me that one common theme on this site is that human life (current and future) is very important. Why is that so? Why is the goal people over paper clips?


Is there an inherent value to human (or sentient) life?

That's a question about an individual utility function, not rationality. I can't convince you why your utility function should have a term for the existence of other humans. But my utility function does. As it does for puppies, flowers, and double rainbows.
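To make the "term in a utility function" talk concrete, here is a minimal toy sketch in Python; the toy_utility function, its features, and its weights are all invented for illustration and are not anyone's actual values:

```python
# A toy illustration (not anyone's actual utility function): a utility
# function is just a mapping from world-states to numbers, and "having a
# term for X" means the number goes up when X is present.
def toy_utility(world):
    # 'world' is assumed to be a dict of features; the weights are made up.
    return (10.0 * world.get("humans_alive", 0)
            + 1.0 * world.get("puppies", 0)
            + 0.1 * world.get("flowers", 0)
            + 0.5 * world.get("double_rainbows", 0))

# Nothing "inherent" here: a different agent could use different weights,
# or drop the human term entirely, without making an arithmetic mistake.
print(toy_utility({"humans_alive": 2, "puppies": 3}))  # 23.0
```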

I see I've mistaken the word "inherent" to mean "many people share a term for the existence of humans". Thanks.

Is there an inherent value to human (or sentient) life?

"Inherent Value" is an oxymoron. The universe does not assign values to things. Value is a thing which exists only within a mind's Map of the world, and is not a property things can have.

See also: Metaethics.

As has been said, the Metaethics Sequence attempts to articulate a position on this.

My summary, take it for what it's worth:

Some things have more value than other things.
Talking about whether that value is "inherent" tends to cause confusion, but there is some calculation that can be performed on measurements of a thing to determine its value.
That computation is complex, has many inputs, and is effectively impossible for humans to articulate or even understand completely or precisely; it's not anything as simple as "human life" or "sentient life" or "happiness" or anything like that.
That computation will reliably return the judgment that a human being is more valuable than an equivalent mass of paper clips.

Words like "right", "moral", and so forth are best understood as references to that (as-yet-unspecified) computation, just like the symbol "+" is best understood as a reference to the operation of addition.
If someone misunderstands what "+" means -- for example, if they understand it to be a reference to multiplication instead -- they might agree that "2+2=4" but assert that "3+3=9". That's not because they disagree about addition; it's because they disagree about what "+" means, and they are simply mistaken about what "+" means.
Similarly, if someone misunderstands what "right" means -- for example, if they understand it to be a reference to the speaker's values, rather than a reference to that specific computation we talked about a second ago -- they will make false statements about what's right and wrong. That's not because they disagree about right and wrong; it's because they disagree about what "right" and "wrong" mean, and they are mistaken.
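A purely illustrative sketch of the "+" example (the function names addition and mistaken_reading are mine, not anything from the original argument):

```python
# Toy illustration of the disagreement-about-symbols point above.
def addition(a, b):
    return a + b

def mistaken_reading(a, b):   # someone who thinks "+" refers to multiplication
    return a * b

# The two readings happen to agree on "2 + 2"...
print(addition(2, 2), mistaken_reading(2, 2))   # 4 4
# ...but diverge on "3 + 3": the mistaken reader asserts "3 + 3 = 9".
print(addition(3, 3), mistaken_reading(3, 3))   # 6 9
```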

I should perhaps note that I don't hold this position myself, and may be mischaracterizing it.

Is there an inherent value to human (or sentient) life?

There is no such thing as inherent value. All value is subjective. Humans are the ones making these subjective value judgments. As such: A) without humans nothing has value, and B) humans tend to value themselves a lot, so yes, they have value.

Feel free to replace "humans" with "sentient life" as appropriate.

We're humans, so we maximize human utility. If squirrels were building AIs, their AIs ought to maximize what's best for squirrels.

There's nothing inherently better about people vs paperclips vs squirrels. But since humans are making the AI, we might as well make it prefer people.

That's one element in what started my line of thought. I was imagining situations where I would consider the exchange of human lives for non-human objects. How many people's lives would be a fair exchange for a pod of bottlenose dolphins? A West Virginia mountaintop? An entire species of snail?

I think what I'm getting towards is there's a difference between human preferences and human preference for other humans. And by human preferences, I mean my own.


I think what I'm getting towards is there's a difference between human preferences and human preference for other humans. And by human preferences, I mean my own.

That is one objection to Coherent Extrapolated Volition (CEV), i.e. that human values are too diverse. That said, the space of possible futures an AGI could spit out is VERY large compared to the space of futures people would want, even taking the diversity of human values into consideration.

Is there an inherent value to human (or sentient) life?

If by "inherent" we mean "contained within itself", then it depends on whether this human (or sentient life) attributes value to its own existence.

This may seem like wordplay to you, but it's as meaningful an answer as can be given -- sentient life has inherent value if it values itself, because it's then that it has value contained within itself.

By contrast, things like rainbows have value only in the minds of others and no inherent value in themselves, because rainbows don't have value systems; only creatures with minds do.

Why is the goal people over paper clips?

Why do you need justification for this? Would you exchange the lives of your friends or relatives for a ton of paperclips?

No, but I might exchange the lives of someone else's friends for a billion tons of paperclips.


But would you exchange every single person on the planet for 10^18 tons of paperclips?

There is no rule in the laws of physics that says that human life is valuable.

However, when people hear this, there is a flinch response: "human life is not really valuable, we only think it is."

That is wrong. Or, at least, not justified. In fact, the laws of physics and the universe at large are not qualified to comment on moral issues. Value, as a universal property, is not well defined. It is subjective, although not totally -- it can be argued about between people, and minds can be changed. Morality is something that exists only where intelligent agents are concerned. I value other humans because I value myself, and logic dictates that they are not so different from me. As for why I value myself... well, that one may be down to the basic architecture of the mind.

"Physics" is high status. We don't like when high status things don't value the same things we do, it's scary. That physics isn't an enemy monkey is immaterial to our stupid stupid brain.

Here's a stab at your question: what is right is not only a function of the action and/or its consequences; it also depends on your values. Humans are not inherently valuable. The universe as a whole doesn't care about them, or about anything. But humans care about other humans. You probably care about humans. Thus the word "right", as it applies to you, means in part doing what's good for humans. If you only valued paperclips, then it would be right with respect to you for you to maximize paperclips at the cost of human lives--but it would still be wrong with respect to me and my preferences.

Some further resources: The sequence that will answer your question is the metaethics sequence. Any other questions you have can be posted in this month's open thread.

Values (utilities, goals, etc) are arational. Rationality, LW or otherwise, has nothing to say about "correctness" of terminal values. (Epistemic rationality - the study of how to discover objective truth - is valuable for most actual values which reference the objective, real world; but it is still only a tool, not necessarily valued for itself.)

Many LW posters and readers share some values, including valuing human life, so we find it productive to discuss them here. But no-one can or will tell you that you should or ought to have that value, or any other value - except as an instrumental sub-goal of another value you already have.

Your expression, "inherent values", is at best confusing. Values cannot be attributes purely of the valued things; they are always attributes of the tuple (valued thing, valuing agent). It doesn't make sense to say they are "inherent" in just one of those two parts.
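A rough sketch of that two-place framing; the agents, things, and numbers in valuations below are invented purely for illustration:

```python
# Sketch of the point above: value is naturally a two-place function of
# (valued thing, valuing agent), not a one-place property of the thing.
# The agents and numbers here are invented for illustration only.
valuations = {
    ("human", "typical_human"): 1_000_000,
    ("paperclip", "typical_human"): 0.01,
    ("human", "paperclip_maximizer"): 0.0,
    ("paperclip", "paperclip_maximizer"): 1.0,
}

def value(thing, agent):
    # Asking for value(thing) alone is a type error in this framing;
    # you always have to say *to whom* it is valuable.
    return valuations[(thing, agent)]

print(value("human", "typical_human"))        # 1000000
print(value("human", "paperclip_maximizer"))  # 0.0
```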

Now, if you ask why many people here share this value, the answers are going to be of two kinds. First, why people in general have a high likelihood of holding this value. And second, whether this site tends to filter or select people based on their holding this value, and if so how and why it does that. These are important, deep, interesting questions that may allow for many complex answers, which I'm not going to try to summarize here. (A brief version, however, is that people care more about other people than about paperclips, because other people supply or influence almost all that a person tends to need or want in life, while paperclips give the average person little joy. I doubt that's what you're asking about.)

Rationality, LW or otherwise, has nothing to say about "correctness" of terminal values.

Correctness is the property of a description that accords with the thing being described. When you ask, "What are my terminal values?", you are seeking just such a description. A belief about terminal values can be correct or incorrect when it reflects or doesn't reflect the terminal values themselves. This is not fundamentally different from a belief about yesterday's weather being correct or incorrect when it reflects the weather correctly or incorrectly. Of course, the weather itself can't be "correct" or "incorrect".

I've been trying to work through Torture versus Dustspecks and The Intuitions Behind Utilitarianism and getting stuck...

It seems values are arational, but there can be an irrational difference between what we believe our values are and what they really are.

there can be an irrational difference between what we believe our values are and what they really are.

Certainly. We are not transparent to ourselves: we have subconscious and situation-dependent drives; we don't know in advance precisely how we'll respond to hypothetical situations, how much we'll enjoy and value them; we have various biases and inaccurate/fake memory issues which cause us to value things wrongly because we incorrectly remember enjoying them; our conscious selves self-deceive and are deceived by other brain modules; and so on.

Moreover, humans don't have well-defined (or definable) utility functions; our different values conflict.
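One toy way to see how conflicting values can block a well-defined utility function, assuming the standard requirement that a utility representation must respect every pairwise preference; the options and the simple three-way cycle check below are made up for illustration:

```python
# Sketch: cyclic (intransitive) preferences can't all be represented by
# one number per option, so no single utility function fits them.
prefers = {("comfort", "honesty"), ("honesty", "adventure"), ("adventure", "comfort")}

def admits_utility_function(options, prefers):
    # A utility representation requires u(a) > u(b) whenever a is preferred
    # to b; this toy check only looks for three-way preference cycles.
    for a, b in prefers:
        for c in options:
            if (b, c) in prefers and (c, a) in prefers:
                return False  # found a cycle a > b > c > a
    return True

print(admits_utility_function({"comfort", "honesty", "adventure"}, prefers))  # False
```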

Have you gotten to the metaethics sequence? It's (indirectly) addressed there.