Many people see themselves as members of various groups (the population of their home country, say, or their social network) and feel justified in caring more about the well-being of people in such a group than about that of others. They will appeal to reciprocity: "Those people pay taxes in our country; they are entitled to more support from 'us' than others are!" My question is: Is this inconsistent with some rationality axioms that seem obvious? Are there any often-adopted or otherwise reasonable axioms that make this inconsistent?


My question is: Is this inconsistent with some rationality axioms that seem obvious?

No. Rationality has not sent you orders to serve the collective. You are free to value the people and things you in fact value.

That's cultural kin selection. It isn't necessarily bad - for example, sometimes supporting your group pays. Of course, it can be bad - when patriotism leads to dying in battle for the sake of your comrades, that isn't so great for those that fell.

Depends on their [the fallen's] values.

If you don't think dying is sufficiently bad, feel free to substitute an example of memetic hijacking of your choice.

Oh, I certainly agree, but who are we to decide how others value dying relative to other goals? Their utility function ain't faulty just because we call its features memetic hijacking.

In one of Yvain's posts he mentions that a perfect utilitarian "attaches exactly the same weight to others' welfare as to [his or her] own". Utilitarianism seems to be popular here. "Others" seems to imply all others and makes no distinctions.

Well, nobody can dictate which terminal values you should have, i.e. the utility function is not up for grabs. However, if you choose a class of things similar to you (e.g. everything which weighs 150 lbs, everyone of your race, every human less than 5 km away, all human brains which weigh more than 1 pound, everything which has human DNA, or possibly everything living), then you can limit your utilitarianism to these things and be a perfect utilitarian w.r.t. this group. I guess she meant it this way, although I would welcome clarification or confirmation.
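To make that concrete, here is a minimal sketch in my own notation (not Yvain's or the commenter's), assuming individual welfare can be represented numerically and summed: the impartial utilitarian aggregates over everyone, while the group-restricted version simply shrinks the set being summed over.

\[
U_{\text{impartial}}(s) = \sum_{i \in P} u_i(s)
\qquad\qquad
U_{\text{group}}(s) = \sum_{i \in G} u_i(s), \quad G \subset P
\]

Here \(P\) is the set of all people, \(G\) the chosen in-group, and \(u_i(s)\) is person \(i\)'s welfare in outcome \(s\). Within \(G\), every member gets the same weight as the agent herself, which is what being "a perfect utilitarian w.r.t. this group" amounts to; everyone outside \(G\) implicitly gets weight zero.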

nobody can dictate which terminal values you should have, i.e. the utility function is not up for grabs

The most important case is that you can't yourself arbitrarily declare your own values.

I don't see the connection to my comment. Could you enlighten me, please?

(Your wording, even if unintentionally, seemed to suggest that the statement applies mainly to the way other people won't be able to actually force arbitrary terminal values on you (even when they convince you that they have). I think the remaining case, where you do that to yourself, is particularly important, as it's not a well-known idea that this too should be guarded against.)

Should it be a well-known idea that forcing terminal values on yourself should be guarded against? And is guarding against it even desirable?

Edit: Clarified

I don't expect that being systematically wrong about your own values would be desirable.

(See clarification in the grandparent)

Isn't your present self the determinant of your terminal values? The blueprint you compare against? Isn't it a tautology that your current utility function is the utility function of your present self?

If so, if at any one point in time you desire to reprogram a part of your own utility function, wouldn't that desire in itself mean that such a change is already a justified part of your present utility function?

If there is some tension between your conscious desires ("I want to feel this or that way about this or that") and your "subconscious" desires, why should that not be resolved in favor of your conscious choice?

If you consciously want to want X, but subconsciously want Y, who says which part of you takes precedence, and which is the "systematically wrong" part?

There is a difference between (say) becoming skilled at mathematics, and arbitrarily becoming convinced that you are, when in fact that doesn't happen. Both are changes in the state of your mind, both are effected by thinking, but there are also truth conditions on beliefs about the state of mind. If you merely start believing that your values include X, that doesn't automatically make it so. The fact of whether your values include X is a separate phenomenon from your belief about whether they do. The problem is when you become convinced that you value X, and start doing things that accord with valuing X, but you are in fact mistaken. And not being able to easily and reliably say what it is you value is not grounds for accepting an arbitrary hypothesis about what it is.

Thanks for the answer.

Your example is an epistemic truth statement. Changing "I am good at mathematics" to "I am not good at mathematics" or vice versa does not change your utility function.

Just like saying "I am overweight" does not imply that you value being overweight, or that you don't.

I understand your point that simply saying "I value X deeply" does not override all your previous utility assessments of X. However, I disagree on how to resolve that contradiction. You want to guard against it; you'd say "it's wrong". I'd embrace it as the more important utility function of your conscious mind.

You take the position of "What I consciously want to want does not matter, it only matters what I actually want, which can well be entirely different".

My question is what elevates those subconscious, harder-to-access stored terminal values over those you consciously want to value.

Should it not be the opposite, since you typically have more control (and can exert more rationality) over your conscious mind than your unconscious wants and needs?

Rephrase: When there is a clear conflict between what your conscious mind wants to want, and what you subconsciously want, why should that contradiction not be resolved in favor of your consciously expressed needs, guiding your actions? Making them your actual utility function.

Wanting to want X is again distinct from believing that you want X. Perhaps you believe that you want to want X, but you don't actually want to want X, you want to want Y instead, while currently you want Z and believe that you want W. (This is not about conscious vs. subconscious, this is about not confusing epistemic estimates of values with the values themselves, whatever nature each of these has.)

(See also An Epistemological Nightmare; I'm not joking though.)

Good link. I agree with guarding against wrong epistemic estimates of values (good wording).

Our disagreement comes down to this (I think): is "I want to want X"

a) an epistemic estimate of a value, or

b) a value in itself, pattern-matching "I want Y", with Y being "to want X"?

Consider a LW reader saying "I want to be a more rational reasoning agent" when previously she did not (this does not fit "want to want", but it also states a potentially new element of a utility function, possibly at odds with previous versions of the u.f.).

Could that reader be wrong about this? Or could there merely be a contradiction between the (consciously, how else?) stated value and other, conflicting values?

You'd say such a stated value can be wrong because it is merely an epistemic estimate of a value.

But why can you not introduce new values by wanting to want new values? Can you not (sorry) consciously try to modify your utility function at all? That would sound a bit fatalistic.

But why can you not introduce new values by wanting to want new values?

You can, it might be a bad idea (for some senses of "values"), and if you believe that you are doing that, it's not necessarily true, even though it might be.

I'm not saying that it's not possible to be correct; I'm saying that it's possible to be mistaken. In many situations where people claim to be correct about their values, there appears to be no reason to strongly expect that to be so, so they shouldn't have that much certainty.