You have desires. You also have desires about your desires: perhaps you desire cake but you also desire that you didn't desire cake. You also have desires about the processes which produce your desires: perhaps you desire X and Y but only because of a weird evolutionary turn and you wish the processes which created your desires weren't so far beyond your own control.

But what should you do, when these different kinds of desires are in conflict with each other? If you could reflect upon and then rewrite your own desires, how should you choose to resolve those conflicts?

Nozick (1993) proposes 23 constraints on rational preferences, which one could also interpret as 23 constraints on the process of resolving conflicts among one's preferences. I reproduce this passage below, for those who are interested:

Let me emphasize that my purpose is not to endorse the particular conditions I shall put forward or to defend their particular details. Rather, I hope to show what promising room there is for conditions of the sort that I discuss...

Some of these conditions are justified by instrumental considerations, such as the “money pump” argument that preferences be transitive, while others are presented as normatively appealing on their face. (Unless these latter can be given an instrumental justification also, isn’t this already a step beyond instrumental rationality?) Contemporary decision theory takes this one step beyond Hume: even if no individual preference can be faulted as irrational on its own, a group of them together can be. Let us suppose that there are normative principles specifying the structure of several preferences together and that these principles are conditions of rationality. (The literature contains putative counterexamples and objections to some of the Von Neumann–Morgenstern conditions; the point here is not to use those particular ones but some such appropriate set of conditions.)
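To make the "money pump" point concrete, here is a minimal Python sketch (the agent, the items, and the fee are invented for illustration): an agent whose strict preferences cycle A over B over C over A can be traded around its own cycle for a small fee at each step, ending up holding what it started with but poorer.

```python
# A strict preference relation as a set of ordered pairs; cyclic preferences
# fail the transitivity check and can be exploited by repeated trades.

def prefers(agent_prefs, x, y):
    """True if the agent strictly prefers x to y."""
    return (x, y) in agent_prefs

def is_transitive(agent_prefs, items):
    return all(
        not (prefers(agent_prefs, a, b) and prefers(agent_prefs, b, c))
        or prefers(agent_prefs, a, c)
        for a in items for b in items for c in items
    )

def money_pump(agent_prefs, holding, fee=0.25, rounds=6):
    """Trade the agent up its own preference cycle, charging a fee per trade."""
    paid = 0.0
    for _ in range(rounds):
        # offer any item the agent strictly prefers to what it currently holds
        better = next((x for (x, y) in agent_prefs if y == holding), None)
        if better is None:
            break
        holding, paid = better, paid + fee
    return holding, paid

cyclic = {("A", "B"), ("B", "C"), ("C", "A")}
print(is_transitive(cyclic, {"A", "B", "C"}))   # False
print(money_pump(cyclic, "A"))                  # ('A', 1.5): back where it started, fees paid
```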

I. The person satisfies the Von Neumann–Morgenstern or some other specified appropriate set of conditions upon preferences and their relations to probabilities.

This suggests at least one further condition that a person’s preferences must satisfy in order to be rational, namely, that she must prefer satisfying the normative conditions to not satisfying them. Indeed, for any valid structural condition C of rationality, whether rationality of preference, of action, or of belief:

II. The person prefers satisfying the rationality condition C to not satisfying C.

(This condition should be stated as a prima facie one or with a ceteris paribus clause, as should many of the ones below. The person who knows that he will be killed if he always satisfies the condition that indifference be transitive, or the condition that he not believe any statement whose credibility is less than that of an incompatible statement, may well prefer not to.) Since the person is, let us assume, instrumentally rational,

III. The person will, all other things being equal, desire the means and preconditions to satisfying rationality conditions C.

These rationality conditions C not only concern the structure of preferences but also include whatever the appropriate structural conditions are on the rationality of belief. Hence the person will desire the means and preconditions of rational belief; she will desire the means and preconditions for the effective assignment of credibility values (and for deciding about the utility of holding a particular belief).

A person lacks rational integration when he prefers some alternative x to another alternative y, yet prefers that he did not have this preference, that is, when he also prefers not preferring x to y to preferring x to y. When such a second-order preference conflicts with a first-order one, it is an open question which of these preferences should be changed. What is clear is that they do not hang together well, and a rational person would prefer that this not (continue to) be the case. We thus have a requirement that a person have a particular third-order preference, namely, preferring that the conflict of preferences not obtain. Let S stand for this conflict situation, where the person prefers x to y yet prefers not having this preference, that is, let S stand for: xPy & [not-(xPy) P (xPy)]. Then

IV. For every x and y, the person prefers not-S to S, all other things being equal. 
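A toy encoding of the conflict situation S (the identifiers are invented) can make condition IV concrete: the check below returns True exactly when the person prefers x to y and also prefers not having that preference to having it.

```python
# First-order preferences over options, and second-order preferences over
# first-order preference states (written as strings for simplicity).

first_order = {("cake", "fruit")}                  # xPy: prefers cake to fruit
second_order = {("not:cake>fruit", "cake>fruit")}  # prefers lacking that preference to having it

def in_conflict_S(x, y, first, second):
    has_pref = (x, y) in first
    prefers_not_having_it = (f"not:{x}>{y}", f"{x}>{y}") in second
    return has_pref and prefers_not_having_it      # S obtains

print(in_conflict_S("cake", "fruit", first_order, second_order))  # True
```

Condition IV then asks, other things being equal, that the person prefer that this check come out False.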

This does not mean the person must choose not-S over S no matter what. An addict who desires not to desire heroin may know that he cannot feasibly obliterate his first-order desire for heroin, and thus know that the only way to resolve the conflict of preferences is to drop his second-order desire not to have that first-order desire. Still, he may prefer to keep the conflict among desires because, with it, the addiction will be less completely pursued or his addictive desire less of a flaw.

Hume claims that all preferences are equally rational. But an understanding of what a preference is, and what preferences are for, might make further conditions appropriate. In recent theories, a preference has been understood as a disposition to choose one thing over another. The function of preferences, the reason evolution instilled the capacity for them within us, is to eventuate in preferential choice. But one can make preferential choices only in some situations: being alive, having the capacity to know of alternatives, having the capacity to make a choice, being able to effectuate an action toward a chosen alternative, facing no interference with these capacities that makes it impossible to exercise them. These are preconditions (means) for preferential choice. Now, one does not have to prefer that these conditions continue; some people might have reason to prefer being dead. But they need a reason, I think; the mere preference for being dead, for no reason at all, is irrational. There is a presumption that the person will prefer that the necessary conditions for preferential choice, for making any preferential choice at all, be satisfied; she need not actually have the preference, but she needs a reason for not having it.

V. The person prefers that each of the preconditions (means) for her making any preferential choices be satisfied, in the absence of any particular reason for not preferring this.

So a person prefers being alive and not dying, having a capacity to know of alternatives and not having this capacity removed, having the capacity to effectuate a choice and not having this capacity destroyed, and so on. Again, we might add

VI. The person prefers, all other things being equal, that the capacities that are the preconditions for preferential choice not be interfered with by a penalty (= a much unpreferred alternative) that makes him prefer never to exercise these capacities in other situations.

There is something more to be said about reasons, I think. (I propose this very tentatively; more work is needed to get this matter right.)

Suppose I simply prefer x to y for no reason at all. Then I will be willing, and should be willing, to reverse my preference to gain something else that I prefer having. I should be willing, were it in my power, to reverse my preference, to now start preferring y to x, in order to receive 25 cents. I then would move from a situation of preferring x to y to one of preferring y to x and having 25 additional cents. And won't I prefer the latter to the former? Perhaps not; perhaps I strongly prefer x to y, and do so for no reason at all. Having a strong preference for no reason at all is, I think, anomalous. Given that I have it, I will act upon it; but it is irrational to be wedded to it, paying the cost of pursuing it or keeping it when I have no reason to hold it. Or perhaps I prefer preferring x to y to not having this preference, and I prefer that strongly enough to outweigh 25 cents. So this second-order preference for preferring x to y might make me unwilling to give up that preference. But why do I have this second-order preference? I want to say that, unlike any arbitrary first-order preference, a second-order preference requires a reason behind it. A second-order preference for preferring x to y is irrational unless the person has some reason for preferring x to y. That is, he must have a reason for preferring to have that first-order preference (perhaps his mother told him to, or perhaps that preference now has become part of his identity and hence something he would not wish to change), or have a direct reason for preferring x to y, a reason concerning the attributes of x and y. But what is a direct reason? Must a reason in this context be anything more than another preference? It must at least be another preference that functions like a reason, that is, one that is general, though defeasible. To have a reason for preferring x to y is standardly thought to involve knowing some feature F of x such that, in general, all other things being equal, you prefer things with F to things without them, among things of the type that x is. (Preferring cold drinks to warm does not require preferring cold rooms to warm ones.)

VII. If the person prefers x to y, either: (a) the person is willing to switch to preferring y to x for a small gain, or (b) the person has some reason to prefer x to y, or (c) the person has some reason to prefer preferring x to y to not doing that.
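One hedged way to cash out a direct reason in the feature-F sense just described (the items and features are invented) is as a search for a feature that x has, y lacks, and that the person generally prefers among things of x's type:

```python
# A reason for preferring x to y, on this reading: a feature F of x, absent in
# y, such that among things of x's type one generally prefers F-things.

drinks = {
    "iced tea":  {"cold": True},
    "warm cola": {"cold": False},
}
general_feature_prefs = {"drink": ["cold"]}   # among drinks, prefer cold ones

def direct_reason(x, y, kind, items, feature_prefs):
    """Return a feature of x (absent in y) that backs preferring x to y, if any."""
    for f in feature_prefs.get(kind, []):
        if items[x].get(f) and not items[y].get(f):
            return f
    return None

print(direct_reason("iced tea", "warm cola", "drink", drinks, general_feature_prefs))
# 'cold': note this implies nothing about preferring cold rooms to warm ones
```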

I don't say that all of a person's preferences require reasons for them (it is unclear what to say about ones that are topmost; perhaps they are anchored by ones under them), but first-level ones do require reasons when the person is not willing to shift them. Once we are launched within the domain of reasons for preferences, we can consider how more general reasons relate to less general ones, we can impose consistency conditions among the reasons, and so forth. The way becomes open for further normative conditions upon preferences, at least for those preferences a person is not willing to switch at the drop of a hat. Especially in the case of preferences that go against the preconditions for preferential choice mentioned above, a person will need not just any reasons but reasons of a certain weight, where this means at least that the reasons must intertwine with many of the person's other preferences, perhaps at various levels.

We also might want to add that the desires and preferences are in equilibrium, in that knowing the causes of your having them does not lead you to (want to) stop having them. The desires and preferences withstand a knowledge of their causes.

VIII. The person's desires and preferences are in equilibrium (with his beliefs about their causes).

Since preferences and desires are to be realized or satisfied, a person whose preferences were so structured that he always wanted to be in the other situation (preferring y to x when he has x and preferring x to y when he has y) would be doomed to dissatisfaction, to more dissatisfaction than is inherent in the human condition. The grass shouldn't always be greener in the other place. So

IX. For no x and y does the person always prefer x to y when y is the case and y to x when x is the case. (His conditional preferences are not such that for some x and y he prefers x to y/given that y is the case, and prefers y to x/given that x is the case.)
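Condition IX can be read as a check over conditional preferences. In the sketch below (the encoding and the example are invented), a violation is exactly a pair of alternatives where the person prefers the other one whichever currently obtains:

```python
# Conditional strict preferences, keyed by what is currently the case.
conditional_prefs = {
    "city":    [("country", "city")],   # given city life, prefers country to city
    "country": [("city", "country")],   # given country life, prefers city to country
}

def violates_IX(cond_prefs):
    """Return an (x, y) pair showing a 'grass is greener' reversal, if any."""
    for given, prefs in cond_prefs.items():
        for (a, b) in prefs:
            if b == given and (given, a) in cond_prefs.get(a, []):
                return (a, b)
    return None

print(violates_IX(conditional_prefs))   # ('country', 'city'): doomed to dissatisfaction
```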

Desires are not simply preferences. A level of filtering or processing takes place in the step from preferences to desires, as (we shall see) another does in the step from desires to goals. We might say that rational desires are those it is possible to fulfill, or at least those you believe it is possible to fulfill, or at least those you don't believe it is impossible to fulfill. Let us be most cautious and say

X. The person does not have desires that she knows are impossible to fulfill.

Perhaps it is all right to prefer to fly unaided, but it is not rational for a person to desire this. (It might be rational, though, to wish it were possible.) Desires, unlike mere preferences, will feed into some decision process. They must pass some feasibility tests, and not simply in isolation: your desires must be jointly copossible to satisfy. And when it is discovered they are not, the desires must get changed, although a desire that is altered or dropped may remain as a preference.
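The joint test is the important one, and a small sketch (the worlds and desires are invented) shows how desires can each be satisfiable on their own while no feasible situation satisfies them all:

```python
# Each desire is a predicate over candidate "worlds"; the spirit of condition X
# is that the whole set of desires be satisfiable by at least one feasible world.

feasible_worlds = [
    {"lives_in": "city",    "owns_garden": False},
    {"lives_in": "country", "owns_garden": True},
]

desires = {
    "urban life": lambda w: w["lives_in"] == "city",
    "big garden": lambda w: w["owns_garden"],
}

def jointly_satisfiable(desires, worlds):
    return any(all(d(w) for d in desires.values()) for w in worlds)

# each desire is individually satisfiable in some feasible world...
print(all(any(d(w) for w in feasible_worlds) for d in desires.values()))  # True
# ...but not jointly, so one of them must be dropped or demoted to a preference
print(jointly_satisfiable(desires, feasible_worlds))                      # False
```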

Goals, in turn, are different from preferences or desires. To have or accept goals is to use them to filter from consideration in choice situations those actions that don't serve these goals well enough or at all. For beings of limited capacity who cannot at each moment consider and evaluate every possible action available to them (try to list all of the actions available to you now), such a filtering device is crucial. Moreover, we can use goals to generate actions for serious consideration, actions that do serve these goals. And the goals provide salient dimensions of the outcomes, dimensions that will get weight in assessing the utility of these outcomes. Given these multiple and important functions of goals, one would expect that for an important goal that is stable over time we would devote one of our few channels of alertness to it, to noticing promising routes to its achievement, monitoring how we currently are doing, and so on.

How do our goals arise? How are they selected? It seems plausible to think that they arise out of a matrix of preferences, desires, and beliefs about probabilities, possibilities, and feasibilities. (And then goals reorganize our desires and preferences, giving more prominence to some and reversing others because that reversed preference fits or advances the goal.) One possibility is that goals arise in an application of expected utility theory. For each goal Gi, treat pursuing goal Gi as an action with its own probability distribution over outcomes, and compute the expected utility of this "action." Adopt that goal with the maximum expected utility, and then use it to generate options, exclude others, and so forth.
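A hedged sketch of this proposal, with invented goals, probabilities, and utilities: treat each candidate goal as a lottery over outcomes, compute expected utilities, and adopt the maximizer. The optional margin parameter anticipates the decisive-win refinement discussed a little further on.

```python
# Goal adoption as expected utility maximization over candidate goals.

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)          # lottery: (probability, utility) pairs

def adopt_goal(candidates, margin=0.0):
    eus = {g: expected_utility(lot) for g, lot in candidates.items()}
    best = max(eus, key=eus.get)
    if all(eus[best] - eu >= margin for g, eu in eus.items() if g != best):
        return best
    return None                                    # no candidate wins decisively enough

candidates = {
    "write_book":   [(0.4, 100), (0.6, 10)],
    "learn_piano":  [(0.8, 40),  (0.2, 5)],
    "run_marathon": [(0.5, 30),  (0.5, 20)],
}
print(adopt_goal(candidates))              # 'write_book' (EU 46 vs 33 and 25)
print(adopt_goal(candidates, margin=20))   # None: 46 - 33 < 20, not a decisive win
```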

There is an objection to this easy way of fitting goals within an expected utility framework. The effect of making something Gi a goal is a large one. Now Gi functions as an exclusionary device and has a status very different from another possible goal Gj that came very close but just missed having maximum expected utility. A marginal difference now makes a very great difference. It seems that large differences, such as one thing setting the framework whereby other things are excluded, should be based upon pre-existing differences that are significant. Consider the descriptive finding that a person facing a decision seeks a dominant structure, and she uses mechanisms such as combining and altering attributes and collapsing alternatives in order to get one action weakly dominating all others on all (considered) attributes. Thereby, conflict is avoided, for one action clearly is best; there is no reason for doing another. Will such dominance always set up a gulf between actions that is significant enough to make a qualitative difference with large effects and so be applicable to the formation of goals? Yet one action can weakly dominate another when there are six dimensions, the two actions tying on five of these while the first action is (only) slightly better on the sixth. Even in this framework, we seem to need more than simply weak dominance; perhaps we need strong winning on one dimension or winning on many of them.
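Weak dominance itself is simple to state over attribute scores, and a short sketch (the scores are invented) shows how thin the winning margin can be:

```python
# Action a weakly dominates b if it is at least as good on every attribute and
# strictly better on at least one; the strict edge can be negligible.

def weakly_dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# six attributes: tied on five, marginally better on the sixth
act1 = [5, 5, 5, 5, 5, 5.01]
act2 = [5, 5, 5, 5, 5, 5.00]
print(weakly_dominates(act1, act2))   # True, despite a negligible difference
```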

Returning to the expected utility framework, we might say that goal Gi is to be chosen not simply when it has maximum expected utility but when it beats the other candidate goals decisively. For each j, EU(Gi) − EU(Gj) is greater than or equal to some fixed positive specified quantity q. (There remains a similar but smaller problem, though. Gi beats the other goals decisively, yet there is no decisive difference between beating decisively and not doing so; the difference EU(Gi) − EU(Gj) might barely reach, or just fail to reach, q.) To make something a goal is, in part, to adopt a desire to find a feasible route from where you are to the achievement of that goal. Therefore,

XI. A person will not have a goal for which he knows that there is no feasible route, however long, from his current situation to the achievement of that goal.
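Condition XI reads naturally as a reachability requirement. In the sketch below (the states and transitions are invented), a goal is admissible only if some chain of feasible steps, however long, leads from the current situation to it:

```python
from collections import deque

# Feasible one-step transitions between situations.
transitions = {
    "couch":          ["training plan"],
    "training plan":  ["first 5k"],
    "first 5k":       ["marathon finish"],
    "unaided flight": [],               # nothing leads here
}

def feasible_route_exists(start, goal, edges):
    """Breadth-first search for any route, however long, from start to goal."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for nxt in edges.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(feasible_route_exists("couch", "marathon finish", transitions))  # True
print(feasible_route_exists("couch", "unaided flight", transitions))   # False: ruled out as a goal
```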

Moreover, we might say that a rational person will have some goals toward which she will search for feasible routes, and not merely have preferences and desires. She will filter out actions that cannot reach these goals, generate for consideration actions that might reach them, and so on. And some of these goals will have some stability, so that they can be pursued over time with some prospect of success.

XII. A person will have some stable goals.

A rational person will consider not only particular (external) outcomes but also what he himself is like, and he will have some preferences among the different ways he might be. Let Wp be the way the person believes he will be when p is the case; let Wq be the way he believes he will be when q is the case. (These include the ways that p, or q, will cause or shape or prompt him to be.) There is a presumption, which can be overridden by reasons, that preferences among ways of being will take precedence over lower-level preferences that are personal ones. (Personal preferences are ones derived solely from estimates of benefits to himself.)

XIII. If the person prefers Wp to Wq, then (all things being equal) he does not hold the (personal) preference of q to p.

Condition XIII holds that the way the person is, what kind of person he is, will have greater weight in his preferences than (what otherwise would be) his personal preferences. (Is this condition culture-bound and plausible only to people in certain kinds of cultures?)

The Dutch book argument that someone's probability beliefs should satisfy the axioms of probability theory says that if they do not, and if she is willing always to bet upon such probability beliefs, then someone can arrange things so that she is sure to lose money and hence reach a less preferred alternative. This argument says that if her (probabilistic) beliefs are irrational, she can be guaranteed to end up worse off on her utility scale. We might try the dual of this argument, imposing as a condition:

XIV. A person's desires are not such that acting upon them guarantees that she will end up with irrational beliefs or probabilities.

Various things might come under the ban of this condition: desiring to believe something no matter what the evidence; desiring to spend time with a known liar without any safeguards; desiring to place oneself in a state (through alcohol, drugs, or whatever) that will have continuing effects on the rationality of one's beliefs. But this requirement is too strong as stated; perhaps acting upon the desire will bring her something she (legitimately) values more than avoiding some particular irrational beliefs or probabilities. Similarly, the Dutch book requirement is too strong as usually stated, for perhaps some situation holds in the world so that having incoherent probabilities will bring a far greater benefit (someone will bestow a large prize upon you for those incoherent probabilities) than the loss to be incurred in the bets. The Dutch book argument points out that loss can be guaranteed, but still it might be counterbalanced; so too the irrational beliefs or probabilities you have through violating condition XIV might be counterbalanced. To avoid this, the moral of the Dutch book argument must not be put too strongly, and similarly for condition XIV.
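For comparison, the Dutch book construction itself fits in a few lines (the credences and stake are invented): an agent whose credences in A and in not-A sum to more than 1, and who will pay credence times stake for a bet paying the stake if the proposition holds, can be sold both bets and is guaranteed a net loss. Whether that sure loss is counterbalanced by something else is a further question, as noted above.

```python
# Sure loss from incoherent credences: the agent buys a bet on A and a bet on
# not-A at her own prices; exactly one bet pays off, whatever happens.

def guaranteed_net(p_A, p_not_A, stake=1.0):
    cost = (p_A + p_not_A) * stake      # total price paid for the two bets
    payout = stake                      # exactly one of A, not-A pays the stake
    return payout - cost                # the same number whether A holds or not

print(guaranteed_net(0.6, 0.6))   # -0.2: a guaranteed loss whenever the credences sum past 1
```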

These fourteen conditions can take us some considerable distance past Hume toward substantive constraints upon preferences and desires. Empirical information about the actual preconditions of satisfying the conditions of rationality, and of making preferential choices (mandated by conditions III, V, and VI), might require quite specific substantive content in one's preferences and desires, the more so when combined with the constraints of the other conditions.

Can we proceed further to specific content? One intriguing route is to attempt to parallel with desire what we want to say about the rationality of belief. For example, people have held that a belief is rational if it is formed by a reliable process whose operation yields a high percentage of true beliefs. To be sure, the details are more complicated, but we might hope to parallel these complications also. A rational desire, then, would be one formed by a process that reliably yields a high percentage of ______ desires. But how are we to fill in that blank? What, for desires, corresponds to truth in the case of beliefs? For now, I have no independent substantive criterion to propose.

We can, however, use our previous conditions, and any additional similar ones, to specify the goal of that process: a desire or preference is rational only if it was formed by a process that reliably yields desires and preferences that satisfy the previous conditions on how preferences are to be structured, namely, conditions I–XIV. This says more than just that these fourteen conditions are to be satisfied, for any process (we can follow) that reliably yields the satisfaction of these conditions may also further constrain a person's desires and preferences.

XV. A particular preference or desire is rational only if there is a process P for arriving at desires and preferences, and (a) that preference or desire was arrived at through that process P, and (b) that process P reliably yields desires and preferences that satisfy the above normative structural conditions I–XIV, and (c) there is no narrower process P′ such that the desire or preference was arrived at through P′, and P′ tends to produce desires and preferences that fail to satisfy conditions I–XIV.

If we say that preferences and desires are rationally coherent when they satisfy conditions I–XIV (and similar conditions), then condition XV says that a preference or desire is rational only if (it is rationally coherent and) it is arrived at by a process that yields rationally coherent preferences and desires.
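A rough sketch of what checking condition XV might involve, with invented processes, track records, and a stand-in coherence test for conditions I–XIV:

```python
# Clause (a): the preference came from the process. Clause (b): the process
# reliably yields coherent outputs. Clause (c), about narrower sub-processes,
# would need a further pass over any sub-process the preference also came through.

def reliability(process_outputs, is_coherent):
    checked = [is_coherent(p) for p in process_outputs]
    return sum(checked) / len(checked) if checked else 0.0

def satisfies_XV_ab(preference, produced_by, track_records, is_coherent, threshold=0.9):
    outputs = track_records[produced_by]
    return preference in outputs and reliability(outputs, is_coherent) >= threshold

track_records = {
    "reflective deliberation": ["pref1", "pref2", "pref3"],
    "late-night impulse":      ["pref4", "pref5"],
}
coherent = {"pref1", "pref2", "pref3", "pref5"}   # stand-in for satisfying I-XIV

print(satisfies_XV_ab("pref2", "reflective deliberation", track_records,
                      coherent.__contains__))   # True
print(satisfies_XV_ab("pref5", "late-night impulse", track_records,
                      coherent.__contains__))   # False: only half its outputs cohere
```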

Not only can that process P reliably yield rationally coherent preferences and desires, it can aim at such preferences and desires, it can shape and guide preferences and desires into rational coherence. The process P can be a homeostatic mechanism, one of whose goal-states is that preferences and desires be rationally coherent. In that case, a function of preferences and desires is to be rationally coherent. (Similarly, if the belief-forming mechanism B aims at beliefs being approximately true, then one function of beliefs is to be approximately true.)

We therefore might add the following condition.

XVI. The process P that yields preferences and desires aims at their being rationally coherent; it is a homeostatic mechanism, one of whose goal-states is that preferences and desires be rationally coherent.

And similarly,

XVII. The cognitive mechanism B that yields beliefs aims at these beliefs satisfying particular cognitive goals, such as these beliefs being (approximately) true, having explanatory power, and so on. B is a homeostatic mechanism, one of whose goal-states is that the beliefs meet the cognitive goals.
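As a loose illustration of the monitor-and-correct structure that conditions XVI and XVII describe (the incoherence test and the repair policy here are invented and deliberately crude), a homeostatic preference process might repeatedly scan for a transitivity violation and repair it:

```python
# Monitor the preference set for a transitivity failure (aPb and bPc without
# aPc) and repair it; here the repair simply drops one offending pair.

def transitivity_violation(prefs):
    for (a, b) in prefs:
        for (b2, c) in prefs:
            if b == b2 and a != c and (a, c) not in prefs:
                return (a, b, c)
    return None

def homeostatic_step(prefs):
    bad = transitivity_violation(prefs)
    if bad:
        a, b, c = bad
        prefs = prefs - {(b, c)}        # crude repair: drop the second link
    return prefs

prefs = {("A", "B"), ("B", "C")}        # aPb and bPc, but no aPc yet
while transitivity_violation(prefs):
    prefs = homeostatic_step(prefs)
print(prefs)                            # {('A', 'B')}: coherent, if impoverished
```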

A function of preferences and desires is to be rationally coherent; a function of beliefs is to meet the cognitive goals. That follows from our earlier account of function, if these mechanisms P and B are indeed such homeostatic mechanisms. Suppose these homeostatic mechanisms do produce beliefs and desires with these functions. Is it their function to do so? That depends upon what other mechanisms and processes produce and maintain those desire- and belief-forming mechanisms. If those preference and cognitive mechanisms P and B were themselves designed, produced, or altered and maintained by homeostatic devices whose goals included aiming P and B at being devices that produced rationally coherent preferences and approximately true beliefs, then we have a double functionality. It is a function of the preferences and beliefs to be rationally coherent and approximately true, and it also is a function of the mechanisms that produce such beliefs and preferences to produce things like that, with those functions.

XVIII. There is a homeostatic mechanism M1 whose goal-state is that the preference mechanism P yield rationally coherent preferences, and P is produced or maintained by M1 (through M1's pursuit of this goal-state).

XIX. There is a homeostatic mechanism M2, whose goal-state is that the belief mechanism B yield beliefs that fulfill cognitive goals, and B is produced or maintained by M2 (through M2's pursuit of this goal-state).

It is plausible to think that our desire- and belief-forming mechanisms have undergone evolutionary and social shaping that in some significant part aimed at their having these functions. There is more. Once people become self-conscious about their preferences and beliefs, they can guide them, monitor them for deviations from rational coherence and truth, and make appropriate corrections. Conscious awareness becomes a part of the processes P and B, and consciously aims them at the goals of rational coherence and truth.

XX. One component of the homeostatic preference- and desire-forming process P is the person's consciously aiming at rationally coherent preferences and desires.

XXI. One component of the homeostatic belief-forming process B is the person's consciously aiming at beliefs that fulfill cognitive goals.

This self-awareness and monitoring gives us a fuller rationality. (Some might suggest that only when these conditions are satisfied do we have any rationality at all.)

Self-conscious awareness can monitor not just preferences and beliefs but also the processes by which these are formed, P and B themselves. It can alter and improve these processes; it can reshape them. Conscious awareness thus becomes a part of the mechanisms M1 and M2 and so comes to play a role in determining the functions of the preference- and belief-forming mechanisms themselves.

XXII. One component of the homeostatic mechanism M1 that maintains P is the person's consciously aiming at P's yielding rationally coherent preferences.

XXIII. One component of the homeostatic mechanism M2 that maintains B is the person's consciously aiming at B's yielding beliefs that satisfy cognitive goals.


I just want to note that I find the lack of discussion of Nozick's The Nature of Rationality on Less Wrong (at the time of posting, 60% of these were my comments) to be highly puzzling and slightly embarrassing.

the mere preference for being dead, for no reason at all, is irrational [because being alive is a prerequisite to rational decisionmaking]

I disagree. If you had no other preferences at all, then it would be rational, and I think Nozick would concede at least that specific scenario. I also think it might be possible to take this line of argument further. It's not possible to conceptually evaluate what it's like to NOT be conscious and to have some form of rationality and rational decision making going on, because our understanding of that necessarily is shaped by our own experiences and none of us can consciously remember what it's like to be unconscious, by definition. If you can't evaluate what being dead or being unconscious is like, you can't get a preference on it one way or the other, in and of itself.