Utilitarianism twice fails


It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else: no other object of concern is plausible. If governments concern themselves with the wants of noncitizens, that will be only because citizens desire their well-being. The now platitudinous insight that the only possible basis for government policy is people’s wants can be attributed to utilitarianism, which gets credit in its stronger form for the apparent success of weaker claims.

Another reasonable claim derives from utilitarianism: citizens’ wants should count equally. This seems only fair in a democracy, where one citizen gets one vote. Few today would deny the principle that public policy should serve the greatest good of the greatest number, which may seem to contradict my claim that no general moral principle governs public policy; but in practice, the consequences of this limited utilitarianism are thin indeed, leaving ample room for ideology. I’ll call this public-policy formula thin utilitarianism: the greatest good for the greatest number of citizens, weighting their welfare equally.

First, I’ll consider whether thin utilitarianism succeeds on its own terms by providing a practical guide to public policy. Second, I’ll examine how this deceptively appealing guide transmogrifies into the monster of full-blown utilitarianism, a form of moral realism. The first failure constrains even casual use of thin utilitarianism; the second impugns utilitarianism as a general ethical theory.

1. Non-negotiable conflicts between subagents undermine thin utilitarianism

Although simple economic models attributing conduct to rational self-interest require that agents assign consistent utilities to outcomes, agents are inconsistent. One example of inconsistent utility assignment is the endowment effect, where agents assign more value to property they own than to the same property when they don’t own it. The inconsistency considered here is stronger than the endowment effect and similar phenomena, which we can surmount with effort, as professional traders must. Despite the effect, there is a real answer to how much utility an outcome affords; the endowment effect is a bias, which willpower or habit can neutralize.

The conflict between subagents within a single person, on the other hand, can’t be resolved by means of a common criterion, such as market price, since the two subagents pursue different ends. Which subagent dominates depends on situational and personological factors that elicit one or the other, not on overcoming bias. Construal-level theory reveals a conflict between two intrapersonal subagents, near-mode and far-mode: integrated mindsets applied to matters experienced at fine or broad granularity. The modes (or “construal levels”) differ in that far-mode is more future-oriented and principled, near-mode more present-oriented and contextual. Far-mode and near-mode are elicited by the way social choices are made: voting elicits far-mode and market choices, near-mode; the utility of a choice depends on construal level.

Take a policy choice: how much wealth should be spent on preventive medicine? There are two basic ways of allocating resources to medical care, political process and the market: socialized medicine is an example of the first, private medicine of the second. Socialized medicine makes allocating funds for medical care a political decision; the market makes it each consumer’s personal choice. When you compare the utility of choices made by political process with those made on the market, you should expect to find that when people choose politically, they use the far-mode thinking encouraged by voting, whereas when they make purchases, they use the near-mode thinking encouraged by the market. Preventive-care expenditure will be higher under socialized medicine because political process elicits far-mode, which is concerned with future health. People will be more miserly with preventive care under private medicine, where the decision to spend is made by consumer choice in near-mode, which cares more about the present. People favor spending more on preventive care when they vote to tax themselves than when they buy it on the market. Which outcome provides the greater utility, more preventive care or more recreation, is relative to construal level.

The same indeterminacy of utility occurs when comparing decisions made under different political processes, such as local versus central. Local decisions will be near-mode, central decisions far-mode. Under socialized medicine, less funding would be available if medical spending were subject to state rather than federal control. Which provides more utility depends on whether the consequences are evaluated in near-mode or far-mode; no thin-utilitarian criterion applies.

Some utilitarians will protest that we should measure experiences rather than wants. The objection misses the argument’s point, which is that utility is relative to mode, a conclusion easiest to see in the public-choice process because the alternatives can be delimited. If the conclusion that utility depends on construal level holds, the same indeterminacies occur in evaluating experience. Moreover, when utilitarianism is applied to public policy, the criterion is present wants rather than experienced satisfaction; agents necessarily choose based on present wants, whether in the market or in the political process.

2. Full-blown utilitarianism stands convicted of moral realism

Full-blown utilitarians are necessarily moral realists, but increasingly they deny it. While moral realism is widely recognized as absurd, utilitarianism seems to some an attractive ethical philosophy. For the sake of intellectual respectability, utilitarians can appear to reject an anachronistic moral realism while practicing it philosophically.

Full-blown utilitarianism often obscures its differences from thin utilitarianism, which is a questionable doctrine but in accord with ordinary common sense. It emerges from thin utilitarianism by the misdirection of subjecting ethical premises to the test of simplicity, a test appropriate exclusively to realist theories, because simplicity serves truth. A classic illustration: Aristotle theorized that everything on earth that goes up comes down; Newton set out his theory of gravity, which applies to all objects, not just terrestrial ones, and which predicts that objects can escape the earth’s gravitational field by traveling fast enough. Scientists confidently bet on Newton well before rockets were invented, and their confidence was vastly increased by the simplicity of Newton’s theory, which made correct predictions concerning all objects. Although philosophers have explained variously the correlation between simplicity and truth, they generally agree that simplicity signals truth. Unless utilitarians can otherwise justify it, searching for a simple moral theory means searching for a true theory.

The full-blown utilitarian seeks a misplaced simplicity by insisting that all entities that can experience happiness (a much simpler criterion than “current citizens”) serve as the beneficiary reference group, including future generations of humans and even beasts, whose existence depends on policy; thin utilitarianism, by contrast, is a democratic convention, serving only the wants of currently existing citizens. Because they must incorporate future generations into the reference group, utilitarian philosophers have had to accept that a policy-dependent reference group poses a dilemma for interpreting full-blown utilitarianism, with unattractive consequences at both horns, which realize radically different ideals. In one version, you maximize the average utility obtained by the whole population; in the other, you sum the utilities. These interpretations seem almost equally unattractive: the averaging view says that one supremely happy human is better than a billion very happy ones; the adding approach implies that a hundred trillion miserable wretches are better than a billion happy people. To apply a utilitarian standard to scenarios so distant from thin utilitarianism, accepting their consequences because of simplicity’s demands, is to treat moral premises as truths and to practice moral realism, despite contrary self-description. Those agreeing that moral realism is impossible must reject full-blown utilitarianism.
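The two horns can be made concrete with arithmetic. A minimal sketch, using made-up utility numbers chosen only to illustrate the dilemma (the functions and figures are hypothetical, not drawn from any utilitarian author):

```python
def average_utility(population_size, utility_per_person):
    """Averaging view: the score is per-person utility, regardless of headcount."""
    return utility_per_person

def total_utility(population_size, utility_per_person):
    """Adding view: the score is utility summed over the whole population."""
    return population_size * utility_per_person

# Averaging horn: one supremely happy person (100) outranks
# a billion very happy people (90 each).
assert average_utility(1, 100) > average_utility(10**9, 90)

# Adding horn: a hundred trillion lives barely worth living (1 each)
# outrank a billion happy people (90 each).
assert total_utility(10**14, 1) > total_utility(10**9, 90)
```

The point of the sketch is only that the two formulas rank the same pair of scenarios oppositely, so the choice between them cannot be settled by the formulas themselves.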

17 comments

It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else

That is not at all self-evident.

While moral realism is widely recognized as absurd

While some people believe this, it is not widely recognized.

These phrases should trigger warnings.

Although philosophers have explained variously the correlation between simplicity and truth, they generally agree that simplicity signals truth. Unless utilitarians can otherwise justify it, searching for a simple moral theory means searching for a true theory.

This is the fallacy of the undistributed middle. You say, essentially,

  • All searches for truth are searches for simplicity
  • Utilitarianism is a search for simplicity
  • Therefore utilitarianism is a search for truth

While I understand that utilitarianism being a search for simplicity is evidence that it's a search for truth, that does not give you license to automatically assume the worst of a theory you dislike.

It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else: no other object of concern is plausible.

This is not at all "self-evident", unless you choose to interpret the sentence completely literally, which would render it nearly meaningless.

For example, the government of North Korea does indeed care about the wants of "some" of its citizens, where the number of such citizens is pretty close to 1.

I think North Korea is no problem for the quoted sentence. I interpret it as saying that the government doesn't care about the wants of non-citizens, rather than asserting that the government cares about a significant number of citizens.

Nevertheless, even assuming this interpretation it is still not self-evident.

Listen, srdiamond, aka dEMOCRATIC_cENTRALIST, aka commonlaw, aka [sockpuppet list who knows how long], this isn't "my community" or any of my business or anything, but it seems to me that nobody really {likes|finds thoughtful} your posts either here or on overcomingbias.

Why are you posting here? I used to think this was a parasitic strategy to drive views to your blog, but given how your stuff is received, this can't be a good strategy at all. Are you trolling? Is this some sort of elaborate farce or performance art?

For some context for you: while you held the first two paragraphs to be self-evident, they seemed to me wrong, or not even wrong, in every claim.

Does a government care? That's anthropomorphizing an organization of people. Further, the organization may produce results that none of the citizens desire (see 1984).

Another reasonable claim derives from utilitarianism: citizens’ wants should count equally.

The traditional American constitutional view is that no one's wants count for anything to the government. The government is there to protect your rights, not satisfy your wants.

Others are busy criticizing your interpretations of utilitarianism. I think your priors about the beliefs of others are mistaken again and again.

Further, the organization may produce results that none of the citizens desire (see 1984).

I don't think 1984 is a good example, since the government in 1984 served the desires of the Party members very well.

On the other hand, I would agree that a government can still end up producing results that none of the citizens actually desire as end goals, mostly due to perverse incentives and lost purposes.

On 1984, I don't think O'Brien displays any love for the Party. He's playing his part, because he has little choice, like everyone else.

I think that's the point. Everyone can get screwed by the wrong institutions that no one intends. There doesn't have to be an evil cabal for an evil result. One set of institutions can bring a benevolent invisible hand, and another can bring a boot stomping a human face forever, both in contradiction to the intent of the individual actors involved.

Dennett calls it competence without comprehension, though I've never seen him give Adam Smith any credit. The system does what it does, without knowing what it does.

The number of fallacies in this strawmanning of utilitarianism is only rivaled by the number of fallacies in your previous posts as metaphysicist about nonexistence of infinities.

Utilitarianism as used on LW typically refers to something closer to "It is in my personal nature to act as to maximize the average amount of [list of types of computation] minus [list of other types of computation] of entities that [qualification criteria]" where the contents of the brackets vary from person to person and are extremely complex, not "everyone objectively should maximize 'happiness' ".

Note that's "closer" not close; this is a one-sentence oversimplification.

The moral reasoning in utilitarianism is actually really convoluted. Like you say, not self-evident:

"Samuel Scheffler takes a different approach and amends the requirement that everyone be treated the same.[101] In particular, Scheffler suggests that there is an ‘agent-centered prerogative’ such that when the overall utility is being calculated it is permitted to count our own interests more heavily than the interests of others. Kagan suggests that such a procedure might be justified on the grounds that, “a general requirement to promote the good would lack the motivational underpinning necessary for genuine moral requirements” and, secondly, that personal independence is necessary for the existence of commitments and close personal relations and that “the value of such commitments yields a positive reason for preserving within moral theory at least some moral independence for the personal point of view.”[102]

Robert Goodin takes yet another approach and argues that the demandingness objection can be ‘blunted’ by treating utilitarianism as a guide to public policy rather than one of individual morality. He suggests that many of the problems arise under the traditional formulation because the conscientious utilitarian ends up having to make up for the failings of others and so contributing more than their fair share.[103]

Harsanyi argues that the objection overlooks the fact that “people attach considerable utility to freedom from unduly burdensome moral obligations… most people will prefer a society with a more relaxed moral code, and will feel that such a society will achieve a higher level of average utility—even if adoption of such a moral code should lead to some losses in economic and cultural accomplishments (so long as these losses remain within tolerable limits). This means that utilitarianism, if correctly interpreted, will yield a moral code with a standard of acceptable conduct very much below the level of highest moral perfection, leaving plenty of scope for supererogatory actions exceeding this minimum standard.”[104]"

Oh look, our Emmanuel Goldstein is back. Arrrrrr!

While this post is pretty bad, I don't think it's bad enough to be at -22 with 0% positive as of this writing. Thus, to denote that it has a small good part, I give it one upvote.

Specifically, the part I find to be pretty good is the observation that it may be impossible to accurately model a human as having one consistent set of preferences (as exemplified by the seeming plausibility of construal level theory), and that this potential impossibility can pose a challenge to utilitarianism as it is classically conceived.

This is an issue with the voting system; I think this deserves a -4 or -5 for muddled thinking. -22 is... excessive.

I'd like the ability to say that something deserves, say, a -5. And then if the current score deviates from that, my vote is applied to bring it closer to that goal number.
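The proposed voting rule can be sketched in a few lines. This is a hypothetical illustration of the comment's idea (the function name and behavior are my own rendering, not an existing feature of any karma system):

```python
def goal_vote(current_score, target_score):
    """Vote so as to pull the post's current score toward my target score.

    Returns +1 (upvote), -1 (downvote), or 0 (abstain when the score
    already matches my target).
    """
    if current_score < target_score:
        return +1  # score is below my target: upvote
    if current_score > target_score:
        return -1  # score is above my target: downvote
    return 0       # score equals my target: abstain

# A post sitting at -22 that I think deserves -5 gets my upvote.
assert goal_vote(-22, -5) == +1
```

Under this rule, votes act as a negative-feedback mechanism: once enough voters' targets cluster around a value, further votes push the score back toward that consensus rather than piling on.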

Bad posts often get a strong karma hit initially when the most vigilant readers check them and later return towards zero. It is possible (although not likely) that two months from now the post would stand at +2, your vote contributing to the positive score.

I feel this is sufficiently improbable that I'm willing to take the risk. That said, you raise a good point, and I'll make a note to check on this two months from now and see how it turned out (if it's -5 or higher, I'll consider my vote to be "wrong", if it's -6 or lower, I'll consider it to have been good).

I was supposed to check on this a long time ago but forgot/went inactive on LW. The post actually ended up at -26, slightly lower than it was, which is evidence against your regression-to-zero theory.