Review

Nothing really matters, Anyone can see,
Nothing really matters,

Queen on deontological norms

Crosspost of this on my blog

Thank you to Amos Wollen and James Reilly for helpful discussion of this topic.

Note: this is an argument for why deontology is not the right moral view. Deontology can still give us useful heuristics and be worth following even if it does not correctly describe what one should do.

Richard has a series of articles in which he presses the following charge: deontology requires caring about things that clearly don’t really matter. I think that not only is this charge correct, it is probably the most fundamental reason to reject deontology. There are lots of other reasons to reject deontology, but the basic intuition behind rejecting it is that morality should be about things that really matter, and deontology isn’t. The argument is as follows.

1. Our reasons to take actions are given by the things that really matter.

2. If deontology is true, our reasons to take actions are not given by the things that really matter.

Therefore, deontology is not true.
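
The argument is a simple modus tollens. For readers who like seeing the logical form spelled out, here is a minimal sketch in Lean; the proposition names are my own labels for the premises, not anything from Richard’s articles or the rest of this post.

```lean
-- A minimal sketch of the argument's logical form (modus tollens).
-- The proposition names are illustrative labels, not the author's.
-- P1: our reasons are given by the things that really matter.
-- P2: if deontology is true, our reasons are not given by the things that really matter.
-- C : deontology is not true.
example (ReasonsGivenByWhatMatters Deontology : Prop)
    (p1 : ReasonsGivenByWhatMatters)
    (p2 : Deontology → ¬ ReasonsGivenByWhatMatters) :
    ¬ Deontology :=
  fun deontologyTrue => p2 deontologyTrue p1
```

The form is uncontroversial; all the work lies in defending the two premises, which is what the rest of the post does.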

Note that here I use deontology in a very broad sense. It will include any view that says, for example, that one shouldn’t kill one to save five. If one believes there should be constraints on our pursuit of the good, that view will be, for present purposes, classified as deontology. Thus, this will include many forms of virtue ethics and particularism.

1 is true

 

1 just seems incredibly obvious. How could it be the case that things that are plainly irrelevant, that don’t really matter, determine whether one should take an act? The sentence “My decision to take the act was determined by something irrelevant” strikes us as an admission of error, not as a report of the fundamental moral facts.

Additionally, it seems that if we deny 1, we have the possibility of fairly radical moral skepticism. We get the possibility that whether we should take some act could be determined by whether it affects the number of hairs on the head of a tortoise, for example, despite that being clearly irrelevant. If morality need not be about what matters, then the things that could determine whether one should take an act could be bizarre things, unbound by the requirement of importance.

One might object that this charge applies to all moral views: any moral view leaves open the possibility of radical skepticism, since it’s always possible that the moral facts tell us to maximize helium or destroy all green objects. It is, of course, true that as a purely epistemic matter we could be radically mistaken about what morality requires. The odds are not zero that we should maximize helium, just as the odds are not zero for any proposition.

However, it is unclear on what basis someone who denies that morality is about what matters could confidently claim that morality does not make the worth of our decisions depend on totally arbitrary things. The natural reply to the person who thinks one should maximize helium is that helium does not really matter. For things to matter, they have to affect the interests of people, and so things that don’t affect anyone’s welfare cannot matter. But if we deny this core moral intuition, it’s unclear how we could be very confident that we don’t have strong reasons to care about totally arbitrary things.

The deontologist could propose a different way to avoid a skeptical scenario. They could suppose that we can have intuitions about the things that give us reasons. Thus, we might have the intuition that things like the amount of helium affected by our actions cannot give us reasons.

But the deontologist cannot consistently take this line. Deontologists generally do not claim that their principles seem initially like things that give us reasons—it does not, for example, seem prior to reflection that whether killing a person is permissible will hinge on whether, at the time of their death, the plan involving their death would be thwarted or aided by the discovery that they haven’t died (this is a rough articulation of part of the doctrine of double effect). Instead, the deontologist claims that, though this doesn’t seem like something that matters, we should infer that it gives us reasons because it best accounts for our intuitions about various cases—trolley problems and such.

But if this is true, then the deontologist is not relying on intuitions about what things give us reasons at the level of principles. They do not say that we should accept their principles because the principles themselves seem right; they say we should accept them because they imply particular judgments about cases, and those judgments seem right. But this opens up the possibility of the account of what one should do—according to the deontologists’ methodology—being something totally bizarre. For example, suppose that, as Joshua Greene has supposed, the best explanation of our deontological intuitions is that we intuit that it is wrong to kill someone by physically touching them. This is not, I believe, an adequate account of our deontological intuitions—but suppose that it, or something in the vicinity of it, was. The deontologist who abandons the claim that our moral reasons are about things that really matter would have to say that, even though this account makes morality hinge on plainly irrelevant factors like whether we’ve physically touched the victims of our killings, this does not disqualify it as an explanation of the true moral law, because our moral reasons are not given by what matters. Such a view is implausible, absurd.

It is perhaps possible that the deontologist can avoid this challenge in some way. However, the best response is not available to them—pointing out that morality cannot be about maximizing helium, because helium is not what really matters. And if morality is just a matter of curve-fitting our moral intuitions, with no requirement that the resulting explanatory model track what matters, then there is a high probability that the true moral law will hinge on clearly irrelevant things and seem deeply silly.

Another argument can be made for the conclusion that, for something to give us a moral reason, it must really matter. This argument has been made by Richard: if moral norms don’t really matter, then they begin to seem like irrelevant culinary norms. They begin to seem like the norms of divine command theory, which claims that the wrongness of an act is determined by whether god has forbidden it. If the moral norms aren’t about what matters, then why not be a benevolent amoralist—why care about wrongness if it is about things that are unimportant?

The deontologist could reply by pointing out that one objectively should do what is right and not do what is wrong. Thus, if the moral facts are as they describe, then the moral facts suffice to explain why one should not be a benevolent amoralist. However, if the moral norms make morality hinge on unimportant things, then it’s unclear how morality can have its force. I currently have a concept of morality according to which morality relates to what is important and should be followed—but if morality is divorced from importance, why not suppose that it is more like norms around sinfulness that don’t really pick out anything of significance? If moral norms don’t really matter, then why suppose that things that are wrong are worth avoiding, rather than ‘wrong’ just picking out some fictitious property that’s not worth caring about?

Huemer has pointed out that there are lots of thick moral concepts that we should disbelieve in. A thick moral concept is one that combines a descriptive claim—a claim about the way things are—with a normative one—a claim about the way things should be. For example, ‘brave’ is a thick moral concept: when we call someone brave, we are both expressing a positive normative judgment and making a descriptive claim (they are not fearful). We should not believe in the concept ‘morality’ as used by the bible—which refers exclusively to sexual sins. We should not think that there is a distinct type of wrong relating exclusively to taboo sex between two consenting adults. As Huemer notes, “Consider the phrase ‘a woman of loose morals’. This does not refer to a woman who causes needless pain, violates people’s rights, or fails to donate to famine relief.” If a deontologist’s morality is not about what really matters, we should think of it the same way we think of ‘loose morals’ in the phrase ‘a woman of loose morals’—it’s just not worth caring about, regardless of whether it is semantically correct to use the term. Even if the descriptive part of the phrase is accurate—namely, the person has lots of sex in ways that the bible would condemn—we should think that this property is not worth caring about, and the associated normative judgment is worth rejecting.

2 is true too!

 

Indisputably, the lives and welfare of people matter. Utilitarianism says that these things matter, other things don’t, and so we should try to maximize welfare—the thing that matters. But deontology fetishizes rules, treating not violating them as more important than saving more lives. This can be seen by, for example, deontologists claiming that one should not kill one to prevent several deaths—they believe that adherence to the rule is more important than saving extra lives.

For example, suppose that you find out that your uncle was run over by a train. You ask, “Was he treated as a mere means?” This would seem to be the wrong question. Even if being treated as a mere means is one thing that matters a bit, it cannot possibly matter more than the net four lives saved when one is killed to save five—four whole people with lives, loves, families, aspirations, goals, joys, friends, and more. For something to matter—well and truly matter—it must affect how well things go; but if the only things that should affect one’s decisions are how well things go, then one is necessarily a consequentialist. Consequentialism, the opposite of deontology, just says that one should take the actions that make things go best, so if the only factors that affect what one should do concern the goodness of the world, then one must reject deontology and accept consequentialism.

Utilitarianism is often characterized as unfeeling, but it is the opposite. Killing one to save five in the organ harvesting case isn’t done because one has an abstract fetishization of bigger numbers—it’s done because the four additional lives saved matter more than adherence to abstract rules. When rules result in four more deaths than there needed to have been, those rules should be jettisoned.

Utilitarians endorse the Pareto principle because making people’s lives better is more important than not violating arbitrary norms. If some rule results in people’s lives being worse than they needed to have been, then one should not follow the rule. People’s lives are more important than that.

Indeed, I’d go further than Richard—I think this gets us to impartiality. Impartiality says that whether one should benefit someone does not depend on how close the two of them are—for example, if all else were really held equal, I should benefit a stranger by some amount if the alternative was benefiting my mother by a lesser amount.

Whether someone is a member of your family isn’t fundamentally important—your family members are no more important than mine. Given this, partialism requires caring about things other than what is important. Of course, we may—and do—still have strong practical reasons to care about our family and close friends, for this is a good way to benefit the world. Whether a person is necessarily existing or independently existing—or when they exist—does not really matter. What matters is who we can help and how much.

The demandingness objection says that consequentialism is too demanding—if you’re always doing what’s best, then you’ll have to give up vacations just to donate lots of extra money to charities that can save lives. (On a side note, you really should donate to GiveWell charities; it only costs a few thousand dollars to save a life.) But here I think there is a perfectly good reply: yes, it’s unfortunate that you have to give up vacations, but other people’s lives are more important than that.

I think that these considerations can get us to impartialist consequentialism. For something to be important, it must make the world a better place. If something really matters, if its existence really is good, then the world is better because it exists. And consequentialism cares precisely about those things that make the world better, those things that are important, rather than about adherence to arbitrary norms.

One might object by claiming that deontology does care about things that are important. Whether a person is treated as a mere means, for example, relates to whether their rights are violated, and whether their rights are violated is important. However, there are a few ways to see that this response doesn’t work.

First, the argument I provide in this article is an argument for consequentialism, rather than an argument for utilitarianism in particular. Consequentialism says that one should take the action with the best consequences; utilitarianism goes further by saying that the way to determine the best consequences is to add up how well everyone’s life is going—utilitarianism thus leaves no room for caring about whether people are treated as mere means if that doesn’t make their lives go worse. So if one believes that rights matter, one could simply say that when rights are violated, that is a bad consequence. Indeed, it seems true almost by definition that for something to matter, it has to affect how good the world is—bad things make the world worse.

But consequentialism of rights is not deontology. Consequentialism of rights is a type of consequentialism that just says that one of the consequences that matters is how many people’s rights are violated, while deontology claims that there are constraints on actions—for example, that one shouldn’t kill one person even to stop multiple killings. Consequentialism of rights implies, among other things, that one should kill one person to prevent five other killings. Additionally, it cannot maintain many of the judgments that deontologists would like to make. While perhaps whether one’s rights are violated in an act that causes their death is somewhat important, it cannot plausibly be claimed to be more important than saving four extra lives—thus, consequentialism of rights still produces the judgment that one should kill a person and harvest their organs to save five.

Second, if something is actually important, it must be worth caring about. But many of the deontological distinctions that determine whether one should take actions that ultimately result in four fewer dead people hinge on things that are plainly unimportant. Perhaps one should have a weak preference for someone being killed by the redirection of a trolley rather than by being pushed off a footbridge, but this preference should be weak—certainly far weaker than one’s preference that four extra people not die. Indeed, it seems very plausible that the only reason one should care about whether a person is pushed off the footbridge is that it is bad when wrongdoing is done. But it cannot be both that pushing the person is wrong because the resulting state of affairs is important not to bring about, and that the state of affairs is important not to bring about because the act is wrong; that is viciously circular. Thus, merely pointing out that it’s important for bad acts not to occur does not vindicate the original judgment, for one needs an account of why the acts are bad, and the badness and the importance cannot each be justified by the other.

One might claim that there are two senses in which things are important: deontic and axiological. Something is axiologically important if it affects the goodness of a state of affairs—for example, it is important that babies not get killed by tornadoes. Something is deontically important if it affects what act one should take—for example, the fact that my action would cause suffering is deontically important because it counts against taking the act. Thus, the deontologist may claim that, while whether a person is used as a mere means is not axiologically important—it doesn’t affect the value of the world—it is deontically important, and thus deontology is saved.

However, while it is certainly true that one can draw distinctions within the word ‘important’, this just seems to be a semantic trick—deontic importance doesn’t match what we actually mean by importance, and what we mean by importance is what generates the intuition that the things that give us moral reasons must be important. Imagine the following sentence: “It is very important that this doesn’t happen, but the world would be no worse if it did.” This seems both to misunderstand what importance means and to lack the type of importance that generates moral reasons. If something doesn’t make the world worse at all, if third parties have no reason to care about it—as is true of whether one is used as a mere means—then it can’t be a genuine source of moral reasons. This is true even if we expand the category of importance to include deontic importance.

Ultimately, it seems that deontology, and non-consequentialism more broadly, hinges our moral decisions on things that really don’t matter. Whether a person’s body stops a train such that they’re used as a mere means wouldn’t matter to them or their family—and it shouldn’t matter to morality. The response to deontologists who complain that consequentialists don’t care about not violating rights should be that saving lives is just more important than that. It matters more that I save five lives than that I don’t dirty my hands.

8 comments

At the root of any moral system is an unjustified intuition of what matters. It’s ridiculous to object to deontology on the basis that it’s not consequentialist.

It’s ridiculous to object to deontology on the basis that it’s not consequentialist.

That's a class of arguments; it shouldn't be discarded on the basis of something that's not the content of those arguments.

That's not the basis that I objected to it on. 

I was confused at first what you meant by "1 is true" because when you copied the post from your blog you didn't copy the numbering of the claims. You should probably fix that.

See Ends Don't Justify Means (Among Humans) for the standard consequentialist rejection of this view.

That's consistent with my argument here. There may be good reasons to act as a deontologist even if it gives a wrong account of what matters.

The second highest rated comment to that post is:

The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason. It wouldn't spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.

This is critical to your point. But you haven't established this at all. You made one post with a just-so story about males in tribes perceiving those above them as corrupt, and then assumed, with no logical justification that I can recall, that this meant that those above them actually are corrupt. You haven't defined what corrupt means, either.

 

I think you need to sit down and spell out what 'corrupt' means, and then Think Really Hard about whether those in power actually are more corrupt than those not in power; and if so, whether the mechanisms that lead to that result are a result of the peculiar evolutionary history of humans, or of general game-theoretic / evolutionary mechanisms that would apply equally to competing AIs.

You might argue that if you have one Sysop AI, it isn't subject to evolutionary forces. This may be true. But if that's what you're counting on, it's very important for you to make that explicit. I think that, as your post stands, you may be attributing qualities to Friendly AIs, that apply only to Solitary Friendly AIs that are in complete control of the world.

Which is, admittedly, a common paradigm for online writing: making elaborate claims hinge on obfuscated logical errors, whether intentionally or unintentionally.

But it doesn't help when they are cited in the future as credible sources.