Many consequentialists of my acquaintance appear to suffer from a tragic case of deontologist envy.

In consequentialism, one makes ethical decisions by choosing the actions that have the best consequences, whether that means maximizing your own happiness and flourishing (consequentialist ethical egoism), increasing pleasure and decreasing pain (hedonic utilitarianism), satisfying as many people's preferences as possible (preference utilitarianism), or increasing the number of pre-defined Good Things in the world (objective list consequentialism). Of course, it's impossible to figure out all the consequences of your actions in advance, so many people follow particular sets of rules which they believe maximize utility overall; this is sometimes called "rule consequentialism" or "rule utilitarianism."

In deontology, one makes ethical decisions by choosing the actions that follow some particular rule. For example, one might do only the actions one would will that everyone do, or actions that involve treating other people as ends rather than means, or actions that don't violate the rights of other beings, or actions that don't involve initiating aggression, or actions that are not sins according to the teachings of the Catholic Church. Deontologists are allowed to care about whether things are better or worse (some I know call this their "axiology"), but they can only care about that within the constraints of the rule system.

In spite of my sympathies for virtue ethics, I do think it is generally better to make decisions based on whether the outcomes are good as opposed to decisions based on whether they follow a particular set of rules or are the decisions a person with particular virtues would make. (I continue to find it weird that these are the Only Three Options For Decision-Making About Ethics, So Says Philosophy, but anyway.) So do most people I know.

I have some consequentialist beliefs about free speech. For instance, I support making fun of people who say sexist or racist things in public. I think it is fine to call someone a bigoted asshole if they are, in fact, saying bigoted asshole things. I appreciate Charles Murray's refusing to speak at an event Milo Yiannopoulos is at because Yiannopoulos is "a despicable asshole," and I wish more people would follow his example. And when I express my consequentialist beliefs about free speech, a surprising number of my consequentialist friends respond with "but what if your political opponents did that?"

I did not realize we are all Kantians now.

I think there are three things that people sometimes mean by "but what if everyone did that?" The first is simple empathy: if it hurts you to be shamed, then you should consider the possibility that it hurts other people to be shamed too, just as it hurts you. I agree that this is an important argument, and we could all stand to be a little more aware that people we disagree with are people with feelings. But even deontologists agree that sometimes it's necessary to hurt one person for the greater good: for example, even if you are very lonely and it hurts you not to get to talk to people, you don't get to force people to interact with you against their will. So I don't think that the mere fact that it hurts people implies that (say) public shaming should be off-limits.

The second is a rather touching faith in the ability of people's virtuous behavior to influence their political opponents.

Now, if it happened that my actions had any influence whatsoever over the behavior of r/TumblrInAction, that would be great. I don't screenshot random tumblr users and mock them in front of an audience of over three hundred thousand people, so if they followed my example the entire subreddit would close down, which would be a great benefit to humanity. While we're at it, there are many other ways people who read r/TumblrInAction could follow my illustrious example. For instance, they could be tolerant of teenagers with dumb political beliefs, remembering how stupid their own teenage political beliefs were. They could stop making fun of deitykin, otherwise known as "psychotic people with delusions of grandeur," because jesus fucking christ it is horrible to mock a mentally ill person for showing mental illness symptoms. They could stop with the "I identify as an attack helicopter" jokes; I mean, I don't have any ethical argument against those jokes, it's just that exactly one of them was ever funny.

In general people rarely have their behavior influenced by their political enemies. Trans people take pains to use the correct pronouns; people who are overly concerned about trans women in bathrooms still misgender them. Anti-racists avoid the use of slurs; a distressing number of people who believe in human biodiversity appear to be incapable of constructing a sentence without one. Social justice people are conscientious about trigger warnings; we are subjected to many tedious articles about how mentally ill people should be in therapy instead of burdening the rest of the world with our existence.

Therefore, I suspect that if supporters of social justice universally became conscientious about representing their opponents' views fairly, defaulting to kindness and using cruelty only as a last resort when it is necessary to reduce overall harm, and not getting people fired from their jobs, it would not have any effect on how often opponents of social justice represent their opponents' views fairly, behave kindly, and condemn campaigns to fire people. In fact, the opponents might end up doing the opposite more enthusiastically, because suddenly kindness and charity and not getting people fired are Social Justice Things, and you don't want to support Social Justice Things, do you?

(I'm making this argument with the social justice side as the good side, but it works equally well for literally any two sides in the relevant positions.)

Third, there's an argument I personally find very compelling. Nearly everyone who does wrong things, even evil things, thinks that they're on the side of good. Therefore, the fact that you think you're on the side of good doesn't mean you actually are. (The traditional example is Nazis, but I think Stalinism is probably better, because in my experience most people agree that your average rank-and-file Stalinist supported an ideology that killed millions of people because they had a good goal but were horribly mistaken about how to bring it about.) So it's important to take steps to reduce the harm of your actions in case you're actually doing evil.

Like I said, I find this argument compelling. But you can't get an entire ethical system out of trying to avoid being a Stalinist. Lots of generally neutral or even good things are evil if a Stalinist happens to be doing them, such as trying to convince people of your point of view or going to political rallies or donating to causes you think will do the most good in the world. If you were a Stalinist, the maximally good action you could take, short of not being a Stalinist anymore, would be sitting on the couch watching Star Trek reruns. This moral system has some virtues-- depressed people the world over can defend their actions by saying "well, actually, I'm one of the best people in the world by Not-Having-Even-The-Slightest-Chance-Of-Being-A-Stalinist-ianism"-- but I think it is unsatisfying for most people.

(I can tell someone is about to say "you can donate to the Against Malaria Foundation, there's no possible way that could be evil!" and honestly that just seems like a failure of imagination.)

That's not to say that trying to avoid being a Stalinist should have no effect on your ethical system at all. Perhaps most important is never, ever, ever engaging in deliberate self-deception. Of almost equal importance is not hiding inconvenient facts. If you know damn well the Holodomor is happening, do not write a bunch of articles denouncing everyone who says the Holodomor is happening as a reactionary who hates poor people. On a less dramatic level, if there's a study that doesn't say what you want it to say, mention it anyway; if you can massage the evidence into saying something that it doesn't really say, don't; take care to mention the downsides and upsides of proposed policies as best you can. These rules are the most important, because violating them directly harms the ability of truth to win out over falsehood.

And there are some things that I think it's worth putting on the list of things you shouldn't do even if you have a really really good reason, because it is far more likely that you are mistaken than that this is actually right this time. Violence against people who aren't being violent against others, outside of war (and no rules-lawyering about how being mean is violence, either). Being a dick to people who are really weird but not hurting anyone (and no rules-lawyering about indirect harm to the social fabric, either). Firing people for reasons unrelated to their ability to perform their jobs. I've added "not listening to your kid and respecting their point of view when they try to tell you something important about themselves, even if you disagree," but that's a personal thing related to my own crappy relationship with my parents.

But that's not a complete ethical system. At some point you have to do things. And that means, yes, that there's a possibility you will do something wrong. Maybe you will be a participant in an ongoing moral catastrophe; maybe you will make the situation worse in a way you wouldn't have if you sat on your ass and watched Netflix. On the other hand, if you don't do anything at all, you get to be the person sitting idly by while ongoing moral catastrophes happen, and those people don't exactly get a good reputation in the history textbooks either. ("The only thing necessary for the triumph of evil is for good men to do nothing," as the saying commonly attributed to Edmund Burke goes.)

The virtue of consequentialism is that it pays attention to consequences. It is consistent for me to say "feminist activism is good, because it has good consequences, and anti-feminist activism is bad, because it has bad consequences." (Similarly, it is consistent to say that you should lie to axe murderers and homophobic parents, but not to more prosocial individuals.) This is compatible with my believing that if I had a different set of facts I would probably be engaged in anti-gay activism, as in fact many loving, compassionate, and intelligent people of my acquaintance are or have been in the past. Moral luck exists; it is possible to do evil without meaning to. There would be worse consequences if everyone adopted the policy of never doing anything that might possibly be wrong.

There is a common criticism of consequentialism where people say "well if torture had good consequences then you'd support torture! CHECKMATE CONSEQUENTIALISTS." Of course, in the real world torture always has bad consequences, which is why consequentialists oppose it. If stabbing people in the gut didn't cause them pain or kill them, and in fact gave them sixteen orgasms and a chocolate cake, then stabbing people would be a good thing, but it is not irrelevant to consequentialism that stabbing does not do this.

Some people seem to want to be able to do consequentialism without ever making reference to a consequence. If you just find enough levels of meta and use the categorical imperative enough, then maybe you will be able to do consequentialism without all that scary "evidence" and "facts" stuff, and without the possibility that you could be mistaken. This seems like a perverse desire, and in my opinion is best dealt with by no longer envying deontology and instead just becoming a deontologist.

16 comments

To be honest, I'm not entirely sure that anyone is a consequentialist.

I do use consequentialism a lot, but almost always in combination with an intuitive sort of 'sanity check'-- I will try to assign values to different outcomes and try to maximize that value in the usual way, but I instinctively shrink from any answer that tends to involve things like "start a war" or "murder hundreds of people."

For example, consider a secret lottery where doctors quietly murder one out of every [n] thousand patients in order to harvest their organs and save more lives than they take. There are consequentialist arguments against this, such as the risk of discovery and the consequent erosion of trust in hospitals, but I don't reject this idea because I've assigned QALY values to each outcome. I reject it because a conspiracy of murder-doctors is bad.

On the one hand, it's easy to say that this is a moral failing on my part, and it might be that simple. Sainthood in deontological religious traditions looks like sitting in the desert for forty years; sainthood in consequentialist moral traditions probably looks more like Bond villainy. (The relative lack of real-world Bond villainy is part of what makes me suspect that there might be no consequentialists.)

But on the other hand, consequentialism is particularly prone to value misalignment. In order to systematize human preferences or human happiness, it requires a metric; in introducing a metric, it risks optimizing the metric itself over the actual preferences and happiness. So it seems important to have an ability to step back and ask, "am I morally insane?", commensurate with one's degree of confidence in the metric and method of consequentialism.

This sounds to me very strongly like a rejection of utilitarianism, not of consequentialism.

Presumably you don't have ontologically basic objections to a conspiracy of murder-doctors, because "conspiracy," "murder," and "doctor" are all not ontologically basic. And you aren't saying "this is wrong because murder is wrong" or "this is wrong because they are bad people for doing it." You're saying "this is wrong because it results in a bad world-state."

Consequentialism only requires a partial ordering of worlds, not a metric; and satisficing under uncertainty over a family of possible utility functions probably looks a lot more like ordinary good behavior than like Bond villainy.
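The satisficing idea can be sketched in a few lines. This is my own toy construction, not anything from the comment: the action names, the family of utility functions, and the scores are all invented for illustration. An action is acceptable only if every candidate utility function in the family rates it at or above some floor; nothing is ever maximized against a single metric.

```python
# Toy sketch of satisficing over a family of possible utility functions.
# All names and numbers below are illustrative assumptions, not real data.
actions = {
    "donate_to_charity":     {"hedonic": 8, "preference": 7,   "objective_list": 6},
    "murder_doctor_lottery": {"hedonic": 9, "preference": -10, "objective_list": -8},
    "watch_star_trek":       {"hedonic": 1, "preference": 1,   "objective_list": 0},
}

def satisfices(scores, floor=0):
    # Acceptable only if no utility function in the family rates it below the floor.
    return all(v >= floor for v in scores.values())

acceptable = [name for name, scores in actions.items() if satisfices(scores)]
print(acceptable)
```

Note that the murder-doctor lottery scores brilliantly on one metric but catastrophically on others, so it gets filtered out even if its average looks good; this rule only ever uses a partial order over actions, never a single number to maximize.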

I do agree that there are "no real-world utilitarians" in the sense of having certainty in a specific utility function, though, with Peter Singer being the possible exception and also looking kind of like a Bond villain.

> But on the other hand, consequentialism is particularly prone to value misalignment. In order to systematize human preferences or human happiness, it requires a metric; in introducing a metric, it risks optimizing the metric itself over the actual preferences and happiness.

Yes, in consequentialism you try to figure out what values you should have, and your attempts at doing better might lead you down the Moral Landscape rather than up toward a local maximum.

But what are the alternatives? In deontology you try to follow a bunch of rules in the hope that they will keep you where you are on the landscape, trying to halt progress. Is this really preferable?

> So it seems important to have an ability to step back and ask, "am I morally insane?", commensurate with one's degree of confidence in the metric and method of consequentialism.

It seems to me that any moral agent should have this ability.


I don't think the thing by Vox Day that you linked to is aimed at moderates; not, at least, by any definition of "moderate" that I recognize.

Tit-for-tat is not "the most rational strategy" in the iterated prisoner's dilemma -- there is no such thing -- though indeed it works pretty well in practice. However, "they did X to us, so we can do it to them" commonly turns into somewhat-more-than-one-tit-for-a-tat, which is unstable and leads to ever-escalating retribution.
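The instability is easy to see in a toy simulation. This is my own sketch, not from the comment: the strategy and function names are invented, and I track only moves ("C"/"D"), not payoffs. A single accidental defection echoes back and forth forever under plain tit-for-tat, while a slightly harsher rule ("two tits for a tat": retaliate for either of the opponent's last two moves) locks both players into permanent mutual defection.

```python
# Toy iterated prisoner's dilemma: one accidental defection by player A
# in round 0, then both players run the same strategy thereafter.
def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return opp_history[-1] if opp_history else "C"

def two_tits_for_tat(opp_history):
    # Defect if the opponent defected in either of the last two rounds.
    return "D" if "D" in opp_history[-2:] else "C"

def play(strategy, rounds=8, noise_round=0):
    a_hist, b_hist = [], []
    for r in range(rounds):
        a = strategy(b_hist)
        b = strategy(a_hist)
        if r == noise_round:  # A's one accidental defection
            a = "D"
        a_hist.append(a)
        b_hist.append(b)
    return a_hist, b_hist

for strategy in (tit_for_tat, two_tits_for_tat):
    a, b = play(strategy)
    print(strategy.__name__, "".join(a), "".join(b))
```

Tit-for-tat never escalates, but it also never forgives: the single defection ping-pongs between the players indefinitely. The more-than-one-tit rule turns the same single mistake into unending mutual defection, which is the escalation dynamic described above.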


Perhaps you would like to turn that into an actual criticism capable of being evaluated, rather than a sneer?

I take it your meaning is something like "Actually Vox Day's audience is representative of maybe half the US population, and therefore moderate by definition", but without a bit more specificity it's hard to respond usefully.

Could you perhaps give an example of a position that you consider only just not-moderate in Vox Day's direction?