(cross-posted on the EA Forum)

A consequentialist, a deontologist, and a virtue ethicist are walking through a forest discussing the fundamental nature of morality. So heated is their disagreement that they startle a nearby small child, who falls into a nearby small pond. At once, they all cease their arguing and leap in to save the child. But why?

Recap: The Big Three Schools

The Consequentialist thinks ethics is about consequences. She thinks that a drowned child is a bad consequence, and a saved child is a good one. She thinks you should save a drowning child because the end result will be a child that hasn't drowned.

The Deontologist thinks ethics is about rules. He thinks that one of those rules is "whenever you get the chance to save someone's life for very little cost, take it". He thinks you should save a drowning child because otherwise you're breaking a moral rule.

The Virtue Ethicist thinks ethics is about virtues. They think that saving a drowning child is courageous and kind. They think that you should save a drowning child because you should be the kind of person who saves a drowning child.

This doesn't do justice to the range and nuance of any of these positions, but it will do for now. What matters isn't the specifics of each theory but rather that

  • all three have fundamentally different ideas about what ethics is about (and these aren't even all the ideas)
  • they still all save the child

Although they disagree about some very fundamental questions, they seem to broadly agree on a lot of actions. Is there anything they can learn from each other?

(For the sake of brevity I'm going to focus specifically on the interplay between Consequence and Virtue.)

The Consequence Of Virtue

Why might a Consequentialist care about Virtue? Well, first of all, our Consequentialist would like to give a full account of ethics, and it seems to her that "what kind of person should I be?" is a question of ethics. Even if the answer is basically 'the kind of person who produces good consequences', it would be good to be able to say what kind of person that is.

Second, whether or not she believes that virtue is what ethics is about, she can't help but notice that helping people become wiser, kinder, more courageous, and more prudent, tends to have good consequences.

Let's make a distinction: Virtue Ethics is about virtue as a foundation for ethics, and Virtue Theory is about providing a theory of what virtue is, how it operates, and how we can interact with it, without necessarily making it the foundation of ethics. Our Consequentialist must reject Virtue Ethics, but she might still be curious about Virtue Theory.

Suppose that our Consequentialist notices, in her day-to-day life, that people are very bad at quickly and accurately evaluating the range of possible consequences their actions might have, but very good at quickly spotting which responses to a situation would be temperate, charitable, or just, and that virtuous responses very consistently produce good consequences. What then?

It would be reasonable for the Consequentialist to conclude that actually

  • even if ethics is basically about consequence
  • and even if big decisions without serious time constraints should definitely be made by evaluating consequences
  • and even if 'virtue' should really only be understood as a theory of consequence, and our understanding of it massively informed by consequence
  • nonetheless, most people should be encouraged to make their day-to-day decisions by considering which course of action would be virtuous

The Virtue Of Consequence

Why might a Virtue Ethicist care about Consequence? Well, it's basically all the same reasons in reverse. Even if our Virtue Ethicist doesn't think that good consequences are what ethics is fundamentally about, they can still appreciate that consequences matter. Being a Virtue Ethicist doesn't mean you're neutral on child drowning.

For them, the issue is not that the world is the kind of place where children drown; it is that the people in it are not the kinds of people who would save a drowning child. But it's still an issue! And to make sense of notions like 'prudence' and 'wisdom', it would be good to understand which states of affairs a virtuous agent should be trying to bring about.

If our Consequentialist can provide good models for how to bring about different ends, like 'no more child drowning', any Virtue Ethicist would want to see those models. What's distinctive about Consequentialism is that it puts consequence at the root of moral calculations, but that doesn't mean other moral theories can't make use of consequence in their moral calculations.

Ethics vs Theory

Building on the above, I want to draw a general distinction between 'Ethics' and 'Theory'. By 'Ethics' I mean the root of goodness, badness, obligation, prohibition, and everything else. By 'Theory' I mean the systems you use for operating on that Ethics. And although we conventionally put the Virtue Theory with the Virtue Ethics, we needn't.

Consider this: For any Consequentialist with a convincing way of measuring goodness that they're trying to maximise, there could equally be a Deontologist whose one and only principle is 'maximise that measure of goodness'.

They would have exactly the same answers to every ethical conundrum, except to the question 'why is this a good thing to do?', to which the Consequentialist would reply 'because this gets me the highest measure of goodness', and the Deontologist would reply 'because it is a moral law that I should get the highest measure of goodness'.

(Note that these people really would have quite different outlooks on the world, and meaningful disagreements. But more on that in due course.)

Unbundling 'Consequentialism'

As you can hopefully now see, what is often loosely described as 'consequentialism' in fact encompasses three large and distinct questions:

  • Are moral obligation and ethics fundamentally about better and worse consequences?
  • How, in actuality, do different consequences vary in goodness?
  • How can we, as boundedly rational humans, act to bring about better consequences?

It seems to me that 'being a consequentialist' is primarily about the first of these, and 'being an effective altruist' is primarily about the third.

Obviously these questions are connected, but they might have very different answers. It's not impossible that ethics is fundamentally about consequences, but that the best way to bring about good consequences is to wholeheartedly believe that ethics is fundamentally about virtue. Now, I'm not sure if I believe that, but I do believe in the possibility.

The point I'm trying to make is that it's worth taking some time to disentangle these questions, especially if you consider yourself to be a consequentialist.

Laying Dominoes

If you've made it this far, I think it's only fair to you that I now lay my cards on the table. I have issues with utilitarianism: I think it can be harmful. I have a tentative suspicion that some of the current problems Effective Altruism is facing (most notably, burnout) are in fact downstream of problematic ethical foundations. And yes, I know not all EAs consider themselves utilitarian, but I also think dividing people up into ethical camps is trickier than it seems.

I intend to make that argument in full, and submit it to the appropriate degree of scrutiny, but for now I just want to note a few claims I intend to come back to:

  • You can measure impact and prioritise causes without being a consequentialist
  • You can even believe that we have a moral obligation to bring about some consequences and avoid others without believing that some consequences are better than others
  • Some formulations of consequentialism imply that you should not be a consequentialist
Comments

At once, they all cease their arguing and leap in to save the child. But why?

Because all three of their System 1s executed adaptations that made a decision to save the child. They all have these adaptations because System 1s without this kind of adaptation would have lower fitness, because other members of the tribe would see them as untrustworthy potential allies and disloyal potential mates.

Their System 2s then make three different post-hoc rationalizations for their decision, and pretend like these rationalizations are the reasons that led them to jump in the pond.

Ok I think this is partly fair, but also clearly our moral standards are informed by our society, and in no small part those standards emerge from discussions about what we collectively would like those standards to be, and not just a genetically hardwired disloyalty sensor.

Put another way: yes, in pressured environments we act on instinct, but those instincts don't exist in a vacuum, and the societal project of working out what they ought to be is quite important and pretty hard, precisely because in the moment where you need to refer to it, you will be acting on System 1.

Clearly our moral standards are informed by our society, and in no small part those standards emerge from discussions about what we collectively would like those standards to be, and not just a genetically hardwired disloyalty sensor.

Yes, these discussions set / update group norms. Perceived defection from group norms triggers the genetically hardwired disloyalty sensor.

In pressured environments we act on instinct, but those instincts don't exist in a vacuum

Right, System 1 contains adaptations optimized to signal adherence to group norms.

the societal project of working out what [people's instincts] ought to be is quite important and pretty hard

The societal project of working out what norms other people should adhere to is known as "politics", and lots of people would agree that it's important.

Well, I basically agree with everything you just said. I think we have quite different opinions about what politics is, though, and what it's for. But perhaps this isn't the best place to resolve those differences.

Although they disagree about some very fundamental questions, they seem to broadly agree on a lot of actions.

I think this is mixing up cause and effect.

People instinctively find certain things moral. One of them is saving drowning children.

Ethical theories are our attempts to try to find order in our moral impulses. Of course they all save the drowning child, because any that didn't wouldn't describe how humans actually behave in practice, and so wouldn't be good ethical theories.

It's similar to someone being surprised that Newton's theories predict results that are so similar to Einstein's even though Newton's were wrong. But Newton would never have suggested his theories if they didn't accurately predict the Einsteinian world we actually live in.

I'm not sure I'm entirely persuaded. Are you saying that the goal of ethics is to accurately predict what people's moral impulse will be in arbitrary situations?

I think moral impulses have changed with the times, and it's notable that some people (Bentham, for example) managed to think hard about ethics and arrive at conclusions which massively anticipated later shifts in moral values.

Like, Newton's theories give you a good way to predict what you'll see when you throw a ball in the air, but it feels incorrect to me to say that Newton's goal was to find order in our sensory experience of ball throwing. Do you think that there are in fact ordered moral laws that we're subject to, which our impulses respond to, and which we're trying to hone in on?

I'm not saying that's the explicit goal. I'm saying that in practice, if someone suggests a moral theory which doesn't reflect how humans actually feel about most actions, nobody is going to accept it.

The underlying human drive behind moral theories is to find order in our moral impulses, even if that's not the system's goal.

Newton's theories give you a good way to predict what you'll see when you throw a ball in the air, but it feels incorrect to me to say that Newton's goal was to find order in our sensory experience of ball throwing.

I like this framing! The entire point of having a theory is to predict experimental data, and the only way I can collect data is through my senses.

Do you think that there are in fact ordered moral laws that we're subject to, which our impulses respond to, and which we're trying to hone in on?

You could construct predictive models of people's moral impulses. I wouldn't call these models laws, though.

The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works.

- Eliezer Yudkowsky

A consequentialist, a deontologist, and a virtue ethicist are walking through a forest discussing the fundamental nature of morality. So heated is their disagreement that they startle a nearby small child, who falls into a nearby small pond. At once, they all cease their arguing and leap in to save the child. But why?

Brilliant introduction, loved it.

But do we all have the same System 1 instincts?


A consequentialist, a deontologist, and a virtue ethicist are walking back into town after a hunting trip (and all armed to the teeth) discussing the fundamental nature of morality. They come across a group of angry citizens about to lynch a man for the rape and murder of a young girl. The man pleads that he didn't mean to kill her...

Having made the above comment, I find myself struggling with my own approaches to it. Suppose in addition that the jurisdiction where these events take place has abolished the death penalty. A deontologist would want (at a System 2 level) to stop the lynch mob. Stopping a lynch mob would also appeal to a Virtue Ethicist. So what would a Consequentialist do? And what is the System 1 response of the person or persons who started the lynch mob? I feel that in the long term, a consequentialist would say the rule of law is Good. In the short term, it is easy to say the law is an ass and let's have proper justice (I am pretty sure that would be my System 1 response if I were the girl's father), despite being intellectually opposed to the death penalty.

Saving a drowning child is no test for ethical theories.