Utilitarianism and others

Folk physics evolved as a way of predicting what will happen to objects. It works fine within its domain, but fails beyond that. To give more precise answers, we developed Newtonian physics. But even Newtonian physics has limits. It breaks down at very large or small scales.

Morality evolved as a system for getting on well with our neighbours. Our folk theories of morality work fine within their domain, but fail beyond that. To deal with higher-level problems, like policy questions facing whole societies, we developed the theory of utilitarianism. But even utilitarianism has its limits. Faced with questions of the infinite future, it swiftly devolves into fun with maths.

Utilitarianism treats all human concerns as preferences, and turns them into numbers. That is a helpful tool for trading off competing concerns. But don’t mistake the map for the territory.
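
To make the reduction concrete, here is a toy sketch in Python (my own illustration, not anything from the essay): every concern becomes a number, and choosing well becomes picking the option with the largest sum. The options and utility figures are invented for the example.

```python
# A toy illustration (invented for this note, not the author's method):
# utilitarian choice reduces every concern to a number, then picks the
# option whose utilities sum highest.

def best_option(options: dict[str, list[float]]) -> str:
    """Return the option with the greatest total utility."""
    return max(options, key=lambda name: sum(options[name]))

options = {
    "fund malaria nets":  [0.9, 0.8, 0.7],  # made-up utility per person affected
    "summer by the pool": [1.0],            # made-up utility for you alone
}
print(best_option(options))  # -> fund malaria nets
```

Everything the map leaves out, of course, is the essay's point: whatever a concern was before it became a number is gone by the time the sum is taken.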

The Effective Altruism community worries about paperclip maximisers: artificial intelligences which serve their single goal obsessively. You tell them to produce paperclips; they recycle humans into paperclips. The only way to dissuade them is to appeal to their inhuman goals: “spare me and I’ll slave in your paperclip factory!”

Utilitarians reduce all concerns to maximising utility. They can’t be swayed by argument, except about how to maximise utility. This makes them a bit like paperclip maximisers themselves.

Utilitarianism is originally an outgrowth of Christianity.[1] Like Christianity, it imposes extreme moral demands on people. Take up your cross and follow me. Sell all your possessions and give them to the poor. But utilitarianism is infinitely more scientific — and infinitely more one-sided. Jesus thought even the rich might be forgiven. For utilitarianism, forgiveness doesn’t come into it. Either someone is maximising utility, or they’re in the way.

Morality evolved as a system for getting on well with our neighbours. It’s lucky that the Axial religions came along and pushed it to be more than this — since your neighbours may be, say, fellow Athenian slaveholders. But there are losses as well as gains. One possible loss, in replacing the man who has to get on with his neighbours with the lone actor responsible before God for the whole world, is humility.

G.K. Chesterton is famously credited with saying that when people stop believing in God, they don’t believe in nothing — they believe in anything. This is not just a witticism about astrology. The moral frameworks of traditional religions evolved over centuries. They are complex and subtle. They contain contradictions. That makes them rich enough to cope with human life, which is contradictory. More recent alternatives have not had their corners worn down.

One reason to listen to others is that you may be wrong. Rationalists have an appealing sensitivity to this, and their practices, like steelmanning, are (mostly) admirable. Another reason is just that other people’s concerns, right or wrong, deserve listening to. On some views this is not a means to an end, but the essence of the moral situation.

Utilitarianism and you

The scientific character of utilitarianism comes at a price. You must swallow one large ethical frog: you have a duty to maximise the sum of utility in the world. Why is this plausible? It’s questionable whether you even have the right. (You barge past me, about my lawful business, on your mission of mercy. “Out of the way! Your utility has already been included in my decision calculus!” Really? Can I see your working?)

In any case, where does this duty come from? As teenagers point out, nobody asks to be born. What’s it to you if I spend my summers by the pool? What if I abandon everything for a life of colonialist Tahitian debauchery? More might come out of it than from all your earnest strivings.

John Stuart Mill felt this problem acutely, and it drove him into a serious depression.

It was in the autumn of 1826. I was in a dull state of nerves, such as everybody is occasionally liable to; unsusceptible to enjoyment or pleasurable excitement; one of those moods when what is pleasure at other times, becomes insipid or indifferent…. In this frame of mind it occurred to me to put the question directly to myself: “Suppose that all your objects in life were realized; that all the changes in institutions and opinions which you are looking forward to, could be completely effected at this very instant: would this be a great joy and happiness to you?” And an irrepressible self-consciousness distinctly answered, “No!” At this my heart sank within me: the whole foundation on which my life was constructed fell down….

All those to whom I looked up, were of opinion that the pleasure of sympathy with human beings, and the feelings which made the good of others, and especially of mankind on a large scale, the object of existence, were the greatest and surest sources of happiness. Of the truth of this I was convinced, but to know that a feeling would make me happy if I had it, did not give me the feeling.

Incentives

EA focuses on two kinds of moral issue. The first is effective action in the here and now — maximising the bang for your charitable buck. The second is the very long run: controlling artificial general intelligence (AGI), or colonising other planets so that humanity doesn’t keep all its eggs in one basket.

Contributing to the first topic requires discipline. You need to learn about the mechanics of COVID or malaria or education or planning policy. It will help to understand experimental design and statistics. A PhD may be in order. Effective altruists are also not the only people working on these problems. These fields are dug already.

The second topic is much more fun. Nobody knows about the far future, so anyone can speculate. It’s exciting to range over the millennia in your imagination. Plus, it’s a great chance to write short stories. These concerns are also more specific to the EA community, which makes them a clear badge of identity.

Perhaps it is not surprising that, whatever the ratio of actual work by effective altruists on these two sets of problems, the second set is much more visible. After a visit to LessWrong, someone will probably associate EA more with preventing bad AIs than with expanding access to clean drinking water.

Many people have deep fears about AI. I don’t dismiss those fears, but I have never bothered to think much about them. Clever people are working on the problem already. I trust them. My marginal contribution would be small.

Localness

Einstein scooped Hilbert by a few days at most in producing general relativity. In that sense, the contribution of this great genius was to give us general relativity a few days earlier. There are many people in the world with the same skills, interests and brainpower as you. That’s lucky! If capitalism, politics or science depended on rare geniuses, they would not be reliable systems. By contrast, to your parents, children and friends, you are irreplaceable.

Steve Jobs created the iPhone, but neglected his child. I think the iPhone would have come along anyway. (Perhaps it would not have been as good: only as good as Android, for instance.) Firms have incentives to find substitute products. There are very few incentives to be a substitute father.

Your most important net contribution to human happiness today is likely to be calling your Mum. In Silicon Valley jargon, calling your Mum is the thing that does not scale. Here’s a more general claim: the more local the issue, the less substitutable people are. Many people are working on the great needs of the world. Relatively few are going to step up and organise the Department’s charity raffle. Of course, the great needs of the world are more important per se.

Grandiose goals are especially common in the neighbourhood of Palo Alto. That is an intellectual by-product of the revolution in computing, which opened up a space for a new generation of planet-scale firms. Revolutions fade. There is still a lot of tech-driven change waiting to happen, but much of it may be on a smaller, more ordinary and local level. (On the analogy of the military theorists’ “strategic corporal”, consider the idea of the strategic coffee shop.)

Many young people want to change the world, which is good. “The first duty of a young man is to be ambitious.” Without their many ambitions, we wouldn’t get the few people who succeed. Just by arithmetic, only a few will. In spring, plan for winter. What will motivate you if you don’t change the world? Can you be satisfied doing a little? Cultivating the mental independence to work without appreciation, and the willpower to keep pedalling your bike, might be a valuable investment. Ambition is like an oxidiser. It gets things going, and can create loud explosions, but without another source of fuel, it burns out. It also helps to know what you actually want. “To maximise the welfare of all future generations” may not be the true answer.

  1. This is a very broad shorthand. The long version would probably be along the lines of Christianity creating natural law, and the concept of utility growing up within eighteenth-century natural-law theories, before Bentham cut it free.

Comments

As a non-Utilitarian, I find myself laughing at the comparison to Newton’s and Einstein’s work, and at the just-so story of developing the theory in order to deal with policy problems in societies (IMO it’s the reverse: there are a bunch of moral theories, and societies find the ones that justify their direction, and then mostly ignore the parts they don’t like).

Physics uses a TON of measurements and experiments (both natural via differential observation and controlled via intervention) to determine that the consistent math is also correlated with observations. Moral theories do part of this: they test against intuitions and extrapolations to find errors in the theory. But they don’t have the strong tie to observation, and don’t take failures or counter-observations as a need to improve the theory. This is probably because PEOPLE are inconsistent, and there is no formalization that applies to all of us.

That said, I like the post after the intro, and I fully agree that calling your mum (or otherwise acting locally) is among the more satisfying things you can spend effort on. I don’t think I’ve seen the justification that spending 10 minutes making one person happier for a short time is actually better than spending those 10 minutes separating your recycling types or modeling some aspect of AI risk.

So the argument is based on substitutability: if you don’t do (global good thing X), and if it is truly important, someone else probably will.