It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics. The version that seems most popular goes something like this:

  • Everybody has a preference function assigning real values (utilons) to states of reality
  • The preference function is a given and shouldn't be manipulated
  • People try to act to maximize their number of utilons; that's how we find out about their preference functions
  • People are happier when they get more utilons
  • We should give everybody as many utilons as we can

There are a few obvious problems here that I won't bother with today:

  • Any positive affine transformation of a preference function leaves what is essentially the same preference function, but the choice of scale matters when we try to aggregate across people. If we multiply one person's preference function values by 3^^^3, they get to decide everything in every utilitarian scenario
  • The problem of total vs. average number of utilons
  • People don't really act consistently with the "maximizing expected number of utilons" model
  • Time discounting is a horrible mess, especially since we discount hyperbolically and are therefore time-inconsistent by definition
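The aggregation problem in the first bullet can be shown with a toy example (the names, numbers, and the `social_choice` function are made up for illustration; this is a sketch, not anyone's actual proposal):

```python
# Toy illustration: aggregating utilities is not invariant to a
# per-person affine rescaling, even though each individual's
# preference ordering is unchanged by it.

alice = {"beach": 1.0, "mountains": 2.0}   # prefers mountains
bob   = {"beach": 5.0, "mountains": 1.0}   # prefers beach, strongly

def social_choice(*utility_fns):
    """Pick the option with the highest summed utility."""
    options = utility_fns[0].keys()
    return max(options, key=lambda o: sum(u[o] for u in utility_fns))

print(social_choice(alice, bob))  # beach: Bob's larger spread dominates

# Scale Alice's utilities by a large constant. Her *preferences*
# (the ordering of options) are identical, but she now decides
# everything in the aggregate.
alice_scaled = {k: 1_000_000 * v for k, v in alice.items()}
print(social_choice(alice_scaled, bob))  # mountains
```

Nothing in Alice's own choices distinguishes her original function from the scaled one, which is exactly why interpersonal aggregation needs some extra normalization assumption.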

But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly. The correlation might well be positive, but it's very weak. Giving people what they want is just not going to make them happy, and not giving them what they want is not going to make them unhappy. This makes perfect evolutionary sense - an organism that's content with what it has will fail in competition with one that always wants more, no matter how much it has. And an organism so depressed it just gives up will fail in competition with one that tries to function as best it can in its shabby circumstances. We all had extremely successful and extremely unsuccessful cases among our ancestors, and the only reason they're on our family tree is that they went for just a bit more, or, respectively, for whatever little they could get.

The modern economy is just wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors - and it doesn't really leave people much happier. It seems to me that the only realistic way to significantly increase global happiness is directly hacking the happiness function in the brain - making people happy with what they have. If there's a limit in our brains, some number of utilons at which we stay happy, it's there only because that level was almost never reached in our evolutionary history.

There might be some drugs, or activities, or memes that increase happiness without dealing with utilons. Shouldn't we be focusing on those instead?


84 comments

We should give everybody as many utilons as we can

Not at all. We're all just trying to maximize our own utilons. My utility function has a term in it for other people's happiness. Maybe it has a term for other people's utilons (I'm not sure about that one though). But when I say I want to maximize utility, I'm just maximizing one utility function: mine. Consideration for others is already factored in.

In fact I think you're confusing two different topics: decision theory and ethics. Decision theory tells us how to get more of what we want (inclu... (read more)

Seconded. It seems to me that what's universally accepted is that rationality is maximizing some utility function, which might not be the sum/average of happiness/preference-satisfaction of individuals. I don't know if there's a commonly-used term for this. "Consequentialism" is close and is probably preferable to "utilitarianism", but seems to actually be a superset of the view I'm referring to, including things like rule-consequentialism.
Thirded. I would add that my utility function need not have a term for your utility function in its entirety. If you intrinsically like murdering small children, there's no positive term in my utility function for that. Not all of your values matter to me.
  • You're mostly criticizing preference utilitarianism (with the preferences being uninformed preferences at that), which is far from the only possible utilitarianism and not (I think) held by all that many people here.
  • It's not a given that only happiness matters (on the face of it, this is false).
  • "Utilitarianism could be total or average" isn't an argument against the disjunction of total and average utilitarianism.

This post seems to reflect a conflation of "utilons" with "wealth", as well as a conflation of "utilons" with happiness.

We have orders of magnitude more wealth per person than our ancestors. We are not particularly good at turning wealth into happiness. This says very, very little about how good we are at achieving any goals that we have that are unrelated to happiness. For example, the world is far less dangerous than it used to be. Even taking into account two world wars, people living in the twentieth century we... (read more)

  1. It seems that it is possible to compare the happiness of two different people; i.e., I can say that giving the cake to Mary would give her twice as much happiness as it would give Fred. I think that's all you need to counter your first objection. You'd need something much more formal if you were actually trying to calculate it out rather than use it as a principle, but as far as I know no one does this.

  2. This is a big problem. I personally solve it by not using utilitarianism on situations that create or remove people. This is an inelegant hack, but it works.

  3. T

... (read more)
This is confusing the issue. Utility, which is an abstract thing measuring preference satisfaction, is not the same thing as happiness, which is a psychological state.
It's a pretty universal confusion. Many people when asked what they want out of life will say something like 'to be happy'. I suspect that they do not exactly mean 'to be permanently in the psychological state we call happiness' though, but something more like, 'to satisfy my preferences, which includes, but is not identical with, being in the psychological state of happiness more often than not'. I actually think a lot of ethics gets itself tied up in knots because we don't really understand what we mean when we say we want to be happy.
Eliezer Yudkowsky (13y):
True, but even so, thinking about utilon-seconds probably does steer your thoughts in a different direction from thinking about utility.
So let's call them hedon-seconds instead.
Scott Alexander (13y):
The terminology here is kind of catching me between a rock and a hard place. My entire point is that the "utility" of "utilitarianism" might need more complexity than the "utility" of economics, because if someone thinks they prefer a new toaster but they actually wouldn't be any happier with it, I don't place any importance on getting them a new toaster. IANAEBAFAIK economists' utility either would get them the new toaster or doesn't really consider this problem. ...but I also am afraid of straight out saying "Happiness!", because if you do that you're vulnerable to wireheading. Especially with a word like "hedon", which sounds like "hedonism", which is very different from the "happiness" I want to talk about. CEV might help here, but I do need to think about it more.
Agreed. For clarity, the economist's utility is just preference sets, but these aren't stable. Morality's utility is what those preference sets would look like if they reflected what we would actually value, given that we take everything into account. I.e., Eliezer's big computation. Utilitarianism's utility, in the sense that Eliezer is a utilitarian, is the terms of the implied utility function we have (i.e., the big computation) that refers to the utility functions of other agents. Using "utility" to refer to all of these things is confusing. I choose to call economist's utility functions preference sets, for clarity. And, thus, economic actors maximize preferences, but not necessarily utility. Perhaps utilitarianism's utility - the terms in our utility function for the values of other people - can be called altruistic utility, again, for clarity. ETA: and happiness I use to refer to a psychological state - a feeling. Happiness, then, is nice, but I don't want to be happy unless it's appropriate to be happy. Your mileage may vary with this terminology, but it helps me keep things straight.
My rough impression is that "utilitarianism" is generally taken to mean either hedonistic or preference utilitarianism, but nothing else, and that we should be saying "consequentialism". I think the "big computation" perspective in The Meaning of Right is sufficient. Or if you're just looking for a term to use instead of "utility" or "happiness", how about "goodness" or "the good"? (Edit: "value", as steven suggests, is better.)
My impression is that it doesn't need to be pleasure or preference satisfaction; it can be anything that could be seen as "quality of life" or having one's true "interests" satisfied. Or "value".
I agree we should care about more than people's economic utility and more than people's pleasure. "eudaimon-seconds", maybe?
This is one reason I say my notional utility function is defined over 4D histories of the entire universe, not any smaller structures like people.

It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics.

I'd be interested to know if that's true. I don't accept utilitarianism as a basis for ethics. Alicorn's recent post suggests she doesn't either. I think quite a few rationalists are also libertarian leaning and several critiques of utilitarianism come from libertarian philosophies.

Suggests? I state it outright (well, in a footnote). Not a consequentialist over here. My ethical views are deontic in structure, although they bear virtually no resemblance to the views of the quintessential deontologist (Kant).
I did think twice over using 'suggests' but I just threw in the link to let you speak for yourself. Thanks for clarifying :)
Additional data point: not a utilitarian either. FWIW: fairly committed consequentialist. Most likely some form of prioritarian, possibly a capability prioritarian (if that even means anything); currently harboring significant uncertainty with regard to issues of population ethics.
Person-affecting consequentialisms are pretty nice about population ethics.
Yeah, that's the way I tend, but John Broome has me doubting whether I can get everything I want here.
Conchis, take a look at Krister Bykvist's paper, "The Good, the Bad and the Ethically Neutral" for a convincing argument that Broome should embrace a form of consequentialism. (As an aside, the paper contains this delightful line: "My advice to Broome is to be less sadistic.")
Thanks for the link. As far as I can tell, Bykvist seems to be making an argument about where the critical level should be set within a critical-level utilitarian framework rather than providing an explicit argument for that framework. (Indeed, the framework is one that Broome appears to accept already.) The thing is, if you accept critical-level utilitarianism you've already given up the intuition of neutrality, and I'm still wondering whether that's actually necessary. In particular, I remain somewhat attracted to a modified version of Dasgupta's "relative betterness" idea, which Broome discusses in Chapter 11 of Weighing Lives. He seems to accept that it performs well against our intuitions (indeed, arguably better than his own theory), but ultimately rejects it as being undermotivated. I still wonder whether such motivation can be provided. (Of course, if it can't, then Bykvist's argument is interesting.)

Do you want to be a wirehead?

I do. Very much so, in fact.
It's fairly straightforward to max out your subjective happiness with drugs today, why wait?
Is it? What drugs?
Well, that's an interesting question. If you wanted to just feel maximum happiness in something like your own mind, you could take the strongest dopamine and norepinephrine reuptake inhibitors you could find. If you didn't care about your current state, you could get creative: opioids to get everything else out of the way, psychostimulants, deliriants. I would need to think about it; I don't think anyone has ever really worked out all the interactions. It would be easy to achieve an extremely high bliss, but some interaction work would be required to figure out something like a theoretical maximum. The primary thing in the way is the fact that even if you could find a way to prevent physical dependency, the subject would be hopelessly psychologically addicted, unable to function afterwards. You'd need to stably keep them there for the rest of their life expectancy; you couldn't expect them to take any actions or move in and out of it. Depending on the implementation, I would expect wireheading to be much the same. Low levels of stimulation could potentially be controlled, but using it to get maximum pleasure would permanently destroy the person. Our architecture isn't built for it.
Current drugs will only give you a bit of pleasure before wrecking you in some way or another. CronoDAS should be doing his best to stay alive, his current pain being a down payment on future real wireheading.
Paul Crowley (13y):
Some current drugs, like MDMA, are extremely rewarding at a very low risk.
"Probably the gravest threat to the long-term emotional and physical health of the user is getting caught up in the criminal justice system."
MDMA is known to be neurotoxic. It's definitely not the way to attain maximum happiness in the long run, unless your present life expectancy is very short indeed.
I think that is incorrect. Please substantiate.
From the same page cited by timtyler above: [] Yes, the page goes on to describe reasons to be skeptical of the studies, but I think that I don't want to risk it - and I don't know how to get the drugs anyway, especially not in a reasonably pure form. I've also made a point of refusing alcoholic beverages even when under significant social pressure to consume them; my family medical history indicates that I may be at unusually high risk for alcoholism, and I would definitely describe myself as having an "addictive personality", assuming such a thing exists.
Ah, thanks for the relevant response. I was carelessly assuming a stronger definition of neurotoxicity along the lines of the old 80s propaganda ("one dose of MDMA = massive brain damage").
The most recent meta-analysis acknowledges that "the evidence cannot be considered definitive", but concludes: For practical purposes, this lingering doubt makes little difference. Hedonists are well-advised to abstain from taking Ecstasy on a regular basis even if they assign, say, a 25% chance to the hypothesis that MDMA is neurotoxic. I myself believe that positive subjective experience ("happiness", in one of its senses) is the only thing that ultimately matters, and would be the first to advocate widespread use of ecstasy in the absence of concerns about its adverse effects on the brain. -- Gouzoulis-Mayfrank, E.; Daumann, Neurotoxicity of methylenedioxyamphetamines (MDMA; ecstasy) in humans: how strong is the evidence for persistent brain damage?, J. Addiction. 101(3):348-361, March 2006.
Paul Crowley (13y):
Yes, I think the sort of "eight pills every weekend" behavour that is sometimes reported is definitely inadvisable. However, there are escalating hazards and diminishing returns; it seems to me that the costs/benefits analysis looks quite the other way for infrequent use. The benefits extend beyond the immediate experience of happiness.
It depends on what you mean by wrecking. Morphine, for example, is pretty safe. You can take it in useful, increasing amounts for a long time. You just can't ever stop using it after a certain point, or your brain will collapse on itself. This might be a consequence of the bluntness of our chemical instruments, but I don't think so. We now have much more complicated drugs that blunt and control physical withdrawal and dependence, like Subutex and so forth, but the recidivism and addiction numbers are still bad. Directly messing with your reward mechanisms just doesn't leave you a functioning brain afterward, and I doubt wireheading of any sophistication will either.
My preference function advises me against becoming a wirehead, but I would be much happier if I did it. Obviously. And it's not really a binary choice.

The modern economy is just wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors - and it doesn't really leave people much happier.

Current research suggests it does:

The facts about income and happiness turn out to be much simpler than first realized:

1) Rich people are happier than poor people.

2) Richer countries are happier than poorer countries.

3) As countries get richer, they tend to get happier.

It's true that my critique would be a lot weaker if the Easterlin paradox turned out to be false, but neither Easterlin nor I am anywhere close to being convinced of that. It would surprise me greatly (in the <1% chance sense) if it turned out to be so. 1 is obviously predicted by the hedonic treadmill, so it's not surprising. And as far as I know there's very little evidence for 2 and 3 - there might be some tiny effect, but if it were strong then either everybody today would feel ecstatic all the time, or our ancestors 200 years ago would all have felt suicidal all the time, neither of which is the case.
The research I linked claims to be evidence for 2 and 3. I'd say it's not irrefutable evidence but it's more than 'very little'. Do you take issue with specific aspects of the research? There seems to be a certain amount of politics tied up in happiness research. Some people prefer to believe that improved material wealth has no correlation with happiness because it fits better with their political views, others prefer to believe that improved material wealth correlates strongly with happiness. I find the evidence that there is a correlation persuasive, but I am aware that I may be biased to view the evidence favourably because it is more convenient if it is true in the context of my world view.
This could be partly a comparison effect. It's possible that rich people are happier than poor people because they compare themselves to poor people, and the denizens of rich countries are happier than the denizens of the Third World because they can likewise make such a comparison. A country that's gaining wealth is gaining countries-that-it's-better-than and shrinking the gap between it and countries that are still wealthier. If wealth were fairly distributed, it's arguable whether we'd have much to show for some flat increase in everyone's wealth, handed out simultaneously and to everyone.
It's certainly possible, but the research doesn't seem to suggest that. More research is needed, but the current research doesn't really support the comparison explanation.
I think you're over-interpreting the results of a single (and as far as I'm aware as-yet-non-peer-reviewed) paper. Cross-country studies are suggestive, but as far as I'm concerned the real action is in micro data (and especially in panel studies tracking the same individuals over extended periods of time). These have pretty consistently found evidence of comparison effects in developed countries. (The state of play is a little more complicated for transition and developing countries.) A good overview is: Clark, Frijters and Shields (2008), "Relative Income, Happiness, and Utility", Journal of Economic Literature 46(1): 95-144 (an earlier version is available on SSRN). For what it's worth, my read of the micro data is that it generally doesn't support the "money doesn't make people happy" hypothesis either. Money does matter, though in many cases rather less than some other life outcomes.
My claim would be that if the poorest country in the world could be brought up to the standard of living of the US and the rest of the world could have its standard of living increased so as to maintain the same relative inequality, then (to a first approximation) every individual in the world would find their happiness either increased or unchanged. I don't know if anyone would go so far as to claim otherwise but it sometimes seems that some people would dispute that claim.
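The comparison-effect hypothesis discussed in this thread can be made concrete with a toy model in which happiness depends only on one's rank in the income distribution. The function `rank_happiness` and its functional form are hypothetical, purely for illustration:

```python
# Toy model: happiness driven by relative rank in income rather than
# absolute income. The functional form is made up for illustration.

def rank_happiness(my_income, all_incomes):
    """Fraction of people with income at or below mine."""
    return sum(x <= my_income for x in all_incomes) / len(all_incomes)

incomes = [10, 20, 30, 40, 50]
before = [rank_happiness(x, incomes) for x in incomes]

# Double everyone's income: absolute wealth rises, but ranks don't move.
doubled = [2 * x for x in incomes]
after = [rank_happiness(x, doubled) for x in doubled]

print(before == after)  # True: a flat proportional raise changes nothing
```

Under a pure rank model, the flat increase in everyone's wealth described above leaves every individual's happiness exactly unchanged; any observed gain would have to come from an absolute-income term.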

But my main problem is that there's very little evidence getting utilons is actually increasing anybody's happiness significantly.

If you give someone more utilons, and they do not get happier, you're doing it wrong by definition. Conversely, someone cannot get happier without acquiring more utilons by definition.

You've rejected a straw man. You're probably right to reject said straw man, but it doesn't relate to utilitarianism.

Utilons are not equivalent to happiness. Utilons are basically defined as "whatever you care about," while happiness is a specific brain state. For example, I don't want people to be tortured. If you save someone else from torture and don't tell me about it, you've given me more utilons without increasing my happiness one bit. The converse is true as well - you can make someone happier without giving them utilons. From what I know of Eliezer, if you injected him with heroin, you'd make him (temporarily) happier, but I doubt you'd have given him any utilons. Beware arguing by definition. Especially when your definition is wrong.
You caution against arguing by definition and yet claim definitions that are not universally agreed on as authoritative. There is genuine confusion over some of these definitions; it's useful to try to clarify what you mean by the words, but you should refrain from claiming that is the meaning. For example, there are contrary definitions of happiness (it's not just a brain state) in standard references such as Wikipedia. I don't think it's uncontroversial to claim that utilons can be increased by actions you don't know about either. The definitions really are at issue here, and there are relevant differences between commonly used definitions of happiness.
My understanding of utilitarian theory is that, at the highest meta level, every utilitarian theory is unified by the central goal of maximizing happiness, though the definitions, priorities, and rules may vary. If this is true, "Utilitarianism fails to maximize happiness" is an illegitimate criticism of the meta-theory. It would be saying, "Maximizing happiness fails to maximize happiness," which is definitionally impossible. Since the meta-theory is "Maximize happiness," you can't say that the meta-theory fails to maximize happiness, only that specific formulations do, which is absolutely legitimate. The original author appears to be criticizing a specific formulation while he claims to be criticizing the meta-theory. That was my original point, and I did not make it clearly enough. I used "by definition" precisely because I had just read that article. I'm clearly wrong because apparently the definition of utilons is controversial. I simply think of them as a convenient measurement device for happiness. If you have more utilons, you're that much happier, and if you have fewer, you're that much less happy. If buying that new car doesn't increase your happiness, you derive zero utilons from it. To my knowledge, that's a legitimate and often-used definition of utilon. I could be wrong, in which case my definition is wrong, but given the fact that another poster takes issue with your definition, and that the original poster implicitly uses yet another definition, I really don't think mine can be described as "wrong." Though, of course, my original assertion that the OP is wrong by definition is wrong.

This reminds me of a talk by Peter Railton I attended several years ago. He described happiness as a kind of delta function: we are as happy as our difference from our set point, but we drift back to our set point if we don't keep getting new input. Increasing one's set point will make one "happier" in the way you seem to be using the word, and it's probably possible (we already treat depressed people, who have unhealthily low set points and are resistant to more customary forms of experiencing positive change in pleasure).

So happiness is the difference between your set point of happiness and your current happiness? Looks circular.
What do you / did he mean by delta function? Dirac delta and Kronecker delta don't seem to fit.
Delta means, in this case, change. We are only happy if we are constantly getting happier; we don't get to recycle utilons.
Making explicit something implicit in steven0461's comment: the term "delta function" has a technical meaning, and it doesn't have anything to do with what you're describing. You might therefore prefer to avoid using that term in this context. (The "delta function" is a mathematical object that isn't really even a function; handwavily it has f(x)=0 when x isn't 0, f(x) is infinite when x is 0, and the total area under the graph of f is 1. This turns out to be a very useful gadget in some areas of mathematics, and one can turn the handwaving into actual mathematics at some cost in complexity. When handwaving rather than mathematics is the point, one sometimes hears "delta function" used informally to denote anything that starts very small, rapidly becomes very large, and then rapidly becomes very small again. Traffic at a web site when it gets a mention in some major media outlet, say. That's the "Dirac delta" Steven mentioned; the "Kronecker delta" is a function of two variables that's 1 when they're equal and 0 when they aren't, although most of the time when it's used it's actually denoting something hairier than that. This isn't the place for more details.)
This doesn't make logical sense if both uses of the word "happy" mean the same thing, so we should use different words for the two.
We only occupy a level of happiness/contentment above our individual, natural, set points as long as we are regularly satisfying previously unsatisfied preferences. When that stream of satisfactions stops, we gradually revert to that set point.
OK, so the point is happiness depends on the time derivative of preference satisfaction rather than on preference satisfaction itself?
If I knew what "time derivative" meant, I might agree with you.
Amount of change per unit of time, basically.
Then yes, that's exactly it.
You can think of happiness as the derivative of utility. (Caution: that is making more than just a mathematical claim.)
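The model this subthread converges on - happiness tracking the change in utility, with reversion to a personal set point - can be written as a short discrete-time sketch. Everything below (the `simulate` function, the `decay` and `gain` parameters) is a hypothetical illustration, not anything from the thread:

```python
# Discrete-time sketch of the set-point model discussed above:
# happiness responds to *changes* in utility and decays back toward a
# personal set point once the stream of new satisfactions stops.
# The functional form and parameter values are hypothetical.

def simulate(utility_stream, set_point=0.0, decay=0.5, gain=1.0):
    """Happiness jumps with each gain in utility, then relaxes
    geometrically back toward the set point."""
    happiness = set_point
    prev_u = utility_stream[0]
    trajectory = []
    for u in utility_stream:
        delta = u - prev_u  # new satisfaction gained this step
        happiness = set_point + decay * (happiness - set_point) + gain * delta
        trajectory.append(happiness)
        prev_u = u
    return trajectory

# Rising utility keeps happiness above the set point; once utility
# plateaus, happiness drifts back down - the hedonic treadmill.
stream = [0, 1, 2, 3, 4, 4, 4, 4, 4]
print(simulate(stream))
```

While utility keeps rising, happiness stays elevated; as soon as utility plateaus (no new delta), the trajectory decays back toward the set point even though total utility is at its highest.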

I think it's pretty clear we should have a term in our social utility function that gives value to complexity (of the universe, of society, of the environment, of our minds). That makes me more than just a preference utilitarian. It's an absolute objective value. It may even, with interpretation, be sufficient by itself.

There are specific things that I value that are complex, and in some cases I value them more the more complex they are, but I don't think I value complexity as such. Complexity that doesn't hit some target is just randomness, no?
2Scott Alexander13y
Can you explain that a little better? It seems to me like if you define complexity in any formal way, you'll end up tiling the universe with either random noise, fractals, or some other extremely uninteresting system with lots and lots of variables. I always thought that our love of complexity is a side-effect of the godshatter, i.e., there's no one thing that will interest us. Solve everything else, and the desire for complexity disappears. You might convince me otherwise by defining "complexity" more rigorously.
Could complexity-advocates reply to this point specifically? Either to say why they don't actually want this or to admit that they do. I'm confused.
Living systems do produce complex, high-entropy states, as a matter of fact. Yes, that leads to universal heat death faster if they keep on, but - so what?
Someone whose name escapes me has argued that this is why living systems exist - the universe tends towards maximum entropy, and we're the most efficient way of getting there. Let's see how much energy we can waste today!
There are a few of us; I've written pages on the topic elsewhere. The main recent breakthrough in our understanding of this area is down to Dewar, and the basic idea goes back at least to Lotka, in 1922.
What are you trying to say? Preference-satisfaction is exactly as absolute and objective a value as complexity; it's just one that happens to explicitly depend on the contents of people's minds.
Now I don't know what you are trying to say. Saying that preferences are values is tautological - "preferences" and "values" are synonyms in this kind of discussion.
One of the things that currently frustrates me most about this site is the confusion that seems to surround the use of words like value, preference, happiness, and utility. Unfortunately, these words do not have settled, consistent meanings, even within literatures that utilize them extensively (economics is a great example of this; philosophy tends to be better, though still not perfect). Nor does it seem likely that we will be able to collectively settle on consistent usages across the community. (Indeed, some flexibility may even be useful.) Given that, can we please stop insisting that others' statements are wrong/nonsensical/tautological etc. simply on the basis that they aren't using our own preferred definitions. If something seems not to make sense to you, consider extending some interpretative charity by (a) considering whether it might make sense given alternative definitions; and/or (b) asking for clarification, before engaging in potentially misguided criticisms. EDIT: By way of example here, many people would claim that things can be valuable, independently of whether anyone has a preference for them. You may not think such a view is defensible, but it's not obviously gibberish, and if you want to argue against it, you'll need more than a definitional fiat.
Whoa! Hold your horses! I started out by saying: "I don't know what you are trying to say." Clarify definitions away - if that is the problem - which seems rather unlikely.
seemed more like an assertion than an attempt to seek clarification, but I apologize if I misinterpreted your intention. The EDIT was supposed to be an attempt to clarify. Does the claim I made there make sense to you?
FWIW, to my way of thinking, we can talk about hypothetical preferences just about as easily as hypothetical values.
I'm afraid that I don't understand the relevance of this to the discussion. Could you expand?
Read what I said: "preference-satisfaction is... a value", not "preferences are values". The point is that the extent to which people's preferences are satisfied is just as objective a property of a situation as the amount of complexity present.
The preferences can be anything. If I claim that complexity should be one of the preferences, for me and for everyone, that's an objective claim - "objective" in the sense "claiming an objective value valid for all observers, rather than a subjective value that they can choose arbitrarily". It's practically religious. It's radically different from saying "people satisfy their preferences". "The extent to which people's preferences are satisfied" is an objective property of a situation. But that has nothing to do with what I said; it's using a different meaning of the word "objective".
Trying for a sympathetic interpretation - I /think/ you must be talking about the preferences of a particular individual, or an average human - or something like that. In general, preference-satisfaction is not specific - in the way that maximising complexity is (for some defined metric of complexity) - because the preferences could be any agent's preferences - and different agents can have wildly different preferences.
Preference-satisfaction in this context is usually considered as an aggregate (usually a sum or an average) of the degree to which all individuals' preferences are satisfied (for some defined metric of satisfaction).
The preferences of the people in the situation being evaluated.
That's the theory, as I understand it.