Eudaimonic Utilitarianism

by Darklight · 4th Sep 2013 · 35 comments



Eliezer Yudkowsky has on several occasions used the term “Eudaimonia” to describe an objectively desirable state of existence. While the meta-ethics sequence on Less Wrong has been rather emphatic that simple universal moral theories are inadequate, given the complex nature of human values, one wonders just what would happen if we tried anyway to build a moral theory around the notion of Eudaimonia. The following is a cursory attempt to do so. Even if you don’t agree with everything I say here, I ask that you please bear with me to the end before passing judgment on this theory. Also, if you choose to downvote this post, please offer some criticism in the comments to explain why you did so. I am admittedly new to posting in the Less Wrong community, and would greatly appreciate your comments and criticisms. Even though I use imperative language to argue my ideas, I consider this theory a work in progress at best. So without further ado, let us begin…

 

Classical Utilitarianism allows for situations where you could theoretically justify universal drug addiction as a way to maximize happiness, if you could find some magical drug that made people super happy all the time with no side effects. In Aldous Huxley's novel Brave New World, a drug called Soma is used to sedate the entire population, making them docile, dependent, and very, very happy. Now, John Stuart Mill does argue that some pleasures are of a higher quality than others, but how exactly do you define and compare that quality? What exactly makes Shakespeare better than Reality TV? Arguably, a lot of people are bored by Shakespeare and made happier by Reality TV.

 

Enter Aristotle. Aristotle had his own definition of happiness, which he called Eudaimonia. Roughly translated, it means "Human Flourishing". It is a complex concept, but I like to think of it as "reaching your full potential as a human being", "being the best that you can be", "fulfilling your purpose in life", and “authentic happiness” (based on the existential notion of authenticity). Perhaps a better way to explain it is this. The Classical Utilitarian concept of happiness is subjective: it is just the happiness that you feel, within your limited understanding of everything. The Eudaimonic Utilitarian concept of happiness is objective: it is the happiness you would have if you knew everything that was really happening. If you, from the perspective of an impartial observer, knew the total truth (perfect information), would you be happy with the situation? You would probably only be truly happy if you were in the process of becoming the best possible you, and if it was the best possible reality. Theists have another name for this: God's Will (see Divine Benevolence, or an Attempt to Prove That the Principal End of the Divine Providence and Government is the Happiness of His Creatures (1731) by Thomas Bayes; yes, that Bayes).

 

Looking at the metaphor of God, an omnibenevolent God wants everyone to be happy. But more than just happy as docile creatures, he wants them to fulfill their purpose and destiny and achieve their fullest potential for greatness because doing so allows them to contribute so much more to everything, and make the whole universe and His creation better. Now, it's quite possible that God does not exist. But His perspective, that of the impartial observer with perfect information and rationality, is still a tremendously useful perspective to have to make the best moral decisions, and is essentially the one that Eudaimonic Utilitarianism would like to be able to reason from.

 

Such happiness would be based on perfect rationality, and on the assumption that happiness is the emotional goal state. It is the state that we achieve when we accomplish our goals, that is to say, when we are being rational and engaging in rational activity, also known as Arête. For this reason, Eudaimonia as a state is not necessarily human-specific. Any rational agent with goals, including, say, a Paperclip Maximizer, might reach a Eudaimonic state even if it isn't "sentient" or "intelligent" in the way that we would understand it. It need not "feel happy" in a biochemical manner, only be goal-directed and have some sort of desired success state. One could argue that this desired success state would be the mechanical equivalent of happiness to a Really Powerful Optimization Process: that in its own way the Paperclip Maximizer feels pleasure when it succeeds at maximizing paperclips, and pain when it fails to do so.

 

Regardless, Eudaimonia would not be maximized by taking Soma. Nor would it be achieved by hooking up to the Matrix, even if the Matrix were a perfect utopia of happiness, because that utopia and its happiness aren't real. They're a fantasy, a drug that prevents people from actually living and being who they're supposed to be, who they can be. They would be living a lie. Eudaimonia is based on the truth. It is based on reality and what can and should be done. It requires performing rational activity and actually achieving goals. It is an optimization given all the data.

 

I have begun by explaining how Eudaimonic Utilitarianism is superior to Classical Utilitarianism. I will now try to explain how Eudaimonic Utilitarianism is both superior to and compatible with Preference Utilitarianism. Regular Preference Utilitarianism is arguably even more subjective than Classical Utilitarianism. With Preference Utilitarianism, you’re essentially saying that whatever people think is in their interests is what should be maximized. But this assumes that their preferences are rational. In reality, most people’s preferences are strongly influenced by emotions and bounded rationality.

 

For instance, take the example of a suicidal and depressed man. Due to emotional factors, this man has the irrational desire to kill himself. Preference Utilitarianism would either have to accept this preference, even though most would agree it is objectively “bad” for him, or declare this “manifest” preference inferior to the man’s “true” preferences. “Manifest” preferences are what a person’s actual behaviour would suggest, while “true” preferences are what they would have if they could view the situation with all relevant information and rational care. But how do we go about determining a person’s “true” preferences? Do we not have to resort to some kind of objective criterion of what counts as rational behaviour?

 

But where does this objective criterion come from? Well, a Classical Utilitarian would argue that suicide would negate all the potential happiness that the person could feel in the future, and that rationality is what maximizes happiness. A Eudaimonic Utilitarian would go further and state that if the person knew everything, both their happiness and their preferences would be aligned towards rational activity, and therefore not only would their objective happiness be maximized by not committing suicide, but their “true” preferences would also be satisfied. Eudaimonia, therefore, is the objective criterion of rational behaviour. It is not merely subjective preference, but a kind of objective preference based on perfect information and perfect rationality.

 

Preference Utilitarianism only really works as a moral theory if the person’s preferences are based on rationality and complete knowledge of everything. Conveniently, this is exactly the position Eudaimonic Utilitarianism assumes. It holds that what should be maximized is the person’s preferences as they would be if the person were completely rational and knew everything, because those preferences would naturally align with achieving Eudaimonia.

 

Therefore, Eudaimonic Utilitarianism can be seen as a merging, a unification of both Classical and Preference Utilitarianism because, from the perspective of an objective impartial observer, the state of Eudaimonia is simultaneously happiness and rational preference achieved through Arête, or rational activity, which is equivalent to “doing your best” or “maximizing your potential”.

 

Preference Utilitarianism is neutral as to whether or not to take Soma or plug into the Utopia Matrix. For Preference Utilitarianism, it’s up to the individual’s “rational” preference. Eudaimonic Utilitarianism on the other hand would argue that it is only rational to take Soma or plug into the Utopia Matrix if doing so still allows you to achieve Eudaimonia, which is unlikely, as doing so prevents one from performing Arête in the real world. At the very least, rather than basing it on a subjective preference, we are now using an objective evaluation function.

 

The main challenge for Eudaimonic Utilitarianism, of course, is that we, as human beings with bounded rationality, do not have access to God's position of perfect information. Nevertheless, we can still apply Eudaimonic Utilitarianism in everyday scenarios.

 

For instance, consider the problem of Adultery. A common criticism of Classical Utilitarianism is that it doesn’t condemn acts like Adultery, because at first glance such an act seems like it would increase net happiness and therefore be condoned. This does not take into account the probability of being caught, however. Given uncertainty, it is usually safe to assume a uniform distribution of probabilities, which means that getting caught has a 0.5 probability. We must then compare the utilities of not getting caught and getting caught. It doesn’t really matter what the exact numbers are, so much as the relative relationship of the values. So, for instance, we can say that in the not-getting-caught scenario, Adultery yields +5 to each party to the Adultery, for a total of +10. In the getting-caught scenario, however, there is a +5 to the uncoupled party, but a net loss of -20 to the coupled party and -20 to the wronged partner, due to the potential falling out and loss of trust resulting from the discovered Adultery.

 

 

|  | Commit Adultery | Don’t Commit Adultery |
| --- | --- | --- |
| Truth Discovered | -35 effect × 0.5 probability | 0 effect × 0.5 probability |
| Truth Not Discovered | +10 effect × 0.5 probability | 0 effect × 0.5 probability |
| Potential Consequences | -12.5 | 0 |

 

Thus the net total effect of Adultery in the caught scenario is -35. If we assign the probabilities to each scenario, +10 × 0.5 = +5, while -35 × 0.5 = -17.5, and +5 - 17.5 = -12.5. The probable net effect of Adultery is therefore negative, and the act morally wrong.
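The calculation above can be sketched as a simple expected-utility sum. The payoffs and the uniform 0.5/0.5 prior are the illustrative numbers from the text, not empirical values:

```python
def expected_utility(outcomes):
    """Sum of payoff * probability over mutually exclusive outcomes."""
    return sum(payoff * prob for payoff, prob in outcomes)

# Classical (subjective-happiness) payoffs: caught = -35, not caught = +10,
# each weighted by a uniform 0.5 probability of discovery.
commit = expected_utility([(-35, 0.5), (+10, 0.5)])
abstain = expected_utility([(0, 0.5), (0, 0.5)])

print(commit)   # -12.5
print(abstain)  # 0.0
```

Since the expected value of committing Adultery (-12.5) is below that of abstaining (0), the act comes out morally wrong under these assumed payoffs.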

 

But what if getting caught is very unlikely? Well, to a true agnostic at least, the probability of getting caught would be at least 0.5, because if we assume total ignorance, the probability that God and/or an afterlife exists would follow a uniform distribution, as suggested by the Principle of Indifference and the Principle of Maximum Entropy. Thus there is at least a 0.5 chance that eventually the other partner will find out. Assuming a strong atheistic view instead, there is the danger that, hypothetically, if the probability of the truth not being discovered were 1, then this calculation would actually suggest that committing Adultery is moral.

 

The previous example is based on the subjective happiness of Classical Utilitarianism, but what if we used a criterion of Eudaimonia, or the objective happiness we would feel if we knew everything? In that case the Adultery scenario looks even more negative.

 

In this instance, we can say that in the not-getting-caught scenario, Adultery yields +5 to each party to the Adultery, but also a -20 to the partner who is being wronged, because that is how much they would suffer if they knew, for a net -10. In the getting-caught scenario, there is a +5 to the uncoupled party, but a net loss of -20 to the coupled party and an additional -20 to the wronged partner, due to the potential falling out and loss of trust resulting from the discovered Adultery.

 

 

|  | Commit Adultery | Don’t Commit Adultery |
| --- | --- | --- |
| Truth Discovered | -35 effect × 0.5 probability | 0 effect × 0.5 probability |
| Truth Not Discovered | -10 effect × 0.5 probability | 0 effect × 0.5 probability |
| Potential Consequences | -22.5 | 0 |

 

As you can see, with a Eudaimonic Utilitarian criterion, even if the probability of the truth not being discovered were 1, the result would still be negative and therefore morally wrong. Thus, whereas Classical Utilitarianism, based on subjective happiness, rests its case against Adultery on the probability of being caught and the potential negative consequences, Eudaimonic Utilitarianism makes the stronger case that Adultery is always wrong, because regardless of the probability of being caught, the consequences are inherently negative. It is therefore unnecessary to resort to traditional Preference Utilitarianism to accommodate our moral intuitions about Adultery.
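The contrast between the two criteria can be made explicit by sweeping the probability of discovery. The only difference is the payoff in the undiscovered case: +10 under the subjective (Classical) criterion, -10 under the objective (Eudaimonic) one, again using the text's hypothetical numbers:

```python
def expected_utility(p_caught, u_caught, u_not_caught):
    """Expected utility given a probability of discovery and outcome payoffs."""
    return p_caught * u_caught + (1 - p_caught) * u_not_caught

for p in (0.0, 0.25, 0.5, 1.0):
    classical = expected_utility(p, -35, +10)   # subjective-happiness payoffs
    eudaimonic = expected_utility(p, -35, -10)  # eudaimonic payoffs
    print(f"p(caught)={p}: classical={classical}, eudaimonic={eudaimonic}")
```

At p = 0 the classical value is +10, so Classical Utilitarianism would endorse undetectable Adultery, while the eudaimonic value is still -10: negative for every probability of discovery, which is the point the table makes.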

 

Consider another scenario. You are planning a surprise birthday party for your friend, and she asks you what you are doing. You can either tell the truth or lie. Classical Utilitarianism would say to lie because the happiness of the surprise birthday party outweighs the happiness of being told the truth. Preference Utilitarianism however would argue that it is rational for the friend to want to know the truth and not have her friends lie to her generally, that this would be her “true” preference. Thus, Preference Utilitarianism would argue in favour of telling the truth and spoiling the surprise. The happiness that the surprise would cause does not factor into Preference Utilitarianism at all, and the friend has no prior preference for a surprise party she doesn’t even know about.

 

What does Eudaimonic Utilitarianism say? Well, if the friend really knew everything that was going on, would she be happier and prefer to know the truth in this situation, or be happier and prefer not to know? I would suggest she would be happier and prefer not to know, in which case Eudaimonic Utilitarianism agrees with Classical Utilitarianism and says we should lie to protect the secret of the surprise birthday party.

 

Again, what's the difference between Eudaimonia and preference-fulfillment? Basically, preference-fulfillment is based on people's subjective preferences, while Eudaimonia is based on objective well-being, or, as I like to explain it, the happiness they would feel if they had perfect information.

 

The difference is somewhat subtle to the extent that a person's "true" preferences are supposed to be “the preferences he would have if he had all the relevant factual information, always reasoned with the greatest possible care, and were in a state of mind most conducive to rational choice.” (Harsanyi 1982) Note that relevant factual information is not the same thing as perfect information.

 

For instance, take the classic criticism of Utilitarianism: the scenario where you hang an innocent man to satisfy an unruly mob's desire for justice. Under both hedonistic and preference utilitarianism, the hanging of the innocent man can be justified, because it satisfies both the happiness of the mob and the preferences of the mob. However, hanging an innocent man does not satisfy the Eudaimonia of the mob, because if the people in the mob knew that the man was innocent and were truly rational, they would not want to hang him after all. Note that in this case they would only have this knowledge under perfect information, as it is assumed that the man appears to all rational parties to be guilty even though he is actually innocent.

 

So, Eudaimonia assumes that in a hypothetical state of perfect information and rationality (that is to say objectivity), a person's happiness would best be satisfied by actions that might differ from what they might prefer in their normal subjective state, and that we should commit to the actions that satisfy this objective happiness (or well-being), rather than satisfy subjective happiness or subjective preferences.

 

For instance, we can take the example from John Rawls of the grass-counter. "Imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns of Harvard." Under both hedonistic and preference utilitarianism, this would be acceptable. However, a Eudaimonic interpretation would argue that counting blades of grass would not maximize her objective happiness, that there is an objective state of being that would actually make her happier, even if it went against her personal preferences, and that this state of being is what should be maximized. Similarly, consider the rational philosopher who has come to the conclusion that life is meaningless and not worth living and therefore develops a preference to commit suicide. This would be his "true" preference, but it would not maximize his Eudaimonia. For this reason, we should try to persuade the suicidal philosopher not to commit suicide, rather than helping him do so.

 

How does Eudaimonia compare with Eliezer Yudkowsky’s concept of Coherent Extrapolated Volition (CEV)? Like Eudaimonia, CEV is based on what an idealized version of us would want "if we knew more, thought faster, were more the people we wished we were, had grown up farther together". This is similar to, but not the same thing as, an idealized version of us with perfect information and perfect rationality. Arguably, Eudaimonia is an extreme form of CEV that pushes these idealizations to their limit.

 

Furthermore, CEV assumes that the desires of humanity converge. The concept of Eudaimonia does not require this. The Eudaimonia of different sentient beings may well conflict, in which case Eudaimonic Utilitarianism takes the Utilitarian route and suggests the compromise of maximizing Eudaimonia for the greatest number of sentient beings, with a hierarchical preference for more conscious beings, such as humans, over, say, ants. This is not to say that humans are necessarily absolute utility monsters to the ants. One could instead weight humans much more heavily in the moral calculus according to their level of consciousness. That could conceivably lead to the situation where a billion ants outweigh a single human. If such a notion is anathema to you, then perhaps making humans absolute utility monsters seems reasonable after all. Keep in mind, however, that the same argument can be made that a superintelligent A.I. is a utility monster to humans. The idea that seven billion humans might outweigh one superintelligent A.I. in the moral calculus may not be such a bad idea.
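The consciousness-weighted calculus described above can be sketched as follows. The `Being` type, the specific weights, and the idea of a simple multiplicative weighting are all my hypothetical illustrations, not anything the text specifies:

```python
from dataclasses import dataclass

@dataclass
class Being:
    name: str
    eudaimonia: float     # eudaimonic utility attained by the being
    consciousness: float  # hypothetical moral weight by level of consciousness

def weighted_total(beings):
    """Aggregate Eudaimonia, weighting each being by its consciousness."""
    return sum(b.eudaimonia * b.consciousness for b in beings)

human = Being("human", eudaimonia=1.0, consciousness=1.0)
ants = [Being(f"ant{i}", eudaimonia=1.0, consciousness=1e-5)
        for i in range(100_000)]

print(weighted_total([human]))  # 1.0
print(weighted_total(ants))     # roughly 1.0: enough lightly-weighted ants
                                # can match one heavily-weighted human
```

This illustrates the trade-off in the text: weighting by consciousness avoids making humans absolute utility monsters, but a sufficiently large number of lightly-weighted beings can still outweigh a single heavily-weighted one.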

 

In any case, Eudaimonic Utilitarianism does away with many of the unintuitive weaknesses of both Classical Hedonistic Utilitarianism and Preference Utilitarianism. It validates our intuitions about the importance of authenticity and rationality in moral behaviour. It also attempts to unify morality and rationality. Though it is not without its issues, not least of which is that it incorporates a very simplified view of human values, I nevertheless offer it as an alternative to other existing forms of Utilitarianism for your consideration.
