Eliezer Yudkowsky has on several occasions used the term “Eudaimonia” to describe an objectively desirable state of existence. While the meta-ethics sequence on Less Wrong has been rather emphatic that simple universal moral theories are inadequate given the complex nature of human values, one wonders what would happen if we tried anyway to build a moral theory around the notion of Eudaimonia. The following is a cursory attempt to do so. Even if you don’t agree with everything I say here, I ask that you please bear with me to the end before passing judgment on this theory. Also, if you choose to downvote this post, please offer some criticism in the comments explaining why. I am admittedly new to posting in the Less Wrong community, and would greatly appreciate your comments and criticisms. Even though I use imperative language to argue my ideas, I consider this theory a work in progress at best. So without further ado, let us begin…

 

Classical Utilitarianism allows for situations where you could theoretically justify universal drug addiction as a way to maximize happiness, if you could find some magical drug that made people super happy all the time with no side effects. In Aldous Huxley's Brave New World, the drug Soma is used to sedate the entire population, making them docile, dependent, and very, very happy. Now, John Stuart Mill does argue that some pleasures are of a higher quality than others, but how exactly do you define and compare that quality? What exactly makes Shakespeare better than Reality TV? Arguably a lot of people are bored by Shakespeare and made happier by Reality TV.

 

Enter Aristotle. Aristotle had his own definition of happiness, which he called Eudaimonia. Roughly translated, it means "Human Flourishing". It is a complex concept, but I like to think of it as "reaching your full potential as a human being", "being the best that you can be", "fulfilling your purpose in life", and “authentic happiness” (based on the existential notion of authenticity). Perhaps a better way to explain it is this. The Classical Utilitarian concept of happiness is subjective: it is just the happiness you feel given your limited understanding of everything. The Eudaimonic Utilitarian concept of happiness is objective: it is the happiness you would have if you did know everything that was really happening. If you, from the perspective of an impartial observer, knew the total truth (perfect information), would you be happy with the situation? You would probably only be truly happy if you were in the process of being the best possible you, and if it was the best possible reality. Theists have another name for this, and it is God's Will (See: Divine Benevolence, or an Attempt to Prove That the Principal End of the Divine Providence and Government is the Happiness of His Creatures (1731) by Thomas Bayes) (yes, that Bayes).

 

Looking at the metaphor of God, an omnibenevolent God wants everyone to be happy. But more than just happy as docile creatures, he wants them to fulfill their purpose and destiny and achieve their fullest potential for greatness, because doing so allows them to contribute so much more to everything, and make the whole universe and His creation better. Now, it's quite possible that God does not exist. But His perspective, that of the impartial observer with perfect information and rationality, is still a tremendously useful one for making the best moral decisions, and is essentially the perspective that Eudaimonic Utilitarianism would like to be able to reason from.

 

Such happiness would be based on perfect rationality, and on the assumption that happiness is the emotional goal state. It is the state we achieve when we accomplish our goals, that is to say, when we are being rational and performing rational activity, also known as Arête. For this reason, Eudaimonia as a state is not necessarily human-specific. Any rational agent with goals, including, say, a Paperclip Maximizer, might reach a Eudaimonic state even if it isn't "sentient" or "intelligent" in the way that we would understand it. It need not "feel happy" in a biochemical manner, only be goal-directed and have some sort of desired success state. Though I could argue that this desired success state would be the mechanical equivalent of happiness to a Really Powerful Optimization Process: that in its own way the Paperclip Maximizer feels pleasure when it succeeds at maximizing paperclips, and pain when it fails to do so.

 

Regardless, Eudaimonia would not be maximized by taking Soma. Eudaimonia would not be achieved by hooking up to the matrix, even if the matrix were a perfect utopia of happiness, because that utopia and its happiness aren't real. They're a fantasy, a drug that prevents you from actually living and being who you're supposed to be, who you can be. You would be living a lie. Eudaimonia is based on the truth. It is based on reality and on what can and should be done. It requires performing rational activity or actually achieving goals. It is an optimization given all the data.

 

I have begun by explaining how Eudaimonic Utilitarianism is superior to Classical Utilitarianism. I will now try to explain how Eudaimonic Utilitarianism is both superior to and compatible with Preference Utilitarianism. Regular Preference Utilitarianism is arguably even more subjective than Classical Utilitarianism. With Preference Utilitarianism, you’re essentially saying that whatever people think is in their interests is what should be maximized. But this assumes that their preferences are rational. In reality, most people’s preferences are strongly influenced by emotions and bounded rationality.

 

For instance, take the example of a suicidal and depressed man. Due to emotional factors, this man has the irrational desire to kill himself. Preference Utilitarianism would either have to accept this preference even though most would agree it is objectively “bad” for him, or do something like declare this “manifest” preference inferior to the man’s “true” preferences. “Manifest” preferences are what a person’s actual behaviour would suggest, while “true” preferences are what they would have if they could view the situation with all relevant information and rational care. But how do we go about determining a person’s “true” preferences? Do we not have to resort to some kind of objective criterion of what counts as rational behaviour?

 

But where does this objective criterion come from? Well, a Classical Utilitarian would argue that suicide would negate all the potential happiness that the person could feel in the future, and that rationality is what maximizes happiness. A Eudaimonic Utilitarian would go further and state that if the person knew everything, both their happiness and their preferences would be aligned towards rational activity, and therefore not only would their objective happiness be maximized by not committing suicide, but their “true” preferences would also be maximized. Eudaimonia therefore is the objective criterion of rational behaviour. It is not merely subjective preference, but a kind of objective preference based on perfect information and perfect rationality.

 

Preference Utilitarianism only really works as a moral theory if a person’s preferences are based on rationality and complete knowledge of everything. Coincidentally, Eudaimonic Utilitarianism assumes exactly this position. It assumes that what should be maximized are the person’s preferences as they would be if the person were completely rational and knew everything, because those preferences would naturally align with achieving Eudaimonia.

 

Therefore, Eudaimonic Utilitarianism can be seen as a merging, a unification of both Classical and Preference Utilitarianism because, from the perspective of an objective impartial observer, the state of Eudaimonia is simultaneously happiness and rational preference achieved through Arête, or rational activity, which is equivalent to “doing your best” or “maximizing your potential”.

 

Preference Utilitarianism is neutral as to whether or not to take Soma or plug into the Utopia Matrix. For Preference Utilitarianism, it’s up to the individual’s “rational” preference. Eudaimonic Utilitarianism on the other hand would argue that it is only rational to take Soma or plug into the Utopia Matrix if doing so still allows you to achieve Eudaimonia, which is unlikely, as doing so prevents one from performing Arête in the real world. At the very least, rather than basing it on a subjective preference, we are now using an objective evaluation function.

 

The main challenge of Eudaimonic Utilitarianism, of course, is that we as human beings with bounded rationality do not have access to the God's-eye position of perfect information. Nevertheless, we can still apply Eudaimonic Utilitarianism in everyday scenarios.

 

For instance, consider the problem of Adultery. A common criticism of Classical Utilitarianism is that it doesn’t condemn acts like Adultery, because at first glance an act like Adultery seems like it would increase net happiness and therefore be condoned. However, this does not take into account the probability of being caught. Given uncertainty, it is usually safe to assume a uniform distribution of probabilities, which means that getting caught has a 0.5 probability. We must then compare the utilities of not getting caught and getting caught. It doesn’t really matter what the exact numbers are, so much as the relative relationship of the values. So for instance, we can say that Adultery in the not-getting-caught scenario gives +5 to each party to the Adultery, for a total of +10. However, in the getting-caught scenario, there is a +5 to the uncoupled member, but a net loss of -20 to the coupled member and -20 to the wronged partner, due to the potential falling out and loss of trust resulting from the discovered Adultery.

 

 

|                        | Commit Adultery              | Don't Commit Adultery      |
|------------------------|------------------------------|----------------------------|
| Truth Discovered       | -35 effect x 0.5 probability | 0 effect x 0.5 probability |
| Truth Not Discovered   | +10 effect x 0.5 probability | 0 effect x 0.5 probability |
| Potential Consequences | -12.5                        | 0                          |

 

Thus the net total effect of Adultery in the caught scenario is -35. Weighting each scenario by its probability, +10 x 0.5 = +5, while -35 x 0.5 = -17.5, and +5 – 17.5 = -12.5. The expected net effect of Adultery is therefore negative, and the act morally wrong.
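For concreteness, here is the same calculation as a minimal Python sketch. The function name and the payoffs are just the illustrative, admittedly ad hoc numbers from the table above, together with the assumed 0.5 probability of discovery; nothing here is part of the theory itself.

```python
# A minimal sketch of the calculation above, using the post's illustrative
# payoffs and the assumed 0.5 probability of the affair being discovered.

def expected_utility(p_caught, u_caught, u_not_caught):
    """Probability-weighted sum of the two outcomes."""
    return p_caught * u_caught + (1 - p_caught) * u_not_caught

# Classical (subjective-happiness) payoffs from the table:
#   caught:     +5 (uncoupled) - 20 (coupled) - 20 (wronged partner) = -35
#   not caught: +5 + 5                                               = +10
commit = expected_utility(0.5, u_caught=-35, u_not_caught=+10)
abstain = expected_utility(0.5, u_caught=0, u_not_caught=0)

print(commit, abstain)  # -12.5 0.0 -> committing comes out negative
```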

 

But what if getting caught is very unlikely? Well, we can show that to a true agnostic at least, the probability of getting caught would be at least 0.5, because if we assume total ignorance, the probability that God and/or an afterlife exists would follow a uniform distribution, as suggested by the Principle of Indifference and the Principle of Maximum Entropy. Thus there is at least a 0.5 chance that eventually the other partner will find out. But assuming instead a strong atheistic view, there is the danger that, if the probability of the truth not being discovered were 1, this calculation would actually suggest that committing Adultery is moral.

 

The previous example is based on the subjective happiness of Classical Utilitarianism, but what if we used a criterion of Eudaimonia, or the objective happiness we would feel if we knew everything? In that case the Adultery scenario looks even more negative.

 

In this instance, we can say that Adultery in the not-getting-caught scenario gives +5 to each party to the Adultery, but also -20 to the partner who is being wronged, because that is how much they would suffer if they knew, for a net of -10. In the getting-caught scenario, there is a +5 to the uncoupled member, but a net loss of -20 to the coupled member and an additional -20 to the partner being wronged, due to the potential falling out and loss of trust resulting from the discovered Adultery.

 

 

|                        | Commit Adultery              | Don't Commit Adultery      |
|------------------------|------------------------------|----------------------------|
| Truth Discovered       | -35 effect x 0.5 probability | 0 effect x 0.5 probability |
| Truth Not Discovered   | -10 effect x 0.5 probability | 0 effect x 0.5 probability |
| Potential Consequences | -22.5                        | 0                          |

 

As you can see, with a Eudaimonic Utilitarian criterion, even if the probability of the truth not being discovered were 1, the expected value would still be negative, and the act therefore morally wrong. Thus, whereas Classical Utilitarianism, based on subjective happiness, rests its case against Adultery on the probability of being caught and the potential negative consequences, Eudaimonic Utilitarianism makes the stronger case that Adultery is always wrong because, regardless of the probability of being caught, the consequences are inherently negative. It is therefore unnecessary to resort to traditional Preference Utilitarianism to capture our moral intuitions about Adultery.
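To make the contrast explicit, here is a small sketch using the same hypothetical payoffs as above, sweeping the probability of being caught. Under the classical payoffs the sign of the expected value depends on that probability; under the Eudaimonic payoffs, which count the wronged partner's -20 even in the undiscovered case, it stays negative at every probability.

```python
# Sketch: classical vs Eudaimonic expected value as p(caught) varies,
# using the same hypothetical payoffs as the tables above.

def expected_utility(p_caught, u_caught, u_not_caught):
    return p_caught * u_caught + (1 - p_caught) * u_not_caught

for p in (0.0, 0.1, 0.5, 0.9):
    classical = expected_utility(p, u_caught=-35, u_not_caught=+10)
    eudaimonic = expected_utility(p, u_caught=-35, u_not_caught=-10)
    print(f"p(caught)={p:.1f}  classical={classical:+.1f}  eudaimonic={eudaimonic:+.1f}")

# p(caught)=0.0  classical=+10.0  eudaimonic=-10.0
# p(caught)=0.1  classical=+5.5   eudaimonic=-12.5
# p(caught)=0.5  classical=-12.5  eudaimonic=-22.5
# p(caught)=0.9  classical=-30.5  eudaimonic=-32.5
```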

 

Consider another scenario. You are planning a surprise birthday party for your friend, and she asks you what you are doing. You can either tell the truth or lie. Classical Utilitarianism would say to lie because the happiness of the surprise birthday party outweighs the happiness of being told the truth. Preference Utilitarianism however would argue that it is rational for the friend to want to know the truth and not have her friends lie to her generally, that this would be her “true” preference. Thus, Preference Utilitarianism would argue in favour of telling the truth and spoiling the surprise. The happiness that the surprise would cause does not factor into Preference Utilitarianism at all, and the friend has no prior preference for a surprise party she doesn’t even know about.

 

What does Eudaimonic Utilitarianism say? Well, if the friend really knew everything that was going on, would she be happier and prefer to know the truth in this situation, or be happier and prefer not to know? I would suggest she would be happier and prefer not to know, in which case Eudaimonic Utilitarianism agrees with Classical Utilitarianism and says we should lie to protect the secret of the surprise birthday party.

 

Again, what's the difference between eudaimonia and preference-fulfillment? Basically, preference-fulfillment is based on people's subjective preferences, while Eudaimonia is based on objective well-being, or as I like to explain, the happiness they would feel if they had perfect information.

 

The difference is somewhat subtle to the extent that a person's "true" preferences are supposed to be “the preferences he would have if he had all the relevant factual information, always reasoned with the greatest possible care, and were in a state of mind most conducive to rational choice.” (Harsanyi 1982) Note that relevant factual information is not the same thing as perfect information.

 

For instance, take the classic criticism of Utilitarianism: the scenario where you hang an innocent man to satisfy the unruly mob’s desire for justice. Under both hedonistic and preference utilitarianism, hanging the innocent man can be justified, because it satisfies both the happiness and the preferences of the mob. However, hanging an innocent man does not satisfy the Eudaimonia of the mob, because if the people in the mob knew that the man was innocent and were truly rational, they would not want to hang him after all. Note that in this case they would only have this information under perfect information, as it is assumed that the man appears to all rational parties to be guilty even though he is actually innocent.

 

So, Eudaimonia assumes that in a hypothetical state of perfect information and rationality (that is to say objectivity), a person's happiness would best be satisfied by actions that might differ from what they might prefer in their normal subjective state, and that we should commit to the actions that satisfy this objective happiness (or well-being), rather than satisfy subjective happiness or subjective preferences.

 

For instance, we can take the example from John Rawls of the grass-counter. "Imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns of Harvard." Under both hedonistic and preference utilitarianism, this would be acceptable. However, a Eudaimonic interpretation would argue that counting blades of grass would not maximize her objective happiness, that there is an objective state of being that would actually make her happier, even if it went against her personal preferences, and that this state of being is what should be maximized. Similarly, consider the rational philosopher who has come to the conclusion that life is meaningless and not worth living and therefore develops a preference to commit suicide. This would be his "true" preference, but it would not maximize his Eudaimonia. For this reason, we should try to persuade the suicidal philosopher not to commit suicide, rather than helping him do so.

 

How does Eudaimonia compare with Eliezer Yudkowsky’s concept of Coherent Extrapolated Volition (CEV)? Like Eudaimonia, CEV is based on what an idealized version of us would want "if we knew more, thought faster, were more the people we wished we were, had grown up farther together". This is similar to, but not the same thing as, an idealized version of us with perfect information and perfect rationality. Arguably Eudaimonia is an extreme form of CEV that pushes these idealizations to their limit.

 

Furthermore, CEV assumes that the desires of humanity converge. The concept of Eudaimonia does not require this. The Eudaimonia of different sentient beings may well conflict, in which case Eudaimonic Utilitarianism takes the Utilitarian route and suggests the compromise of maximizing Eudaimonia for the greatest number of sentient beings, with a hierarchical preference for more conscious beings such as humans, over say ants. This is not to say that humans are necessarily absolute utility monsters to the ants. One could instead set it up so that the humans are much more heavily weighted in the moral calculus by their level of consciousness. Though that could conceivably lead to the situation where a billion ants might be more heavily weighted than a single human. If such a notion is anathema to you, then perhaps making humans absolute utility monsters may be reasonable to you after all. However, keep in mind that the same argument can be made that a superintelligent A.I. is a utility monster to humans. The idea that seven billion humans might outweigh one superintelligent A.I. in the moral calculus may not be such a bad idea.
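As a toy illustration of the weighting idea (the counts, weights, and per-being scores below are invented purely for this example, not prescribed by the theory), a finite consciousness weighting, as opposed to absolute utility-monster status, is exactly what allows a sufficiently large number of lesser minds to rival a single greater one:

```python
# Hypothetical sketch of consciousness-weighted aggregation. The counts,
# weights, and per-being eudaimonia scores are invented for illustration only.

def total_weighted_eudaimonia(groups):
    """groups: iterable of (count, consciousness_weight, eudaimonia_per_being)."""
    return sum(count * weight * eudaimonia for count, weight, eudaimonia in groups)

one_human    = [(1, 1_000_000.0, 1.0)]          # a single human, heavily weighted
billion_ants = [(1_000_000_000, 0.001, 1.0)]    # a billion ants, each weighted very little

print(total_weighted_eudaimonia(one_human))     # 1000000.0
print(total_weighted_eudaimonia(billion_ants))  # 1000000.0 -> the ants match the human
```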

 

In any case, Eudaimonic Utilitarianism does away with many of the unintuitive weaknesses of both Classical Hedonistic Utilitarianism and Preference Utilitarianism. It validates our intuitions about the importance of authenticity and rationality in moral behaviour. It also attempts to unify morality and rationality. Though it is not without its issues, not the least of which is that it incorporates a very simplified view of human values, I nevertheless offer it as an alternative to other existing forms of Utilitarianism for your consideration.

35 comments

How does EU resolve a toy problem like Jews vs Nazis? Or, in a more realistic example, hiring an African-American/a woman/a gay person to work in a racist/misogynistic/homophobic work environment? Presumably it would fight the hypothetical and state that "if the Nazis were objectively rational they would not hate Jews"?

It is not necessary, for Nazis hating Jews to be rational, that there are reasons for hating Jews, only that the reasons for not hating Jews do not outweigh the reasons for hating Jews. But their reasons for hating Jews are either self-contradictory or in fact support not hating Jews when properly worked out.

I can see the idea of fighting the hypothetical and arguing that the Nazis' hatred of the Jews isn't rational, and that in a state of perfect information they would think differently. At the very least they would need some kind of rational reason to hate the Jews. The scenario seems slightly different if the Jews are responsible for harming the goals of the Nazis in some way. For instance, if the Jews, I dunno, consumed disproportionate amounts of food in relation to their size and thus posed a threat to the Nazis in terms of causing worldwide famine. Even then, maximizing EU would probably involve some weird solution like limiting the food intake of the Jews, rather than outright killing them or putting them in concentration camps.

Another way of going at it would be to argue that killing the Jews would have a disproportionate negative effect on their Eudaimonia that cannot realistically be offset by the Nazis feeling better about a world without Jews. Though this may not hold if the number of Jews is sufficiently small, and the number of Nazis sufficiently large. For instance, 1 Jew vs. 1 billion Nazis.

To be honest this is something of a problem for all forms of Utilitarianism, and I don't think EU actually solves it. EU fixes some issues people have about Classical and Preference Utilitarianism, but it doesn't actually solve big ones like the various Repugnant Conclusions. Alas, that seems to be a problem with any form of Utilitarianism that accepts the "Greatest Number" maximization principle.

I'm not sure what exactly you're asking about the "hiring an African-American/a woman/a gay person to work in a racist/misogynistic/homophobic work environment". Can you clarify this example?

Another way of going at it would be to argue that killing the Jews would have a disproportionate negative effect on their Eudaimonia that cannot realistically be offset by the Nazis feeling better about a world without Jews.

Would it be OK to make 1000 Nazis feel better by making 1 Jew feel worse?

I'm not sure what exactly you're asking about the "hiring an African-American/a woman/a gay person to work in a racist/misogynistic/homophobic work environment". Can you clarify this example?

Same as the original example: e.g. 1000 (bigoted) men get pissed off every day seeing a woman doing a man's job.

it doesn't actually solve big ones like the various Repugnant Conclusions

Right, that's the main issue. If you cannot solve it, an AGI implementing the algorithm will inevitably run into one.

I suppose an interesting way of attacking this problem would be to argue that while the magnitude of 1 million Jews is significant, the magnitude of 1 Jew is not. What I mean by this is that the degree to which the 1 million Nazis will benefit from the extermination of the Jews is actually proportional to the number of Jews that exist. This amount of Eudaimonic benefit, then, will never exceed the loss of Eudaimonia that occurs when the Jews are exterminated or interned.

Making 1 Jew feel worse is a much smaller effect than making 1000 Jews feel worse. Thus, making 1 Jew feel worse has a much smaller effect on each of the 1000 Nazis than the effect of making 1000 Jews feel worse. The net effect of making 1 Jew feel worse and 1000 Nazis feel better is actually the same as making 1 Jew feel worse to make 1 Nazi feel better, or 1 million Jews feel worse to make 1 million Nazis feel better. This assumes that the Nazis' hatred and pleasure from seeing their enemy suffer is not simply additive, but proportional to the size and scope of their "enemy".

Thus the question really boils down to: is it alright to make one person feel bad in order to make one person feel good? If one takes the stance that equivalent pain is worse than equivalent pleasure, then the answer is no. To reach this stance, one need only assert the validity of this thought experiment:

Would you endure one day of torture in exchange for one day of bliss? Most humans are biased to say no. Humans in general are more pain-averse than they are pleasure-seeking. Therefore, making the Jews feel bad to make the Nazis feel good is not justifiable.

I'm honestly not sure if this argument makes much sense, but I present it to you as something to consider.
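One way to make that proportionality assumption concrete is the following sketch. The constants are arbitrary and the model is only my reading of the argument above: if the persecutors' total pleasure scales with the number of victims harmed rather than with the number of persecutors, and a unit of suffering is weighted more heavily than a unit of pleasure, the net effect comes out negative at every scale.

```python
# A rough formalization of the argument above; the constants are arbitrary.
# Assumption 1: the persecutors' total pleasure scales with the number of
#   victims harmed, not with the number of persecutors.
# Assumption 2: a unit of suffering counts for more than a unit of pleasure.

PLEASURE_PER_VICTIM = 1.0   # total persecutor pleasure per victim harmed
SUFFERING_PER_VICTIM = 2.0  # harm per victim, weighted more heavily than pleasure

def net_effect(num_victims, num_persecutors):
    benefit = PLEASURE_PER_VICTIM * num_victims   # independent of num_persecutors by assumption
    harm = SUFFERING_PER_VICTIM * num_victims
    return benefit - harm

print(net_effect(1, 1000))               # -1.0
print(net_effect(1_000_000, 1_000_000))  # -1000000.0: negative at every scale
```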

The problem here is that your math doesn't seem to take into account that humans are deeply scope insensitive and our emotional involvement is largely a function of 'distance' (space, time, appearance, etc).

(Note: I am not a utilitarian and do not condone any illegal actions. This is a hypothetical to better express my point.)

A public execution on national TV of a hated individual, or even just a member of a hated group, could potentially give tens or hundreds of millions of observers a half hour of high-quality entertainment. Even discounting the value of the positive memories and added social cohesion from the shared experience, the kind of money advertisers will pay for even a poorly rated cable show should prove that the execution is a net benefit.

A newspaper article or TV news report, even with grisly pictures and video, describing the deaths of hundreds of members of a hated group is less personal and thus less intense an experience. The same viewers might feel better in an abstract sense, maybe even enough to break even, but it's probably not a particularly useful expenditure. Even if the number of viewers increased a hundred-fold to match the victims, it's still going to be a worse deal in $$$ per utilons.

Killing millions in a short time-frame for entertainment value is just realistically not going to be an efficient use of time and energy, even if trillions or quadrillions of human viewers were watching. A million really is a statistic, and it's unreasonable to expect anyone to care more about a million deaths than a half dozen.

I think you got mired in the goop of this "impartial observer" stuff, and this led you to be a bit careless in other places, like the Adultery example. But other than that, I like the post.

"A common criticism of Classical Utilitarianism is that it doesn’t condemn acts like Adultery because at first glance, an act like Adultery seems like it would increase net happiness and >>therefore be condoned.<< "

Confusing wording. At first glance, it reads as though xyz is an action that increases net happiness and should therefore be condoned.

But what if getting caught is very unlikely? Well, we can show that to a true agnostic at least, the probability of getting caught would be at least 0.5, because if we assume total ignorance, the probability that God and/or an afterlife exist would be a uniform distribution, as suggested by the Principle of Indifference and the Principle of Maximum Entropy.

Isn't this just the original Pascal's Wager fallacy?

If you don't know anything (true agnosticism), the probability of most propositions isn't 50%, because there are many mutually exclusive propositions. A universal agnostic prior gives any proposition as complex as "there's a God who punishes adultery in an afterlife" a vanishingly tiny probability, not worth even raising to the level of conscious notice. Also, each proposition tends to be balanced by an opposite proposition: that there is a God who rewards adultery in an afterlife.

I will admit that it does sound a lot like the Pascal's Wager fallacy, but I do think there is a slight difference. Pascal's Wager makes a very specific proposition, while the proposition I am making is actually very general.

See, it's not that a God needs to exist that punishes the adultery. The only requirement is that there is either a God that knows everything and would therefore be able to tell the partner that the adultery occurred (it doesn't matter if this God rewards, punishes, or is indifferent), or an afterlife in which people exist after death and are able to observe the world and tell the partner that the adultery occurred. Basically there just has to exist some way for the partner to find out after death that the adultery occurred. This doesn't have to be a God AND afterlife. Note that I said AND/OR rather than AND. The manner in which the adultery is discovered doesn't really matter. Pascal's Wager on the other hand depends on a very specific God and a very specific set of propositions being true.

That each proposition tends to be balanced by an opposite proposition actually supports, to me, the notion that everything evens out to around 50%, assuming that each of these propositions is conditionally independent. At least, that's my understanding. I will admit that I am not a master of probability theory, and that you could be quite right. That is sort of why in the next paragraph after this one I assume an atheistic view and look at the consequences of that.

Eliezer Yudkowsky on several occasions has used the term “Eudaimonia” to describe an objectively desirable state of existence.

Can there be such a thing? Doesn't desirability depend on the goals of whoever is asking? Basically, why would an "impartial observer" have a specific set of goals? I think you need an axiom for what your observer has to look out for in order to get your theory off the ground.

There's a book called Brave New World by Aldous Huxley, where this drug called Soma is used to sedate the entire population, making them docile and dependent and very, very happy.

As long as those beings never experience a subjective state they want to change or get out of while they are in it, why would it be morally urgent to change anything about their situation? If the only way to turn the situation described in BNW (ignoring the savages) into an eudaimonic utopia was by torturing the whole population for a couple of years first, would you think it was the moral thing to do?

I'd say that the thing that sets utilitarianism apart from other forms of consequentialism is that it is altruistic, i.e. something one wouldn't have reasonable grounds for objection towards, judging from behind the veil of ignorance. But if I have to risk torture for having happiness that is "informed", I don't think it would be worth it. As long as my momentary selves are always perfectly content with their experiential states, I see no reason to object to anything, and most importantly, I think it would be immoral to increase my chances of ending up suffering by forcing something onto me that someone else considers to be "good" only because of their personal preferences.

It might bother you to imagine the BNW scenario if you look at it from the outside, but the thing is, no one in BNW is bothered by it. And I personally am not even bothered by it from the outside, and I don't see how trying to think more like an impartial observer would make me more bothered.

Technicality: A number of alphas, such as Helmholtz, are in fact bothered by the obligation to be "emotionally infantile." They mostly end up in exile on various islands, where they can lead a more eudaimonic existence without endangering the happiness of society at large.

Hardly a technicality. Entire point of the novel.

It might bother you to imagine the BNW scenario if you look at it from the outside, but the thing is, no one in BNW is bothered by it.

Wireheaders aren't bothered either. Is this an argument in favour of forcibly wireheading the entire population, surgically stunting their brains in the (artificial) womb to do so?

Yes, but it wouldn't be the only scenario leading to an outcome no consciousness moment could object to. (And for strategic reasons, moral uncertainty reasons and opportunity costs reasons of preventing suffering elsewhere, this conclusion would likely remain hypothetical.)

Note that not wireheading would be forcing future consciousness moments to undergo suffering. We think that this is justified because present-me has "special authority" over future-mes, but I for one think that there's nothing ethically relevant about our sense of personal identity.

Note that not wireheading would be forcing future consciousness moments to undergo suffering. We think that this is justified because present-me has "special authority" over future-mes,

Speak for yourself. I think that not wireheading is justified because wireheading is equivalent to being dead. It's a way of committing suicide, only to be considered as a last resort in the face of unendurable and incurable suffering, and even then I'd rather be actually dead.

I'd only consider wireheading equivalent to death from an outsider's perspective. It's interesting that you'd treat converting a suicidal person into someone modded to happiness as worse than a suicidal person ceasing to exist; I can think of a few possible reasons for that, but the only one that appeals to me personally is that a wireheaded person would be a resource drain. (The others strike me as avoidable given a sufficiently patient wireheader overlord, rather than the "just pump them full of orgasmium" overlord).

Wireheading is a snare and a delusion. An ethical theory that would wirehead the entire population as the ultimate good has fallen at the first hurdle.

Wireheaders aren't bothered either. Is this an argument in favour of forcibly wireheading the entire population, surgically stunting their brains in the (artificial) womb to do so?

Yes

Consider me suitably appalled. Perhaps you can expand on that. How much surgically applied brain-damage do you consider leaves enough of a person for the wireheading to be justified in your eyes?

I don't understand why this would matter, any level of brain damage seems equally fine to me as long as the conscious experience stays the same. I think the difference in our values stems from me only caring about (specific) conscious experience, and not about personhood or other qualities associated with it.

However, I'm not a classical utilitarian, I don't believe it is important to fill the universe with intense happiness. I care primarily about reducing suffering, and wireheading would be one (very weird) way to do that. Another way would be Pearcean paradise engineering, and a third way would be through preventing new consciousness moments from coming into existence. The paradise engineering one seems to be the best starting point for compromising with people who have different values, but intrinsically, I don't have a preference for it.

any level of brain damage seems equally fine to me as long as the conscious experience stays the same.

What does that even mean? The lower castes in Brave New World are brain-damaged precisely so that their conscious experience will not be the same. A Delta has just enough mental capacity to be an elevator attendant.

However, I'm not a classical utilitarian, I don't believe it is important to fill the universe with intense happiness. I care primarily about reducing suffering

That is exactly what BNW does: blunting sensibility, by surgery, conditioning, and drugs, to replace all suffering by bland contentment.

and wireheading would be one (very weird) way to do that. Another way would be Pearcean paradise engineering

My reading of that maze of links is that Pearcean paradise engineering is wireheading. It makes a nod here and there to "fulfilling our second-order desires for who and what we want to become", but who and what Pearce wants us to become turns out to be just creatures living in permanent bliss by means of fantasy technologies. What these people will actually be doing with their lives is not discussed.

I didn't explore the whole thing, but I didn't notice any evidence of anyone doing anything in the present day to achieve this empty vision other than talk about it. I guess I'm safe from the wireheading police for now.

and a third way would be through preventing new consciousness moments from coming into existence.

Kill every living creature, in other words.

The paradise engineering one seems to be the best starting point for compromising with people who have different values, but intrinsically, I don't have a preference for it.

But presumably, you do have a preference for those options collectively? Stunt everyone into contentment, wirehead them into bliss, or kill them all? But in another comment you say:

My terminal value is about doing something that is coherent/meaningful/altruistic

There doesn't seem to be any scope for that in the Pearcian scenario, unless your idea of what would be coherent/meaningful/altruistic to do is just to bring it about. But after Paradise, what?

Any opinion on this alternative?

Does "preventing new consciousness moments from coming into existence" encompass, for example, adjusting the oxygen content in the air (or the cyanide content in the water, or whatever) so that currently living brains stop generating consciousness moments?

I assume your answer is "no" but I'm curious as to why not.

It does encompass that.

I used "preventing" because my view implies that there's no ethically relevant difference between killing a being and preventing a new being from coming into existence. I think personal identity is no ontologically basic concept and I don't care terminally about human evolved intuitions towards it. Each consciousness moment is an entity for which things can go well or not, and I think things go well if there is no suffering, i.e. no desire to change something about the experiential content. It's very similar to the Buddhist view on suffering, I think.

Going to the other extreme to maximize happiness in the universe seems way more counterintuitive to me, especially if that would imply that sources of suffering get neglected because of opportunity costs.

Ah, OK. That's consistent.

I won't get into whether killing everyone in order to maximize value is more or less counterintuitive than potentially accruing opportunity costs in the process of maximizing happiness, because it seems clear that we have different intuitions about what is valuable.

But on your view, why bother with wireheading? Surely it's more efficient to just kill everyone, thereby preventing new consciousness moments from coming into existence, thereby eliminating suffering, which is what you value. That is, if it takes a week to wirehead P people, but only a day to kill them, and a given consciousness-day will typically involve S units of suffering, that's 6PS suffering eliminated (net) by killing them instead of wireheading them.

The advantage is greater if we compare our confidence that wireheaded people will never suffer (e.g. due to power shortages) to our confidence that dead people will never suffer (e.g.. due to an afterlife).

Sure. I responded to this post originally not because I think wireheading is something I want to be done, but rather because I wanted to voice the position of it being fine in theory.

I also take moral disagreement seriously, even though I basically agree with EY's meta-ethics. My terminal value is about doing something that is coherent/meaningful/altruistic, and I might be wrong about what this implies. I have a very low credence in views that want to increase the amount of sentience, but for these views, much more is at stake.

In addition, I think avoiding zero-sum games and focusing on ways to cooperate likely leads to the best consequences. For instance, increasing the probability of a good (little suffering plus happiness in the ways people want it) future conditional on humanity surviving seems to be something lots of altruistically inclined people can agree on being positive and (potentially) highly important.

Ah, OK. Thanks for clarifying.

Sure, I certainly agree that if the only valuable thing is eliminating suffering, wireheading is fine... as is genocide, though genocide is preferable all else being equal.

I'm not quite sure what you mean by taking moral disagreement seriously, but I tentatively infer something like: you assign value to otherwise-valueless things that other people assign value to, within limits. (Yes? No?) If that's right, then sure, I can see where wireheading might be preferable to genocide, conditional on other people valuing not-being-genocided more than not-being-wireheaded.

Not quite, but something similar. I acknowledge that my views might be biased, so I assign some weight to the views of other people. Especially if they are well informed, rational, intelligent and trying to answer the same "ethical" questions I'm interested in.

So it's not that I have other people's values as a terminal value among others, but rather that my terminal value is some vague sense of doing something meaningful/altruistic where the exact goal isn't yet fixed. I have changed my views many times in the past after considering thought experiments and arguments about ethics and I want to keep changing my views in future circumstances that are sufficiently similar.

Let me echo that back to you to see if I get it.

We posit some set S1 of meaningful/altruistic acts.
You want to perform acts in S1.
Currently, the metric you use to determine whether an act is meaningful/altruistic is whether it reduces suffering or not. So there is some set (S2) of acts that reduce suffering, and your current belief is that S1 = S2.
For example, wireheading and genocide reduce suffering (i.e., are in S2), so it follows that wireheading and genocide are meaningful/altruistic acts (i.e., are in S1), so it follows that you want wireheading and genocide.

And when you say you take moral disagreement seriously, you mean that you take seriously the possibility that in thinking further about ethical questions and discussing them with well informed, rational, intelligent people, you might have some kind of insight that brings you to understand that in fact S1 != S2. At which point you would no longer want wireheading and genocide.

Did I get that right?

Yes, that sounds like it. Of course I have to specify what exactly I mean by "altruistic/meaningful", and as soon as I do this, the question whether S1=S2 might become very trivial, i.e. a deductive one-line proof. So I'm not completely sure whether the procedure I use makes sense, but it seems to be the only way to make sense of my past selves changing their ethical views. The alternative would be to look at each instance of changing my views as a failure of goal preservation, but that's not how I want to see it and not how it felt.

If the only way to turn the situation described in BNW (ignoring the savages) into an eudaimonic utopia was by torturing the whole population for a couple of years first, would you think it was the moral thing to do?

The Brave New World setting requires that the majority of the population undergo intentional, directed mental retardation and cultural disability, along with other matters.* So, yeah, even assuming massive discounts on future suffering and a very low value for the "intelligent diverse" part of "intelligent diverse lives". That the folk brain-damaged into slaves /like/ being slaves doesn't forgive the brain damage or the slavery.

* Huxley also sets the world up so that several basic and emotional drives are tabooed, vast amounts of labor are wasted making things by hand, and large portions of media space are off-limits for the normal folk, and implies that there's not really meaningful consent for sex. Soma's honestly the least of the problems for the BNW setting.

Three observations on the Adultery problem.

First of all, if you consider the possibility of getting caught, you are mixing pure Utilitarianism with a little bit of Consequentialism. This begs the question: why consider only the possibility of getting caught, that is, the utility of the first consequence of the act? To achieve Eudaimonia, shouldn't you consider all the consequences of your actions? What if getting caught has as a second consequence that the couple split and the wronged wife finds a better man?

Secondly, you cannot retreat to a 0.5 probability of getting caught. The probability of getting caught is, like anything else, dependent on the prior probability. In the case of no information it is 0.5, but what if you have strong reason to believe you won't get caught? There is no default value for a probability.

Thirdly (is that a word?), the numbers plugged into the matrix seem very ad hoc. What if the utility for the uncoupled member is 35 or 50?

Eudaimonic Utilitarianism, as presented here, shows that Adultery is immoral only because you plugged in the right numbers to make it so.

Um, I was under the impression that Utilitarianism is a subset of Consequentialism.

For simplicity's sake I only considered the first consequences because it is very difficult to be certain that second consequences such as "the couple split, and the wronged wife finds a better man" will actually occur. Obviously if you can somehow reliably compute those possibilities, then by all means.

The 0.5 is based on the Principle of Indifference (also known as the Principle of Insufficient Reason) and the Principle of Maximum Entropy. It may not be proper, but these principles at least suggest a default probability given high uncertainty. I admit that they are very preliminary efforts, and that there may be a better prior. For the most part, I'm just trying to show the difference between Classical Utilitarianism which might in some circumstances allow for Adultery being moral, and Eudaimonic Utilitarianism, which shows it is generally not moral.

Thirdly is a word yes.

The numbers plugged into the matrix are based on my own intuitions of the relative effects of Adultery. The absolute values are not really important; it's the relative difference between the effects that matters. I think anyone will agree that the fleeting pleasures of an affair are not greater in value than the fallout of losing a partner. I assumed that the fallout was four times worse than the affair was good. Admittedly this is an ad hoc assumption and can be argued. But whatever numbers you plug into the equations, as long as the damage from the fallout is at least twice as bad as the pleasure of the affair (which I think is a fair assumption given the long-term nature of the fallout compared to the fleeting nature of the affair), it always comes out as morally wrong in Eudaimonic Utilitarianism.
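A quick parametric check of that claim, as a sketch reusing the payoff structure from the tables in the post. Here a is the per-person pleasure of the affair and d the per-person damage of the fallout; both symbols are introduced only for this illustration.

```python
# Sketch: sweep the ratio of fallout damage (d) to affair pleasure (a) and take
# the best case over probabilities of being caught. Payoff structure follows
# the Eudaimonic table in the post:
#   not caught: +a +a -d   (the wronged partner's loss counts even undiscovered)
#   caught:     +a -d -d

def eudaimonic_value(p_caught, a, d):
    return p_caught * (a - 2 * d) + (1 - p_caught) * (2 * a - d)

a = 5.0
for ratio in (1.0, 1.5, 2.0, 4.0):
    d = ratio * a
    best = max(eudaimonic_value(p, a, d) for p in (0.0, 0.25, 0.5, 0.75, 1.0))
    print(f"d/a = {ratio}: best-case value {best:+.1f}")

# d/a = 1.0: best-case value +5.0   (can still come out positive)
# d/a = 1.5: best-case value +2.5
# d/a = 2.0: best-case value +0.0   (never positive once d >= 2a)
# d/a = 4.0: best-case value -10.0
```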

Moral theories of this sort need an all-knowing source of perfect judgement, else it can't resolve the following case:

Rational Agent A, surrounded by rational Collective-B, knows to the best of their ability that they can achieve Areté through life course alpha. Collective-B disagrees, knowing Agent A can only achieve Areté through life course beta.

Assuming each party reasons from an equivalent depth of information, Eudaimonic Utilitarianism cannot resolve the conflict without an Omega's help.

Areté itself is a subjective indicator without an Omega, though I admit seems a nice metric 'twere an Omega present. On second thought, even with an Omega, any incongruence between the agent and Omega's value function leads not to eudaimonic fulfilment of the agent, as Omega, under your proposed theory, needs not account for the agent's preferences lest unfulfilled preferences preclude achieving Areté.

Whatever the value function of a moral theory requisite an Omega, I see not how any agent with less knowledge and reasoning power than Omega could reconcile their theory with the territory. Hence CEV's presumed super-AGI.

Admittedly the main challenge of Eudaimonic Utilitarianism is probably the difficulty of calculating a utility function that asks what a perfectly rational version of the agent with perfect information would do. Given that we usually only know from behaviour what an agent with bounded rationality would want, it is difficult to extrapolate without an Omega. That being said, even a rough approximation based on what is generally known about rational agents, and on as much information as can reasonably be mustered, is probably better than not trying at all.

If anything it is a strong imperative to gather as much information as possible (to get as close to perfect information as you can) before making decisions. So EU would probably support Rational Agent A and Collective-B pooling their information and together gathering more information and trying to come to some consensus about alpha vs beta by trying to approximate perfect information and perfect rationality as closely as they can.

It is assumed in this theory that intrinsic values would be congruent enough for the agent and the Omega to agree at the high level of abstraction of what the agent would want were it given all the information and rationality that the Omega has. Of course, the agent without this information may find what the Omega does to help it achieve Eudaimonia to be strange and unintuitive, but that would be due to its lack of awareness of what the Omega knows. Admittedly this can lead to some rather paternalistic arrangements, but assuming that the Omega is benevolent, this shouldn't be too bad an arrangement for the agent.

My apologies if I'm misunderstanding what you mean by Omega.

If anything it is a strong imperative to gather as much information as possible (to get as close to perfect information as you can) before making decisions.

This is an imperative for any rational agent insofar as the situation warrants. To assist in this process, philosophers develop decision theories. Decision theories are designed to assist an agent in processing information and deciding a course of action in furtherance of the agent's values; they do not assist in determining what is worth valuing. Theories of proper moral conduct fill this gap.

So EU would probably support Rational Agent A and Collective-B pooling their information and together gathering more information and trying to come to some consensus about alpha vs beta by trying to approximate perfect information and perfect rationality as closely as they can.

That does indeed seem like an intermediary course of action designed to further the values of both Collective-B and Agent A. This still feels unsatisfactory, but as I cannot reason why, I must conclude I have a true rejection somewhere I can't find at the moment. I was going to point out that the above scenario doesn't reflect human behaviour, but there's no need: it demonstrates the moral ideal to which we should strive.

Perhaps I object to the coining, as it seems a formalisation of what many do anyway, yet that's no reason to - Aha!

My true rejection lies in your theory's potential for being abused. Were one to claim they knew better than any other what would achieve others' Areté, they could justify behaviour that in fact infringes upon others' quest for Areté; they could falsely assume the role of Omega.

In the counter case of Preference Utilitarianism, one must account for the Preferences of others in their own utility calculation. Though it has the same pitfall, wherein one claims the 'true' preference of others differs from their 'manifest' preference.

The difference lies in each theory's foundations. Preference utilitarianism is founded upon the empathic understanding that others pursuing their value function makes them, and thus those around them, more fulfilled. In your theory, one can always claim, "If you were only more rational, you would see I am in the right on this. Trust me."

One becoming an evil overlord would also constitute a moral good in your theory, if their net capacity for achievement supersedes that of those whom they prey upon. I make no judgement on this.

Honestly though, I'm nitpicking by this point. Quite clearly written (setting aside the Adultery calculation), this, and good on you for essaying to incorporate eudaimonia into a coherent moral theory.