Most people believe the way to lose weight is through willpower.  My experience of successfully losing weight suggests this is not the case.  You will lose weight if you want to, meaning you effectively believe[0] that the utility you will gain from losing weight, even time-discounted, will outweigh the utility from yummy food now.  In LW terms, you will lose weight if your utility function tells you to.  This is the basis of cognitive behavioral therapy (the effective kind of therapy), which tries to change people's behavior by examining their beliefs and changing their thinking habits.

Similarly, most people believe behaving ethically is a matter of willpower; and I believe this even less.  Your ethics is part of your utility function.  Acting morally is, technically, a choice; but not the difficult kind that holds up a stop sign and says "Choose wisely!"  We notice difficult moral choices more than easy moral choices; but most moral choices are easy, like choosing a ten dollar bill over a five.  Immorality is not a continual temptation we must resist; it's just a kind of stupidity.

This post can be summarized as:

  1. Each normal human has an instinctive personal morality.
  2. This morality consists of inputs into that human's decision-making system.  There is no need to propose separate moral and selfish decision-making systems.
  3. Acknowledging that all decisions are made by a single decision-making system, and that the moral elements enter it in the same manner as other preferences, results in many changes to how we encourage social behavior.

Many people have commented that humans don't make decisions based on utility functions.  This is a surprising attitude to find on LessWrong, given that Eliezer has often cast rationality and moral reasoning in terms of computing expected utility.  It also demonstrates a misunderstanding of what utility functions are.  Values, and utility functions, are models we construct to explain why we do what we do.  You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior.  You can fit this model to the data arbitrarily well by adding parameters.  It will always have some error, as you are running on stochastic hardware.  Behavior is not a product of the utility function; the utility function is a product of (and predictor of) the behavior.  If your behavior can't be modelled with values and a utility function, you shouldn't bother reading LessWrong, because "being less wrong" means behaving in a way that is closer to the predictions of some model of rationality.  If you are a mysterious black box with inscrutable motives that makes unpredictable actions, no one can say you are "wrong" about anything.

If you still insist that I shouldn't talk about utility functions, though - it doesn't matter!  This post is about morality, not about utility functions.  I use utility functions just as a way of saying "what you want to do".  Substitute your own model of behavior.  The bottom line here is that moral behavior is not a qualitatively separate type of behavior and does not require a separate model of behavior.
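To make the "utility function as fitted model" point concrete, here is a toy sketch of my own (the feature names, data, and update rule are all hypothetical, not anything from the post): given a record of observed choices, we can tune the weights of a linear utility function until it reproduces those choices after the fact.

```python
# Toy illustration: a utility function *fitted to* observed behavior,
# rather than a mechanism that produced the behavior.
# Features, data, and learning rate are all made up for this sketch.

def utility(option, weights):
    """Linear utility: weighted sum of an option's features."""
    return sum(weights[f] * v for f, v in option.items())

# Observed choices: in each pair, the agent picked the first option.
observed = [
    ({"tasty": 1.0, "healthy": 0.2}, {"tasty": 0.3, "healthy": 0.9}),
    ({"tasty": 0.8, "healthy": 0.1}, {"tasty": 0.2, "healthy": 0.7}),
]

# Nudge the weights until the model agrees with every observed choice.
weights = {"tasty": 0.0, "healthy": 0.0}
for _ in range(100):
    for chosen, rejected in observed:
        if utility(chosen, weights) <= utility(rejected, weights):
            for f in weights:
                weights[f] += 0.1 * (chosen.get(f, 0) - rejected.get(f, 0))

# The fitted weights now "explain" the behavior, whatever actually caused it.
assert all(utility(c, weights) > utility(r, weights) for c, r in observed)
```

The point of the sketch is the direction of fit: the weights are a product of the behavior, and adding more features (parameters) lets the model match the data arbitrarily well.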

My view isn't new.  It derives from ancient Greek ethics, Nietzsche, Ayn Rand, B.F. Skinner, and comments on LessWrong.  I thought it was the dominant view on LW, but the comments and votes indicate it is held at best by a weak majority.

Relevant EY posts include "What would you do without morality?", "The gift we give to tomorrow", "Changing your meta-ethics", and "The meaning of right"; and particularly the statement, "Maybe that which you would do even if there were no morality, is your morality."  I was surprised that no comments mentioned any of the many points of contact between this post and Eliezer's longest sequence.  (Did anyone even read the entire meta-ethics sequence?)  The view I'm presenting is, as far as I can tell, the same as that given in EY's meta-ethics sequence up through "The meaning of right"[1]; but I am talking about what it is that people are doing when they act in a way we recognize as ethical, whereas Eliezer was talking about where people get their notions of what is ethical.

Ethics as willpower

Society's main story is that behaving morally means constantly making tough decisions and doing things you don't want to do.  You have desires; other people have other desires; and ethics is a referee that helps us mutually satisfy these desires, or at least not kill each other.  There is one true ethics; society tries to discover and encode it; and the moral choice is to follow that code.

This story has implications that usually go together:

  • Ethics is about when peoples' desires conflict.  Thus, ethics is only concerned with interpersonal relations.
  • There is a single, Platonic, correct ethical system for a given X.  (X used to be a social class but not a context or society.  Nowadays it can be a society or context but not a social class.)
  • Your desires and feelings are anti-correlated with ethical behavior.  Humans are naturally unethical.  Being ethical is a continual, lifelong struggle.
  • The main purpose of ethics is to stop people from doing what they naturally want to do, so "thou shalt not" is more important than "thou shalt".
  • The key to being ethical is having the willpower not to follow your own utility function.
  • Social ethics are encouraged by teaching people to "be good", where "good" is the whole social ethical code.  Sometimes this is done without explaining what "good" is, since it is considered obvious, or perhaps more convenient to the priesthood to leave it unspecified. (Read the Koran for an extreme example.)
  • The key contrast is between "good" people who will do the moral thing, and "evil" people who do just the opposite.
  • Turning an evil person into a good person can be done by reasoning with them, teaching them willpower, or convincing them they will be punished for being evil.
  • Ethical judgements are different from utility judgements.  Utility is a tool of reason, and reason only tells you how to get what you want, whereas ethics tells you what you ought to want.  Therefore utilitarians are unethical.
  • Human society requires spiritual guidance and physical force to stop people from using reason to seek their own utility.
    • Religion is necessary even if it is false.
    • Reason must be strictly subordinated to spiritual authority.
    • Smart people are less moral than dumb people, because reason maximizes personal utility.
  • Since ethics are desirable, and yet contrary to human reason, they prove that human values transcend logic, biology, and the material world, and derive from a spiritual plane of existence.
  • If there is no God, and no spiritual world, then there is no such thing as good.
    • Sartre: "There can no longer be any good a priori, since there is no infinite and perfect consciousness to think it."
  • A person's ethicality is a single dimension, determined by the degree to which a person has willpower and subordinates their utility to social utility.  Each person has a level of ethicality that is the same in all domains.  You can be a good person, an evil person, or somewhere in between - but that's it.  You should not expect someone who cheats at cards to be courageous in battle, unless they really enjoy battle.

People do choose whether to follow the ethics society promulgates.  And they must weigh their personal satisfaction against the satisfaction of others; and those weights are probably relatively constant across domains for a given person.  So there is some truth in the standard view.  I want to point out errors; but I mostly want to change the focus.  The standard view focuses on a person struggling to implement an ethical system, and obliterates distinctions between the ethics of that person, the ethics of society, and "true" ethics (whatever they may be).  I will call these "personal ethics", "social ethics", and "normative ethics" (although the last encompasses all of the usual meaning of "ethics", including meta-ethics).  I want to increase the emphasis on personal ethics, or ethical intuitions.  Mostly just to insist that they exist.  (A surprising number of people simultaneously claim to have strong moral feelings, and that people naturally have no moral feelings.)

The conventional story denies these first two exist:  Ethics is what is good; society tries to figure out what is good; and a person is more or less ethical to the degree that they act in accordance with ethics.

The chief error of the standard view is that it explains ethics as a war between the physical and the spiritual.  If a person is struggling between doing the "selfish" thing and the "right" thing, that proves that they want both about equally.  The standard view instead supposes that they have a physical nature that wants only the "selfish" thing, and some internal or external spiritual force pulling them towards the "right" thing.  It thus hinders people from thinking about ethical problems as trade-offs, because the model never shows two "moral" desires in conflict except in "paradoxes" such as the trolley problem.  It also prevents people from recognizing cultures as moral systems--to really tick these people off, let's say morality-optimizing machines--in which different agents with different morals are necessary parts for the culture to work smoothly.

You could recast the standard view with the conscious mind taking the place of the spiritual nature, the subconscious mind taking the place of the physical nature, and willpower being the exertion of control over the subconscious by the conscious.  (Suggested by my misinterpretation of Matt's comment.)  But to use that to defend the "ethics as willpower" view, you assume that the subconscious usually wants to do immoral things, while the conscious mind is the source of morality.  And I have no evidence that my subconscious is less likely to propose moral actions than my conscious. My subconscious mind usually wants to be nice to people; and my conscious mind sometimes comes up with evil plans that my subconscious responds to with disgust.

... but being evil is harder than being good

At times, I've rationally convinced myself that I was being held back from my goals by my personal ethics, and I determined to act less ethically.  Sometimes I succeeded.  But more often, I did not.  Even when I did, I had to first build up a complex structure of rationalizations, and exert a lot of willpower to carry through.  I have never been able (or wanted) to say, "Now I will be evil" (by my personal ethics) and succeed.

If being good takes willpower, why does it take more willpower to be evil?

Ethics as innate

One theory that can explain why being evil is hard is Rousseau's theory that people are noble savages by birth, and would enact the true ethics if only their inclinations were not crushed by society.  But if you have friends who have raised their children by this theory, I probably need say no more. A fatal flaw in noble-savage theory is that Rousseau didn't know about evolution. Child-rearing is part of our evolutionary environment; so we should expect to have genetically evolved instincts and culturally evolved beliefs about child-rearing which are better than random, and we should expect things to go terribly wrong if we ignore these instincts and practices.

Ethics as taste

Try, instead, something between the extremes of saying that people are naturally evil, or naturally good.  Think of the intuitions underlying your personal morality as the same sort of thing as your personal taste in food, or maybe better, in art.  I find a picture with harmony and balance pleasing, and I find a conversation carried on in harmony and with a balance of speakers and views pleasing.  I find a story about someone overcoming adversity pleasing, as I find an instance of someone in real life overcoming adversity commendable.

Perhaps causality runs in the other direction; perhaps our artistic tastes are symbolic manifestations of our morals and other cognitive rules-of-thumb.  But I can think of many moral "tastes" I have which have no obvious artistic analog, which suggests that the moral tastes are the more fundamental.  I like making people smile; I don't like pictures of smiling people.

I don't mean to trivialize morality.  I just want people to admit that most humans often find pleasure in being nice to other humans, and usually feel pain on seeing other humans--at least those within the tribe--in pain.  Is this culturally conditioned?  If so, it's by culture predating any moral code on offer today.  Works of literature have always shown people showing some other people an unselfish compassion.  Sometimes that compassion can be explained by a social code, as with Wiglaf's loyalty to Beowulf.  Sometimes it can't, as with Gilgamesh's compassion for the old men who sit on the walls of Uruk, or Odysseus' compassion for Ajax.

Subjectively, we feel something different on seeing someone smile than we do on eating an ice-cream cone.  But it isn't obvious to me that "moral feels / selfish feels" is a natural dividing line.  I feel something different when saving a small child from injury than when making someone smile, and I feel something different when drinking Jack Daniels than when eating an ice-cream cone.

Computationally, there must be little difference between the way we treat moral, aesthetic, and sensual preferences, because none of them reliably trumps the others.  We seem to just sum them all up linearly.  If so, this is great, to a rationalist, because then rationality and morals are no longer separate magisteria.  We don't need separate models of rational behavior and moral behavior, and a way of resolving conflicts between them.  If you are using utility functions, you only need one model; values of all types go in, and a single utility comes out.  (If you aren't using utility functions, use whatever it is you use to predict human behavior.  The point is that you only need one of them.)  It's true that we have separate neural systems that respond to different classes of situation; but no one has ever protested against a utility-based theory of rationality by pointing out that there are separate neural systems responding to images and sounds, and so we must have separate image-values and sound-values and some way of resolving conflicts between image-utility and sound-utility.  The division of utility into moral values and all other values may even have a neural basis; but modelling that difference has, historically, caused much greater problems than it has solved.
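A minimal sketch of this "single decision system" claim (mine, not the author's; the values and labels are invented for illustration): moral, aesthetic, and sensual values all enter the same linear sum, with no separate machinery per category and no conflict-resolver between magisteria.

```python
# Toy model: all value types are just terms in one sum.  The category
# prefixes are labels for the reader; the arithmetic treats every value
# identically.  Numbers are arbitrary.

def total_utility(outcome):
    return sum(outcome.values())

take_all_pie = {"sensual: all the pie": 5.0, "moral: others go hungry": -8.0}
share_pie    = {"sensual: some pie": 2.0, "moral: others are fed": 4.0}

chosen = max([take_all_pie, share_pie], key=total_utility)
```

Under these (made-up) weights the moral term simply outweighs the sensual one, and sharing wins with no second system stepping in to overrule anything.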

The problem for this theory is:  If ethics is just preference, why do we prefer to be nice to each other?  The answer comes from evolutionary theory.  Exactly how it does this is controversial, but it is no longer a deep mystery.  One feasible answer is that reproductive success is proportional to inclusive fitness.[3]  It is important to know how much of our moral intuitions is innate, and how much is conditioned; but I have no strong opinion on this other than that it is probably some of each.

This theory has different implications than the standard story:

  • Behaving morally feels good.
  • Social morals are encouraged by creating conditions that bring personal morals into line with social morals.
  • A person can have personal morals similar to society's in one domain, and very different in another domain.
  • A person learns their personal morals when they are young.
  • Being smarter enables you to be more ethical.
  • A person will come to feel that an action is ethical if it leads to something pleasant shortly after doing it, and unethical if it leads to displeasure.
  • A person can extinguish a moral intuition by violating it many times without consequences - whether they do this of their own free will, or under duress.
  • It may be easier to learn to enjoy new ethical behaviors (thou shalts), than to dislike enjoyable behaviors (thou shalt nots).
  • The key contrast is between "good" people who want to do the moral thing, and "bad" people who are apathetic about it.
  • Turning a (socially) bad person into a good person is done one behavior at a time.
  • Society can reason about what ethics they would like to encourage under current conditions.

As I said, this is nothing new.  The standard story makes concessions to it, as social conservatives believe morals should be taught to children using behaviorist principles ("Spare the rod and spoil the child").  This is the theory of ethics endorsed by "Walden Two" and warned against by "A Clockwork Orange".  And it is the theory of ethics so badly abused by the former Soviet Union, among other tyrannical governments.  More on this, hopefully, in a later post.

Does that mean I can have all the pie?


Eliezer addressed something that sounds like the "ethics as taste" theory in his post "Is morality preference?", and rejected it.  However, the position he rejected was the straw-man position that acting to immediately gratify your desires is moral behavior.  (The position he ultimately promoted, in "The meaning of right", seems to be the same I am promoting here:  That we have ethical intuitions because we have evolved to compute actions as preferable that maximized our inclusive fitness.)

Maximizing expected utility is not done by greedily grabbing everything within reach that has utility to you.  You may rationally leave your money in a 401K for 30 years, even though you don't know what you're going to do with it in 30 years and you do know that you'd really like a Maserati right now.  Wanting the Maserati does not make buying the Maserati rational.  Similarly, wanting all of the pie does not make taking all of the pie moral.

More importantly, I would never want all of the pie.  It would make me unhappy to make other people go hungry.  But what about people who really do want all of the pie?  I could argue that they reason that taking all the pie would incur social penalties.  But that would result in morals that vanish when no one is looking.  And that's not the kind of morals normal people have.

Normal people don't calculate the penalties they will incur from taking all the pie.  Sociopaths do that.  Unlike the "ethics as willpower" theorists, I am not going to construct a theory of ethics that takes sociopaths as normal.[4]  They are diseased, and my theory of ethical behavior does not have to explain their behavior, any more than a theory of rationality has to explain the behavior of schizophrenics.  Now that we have a theory of evolution that can explain how altruism could evolve, we don't have to come up with a theory of ethics that assumes people are not altruistic.

Why would you want to change your utility function?

Many LWers will reason like this:  "I should never want to change my utility function.  Therefore, I have no interest in effective means of changing my tastes or my ethics."

Reasoning this way makes the distinction between ethics as willpower and ethics as taste less interesting.  In fact, it makes the study of ethics in general less interesting - there is little motivation other than to figure out what your ethics are, and to use ethics to manipulate others into optimizing your values.

You don't have to contemplate changing your utility function for this distinction to be somewhat interesting.  We are usually talking about society collectively deciding how to change each others' utility functions.  The standard LessWrongian view is compatible with this:  You assume that ethics is a social game in which you should act deceptively, trying to foist your utility function on other people while avoiding letting yours be changed.

But I think we can contemplate changing our utility functions.  The short answer is that you may choose to change your future utility function when doing so will have the counter-intuitive effect of better-fulfilling your current utility function (as some humans do in one ending of Eliezer's story about babyeating aliens).  This can usually be described as a group of people all conspiring to choose utility functions that collectively solve prisoners' dilemmas, or (as in the case just cited) as a rational response to a threatened cost that your current utility function is likely to trigger.  (You might model this as a pre-commitment, like one-boxing, rather than as changing your utility function.  The results should be the same.  Consciously trying to change your behavior via pre-commitment, however, may be more difficult, and may be interpreted by others as deception and punished.)

(There are several longer, more frequently-applicable answers; but they require a separate post.)

Fuzzies and utilons

Eliezer's post, Purchase fuzzies and utilons separately, on the surface appears to say that you should not try to optimize your utility function, but that you should instead satisfy two separate utility functions:  a selfish utility function, and an altruistic utility function.

But remember what a utility function is.  It's a way of adding up all your different preferences and coming up with a single number.  Coming up with a single number is important, so that all possible outcomes can be ordered.  That's what you need, and ordering is what numbers do.  Having two utility functions is like having no utility function at all, because you don't have an ordering of preferences.
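A toy sketch of the ordering point (my own illustration, not Eliezer's; the outcomes and weights are invented): two separate score functions can rank outcomes oppositely, so by themselves they give you no ordering at all; any fixed weighting that collapses them into a single number restores one.

```python
# Two score functions disagree about which outcome is best, so "two
# utility functions" leaves the outcomes unordered.  Scores are made up.

outcomes = {
    "donate":   {"selfish": 1, "altruistic": 9},
    "vacation": {"selfish": 8, "altruistic": 2},
}

best_selfish    = max(outcomes, key=lambda o: outcomes[o]["selfish"])
best_altruistic = max(outcomes, key=lambda o: outcomes[o]["altruistic"])
assert best_selfish != best_altruistic  # no ordering yet

# Any fixed weighting (these weights are arbitrary) yields one total order.
def combined(o, w_selfish=0.4, w_altruistic=0.6):
    scores = outcomes[o]
    return w_selfish * scores["selfish"] + w_altruistic * scores["altruistic"]

ranking = sorted(outcomes, key=combined, reverse=True)
```

The single number is doing exactly the job the post describes: it is what lets every pair of outcomes be compared.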

The "selfish utility function" and the "altruistic utility function" are different natural categories of human preferences.  Eliezer is getting indirectly at the fact that the altruistic utility function (which gives output in "fuzzies") is indexical.  That is, its values have the word "I" in them.  The altruistic utility function cares whether you help an old lady across the street, or some person you hired in Portland helps an old lady across the street.  If you aren't aware of this, you may say, "It is more cost-effective to hire boy scouts (who work for less than minimum wage) to help old ladies across the street and achieve my goal of old ladies having been helped across the street."  But your real utility function prefers that you helped them across the street; and so this doesn't work.


The old religious view of ethics as supernatural and contrary to human nature is dysfunctional and based on false assumptions.  Many religious people claim that evolutionary theory leads to the destruction of ethics, by teaching us that we are "just" animals.  But ironically, it is evolutionary theory that provides us with the understanding we need to build ethical societies.  Now that we have this explanation, the "ethics as taste" theory deserves to be evaluated again, and see if it isn't more sensible and more productive than the "ethics as willpower" theory.


[0]  I use the phrase "effectively believe" to mean both having a belief, and having habits of thought that cause you to also believe the logical consequences of that belief.

[1]  We have disagreements, such as the possibility of dividing values into terminal and instrumental, the relation of the values of the mind to the values of its organism, and whether having a value implies that propagating that value is also a value of yours (I say no).  But they don't come into play here.

[3]  For more details, see Eliezer's meta-ethics sequence.

[4]  Also, I do not take Gandhi as morally normal.  Not all brains develop as their genes planned; and we should expect as many humans to be pathologically good as are pathologically evil.  (A biographical comparison between Gandhi and Hitler shows a remarkable number of similarities.)

Comments

So, when I agonize over whether to torrent an expensive album instead of paying for it, and about half the time I end up torrenting it and feeling bad, and about half the time I pay for it but don't enjoy doing so ... what exactly am I doing in the latter case if not employing willpower?

I mean, I know willpower probably isn't a real thing on the deepest levels of the brain, but it's fake in the same way centrifugal force is fake, not in the way Bigfoot is fake. It sure feels like I'm using willpower when I make moral decisions about pirating, and I don't understand how your model above interprets that.

Granted, there are many other moral decisions I make that don't require willpower and do conform to your model above, and if I had to choose black-and-white between ethics-as-willpower or ethics-as-choice I'd take the latter, your model just doesn't seem complete.

My interpretation of the post in this case is: it's not that you're not employing willpower, instead you're not employing personal morality. So, while TORRENT vs BUY fits into the societal ethics view, it does not fit into your personal morality. From the personal morality perspective, the bad feeling you get is the thing you need willpower to fight against/suppress. You probably also need willpower to fight against/suppress the bad feeling you might be getting from buying the album. These need not be mutually exclusive. Personal morality can be both against torrenting and against spending money unduly.
Scott Alexander:
Let me rephrase my objection, then. I feel a certain sense of mental struggle when considering whether to torrent music. I don't feel this same sense of mental struggle when considering whether or not to murder or steal or cheat. Although both of these are situations that call on my personal morality, the torrenting situation seems to be an interesting special case. We need a word to define the way in which the torrenting situation is a special case and not just another case where I don't murder or steal or cheat because I'm not that kind of person. The majority of the English-speaking world seems to use "willpower". As far as I know there's no other definition of willpower, where we could say "Oh, that's real willpower, this torrenting thing is something else." If we didn't have the word "willpower", we'd have to make up a different word, like "conscious-alignment in mental struggle" or something. So why not use the word "willpower" here?
Suppose that you have one extra ticket to the Grand Galloping Gala, and you have several friends who each want it desperately. You can give it to only one of them. Doesn't the agonizing over that decision feel a lot like the agonizing over whether to buy or torrent? Yet we don't think of that as involving willpower.
Scott Alexander:
At the risk of totally reducing this to unsupportable subjective impressions: the two decisions wouldn't feel the same at all. I can think of some cases in which it would feel similar. If one of the ticket-seekers was my best friend whom I'd known forever, and another was a girl I was trying to impress, and I had to decide between loyalty to my best friend or personal gain from impressing the girl. Or if one of the ticket-seekers had an incurable disease and this was her last chance to enjoy herself, and the other was a much better friend and much more fun to be around. But both of these are, in some way, moral issues. In the simple ticket-seeker case without any of these complications, there would be a tough decision, but it would be symmetrical: there would be certain reasons for choosing Friend A, and certain others for choosing Friend B, and I could just decide between them. In the torrenting case, and the complicated ticket-seeker cases, it feels asymmetrical: like I have a better nature tending toward one side, and temptation tending toward the other side. This asymmetry seems to be the uniting factor behind my feeling of needing "willpower" for some decisions.
Mm. So, OK, to establish some context first: one (ridiculously oversimplified) way of modeling situations like this is to say that in both cases I have two valuation functions, F1 and F2, which give different results when comparing the expected value of the two choices, because they weight the relevant factors differently (for example, the relative merits of being aligned with my better nature and giving in to temptation, or the relative merits of choosing friend A and friend B), but in the first (simple) case the two functions are well-integrated and I can therefore easily calculate the weighted average of them, and in the second (complicated) case the two functions are poorly integrated and averaging their results is therefore more difficult.

So by the time the results become available to consciousness in the first case, I've already made the decision, so I feel like I can "just decide", whereas in the second case, I haven't yet, and therefore feel like I have a difficult decision to make; the difference is really one of how aware I am of the decision. (There are a lot of situations like this, where when an operation can be performed without conscious monitoring it "feels easy.")

So. In both cases the decision is "asymmetrical," in that F1 and F2 are different functions, but in the torrenting case (and the complicated ticket case), the difference between F1 and F2 is associated with a moral judgment (leading to words like "better nature" and "temptation"). Which feels very significant, because we're wired to attribute significance to moral judgments.

I wonder how far one could get by treating it like any other decision procedure, though. For example, if I decide explicitly that I weight "giving into temptation" and "following my better nature" with a ratio of 1:3, and I flip coins accordingly to determine whether to torrent or not (and adjust my weighting over time if I'm not liking the overall results)... do I still need so much "willpower"?
I love the idea of the coin-flip diet. Although it can be gamed by proposing to eat things more often. Maybe you could roll a 6-sided die for each meal. 1 = oatmeal and prune juice, 2-3 = lentil soup, 4-5 = teriyaki chicken, 6 = Big Mac or ice cream. If you know the weight, and you have a way of sorting the things you would flip a coin for, you can use the sorting order instead. For instance, I typically buy rather than torrent if the artist is in the bottom half of artists sorted by income.
I diet more or less this way. Not a coinflip, but a distribution that seems sustainable in the long term. Lentil soup twice a week, Big Mac and ice cream once a week, so to speak.
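The die-roll diet described in these comments can be written out directly (the 1:2:2:1 meal mapping is the commenter's; the code itself is just an illustrative sketch):

```python
import random

# 1 = oatmeal and prune juice, 2-3 = lentil soup,
# 4-5 = teriyaki chicken, 6 = Big Mac or ice cream.
MEALS = {
    1: "oatmeal and prune juice",
    2: "lentil soup", 3: "lentil soup",
    4: "teriyaki chicken", 5: "teriyaki chicken",
    6: "Big Mac or ice cream",
}

def roll_meal(rng=random):
    """Pick a meal by rolling a six-sided die, weighting the menu 1:2:2:1."""
    return MEALS[rng.randint(1, 6)]
```

Because the weights live in the die mapping rather than in an in-the-moment struggle, adjusting the diet means editing the mapping, not exerting willpower at each meal, which is the commenter's point.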
Or, if I wanted to choose between a car with good gas mileage and one with good performance, that could seem moral. Or if I were choosing between a food high in sugar, or one high in protein. Or one high in potassium, or one high in calcium. What's an example of an amoral choice?
Choosing between two cars with equally good gas mileage and performance, one which has more trunk space and one which has a roof rack.
It all depends on why you decide to torrent/not torrent: Are you more likely to torrent if the album is very expensive, or if it is very cheap? If you expect it to be of high quality, or of low quality? If the store you could buy the album at is far away, or very close? If you like the band that made it, or if you don't like them? Longer albums or shorter? Would you torrent less if the punishment for doing so was increased? Would you torrent more if it was harder to get caught? What if you were much richer, or much poorer? I'm confident that if you were to analyze when you torrent vs. when you buy, you'd notice trends that, with a bit of effort, could be translated into a fairly reasonable "Will I Torrent or Buy?" function that predicts whether you'll torrent or not with much better accuracy than random.
Yes, but the function might well include terms for things like how rude Yvain's co-workers were to Yvain that day, what mood Yvain was in that day, whether Yvain was hungry at the moment, i.e., stuff a reasonably behaved utility function shouldn't have terms for but the outcome of a willpower-based struggle very well might.
Scott Alexander (12y ago):
I'm sure that's true, but what relevance does that have to the current discussion?

IAWYC, but...

Many people have commented that humans don't make decisions based on utility functions. This is a surprising attitude to find on LessWrong, given that Eliezer has often cast rationality and moral reasoning in terms of computing expected utility. It also demonstrates a misunderstanding of what utility functions are.

The issue is not that people don't understand what utility functions are. Yes, you can define arbitrarily complicated utility functions to represent all of a human's preferences; we know that. There are infinitely many valid methods by which you could model a human's preferences, utility functions being one of them. The question is which model is the most useful, and which models have the fewest underlying assumptions that will lead your intuitions astray. Utility functions are sometimes an appropriate model and sometimes not.

To expand on this...

Tim van Gelder has an elegant, although somewhat lengthy, example of this. He presents us with a problem engineers working with early steam engines had: how to translate the oscillating action of the steam piston into the rotating motion of a flywheel?

(Note: it's going to take a while before the relationship between this and utility functions becomes clear. Bear with me.)

High-quality spinning and weaving required, however, that the source of power be highly uniform, that is, there should be little or no variation in the speed of revolution of the main driving flywheel. This is a problem, since the speed of the flywheel is affected both by the pressure of the steam from the boilers, and by the total workload being placed on the engine, and these are constantly fluctuating.

It was clear enough how the speed of the flywheel had to be regulated. In the pipe carrying steam from the boiler to the piston there was a throttle valve. The pressure in the piston, and so the speed of the wheel, could be adjusted by turning this valve. To keep engine speed uniform, the throttle valve would have to be turned, at just the right time and by just the right amount, to cope with

... (read more)

(part two)

van Gelder holds that an algorithmic approach is simply unsuitable for understanding the centrifugal governor. It just doesn't work, and there's no reason to even try. To understand the behavior of the centrifugal governor, the appropriate tools are differential equations that describe its behavior as a dynamic system where the properties of various parts depend on each other.

Changing a parameter of a dynamical system changes its total dynamics (that is, the way its state variables change their values depending on their current values, across the full range of values they may take). Thus, any change in engine speed, no matter how small, changes not the state of the governor directly, but rather the way the state of the governor changes, and any change in arm angle changes the way the state of the engine changes. Again, however, the overall system (coupled engine and governor) settles quickly into a point attractor, that is, engine speed and arm angle remain constant.

Now we finally get into utility functions. van Gelder holds that all the various utility theories, no matter how complex, remain subject to specific drawbacks:

(1) They do not incorporate any account of

... (read more)
Kaj may be too humble to self-link the relevant top level post, so I'll do it for him.
I actually didn't link to it, because I felt that those comments ended up conveying the same point better than the post did.
A set of differential equations that describe its behavior as a dynamic system where the properties of various parts depend on each other would still be an algorithm. van Gelder appears not to have heard of universal computation.

I would say that the selection and representation of values is exactly this account. False. Perceived preferences are often inconsistent and inconstant. So you try to find underlying preferences. Also false. The utility function itself is precisely a model of the deliberation process. It isn't going to be an equation that fits on a single line. And it is going to have some computational complexity, which will determine the relationship between time spent deliberating and the choice eventually made.

I hope, because this is the most charitable interpretation I can make, that all these people complaining about utility functions are just forgetting that it uses the word "function". Not "arithmetic function", or "regular expression". Any computable function. If an output can't be modelled with a utility function, it is non-computable. If humans can't be modelled with utility functions, that is a proof that a computer program can't be intelligent.

I'm not concerned with whether this is a good model. I just want to be able to say, theoretically, that the question of what a human should do in response to a situation is something that can be said to have right answers and wrong answers, given that human's values/preferences/morals.

All this harping about whether utility functions can model humans is not very relevant to my post. I bring up utility functions only to communicate, to a LW audience, that you are only doing what you want to do when you behave morally. If you have some other meaningful way of stating this, of saying what it means to "do what you want to do", by all means do so! (If you want to work with meta-ethics, and ask why some things are right and some things are wrong, you do have to work with utility functions, if you beli
Sure, but that's not the sense of "algorithm" that was being used here. None of this is being questioned. You said that you're not concerned with whether this is a good model, and that's fine, but whether or not it is a good model was the whole point of my comment. Neither I nor van Gelder claimed that utility functions couldn't be used as models in principle. My comments did not question the conclusions of your post (which I agreed with and upvoted). I was only addressing the particular paragraph which I quoted in my initial comment. (I should probably have mentioned that IAWYC in that one. I'll edit that in now.)
Sorry. I'm getting very touchy about references to utility functions now. When I write a post, I want to feel like I'm discussing a topic. On this post, I feel like I'm trying to compile C++ code and the comments are syntax error messages. I'm pretty much worn out on the subject for now, and probably getting sloppy, even though the post could still use a lot of clarification.
No problem - I could have expressed myself more clearly, as well. Take it positively: if people only mostly nitpick on your utility function bit, then that implies that they agree with the rest of what you wrote. I didn't have much disagreement with the actual content of your post, either.

You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior.

I'm deeply hesitant to jump into a debate that I don't know the history of, but...

Isn't it pretty generally understood that this is not true? The Utility Theory folks showed that behavior of an agent can be captured by a numerical utility function iff the agent's preferences conform to certain axioms, and Allais and others have shown that human behavior emphatically does not.

Seems to me that if human behavior were in g... (read more)

A person's behavior can always be understood as optimizing a utility function; it's just that if they are irrational (as in the Allais paradox) the utility functions start to look ridiculously complex. If all else fails, a utility function can be used that has a strong dependency on time, in whatever way is required to match the observed behavior of the subject. "The subject had a strong preference for sneezing at 3:15:03pm October 8, 2011." From the point of view of someone who wants to get FAI to work, the important question is: if the FAI does obey the axioms required by utility theory, and you don't obey those axioms for any simple utility function, are you better off if:

* the FAI ascribes to you some mixture of possible complex utility functions and helps you to achieve that, or
* the FAI uses a better explanation of your behavior, perhaps one of those alternative theories listed in the Wikipedia article, and helps you to achieve some component of that explanation?

I don't understand the alternative theories well enough to know if the latter option even makes sense.
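That "if all else fails" construction can be written out explicitly. A minimal Python sketch (the logged actions are invented): given any record of observed behavior, you can define a time-indexed utility function that the agent trivially maximized.

```python
# Post-hoc utility fitting: build a time-indexed utility function from an
# arbitrary log of observed actions. It fits the data perfectly by
# construction, and predicts nothing about future behavior.
def fit_post_hoc_utility(observed):
    """observed: dict mapping time -> the action actually taken."""
    def utility(time, action):
        return 1.0 if observed.get(time) == action else 0.0
    return utility

log = {"3:15:03pm": "sneeze", "3:15:04pm": "buy album"}
u = fit_post_hoc_utility(log)
assert u("3:15:03pm", "sneeze") == 1.0      # the observed action maximizes U
assert u("3:15:03pm", "buy album") == 0.0   # everything else scores zero
```

This is exactly the triviality being debated below: the function exists, but its existence tells you nothing you didn't already know from the log.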
This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system. It makes no predictions. It does not constrain expectation in any way. It is woo. Woo need not look like talk of chakras and crystals and angels. It can just as easily be dressed in the clothes of science and mathematics.
You say "again", but in the cited link it's called the "Texas Sharpshooter Utility Function". The word "fallacy" does not appear. If you're going to claim there's a fallacy here, you should support that statement. Where's the fallacy?

The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You're talking like you disagree with me, but you aren't actually doing so.

If the only goal is to predict human behavior, you can probably do it better without using a utility function. If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something, and unless there's something relevant in that Wikipedia article about the Allais paradox that I don't understand yet, that requires modeling them as though they have a utility function.

You'll surely want a prior distribution over utility functions. Since they are computable functions, the usual Universal Prior works fine here, so far as I can tell. With this prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior, but mentioning them makes it obvious that the set is not empty.
How do you know this? If that's true, it can only be true by being a mathematical theorem, which will require defining mathematically what makes a UF a TSUF. I expect this is possible, but I'll have to think about it.
No, it's true in the same sense that the statement "I have hands" is true. That is, it's an informal empirical statement about the world. People can be vaguely understood as having purposeful behavior. When you put them in strange situations, this breaks down a bit, and if you wish to understand them as having purposeful behavior you have to contrive the utility function a bit, but for the most part people do things for a comprehensible purpose. If TSUFs were the simplest utility functions that described humans, then human behavior would be random, which it isn't. Thus the simplest utility functions that describe humans aren't going to be TSUF-like.
I was referring to the same fallacy in both cases. Perhaps I should have written out TSUF in full this time. The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.

I am disagreeing, by saying that the triviality of the counterexample is so great as to vitiate it entirely. The TSUF is not a utility function. One might as well say that a rock has a utility of 1 for just lying there and 0 for leaping into the air.

You have to model them as if they want many things, some of them being from time to time in conflict with each other. The reason for this is that they do want many things, some of them being from time to time in conflict with each other.

Members of LessWrong regularly make personal posts on such matters, generally under the heading of "akrasia", so it's not as if I was proposing here some strange new idea of human nature. The problem of dealing with such conflicts is a regular topic here. And yet there is still a (not universal but pervasive) assumption that acting according to a utility function is the pinnacle of rational behaviour. Responding to that conundrum with TSUFs is pretty much isomorphic to the parable of the Heartstone.

I know the von Neumann-Morgenstern theorem on utility functions, but since they begin by assuming a total preference ordering on states of the world, it would be begging the question to cite it in support of human utility functions.
Models relying on expected utility make extremely strong assumptions about the treatment of probabilities, with utility being strictly linear in probability, and these assumptions can very easily be demonstrated to be wrong. They also assume that many situations are equivalent (pay $50 for a 50% chance to win $100 vs. accept $50 for a 50% chance of losing $100) where all experiments show otherwise. Utility theory without these assumptions predicts nothing whatsoever.
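The linearity-in-probability point can be checked mechanically against the standard Allais gambles. A small Python sketch: no utility function u whatsoever, however contrived, reproduces the commonly observed preference pattern, because expected utility is linear in the probabilities.

```python
def expected_utility(lottery, u):
    # lottery: list of (probability, outcome) pairs.
    # Note that EU is strictly linear in the probabilities,
    # which is exactly the assumption under discussion.
    return sum(p * u(x) for p, x in lottery)

# The standard Allais gambles (outcomes in millions of dollars):
g1a = [(1.00, 1)]                        # $1M for sure
g1b = [(0.10, 5), (0.89, 1), (0.01, 0)]  # small shot at $5M, tiny risk of $0
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

def allais_pattern(u):
    # The commonly observed pattern: 1A preferred to 1B, AND 2B preferred to 2A.
    return (expected_utility(g1a, u) > expected_utility(g1b, u)
            and expected_utility(g2b, u) > expected_utility(g2a, u))

# For any u, EU(1A) - EU(1B) and EU(2A) - EU(2B) are the same number,
# 0.11*u(1) - 0.10*u(5) - 0.01*u(0), so allais_pattern(u) is always False.
```

Try it with any curvature you like (linear, concave, whatever); the function returns False every time, which is the formal content of the Allais objection.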
Seems to me we've got a gen-u-ine semantic misunderstanding on our hands here, Tim :)

My understanding of these ideas is mostly taken from reinforcement learning theory in AI (a la Sutton & Barto 1998). In general, an agent is determined by a policy pi that determines the probability that the agent will make a particular action in a particular state, P = pi(s,a). In the most general case, pi can also depend on time, and is typically quite complicated, though usually not complex ;). Any computable agent operating over any possible state and action space can be represented by some function pi, though typically folks in this field deal in Markov Decision Processes since they're computationally tractable. More on that in the book, or in a longer post if folks are interested.

It seems to me that when you say "utility function", you're thinking of something a lot like pi. If I'm wrong about that, please let me know.

When folks in the RL field talk about "utility functions", generally they've got something a little different in mind. Some agents, but not all of them, determine their actions entirely using a time-invariant scalar function U(s) over the state space. U takes in future states of the world and outputs the reward that the agent can expect to receive upon reaching that state (loosely, "how much the agent likes s"). Since each action in general leads to a range of different future states with different probabilities, you can use U(s) to get an expected utility U'(a,s):

U'(a,s) = sum over s' of p(s,a,s') * U(s')

where s is the state you're in, a is the action you take, s' ranges over the possible future states, and p is the probability that action a taken in state s will lead to state s'. Once your agent has a U', some simple decision rule over that is enough to determine the agent's policy. There are a bunch of cool things about agents that do this, one of which (not the most important) is that their behavior is much easier to predict. This is because behavior is determined entir
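The U'(a,s) construction above is easy to sketch directly. The states, actions, and transition probabilities below are invented for illustration:

```python
# A direct sketch of U'(a, s) = sum over s' of p(s, a, s') * U(s'),
# with a greedy decision rule on top. All numbers are illustrative.
U = {"rich": 10.0, "poor": 0.0}  # U(s): time-invariant utility of each state

# p[(s, a)] -> {s': probability}: the transition model
p = {
    ("poor", "work"):   {"rich": 0.6, "poor": 0.4},
    ("poor", "gamble"): {"rich": 0.1, "poor": 0.9},
}

def expected_utility(s, a):
    """U'(a, s): probability-weighted utility of the successor states."""
    return sum(prob * U[s2] for s2, prob in p[(s, a)].items())

def greedy_policy(s, actions):
    """A simple decision rule: pick the action maximizing U'."""
    return max(actions, key=lambda a: expected_utility(s, a))

# expected_utility("poor", "work") == 6.0, vs. 1.0 for "gamble",
# so the greedy choice in state "poor" is "work".
```

This is the sense in which a U(s)-driven agent is easy to predict: the whole policy falls out of one scalar function plus the transition model.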
If we're talking about ascribing utility functions to humans, then the state space is the universe, right? (That is, the same universe the astronomers talk about.) In that case, the state space contains clocks, so there's no problem with having a time-dependent utility function, since the time is already present in the domain of the utility function. Thus, I don't see the semantic misunderstanding -- human behavior is consistent with at least one utility function even in the formalism you have in mind. (Maybe the state space is the part of the universe outside of the decision-making apparatus of the subject. No matter, that state space contains clocks too.) The interesting question here for me is whether any of those alternatives to having a utility function mentioned in the Allais paradox Wikipedia article are actually useful if you're trying to help the subject get what they want. Can someone give me a clue how to raise the level of discourse enough so it's possible to talk about that, instead of wading through trivialities? PM'ing me would be fine if you have a suggestion here but don't want it to generate responses that will be more trivialities to wade through.
Allais did more than point out that human behavior disobeys utility theory, specifically the "Sure Thing Principle" or "Independence Axiom". He also argued - to my mind, successfully - that there needn't be anything irrational about violating the axiom.

I think there are two "mistakes" in the article.

The first is claiming (or at least, assuming) that ethics is "monolithic": that either it comes from willpower alone, or it doesn't come from willpower at all. Willpower does play a role in ethics, every time your ethical system contradicts the instinctive, or unconscious, part of your mind. Be it to resist the temptation of a beautiful member of the opposite (or same, depending on your tastes) sex, overcome the fear of spiders or withstand torture to not betray your friends. I would say tha... (read more)

I've said things that sound like this before but I want to distance myself from your position here.

But remember what a utility function is. It's a way of adding up all your different preferences and coming up with a single number. Coming up with a single number is important, so that all possible outcomes can be ordered. That's what you need, and ordering is what numbers do. Having two utility functions is like having no utility function at all, because you don't have an ordering of preferences.

This is all true. But humans do not have utility funct... (read more)

If you think that's relevant, you should also go write the same comment on Eliezer's post on utilons and fuzzies. Having two coherent, consistent utility functions is no more realistic than having one.

If you want to be rational, you need to try to figure out what your values are, and what your utility function is. Humans don't act consistently. Whether their preferences can be described by a utility function is a more subtle question whose answer is unknown. But in either case, in order to be more rational, you need to be able to approximate your preferences with a utility function.

You can alternately describe this as the place where the part of your utility function that you call your far self, and the part of your utility function that you call your near self, sum to zero and provide no net information on what to do. You can choose to describe the resultant emotional confusion as "fighting for willpower". But this leads to the erroneous conclusions I described under the "ethics as willpower" section.
Just to clarify I am not, not, not defending the willpower model you described-- I just don't think willpower, properly understood as a conflict between near and far modes can be left out of an account of human decision making processes. I think the situation is both more complicated and more troubling than both models and don't think it is rational to force the square peg that is human values into the round hole that is 'the utility function'.
I'll agree that willpower may be a useful concept. I'm not providing a full model, though - mostly I want to dismiss the folk-psychology close tie between willpower and morals.
He never said these "utility functions" are coherent. In fact a large part of the problem is that the "fuzzies" utility function is extremely incoherent.
You keep using that word. I do not think it means what you think it means. A utility function that is incoherent is not a utility function. If it is acceptable for Eliezer to talk about having two utility functions, one that measures utilons and one that measures fuzzies, then it is equally acceptable to talk about having a single utility function, with respect to the question of whether humans are capable of having utility functions.
I was using the same not-quite strict definition of "utility function" that you seemed to be using in your post. In any case, I don't believe Eliezer ever called fuzzies a utility function.
This is neither here nor there. I have no doubt it can help to approximate your preferences with a utility function. But simply erasing complication by reducing all your preference-like stuff to a utility function decreases the accuracy of your model. You're ignoring what is really going on inside. So yes, if you try to model humans as holders of single utility functions... morality has nothing to do with willpower! Congrats! But my point is that such a model is far too simple. Well you can do that-- it doesn't seem at all representative of the way choices are made, though. What erroneous conclusions? What does it predict that is not so?

"true" ethics (whatever they may be). I call [this] ... "meta-ethics".

This is a bad choice of name, given that 'Metaethics' already means something (though people on LW often conflate it with Normative Ethics)

Perhaps I should use "normative ethics" instead.

Hi! This sounds interesting, but I couldn't conveniently digest it. I would read it carefully if you added more signposts to tell me what I was about to hear, offered more concrete examples, and explained how I might behave or predict differently after understanding your post.

For whatever it's worth, I completely agree with you that utility functions are models that are meant to predict human behavior, and that we should all try making a few to model our own and each others' behavior from time to time. Dunno if any downvotes you're getting are on that or just on the length/difficulty of your thoughts.

No vote yet from me.

There is plenty of room for willpower in ethics-as-taste once you have a sufficiently complicated model of human psychology in mind. Humans are not monolithic decision makers (let alone do they have a coherent utility function, as others have mentioned).

Consider the "elephant and rider" model of consciousness (I thought Yvain wrote a post about this but I couldn't find it; in any case I'm not referring to this post by lukeprog, which is talking about something else). In this model, we divide the mind into two parts - we'll say my mind just for co... (read more)

I'll accept that willpower means something like the conscious mind trying to rein in the subconscious. But when you use that to defend the "ethics as willpower" view, you're assuming that the subconscious usually wants to do immoral things, and the conscious mind is the source of morality. On the contrary, my subconscious is at least as likely to propose moral actions as my conscious. My subconscious mind wants to be nice to people. If anything, it's my conscious mind that comes up with evil plans; and my subconscious that kicks back.

I think there's a connection with the mythology of the werewolf. Bear with me. Humans have a tradition at least 2000 years long of saying that humans are better than animals because they're rational. We characterize beasts as bestial; and humans as humane. So we have the legend of the werewolf, in which a rational man is overcome by his animal (subconscious) nature and does horrible things. Yet if you study wolves, you find they are often better parents and more devoted partners than humans are. Being more rational may let you be more effective at being moral; but it doesn't appear to give you new moral values.

(I once wrote a story about a wolf that was cursed with becoming human under the full moon, and did horrible things to become the pack alpha that it never could have conceived of as a wolf. It wasn't very good.)
In one of Terry Pratchett's novels (I think it is The Fifth Elephant) he writes that werewolves face as much hostility among wolves as among humans, because the wolves are well aware which of us is actually the more brutal animal.
I agree. I'm not sure if you're accusing me of holding the position or not so just to be clear, I wasn't defending ethics as willpower - I was carving out a spot for willpower in ethics as taste. I'm not sure whether the conscious or unconscious is more likely to propose evil plans; only that both do sometimes (and thus the simple conscious/unconscious distinction is too simple).
Oh! Okay, I think we agree.
What do you call the part of your mind that judges whether proposed actions are good or evil?
I would need evidence that there is a part of my mind that specializes in judging whether proposed actions are good or evil.
You referred to some plans as good and some plans as evil; therefore, something in your mind must be making those judgements (I never said anything about specializing).
In that case, I call that part of my mind "my mind". The post could be summarized as arguing that the division of decisions into moral and amoral components, if it is even neurally real, is not notably more important than the division of decisions into near and far components, or sensory and abstract components, or visual and auditory components, etc.
Notice I said mind not brain. So I'm not arguing that it necessarily always takes place in the same part of the brain.
Oh yes, I should probably state my position. I want to call the judgement about whether a particular action is good or evil the "moral component", and everything else the "amoral" component. Thus ethics amounts to two things:

1. Making the judgement about whether the action is good or evil as accurate as possible (this is the "wisdom" part).
2. Acting in accordance with this judgement, i.e., performing good actions and not performing evil actions (this is the "willpower" part).
Why do you want to split things up that way? As opposed to splitting them up into the part requiring a quick answer and the part you can think about a long time (certainly practical), or the part related to short-term outcome versus the part related to long-term outcome, or other ways of categorizing decisions?

What about the "memes=good, genes=evil" model? The literally meant one where feudalism or lolcats are "good" and loving your siblings or enjoying tasty food is "evil".


...Did you really just index your footnotes from zero?

Of course. Indices should always start at zero. It saves one CPU instruction, allows one more possible footnote, and helps avoid fencepost errors. (I indexed my footnotes from 1, then wanted to add a footnote at the beginning.)
No, it only looks that way on your computer.
Look at the HTML; it contains a literal zero.

Having two utility functions is like having no utility function at all, because you don't have an ordering of preferences.

The only kind of model that needs a global utility function is an optimization process. Obviously, after considering each alternative, there needs to be a way to decide which one to choose... assuming that we do things like considering alternatives and choosing one of them (using an ordering that is represented by the one utility function).

For example, evolution has a global utility function (inclusive genetic fitness). Of course, it... (read more)

Upvoted for introducing the very useful term "effective belief".

This seems to be the crux of your distinction.

Under the willpower theory, morality means the struggle to consistently implement a known set of rules and actions.

Whereas under the taste theory, morality is a journey to discover and/or create a lifestyle fitting your personal ethical inclinations.

We should not ask "Which is right?" but "How much is each right? In what areas?"

I'm not sure of the answer to that question.

Humans don't make decisions based primarily on utility functions. To the extent that the Wise Master presented that as a descriptive fact rather than a prescriptive exhortation, he was just wrong on the facts. You can model behavior with a set of values and a utility function, but that model will not fully capture human behavior, or else will be so overfit that it ceases to be descriptive at all (e.g. "I have utility infinity for doing the stuff I do and utility zero for everything else" technically predicts your actions but is practically usel... (read more)

Nobody said that humans implement utility functions. Since I already said this, all I can do is say it again: Values, and utility functions, are both models we construct to explain why we do what we do. Whether or not any mechanism inside your brain does computations homomorphic to utility computations is irrelevant. [New edit uses different wording.] Saying that humans don't implement utility functions is like saying that the ocean doesn't simulate fluid flow, or that a satellite doesn't compute a trajectory.
It's more like saying a pane of glass doesn't simulate fluid flow, or an electron doesn't compute a trajectory.
Which would be way off!
Does it flow, or simulate a flow?
So how would you define rationality? What are you trying to do, when you're trying to behave rationally?
Indeed, and a model which treats fuzzies and utils as exchangeable is a poor one.
You could equally well analyze the utils and the fuzzies, and find subcategories of those, and say they are not exchangeable. The task of modeling a utility function is the task of finding how these different things are exchangeable. We know they are exchangeable, because people have preferences between situations. They eventually do one thing or the other.

Something the conventional story about ethics gets right, with which you seem to disagree, is that ethics is a society-level affair. That is, to justify an action as ethically correct is implicitly to claim that a rational inquiry by society would deem the action acceptable.

Another point convention gets right, and here again you seem to differ, is motivational externalism. That is, a person can judge that X is right without necessarily being motivated to do X. Of course, you've given good evolutionary-biological reasons why most of the time moral judg... (read more)

You're just redefining "ethics" as what I called "social ethics", and ignoring the other levels. That's treating ethics as a platonic ideal rather than as a product of evolution. In the view I'm presenting here, judgements by a person's personal ethics do always motivate action, by definition. Moral judgements computed using society's ethics don't directly motivate; the motivation is mediated through the person's motivation to accept society's ethics.
I'm not denying other levels, just insisting that "social ethics" is one of the levels. If you define personal ethics as inherently motivating, then "personal ethics" becomes a technical term. Which is perfectly fine, as long as you recognize it.

Somebody probably sent me this link to a previous LW discussion on the distinction between morality and ethics. (Sorry to not give credit. I just found it in a browser window on my computer and don't remember how it got there.)


I have never trusted theories of ethics whose upshot is that most people are moral.

I think most people will "take all the pie" if they can frame it as harmless, and something they're entitled to. Almost everybody, from homeless people to privileged college students to PTA moms, loves a freebie -- it's unusual to see people giving more, or putting in more work, than they're socially constrained to. The reason the "tragedy of the commons" happens every single time there's a commons (if you've ever lived in a co-op you know what I mean) ... (read more)

[This comment is no longer endorsed by its author]

Everything you've said about the standard view is pretty much how I think of ethics... which makes things difficult.

[This comment is no longer endorsed by its author]

Behaving according to the utility function of the part of your psyche that deals with willpower requires willpower.

Also, saying that ethics doesn't require willpower isn't the same as saying it's not a choice. I act morally based on my utility function, which is part of who I am. When I act based on what kind of person I am, I make a choice. That's the compatibilist definition of choice. Ergo, acting morally is a choice.

I agree; but compatibilism is at odds with how people commonly use language. David DeAngelo says "Attraction isn't a choice", and by saying that he communicates the valuable lesson that you can't make a woman be attracted to you by convincing her to choose to be attracted to you. And yet, attraction is a choice, by the compatibilist definition. The compatibilist definition of "choice" ruins the word's usefulness, as it then fails to make any distinction... everything we do is a "choice" to the compatibilist, even breathing.
On the other hand it is possible to change someone's ethics, e.g., change their religion, make them vegetarian, by convincing them to change their religion, become a vegetarian, etc.
That's a good point.
This isn't true. You can be a compatibilist without believing that all mental states are the result of choices. Breathing, for instance, is neurally involuntary. Your breath reflexes will ultimately override a decision not to breathe.
The compatibilist definition of choice says that "choice" is the deterministic working-out of "who you are". You could, in principle, work out some sort of division of your actions into categories that have causes within the parts of your brain that you are most fond of, and those that have causes primarily in other parts, like the brainstem. Why would you want to do that?

Society's main story is that ethics are a choice... This story has implications: ...

The list following the above helped me understand why deontological judgements are prevalent, even if I don't find any strong arguments backing such theories (What do you mean, you don't care about the outcomes of an action? What do you mean that something is "just ethically right"?) In particular:

Ethical judgements are different from utility judgements. Utility is a tool of reason, and reason only tells you how to get what you want, whereas ethics tells you w

...
I didn't downvote you; but generally, people downvote positions that they don't like. Even on LessWrong. What's valuable, what I find I can learn from, is comments people make. The principal component of the down/up vote count on a post or a comment is how much what you said agrees with the dominant memes on LessWrong.
I don't think this is true. Speaking for myself, I quite frequently upvote comments I disagree with in part or in whole. For example, I upvoted Hyena's remark here even though I disagreed with it. (In fact, further discussion strongly supported Hyena's claim. But my upvote came before that discussion.) I have a fair number of other examples of this, and I don't think that I'm at all unique in this. I've made multiple comments about why I think AI going foom is unlikely and discussing what I consider to be serious problems with cryonics. Almost every single one has been upvoted, sometimes quite highly.
I've heard that we are supposed to upvote something if we want to see more like it on LessWrong. And that seems like a good rule of thumb. I usually upvote a post or comment before replying to it, because that typically means it's a subject I want more discussion on. And I comment more often when I have a disagreement, or at least feel that something's been left out.
Why do you sometimes upvote comments that you disagree with? Do you mean comments that make statements you agree with in support of positions you disagree with?

I mean positions that I disagree with but that make me think. This includes arguments that I had not considered that seem worthwhile to consider even if they aren't persuasive; posts whose conclusions are wrong but which use interesting facts that I wasn't aware of; and posts that, while I disagree with parts of them, make other good points. Sometimes I will upvote a comment I disagree with simply because it is a demonstration of extreme civility on a highly controversial issue (for example, in some of the recent discussions of gender issues I was impressed enough with the cordiality and thoughtfulness of people arguing different positions that I upvoted a lot of the comments).

In general, if a comment makes me think and makes me feel like reading it was a useful way to spend my time, I'll upvote it.

Of all the votes I've given I don't recall thinking my being hungry, or distracted, or etc. was a deciding factor, but those things are reasons as much as my sensible half-rationalizations that are also real reasons.

It helps to define your terms before philosophizing. I assume that you mean morality (a collection of beliefs as to what constitutes a good life) when you write "ethics".

I can't speak for you, but my moral views are originally based on what I was taught by my family and the society in general, explicitly and implicitly, and then developed based on my reasoning and experience. Thus, my personal moral subsystem is compatible with, but not identical to what other people around me have. The differences might be minor (is torrenting copyrighted movies immoral?) o...

"Morality" is cognate with "mores", and has connotations of being a cultural construct (what I called social ethics) that an individual is not bound to (e.g., "When in Rome, do as the Romans do"). But my real answer is that neither of these terms is defined clearly enough for me to worry much over which one I used. I hope you found all terms sufficiently defined by the time you reached the end.

When you say you developed your morals based on reasoning and experience, how did reason help? Reasoning requires a goal. I don't think you can reason your way to a new terminal goal; so what do you mean when you say reasoning helped develop your morals? That it helped you know how better to achieve your goals, but without giving you any new goals?

If you say something like, "Reason taught me that I should value the wants of others as I value my own wants", I won't believe you. Reason can't do that. It might teach you that your own wants will be better-satisfied if you help other people with their wants. But that's different.

As for myself, everything I call "moral" comes directly out of my wants. I want things for myself, and I also want other people to be happy, and to like me. Everything else follows from that. I may have had to be conditioned to care about other people. I don't know. That's a nature/nurture argument.
You may have an overly narrow view of what is usually meant by the word "reason".
No. Saying that reason taught you a new value is exactly the same as saying that you logically concluded to change your utility function.
Why would you expect to think of your utility function as a utility function? That's like expecting a squirrel to think of the dispersion of nuts it buried around its tree as having a two-dimensional Gaussian distribution.
I agree with this critique. Basically, the problem with PhilGoetz's view is that we do need a word for the kind of informal dispute resolution and conflict de-escalation we all do within social groups, and in our language we use the word ethics for that, not morality. "Morality" connotes either the inner "moral core" of individuals, i.e. the values PhilGoetz talks about in this post, or social "moral codes" which do prescribe "socially good behavior" everyone is expected to follow, but are pretty much unrelated to the kind of dispute resolution which concerns ethics. Yes, there is a weird inversion in etymology, with morality being connected with a word for "custom" and ethics with "character". But etymology does not always inform the current meaning of words.
I'm rather curious where the whole ethics-morality distinction came from. It seems to be a rather recent and non-specialist usage. I remember being dimly aware of such a distinction before college, and then it sort of disappeared once I started studying philosophy, where ethics is just the name of the subfield that studies moral questions. I'm really thrown off when people confidently assert the distinction as if it were obvious to all English speakers.

I'm guessing the usage has something to do with the rise of professional codes of ethics -- for lawyers, doctors, social workers etc. So you now have dramatic depictions of characters violating 'ethical codes' for the sake of what is right (recently Dr. House, any David E. Kelley legal drama, though I'm sure it goes farther back than that). As a result 'code of ethics' sometimes refers to a set of pseudo-legal professional restrictions, which has given the word 'ethics' this strange connotation. But I don't see any particular reason to embrace the connotation, since the specialist vocabulary of philosophers doesn't regularly employ any such distinction.

I'm all for fiddling with philosophical vocabulary to fix confusions. Philosophers do lots of things wrong. But I don't think a morality-ethics distinction clarifies much -- that usage of ethics just isn't what we're talking about at all. Phil's usage is totally consistent with the specialist vocabulary here (though definitely his use of "meta-ethics" and "normative ethics" is not).
Indeed, the distinction is not at all clear in language. Most philosophers of ethics I've talked to prefer to 1) use the words interchangeably, or 2) stipulatively define a distinction for the purposes of a particular discussion. In the wild, I've seen people assume distinctions between ethics and morality in both directions; what some call "ethics" others call "morality" and vice-versa. (See the comment where I enumerate some common definitions.)
The distinction is needed precisely because "moral philosophy" studies 'moral' questions ("how should we live?") from a rather peculiar point of view, which does not quite fit with any descriptive morality. Real-world moralities are of course referenced in moral philosophy, but only as a source of "values" or "rights" or "principles of rational agency". Most philosophers are not moral absolutists, by and large: they take it for granted that moral values will need to be balanced in some way, and perhaps argued for in terms of more basic values. However, in actual societies, most of that "balancing" and arguing practically happens through ethical disputes, which also inform political processes (See George Lakoff's Moral Politics, and works by Jane Jacobs and by Jonathan Haidt for more on how differences in moral outlook lead to political disputes). I agree that the usage of "ethics" in the sense of "ethical code" (for a specialized profession) can be rather misleading. In all fairness, using "ethics" here sort of makes sense because the codes are (1) rather specialized--given the complexity of these professions, non-specialists and the general public cannot be expected to tell apart good decisions from bad in all circumstances, so "ethical" self-regulation has to fill the gap. (2) with limited exceptions (say, biological ethics) they are informed by generally agreed-upon values, especially protecting uninformed stakeholders. More like a case of "normative ethics" than a contingent "code of morality" which could be disagreed with.
I don't understand this comment. What peculiar point of view? I don't know what this means. Ethics is divided into three subfields: meta-ethics, normative ethics and applied ethics. Meta-ethics addresses what moral language means; this is where debates over moral realism, motivational internalism, and moral cognitivism take place. Normative ethics involves general theories of how we should act: utilitarianism, Kantianism, particularism, virtue ethics etc. Applied ethics involves debates over particular moral issues: abortion, euthanasia, performance enhancing drugs etc. Then there is moral psychology and anthropology, which study descriptive questions. Political philosophy is closely related and often touches on moral questions relating to authority, rights and distributional justice. Obviously all of these things inform one another. What exactly is the conceptual division that the proposed ethics-morality distinction reveals?
And most people who claim to follow some sort of morality in the real world would care little or not at all about these subdivisions. Sure, you could pidgeon-hole some of them as "moral absolutists", and others as following a "divine command theory". But really, most people's morality is constantly informed by ethical debates and disputes, so even that is not quite right. Needless to say, these debates largely occur in the public sphere, not academic philosophy departments. Conflating "morality" and "ethics" would leave you with no way to draw a distinction between what these common folks are thinking about when they reason "morally", vs. what goes on in broader ethical (and political) debates, both within academic philosophy and outside of it. This would seem to be a rather critical failure of rationality.
Why should we think that what common folks are reasoning about and what gets argued about in the public sphere are different things?
They're not "different things", in that public ethical debate (plus of course legal, civic and political debate as well) is often a matter of judging and balancing disputes between previously existing "rights". Moreover, inner morality, public ethics and moral philosophy all address the same basic topic of "how we should live". But this is not to say that either "morality" or "ethics" mean nothing more than "How should we live?". Indeed, you have previously referred to ethics as 'the... stud[y of] moral questions', implying some sort of rigorous inquiry. Conversely, philosophers routinely talk about "descriptive morality" when they need to refer to moral values or socially-endorsed moral codes as they actually exist in the real world without endorsing them normatively.
I'm interested, but skeptical that English makes this clear distinction. I'd really appreciate references to authoritative sources on the distinction in meaning between morality and ethics.
The SEP article on the definition of morality seems fairly clear to me. Yes, the article tries to advance a distinction between "descriptive" and "normative" morality. But really, even descriptive morality is clearly "normative" to those who follow it in the real world. What they call "normative" morality should really be called "moral philosophy", or "the science/philosophy of universalized morality" (CEV?), or, well... ethics. Including perhaps "normative ethics", i.e. the values that pretty much all human societies agree about, so that philosophers can take them for granted without much issue.
It does not really help your case to simultaneously lean on SEP as an authority and claim that it is wrong.
Why? I'm not trying to rely on SEP as an authority here. Indeed, given your own findings about how the terms are used in the philosophical literature, no such authority could plausibly exist. What I can hope to show is that my use of the terms is more meaningful than others' (it "carves concept-space at its joints") and that it's at least roughly consistent with what English speakers mean when they talk about "morality" and "ethics". AFAICT, the SEP entry supports my argument: it exemplifies a meaningful distinction between "descriptive" and "normative" morality, which most English-speaking folks would readily describe as a distinction between "morality" and "ethics".
Then it was a particularly bad choice as a response when asked for an authoritative source.
Then just use "descriptive" and "normative"... The concept space is already carved the way you like it, get your hand out of the cake.
I don't use these terms because they have undesirable connotations and are unfamiliar to most English speakers. Calling morality as it is actually reasoned about in the real world "descriptive" only makes sense as a way of emphasizing that you seek to describe something as opposed to endorsing it. OK for philosophers (and folks working on CEV, perhaps), not so much for everyone else.
It would help if we agreed on what "willpower" meant. I am not convinced it is a single thing. We say that a person who breaks their diet, and a person who can't do ten pushups, and a lazy person, and a person with OCD, and a thief, all lack willpower. I don't think these are the same.
Agreed. Some people who are highly courageous suffer deeply from akrasia. Other people do not even understand what is meant by the concept of akrasia when it is explained to them. I do not think that means they have a large amount of willpower; akrasia is simply not an issue for them. I remember reading about how performing tasks that require a great deal of mental concentration seems to drain an internal reserve of mental energy. As a result, your performance on other tasks that require mental focus also lapses, until you've had time to rest and restore your energy. I would refer to this mental energy reserve as willpower. I think this fits very well with people's experience in fighting akrasia: attempts to use willpower in order to (for example) go on a diet work for a little while, until all your willpower is expended and you revert to your usual habits.

Human behaviour cannot be described or even sensibly approximated by a utility function. You can wish otherwise, since utility functions have nice mathematical properties, but wishing something hard doesn't make it true.

There are some situations where an extremely rough approximation like a utility function can reasonably be used as a model of human behaviour, just as there are some situations where an extremely rough approximation like a uniform sphere can reasonably be used as a model of a cow. These are facts about modelling practice, not about human behavior or the shape of cows, and it's a fundamental misunderstanding to confuse the two.

Re-read the first several paragraphs of the post, please. I disagree with your point, but it doesn't matter, as it is irrelevant to this post. Alternately, explain what you mean by moral behavior, or rational behavior, since you don't believe in predictive models of behavior, nor that humans have any preference or way of ranking different possible behaviors (since any predictive model or ranking model can be phrased as a utility function).
This is just nonsense. Expected utility cannot even model straightforward risk aversion. Even simplest algorithms like "minimize possible loss" are impossible to express in terms of utility functions. Crude approximations are sometimes useful, confusing them with the real thing never is.
Either I'm missing something, or this seems dead wrong. Doesn't risk aversion fall right out of expected utility, plus the diminishing marginal utility in whatever is getting you the utils?
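For concreteness, here's a toy sketch of that standard argument (a hypothetical agent with square-root utility of wealth; all numbers invented for illustration):

```python
import math

def expected_utility(wealth, outcomes, u=math.sqrt):
    """Expected utility of a gamble: outcomes is a list of (probability, change_in_wealth)."""
    return sum(p * u(wealth + dw) for p, dw in outcomes)

wealth = 100.0
fair_bet = [(0.5, +10.0), (0.5, -10.0)]     # zero expected dollar value
premium_bet = [(0.5, +11.0), (0.5, -10.0)]  # positive expected dollar value

no_bet = math.sqrt(wealth)
# Diminishing marginal utility (concavity) makes the agent reject the
# actuarially fair bet...
assert expected_utility(wealth, fair_bet) < no_bet
# ...but a large enough risk premium makes the bet worth taking.
assert expected_utility(wealth, premium_bet) > no_bet
```

So at least the broad phenomenon of "demanding a premium to accept risk" does fall out of concavity; whether it matches the magnitudes observed in humans is a separate question.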
Here's a fundamental impossibility result for modeling risk aversion in the expected utility framework, and this is for the ridiculously oversimplified and horribly wrong version of "risk aversion" invented specifically for the expected utility framework. Actual risk aversion looks nothing like a concave utility curve: you can get one bet right, but then when you change the bet size, bet probability, wealth level or anything else, you start getting zeroes or infinities where you shouldn't, always. Prospect theory provides simple models that don't immediately collapse and are of some use.
I don't have the time to examine the paper in depth just now (I certainly will later, it looks interesting) but it appears our proximate disagreement is over what you meant when you said "risk aversion" - I was taking it to mean a broader "demanding a premium to accept risk", whereas you seem to have meant a deeper "the magnitudes of risk aversion we actually observe in people for various scenarios." Assuming the paper supports you (and I see no reason to think otherwise), then my original objection does not apply to what you were saying. I am still not sure I agree with you, however. It has been shown that, hedonically, people react much more strongly to loss than to gain. If taking a loss feels worse than making a gain feels good, then I might be maximizing my expected utility by avoiding situations where I have a memory of taking a loss over and above what might be anticipated looking only at a "dollars-to-utility" approximation of my actual utility function.
The only reason the expected utility framework seems to "work" for single two-outcome bets is that it has more parameters to tweak than datapoints we want to simulate, and we throw away the utility curve immediately, other than for 3 points -- no bet, bet fail, bet win. If you try to reuse this utility curve for any other bet, or a bet with more than two outcomes, you'll start seeing the same person accepting infinite, near-zero, or even negative risk premia.
Could you provide a simple (or at least, near minimally complex) example?
Examples in paper are very simple (but explaining them with math and proving why expected utility fails so miserably takes much of the paper).
You are being frustrating. Your citations here are talking about trying to model human behavior by fitting concave networth-to-utility functions to realistic numbers. The bit you quoted here was from a passage wherein I was ceding this precise point. I was explaining that I had previously thought you to be making a broader theoretical point, about any sort of risk premia - not just those that actually model real human behavior. Your quoting of that passage led me to believe that was the case, but your response here leads me to wonder whether there is still confusion. Do you mean this to apply to any theoretical dollars-to-utility function, even those that do not model people well? If so, can you please give an example of infinite or negative risk premia for an agent (an AI, say) whose dollars-to-utility function is U(x) = x / log(x + 10)?
This utility function has near zero risk aversion at relevant range. Assuming our AI has wealth level of $10000, it will happily take a 50:50 bet of gaining $100.10 vs losing $100.00. It also gets to infinities if there's a risk of dollar worth below -$10.
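A quick numerical check of that bet (a sketch assuming natural log, wealth $10,000, and the 50:50 stakes as stated):

```python
import math

def u(x):
    """The proposed dollars-to-utility function U(x) = x / log(x + 10)."""
    return x / math.log(x + 10)

wealth = 10000.0
eu_bet = 0.5 * u(wealth + 100.10) + 0.5 * u(wealth - 100.00)
eu_no_bet = u(wealth)

# At this wealth the curve is nearly linear, so the $0.05 expected edge wins:
assert eu_bet > eu_no_bet
# But the margin is razor-thin -- near-zero risk aversion, as claimed.
assert eu_bet - eu_no_bet < 0.01
```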
Yes, it is weak risk aversion - but is it not still risk aversion, as I had initially meant (and initially thought you to mean)? Yes, of course. I'd considered this irrelevant for reasons I can't quite recall, but it is trivially fixed; is there a problem with U(x) = x/log(x+10)?
To quote from that paper ... My reaction was essentially: yeah, right.
Also, prospect theory is a utility theory. You compute a utility for each possible outcome associated with each action, add them up to compute the utility for each action, then choose the action with the highest utility. This is using a utility function. It is a special kind of utility function, where the utility for each possible outcome is calculated relative to some reference level of utility. But it's still a utility function. Every time someone thinks they have a knockdown argument against the use of utility functions, I find they have a knockdown argument against some special simple subclass of utility functions.
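To make that structure concrete, here's a minimal sketch of a prospect-theory-style evaluation (illustrative Kahneman-Tversky-style value function with conventional but made-up-for-this-example parameters α = 0.88, λ = 2.25; probability weighting omitted for simplicity):

```python
def value(outcome, reference=0.0, alpha=0.88, loss_aversion=2.25):
    """Reference-dependent 'utility' of one outcome: gains and losses are
    measured relative to a reference point, with losses weighted more."""
    x = outcome - reference
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

def action_utility(action, reference=0.0):
    """Probability-weighted sum of values over an action's outcomes -- still
    'compute a number for each action and pick the max'."""
    return sum(p * value(x, reference) for p, x in action)

keep = [(1.0, 0.0)]                    # do nothing
gamble = [(0.5, 10.0), (0.5, -10.0)]   # fair coin flip for $10

# Loss aversion makes the fair gamble look worse than doing nothing, even
# though both have the same expected dollar value -- yet the decision rule
# is still "maximize a (reference-dependent) utility".
best = max([keep, gamble], key=action_utility)
assert best is keep
```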
I'm just skimming the beginning of the paper, but it says This is shown by observing that this bet refusal means the person values the 11th dollar above her current wealth by at most 10/11 as much as the 10th-to-last dollar of her current wealth. You then consider that she would also turn down the same bet if she were $21 wealthier, and see that she values the 32nd dollar above her current wealth at most 10/11 x 10/11 as much as the 10th-to-last dollar of her current wealth. Etcetera. It then says,

There are 2 problems with this argument already. The first is that it's not clear that people have a positive-expected-value bet that they would refuse regardless of how much money they already have. But the larger problem is that it assumes "utility" is some simple function of net worth. We already know this isn't so, from the much simpler observation that people feel much worse about losing $10 than about not winning $10, even if their net worth is so much larger than $10 that the concavity of a utility function can't explain it.

A person's utility is not based on an accountant-like evaluation of their net worth. Utility measures feelings, not dollars. Feelings are context-dependent, and the amount someone already has in the bank is not as salient as it ought to be if net worth were the only consideration. We have all heard stories of misers who had a childhood of poverty and were irrationally cheap even after getting rich; and no one thinks this shatters utility theory.

So this paper is not a knockdown argument against utility functions. It's an argument against the notion that human utility is based solely on dollars.
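The iteration described above can be spelled out numerically (my own sketch of the calibration logic, not the paper's construction; the $21 step and 10/11 factor come from the reasoning above):

```python
# If an expected-utility maximizer whose utility depends only on wealth turns
# down a 50:50 lose-$10 / gain-$11 bet at EVERY wealth level, the marginal
# utility of money must shrink by a factor of 10/11 over each $21 step up in
# wealth, and grow by the inverse factor over each $21 step down.

DECAY = 10 / 11  # marginal-utility decay per $21, implied by refusing the bet

def gain_utility_bound(gain):
    """Upper bound on the utility of winning `gain` dollars."""
    total, marginal, covered = 0.0, 1.0, 0.0
    while covered < gain:
        step = min(21.0, gain - covered)
        total += step * marginal
        covered += step
        marginal *= DECAY
    return total

def loss_utility_bound(loss):
    """Lower bound on the utility cost of losing `loss` dollars."""
    total, marginal, covered = 0.0, 1.0, 0.0
    while covered < loss:
        step = min(21.0, loss - covered)
        total += step * marginal
        covered += step
        marginal /= DECAY
    return total

# The gains side is a convergent geometric series: a $1,000,000 prize is
# worth less than 21 / (1 - 10/11) = $231 of near-current-wealth dollars...
assert gain_utility_bound(1_000_000) < 21 / (1 - DECAY) + 1e-6
# ...so the agent must refuse a 50:50 lose-$200 / gain-$1,000,000 bet.
assert 0.5 * gain_utility_bound(1_000_000) < 0.5 * loss_utility_bound(200)
```

That's the absurd conclusion the calibration theorem trades on: modest small-stakes risk aversion, if driven purely by wealth concavity, compounds into refusing arbitrarily large prizes.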
"Minimize possible loss" can be modelled by a utility function −exp(cL) in the limit of very large c.
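That limit can be sketched numerically (assuming L is each outcome's loss and the agent maximizes expected −exp(cL); the lotteries are made up for illustration):

```python
import math

def exp_loss_utility(lottery, c):
    """Expected utility under u(L) = -exp(c*L); lottery = [(prob, loss), ...]."""
    return sum(p * -math.exp(c * loss) for p, loss in lottery)

# A has the better expected loss; B has the smaller worst-case loss.
A = [(0.99, 0.0), (0.01, 10.0)]  # expected loss 0.1, max loss 10
B = [(1.00, 1.0)]                # expected loss 1.0, max loss 1

# Small c: roughly risk-neutral, so A is preferred.
assert exp_loss_utility(A, 0.01) > exp_loss_utility(B, 0.01)
# Large c: the worst case dominates, so B ("minimize possible loss") wins.
assert exp_loss_utility(A, 10.0) < exp_loss_utility(B, 10.0)
```

As c grows, exp(cL) for the largest L swamps every other term, so the expected-utility ranking converges to ranking by worst-case loss.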

The short answer is that you may choose to change your future utility function when doing so will have the counter-intuitive effect of better-fulfilling your current utility function.

Fulfilling counter-intuitive resolutions of your utility function should be an instrumental product of reason. I don't see a need to change your utility function in order to satisfy your existing utility function. You might consider tackling this whole topic separately and more generally for all utility functions if you think its important.

In a prisoners' dilemma, especially one that fails the conditions needed for cooperation to be rational, everyone's current utility function would be better-fulfilled in the future if everyone's future self had a different utility function. You might be able to fulfil your current utility function even better in the future if you fool everyone else into changing their utility functions, and don't change yours. But if this can be detected, and you will thus be excluded from society, it's better, even by your current utility function, to adopt a new utility function. (You can tell yourself that you're just precommitting to act differently. But I expect the most effective methods of precommitment will look just like methods for changing your utility function.) This is in the 'normal ending' of the babyeater story I linked to.
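A toy illustration of that point (standard one-shot PD payoffs; the "utility function change" is modeled, purely for illustration, as adding a weight w on the other player's payoff):

```python
# Standard prisoner's dilemma payoffs: PAYOFF[(my_move, their_move)]
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def best_move(their_move, w=0.0):
    """Best response when utility = my payoff + w * the other player's payoff."""
    def utility(my_move):
        return PAYOFF[(my_move, their_move)] + w * PAYOFF[(their_move, my_move)]
    return max(['C', 'D'], key=utility)

# Selfish utility (w = 0): defection dominates.
assert best_move('C') == 'D' and best_move('D') == 'D'
# After "changing the utility function" to care about the other player (w = 1),
# cooperation becomes the best response to cooperation...
assert best_move('C', w=1.0) == 'C'
# ...so two such agents settle at (C, C): payoff 3 each instead of 1 each,
# better even by their original, selfish utility functions.
```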
I read that story. I guess I was confused by the ending in which an entire populated star system was destroyed to prevent everyone having their utility function changed.
I guess he might be referring to the Superhappies' self-modification? I don't know, I'm still struggling to understand how someone could read "True Ending: Sacrificial Fire" and think Bad End.
That's what happens when you take "thou shalt not modify thine own utility function" as a religious tenet.
Wrong, the "religious tenet" is not "thou shalt not modify thine own utility function", but "minimize evil". Of course, being changed to no longer care about minimizing evil is not conducive to minimizing evil.

Part of your description of the "ethics is willpower" position appears to be a strawman; since other parts of the same description are accurate, I assume this is because you do not fully understand it:

Firstly the position would more accurately be called "ethics is willpower plus wisdom", but even that doesn't fully capture it. Let's go through your points one by one:

Ethics is specifically about when your desires conflict with the desires of others. Thus, ethics is only concerned with interpersonal relations.

No, it also includes delayin...

I was parodying that view when I said it is "acquired automatically as a linear function of age." If you know of any studies that attempted to measure wisdom, or show correlations between different tests of wisdom, or between wisdom and outcomes, I'd be very interested in them. I can't offhand think of any good uses of the word "wisdom" that would not be better replaced by some combination of "intelligent" and "knowledgeable". It is often used as a way to claim intelligence without having intelligence; or to criticize intelligent statements by saying they are not "wise", whatever that is.
It has been observed that people with high intelligence, nonetheless, frequently do stupid things, including stupid things that many people with less intelligence get right (I don't think this is controversial, but can provide examples as necessary). I am, therefore, using "wisdom" to mean whatever is necessary besides intelligence to avoid doing stupid things.