Followup to: Torture vs. Dust Specks, Zut Allais, Rationality Quotes 4

Suppose that a disease, or a monster, or a war, or something, is killing people.  And suppose you only have enough resources to implement one of the following two options:

  1. Save 400 lives, with certainty.
  2. Save 500 lives, with 90% probability; save no lives, 10% probability.

Most people choose option 1.  Which, I think, is foolish; because if you multiply 500 lives by 90% probability, you get an expected value of 450 lives, which exceeds the 400-life value of option 1.  (Lives saved don't diminish in marginal utility, so this is an appropriate calculation.)

"What!" you cry, incensed.  "How can you gamble with human lives? How can you think about numbers when so much is at stake?  What if that 10% probability strikes, and everyone dies?  So much for your damned logic!  You're following your rationality off a cliff!"

Ah, but here's the interesting thing.  If you present the options this way:

  1. 100 people die, with certainty.
  2. 90% chance no one dies; 10% chance 500 people die.

Then a majority choose option 2.  Even though it's the same gamble.  You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one.
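Here is a minimal Python sketch of the arithmetic behind that claim, assuming the same 500 people are at risk in both framings (the variable names are illustrative, not from the original post):

    # Expected lives saved / lost under each framing of the same gamble,
    # assuming a population of 500 at risk.
    population = 500

    # First framing: lives saved.
    option_1_saved = 400                    # saved with certainty
    option_2_saved = 0.9 * 500 + 0.1 * 0    # expected value: 450

    # Second framing: deaths.
    option_1_dead = 100                     # 500 - 400, with certainty
    option_2_dead = 0.9 * 0 + 0.1 * 500     # expected value: 50

    # Same gamble, same ranking: option 2 comes out ahead either way.
    print(option_1_saved, option_2_saved)                          # 400 450.0
    print(population - option_1_dead, population - option_2_dead)  # 400 450.0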

You can grandstand on the second description too:  "How can you condemn 100 people to certain death when there's such a good chance you can save them?  We'll all share the risk!  Even if it was only a 75% chance of saving everyone, it would still be worth it - so long as there's a chance - everyone makes it, or no one does!"

You know what?  This isn't about your feelings.  A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan.  Does computing the expected utility feel too cold-blooded for your taste?  Well, that feeling isn't even a feather in the scales, when a life is at stake.  Just shut up and multiply.

Previously on Overcoming Bias, I asked what was the least bad, bad thing that could happen, and suggested that it was getting a dust speck in your eye that irritated you for a fraction of a second, barely long enough to notice, before it got blinked away.  And conversely, a very bad thing to happen, if not the worst thing, would be getting tortured for 50 years.

Now, would you rather that a googolplex people got dust specks in their eyes, or that one person was tortured for 50 years?  I originally asked this question with a vastly larger number - an incomprehensible mathematical magnitude - but a googolplex works fine for this illustration.

Most people chose the dust specks over the torture.  Many were proud of this choice, and indignant that anyone should choose otherwise:  "How dare you condone torture!"

This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money.  When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).

My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective.  The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life.  After rejecting the report, the agency decided not to implement the measure.

Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful.  To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...

But let me ask you this.  Suppose you had to choose between one person being tortured for 50 years, and a googol people being tortured for 49 years, 364 days, 23 hours, 59 minutes and 59 seconds.  You would choose one person being tortured for 50 years, I do presume; otherwise I give up on you.

And similarly, if you had to choose between a googol people tortured for 49.9999999 years, and a googol-squared people being tortured for 49.9999998 years, you would pick the former.

A googolplex is ten to the googolth power.  That's a googol/100 factors of a googol.  So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
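A small Python sketch of that bookkeeping, working only with exponents since a googolplex itself cannot be written out; the nanosecond step size at the end is an illustrative assumption, not a figure from the post:

    # A googol is 10**100; a googolplex is 10**googol.
    googol = 10**100

    # Writing the googolplex as repeated factors of a googol:
    #   10**googol = (10**100)**k  whenever  100 * k == googol,
    # so k = googol // 100 factors of a googol, as claimed above.
    k = googol // 100
    assert 100 * k == googol

    # Even stepping the harm down by one nanosecond per trade, getting from
    # 50 years of torture to no discomfort at all takes only ~1.6e18 steps --
    # vastly fewer than the k multiplications by a googol available here.
    # (The nanosecond step size is an assumption made for illustration.)
    steps_needed = 50 * 365 * 24 * 60 * 60 * 10**9   # 50 years in nanoseconds
    assert steps_needed < k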

If you find your preferences are circular here, that makes rather a mockery of moral grandstanding.  If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren't going anywhere.  Maybe you think it a great display of virtue to choose for a googolplex people to get dust specks rather than one person being tortured.  But if you would also trade a googolplex people getting one dust speck for a googolplex/googol people getting two dust specks et cetera, you sure aren't helping anyone.  Circular preferences may work for feeling noble, but not for feeding the hungry or healing the sick. 

Altruism isn't the warm fuzzy feeling you get from being altruistic.  If you're doing it for the spiritual benefit, that is nothing but selfishness.  The primary thing is to help others, whatever the means.  So shut up and multiply!

And if it seems to you that there is a fierceness to this maximization, like the bare sword of the law, or the burning of the sun - if it seems to you that at the center of this rationality there is a small cold flame -

Well, the other way might feel better inside you.  But it wouldn't work.

And I say also this to you:  That if you set aside your regret for all the spiritual satisfaction you could be having - if you wholeheartedly pursue the Way, without thinking that you are being cheated - if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

But that part only works if you don't go around saying to yourself, "It would feel better inside me if only I could be less rational."

Chimpanzees feel, but they don't multiply.  Should you be sad that you have the opportunity to do better?  You cannot attain your full potential if you regard your gift as a burden.

Added:  If you'd still take the dust specks, see Unknown's comment on the problem with qualitative versus quantitative distinctions.

Comments (310)

400 people die, with certainty.

Should that be 100?

  1. 400 people die, with certainty.
  2. 90% chance no one dies; 10% chance 500 people die.

ITYM 1. 100 people die, with certainty.

Care to test your skills against the Repugnant Conclusion? http://plato.stanford.edu/entries/repugnant-conclusion/

A life barely worth living is worth living. I see no pressing need to disagree with the Repugnant Conclusion itself.

However, I suspect there is a lot of confusion between "a life barely worth living" and "a life barely good enough that the person won't commit suicide".

A life barely good enough that the person won't commit suicide is well into the negatives.

Not to mention the confusion between "a life barely worth living" and "a life that has some typical number of bad experiences in it and barely any good experiences".

9AndyC
I don't understand why it's supposed to be somehow better to have more people, even if they are equally happy. 10 billion happy people is better than 5 billion equally happy people? Why? It makes no intuitive sense to me, I have no innate preference between the two (all else equal), and yet I'm supposed to accept it as a premise.
0AlexanderRM
Isn't it usually brought up by people who want you to reject it as a premise, as an argument against hedonic positive utilitarianism? Personally I do disagree with that premise and more generally with hedonic utilitarianism. My utility function is more like "choice" or "freedom" (an ideal world would be one where everyone can do whatever they want, and in a non-ideal one we should try to optimize to get as close to that as possible), so based on that I have no preference with regards to people who haven't been born yet, since they're incapable of choosing whether or not to be alive. (on the other hand my intuition is that bringing dead people back would be good if it were possible... I suppose that if the dead person didn't want to die at the moment of death, that would be compatible with my ideas, and I don't think it's that far off from my actual, intuitive reasons for feeling that way.)
1altleft
It makes some sense in terms of total happiness, since 10 billion happy people would give a higher total happiness than 5 billion happy people.
3[anonymous]
But the Repugnant Conclusion is wrong. People who don't exist have no interest in existing; they don't have any interests, because they don't exist. To make the world a better place means making it a better place for people who already exist. If you add a new person to that pool of 'people who exist', then of course making the world a better place means making it a better place for that person as well. But there's no reason to go around adding imaginary babies (as in the example from part one of the linked article) to that pool for the sake of increasing total happiness. It's average happiness on a personal level -- not total happiness -- which makes people happy, and making people happy is sort of the whole point of 'making the world a better place'. Or else why bother? To be honest, the entire Repugnant Conclusion article felt a little silly to me.
1Jerdle
My answer to it is that it's a case of status quo bias. People see the world we live in as world A, and so status quo bias makes the repugnant conclusion repugnant. But, looking at the world, I see no reason to assume we aren't in world Z. So the question becomes, would it be acceptable to painlessly kill a large percentage of the population to make the rest happier, and the intuitive answer is no. But that is the same as saying world Z is better than world A, which is the repugnant conclusion.

Whilst your analysis of life-saving choices seems fairly uncontentious, I'm not entirely convinced that the arithmetic of different types of suffering adds together the way you assume. It seems at least plausible to me that where dust motes are individual points, torture is a section of a continuous line, and thus you can count the points, or you can measure the lengths of different lines, but no number of the former will add up to the latter.

A dust speck takes a finite time, not an instant. Unless I'm misunderstanding you, this makes them lines, not points.

4AndyC
You're misunderstanding. It has nothing to do with time -- it's not a time line. It means the dust motes are infinitesimal, while the torture is finite. A finite sum of infinitesimals is always infinitesimal. Not that you really need to use a math analogy here. The point is just that there is a qualitative difference between specks of dust and torture. They're incommensurable. You cannot divide torture by a speck of dust, because neither one is a number to start with.
-1AlexanderRM
I think the dust motes vs. torture makes sense if you imagine a person being bombarded with dust motes for 50 years. I could easily imagine a continuous stream of dust motes being as bad as torture (although possibly the lack of variation would make it far less effective than what a skilled torturer could do). Based on that, Eliezer's belief is just that the same number of dust motes spread out among many people is just as bad as one person getting hit by all of them. Which I will admit is a bit harder to justify. One possible way to make the argument is to think in terms of rules utilitarianism, and imagine a world where a huge number of people got the choice, then compare one where they all choose the torture vs. one where they all choose the dust motes- the former outcome would clearly be better. I'm pretty sure there are cases where this could be important in government policy.
2dxu
This is an interesting claim. Either it implies that the human brain is capable of detecting infinitesimal differences in utility, or else it implies that you should have no preference between having a dust speck in your eye and not having one in your eye.
-1Slider
There is a perfectly good way of treating this as numbers. Transfinite division is a thing. With X people experiencing infinitesimal discomfort and Y people experiencing finite discomfort, if X and Y are finite then torture is always worse. With X being transfinite, dust specks could be worse. But in reverse, if you insist that the impacts are reals, i.e. finites, then there are finite multiples that go past each other: for any r, y in R with r > 0 and y > r, there is a z such that rz > y.

I'm sorry, but I find this line of argument not very useful. If I remember correctly (which I may not be doing), a googolplex is larger than the estimated number of atoms in the universe. Nobody has any idea of what it implies except "really, really big", so when your concepts get up there, people have to do the math, since the numbers mean nothing. Most of us would agree that having a really, really large number of people bothered just a bit is better than having one person suffer for a long life. That has little to do with math and a lot to do with o... (read more)

-2Strange7
They've probably already had sex once by then, and thus a fair chance to pass on their genes. Notice that we're not as eager to send 18-year-old women off to war.
[-]Dojan140

Nobody has any idea of what it implies except "really, really big", so when your concepts get up there, people have to do the math, since the numbers mean nothing.

This applies just as much to numbers such as a million and a billion, which people mix up regularly; the problem, though, is that people don't do the math, despite not understanding the magnitudes of the numbers, and those numbers of people are actually around.

Personally, if I first try to visualize a crowd of a hundred people, and then a crowd of a thousand, the second crowd seems about three times as large. If I start with a thousand, and then try a hundred, this time around the hundred-person crowd seems a lot bigger than it did last time. And the bigger the numbers I try with, the worse it gets, and there is a long way to go to get to 7,000,000,000 (the number of people on Earth). All sorts of biases seem to be at work here, anchoring among them. Result: Shut up and multiply!

[Edit: Spelling]

2Normal_Anomaly
This is an excellent point, but your spelling errors are distracting. You said "av" seven times when you meant "a", and "ancoring" in the last line should be "anchoring".
2Dojan
Wow, I must have been half asleep when writing that...
9Dojan
This is further evidenced by the fact that most people don't know about the long and short scales, and never noticed.

One can easily make an argument like the torture vs. dust specks argument to show that the Repugnant Conclusion is not only not repugnant, but certainly true.

More intuitively, if it weren't true, we could find some population of 10,000 persons at some high standard of living, such that it would be morally praiseworthy to save their lives at the cost of a googolplex galaxies filled with intelligent beings. Most people would immediately say that this is false, and so the Repugnant Conclusion is true.

1AlexanderRM
Note here that the difference is between the deaths of currently-living people, and preventing the births of potential people. In hedonic utilitarian terms it's the same, but you can have other utilitarian schemes (ex. choice utilitarianism as I commented above) where death either has an inherent negative value, or violates the person's preferences against dying. BTW note that even if you draw no distinction, your thought experiment doesn't necessarily prove the Repugnant Conclusion. The third option is to say that because the Repugnant Conclusion is false, it must be that the automatic response to your thought experiment is incorrect, i.e. that it's OK to wipe out a googolplex galaxies full of people with lives barely worth living to save 10,000 people. Although I feel like most people, if they rejected the killing/preventing birth distinction, would go with the Repugnant Conclusion over that.
0dxu
Interestingly enough, I don't find the Repugnant Conclusion all that repugnant. Is there anyone else here who shares this intuition?
[-]Lee60

Eliezer, I am skeptical that sloganeering ("shut up and calculate") will get you across this philosophical chasm: Why do you define the best one-off choice as the choice that would be preferred over repeated trials?

Can someone please post a link to a paper on mathematics, philosophy, anything, that explains why there's this huge disconnect between "one-off choices" and "choices over repeated trials"? Lee?

Here's the way across the philosophical "chasm": write down the utility of the possible outcomes of your action. Use probability to find the expected utility. Do it for all your actions. Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.

You mi... (read more)
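A toy illustration of the claim above about incoherent preferences, as a minimal Python sketch (the agent, the three outcomes, and the one-cent trading fee are assumptions invented for the example, not anything proposed in the thread):

    # An agent with circular preferences (A over B, B over C, C over A) that
    # will pay one cent to swap to anything it prefers over what it holds.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}

    holding, cents = "A", 100
    for offered in ["C", "B", "A"] * 5:          # cycle the offers
        if (offered, holding) in prefers:        # "I prefer that -- I'll trade."
            holding, cents = offered, cents - 1  # pays the fee to trade

    # Back where it started, but 15 cents poorer: the circular preferences
    # acted as a money pump.
    print(holding, cents)   # A 85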

0josinalvo
http://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterated_prisoners.27_dilemma (just an example of such a disconnect, not a general defence of disconnects)
[-]Lee160

Consider these two facts about me:

(1) It is NOT CLEAR to me that saving 1 person with certainty is morally equivalent to saving 2 people when a fair coin lands heads in a one-off deal.

(2) It is CLEAR to me that saving 1000 people with p=.99 is morally better than saving 1 person with certainty.

Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.

Why should we do that?

3DPiepgrass
In a world sufficiently replete with aspiring rationalists there will be not just one chance to save lives probabilistically, but (over the centuries) many. By the law of large numbers, we can be confident that the outcome of following the expected-value strategy consistently (even if any particular person only makes a choice like this zero or one times in their life) will be that more total lives will be saved.

Some people believe that "being virtuous" (or suchlike) is better than achieving a better society-level outcome. To that view I cannot say it better than Eliezer: "A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan."

I see a problem with Eliezer's strategy that is psychological rather than moral: if 500 people die, you may be devastated, especially if you find out later that the chance of failure was, say, 50% rather than 10%. Consequentialism asks us to take this into account. If you are a general making battle decisions, which would weigh on you more? The death of 500 (in your effort to save 100), or abandoning 100 to die at enemy hands, knowing you had a roughly 90% chance to save them? Could that adversely affect future decisions? (In specific scenarios we must also consider other things, e.g. in this case whether it's worth the cost in resources - military leaders know, or should know, that resources can be equated with lives as well...)

Note: I'm pretty confident Eliezer wouldn't object to you using your moral sense as a tiebreaker if you had the choice between saving one person with certainty and two people with 50% probability.

Torture vs dust specks, let me see:

What would you choose for the next 50 days:

  1. Removing one milliliter of the daily water intake of 100,000 people.
  2. Removing 10 liters of the daily water intake of 1 person.

The consequence of choice 2 would be the death of one person.

Yudkowsky would choose 2, I would choose 1.

This is a question of threshold. Below certain thresholds things don't have much effect. So you cannot simply add up.

Another example:

  1. Put 1 coin on the head of each of 1,000,000 people.
  2. Put 100,000 coins on the head of one guy.

What do you choose? Can we add up the discomfort caused by the one coin on each of 1,000,000 people?

These are simply false comparisons.

Had Eliezer talked about torturing someone through the use of a googolplex of dust specks, your comparison might have merit, but as is it seems to be deliberately missing the point.

Certainly, speaking for someone else is often inappropriate, and in this case is simple strawmanning.

8bgaesop
I really don't see how his comparison is wrong. Could you explain in more depth, please?
[-]ata170

The comparison is invalid because the torture and dust specks are being compared as negatively-valued ends in themselves. We're comparing U(torture one person for 50 years) and U(dust speck one person) * 3^^^3. But you can't determine whether to take 1 ml of water per day from 100,000 people or 10 liters of water per day from 1 person by adding up the total amount of water in each case, because water isn't utility.

Perhaps this is just my misunderstanding of utility, but I think his point was this: I don't understand how adding up utility is obviously a legitimate thing to do, just like how you claim that adding up water denial is obviously not a legitimate thing to do. In fact, it seems to me as though the negative utility of getting a dust speck in the eye is comparable to the negative utility of being denied a milliliter of water, while the negative utility of being tortured for a lifetime is more or less equivalent to the negative utility of dying of thirst. I don't see why it is that the one addition is valid while the other isn't.

If this is just me misunderstanding utility, could you please point me to some readings so that I can better understand it?

9ata
To start, there's the Von Neumann–Morgenstern theorem, which shows that given some basic and fairly uncontroversial assumptions, any agent with consistent preferences can have those preferences expressed as a utility function. That does not require, of course, that the utility function be simple or even humanly plausible, so it is perfectly possible for a utility function to specify that SPECKS is preferred over TORTURE. But the idea that doing an undesirable thing to n distinct people should be around n times as bad as doing it to one person seems plausible and defensible, in human terms. There's some discussion of this in The "Intuitions" Behind "Utilitarianism". (The water scenario isn't comparable to torture vs. specks mainly because, compared to 3^^^3, 100,000 is approximately zero. If we changed the water scenario to use 3^^^3 also, and if we assume that having one fewer milliliter of water each day is a negatively terminally-valued thing for at least a tiny fraction of those people, and if we assume that the one person who might die of dehydration wouldn't otherwise live for an extremely long time, then it seems that the latter option would indeed be preferable.)
1Will_Sawin
In particular, VNM connects utility with probability, so we can use an argument based on probability. One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person. One person gaining N utility should be equally good as one randomly selected person out of N people gaining N utility. Now we analyze it from each person's perspective. They each have a 1/N chance of gaining N utility. This is 1 unit of expected utility, so they find it as good as surely gaining one unit of utility. If they're all indifferent between one person gaining N and everyone gaining 1, who's to disagree?
-6bgaesop
3roystgnr
If you look at the assumptions behind VNM, I'm not at all sure that the "torture is worse than any amount of dust specks" crowd would agree that they're all uncontroversial. In particular the axioms that Wikipedia labels (3) and (3') are almost begging the question. Imagine a utility function that maps events, not onto R, but onto (R x R) with a lexicographical ordering. This satisfies completeness, transitivity, and independence; it just doesn't satisfy continuity or the Archimedean property. But is that the end of the world? Look at continuity: if L is torture plus a dust speck (utility (-1,-1)), M is just torture (utility (-1,0)) and N is just a dust speck ((0,-1)), then must there really be a probability p such that pL + (1-p)N = M? Or would it instead be permissible to say that for p=1, torture plus dust speck is still strictly worse than torture, whereas for any p<1, any tiny probability of reducing the torture is worth a huge probability of adding that dust speck to it? (edited to fix typos)
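For concreteness, here is a minimal Python sketch of the lexicographically ordered utility described in the comment above (the particular payoff tuples are illustrative assumptions):

    # Utilities live in R x R, compared lexicographically: the first coordinate
    # tracks "serious" harm (torture), the second "trivial" harm (specks).
    # Python already compares tuples lexicographically.
    TORTURE = (-1, 0)
    TORTURE_PLUS_SPECK = (-1, -1)

    def specks(n):
        """Total disutility of n dust specks, none of which rises to torture."""
        return (0, -n)

    # No finite number of specks ever outweighs one torture...
    assert specks(10**100) > TORTURE
    # ...yet specks are not valued at zero: adding one to the torture makes
    # things strictly worse. Continuity and the Archimedean property fail,
    # exactly as claimed.
    assert TORTURE_PLUS_SPECK < TORTURE < specks(1)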
4AgentME
Agree - I was kind of thinking of it as friction. Say you have 1000 boxes in a warehouse, all precisely where they need to be. Being close to their current positions is better than not. Is it better to A) apply 100 N of force over 1 second to 1 box, or B) 1 N of force over 1 second to all 1000 boxes? Well, if they're frictionless and all on a level surface, do option A because it's easier to fix, but that's not how the world is. Say that 1 N against the boxes isn't even enough to defeat the static friction: that means in option B, none of the boxes will even move.

Back to the choice between A) having a googolplex of people get a speck of dust in their eye vs. B) one person being tortured for 50 years: in option A, you have a googolplex of people who lead productive lives and don't even remember that anything out of the ordinary happened to them (assuming one single dust speck doesn't even pass the memorability threshold), and in option B, you have a googolplex minus 1 people leading productive lives who don't remember anything out of the ordinary happening, and one person being tortured and never accomplishing anything.

Eliezer, can you explain what you mean by saying "it's the same gamble"? If the point is to compare two options and choose one, then what matters is their values relative to each other. So, 400 certain lives saved is better than a 90% chance of 500 lives saved and 10% chance of 500 deaths, which is itself better than 400 certain deaths.

Perhaps it would help to define the parameters more clearly. Do your first two options have an upper limit of 500 deaths (as the second two options seem to), or is there no limit to the number of deaths that may occur apart from the lucky 4-500?

Many were proud of this choice, and indignant that anyone should choose otherwise: "How dare you condone torture!"

I don't think that's a fair characterization of that debate. A good number of people using many different reasons thought something along the lines of negligible "harm" * 3^^^3 < 50 years of torture. That many people spraining their ankle or something would be a different story. Those harms are different enough that it's by no means obvious which we should prefer, and it's not clear that trying to multiply is really productive, whereas your examples in this entry are indeed obvious.

"The primary thing is to help others, whatever the means. So shut up and multiply!"

Would you submit to torture for 50 years to save countless people? I'm not sure I would, but I think I'm more comfortable with the idea of being self-interested and seeing all things through the prism of self interest.

Similar problem: if you had this choice--you can die peacefully and experience no afterlife, or literally experience hell for 100 years and then be rewarded with an eternity of heaven--would you choose the latter? Calculating which provides the greatest utility, the latter would be preferable, but I'm not sure I would choose it.

Eliezer, as I'm sure you know, not everything can be put on a linear scale. Momentary eye irritation is not the same thing as torture. Momentary eye irritations should be negligible in the moral calculus, even when multiplied by googolplex^^^googolplex. 50 years of torture could break someone's mind and lead to their destruction. You're usually right on the mark, but not this time.

[-]phob110

Would you pay one cent to prevent a googolplex of people from having a momentary eye irritation?

Torture can be put on a money scale as well: many many countries use torture in war, but we don't spend huge amounts of money publicizing and shaming these people (which would reduce the amount of torture in the world).

In order to maximize the benefit of spending money, you must weigh sacred against unsacred.

5jeremysalwen
I certainly wouldn't pay that cent if there was an option of preventing 50 years of torture using that cent. There's nothing to say that my utility function can't take values in the surreals.
5AndyC
There's an interesting paper on microtransactions and how human rationality can't really handle decisions about values under a certain amount. The cognitive effort of making a decision outweighs the possible benefits of making the decision. How much time would you spend making a decision about how to spend a penny? You can't make a decision in zero time, it's not physically possible. Rationally you have to round off the penny, and the speck of dust.

To get back to the 'human life' examples EY quotes. Imagine instead the first scenario pair as being the last lifeboat on the Titanic. You can launch it safely with 40 people on board, or load in another 10 people, who would otherwise die a certain, wet, and icy death, and create a 1 in 10 chance that it will sink before the Carpathia arrives, killing all. I find that a strangely more convincing case for option 2. The scenarios as presented combine emotionally salient and abstract elements, with the result that the emotionally salient part will tend to be foreground, and the '% probabilities' as background. After all no-one ever saw anyone who was 10% dead (jokes apart).

Eliezer's point would have been valid, had he chosen almost anything other than momentary eye irritation. Even the momentary eye-irritation example would work if the eye irritation would lead to serious harm (e.g. eye inflammation and blindness) in a small proportion of those afflicted with the speck of dust. If the predicted outcome was millions of people going blind (and then you have to consider the resulting costs to society), then Eliezer is absolutely right: shut up and do the math.

-1HungryHobo
Imagine that you had the choice, but once you've made that choice it will be applied the same way whenever someone is about to get tortured: magic intervenes, saves that one person, and a googolplex other people get a speck in their eye. It feels like it's not a big deal if it happens once or twice, but imagine that across all the universes where it applies it ended up triggering 3,153,600,000 times - not even half the population of our world. Suddenly a googolplex of people are suffering constantly and half blinded most of the time. It feels small when it happens once, but the same has to apply when it happens again and again.
[-]Lee20

GreedyAlgorithm, this is the conversation I want to have.

The sentence in your argument that I cannot swallow is this one: "Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences." This is circular, is it not?

You want to establish that any decision, x, should be made in accordance w/ maximum expected utility theory ("shut up and calculate"). You ask me to consider X = {x_i}, the set of many decisions over my life ("after a while"). You sa... (read more)

2pandamodium
This whole argument only washes if you assume that things work "normally" (e.g. like they do in the real field, e.g. are subject to the axioms that make addition/subtraction/calculus work). In fact we know that utility doesn't behave normally when considering multiple agents (as proved by Arrow's impossibility theorem), so the "correct" answer is that we can't have a true Pareto-optimal solution to the eye-dust-vs-torture problem. There is no reason why you couldn't construct a ring/field/group for utility which produced some of the solutions the OP dismisses, and in fact IMO those would be better representations of human utility than a straight normal interpretation.
[-]Lee00

(I should say that I assumed that a bag of decisions is worth as much as the sum of the utilities of the individual decisions.)

I'm seconding the worries of people like the anonymous of the first comment and Wendy. I look at the first, and I think "with no marginal utility, it's an expected value of 400 vs an expected value of 450." I look at the second and think "with no marginal utility, it's an expected value of -400 vs. an expected value of -50." Marginal utility considerations--plausible if these are the last 500 people on Earth--sway the first case much more easily than they do the second case.

So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort...

Eliezer, your readiness to assume that all 'bad things' are on a continuous scale, linear or no, really surprises me. Put your enormous numbers away, they're not what people are taking umbrage at. Do you think that if a googol doesn't convince us, perhaps a googolplex will? Or maybe 3^^^3? If x and y are finite, there will always be a quantity of x that exceeds y, and vice versa. We get the maths, we just don't agree that the phenomena are comparable. Broken ankle? Stubbing your toe? Possibly, there is certainly more of a tangible link there, but you're still imposing your judgment on how the mind experiences and deals with discomfort on us all and calling it rationality. It isn't.

Put simply - a dust mote registers exactly zero on my torture scale, and torture registers fundamentally off the scale (not just off the top, off) on my dust mote scale.

You're asking how many biscuits equal one steak, and then when one says 'there is no number', accusing him of scope insensitivity.

1phob
So you wouldn't pay one cent to prevent 3^^^3 people from getting a dust speck in their eye?
5Hul-Gil
Sure. My loss of utility from losing the cent might be less than the gain in utility for those people to not get dust specks - but these are both what Ben might consider trivial events; it doesn't address the problem Ben Jones has with the assumption of a continuous scale. I'm not sure I'd pay $100 for any amount of people to not get specks in their eyes, because now we may have made the jump to a non-trivial cost for the addition of trivial payoffs.
2Salivanth
Ben Jones didn't recognise the dust speck as "trivial" on his torture scale, he identified it as "zero". There is a difference: If dust speck disutility is equal to zero, you shouldn't pay one cent to save 3^^^3 people from it. 0 * 3^^^3 = 0, and the disutility of losing one cent is non-zero. If you assign an epsilon of disutility to a dust speck, then 3^^^3 * epsilon is way more than 1 person suffering 50 years of torture. For all intents and purposes, 3^^^3 = infinity. The only way that infinity * X can fail to be worse than a finite number is if X is equal to 0. If X = 0.00000001, then torture is preferable to dust specks.

Well, he didn't actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn't on his dust-mote scale, so he isn't just using "torture scale" as a synonym for "disutility scale"; rather, he is emphasizing that there is more than just a single "(dis)utility scale" involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of "how the mind experiences and deals with [them]", such that no amount of dust motes can add up to the experience of torture... even if they (the motes) have a nonzero amount of disutility.

I believe I am making much the same distinction with my separation of disutility into trivial and non-trivial categories, where no amount of trivial disutility across multiple people can sum to the experience of non-trivial disutility. There is a fundamental gap in the scale (or different scales altogether, à la Jones), a difference in how different amounts of disutility work for humans. For a more concrete example of how this might work, suppose I steal one cent each from one billi... (read more)

4Salivanth
You might be right. I'll have to think about this, and reconsider my stance. One billion is obviously far less than 3^^^3, but you are right in that the 10 million dollars stolen by you would be preferable to me to the 100,000 dollars stolen by Eliezer. I also consider losing 100,000 dollars less than or equal to 100,000 times as bad as losing one dollar. This indicates one of two things: A) My utility system is deeply flawed. B) My utility system includes some sort of 'diffusion factor' wherein a disutility of X becomes <X when divided among several people, and the disutility becomes lower the more people it's divided among. In essence, there is some disutility for one person suffering a lot of disutility, that isn't there when it's divided among a lot of people. Of this, B seems more likely, and I didn't take it into account when considering torture vs. dust specks. In any case, some introspection on this should help me further define my utility function, so thanks for giving me something to think about.
5Desrtopa
Assuming that none of them end up one cent short for something they would otherwise have been able to pay for, which out of a billion people is probably going to happen. It doesn't have to be their next purchase.
7OnTheOtherHandle
But this is analogous to saying some tiny percentage of the people who got dust specks would be driving a car at that moment and lose control, resulting in an accident. That would be an entirely different ballgame, even if the percentage of people this happened to was unimaginably tiny, because in an unimaginably vast population, lots of people are bound to die of gruesome dust-speck related accidents. But Eliezer explicitly denied any externalities at all; in our hypothetical the chance of accidents, blindness, etc. is literally zero. So the chances of not being able to afford a vital heart transplant or whatever for want of a penny must also be literally zero in the analogous hypothetical, no matter how ridiculously large the population gets.
4Desrtopa
Not being able to pay for something due to the loss of money isn't an externality, it's the only kind of direct consequence you're going to get. If you took a hundred thousand dollars from an individual, they might still be able to make their next purchase, but the direct consequence would be their being unable to pay for things they could previously have afforded.
0OnTheOtherHandle
Another thing that seems to be a factor, at least for me, is that there's a term in my utility function for "fairness," which usually translates to something roughly similar to "sharing of burdens." (I also have a term for "freedom," which is in conflict with fairness but is on the same scale and can be traded off against it.) Why wouldn't this be a situation in which "the complexity of human value" comes into play? Why is it wrong to think something along the lines of, "I would be willing to make everyone a tiny bit worse off so that no one person has to suffer obscenely"?

It's the rationale behind taxation, and while it's up for debate, many Less Wrongers support moderate taxation if it would help a few people a lot while hurting a bunch of people a little bit. Think about it: the exact dollars taken from people in taxes don't go directly toward feeding the hungry. Some of it gets eaten up in bureaucratic inefficiencies, some of it goes to bribery and embezzlement, some of it goes to the military. This means if you taxed 1,000,000 well-off people $1 each, but only ended up giving 100 hungry people $1000 each to stave off a painful death from starvation, we as utilitarians would be absolutely, 100% obligated to oppose this taxation system, not because it's inefficient, but because doing nothing would be better. There is to be no room for debate; it's $100,000 - $1,000,000 = net loss; let the 100 starving peasants die.

Note that you may be a libertarian and oppose taxation on other grounds, but most libertarians wouldn't say you are literally doing morality wrong if you think it's better to take $1 each from a million people, even if only $100,000 of it gets used to help the poor.

I could easily be finding ways to rationalize my own faulty intuitions - but I managed to change my mind about Newcomb's problem and about the first example given in the above post despite powerful initial intuitions, and I managed to work the latter out for myself. So I think,
5AndyC
That makes no sense. Just because one thing cost $1, and another thing cost $1000, does not mean that the first thing happening 1001 times is better than the second one happening once. Preferences logically precede prices. If they didn't, nobody would be able to decide what they were willing to spend on anything in the first place. If utilitarianism requires that you decide the value of things based on their prices, then utilitarians are conformists without values of their own, who derive all of their value judgments from non-utilitarian market participants who actually have values. (Besides, money that is spent on "overhead" does not magically disappear from the economy. Someone is still being paid to do something with that money, who in turn buys things with the money, and so on. And even if the money does disappear -- say, dollar bills are burnt in a furnace -- it still would not represent a loss of productive capacity in the economy. Taxing money and then completely destroying the money (shrinking the money supply) is sound monetary policy, and it occurs on a regular (cyclical) basis. Your whole argument here is a complete non-starter.)
2Multiheaded
As a rather firm speck-ist, I'd like to say that this is the best attempt at a formal explanation of speckism that I've read so far! I'm grateful for this, and pleased that I no longer need to use muddier and vaguer justifications.
9Scott Alexander
Thank you for trying to address this problem, as it's important and still bothers me. But I don't find your idea of two different scales convincing. Consider electric shocks. We start with an imperceptibly low voltage and turn up the dial until the first level at which the victim is able to perceive slight discomfort (let's say one volt). Suppose we survey people and find that a one volt shock is about as unpleasant as a dust speck in the eye, and most people are indifferent between them. Then we turn the dial up further, and by some level, let's say two hundred volts, the victim is in excruciating pain. We can survey people and find that a two hundred volt shock is equivalent to whatever kind of torture was being used in the original problem. So one volt is equivalent to a dust speck (and so on the "trivial scale"), but two hundred volts is equivalent to torture (and so on the "nontrivial scale"). But this implies either that triviality exists only in degree (which ruins the entire argument, since enough triviality aggregated equals nontriviality) or that there must be a sharp discontinuity somewhere (eg a 21.32 volt shock is trivial, but a 21.33 volt shock is nontrivial). But the latter is absurd. Therefore there should not be separate trivial and nontrivial utility scales.
6fubarobfusco
Except perception doesn't work like that. We can have two qualitatively different perceptions arising from quantities of the same stimulus. We know that irritation and pain use different nerve endings, for instance; and electric shock in different quantities could turn on irritation at a lower threshold than pain. Similarly, a dim colored light is perceived as color on the cone cells, while a very bright light of the same frequency is perceived as brightness on the rod cells. A baby wailing may be perceived as unpleasant; turn it up to jet-engine volume and it will be perceived as painful.
2Scott Alexander
Okay, good point. But if we change the argument slightly to the smallest perceivable amount of pain it's still biting a pretty big bullet to say 3^^^3 of those is worse than 50 years of torture. (the theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn't seem to be true)
2Nornagest
I'm increasingly convinced that the whole Torture vs. Dust Specks scenario is sparking way more heat than light, but... I can imagine situations where an infinite amount of some type of irritation integrated to something equivalent to some finite but non-tiny amount of pain. I can even imagine situations where that amount was a matter of preference: if you asked someone what finite level of pain they'd accept to prevent some permanent and annoying but non-painful condition, I'd expect the answers to differ significantly. Granted, "lifelong" is not "infinite", and there's hyperbolic discounting and various other issues to correct for, but even after these corrections a finite answer doesn't seem obviously wrong.
0fubarobfusco
Well, for one thing, pain is not negative utility .... Pain is a specific set of physiological processes. Recent discoveries suggest that it shares some brain-space with other phenomena such as social rejection and math anxiety, which are phenomenologically distinct. It is also phenomenologically distinct from the sensations of disgust, grief, shame, or dread — which are all unpleasant and inspire us to avoid their causes. Irritation, anxiety, and many other unpleasant sensations can take away from our ability to experience pleasure; many of them can also make us less effective at achieving our own goals. In place of an individual experiencing "50 years of torture" in terms of physiological pain, we might consider 50 years of frustration, akin to the myth of Sisyphus or Tantalus; or 50 years of nightmare, akin to that inflicted on Alex Burgess by Morpheus in The Sandman ....
3drnickbone
Hmm, not sure. It seems quite plausible to me that for any n, an instance of real harm to one person is worse than n instances of completely harmless irritation to n people. Especially if we consider a bounded utility function; the n instances of irritation have to flatten out at some finite level of disutility, and there is no a priori reason to exclude torture to one person having a worse disutility than that asymptote.

Having said all that, I'm not sure I buy into the concept of completely harmless irritation. I doubt we'd perceive a dust speck as a disutility at all except for the fact that it has a small probability of causing big harm (loss of life or offspring) somewhere down the line.

A difficulty with the whole problem is the stipulation that the dust specks do nothing except cause slight irritation... no major harm results to any individual. However, throwing a dust speck in someone's eye would in practice have a very small probability of very real harm, such as distraction while operating dangerous machinery (driving, flying etc.), starting an eye infection which leads to months of agony and loss of sight, a slight shock causing a stumble and broken limbs or leading to a bigger shock and heart attack. Even the very mild irritation may be enough to send an irritable person "over the edge" into punching a neighbour, or a gun rampage, or a borderline suicidal person into suicide. All these are spectacularly unlikely for each individual, but if you multiply by 3^^^3 people you still get order 3^^^3 instances of major harm.
4AndyC
With that many instances, it's even highly likely that at least one of the specks in the eye will offer a rare opportunity for some poor prisoner to escape his captors, who had intended to subject him to 50 years of torture.
5AndyC
First of all, you might benefit from looking up the beard fallacy.

To address the issue at hand directly, though: Of course there are sharp discontinuities. Not just one sharp discontinuity, but countless. However, there is no particular voltage at which there is a discontinuity. Rather, increasing the voltage increases the probability of a discontinuity. I will list a few discontinuities established by torture.

  1. Nightmares. As the electrocution experience becomes more severe, the probability that it will result in a nightmare increases. After 50 years of high voltage, hundreds or even thousands of such nightmares are likely to have occurred. However, 1 second of 1V is unlikely to result in even a single nightmare. The first nightmare is a sharp discontinuity. But furthermore, each additional nightmare is another sharp discontinuity.
  2. Stress responses to associational triggers. The first such stress response is a sharp discontinuity, but so is every other one. But please note that there is a discontinuity for each instance of stress response that follows in your life: each one is its own discontinuity. So, if you will experience 10,500 stress responses, that is 10,500 discontinuities. It's impossible to say beforehand what voltage or how many seconds will make the difference between 10,499 and 10,500, but in theory a probability could be assigned. I think there are already actual studies that have measured the increased stress response after electroshock, over short periods.
  3. Flashbacks. Again, the first flashback is a discontinuity; as is every other flashback. Every time you start crying during a flashback is another discontinuity.
  4. Social problems. The first relationship that fails (e.g., first woman that leaves you) because of the social ramifications of damage to your psyche is a discontinuity. Every time you flee from a social event: another discontinuity. Every fight that you have with your parents as a result of your torture (and the fact
0AlexanderRM
A better metaphor: What if we replaced "getting a dust speck in your eye" with "being horribly tortured for one second"? Ignore the practical problems of the latter, just say the person experiences the exact same (average) pain as being horribly tortured, but for one second. That allows us to directly compare the two experiences much better, and it seems to me it eliminates the "you can't compare the two experiences"- except of course with long term effects of torture, I suppose; to get a perfect comparison we'd need a torture machine that not only does no physical damage, but no psychological damage either. On the other hand, it does leave in OnTheOtherHandle's argument about "fairness" (specifically in the "sharing of burdens" definition, since otherwise we could just say the person tortured is selected at random). Which to me as a utilitarian makes perfect sense; I'm not sure if I agree or disagree with him on that.
3Eliezer Yudkowsky
Isn't this a reductio of your argument? Stealing $10,000,000 has less economic effect than stealing $100,000, really? Well, why don't we just do it over and over, then, since it has no effect each time? If I repeated it enough times, you would suddenly decide that the average effect of each $10,000,000 theft, all told, had been much larger than the average effect of the $100,000 theft. So where is the point at which, suddenly, stealing 1 more cent from everyone has a much larger and disproportionate effect, enough to make up for all the "negligible" effects earlier? See also: http://lesswrong.com/lw/n3/circular_altruism/
5Bugmaster
It seems like you and Hul-Gil are using different formulae for evaluating utility (or, rather, disutility); and, therefore, you are talking past each other. While Hul-Gil is looking solely at the immediate purchasing power of each individual, you are considering ripple effects affecting the economy as a whole. Thus, while stealing a single penny from a single individual may have negligible disutility, removing 1e9 such pennies from 1e9 individuals will have a strong negative effect on the economy, thus reducing the effective purchasing power of everyone, your victims included. This is a valid point, but it doesn't really lend any support to either side in your argument with Hul-Gil, since you're comparing apples and oranges.
2IainM
I'm pretty sure Eliezer's point holds even if you only consider the immediate purchasing power of each individual. Let us define thefts A and B:

  A: Steal 1 cent from each of 1e9 individuals.
  B: Steal 1e7 cents from 1 individual.

The claim here is that A has negligible disutility compared to B. However, we can define a new theft C as follows:

  C: Steal 1e7 cents from each of 1e9 individuals.

Now, I don't discount the possibility that there are arguments to the contrary, but naively it seems that a C theft is 1e9 times as bad as a B theft. But a C theft is equivalent to 1e7 A thefts. So, necessarily, one of those A thefts must have been worse than a B theft - substantially worse. Eliezer's question is: if the first one is negligible, at what point do they become so much worse?
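A quick sanity check of the totals in that comparison, as a Python sketch (it only adds up cents, which is of course exactly the naive linear accounting under dispute):

    # Totals stolen, in cents, under naive "just add it up" accounting.
    A = 1 * 10**9          # 1 cent from each of 1e9 individuals
    B = 10**7 * 1          # 1e7 cents ($100,000) from one individual
    C = 10**7 * 10**9      # 1e7 cents from each of 1e9 individuals

    # C is literally 1e7 A-thefts stacked up, and per victim it is 1e9 B-thefts.
    assert C == 10**7 * A == 10**9 * B
    # So if C is ~1e9 times as bad as B, the 1e7 A-thefts composing it cannot
    # all be "negligible" relative to B -- which is the question posed above.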
1Bugmaster
I think this is a question of ongoing collateral effects (not sure if "externalities" is the right word to use here). The examples that speak of money are additionally complicated by the fact that the purchasing power of money does not scale linearly with the amount of money you have. Consider the following two scenarios:

  A) Inflict -1e-3 utility on 1e9 individuals with negligible consequences over time, or
  B) Inflict a -1e7 utility on a single individual, with further -1e7 consequences in the future.

vs.

  C) Inflict a -1e-3 utility on 1e9 individuals leading to an additional -1e9 utility over time, or
  D) Inflict a one-time -1e7 utility on a single individual, with no additional consequences.

Which one would you pick, A or B, and C or D?

Of course, we can play with the numbers to make A and C more or less attractive. I think the problem with Eliezer's "dust speck" scenario is that his disutility of option A -- i.e., the dust specks -- is basically epsilon, and since it has no additional costs, you might as well pick A. The alternative is a rather solid chunk of disutility -- the torture -- that will further add up even after the initial torture is over (due to ongoing physical and mental health problems).

The "grand theft penny" scenario can be seen as AB or CD, depending on how you think about money; and the right answer in either case might change depending on how much you think a penny is actually worth.
6CCC
Money is not a linear function of utility. A certain amount is necessary to existence (enough to obtain food, shelter, etc.). A person's first dollar is thus a good deal more valuable than a person's millionth dollar, which is in turn more valuable than their billionth dollar. There is clearly some additional utility from each additional dollar, but I suspect that the total utility may well be asymptotic.

The total disutility of stealing an amount of money, $X, from a person with total wealth $Y, is (at least approximately) equal to the difference in utility between $Y and $(Y-X). (There may be some additional disutility from the fact that a theft occurred - people may worry about being the next victim or falsely accuse someone else or so forth - but that should be roughly equivalent for any theft, and thus I shall disregard it.)

So. Stealing one dollar from a person who will starve without that dollar is therefore worse than stealing one dollar from a person who has a billion more dollars in the bank. Stealing one dollar from each of one billion people, who will each starve without that dollar, is far, far worse than stealing $100 000 from one person who has another $1e100 in the bank. Stealing $100 000 from a person who only had $100 000 to start with is worse than stealing $1 from each of one billion people, each of whom have another billion dollars in savings.

Now, if we assume a level playing field - that is, that every single person starts with the same amount of money (say, $1 000 000) and no-one will starve if they lose $100 000 - then it begins to depend on the exact function used to find the utility of money. There are functions such that a million thefts of $1 each results in less disutility than a single theft of $100 000. (If asked to find an example, I will take a simple exponential function and fiddle with the parameters until this is true.) However, if you continue adding additional thefts of $1 each fro
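A minimal Python sketch of the kind of parameter-fiddling alluded to above: a saturating (exponential) utility of wealth with an assumed scale of $20,000, under which a million $1 thefts from millionaires cost less total utility than one $100,000 theft. The function and its scale are assumptions chosen to make the example work, not anything specified in the comment.

    import math

    S = 20_000.0                  # assumed saturation scale of the utility curve
    def utility(wealth):
        return 1.0 - math.exp(-wealth / S)

    start = 1_000_000.0           # the level playing field described above

    million_small_thefts = 1_000_000 * (utility(start) - utility(start - 1))
    one_big_theft = utility(start) - utility(start - 100_000)

    # With curvature this extreme, the million $1 thefts are (in total) the
    # lesser harm; with a gentler curve (say S = 100_000) the comparison flips.
    assert million_small_thefts < one_big_theft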
0cousin_it
Yeah, but also keep in mind that people's utility functions cannot be very concave. (My rephrasing is pretty misleading but I can't think of a better one, do read the linked post.)
0CCC
Hmmm. The linked post talks about the perceived utility of money; that is, what the owner of the money thinks it is worth. This is not the same as the actual utility of money, which is what I am trying to use in the grandparent post. I apologise if that was not clear, and I hope that this has cleared up any lingering misunderstandings.
0AlexanderRM
"But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual." A question for comparison: would you rather have a 1/Googolplex chance of being tortured for 50 years, or lose 1 cent? (A better comparison in this case would be if you replaced "tortured for 50 years" with "death".) Also: for the original metaphor, imagine that you aren't the only person being offered this choice, and that the people suffering the consequences are out of the same pool- which is how real life works, although in this world we have a population of 1 googolplex rather than 7 billion. If we replace "dust speck" with "horribly tortured for 1 second", and we give 1.5 billion people the same choice and presume they all make the same decision, then the choice is between 1.5 billion people being horribly tortured for 50 years, and 1 googolplex people begin horribly tortured for 50 years.
2Jiro
Whenever I drive, I have a greater than 1/googolplex chance of getting into an accident that would leave me suffering for 50 years, and I still drive. I'm not sure how to measure the benefit I get from driving, but there are at least some cases where it's pretty small, even if it's not exactly a cent.
4soreff
Whenever one bends down to pick up a dropped penny, one has more than a 1/Googolplex chance of a slip-and-fall accident which would leave one suffering for 50 years.
6Good_Burning_Plastic
But you also slightly improve your physical fitness which might reduce the probability of an accident further down the line by more than 1/10^10^100.
0dxu
This argument does not show that putting dust specks in the eyes of 3^^^3 people is better than torturing one person for 50 years. It shows that putting dust specks in the eyes of 3^^^3 people and then telling them they helped save someone from torture is better than torturing one person for 50 years.
0hairyfigment
Yes - though it does mean Eliezer has to assume that the reader's implausible state of knowledge is not and will not be shared by many of the 3^^^3.
1Manfred
Dust, it turns out, is not naturally occurring, but is only produced as a byproduct of thought experiments.
1DPiepgrass
The loss of $100,000 (or one cent) is more or less significant depending on the individual. Which is worse: stealing a cent from 100,000,000 people, or stealing $100,000 from a billionaire? What if the 100,000,000 people are very poor and the cent would buy half a slice of bread and they were hungry to start with? (Tiny dust specks, at least, have a comparable annoyance effect on almost everyone.)

Eliezer's main gaffe here is choosing a "googolplex" people with dust specks when humans do not even have an intuition for googols. So let's scale the problem down to a level a human can understand: instead of a googolplex dust specks versus 50 years of torture, let's take "50 years of torture versus a googol (1 followed by 100 zeros) dust specks", and scale it down linearly to "1 second of torture versus 6.33 x 10^90 dust specks, one per person" - which is still far more people than have ever lived, so let's make it "a dust speck once per minute for every person on Earth for their entire lives (while awake), made retroactive for all of our human ancestors too" (let's pretend for a moment that humans won't evolve a resistance to dust specks as a result). By doing this we are still eliminating virtually all of the dust specks. So now we have one second of torture versus roughly 2 billion billion dust specks, which is nothing at all compared to a googol of dust specks.

Once the numbers are scaled down to a level that ordinary college graduates can begin to comprehend, I think many of them would change their answer. Indeed, some people might volunteer for one second of torture just to save themselves from getting a tiny dust speck in their eye every minute for the rest of their lives.

The fact that humans can't feel these numbers isn't something you teach by just saying it. You teach it by creating a tension between the feeling brain and the thinking brain. Due to your ego, I would guess your brain can better imagine feeling a tiny dust speck in its eye once per minute for the rest of its life than it can imagine 50 years of someone else's torture.
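A rough sketch of that scaling arithmetic - the headcount and per-life figures below are ballpark assumptions, only meant to show the orders of magnitude:

```python
# Ballpark check of the scaling argument above.
googol = 10**100

seconds_in_50_years = 50 * 365.25 * 24 * 3600          # ~1.58e9 seconds
specks_per_second_of_torture = googol / seconds_in_50_years
print(f"{specks_per_second_of_torture:.2e}")           # ~6.3e90 specks per second of torture

# The scaled-down scenario: one speck per waking minute, for everyone who ever lived.
humans_ever = 1e11                                     # ~100 billion (rough estimate)
waking_minutes_per_life = 70 * 365 * 16 * 60           # ~70-year life, 16 waking hours/day
total_specks = humans_ever * waking_minutes_per_life
print(f"{total_specks:.2e}")                           # ~2.5e18, a couple of billion billion
print(total_specks / specks_per_second_of_torture)     # a tiny fraction of one second's worth
```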
3inemnitable
This doesn't follow. Epsilon is by definition arbitrary, so I could take it to be 1 / 4^^^4 if I wanted to. If we accept Eliezer's proposition that the disutility of a dust speck is > 0, that doesn't prevent us from deciding that it is < epsilon when assigning a finite disutility to 50 years of torture.
1JaySwartz
For a site promoting rationality, this entire thread is amazing for a variety of reasons (can you tell I'm new here?). The basic question is irrational. The decision for one situation over another is influenced by a large number of interconnected utilities. A person, or an AI, does not come to a decision based on a single utility measure. The decision process draws on numerous utilities, many of which we do not yet know. Just a few of these utilities are morality, urgency, effort, acceptance, impact, area of impact and value.

Complicating all of this is the overlay of life experience that attaches a magnifying function to each utility-impact decision. There are 7 billion, and growing, unique overlays in the world. These overlays can include unique personal, societal or other utilities, and they have a dramatic impact on many of the core utilities as well. While you can certainly assign some value to each choice, due to the above it will be a unique subjective value. The breadth of values does cluster around societal and common life-experience utilities, enabling some degree of segmentation. This enables generally acceptable decisions. The separation of the value spaces for many utilities precludes a single, unified decision. For example, a faith utility will have radically different value spaces for Christians and Buddhists. The optimum answer can be very different when the choices include faith utility considerations.

Also, the circular example of driving around the Bay Area is illogical from a variety of perspectives. The utility of each stop is ignored. The movement of the driver around the circle does not correlate to the premise that the altruistic actions of an individual are circular.

For discussions to have utility value relative to rationality, it seems appropriate to use more advanced mathematical concepts. Examining the vagaries created when decisions include competing utility values, or sit near the edges of utility spaces, is where we will expand our thinking.
0JoshuaZ
So in most forms of utilitarianism, there's still an overall utility function that aggregates those separate considerations.