(Still no Internet access.  Hopefully they manage to repair the DSL today.)

I haven't said much about metaethics - the nature of morality - because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven't gotten to yet.  I used to be very confused about metaethics.  After my confusion finally cleared up, I did a postmortem on my previous thoughts.  I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless.  And this appears to be a general syndrome - people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad".  Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can.

Occasionally people object to any discussion of morality on the grounds that morality doesn't exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level.

Paul Gowder, though, has pointed out that both the idea of choosing a googolplex dust specks in a googolplex eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition".  He says I've argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to.

Now "intuition" is not how I would describe the computations that underlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. But I am okay with using the word "intuition" as a term of art, bearing in mind that "intuition" in this sense is not to be contrasted to reason, but is, rather, the cognitive building block out of which both long verbal arguments and fast perceptual arguments are constructed.

I see the project of morality as a project of renormalizing intuition.  We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles.

Delete all the intuitions, and you aren't left with an ideal philosopher of perfect emptiness, you're left with a rock.

Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren't left with an ideal philosopher of perfect spontaneity and genuineness, you're left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies.

"Intuition", as a term of art, is not a curse word when it comes to morality - there is nothing else to argue from.  Even modus ponens is an "intuition" in this sense - it's just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera.

So that is "intuition".

However, Gowder did not say what he meant by "utilitarianism".  Does utilitarianism say...

  1. That right actions are strictly determined by good consequences?
  2. That praiseworthy actions depend on justifiable expectations of good consequences?
  3. That consequences should normatively be discounted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs?
  4. That virtuous actions always correspond to maximizing expected utility under some utility function?
  5. That two harmful events are worse than one?
  6. That two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one?
  7. That for any two harms A and B, with A much worse than B, there exists some tiny probability such that gambling on this probability of A is preferable to a certainty of B?

If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is... anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy.
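
To make the arithmetic behind items 3, 6, and 7 concrete, here is a minimal sketch; the harm values and probabilities are placeholders chosen purely for illustration, not claims about any real harms.

    # Illustrative expected-disutility arithmetic for items 3, 6, and 7 above.
    # All numbers are made-up placeholders.

    def expected_disutility(outcomes):
        """outcomes: list of (probability, disutility) pairs for independent events."""
        return sum(p * d for p, d in outcomes)

    harm = 10.0  # some bad event, in arbitrary disutility units

    # Item 3: a 50% chance of the harm weighs exactly half as much.
    assert expected_disutility([(0.5, harm)]) == 0.5 * harm

    # Item 6: two independent, certain occurrences of the harm are twice as bad as one.
    assert expected_disutility([(1.0, harm), (1.0, harm)]) == 2 * harm

    # Item 7: for a much worse harm A and a lesser harm B, a small enough
    # probability of A is preferable to a certainty of B.
    A, B = 1_000_000.0, 1.0   # A much worse than B
    p = 0.5 * B / A           # any probability below B/A will do
    assert expected_disutility([(p, A)]) < expected_disutility([(1.0, B)])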

Now, what are the "intuitions" upon which my "utilitarianism" depends?

This is a deepish sort of topic, but I'll take a quick stab at it.

First of all, it's not just that someone presented me with a list of statements like those above, and I decided which ones sounded "intuitive".  Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence.

After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out.  This does not quite define moral progress, but it is how we experience moral progress.

As part of my experienced moral progress, I've drawn a conceptual separation between questions of type Where should we go? and questions of type How should we get there?  (Could that be what Gowder means by saying I'm "utilitarian"?)

The question of where a road goes - where it leads - you can answer by traveling the road and finding out.  If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner.

When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth.  You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error).

But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place.

Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up.  After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions.

When you've read enough research on scope insensitivity - people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives... that sort of thing...

Well, the worst case of scope insensitivity I've ever heard of was described here by Slovic:

Other recent research shows similar results. Two Israeli psychologists asked people to contribute to a costly life-saving treatment. They could offer that contribution to a group of eight sick children, or to an individual child selected from the group. The target amount needed to save the child (or children) was the same in both cases. Contributions to individual group members far outweighed the contributions to the entire group.

There's other research along similar lines, but I'm just presenting one example, 'cause, y'know, eight examples would probably have less impact.

If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious - focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously.  So people are willing to pay more to help one child than to help eight.

Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child's good fortune is somehow devalued by the other children's good fortune.

But what about the billions of other children in the world?  Why isn't it a bad idea to help this one child, when that causes the value of all the other children to go down?  How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417?

Or you could look at that and say:  "The intuition is wrong: the brain can't successfully multiply by eight and get a larger quantity than it started with.  But it ought to, normatively speaking."

And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever.  You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities.  It's just that the brain doesn't goddamn multiply.  Quantities get thrown out the window.

If you have $100 to spend, and you spend $20 on each of 5 efforts to save 5,000 lives, you will do worse than if you spend $100 on a single effort to save 50,000 lives.  Likewise if such choices are made by 10 different people, rather than the same person.  As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives.
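
A toy version of that allocation choice, using the numbers from the paragraph above and the deliberately generous assumption (mine, for illustration) that a $20 donation fully funds each smaller effort:

    # Split $100 across five efforts vs. concentrate it on one larger effort.
    # Generous assumption for the split strategy: $20 fully funds each
    # 5,000-life effort.  Even so, concentrating the money saves more lives.
    split_total = 5 * 5_000         # five efforts at 5,000 lives each = 25,000
    concentrated_total = 50_000     # one effort saving 50,000 lives
    assert concentrated_total > split_total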

(It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways.  But the long run is a helpful intuition pump, so I am talking about it anyway.)

The aggregative valuation strategy of "shut up and multiply" arises from the simple preference to have more of something - to save as many lives as possible - when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time.

Aggregation also arises from claiming that the local choice to save one life doesn't depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe.  Three lives are one and one and one.  No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation.  And if you add another life you get 4 = 1 + 1 + 1 + 1.  That's aggregation.

When you've read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you've seen the "Dutch book" and "money pump" effects that penalize trying to handle uncertain outcomes any other way, then you don't see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty.  It just goes to show that the brain doesn't goddamn multiply.
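
For readers who haven't seen it, here is the standard textbook form of the Allais gambles (these particular payoffs are the usual illustration, not something taken from this post); the point is that the popular pattern of choices cannot come from any consistent assignment of utilities:

    # The classic Allais gambles (payoffs in millions of dollars).
    #   Gamble 1A: $1M for sure.       Gamble 1B: 89% $1M, 10% $5M, 1% $0.
    #   Gamble 2A: 11% $1M, 89% $0.    Gamble 2B: 10% $5M, 90% $0.
    # Many people prefer 1A over 1B (certainty feels safer) but 2B over 2A.

    def expected_utility(lottery, u):
        return sum(p * u(x) for p, x in lottery)

    g1a = [(1.00, 1)]
    g1b = [(0.89, 1), (0.10, 5), (0.01, 0)]
    g2a = [(0.11, 1), (0.89, 0)]
    g2b = [(0.10, 5), (0.90, 0)]

    # For any utility function u with u(0) = 0, the 1A/1B and 2A/2B pairs differ
    # only by a common 89% chance of $1M versus $0, so preferring 1A forces
    # preferring 2A.  Preferring 1A together with 2B is the inconsistency.
    for u in (lambda x: x, lambda x: x ** 0.5, lambda x: 1 - 2.718 ** (-5 * x)):
        prefers_1a = expected_utility(g1a, u) > expected_utility(g1b, u)
        prefers_2a = expected_utility(g2a, u) > expected_utility(g2b, u)
        assert prefers_1a == prefers_2a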

The primitive, perceptual intuitions that make a choice "feel good" don't handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency.  So you reflect, devise more trustworthy logics, and think it through in words.

When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don't think that their protestations reveal some deep truth about incommensurable utilities.

Part of it, clearly, is that primitive intuitions don't successfully diminish the emotional impact of symbols standing for small quantities - anything you talk about seems like "an amount worth considering".

And part of it has to do with preferring unconditional social rules to conditional social rules.  Conditional rules seem weaker, seem more subject to manipulation.  If there's any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.

So it seems like there should be an unconditional social injunction against preferring money to life, and no "but" following it.  Not even "but a thousand dollars isn't worth a 0.0000000001% probability of saving a life".  Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.
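
Taking those numbers at face value (a rough back-of-the-envelope check, nothing more), the refusal prices a statistical life at:

    # If $1,000 is never worth a 0.0000000001% (i.e. 1e-12) chance of saving a
    # life, the implied value of a statistical life exceeds:
    implied_value = 1_000 / 1e-12
    print(f"${implied_value:,.0f}")   # $1,000,000,000,000,000 -- a quadrillion dollars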

The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise.  So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.

On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.

But you don't conclude that there are actually two tiers of utility with lexical ordering.  You don't conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity.  You don't conclude that utilities must be expressed using hyper-real numbers.  Because the lower tier would simply vanish in any equation.  It would never be worth the tiniest effort to recalculate for it.  All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.

As Peter Norvig once pointed out, if Asimov's robots had strict priority for the First Law of Robotics ("A robot shall not harm a human being, nor through inaction allow a human being to come to harm") then no robot's behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
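
A small sketch of why a genuinely lexical ordering makes the lower tier decision-irrelevant; the tuple encoding here is mine, purely for illustration:

    # Lexical ("two-tier") preferences: compare outcomes by tier 1 first, and
    # consult tier 2 only on an exact tie.  Encode an outcome as
    # (tier1_value, tier2_value), higher being better.

    def lexically_better(a, b):
        return a > b   # Python compares tuples lexicographically already

    # Any microscopic tier-1 difference overrides any tier-2 difference:
    assert lexically_better((1e-30, 0.0), (0.0, 1e30))

    # In a continuous, noisy world, tier 1 essentially never ties exactly,
    # so tier 2 never influences a single decision -- it vanishes.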

Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off.  When you reveal a value, you reveal a utility.

I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know. 

But that's for one event.  When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey.  When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply."

Where music is concerned, I care about the journey.

When lives are at stake, I shut up and multiply.

It is more important that lives be saved, than that we conform to any particular ritual in saving them.  And the optimal path to that destination is governed by laws that are simple, because they are math.

And that's why I'm a utilitarian - at least when I am doing something that is overwhelmingly more important than my own feelings about it - which is most of the time, because there are not many utilitarians, and many things left undone.

</rant>

The "Intuitions" Behind "Utilitarianism"

Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations--from dust specks--is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example? A proper utilitarian needs to consider the full range of outcomes--and their probabilities--associated with different alternatives. If the momentary eye irritation leads to a greater than 1/3^^^3 probability that someone will have an accident that leads to an outcome worse than 50 years of torture, then the dust specks are... (read more)

Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations--from dust specks--is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example?

No. I used a number large enough to make math unnecessary.

I specified the dust specks had no distant consequences (no car crashes etc.) in the original puzzle.

Unless the torture somehow causes Vast consequences larger than the observable universe, or the suicide of someone who otherwise would have been literally immortal, it doesn't matter whether the torture has distant consequences or not.

I confess I didn't think of the suicide one, but I was very careful to choose an example that didn't involve actually killing anyone, because there someone was bound to point out that there was a greater-than-tiny probability that literal immortality is possible and would otherwise be available to that person.

So I will specify only that the torture does not have any lasting consequences larger than a moderately sized galaxy, and then I'm done. Nothing bound by lightspeed limits in our material universe can morally outweigh 3^^^3 of anything noticeable. You'd ha... (read more)
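
For readers unfamiliar with the notation, 3^^^3 is Knuth's up-arrow notation; a minimal sketch (the recursion is the standard definition, but don't try to evaluate the top level):

    # Knuth up-arrows: a^b is exponentiation, a^^b is a tower of b copies of a,
    # a^^^b iterates the tower-building b times, and so on.

    def up(a, n, b):
        """a (up-arrow)^n b, defined recursively; only safe for tiny arguments."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up(a, n - 1, up(a, n, b - 1))

    print(up(3, 1, 3))   # 3^3  = 27
    print(up(3, 2, 3))   # 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987
    # 3^^^3 = 3^^(3^^3): a power tower of 3s that is 7,625,597,484,987 levels
    # tall.  Even a few levels up, that tower already dwarfs the number of
    # atoms in the observable universe.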

bgaesop
I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number." or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

As a separate response, what's wrong with this calculation: I base my judgments largely on the duration of the disutility. After 1 second, the dust specks disappear and are forgotten, and so their disutility also disappears. The same is not true of the torture; the torture is therefore worse. I can foresee some possible problems with this line of thought, but it's 2:30 am in New Orleans and I just got done with a long evening of drinking and the Joint Mathematics Meetings, so please forgive me if I don't attempt to formalize it now.

An addendum: 2 more things. The difference between a life with n dust specks hitting your eye and n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinity, so the difference would be literally 0. Secondly, by virtue of your asserting that there exists an action with minimal disutility, you've shown that the Field of Utility is very different from the field of, say, the Real numbers, and so I am incredulous that we can simply "multiply" in the usual sense.
kaz

I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number."

You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.

or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility? Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.

An addendum: 2 more things. The difference between a life with n dust specks hitting your eye and n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinity, so the difference would be literally 0.

If utility ... (read more)

bgaesop
Yes, this seems like a good argument that we can't add up disutility for things like "being bumped into by particle type X" linearly. In fact, it seems like having 1, or even (whatever large number I breathe in a day) molecules of air bumping into me is a good thing, and so we can't just talk about things like "the disutility of being bumped into by kinds of particles".

Yeah, of course. Why, do you know of some way to accurately access someone's actually-existing Utility Function in a way that doesn't just produce an approximation of an idealization of how ape brains work? Because me, I'm sitting over here using an ape brain to model itself, and this particular ape doesn't even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it's totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.

Yes, I agree. Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

Yeah, absolutely, I definitely agree with that.
kaz
That would be failing, but 3^^^3 people blinking != you blinking. You just don't comprehend the size of 3^^^3. Well it's self evident that that's silly. So, there's that.
Douglas_Reay
What about the consequences of the precedent set by the person making the decision that it is ok to torture an innocent person, in such circumstances? If such actions get officially endorsed as being moral, isn't that going to have consequences which mean the torture won't be a one-off event? There's a rather good short story about this, by Ursula K LeGuin: The Ones Who Walk Away From Omelas
gwern

If such actions get officially endorsed as being moral, isn't that going to have consequences which mean the torture won't be a one-off event?

Why would it?

And I don't think LeGuin's story is good - it's classic LeGuin, by which I mean enthymematic, question-begging, emotive substitution for thought, which annoyed me so much that I wrote my own reply.

Alicorn
I've read your story three times now and still don't know what's going on in it. Can I have it in the form of an explanation instead of a story?
gwern
Sure, but you'll first have to provide an explanation of LeGuin's.
Alicorn
There is this habitation called Omelas in which things are pretty swell for everybody except one kid who is kept in lousy conditions; by unspecified mechanism this is necessary for things to be pretty swell for everybody else in Omelas. Residents are told about the kid when they are old enough. Some of them do not approve of the arrangement and emigrate. Something of this form about your story will do.
gwern
There is this city called Acre where things are pretty swell except for this one guy who has a lousy job; by a well-specified mechanism, his job makes him an accessary to murders which preserve the swell conditions. He understands all this and accepts the overwhelmingly valid moral considerations, but still feels guilty - in any human paradise, there will be a flaw.
Alicorn
Since the mechanism is well-specified, can you specify it?
gwern
I thought it was pretty clear in the story. It's not easy coming up with analogues to crypto, and there's probably holes in my lock scheme, but good enough for a story.
Alicorn
Please explain it anyway. (It never goes well for me when I reply to this sort of thing with snark. So I edited away a couple of drafts of snark.)
[anonymous]
It's a prediction market where the predictions (that we care about, anyway) are all of the form "I bet X that Y will die on date Z."
Alicorn
Okay, and I imagine this would incentivize assassins, but how is this helping society be pretty swell for most people, and what is the one guy's job exactly? (Can you not bet on the deaths of arbitrary people, only people it is bad to have around? Is the one guy supposed to determine who it's bad to have around or something and only allow bets on those folks? How does he determine that, if so?)
[anonymous]

Everything you'd want to know about assassination markets.

but how is this helping society be pretty swell for most people, and what is the one guy's job exactly?

Incentive to cooperate? A reduction in the necessity of war, which is by nature an inefficient use of resources? From the story:

The wise men of that city had devised the practice when it became apparent to them that the endless clashes of armies on battlefields led to no lasting conclusion, nor did they extirpate the roots of the conflicts. Rather, they merely wasted the blood and treasure of the people. It was clear to them that those rulers led their people into death and iniquity, while remaining untouched themselves, lounging in comfort and luxury amidst the most crushing defeat.

It was better that a few die before their time than the many. It was better that a little wealth go to the evil than much; better that conflicts be ended dishonorably once and for all, than fought honorably time and again; and better that peace be ill-bought than bought honestly at too high a price to be borne. So they thought.

Moving on.

(Can you not bet on the deaths of arbitrary people, only people it is bad to have around?

Nope, ... (read more)

Alicorn
I think I get it. I have worldbuilding disagreements with this but am no longer bewildered. Thank you!
pedanterrific
So, I have some questions: how could you actually make money from this? It seems like the idea is that people place bets on the date that they're planning to assassinate the target themselves. So... where's the rest of the money come from, previous failed attempts? I'm not sure that "A whole bunch of guys tried to assassinate the president and got horribly slaughtered for their trouble. That means killing him'd make me rich! Where's my knife?" is a realistic train of thought.
[anonymous]
The gamblers collect their winnings; the merchant of death charges a fee, presumably to compensate for the hypothetical legal liability and moral hazard. See the last quote from the story in grandparent. Or they want someone else to become more motivated to assassinate the target. It's not, because that's not how the information on how much a certain death is worth propagates. The assassination market needs to be at least semi-publicly observable -- in the story's case, the weight of the money in the named cylinder pulls it down, showing how much money is in the cylinder. If someone wanted a high-risk target, they'd have to offer more money to encourage the market to supply the service.
pedanterrific
Ahh, that was the bit I missed. Okay, that makes sense now. Edit: Upon rereading, I think this could perhaps be a bit clearer. Cylinders hung suspended, okay. Held by cords leading into the "depths" - what? Holes by that cylinder- presumably in the wall or floor? The money goes into the locked treasure room, not the cylinder. And it causes (somehow) the cylinder to rise, not fall.
gwern
The idea is that the room in the dungeons has two compartments which the two holes lead to: one contains the locks and predictions, and only the 'winning' lock is used when the person is assassinated (my offline analogue to crypto signatures), but the other just holds the money/rewards, and is actually a big cup or something held up by the cord which goes up to the ceiling, around a pulley, and then down to the cylinder. Hence, the more weight (money) inside the cup, the higher the cylinder is hoisted. I guess ropes and pulleys are no longer so common these days as to make the setup clear and not requiring further explanation? (This is one of the vulnerabilities as described - what's to stop someone from dumping in some lead? As I said, real-world equivalents to crypto are hard. Probably this could be solved by bringing in another human weak point - eg. specifying that only the merchants are allowed to put money in.)
kpreid
The described pulley setup will simply accelerate until it reaches one limit or the other depending on the balance of weights. In order to have the position vary with the load, you need a position-varying force, such as

* A spring.
* A rotating off-center mass, as in a balance scale. (This is nonlinear for large angles.)
* An asymmetric pulley, i.e. a cam (in the shape of an Archimedean spiral).
* A tall object (of constant cross-section) entering a pool of water.

"Omelas" contrasts the happiness of the citizens with the misery of the child. I couldn't tell from your story that the tradesman felt unusually miserable, nor that the other people of his city felt unusually happy. Nor do I know how this affects your reply to LeGuin, since I can't detect the reply.

NancyLebovitz
For what it's worth, some people read "Omelas" as being about a superstition that torturing a child is necessary (see the bit about good weather) rather than a situation where torturing a child is actually contributing to public welfare.
gwern
And the 'wisdom of their scholars' depends on the torture as well? 'terms' implies this is a magical contract of some sort. No mechanism, of course, like most magic and all of LeGuin's magic that I've read (Earthsea especially).
MileyCyrus
America kills 20,000 people/yr via air pollution. Are you ready to walk away?
thomblake
It's worth noting, for 'number of people killed' statistics, that all of those people were going to die anyway, and many of them might have been about to die for some other reason. Society kills about 56 million people each year from spending resources on things other than solving the 'death' problem.
A1987dM
Some of whom several decades later. (Loss of QALYs would be a better statistic, and I think it would be non-negligible.)
xkwwqjtw
Please don’t build a machine that will torture me to save you from dust specks.
TAG
How confident are you that physics has anything to do with morality?
Miriorrery
If it were that many dust specks in one person's eye, then the 50 years of torture would be reasonable, but getting dust specks in your eye doesn't cause lasting trauma, and it doesn't cause trauma to the people around you. Graham's number is big, yes, but all these people will go about their lives as if nothing happened afterwards - won't they? I feel like if someone were to choose torture for more than half a person's life for one person over everyone having a minor discomfort for a few moments, and everyone knew that the person had made the choice, everyone who knew would probably want absolutely nothing to do with that person. I feel like the length of the discomfort and how bad the discomfort is ends up outweighing the number of times it happens, as long as it happens to different people and not the same person. The torture would have lasting consequences as well, and the dust specks wouldn't. I get your point and all, but I feel like dust specks compared to torture was a bad thing to use as an example.

Favoring an unconditional social injunction against valuing money over lives is consistent with risking one's own life for money; you could reason that if trading off money and other people's lives is permitted at all, this power will be abused so badly that an unconditional injunction has the best expected consequences. I don't think this is true (because I don't think such an injunction is practical), but it's at least plausible.


So it seems you have two intuitions. One is that you like certain kinds of "feel good" feedback that aren't necessarily mathematically proportional to the quantifiable consequences. Another is that you like mathematical proportionality. The "Shut up and multiply" mantra is simply a statement that your second preference is stronger than the first.

In some ways it seems reasonable to define morality in a way that treats all people equally. If we do so, then our preference for multiplying can be more moral, by definition, than our less ... (read more)

"Well, when you're dealing with a number like 3^^^3 in a thought experiment, you can toss out the event descriptions. If the thing being multiplied by 3^^^3 is good, it wins. If the thing being multiplied by 3^^^3 is bad, it loses. Period. End of discussion. There are no natural utility differences that large."

Let's assume the eye-irritation lasts 1-second (with no further negative consequences). I would agree that 3^^^3 people suffering this 1-second irritation is 3^^^3-times worse than 1 person suffering thusly. But this irritation should not... (read more)

3^^^3?

http://www.overcomingbias.com/2008/01/protecting-acro.html#comment-97982570

A 2% annual return adds up to a googol (10^100) return over 12,000 years

Well, just to point out the obvious, there aren't nearly that many atoms in a 12,000 lightyear radius.
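
The quoted growth figure checks out, for what it's worth (and still falls incomprehensibly short of 3^^^3):

    import math

    # 2% annual growth compounded for 12,000 years:
    log10_return = 12_000 * math.log10(1.02)
    print(round(log10_return, 1))   # about 103.2, i.e. roughly 10^103 --
                                    # more than a googol (10^100)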

Robin Hanson didn't get very close to 3^^^3 before you set limits on his use of "very very large numbers".

Secondly, you refuse to put "death" on the same continuum as "mote in the eye", but behave sanctimoniously (example below) when people refuse to put "50 years ... (read more)

The only reason Eliezer didn't put death on the same scale as the dust mote was on account of his condition that the dust specks have no further consequences. In real life, everything has consequences, and so in real life, death is on the same scale with everything else, including dust motes. Eliezer expressed this extremely well: "Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off."

So yes, in real life there is some number of dust motes such that it would be better to prevent the dust storm than to save a life.

A dust speck in the eye with no external ill effects was chosen as the largest non-zero negative utility. Torture, absent external effects (e.g., suicide), for any finite time, is a finite amount of negative utility. Death in a world of literal immortality cuts off an infinite amount of utility. There is a break in the continuum here.

If you don't accept that dust specks are negative utility, you didn't follow the rules. Pick a new tiny ill effect (like a stubbed toe) and rethink the problem.

If you still don't like it because for a given utility n, n + n !=... (read more)

I assert the use of 3^^^3 in a moral argument is to avoid the effort of multiplying.

Yes, that's what I said. If the quantities were close enough to have to multiply, the case would be open for debate even to utilitarians.

Demonstration: what is 3^^^3 times 6?

3^^^3, or as close as makes no difference.

What is 3^^^3 times a trillion to the trillionth power?

3^^^3, or as close as makes no difference.

...that's kinda the point.

So it seems you have two intuitions. One is that you like certain kinds of "feel good" feedback that aren't necessarily mathematically proportional to the quantifiable consequences. Another is that you like mathematical proportionality.

Er, no. One intuition is that I like to save lives - in fact, as many lives as possible, as reflected by my always preferring a larger number of lives saved to a smaller number. The other "intuition" is actually a complex compound of intuitions, that is, a rational verbal judgment, which enables me to appreciate that any non-aggregative decision-making will fail to lead to the consequence of saving as many lives as possible given bounded resources to save them.

I'm feeling a bit of despair here... it seems that no... (read more)

NancyLebovitz

Correction: What I said: "one-second of irritation is less than 3^^^3-times as bad as the 50 years of torture." What I meant: "50 years of torture is more than 3^^^3-times as bad as 1-second of eye-irritation." Apologies for the mis-type (as well as for saying "you're" when I meant "your").

But the point is, if there are no additional consequences to the suffering, then it's irrelevant. I don't care how many people experience the 1-second of suffering. There is no number large enough to make it matter.

Eliezer had a... (read more)

It's because something that's non-consequential is non-consequential

The dust specks are consequential; people suffer because of them. The further negative consequences of torture are only finitely bad.

Eliezer: would you torture a person for fifty years, if you lived in a large enough universe to contain 3^^^3 people, and if the omnipotent and omniscient ruler of that universe informed you that if you did not do so, he would carry out the dust-speck operation?

Seriously, would you pick up the blow torch and use it for the rest of your life, for the sake of the dust-specks?

wedrifid
Hey, that's an actual Pascal's Mugging! As opposed to "Pascal's generous offer that at worst can be refused for no negative consequences beyond the time spent listening to it". Come to think of it, we probably should be using "Pascal's Spam" for the exciting yet implausible offer.
Eliezer Yudkowsky
Yeah, if we're going to bastardize the terms anyway, we should definitely distinguish Pascal's Spamming from Pascal's Mugging, where Spamming is any Mugging of a type that has a thousand easily generated variants without loss of plausibility ('plausibility' to a reasonable non-troll not committing the noncentral fallacy). (For emotional purposes, not decision-theory purposes.)

Eliezer: it doesn't matter how big of a number you can write down. You are dealing with an asymptote. There is a limit to how bad momentary eye-irritation can be, no matter how many people it happens to. No matter how many people. That limit is far less than how bad a 50-year torture is.

Let f(x) = (5x - 1)/x. What is f(3^^^3)? It's 5, or close enough that it doesn't matter.
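
A sketch of the bounded ("asymptotic") aggregation this comment is proposing, with made-up constants; whether such a ceiling is defensible is exactly what the surrounding thread disputes:

    import math

    # Bounded aggregation: total speck-badness approaches a ceiling no matter
    # how many people are specked.  All constants here are placeholders.
    SPECK_CEILING = 5.0          # asymptotic limit for any number of specks
    TORTURE_BADNESS = 10_000.0   # one person, 50 years

    def total_speck_badness(n_people, per_speck=1e-6):
        return SPECK_CEILING * (1.0 - math.exp(-per_speck * n_people))

    # Under this model no number of specks, however large, exceeds the torture:
    assert total_speck_badness(10 ** 100) < TORTURE_BADNESS
    # The cost is that the marginal victim counts for less and less -- the very
    # scope-insensitive shape the main post argues we should correct, not trust.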

Eliezer: after wrestling with this for a while, I think I've identified at least one of the reasons for all the fighting. First of all, I agree with you that the people who say, "3^^^3 isn't large enough" are off-base. If there's some N that justifies the tradeoff, 3^^^3 is almost certainly big enough; and even if it isn't, we can change the number to 4^^^4, or 3^^^^3, or Busy Beaver (Busy Beaver (3^^^3)), or something, and we're back to the original problem.

For me, at least, the problem comes down to what 'preference' means. I don't think I h... (read more)

Bob

I share El's despair. Look at the forest, folks. The point is that you have to recognize that harm aggregates (and not to an asymptote) or you are willing to do terrible things. The idea of torture is introduced precisely to make it hard to see. But it is important, particularly in light of how easily our brains fail to scale harm and benefit. Geez, I don't even have to look at the research El cites - the comments are enough.

Stop saying the specks are "zero harm." This is a thought experiment and they are defined as positive harm.

Stop saying that torture is different. This is a thought experiment and torture is defined to be absolutely terrible, but finite, harm.

Stop saying that torture is infinite harm. That's just silly.

Stop proving the point over and over in the comments!

/rant/

TAG

Not all harms aggregate, and in particular lots of nano-pains experienced by lots of sufferers aren't ontologically equivalent to a single agony experienced by a single subject. Utilitarianism isn't an objective fact about how the world works. There's an element of make-believe in treating all harms as aggregating. You can treat things that way if your intuitions tell you to, but the world doesn't force you to.

Ben

This whole dust vs. torture "dilemma" depends on a couple assumptions: (1) That you can assign a cost to any event and that all such values lie within the same group (allowing multiples of one event to "add up" to another event) and (2) That the function that determines the cost of a certain number of a specific type of events does not have a hard upper limit (such as a logistic function). If either of these assumptions is wrong then the largeness of 3^^^3 or any other "large" number is totally irrelevant. One way to test (1) is to replace "torture" with "kill". If the answer is no then (1) is an invalid assumption.

Larry D'anna:

You are dealing with an asymptote. There is a limit to how bad momentary eye-irritation can be, no matter how many people it happens to.

By positing that dust-speck irritation aggregates non-linearly with respect to number of persons, and thereby precisely exemplifying the scope-insensitivity that Eliezer is talking about, you are not making an argument against his thesis; instead, you are merely providing an example of what he's warning against.

You are in effect saying that as the number of persons increases, the marginal badness of the suffering of each new victim decreases. But why is it more of an offense to put a speck in the eye of Person #1 than Person #6873?

Isn't there maybe a class of insignificant harms where net utility is neutral or even positive (learn to squint or wear goggles in a dust storm, learn that motes in one's eye are annoying but nothing really to worry about, increased understanding of Christian parables, e.g.; also consider schools of parenting that allow children to experiment with various behaviors that the parents would prefer they avoid, since directly experiencing the adverse event in a more controlled situation will prevent worse outcomes down the road). I'm not sure you can trust most people's expressed preference on this.

That being said, I don't know where that class begins and ends.

Bob: Sure, if you specify a disutility function that mandates lots-o'-specks to be worse than torture, decision theory will prefer torture. But that is literally begging the question, since you can write down a utility function to come to any conclusion you like. On what basis are you choosing that functional form? That's where the actual moral reasoning goes. For instance, here's a disutility function, without any of your dreaded asymptotes, that strictly prefers specks to torture:

U(T,S) = ST + S

Freaking out about asymptotes reflects a basic misunderstan... (read more)

Care to advance an argument, Caledonian? (Not saying I disagree... or agree, for that matter.)

If harm aggregates less-than-linearly in general, then the difference between the harm caused by 6271 murders and that caused by 6270 is less than the difference between the harm caused by one murder and that caused by zero. That is, it is worse to put a dust mote in someone's eye if no one else has one, than it is if lots of other people have one.

If relative utility is as nonlocal as that, it's entirely incalculable anyway. No one has any idea of how many beings are in the universe. It may be that murdering a few thousand people barely registers as harm, ... (read more)
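
A quick illustration of that point, using a square root purely as a stand-in for "less-than-linear" aggregate harm:

    import math

    # If aggregate harm H(n) grows less than linearly in the number of murders n
    # (here H(n) = sqrt(n), just as a stand-in), the marginal harm shrinks:
    H = math.sqrt
    print(H(1) - H(0))         # harm added by the first murder: 1.0
    print(H(6271) - H(6270))   # harm added by the 6,271st: about 0.0063
    # How bad one act is would then depend on how many similar acts have already
    # happened anywhere -- the "nonlocal" problem described above.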

So what exactly do you multiply when you shut up and multiply? Can it be anything other than a function of the consequences? Because if it is a function of the consequences, you do believe or at least act as if believing your #4.

In which case I still want an answer to my previously raised and unanswered point: As Arrow demonstrated, a contradiction-free aggregate utility function derived from different individual utility functions is not possible. So either you need to impose uniform utility functions or your "normalization" of intuition leads to a logical contradiction - which is simple, because it is math.

Neel Krishnaswami: you reference an article called "An Airtight Dutch Book," but I can't find it online without a subscription. Can you summarize the argument?

Bob

Neel, I think you and I are looking at this as two different questions. I'm fine with bounded utility at the individual level, not so good with bounds on some aggregate utility measure across an unbounded population (but certainly willing to listen to a counter position), which is what we're talking about here. Now, what form an aggregate utility function should take is a legitimate question (although, as Salutator points out, unlikely to be a productive discussion), but I doubt that you would argue it should be bounded.

I have really enjoyed following th... (read more)

Peter DeBlanc: check your email

The issue with a utility function U(T,S) = ST + S is that there is no motivation to have torture's utility depend on dust's utility. They are distinct and independent events, and in no way will additional specks worsen torture. If it is posited that dust specks asymptotically approach a bound lower than torture's bound, order issues present themselves and there should be rational preferences that place certain evils at such order that people should be unable to do anything but act to prevent those evils.

There's additional problems here, like the idea that ... (read more)

Sean: why is that "what utils do"? To the extent that we view utils as the semi-scientific concept from economics, they don't "just sum linearly." To economists utils don't sum at all; you can't make interpersonal comparisons of utility. So if you claim that utils sum linearly, you're making a claim of moral philosophy, and haven't argued for it terribly strongly.

Sean, one problem is that people can't follow the arguments you suggest without these things being made explicit. So I'll try to do that:

Suppose the badness of distributed dust specks approaches a limit, say 10 disutility units.

On the other hand, let the badness of (a single case of) 50 years of torture equal 10,000 disutility units. Then no number of dust specks will ever add up to the torture.

But what about 49 years of torture distributed among many? Presumably people will not be willing to say that this approaches a limit less than 10,000; otherwise we would torture a trillion people for 49 years rather than one person for 50.

So for the sake of definiteness, let 49 years of torture, repeatedly given to many, converge to a limit of 1,000,000 disutility units.

48 years of torture, let's say, might converge to 980,000 disutility units, or whatever.

Then since we can continuously decrease the pain until we reach the dust specks, there must be some pain that converges approximately to 10,000. Let's say that this is a stubbed toe.

Three possibilities: it converges exactly to 10,000, to less than 10,000, or more than 10,000. If it converges to less, then if we choose another pain ever so... (read more)

It seems to me that the dust-specks example depends on the following being true: both dust-specks and 50 years of torture can be precisely quantified.

What is the justification for this belief? I find it hard to see any way of avoiding the conclusion that some harms may be compared, as in A < B (A=1 person/1 dustspeck, B=1 person/torture), but that does not imply that we can assign precise values to A and B and then determine how many A are equivalent to one B.

Why do some people believe that we can precisely say how much worse the torture of 1 individual... (read more)

Bob

Joseph,

The point of using 3^^^3 is to avoid the need to assign precise values, which I agree seems impossible to do with any confidence. Once you accept the premise that A is less than B (with both being finite and nonzero), you need to accept that there exists some number k where kA is greater than B. The objections have been that A=0, B is infinite, or the operation kA is not only nonlinear, but bounded. The first may be valid for specks but misses the point - just change it to "mild hangnail" or "banged funnybone." I cannot take ... (read more)
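
The step from "A is less than B" to "some k makes kA greater than B" is just the Archimedean property of ordinary finite, nonzero harms; a tiny illustration (the particular values are arbitrary):

    import math

    # For any finite harms 0 < A < B, some whole number of A-harms exceeds B.
    # (The live dispute above is whether harms really behave like such numbers.)
    A, B = 1e-9, 50.0
    k = math.ceil(B / A) + 1
    assert k * A > B
    print(k)   # 50000000001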

Ian_C.

"Renormalizing intuition" - that sounds like making sure all the intuitions are consistent and proportional to each other. Which is analogous to a coherence theory of truth as against a correspondence one. But you can make something as internally consistent as you like and maybe it still bears no relation to reality. It is necessary to know where the intuitions came from and what they mean.

Ideas such as good and evil are abstract, and the mind of a newborn can't hold abstract ideas, only simple concretes. So those ideas can't have already been th... (read more)

Bob: "The point of using 3^^^3 is to avoid the need to assign precise values".

But then you are not facing up to the problems of your own ethical hypothesis. I insist that advocates of additive aggregation take seriously the problem of quantifying the exact ratio of badness between torture and speck-of-dust. The argument falls down if there is no such quantity, but how would you arrive at it, even in principle? I do not insist on an impersonally objective ratio of badness; we are talking about an idealized rational completion of one's personal pre... (read more)

Mitchell, if I say an average second of the torture is about equal to 10,000 distributed dust specks (notice I said "average second"; there is absolutely no claim that torture adds up linearly or anything like that), then something less than 20 trillion dust specks will be about equal to 50 years of torture. I would arrive at the ratio by some comparison of this sort, trying to guess how bad an average second is, and how many dust specks I would be willing to inflict to save a man from that amount of harm.

Notice that 3^^^3 is completely unnecessary here. That's why I said previously that N doesn't have to be particularly large.
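
Checking that arithmetic with rough figures (the 10,000-specks-per-second ratio is the comment's; the year length is the usual 365.25 days):

    # 50 years of torture, one second at a time, at ~10,000 specks per second:
    seconds = 50 * 365.25 * 24 * 3600   # about 1.58e9 seconds
    specks = seconds * 10_000           # about 1.58e13
    print(f"{specks:.2e}")              # ~1.58e+13, i.e. a bit under 20 trillion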

Thanks for the explanations, Bob.

Bob: The point of using 3^^^3 is to avoid the need to assign precise values... Once you accept the premise that A is less than B (with both being finite and nonzero), you need to accept that there exists some number k where kA is greater than B.

This still requires that they are commensurable though, which is what I'm seeking a strong argument for. Saying that 3^^^3 dust specks in 3^^^3 eyes is greater harm than 50 years of torture means that they are commensurable and that whatever the utilities are, 3^^^3 specks divided by 50 ... (read more)

A < B < C < D doesn't imply that there's some k such that kA>D

Yes it does.

Again, we return to the central issue:

Why must we accept an additive model of harm to be rational?

If you don't accept the additivity of harm, you accept that for any harm x, there is some number of people y where 2^y people suffering x harm is the same welfare wise as y people suffering x harm.

(Not to mention that when normalized across people, utils are meant to provide direct and simple mathematical comparisons. In this case, it doesn't really matter how the normalization occurs as the inequality stands for any epsilon of dust-speck harm greater than zero.)

Polling people to find if they will take a dust speck grants an external harm to the torture (e... (read more)