The "Intuitions" Behind "Utilitarianism"

Followup to: Circular Altruism
Response to: Knowing your argumentative limitations, OR "one [rationalist's] modus ponens is another's modus tollens."

(Still no Internet access.  Hopefully they manage to repair the DSL today.)

I haven't said much about metaethics - the nature of morality - because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven't gotten to yet.  I used to be very confused about metaethics.  After my confusion finally cleared up, I did a postmortem on my previous thoughts.  I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless.  And this appears to be a general syndrome - people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad".  Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can.

Occasionally people object to any discussion of morality on the grounds that morality doesn't exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level.

Paul Gowder, though, has pointed out that both the idea of choosing a googolplex of dust specks in a googolplex of eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition".  He says I've argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to.

Now "intuition" is not how I would describe the computations that underlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. But I am okay with using the word "intuition" as a term of art, bearing in mind that "intuition" in this sense is not to be contrasted to reason, but is, rather, the cognitive building block out of which both long verbal arguments and fast perceptual arguments are constructed.

I see the project of morality as a project of renormalizing intuition.  We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles.

Delete all the intuitions, and you aren't left with an ideal philosopher of perfect emptiness, you're left with a rock.

Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren't left with an ideal philosopher of perfect spontaneity and genuineness, you're left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies.

"Intuition", as a term of art, is not a curse word when it comes to morality - there is nothing else to argue from.  Even modus ponens is an "intuition" in this sense - it's just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera.

So that is "intuition".

However, Gowder did not say what he meant by "utilitarianism".  Does utilitarianism say...

  1. That right actions are strictly determined by good consequences?
  2. That praiseworthy actions depend on justifiable expectations of good consequences?
  3. That consequences should normatively be weighted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs?
  4. That virtuous actions always correspond to maximizing expected utility under some utility function?
  5. That two harmful events are worse than one?
  6. That two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one?
  7. That for any two harms A and B, with A much worse than B, there exists some tiny probability such that gambling on this probability of A is preferable to a certainty of B?

If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is... anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy.
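
For concreteness, here is one way to restate the accepted items 3, 6, and 7 with a real-valued disutility function $D(\cdot) \ge 0$ over harms; the notation is mine, not Gowder's or anything specified above:

\[
\text{(3)}\quad w(A \text{ at probability } p) = p\,D(A),
\qquad
\text{(6)}\quad D(A \text{ and } A') = 2\,D(A) \text{ for independent, non-interacting } A, A',
\qquad
\text{(7)}\quad D(A) \gg D(B) > 0 \;\Rightarrow\; \exists\,\varepsilon > 0:\ \varepsilon\,D(A) < D(B).
\]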

Now, what are the "intuitions" upon which my "utilitarianism" depends?

This is a deepish sort of topic, but I'll take a quick stab at it.

First of all, it's not just that someone presented me with a list of statements like those above, and I decided which ones sounded "intuitive".  Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence.

After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out.  This does not quite define moral progress, but it is how we experience moral progress.

As part of my experienced moral progress, I've drawn a conceptual separation between questions of type "Where should we go?" and questions of type "How should we get there?"  (Could that be what Gowder means by saying I'm "utilitarian"?)

The question of where a road goes - where it leads - you can answer by traveling the road and finding out.  If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner.

When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth.  You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error).

But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place.

Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up.  After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions.

When you've read enough research on scope insensitivity - people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives... that sort of thing...

Well, the worst case of scope insensitivity I've ever heard of was described here by Slovic:

Other recent research shows similar results. Two Israeli psychologists asked people to contribute to a costly life-saving treatment. They could offer that contribution to a group of eight sick children, or to an individual child selected from the group. The target amount needed to save the child (or children) was the same in both cases. Contributions to individual group members far outweighed the contributions to the entire group.

There's other research along similar lines, but I'm just presenting one example, 'cause, y'know, eight examples would probably have less impact.

If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious - focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously.  So people are willing to pay more to help one child than to help eight.

Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child's good fortune is somehow devalued by the other children's good fortune.

But what about the billions of other children in the world?  Why isn't it a bad idea to help this one child, when that causes the value of all the other children to go down?  How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417?

Or you could look at that and say:  "The intuition is wrong: the brain can't successfully multiply by eight and get a larger quantity than it started with.  But it ought to, normatively speaking."

And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever.  You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities.  It's just that the brain doesn't goddamn multiply.  Quantities get thrown out the window.

If you have $100 to spend, and you spend $20 each on each of 5 efforts to save 5,000 lives, you will do worse than if you spend $100 on a single effort to save 50,000 lives.  Likewise if such choices are made by 10 different people, rather than the same person.  As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives.
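
Spelled out as arithmetic (the dollar figures and lives-saved figures are the ones in the paragraph above; only the layout is mine):

\[
\underbrace{5 \times 5{,}000}_{\text{five efforts at \$20 each}} = 25{,}000 \text{ lives}
\;<\;
\underbrace{50{,}000}_{\text{one effort at \$100}} \text{ lives}.
\]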

(It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways.  But the long run is a helpful intuition pump, so I am talking about it anyway.)

The aggregative valuation strategy of "shut up and multiply" arises from the simple preference to have more of something - to save as many lives as possible - when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time.

Aggregation also arises from claiming that the local choice to save one life doesn't depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe.  Three lives are one and one and one.  No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation.  And if you add another life you get 4 = 1 + 1 + 1 + 1.  That's aggregation.

When you've read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you've seen the "Dutch book" and "money pump" effects that penalize trying to handle uncertain outcomes any other way, then you don't see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty.  It just goes to show that the brain doesn't goddamn multiply.
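
For readers who have not seen the Allais numbers, here is a minimal sketch in Python; the payoffs are the standard textbook version of the paradox (not anything stated in this post), and the point is simply that no assignment of outcome utilities reproduces the commonly observed pair of preferences:

    # Standard Allais payoffs (textbook version, not from the post):
    #   1A: $1M for sure                1B: 89% $1M, 10% $5M, 1% $0
    #   2A: 11% $1M, 89% $0             2B: 10% $5M, 90% $0
    # Most people report 1A > 1B and 2B > 2A.

    def expected(lottery, u0, u1, u5):
        """Expected utility of a lottery given as (P($0), P($1M), P($5M))."""
        p0, p1, p5 = lottery
        return p0 * u0 + p1 * u1 + p5 * u5

    g1a, g1b = (0.00, 1.00, 0.00), (0.01, 0.89, 0.10)
    g2a, g2b = (0.89, 0.11, 0.00), (0.90, 0.00, 0.10)

    # Normalize u($0) = 0 and u($5M) = 1, then scan u($1M).
    found = False
    for step in range(1, 100):
        u1 = step / 100
        prefers_1a = expected(g1a, 0.0, u1, 1.0) > expected(g1b, 0.0, u1, 1.0)
        prefers_2b = expected(g2b, 0.0, u1, 1.0) > expected(g2a, 0.0, u1, 1.0)
        if prefers_1a and prefers_2b:
            found = True
    print(found)  # False: no utility assignment yields the common preference pair

The scan is just a brute-force way of seeing that preferring 1A to 1B requires 0.11 u($1M) > 0.01 u($0) + 0.10 u($5M), which is exactly the condition that forces preferring 2A to 2B.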

The primitive, perceptual intuitions that make a choice "feel good" don't handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency.  So you reflect, devise more trustworthy logics, and think it through in words.

When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don't think that their protestations reveal some deep truth about incommensurable utilities.

Part of it, clearly, is that primitive intuitions don't successfully diminish the emotional impact of symbols standing for small quantities - anything you talk about seems like "an amount worth considering".

And part of it has to do with preferring unconditional social rules to conditional social rules.  Conditional rules seem weaker, seem more subject to manipulation.  If there's any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.

So it seems like there should be an unconditional social injunction against preferring money to life, and no "but" following it.  Not even "but a thousand dollars isn't worth a 0.0000000001% probability of saving a life".  Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.
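
To make the implied exchange rate explicit (my arithmetic; the $1,000 and the 0.0000000001% are the figures in the sentence above, and 0.0000000001% is $10^{-12}$):

\[
\frac{\$1{,}000}{10^{-12}} = \$10^{15},
\]

so the unconditional rule, taken at face value, prices a statistical life above a quadrillion dollars, while the untreated sneeze reveals a price many orders of magnitude lower.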

The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise.  So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.

On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.

But you don't conclude that there are actually two tiers of utility with lexical ordering.  You don't conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity.  You don't conclude that utilities must be expressed using hyper-real numbers.  Because the lower tier would simply vanish in any equation.  It would never be worth the tiniest effort to recalculate for it.  All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.

As Peter Norvig once pointed out, if Asimov's robots had strict priority for the First Law of Robotics ("A robot shall not harm a human being, nor through inaction allow a human being to come to harm") then no robot's behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
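
A toy illustration of why lexical priority silences the lower tier (my own made-up example and numbers, not anything from the post): under a strict two-tier ordering, an arbitrarily small top-tier difference outweighs an arbitrarily large bottom-tier one.

    # Two-tier lexicographic utilities: compare the top tier first,
    # and consult the bottom tier only on an exact top-tier tie.
    def lex_better(a, b):
        """True if the utility pair a = (top, bottom) lexically beats b."""
        if a[0] != b[0]:
            return a[0] > b[0]
        return a[1] > b[1]

    option_a = (1e-30, -1e9)   # sliver of top-tier value, catastrophic bottom tier
    option_b = (0.0,    1e9)   # no top-tier value, wonderful bottom tier

    print(lex_better(option_a, option_b))  # True: the bottom tier never decides

Unless two options tie exactly on the top tier, the bottom tier does no work at all, which is the sense in which it would simply vanish from every equation.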

Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off.  When you reveal a value, you reveal a utility.

I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know. 

But that's for one event.  When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey.  When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply."

Where music is concerned, I care about the journey.

When lives are at stake, I shut up and multiply.

It is more important that lives be saved, than that we conform to any particular ritual in saving them.  And the optimal path to that destination is governed by laws that are simple, because they are math.

And that's why I'm a utilitarian - at least when I am doing something that is overwhelmingly more important than my own feelings about it - which is most of the time, because there are not many utilitarians, and many things left undone.

</rant>

Comments

Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations from dust specks is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example?

No. I used a number large enough to make math unnecessary.

I specified the dust specks had no distant consequences (no car crashes etc.) in the original puzzle.

Unless the torture somehow causes Vast consequences larger than the observable universe, or the suicide of someone who otherwise would have been literally immortal, it doesn't matter whether the torture has distant consequences or not.

I confess I didn't think of the suicide one, but I was very careful to choose an example that didn't involve actually killing anyone, because there someone was bound to point out that there was a greater-than-tiny probability that literal immortality is possible and would otherwise be available to that person.

So I will specify only that the torture does not have any lasting consequences larger than a moderately sized galaxy, and then I'm done. Nothing bound by lightspeed limits in our material universe can morally outweigh 3^^^3 of anything noticeable. You'd have to leave our physics to do it.

You know how some people's brains toss out the numbers? Well, when you're dealing with a number like 3^^^3 in a thought experiment, you can toss out the event descriptions. If the thing being multiplied by 3^^^3 is good, it wins. If the thing being multiplied by 3^^^3 is bad, it loses. Period. End of discussion. There are no natural utility differences that large.

Unless the torture somehow causes Vast consequences larger than the observable universe, or the suicide of someone who otherwise would have been literally immortal, it doesn't matter whether the torture has distant consequences or not.

What about the consequences of the precedent set by the person making the decision that it is ok to torture an innocent person, in such circumstances? If such actions get officially endorsed as being moral, isn't that going to have consequences which mean the torture won't be a one-off event?

There's a rather good short story about this, by Ursula K LeGuin:

The Ones Who Walk Away From Omelas

If such actions get officially endorsed as being moral, isn't that going to have consequences which mean the torture won't be a one-off event?

Why would it?

And I don't think LeGuin's story is good - it's classic LeGuin, by which I mean enthymematic, question-begging, emotive substitution for thought, which annoyed me so much that I wrote my own reply.

I've read your story three times now and still don't know what's going on in it. Can I have it in the form of an explanation instead of a story?

Sure, but you'll first have to provide an explanation of LeGuin's.

There is this habitation called Omelas in which things are pretty swell for everybody except one kid who is kept in lousy conditions; by unspecified mechanism this is necessary for things to be pretty swell for everybody else in Omelas. Residents are told about the kid when they are old enough. Some of them do not approve of the arrangement and emigrate.

Something of this form about your story will do.

There is this city called Acre where things are pretty swell except for this one guy who has a lousy job; by a well-specified mechanism, his job makes him an accessory to murders which preserve the swell conditions. He understands all this and accepts the overwhelmingly valid moral considerations, but still feels guilty - in any human paradise, there will be a flaw.

"Omelas" contrasts the happiness of the citizens with the misery of the child. I couldn't tell from your story that the tradesman felt unusually miserable, nor that the other people of his city felt unusually happy. Nor do I know how this affects your reply to LeGuin, since I can't detect the reply.

Since the mechanism is well-specified, can you specify it?

I thought it was pretty clear in the story. It's not easy coming up with analogues to crypto, and there's probably holes in my lock scheme, but good enough for a story.

I thought it was pretty clear in the story.

Please explain it anyway.

(It never goes well for me when I reply to this sort of thing with snark. So I edited away a couple of drafts of snark.)

It's a prediction market where the predictions (that we care about, anyway) are all of the form "I bet X that Y will die on date Z."

Okay, and I imagine this would incentivize assassins, but how is this helping society be pretty swell for most people, and what is the one guy's job exactly? (Can you not bet on the deaths of arbitrary people, only people it is bad to have around? Is the one guy supposed to determine who it's bad to have around or something and only allow bets on those folks? How does he determine that, if so?)

Everything you'd want to know about assassination markets.

but how is this helping society be pretty swell for most people, and what is the one guy's job exactly?

Incentive to cooperate? A reduction in the necessity of war, which is by nature an inefficient use of resources? From the story:

The wise men of that city had devised the practice when it became apparent to them that the endless clashes of armies on battlefields led to no lasting conclusion, nor did they extirpate the roots of the conflicts. Rather, they merely wasted the blood and treasure of the people. It was clear to them that those rulers led their people into death and iniquity, while remaining untouched themselves, lounging in comfort and luxury amidst the most crushing defeat.

It was better that a few die before their time than the many. It was better that a little wealth go to the evil than much; better that conflicts be ended dishonorably once and for all, than fought honorably time and again; and better that peace be ill-bought than bought honestly at too high a price to be borne. So they thought.

Moving on.

(Can you not bet on the deaths of arbitrary people, only people it is bad to have around?

Nope, "badness" is determined by the market.

Is the one guy supposed to determine who it's bad to have around or something and only allow bets on those folks? How does he determine that, if so?)

The "merchant of death" diffuses the legal culpability associated with betting on the assassination market. The tension in the narrative comes from him feeling ever so slightly morally culpable for the assassinations, even though he only "causes" them indirectly. Again from the story:

Through judicious use of an intermediary (the merchant of death), the predictor could make his prediction, pay the fee, and collect the reward while remaining unknown to all save one.

I think I get it. I have worldbuilding disagreements with this but am no longer bewildered. Thank you!

So, I have some questions: how could you actually make money from this? It seems like the idea is that people place bets on the date that they're planning to assassinate the target themselves. So... where's the rest of the money come from, previous failed attempts? I'm not sure that "A whole bunch of guys tried to assassinate the president and got horribly slaughtered for their trouble. That means killing him'd make me rich! Where's my knife?" is a realistic train of thought.

how could you actually make money from this?

The gamblers collect their winnings; the merchant of death charges a fee, presumably to compensate for the hypothetical legal liability and moral hazard. See the last quote from the story in grandparent.

It seems like the idea is that people place bets on the date that they're planning to assassinate the target themselves. So... where's the rest of the money come from, previous failed attempts?

Or they want someone else to become more motivated to assassinate the target.

I'm not sure that "A whole bunch of guys tried to assassinate the president and got horribly slaughtered for their trouble. That means killing him'd make me rich! Where's my knife?" is a realistic train of thought.

It's not, because that's not how the information on how much a certain death is worth propagates. The assassination market needs to be at least semi-publicly observable -- in the story's case, the weight of the money in the named cylinder pulls it down, showing how much money is in the cylinder. If someone wanted a high-risk target, they'd have to offer more money to encourage the market to supply the service.

The assassination market needs to be at least semi-publicly observable -- in the story's case, the weight of the money in the named cylinder pulls it down, showing how much money is in the cylinder.

Ahh, that was the bit I missed. Okay, that makes sense now.

Edit: Upon rereading, I think this could perhaps be a bit clearer.

To one side of him were suspended cylinders. And each hung at a different height, held by oiled cords leading away into the depths. And upon each cylinder was inscribed a name. The merchant looked at one marked 'Sammael'. A man he had never met, and never would.

Cylinders hung suspended, okay. Held by cords leading into the "depths" - what?

Into one of the holes by that particular cylinder, he dropped several heavy gold coins. Some time after their clinkings ceased to echo, the cylinder hoisted ever so slightly. Into the other hole he dropped a pouch containing: a parchment note listing a particular date, a fat coin in fee, and a stout lock.

Holes by that cylinder- presumably in the wall or floor? The money goes into the locked treasure room, not the cylinder. And it causes (somehow) the cylinder to rise, not fall.

Holes by that cylinder- presumably in the wall or floor? The money goes into the locked treasure room, not the cylinder. And it causes (somehow) the cylinder to rise, not fall.

The idea is that the room in the dungeons has two compartments which the two holes lead to: one contains the locks and predictions, and only the 'winning' lock is used when the person is assassinated (my offline analogue to crypto signatures), but the other just holds the money/rewards, and is actually a big cup or something held up by the cord which goes up to the ceiling, around a pulley, and then down to the cylinder. Hence, the more weight (money) inside the cup, the higher the cylinder is hoisted.

I guess ropes and pulleys are no longer so common these days as to make the setup clear and not requiring further explanation?

(This is one of the vulnerabilities as described - what's to stop someone from dumping in some lead? As I said, real-world equivalents to crypto are hard. Probably this could be solved by bringing in another human weak point - eg. specifying that only the merchants are allowed to put money in.)

The described pulley setup will simply accelerate until it reaches one limit or the other depending on the balance of weights. In order to have the position vary with the load, you need a position-varying force, such as

  • A spring.
  • A rotating off-center mass, as in a balance scale. (This is nonlinear for large angles.)
  • An asymmetric pulley, i.e. a cam (in the shape of an Archimedean spiral).
  • A tall object (of constant cross-section) entering a pool of water.
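
To make the first option concrete (my notation, idealizing the cords and pulley as massless and frictionless): with weight $W$ of money in the cup and a cylinder of weight $W_c$ on the other end, a bare pulley feels a constant net force and runs to one stop or the other, while a spring of stiffness $k$ added to the line gives an equilibrium displacement proportional to the load:

\[
F_{\text{net}} = W - W_c \ \text{(independent of position, so no stable resting point)},
\qquad
k\,x = W - W_c \;\Rightarrow\; x = \frac{W - W_c}{k}.
\]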

For what it's worth, some people read "Omelas" as being about a superstition that torturing a child is necessary (see the bit about good weather) rather than a situation where torturing a child is actually contributing to public welfare.

And the 'wisdom of their scholars' depends on the torture as well? 'terms' implies this is a magical contract of some sort. No mechanism, of course, like most magic and all of LeGuin's magic that I've read (Earthsea especially).

It's worth noting, for 'number of people killed' statistics, that all of those people were going to die anyway, and many of them might have been about to die for some other reason.

Society kills about 56 million people each year from spending resources on things other than solving the 'death' problem.

that all of those people were going to die anyway

Some of whom several decades later. (Loss of QALYs would be a better statistic, and I think it would be non-negligible.)

I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number." or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

As a separate response, what's wrong with this calculation: I base my judgments largely on the duration of the disutility. After 1 second, the dust specks disappear and are forgotten, and so their disutility also disappears. The same is not true of the torture; the torture is therefore worse. I can foresee some possible problems with this line of thought, but it's 2:30 am in New Orleans and I just got done with a long evening of drinking and Joint Mathematics Meeting, so please forgive me if I don't attempt to formalize it now.

An addendum: 2 more things. The difference between a life with n dust specks hitting your eye and n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinity, so the difference would be literally 0.

Secondly, by virtue of your asserting that there exists an action with minimal disutility, you've shown that the Field of Utility is very different from the field of, say, the Real numbers, and so I am incredulous that we can simply "multiply" in the usual sense.

I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number."

You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.
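
Rough orders of magnitude, to see why (these are my back-of-the-envelope estimates, not the commenter's): at atmospheric pressure something on the order of $10^{23}$ air molecules strike each square centimetre of exposed surface per second, so a century of exposure amounts to roughly $10^{33}$ collisions, and a lethal overpressure involves counts of broadly similar size. Against a per-molecule disutility of $1/G$, with $G$ Graham's number, every such total is indistinguishable from zero:

\[
10^{33} \times \frac{1}{G} \approx 0, \qquad \text{since } G \text{ exceeds } 10^{33} \text{ by more than any tower of exponents you could write on this page.}
\]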

or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility? Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.

An addendum: 2 more things. The difference between a life with n dust specks hitting your eye and n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinity, so the difference would be literally 0.

If utility is to be compared relative to lifetime utility, i.e. as (LifetimeUtility + x) / LifetimeUtility, doesn't that assign higher impact to five seconds of pain for a twenty-year-old who will die at 40 than to a twenty-year-old who will die at 120? Does that make sense?

Secondly, by virtue of your asserting that there exists an action with minimal disutility, you've shown that the Field of Utility is very different from the field of, say, the Real numbers, and so I am incredulous that we can simply "multiply" in the usual sense.

Eliezer's point does not seem to me predicated on the existence of such a value; I see no need to assume multiplication has been broken.

if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.

Yes, this seems like a good argument that we can't add up disutility for things like "being bumped into by particle type X" linearly. In fact, it seems like having 1, or even (whatever large number I breathe in a day) molecules of air bumping into me is a good thing, and so we can't just talk about things like "the disutility of being bumped into by kinds of particles".

If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility?

Yeah, of course. Why, do you know of some way to accurately access someone's actually-existing Utility Function in a way that doesn't just produce an approximation of an idealization of how ape brains work? Because me, I'm sitting over here using an ape brain to model itself, and this particular ape doesn't even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it's totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.

Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.

Yes, I agree. Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

doesn't that assign higher impact to five seconds of pain for a twenty-year old who will die at 40 than to a twenty-year old who will die at 120? Does that make sense?

Yeah, absolutely, I definitely agree with that.

Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

That would be failing, but 3^^^3 people blinking != you blinking. You just don't comprehend the size of 3^^^3.

Yeah, absolutely, I definitely agree with that.

Well, it's self-evident that that's silly. So, there's that.

How confident are you that physics has anything to do with morality?

Please don’t build a machine that will torture me to save you from dust specks.

Sean, one problem is that people can't follow the arguments you suggest without these things being made explicit. So I'll try to do that:

Suppose the badness of distributed dust specks approaches a limit, say 10 disutility units.

On the other hand, let the badness of (a single case of) 50 years of torture equal 10,000 disutility units. Then no number of dust specks will ever add up to the torture.

But what about 49 years of torture distributed among many? Presumably people will not be willing to say that this approaches a limit less than 10,000; otherwise we would torture a trillion people for 49 years rather than one person for 50.

So for the sake of definiteness, let 49 years of torture, repeatedly given to many, converge to a limit of 1,000,000 disutility units.

48 years of torture, let's say, might converge to 980,000 disutility units, or whatever.

Then since we can continuously decrease the pain until we reach the dust specks, there must be some pain that converges approximately to 10,000. Let's say that this is a stubbed toe.

Three possibilities: it converges exactly to 10,000, to less than 10,000, or to more than 10,000. If it converges to less, then if we choose another pain ever so slightly greater than a toe-stubbing, this greater pain will converge to more than 10,000. Likewise, if it converges to more than 10,000, we can choose an ever so slightly lesser pain that converges to less than 10,000. If it converges to exactly 10,000, again we can choose a slightly greater pain, that will converge to more than 10,000.

Suppose the two pains are a stubbed toe that is noticed for 3.27 seconds, and one that is noticed for 3.28 seconds. Stubbed toes that are noticed for 3.27 seconds converge to 10,000 or less, let's say 9,999.9999. Stubbed toes that are noticed for 3.28 seconds converge to 10,000.0001.

Now the problem should be obvious. There is some number of 3.28 second toe stubbings that is worse than torture, while there is no number of 3.27 second toe stubbings that is worse. So there is some number of 3.28 second toe stubbings such that no number of 3.27 second toe stubbings can ever match the 3.28 second toe stubbings.

On the other hand, three 3.27 second toe stubbings are surely worse than one 3.28 second toe stubbing. So as you increase the number of 3.28 second toe stubbings, there must be a magical point where the last 3.28 second toe stubbing crosses a line in the sand: up to that point, multiplied 3.27 second toe stubbings could be worse, but with that last 3.28 second stubbing, we can multiply the 3.27 second stubbings by a googolplex, or by whatever we like, and they will never be worse than that last, infinitely bad, 3.28 second toe stubbing.

So do the asymptote people really accept this? Your position requires it with mathematical necessity.
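
To see the discontinuity with explicit numbers, here is a sketch under one made-up aggregation rule: the 9,999.9999 and 10,000.0001 limits are the ones above, while the exponential form of the convergence and the per-stub disutility are my own assumptions, chosen only so that totals actually approach their limits.

    import math

    # Toy model: N copies of a pain with per-instance disutility d and
    # asymptotic limit L aggregate to  L * (1 - exp(-N * d / L)).
    def total(n, d, limit):
        return limit * (1 - math.exp(-n * d / limit))

    d = 1.0                 # disutility of one stubbed toe (assumed)
    L_327 = 9_999.9999      # limit for 3.27-second stubs (from the comment)
    L_328 = 10_000.0001     # limit for 3.28-second stubs (from the comment)

    # Smallest number of 3.28s stubs whose total exceeds the supremum of
    # *every possible* number of 3.27s stubs (i.e. exceeds L_327).
    n = 1
    while total(n, d, L_328) <= L_327:
        n += 1

    print(n)                                              # a finite number
    print(total(n, d, L_328) > total(10**100, d, L_327))  # True: a googol of 3.27s stubs never catches up

Whatever convergent rule you pick, the same thing happens: some finite pile of the slightly worse pain becomes unbeatable by any quantity of the slightly milder one, which is the "line in the sand" above.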

I assert the use of 3^^^3 in a moral argument is to avoid the effort of multiplying.

Yes, that's what I said. If the quantities were close enough to have to multiply, the case would be open for debate even to utilitarians.

Demonstration: what is 3^^^3 times 6?

3^^^3, or as close as makes no difference.

What is 3^^^3 times a trillion to the trillionth power?

3^^^3, or as close as makes no difference.

...that's kinda the point.
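
For anyone unfamiliar with the up-arrow notation, here is a minimal sketch of Knuth's standard recursion (the definition is standard, not something introduced in this post); it is only meant to show how fast the tower grows and why ordinary multipliers wash out:

    # Knuth up-arrows: a ^ b is one arrow; each extra arrow iterates the
    # previous operator.  3^^3 is already a 13-digit number; 3^^^3 is a
    # tower of 3s whose *height* is 3^^3, so multiplying it by 6, or by a
    # trillion to the trillionth power, leaves it 3^^^3 for every practical
    # purpose.
    def up_arrow(a, n, b):
        """a followed by n up-arrows, then b (standard recursion)."""
        if n == 1:
            return a ** b
        result = 1
        for _ in range(b):
            result = up_arrow(a, n - 1, result)
        return result

    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3**27 = 7625597484987
    # up_arrow(3, 3, 3) is a power tower of 7,625,597,484,987 threes:
    # far too large to evaluate, or to dent by any earthly multiplier.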

So it seems you have two intuitions. One is that you like certain kinds of "feel good" feedback that aren't necessarily mathematically proportional to the quantifiable consequences. Another is that you like mathematical proportionality.

Er, no. One intuition is that I like to save lives - in fact, as many lives as possible, as reflected by my always preferring a larger number of lives saved to a smaller number. The other "intuition" is actually a complex compound of intuitions, that is, a rational verbal judgment, which enables me to appreciate that any non-aggregative decision-making will fail to lead to the consequence of saving as many lives as possible given bounded resources to save them.

I'm feeling a bit of despair here... it seems that no matter how I explain that this is how you have to plan if you want the plans to work, people just hear, "You like neat mathematical symmetries." Optimal plans are neat because optimality is governed by laws and the laws are math - it has nothing to do with liking neatness.

50 years of being tortured is not (50 years × 365 days × 24 hours × 3600 seconds)-times worse than 1 second of torture. It is much (non-linearly) worse than that.

Utilitarianism does not assume that multiple experiences to the same person aggregate linearly.

Yes, I agree that it is non-linearly worse.

It is not infinitely worse. Just non-linearly worse.

The non-linearity factor is nowhere within a trillion to the trillionth power galaxies of being as large as 3^^^3.

If it were, no human being would ever think about anything except preventing torture or goals of similar importance. You would never take a single moment to think about putting an extra pinch of salt in your soup, if you felt a utility gradient that large. For that matter, your brain would have to be larger than the observable universe to feel a gradient that large.

I do not think people understand the largeness of the Large Number here.

You've left out that some people find utility in torture.

And also (I think this with somewhat less certainty) made a claim that people shouldn't care about their own quality of life if they're utilitarians.
