ITYM 1. 100 people die, with certainty.
Care to test your skills against the Repugnant Conclusion? http://plato.stanford.edu/entries/repugnant-conclusion/
A life barely worth living is worth living. I see no pressing need to disagree with the Repugnant Conclusion itself.
However, I suspect there is a lot of confusion between "a life barely worth living" and "a life barely good enough that the person won't commit suicide".
A life barely good enough that the person won't commit suicide is well into the negatives.
Not to mention the confusion between "a life barely worth living" and "a life that has some typical number of bad experiences in it and barely any good experiences".
Whilst your analysis of life-saving choices seems fairly uncontentious, I'm not entirely convinced that the arithmetic of different types of suffering adds together the way you assume. It seems at least plausible to me that where dust motes are individual points, torture is a section of a continuous line, and thus you can count the points, or you can measure the lengths of different lines, but no number of the former will add up to the latter.
A dust speck takes a finite time, not an instant. Unless I'm misunderstanding you, this makes them lines, not points.
I'm sorry, but I find this line of argument not very useful. If I remember correctly (which I may not be doing), a googolplex is larger than the estimated number of atoms in the universe. Nobody has any idea of what it implies except "really, really big", so when your concepts get up there, people have to do the math, since the numbers mean nothing. Most of us would agree that having a really, really large number of people bothered just a bit is better than having one person suffer for a long life. That has little to do with math and a lot to do with o...
Nobody has any idea of what it implies except "really, really big", so when your concepts get up there, people have to do the math, since the numbers mean nothing.
This applies just as much to numbers such as a million and a billion, which people mix up regularly; the problem, though, is that people don't do the math, despite not understanding the magnitudes of the numbers, and numbers of people that large actually are around.
Personally, if I first try to visualize a crowd of a hundred people, and then a crowd of a thousand, the second crowd seems about three times as large. If I start with a thousand, and then try a hundred, this time around the hundred-person crowd seems a lot bigger than it did last time. And the bigger the numbers I try, the worse it gets, and there is a long way to go to get to 7,000,000,000 (the number of people on Earth). All sorts of biases seem to be at work here, anchoring among them. Result: Shut up and multiply!
[Edit: Spelling]
One can easily make an argument like the torture vs. dust specks argument to show that the Repugnant Conclusion is not only not repugnant, but certainly true.
More intuitively, if it weren't true, we could find some population of 10,000 persons at some high standard of living, such that it would be morally praiseworthy to save their lives at the cost of a googolplex of galaxies filled with intelligent beings. Most people would immediately say that this is false, and so the Repugnant Conclusion is true.
Eliezer, I am skeptical that sloganeering ("shut up and calculate") will get you across this philosophical chasm: Why do you define the best one-off choice as the choice that would be preferred over repeated trials?
Can someone please post a link to a paper on mathematics, philosophy, anything, that explains why there's this huge disconnect between "one-off choices" and "choices over repeated trials"? Lee?
Here's the way across the philosophical "chasm": write down the utility of the possible outcomes of your action. Use probability to find the expected utility. Do it for all your actions. Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.
You mi...
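Here is a minimal sketch of the recipe GreedyAlgorithm describes, applied to the post's two rescue options; the one-life-equals-one-util scale and everything else in the snippet are illustrative assumptions, not anything from the original thread.

```python
# Minimal expected-utility comparison for the post's two rescue options,
# assuming (purely for illustration) that one life saved is worth one util.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for a single action."""
    return sum(p * u for p, u in outcomes)

option_1 = [(1.0, 400)]            # save 400 lives, with certainty
option_2 = [(0.9, 500), (0.1, 0)]  # save 500 with p = 0.9, nobody with p = 0.1

print(expected_utility(option_1))  # 400.0
print(expected_utility(option_2))  # 450.0, the higher expected utility
```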
Consider these two facts about me:
(1) It is NOT CLEAR to me that saving 1 person with certainty is morally equivalent to saving 2 people when a fair coin lands heads in a one-off deal.
(2) It is CLEAR to me that saving 1000 people with p=.99 is morally better than saving 1 person with certainty.
Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.
Why should we do that?
Torture vs dust specks, let me see:
What would you choose for the next 50 days:
The consequence of choice 2 would be the death of one person.
Yudkowsky would choose 2, I would choose 1.
This is a question of threshold. Below certain thresholds things don't have much effect. So you cannot simply add up.
Another example:
What do you choose? Can we add up the discomfort caused by the one coin on each of 1,000,000 people?
These are simply false comparisons.
Had Eliezer talked about torturing someone through the use of a googolplex of dust specks, your comparison might have merit, but as is it seems to be deliberately missing the point.
Certainly, speaking for someone else is often inappropriate, and in this case is simple strawmanning.
The comparison is invalid because the torture and dust specks are being compared as negatively-valued ends in themselves. We're comparing U(torture one person for 50 years) and U(dust speck one person) * 3^^^3. But you can't determine whether to take 1 ml of water per day from 100,000 people or 10 liters of water per day from 1 person by adding up the total amount of water in each case, because water isn't utility.
Perhaps this is just my misunderstanding of utility, but I think his point was this: I don't understand how adding up utility is obviously a legitimate thing to do, just as you claim that adding up water denial is obviously not a legitimate thing to do. In fact, it seems to me as though the negative utility of getting a dust speck in the eye is comparable to the negative utility of being denied a milliliter of water, while the negative utility of being tortured for a lifetime is more or less equivalent to the negative utility of dying of thirst. I don't see why the one addition is valid while the other isn't.
If this is just me misunderstanding utility, could you please point me to some readings so that I can better understand it?
Eliezer, can you explain what you mean by saying "it's the same gamble"? If the point is to compare two options and choose one, then what matters is their values relative to each other. So, 400 certain lives saved is better than a 90% chance of 500 lives saved and 10% chance of 500 deaths, which is itself better than 400 certain deaths.
Perhaps it would help to define the parameters more clearly. Do your first two options have an upper limit of 500 deaths (as the second two options seem to), or is there no limit to the number of deaths that may occur apart from the lucky 400-500?
Many were proud of this choice, and indignant that anyone should choose otherwise: "How dare you condone torture!"
I don't think that's a fair characterization of that debate. A good number of people, using many different reasons, thought something along the lines of negligible "harm" * 3^^^3 < 50 years of torture. That many people spraining their ankle or something would be a different story. Those harms are different enough that it's by no means obvious which we should prefer, and it's not clear that trying to multiply is really productive, whereas your examples in this entry are indeed obvious.
"The primary thing is to help others, whatever the means. So shut up and multiply!"
Would you submit to torture for 50 years to save countless people? I'm not sure I would, but I think I'm more comfortable with the idea of being self-interested and seeing all things through the prism of self interest.
Similar problem: suppose you had the choice between dying peacefully and experiencing no afterlife, or literally experiencing hell for 100 years and then being rewarded with an eternity of heaven. Would you choose the latter? Calculating which provides the greatest utility, the latter would be preferable, but I'm not sure I would choose it.
Eliezer, as I'm sure you know, not everything can be put on a linear scale. Momentary eye irritation is not the same thing as torture. Momentary eye irritations should be negligible in the moral calculus, even when multiplied by googolplex^^^googolplex. 50 years of torture could break someone's mind and lead to their destruction. You're usually right on the mark, but not this time.
Would you pay one cent to prevent a googolplex of people from having a momentary eye irritation?
Torture can be put on a money scale as well: many many countries use torture in war, but we don't spend huge amounts of money publicizing and shaming these people (which would reduce the amount of torture in the world).
In order to maximize the benefit of spending money, you must weigh sacred against unsacred.
To get back to the 'human life' examples EY quotes: imagine instead the first scenario pair as being the last lifeboat on the Titanic. You can launch it safely with 40 people on board, or load in another 10 people, who would otherwise die a certain, wet, and icy death, and create a 1 in 10 chance that it will sink before the Carpathia arrives, killing all. I find that a strangely more convincing case for option 2. The scenarios as presented combine emotionally salient and abstract elements, with the result that the emotionally salient part tends to be in the foreground and the '% probabilities' in the background. After all, no one ever saw anyone who was 10% dead (jokes apart).
Eliezer's point would have been valid, had he chosen almost anything other than momentary eye irritation. Even the momentary eye-irritation example would work if the eye irritation would lead to serious harm (e.g. eye inflammation and blindness) in a small proportion of those afflicted with the speck of dust. If the predicted outcome was millions of people going blind (and then you have to consider the resulting costs to society), then Eliezer is absolutely right: shut up and do the math.
GreedyAlgorithm, this is the conversation I want to have.
The sentence in your argument that I cannot swallow is this one: "Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences." This is circular, is it not?
You want to establish that any decision, x, should be made in accordance w/ maximum expected utility theory ("shut up and calculate"). You ask me to consider X = {x_i}, the set of many decisions over my life ("after a while"). You sa...
(I should say that I assumed that a bag of decisions is worth as much as the sum of the utilities of the individual decisions.)
I'm seconding the worries of people like the anonymous of the first comment and Wendy. I look at the first, and I think "with no marginal utility, it's an expected value of 400 vs an expected value of 450." I look at the second and think "with no marginal utility, it's an expected value of -400 vs. an expected value of -50." Marginal utility considerations--plausible if these are the last 500 people on Earth--sway the first case much more easily than they do the second case.
So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort...
Eliezer, your readiness to assume that all 'bad things' are on a continuous scale, linear or no, really surprises me. Put your enormous numbers away, they're not what people are taking umbrage at. Do you think that if a googol doesn't convince us, perhaps a googolplex will? Or maybe 3^^^3? If x and y are finite, there will always be a quantity of x that exceeds y, and vice versa. We get the maths, we just don't agree that the phenomena are comparable. Broken ankle? Stubbing your toe? Possibly, there is certainly more of a tangible link there, but you're still imposing on us all your judgment of how the mind experiences and deals with discomfort, and calling it rationality. It isn't.
Put simply - a dust mote registers exactly zero on my torture scale, and torture registers fundamentally off the scale (not just off the top, off) on my dust mote scale.
You're asking how many biscuits equal one steak, and then when one says 'there is no number', accusing him of scope insensitivity.
Well, he didn't actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn't on his dust-mote scale, so he isn't just using "torture scale" as a synonym for "disutility scale"; rather, he is emphasizing that there is more than just a single "(dis)utility scale" involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of "how the mind experiences and deals with [them]", such that no amount of dust motes can add up to the experience of torture... even if they (the motes) have a nonzero amount of disutility.
I believe I am making much the same distinction with my separation of disutility into trivial and non-trivial categories, where no amount of trivial disutility across multiple people can sum to the experience of non-trivial disutility. There is a fundamental gap in the scale (or different scales altogether, à la Jones), a difference in how different amounts of disutility work for humans. For a more concrete example of how this might work, suppose I steal one cent each from one billi...
Eliezer - the way question #1 is phrased, it is basically a choice between the following:
Be perceived as a hero, with certainty.
Be perceived as a hero with 90% probability, and continue not to be noticed with 10% probability.
This choice will be easy for most people. The expected 50 extra deaths are a reasonable sacrifice for the certainty of being perceived as a hero.
The way question #2 is phrased, it is similarly a choice between the following:
Be perceived as a villain, with certainty.
Not be noticed with 90% probability, and be perceived as a villain with 10% probability.
Again, the choice is obvious. Choose #2 to avoid being perceived as a villain.
If you argue that the above interpretations are then not altruistic, I think the "Repugnant Conclusion" link shows how futile it is to try to make actual "altruistic decisions".
I don't think even everyone going blind is a good excuse for torturing a man for fifty years. How are they going to look him in the eye when he gets out?
The problem is not that I'm afraid of multiplying probability by utility, but that Eliezer is not following his own advice - his utility function is too simple.
It will be interesting to see if this is one of the mistakes Eliezer quietly retracts, or one of the mistakes that he insists upon making over and over no matter what the criticism.
I'm betting 10 credibility units on Yudkowsky publicly admitting that he was wrong on this one.
I think I understand the point of the recent series of posts, but I find them rather unsatisfying. It seems to me that there is a problem with translating emotional situations into probability calculations. This is a very real and interesting problem, but saying "shut up and multiply" is not a good way to approach it. Borrowing from 'A Technical Explanation' it's kind of like the blue tentacle question. When I am asked what would I do when faced with the choice between a googolplex of dust specks or 50 years of torture, my reaction is: But that would never happen! Or, perhaps, I would tell the psychopath who was trying to force me to make such a choice to go f- himself.
It seems a lot of people are willing to write off minimal discomfort and approximate it to zero discomfort; I don't think that's fair at all.
If we are talking in terms of this 'discomfort', let's start out with two sets of K people, set A and set B, out of a population of X >>> K people, with the same 'discomfort' applied to each member of both sets. One set must bear the discomfort; which set should we pick?
Clearly at the start, both are defined to be the same. So we then double the number in set A while halving their discomfort.
One way to defi...
Mr. Yudkowsky, I'm not sure the duration/intensity of the torture is the only bad thing relevant here. A friend of mine pointed out that a problem with 50 years of torture is that it permanently destroys someone's life. (I think it was in one of the "fake altruism" family of posts that you pointed out that belief of utility != utility of belief.) So the utility curve would be pretty flat for the first couple thousand dust specks, beginning to slope down in proportion to the pain through a few minutes of torture. After that, it would quickly become steeper as the torture began to materially alter the person tortured. Another factor to consider is the difference between pain during which you can do other things, and pain during which you can't. So the 50-year-torturee's (or even a 1-minute torturee's) life is effectively shortened in a way that even a 1,000,000-dust-speck person's life is not. So I'm not sure people aren't implicitly including those factors sometimes, when they get mad about torture. I'd rather five years of chronic back pain than five minutes of permanently soul-crushing torture.
You might argue that it's still irrational, but it's not as obvious as you make it out to be.
This form of reasoning, while correct within a specified context, is dangerously flawed with regard to application within contexts sufficiently complex that outcomes cannot be effectively modeled. This includes much of moral interest to humans. In such cases, as with evolutionary computation, an optimum strategy exploits best-known principles synergistically promoting a maximally coherent set of present values, rather than targeting illusory, realistically unspecifiable consequences. Your "rationality" is correct but incomplete. This speaks as well to the well-known paradoxes of all consequentialist ethics.
I'm not sure I understand at what point the torture would no longer be justified. It's easy to say that one person being tortured is preferable to a googolplex of people getting dust specks, but there has to be some number at which this is no longer the case. At some point even your preferences should flip, but you never suggest a point where it would be acceptable. Would it be somewhere around 1.5-1.6 billion, assuming the dust specks were worth 1 second of pain? Is it acceptable if it is just 2 people affected? How many dust specks go ...
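The "1.5-1.6 billion" figure presumably comes from counting the seconds in 50 years; checking that arithmetic:

$$50 \times 365.25 \times 86{,}400 \approx 1.58 \times 10^{9}\ \text{seconds}.$$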
"I think people would be more comfortable with your conclusion if you had some way to quantify it; right now all we have is your assertion that the math is in the dust speck's favor."
The actual tipping point depends on your particular subjective assessment of relative utility. The actual tipping point doesn't matter; what matters is that there is a crossover at some point, and therefore reasoning about preferences that denies it runs in a circle, like San Jose --> San Francisco --> Oakland --> San Jose, and is incoherent.
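As an illustration of why a preference cycle is costly, here is a minimal sketch of the standard "money pump" argument; the items, the fee, and the starting wealth are all invented for this example.

```python
# Toy "money pump": an agent whose preferences run in a circle
# (prefers B to A, C to B, and A to C) pays a small fee for every trade
# around the cycle and ends up strictly poorer while holding the same item.
# Items, fee, and numbers are invented purely for illustration.

preferences = {
    ("A", "B"): "B",  # offered A or B, the agent picks B
    ("B", "C"): "C",  # offered B or C, the agent picks C
    ("C", "A"): "A",  # offered C or A, the agent picks A (closing the circle)
}

FEE = 0.01  # what the agent will pay per trade to get the item it prefers

def pump(rounds, wealth=100.0):
    holding = "A"
    for _ in range(rounds):
        for pair, preferred in preferences.items():
            if holding in pair and preferred != holding:
                holding = preferred
                wealth -= FEE  # pays the fee and moves one step around the circle
    return holding, wealth

print(pump(1000))  # roughly ('A', 70.0): same item as at the start, about 30 poorer
```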
Roland, I'll take that bet.
The idea of an ethical discontinuity between something that can destroy a life (50 years of torture, or 1 year) and something that can't (1 minute of torture, a dust speck) has some intuitive plausibility, but ultimately I don't buy it. It very much seems like death must be in the same 'regime' as torture, but also that death is in the same regime as trivial harms, because people risk death for trivial benefit all the time - I imagine anyone here would drive across town for $100 or $500 or $1000, even though it's slightly more da...
I think "Shut up and Multiply" would be a good tagline for this blog, and a nice slogan for us anti-bias types in general!
How come these examples and subsequent narratives never mention the value of floors and diminishing returns? Is every life valued the same? If there were a monster or disease that would kill everyone in the world, there would be a floor involved. Choice 1, saving 400 lives, ensures that humanity continues (assuming 400 people are enough to re-populate the world), while a 90% chance of saving 500 leaves a 10% chance that humanity on Earth ends. Would you agree that floors are important factors that do change the value of an optimal outcome when these are one-time events? In other words, the marginal utility of a life is diminishing in this example.
The idea of saving someone's life has a great value to the person who did the saving. They are a hero even if it is only one life. Subsequent individuals diminish in the utility they deliver, because being a hero carries so great a return and only requires saving one person versus saving everyone. People who choose option 1 are either not doing the math or valuing lives differently between individuals because of the effect it has on them.
The value placed on items is really what matters, because we don't value everything the same. The true question is why we value them differently, or are we really just miscalculating the expected value? Every equation has to be learned, from 2+2=4 on, and maybe we are just heading up that learning curve.
Save 400 lives, with certainty. Save 500 lives, with 90% probability; save no lives, 10% probability.
I'm surprised how few people are reacting to the implausibility of this thought experiment. Outside of statistics class, God rarely gives out priors. Probabilities other than 0+epsilon and 1-epsilon tend to come from human scholarship, which is an often imperfect process. It is hard to imagine a non-contrived situation where you would have as much confidence in the 90/10 outcome as in the certain outcome.
Suppose the "90/10" figure comes from cure...
Your conclusion follows very clearly from the research results, but it does not apply to the new situation. Doing the math is a false premise. Few people have personal experience of being tortured, and more importantly no one who disagrees with you understands what you personally mean by the dust speck. Perhaps if it were sawdust or getting pool water splashed in your eye, then it would finally register more clearly. Again, you (probably) haven't been tortured, but you have gone through life without even consciously registering a dust speck in your eye. With a little adjustment above a threshold, many people might switch sides. Pain is not linear.
what matters is that there is crossover at some point
But there isn't necessarily one. That's the point - Eliezer is presuming that dust speck harm is additive and that enough of such harms will equal torture. This presumption does not seem to have a basis in rational argument.
The comments on this post are no better than those on the Torture vs. Dust Specks post. In other words, simply bring the word "torture" into the discussion and people automatically become irrational. It's happened to some of the other threads as well, when someone mentioned torture.
It strongly suggests that not many of the readers have made much progress in overcoming their biases.
By the way, Eliezer has corrected the original post; anonymous was correct about the numbers.
Lee:
Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.
Be careful not to confuse "is" and "ought". Eliezer is not proposing an empirical model of human psychology ("is"); what he is proposing is a normative theory ("ought"), according to which human intuitive judgements may turn out to be wrong.
If what you want is an empirical theory that accurately predicts the judgements people will make, see denis bider's comment of January 22, 2008 at 06:49 PM.
I don't think even everyone going blind is a good excuse for torturing a man for fifty years. How are they going to look him in the eye when he gets out?
That's cold, brother. Real cold....
The idea of an ethical discontinuity between something that can destroy a life (50 years of torture, or 1 year) and something that can't (1 minute of torture, a dust speck) has some intuitive plausibility, but...
Sorry, no. 'Torture' and 'dust speck' are not two different quantities of the same currency. I wouldn't even be confident trying to add up individual minutes of torture to equal one year. Humans do not experience the world like disinterested machines. They don't even experience a logarithmic progression of 'amount of discomfort.' 50 years of torture does things to the mind and body that one year (for 50 people) can never do. One year of torture does things one minute can never do. One minute of torture does things x dust specks in x people's eyes could never do. None of these things registers on each others' scales.
Cash, possessions, whatever, I'm with you and Eliezer. Pure human perception is different, even when you count neurons. And no, this isn't a blind irrational reaction to the key ...
As was pointed out last time, if you insist that no quantity of dust-specks-in-individual-eyes is comparable to one instance of torture, then what is your boundary case? What about 'half-torture', 'quarter-torture', 'millionth-torture'? Once you posit a qualitative distinction between the badness of different classes of experience, such that no quantity of experiences in one class can possibly be worse than a single experience in the other class, then you have posited the existence of a sharp dividing line on what appears to be a continuum of possible indi...
Ben, according to your poll suggestion, we should forbid driving, because each particular person would no doubt be willing to drive a little bit slower to save lives, and ultimately having no one drive at all would save the most lives. But instead, people continue to drive, thereby trading many lives for their convenience.
Agreeing with these people, I'd be quite willing to undergo the torture personally, simply in order to prevent the dust specks for the others. And so this works in reverse against your poll.
Mitchell: "You're in the same boat with the...
I think this article could have been improved by splitting it into two; one of them to discuss the original problem (is it better to save 400 for sure than to gamble on saving 500 with certainty 90%), and the other to discuss the reasons why people pick the other one if you rephrase the question. They're both interesting, but presenting them at once makes the discussion too confused.
And the second half... specks of dust in the eye and torture can both be described as "bad things". That doesn't mean they're the same kind of thing with different magnitudes. That was mostly a waste of time to me.
Eliezer,
What do specks have to do with circularity? Whereas in the last posts you explained that certain groups of decision problems are mathematically equivalent, independent of the actual decision, here you argue for a particular decision. Note that utility is not necessarily linear in the number of people.
It looks like there was an inferential distance problem resulting from the fact that many either haven't read or don't remember the original torture vs dust specks post. Eliezer may have to explain the circularity problem in more detail.
ultimately having no one drive at all would save the most lives
And ultimately no dust specks and no torture and lollipops all round would be great for everyone. Stick to the deal as presented. You have a choice to make. Speed is quantifiable. Death is very quantifiable. Pain - even physical pain - goes in the same category as love, sadness, confusion. They are abstract nouns because you cannot hold, measure or count them. Does N losing lottery tickets spread equally over N people equal one dead relative's worth of grief?
Reconsider my poll scenario: Wouldn'...
Unknown,
such that a trillion people suffering for that length of time or that degree of pain would always be preferable to one person suffering for one second longer or suffering a pain ever so slightly greater.
As I wrote yesterday, a dust mote registers exactly zero on my torture scale, and torture registers fundamentally off the scale (not just off the top, off) on my dust mote scale. Torture can be placed, if not discretely quantified, on my torture scale. Dust motes can not. If it's a short enough space of time that you could convince me N dust motes would be worse, I'd say your idea of torture is different from mine.
Ben: the poll scenario might persuade me if all the people actually believed that the situation with the dust specks, as a whole, were better than the torture situation. But this isn't the case, or we couldn't be having this discussion. Each person merely thinks that he wouldn't mind suffering a speck as an individual in order to save someone from torture.
As for a speck registering zero on your torture scale: what about being tied down with your eyes taped open, and then a handful of sand thrown in your face? Does that register zero too? The point would be...
My utility function doesn't add the way you seem to think it does. A googolplex of dusty eyes has the same tiny negative utility as one dusty eye as far as I'm concerned. Honestly. How could anyone possibly care how many people's eyes get dusty? It doesn't matter. Torture matters a lot. But that's not really even the point. The point is that a bad thing happening to n people isn't n times worse than a bad thing happening to one person.
Do we put no weight on the fact that if you polled the 3^^^3 people and asked them whether they would all undergo one dust speck to save one person from 50 years of torture, they'd almost certainly all say yes?
What if they each are willing to be tortured for 25 years? Is it better to torture a googolplex people for 25 years than one person for 50 years?
The assumption that harms are additive is a key part of the demonstration that harm/benefit calculations can be rational.
So, has it been demonstrated that one cannot be rational without making that assumption?
Caledonian, of course that cannot be demonstrated. But who needs a demonstration? Larry D'anna said, "A googolplex of dusty eyes has the same tiny negative utility as one dusty eye as far as I'm concerned." If this is the case, do a billion deaths have the same negative utility as one death?
To put it another way, everyone knows that harms are additive.
what about being tied down with your eyes taped open, and then a handful of sand thrown in your face
Unknown - this is called torture, and as such would register on my torture scale. Is it as bad as waterboarding? No. Do I measure them on a comparable scale? Yes. Can I, hence, imagine a value for N where N(Sand) > (Waterboarding)? Yes, I can. I stand by my previous assertion.
However, I'm beginning to see that this is a problem of interpretation. I am fully on board with Eliezer's math, I'm happy to shut up and multiply lives by probabilities, and I do h...
Ben, I think you might not have understood what I was saying about the poll. My point was that each individual is simply saying that he does not have a problem with suffering a dust speck to save someone from torture. But the issue isn't whether one individual should suffer a dust speck to save someone, but whether the whole group should suffer dust specks for this purpose. And it isn't true that the whole group thinks that the whole group should suffer dust specks for this purpose. If it were, there wouldn't be any disagreement about this question, since ...
To put it another way, everyone knows that harms are additive.
Is this one of the intuitions that can be wrong, or one of those that can't?
After trying several ideas, I realized that my personal utility function converges, among its other features. And it's obvous in retrospect. After all, there's only so much horror I can feel. But while you call this nasty names like "scope insensitivity", I embrace it. It's my utility function. It's not good or bad or wrong or biased, it just is. (Scope insensitivity with regard to probabilities is, of course, still bad.)
I still think that one man should be tortured a lot instead of many being tortured slightly less, because higher individual suffering results in a higher point of convergence.
This also explains why our minds reject "Pascal's mugging".
OK, my final response on the subject, which has had me unable to think about anything else all day. Thanks to all involved for helping me get my thoughts in order on this topic, and sorry for hijacking.
therefore burying the whole group in dust
You've forgotten the rules of the game. There's no 'burying everyone in dust.' You either have a speck of dust in your eye and blink it away, or you don't. And that's for every individual in the group. Playing with the numbers doesn't change the scenario much either.
My #1 complaint is that no-one seems bothered by thi...
Once again we've highlighted the immaturity of present-day moral thinking -- the kind that leads inevitably to Parfit's Repugnant Conclusion. But any paradox is merely a matter of insufficient context; in the bigger picture all the pieces must fit.
Here we have people struggling over the relative moral weight of torture versus dust specks, without recognizing that there is no objective measure of morality, but only objective measures of agreement on moral values.
The issue at hand can be modeled coherently in terms of the relevant distances (regardless of...
Ben, you are right. Two people with dusty eyes is worse than one. But it isn't twice as bad. It's not even nearly twice as bad. On the other hand, I would say that two people being tortured is almost twice as bad as one, but not quite. I'm sure I can't write down a formula for my utility function in terms of number of deaths, or dusty eyes, or tortures, but I know one thing: it is not linear. There's nothing inherently irrational about choosing a nonlinear utility function. So I will continue to prefer any number of dusty eyes to even one torture. I would also prefer a very large number of 1-day tortures to a single 50-year one (far, far more than 365 * 50). Am I being irrational? How?
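Here is a minimal sketch of what a bounded ("converging") disutility function of the kind Larry and Nick describe might look like next to simple addition; the formula, the cap, and every number below are made up for illustration and are nobody's actual utility function.

```python
import math

# Compare simple addition of harms with a bounded ("saturating") total.
# All constants are arbitrary illustrative choices.

SPECK = 1e-6    # assumed disutility of one dust speck
TORTURE = 1e9   # assumed disutility of 50 years of torture
N = 10 ** 100   # a googol of dust-speck victims (a googolplex won't fit in a float)

def additive_total(per_person, n):
    """Harms simply add up across people."""
    return per_person * n

def bounded_total(per_person, n, cap=1e6):
    """Harms saturate: the total never exceeds `cap`, however large n gets."""
    return cap * (1.0 - math.exp(-per_person * n / cap))

print(additive_total(SPECK, N) > TORTURE)  # True: under addition, the specks outweigh the torture
print(bounded_total(SPECK, N) > TORTURE)   # False: bounded, they never can
```

Whether such a bounded function is defensible is exactly what the thread is arguing about; the sketch only shows that the position is mathematically well-defined.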
Caledonian, of course that cannot be demonstrated.
Of course? It is hardly obvious to me that such a thing is beyond demonstration, even if we currently do not know.
But who needs a demonstration?
People interested in rational thinking who aren't idiots. At the very least.
So, which factor rules you out?
I think I'm going to have to write another of my own posts on this (hadn't I already?), when I have time. Which might not be for a while -- which might be never -- we'll see.
For now, let me ask you this Eliezer: often, we think that our intuitions about cases provide a reliable guide to morality. Without that, there's a serious question about where our moral principles come from. (I, for one, think that question has its most serious bite right on utilitarian moral principles... at least Kant, say, had an argument about how the nature of moral claims l...
Unknown: "There is not at all the same intuitive problem here; it is much like the comparison made a while ago on Overcoming Bias between caning and prison time; if someone is given few enough strokes, he will prefer this to a certain amount of prison time, while if the number is continually increased, at some point he will prefer prison time."
It may be a psychological fact that a person will always choose eventually. But this does not imply that those choices were made in a rationally consistent way, or that a rationally consistent extension of ...
I agree that as you defined the problems, both have problems. But I don't agree that the problems are equal, for the reason stated earlier. Suppose someone says that the boundary is that 1,526,216,123,000,252 dust specks is exactly equal to 50 years of torture (in fact, it's likely to be some relatively low number like this rather than anything like a googolplex). It is true that proving this would be a problem. But it is no particular problem that 1,526,216,123,000,251 dust specks would be preferable to the torture, while the torture would be preferable to 1,526,216,123,000,253 dust specks: the point is that the torture would differ from each of these values by an extremely tiny amount.
But suppose someone defines a qualitative boundary: 1,525,123 degrees of pain (given some sort of measure) has an intrinsically worse quality than 1,525,122 degrees, such that no amount of the latter can ever add up to the former. It seems to me that there is a problem here which doesn't exist in the other case, namely that a trillion people suffering pain of 1,525,122 degrees for a trillion years is said to be preferable to one person suffering pain of 1,525,123 degrees for one year.
In other words: both positions have difficult to find boundaries, but one directly contradicts intuition in a way the other does not.
My first reaction to this was, "I don't know; I don't understand 3^^^3 or a googol, or how to compare the suffering from a dust speck with torture." After I thought about it, I decided I was interpreting Eliezer's question like this: as the amount of suffering per person, say a, approaches zero but the number of people suffering, say n, goes to infinity, is the product a*n worse than somebody being tortured for 50 years?" The limiting product is undefined, though, isn't it? If a goes to zero fast enough, for example by ceasing to be suffer...
Any question of ethics is entirely answered by an arbitrarily chosen ethical system, therefore there are no "right" or "better" answers.
Wrong, anon. If there are objective means by which ethical systems can be evaluated, there can be both better and right answers.
Unknown, there is nothing inherently illogical about the idea of qualitative transitions. My thesis is that a speck of dust in the eye is a meaningless inconvenience, that torture is agony, and that any amount of genuinely meaningless inconvenience is preferable to any amount of agony. If those terms can be given objective meanings, then a boundary exists and it is a coherent position.
I just said genuinely meaningless. This is because, in the real world, there is going to be some small but nonzero probability that the speck of dust causes a car crash, for...
Mitchell, my sentiments exactly. Dust causing car crashes isn't part of the game as set up here - the idea is that you blink it away instantly, hence 'the least bad thing that can happen to you'.
The only stickler in the back of my mind is how I am (unconsciously?) categorising such things as 'inconvenience' or 'agony'. Where does stubbing my toe sit? How about cutting myself shaving? At what point do I switch to 3^^^3(Event) = Torture?
TGGP, are you familiar with the teachings of Jesus?
Anon wrote: "Any question of ethics is entirely answered by an arbitrarily chosen ethical system, therefore there are no "right" or "better" answers."
Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.
TGGP, are you familiar with the teachings of Jesus?
Yes, I was raised Christian and I've read the Gospels. I don't think they provide an objective standard of morality, just the Jewish Pharisaic tradition filtered through a Hellenistic lens.
Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.
That is relevant to what ethics people may favor, but not to any truth or objective standard. Agreement among people is the result of subjective judgment.
TGGP -- how about internal consistency? How about formal requirements, if we believe that moral claims should have a certain form by virtue of their being moral claims? Those two have the potential to knock out a lot of candidates...
Ben and Mitchell: the problem is that "meaningless inconvenience" and "agony" do not seem to have a common boundary. But this is only because there could be many transitional stages such as "fairly inconvenient" and "seriously inconvenient," and so on. But sooner or later, you must come to stages which have a common boundary. Then the problem I mentioned will arise: in order to maintain your position, you will be forced to maintain that pain of a certain degree, suffered by any number of people and for any length of ...
Great New Theorem in color perception: adding together 10 people's perceptions of light pink is equivalent to one person's perception of dark red. This is demonstrable, as there is a continuous scale between pink and red.
Enough with the abstract. It's difficult to make a valid equation since dust = x, torture = y, and x != y. So why don't you just replace dust in the equation with torture? Like a really small amount of torture, but still torture. Maybe, say, everybody gets a nipple pierced unwillingly.
Tcpkac: wonderful intuition pump.
Gary: interesting -- my sense of the nipple piercing case is that yes, there's a number of unwilling nipple piercings that does add up to 50 years of torture. It might be a number larger than the earth can support, but it exists. I wonder why my intuition is different there. Is yours?
Paul, is there a number of dust specks that add up to stubbing your toe - not smashing it or anything, but stubbing it painfully enough that you very definitely notice, and it throbs for a few seconds before fading?
Followup to: Torture vs. Dust Specks, Zut Allais, Rationality Quotes 4
Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
1. Save 400 lives, with certainty.
2. Save 500 lives, with 90% probability; save no lives, 10% probability.
Most people choose option 1. Which, I think, is foolish; because if you multiply 500 lives by 90% probability, you get an expected value of 450 lives, which exceeds the 400-life value of option 1. (Lives saved don't diminish in marginal utility, so this is an appropriate calculation.)
"What!" you cry, incensed. "How can you gamble with human lives? How can you think about numbers when so much is at stake? What if that 10% probability strikes, and everyone dies? So much for your damned logic! You're following your rationality off a cliff!"
Ah, but here's the interesting thing. If you present the options this way:
1. 100 people die, with certainty.
2. 90% probability no one dies; 10% probability 500 people die.
Then a majority choose option 2. Even though it's the same gamble. You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one.
You can grandstand on the second description too: "How can you condemn 100 people to certain death when there's such a good chance you can save them? We'll all share the risk! Even if it was only a 75% chance of saving everyone, it would still be worth it - so long as there's a chance - everyone makes it, or no one does!"
You know what? This isn't about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn't even a feather in the scales, when a life is at stake. Just shut up and multiply.
Previously on Overcoming Bias, I asked what was the least bad, bad thing that could happen, and suggested that it was getting a dust speck in your eye that irritated you for a fraction of a second, barely long enough to notice, before it got blinked away. And conversely, a very bad thing to happen, if not the worst thing, would be getting tortured for 50 years.
Now, would you rather that a googolplex people got dust specks in their eyes, or that one person was tortured for 50 years? I originally asked this question with a vastly larger number - an incomprehensible mathematical magnitude - but a googolplex works fine for this illustration.
Most people chose the dust specks over the torture. Many were proud of this choice, and indignant that anyone should choose otherwise: "How dare you condone torture!"
This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).
My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.
Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful. To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...
But let me ask you this. Suppose you had to choose between one person being tortured for 50 years, and a googol people being tortured for 49 years, 364 days, 23 hours, 59 minutes and 59 seconds. You would choose one person being tortured for 50 years, I do presume; otherwise I give up on you.
And similarly, if you had to choose between a googol people tortured for 49.9999999 years, and a googol-squared people being tortured for 49.9999998 years, you would pick the former.
A googolplex is ten to the googolth power. That's a googol/100 factors of a googol. So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
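Spelling out the arithmetic behind "a googol/100 factors of a googol":

$$10^{10^{100}} = \left(10^{100}\right)^{10^{98}}, \qquad 10^{98} = \frac{10^{100}}{100} = \frac{\text{googol}}{100}.$$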
If you find your preferences are circular here, that makes rather a mockery of moral grandstanding. If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren't going anywhere. Maybe you think it a great display of virtue to choose for a googolplex people to get dust specks rather than one person being tortured. But if you would also trade a googolplex people getting one dust speck for a googolplex/googol people getting two dust specks et cetera, you sure aren't helping anyone. Circular preferences may work for feeling noble, but not for feeding the hungry or healing the sick.
Altruism isn't the warm fuzzy feeling you get from being altruistic. If you're doing it for the spiritual benefit, that is nothing but selfishness. The primary thing is to help others, whatever the means. So shut up and multiply!
And if it seems to you that there is a fierceness to this maximization, like the bare sword of the law, or the burning of the sun - if it seems to you that at the center of this rationality there is a small cold flame -
Well, the other way might feel better inside you. But it wouldn't work.
And I say also this to you: That if you set aside your regret for all the spiritual satisfaction you could be having - if you wholeheartedly pursue the Way, without thinking that you are being cheated - if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.
But that part only works if you don't go around saying to yourself, "It would feel better inside me if only I could be less rational."
Chimpanzees feel, but they don't multiply. Should you be sad that you have the opportunity to do better? You cannot attain your full potential if you regard your gift as a burden.
Added: If you'd still take the dust specks, see Unknown's comment on the problem with qualitative versus quantitative distinctions.