"Whoever saves a single life, it is as if he had saved the whole world."
    – The Talmud, Sanhedrin 4:5

    It's a beautiful thought, isn't it? Feel that warm glow.

    I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet - it's a bit complicated, but essentially, I managed to turn someone's whole life around by leaving an anonymous blog comment. I wasn't expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.

    Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.

    But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.

    For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - and thus passing to the entire world changes little.

    I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving the world - it is as if they had saved an intergalactic civilization.

    Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save them. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. "Quick!" you scream to me. "Do something!" But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?

    Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
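    For concreteness, here is that comparison as a minimal Python sketch. The shared probability of a cure is arbitrary (the paragraph stipulates it is equal for both programs), so it cancels out of the ratio:

    ```python
    # A back-of-the-envelope version of the comparison above.
    def expected_lives_saved(p_cure, lives_if_cured):
        """Expected lives saved by funding one research program."""
        return p_cure * lives_if_cured

    p = 0.5  # any shared probability of success; the text says it is equal
    rare = expected_lives_saved(p, 100)              # rare disease: 100 people planetwide
    common = expected_lives_saved(p, 100_000 // 10)  # 10% of 100,000 people = 10,000

    print(rare, common)   # 50.0 5000.0
    print(common / rare)  # 100.0 - the less spectacular disease wins by 100x, whatever p is
    ```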
     

    Addendum:  It's not cognitively easy to spend money to save lives, since the clichéd methods that instantly leap to mind don't work or are counterproductive.  (I will post later on why this tends to be so.)  Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should be consistent and disdain more those who could spend money to save lives but don't.


    Also, whoever saves a person to live another fifty years, it is as if they had saved fifty people to live one more year. Whoever saves someone who very much enjoys life, it is as if they saved many people who are not sure they really want to live. And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.

    adamisom (11y):
    Which is why I"m still puzzled by a simplistic moral dilemma that just won't go away for me: are we morally obligated to have children, and as many as we can? Sans using that using energy or money to more efficiently "save" lives, of course. It seems to me we should encourage people to have children, a common thing that many more people will actually do than donate philanthropically, in addition to other philanthropy encouragements.

    are we morally obligated to have children, and as many as we can?

    Cost of a first-world child is... (checks random Google result) $180,000 to get them to age 18. Cost of saving a kid in Africa from dying of malaria is ~$1,000.

    Right now having children is massively selfish, because there are options that are more than two orders of magnitude more effective. It'd be like blowing up the train in order to save the deaf kids from the original post :)
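    As a quick sanity check on the "two orders of magnitude" claim, using only the figures quoted above:

    ```python
    import math

    # Figures quoted above (both rough): ~$180,000 to raise a first-world child
    # to age 18, ~$1,000 to save a child in Africa from malaria.
    cost_per_child_raised = 180_000
    cost_per_life_saved = 1_000

    ratio = cost_per_child_raised / cost_per_life_saved
    print(ratio)              # 180.0
    print(math.log10(ratio))  # ~2.26, i.e. a bit over two orders of magnitude
    ```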

    handoflixue (11y):
    Well, yes, but my point is that this is a rather unreasonable clause, since if we actually pay attention to what we can efficiently do, "have children" doesn't even make the Top 100. So why would you possibly focus on "have children" and treat it as a dilemma? I interpreted that line as a cached-thought / brush-off, not as "of course I've done the math, and there's a thousand more effective things, but I still find it odd that having children can EVER be a positive act. I mean, ew, babies! Those can't be good for the world o.o"
    MugaSofer (11y):
    I suppose it's the difference between asking "is it better to blow up the train" and asking "can it be better to blow up the train?" It's worth noting that even if we have an obligation to create lives, our obligation to save them is easier to fulfill; but it's still worth knowing if the two are actually equivalent. Reading the original comment, adam does, in fact, seem to have assumed that having children would be the right choice if it mattered, so ... point, I guess.
    handoflixue (11y):
    Oooh, I like that distinction, and will try to remember it in the future :)
    dimension10 (8y):
    Not necessarily. A full argument would consider the opportunities available to a child you raise - it's perfectly possible for a single first-world child to be more productive than 180 kids in Africa. There's also the counter-point (to my previous point) that having children discourages other people from having children, due to the forces of the market (greater demand for stuff available to children => greater costs of stuff available to children). Of course, the effect on demand is spread out to stuff other than just stuff available to children, so overall this does not cause an equal and opposite reaction. If you successfully teach your child to be a utilitarian, effective altruist, etc., though, the utility of both previous points is dwarfed by this (the second point is dwarfed because the average first-world child probably wouldn't pick up utilitarianism or EA). I'm not sure what the probability of a child picking up stuff like that is (and it would make one heck of a difficult experiment), but my guess is that if taught properly it would be likely enough to dwarf the utility of the first two points.
    MugaSofer (11y):
    A lot of people don't consider failure to exist the same as dying. Of course, we need some level of procreation as long as there is death, and humanity would probably continue to expand even then.
    adamisom (11y):
    Why? Because dying is painful? Beyond that, I see them as equivalent.

    Non-existing is not the same thing as ceasing to exist.

    A1987dM (11y):
    Among other reasons, if you die there will be people mourning you, whereas if you had never existed in the first place there won't.
    Capla (9y):
    But the whole point of the post above is that our personal feelings are negligible next to the enormity of the utilitarian consequences behind our feelings. Caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world; it's about doing the right thing anyway, even without the feeling. The fact that no one knows the unborn person yet doesn't mean that she doesn't matter.
    A1987dM (9y):
    I was using mourning synecdochically to refer to all the externalities your death would have on other people, not just their feelings.
    AshwinV (9y):
    The goal behind altruism is to improve the quality of life for the human race. The motivation for altruism may be due to evolutionary reasons such as propagation of the species, etc., but the motivation is not the same as altruism itself. This post, however, is about the latter, as you have rightly pointed out. Nevertheless, the way to go about maximizing is to first ensure that all people currently alive remain alive and well taken care of. After that there's plenty of time to go about having more babies :)
    MugaSofer (11y):
    Source?
    wedrifid (11y):
    Robin Hanson. He was speaking in quotable prose but expressing his own opinion.
    Gurkenglas (11y):
    That last one sounds like we should try to make a simple, self-improving GAI whose goal it is to tile the universe with smiley faces.
    TitaniumDragon (11y):
    I will note that this is one of the fundamental failings of utilitarianism, the "mere addition" paradox. Basically, take a billion people who are miserable, and one million people who are very happy. If you "add up" the happiness of the billion people, they are "happier" on the whole than the million people; therefore, the billion are a better use of natural resources. The problem is that this always assumes some things:

    1) It assumes all people are equal
    2) It assumes that happiness is transitive
    3) It assumes that you can actually quantify happiness in a meaningful way in this manner
    4) It assumes the additive property for happiness - that you can add up some number of miserable people to get one happy person.

    None of these assumptions is necessarily true. Of course, all moral philosophies are going to fail at some level. Note that, for instance, in this case there is an obvious difference: adding 50 years to one life is actually significantly better than extending 50 lives by 1 year each, as the investment to improve one person for 50 years is considerably less, and one person with 50 years can do considerably larger, longer, and grander projects.

    Interesting to choose a rich philanthropist for your analogy. Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts.

    Robin's comment raises the interesting question of whether creating a new life is as good as saving one. It definitely seems to be easier to create a new one, at least at first (the long term effort is probably greater). Most people manage to create a new life or two, but probably never save any. We don't tend to celebrate new-life creators as much as we do life-savers, perhaps because it is seen as too easy.

    No. It's way, way easier to save one. According to the Disease Control Priorities Project (http://tinyurl.com/y9wpk5e) you can save lives for about $3 per year. That's, what, $225 for a whole life? Creating a life requires nine months of pregnancy, during which you can't work as well, and you have to pay for food while you're eating for two, and that's just assuming you give the child up for adoption. You also can only do it once every nine months, and you have to be a girl, whereas you can save a life every time you earn $225.

    Alicorn (14y):
    That means it's cheaper and possible to do in greater volume - not easier. It's probably uncommon indeed to save lives by accident, let alone while actively trying not to, which happens in the creation department all the time. Easier certainly doesn't mean cheaper, or people would behave differently with credit cards.
    tut (14y):
    Having established that, would you say that a pregnancy (or several, since the average pregnancy produces less than one child) is easier or harder than mailing a check?
    Psychohistorian (14y):
    I think Alicorn's point was that being pregnant might be more unpleasant/expensive/"difficult" than mailing a check, but getting pregnant is much, much easier. So easy, in fact, one can do it accidentally.
    tut (14y):
    But getting pregnant is not enough to make a child.
    Psychohistorian (14y):
    Barring spontaneous miscarriage or starvation, it is the default. A woman has to refrain from taking certain actions, but, once she's pregnant, she doesn't actively have to do much but not starve.
    wedrifid (14y):
    Let me talk to some of my female friends and get back to you. I'll see how many comments along the lines of "labor is easy" and "Vomiting off and on for months on end? Yeah, but it's just a default." it takes until I get slapped. I argue that default or not it is easier to abort a child than to carry it to term and give birth. That human instincts cry out to us to produce offspring at huge cost doesn't mean it is easy.
    MatthewB (14y):
    What I am going to say may be extremely unpopular, but everything you have stated about bringing a child to term could well fall within the terminology used by Psychohistorian: even given all of the discomforts and difficulties you have mentioned, they are more about the mother than the child, and as long as the puking, cramping, cranky, hormonal mother does not starve, the child should be delivered. I think that most of what you have mentioned are just difficulties with the attempt to make certain the mother does not starve. But, then, at this point, it's really all just semantics / terminology that we are talking about.
    wedrifid (14y):
    It doesn't particularly bother me but you are mistaken. Yes. All of which combined is harder than saving a life at the current margin. Labour doesn't have much to do with not starving.
    Benquo (13y):
    Taboo "difficulty." Creating a human life doesn't require a lot of advance planning/willpower, while saving a life does require you to think about the problem in advance and decide on an inconvenient course of action when nobody's forcing you to do so. The costs of creating a human life in effort, financial expense, suffering, and willpower, considered as a whole, are greater than the costs of saving a life.
    tut (14y):
    Good medicine does a bit more than that. If the only thing you do is "not starve" the probability is somewhere between 1/3 and 2/3 that the child will die. Quite possibly killing the mother as well. ETA: Not doing things can also be hard, if the consequences are unpleasant enough

    I'm 99% sure you're missing the point.

    Falling down a hill is quite painful. However, once you start falling down a hill, you're going to keep falling until you reach the bottom, unless you make a conscious and coordinated decision to stop yourself, if you are even able to. In that sense, once you start falling down a hill, it is very easy to keep falling, because it's what happens if you don't try to change things.

    In the same sense (and I apologize for the unpleasant metaphor) pregnancy is easy; once a woman is pregnant, barring miscarriage, she's gonna have a kid. It's going to be painful and at times miserable, but it's going to happen. I agree with you that there's less disutility experienced if she has an abortion, but she has to make a conscious choice to do that, go to a clinic, and pay a bill. It may be the more pleasant way out, but, in this context, it's not easier; it doesn't really happen by accident (excepting spontaneous miscarriage, which is admittedly fairly common, but besides the point).

    Everything you describe is something she endures; there's no willpower to it. This is in contrast with saving a life as discussed earlier, which requires a deliberate, conscious de... (read more)

    DanArmak (14y):
    I think the crucial point here is the disparity between the sexes. The amount of effort required for a man to induce a pregnancy - the cost of the dating-and-mating game - certainly does not make it "easy". I expect this is also the case for some (least-generally-attractive) women.
    Alicorn (14y):
    I'd distinguish here between "difficult" as in requiring discomfort and "difficult" as in requiring optional effort. By optional effort, I mean effort that one could feasibly take the null action rather than exert. None of the effort expended in carrying a baby to term is really optional at the time. If I were to get pregnant, I could at no time say to myself, "Well, I'd really rather not vomit right now, so I'll take the null action." Even if there were something I could have done earlier to enable the null action at that time, once it gets to that point, it's happening whether I like it or not. Similar with labor. I don't think anyone will perform an abortion when one is literally about to extrude an infant, practically speaking, so although labor is an immense effort, it is not an optional effort once it's gotten to that point. Taking Plan B, going through with an abortion, and yes - mailing a check, are all optional effort.
    tut (14y):
    We do celebrate life creators quite a bit. But we celebrate their good fortune rather than their altruism, since the parents are among the people who benefit most from their parenthood.

    In a Big World, which this one appears to be on at least three counts (spatially infinite open universe, inflationary scenario in Standard Model, and Everett branches), everyone who could exist already exists with probability 1. Thus, the issue is not so much creating new people, but ensuring that good things happen to people given that they exist. Creating a new person helps when you can provide them with good outcomes, because what you're really doing is increasing the frequency of good outcomes from that starting point.

    Or at least that's one anthropic interpretation of ethics. But it is one reason why I don't endorse running out and creating lots of people if that lowers the average standard of living. In a Big World, it's the average standard of living that you care about.

    Capla (9y):
    Tell me if I'm wrong, but doesn't a many worlds reality mean that all possible states of those people also occur with a probability of one? How can you possibly "[increase] the frequency of good outcomes"? All the outcomes occur in some world, irrespective of our actions.

    Eliezer, I hope we can agree that your conclusion is intriguing, but far from clearly true. After all, if every possible person exists, then so does every possible history for every possible person. How then could you effect any relative frequencies?

    I have a paper on this problem of infinities in ethics: http://www.nickbostrom.com/ethics/infinite.pdf

    It is a difficult topic.

    Where does this end? If a philanthropist saves one life instead of two, he is as damned as any murderer. Surely we in the more prosperous countries could easily save many lives by cutting back on luxuries, but we choose not to (this would no doubt apply to nearly everyone in these countries). Does that make us all murderers?

    Yes. We just aren't socially condemned for it.

    Eliezer, whatever it is you were getting at in your comment, it was waaay over my head. When I searched on Wikipedia for Big World, I got an album by Joe Jackson. When I looked for Everett branches, I found an intriguing article about the Piscataquog River. Could you point me to some further reading? I hate to feel left out of the loop here.

    Normal_Anomaly (12y):
    Google "Tegmark big universe" (without the quotes), or read the Quantum Physics Sequence.

    Robin: And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.

    I have to question that comparison. When you save a life that already exists, you are delivering them from a particular existential danger, even if not from the generic existential danger they face constantly by virtue of being alive. But when you create a life, you are delivering a new "hostage to fortune" and creating an existentially endangered being where none previously existed.

    I think on re-reading this that Robin's initial comment was meant to be ironic, or at least a provocative extension of Eliezer's ideas.

    As far as Eliezer's point, I would imagine that rabbis and other moral philosophers would agree that saving two lives is better than saving one. Beyond that the calculus of human lives is a difficult problem. Many people would say we should not sacrifice one to save two. There is this distinction between active and passive actions, which are judged very differently. It's all something of a mess.

    phob (14y):
    Utilitarianism to the rescue, then.
    AndyC (10y):
    Utilitarianism is unlikely to rescue anyone from the conundrum (unless it's applied in the most mindless way -- in which case, you might as well not think about it). There's an obvious social benefit to being secure against being randomly sacrificed for the benefit of others. You're not going to be able to quantify the utility of providing everyone in society this benefit as a general social principle, and weigh the benefit of consistency on that point against the benefit of violating the principle in any given instance, any more easily than you could have decided the issue without any attempt at quantification.

    Charles, you might want to read some of Peter Singer's writings on this point.

    Robin, it's clear that relative frequencies exist and matter somehow, even though it might seem like they shouldn't (e.g. because of the ordering problem described in Dr. Bostrom's paper). We observe random events with nonuniform distributions to occur according to the distribution, as opposed to uniformly. We don't live in an extremely bizarre, acausal world even though there are an infinite number throughout spacetime, because the laws of physics are such as to make bizarre worlds rarer than normal ones (even though there are many more possible bizarre worlds than normal ones). "Difficult topic" is probably an understatement.

    Robin, we can definitely agree that my notion about relative conditional frequencies is not at all clearly true. This is one of those rare, rare issues that still confuses even me. As such - this is an important general principle, that I'd like to emphasize - when you try to model things that are deeply confusing and mysterious to you, you should not be very confident in your judgments about them.

    If infinite people exist, how do our subjective probabilities come out right - why don't we always see every possible die roll with probability 1/6, even when the dice are loaded? How is computation possible, when every if statement always branches both ways? I seriously don't know. Maybe the numbers are finite but just very large. But, if for whatever reason it is possible to flip a biased coin and indeed see mostly heads, then we can try to shape the outcomes of people's lives so that their futures are mostly happy. I don't claim to be sure of this. It is just my attempt to make things add up to normality.

    Jeremy, see Nick Bostrom's paper.

    and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child.

    But what if, while sipping this Diet Pepsi, you're fiddling on your laptop computer to organise the shipment of HIV drugs to Africa? If, at the moment the second child's skull gets crushed by the onrushing train, you manage to secure governmental support for a deal which will save thousands of people? If you wipe the splattered blood from your suit as you rush off to open a new orphanage in India?

    Something still feels wrong. I think our intellectual urge to save lives is... (read more)

    There's also the issue of immediacy.

    Your organisation of HIV drugs is likely something that could wait a minute, especially with an excuse as universally acceptable as saving a child from being run over by the train.

    Thus, it is nigh certain that you could achieve both goals.

    As for the philanthropist, I think the relevant heuristic is that we approve of anyone who saves lives, to socially reinforce the urge for others to do so. If our instincts developed in a tribal environment, then saving a life, or a small group of lives, is the best that anyone can realistically do, so we had no need to scale our admiration to a larger scale.

    But if we are to become less biased, and disdain the philanthropist who spends his life-saving money inefficiently, we should be totally consistent about it, and disdain far, far more those who could spend money to save lives and don't (unfortunately, that probably includes most of us).

    In a Big World, which this one appears to be on at least three counts (spatially infinite open universe, inflationary scenario in Standard Model, and Everett branches), everyone who could exist already exists with probability 1.

    Infinite reasoning has many issues. Since we can only have finite amounts of evidence, then there will always be an "X" such that the probability of "the universe is of size X or bigger" becomes tiny - swamped by uncertainties in our reasoning, even the possible uncertainties in our logic (by the way, if this is ... (read more)

    I am rather surprised that no one is questioning the unspoken presupposition that all human lives are of equal value.

    They certainly aren't in my estimation.

    Acksiom, yes, I find it strange as well. Certainly, people in our immediate community are more valuable than people we have never met on other continents. However, I don't think "community" should extend beyond those we actually interact with. It shouldn't include abstract groupings such as "state" or "nation". Supporting your high-school sports team is fine.

    phob (14y):
    I don't see why they should be more valuable. From a selfish perspective, it might feel worse to lose someone you know, but from a charitable perspective, I don't value someone merely because I am familiar with them.

    You can be more certain about what actions are likely to save a life than about what actions are likely to save many lives.

    What if, as we approach the Singularity, it is provably or near-provably necessary to do unethical things like killing a few people or letting them die to avoid the worst of Singularity outcomes?

    (I am not referring here to whether we may create non-Friendly AGI. I am referring to scenarios even before the AGI "takes over.")

    Such scenarios seem not impossible, and they create ethical dilemmas along the lines of what Yudkowsky mentions here.

    Certainly, people in our immediate community are more valuable than people we have never met in other continents.

    On a personal level, of course. But morally and ethically, and especially if you are looking for universal ethical values, this is most definitely not the case.

    I am rather surprised that no one is questioning the unspoken presupposition that all human lives are of equal value.

    That presupposition is an unjustified bias, but I feel a practical one. We've seen in the past what happens when human lives were openly valued at different levels, and the... (read more)

    christopherj (10y):
    I'm not sure it makes for bad universal ethics. "Every man for himself, if he is able" is a very sturdy sentiment, not prone to abuse. Similarly, every individual, family, social circle, city, state, nation is responsible first for its own well-being, and only then for that of others. Certainly, you could do more overall good by helping those whose needs are most cost-effective to meet, or more concentrated good by helping those whose needs are most severe - but that is not an evolutionarily stable strategy. Not that I'm a big fan of evolution, especially when it comes to ethics, but let me put it this way - it is very rare to see someone who works overtime so that they can donate money to save strangers. If you're trying to figure out a universal code of ethics, it would probably help if lots of people, or at least yourself, are willing to follow it. If not, it might still have some use if people shift their morality at least a little toward the optimum, and a lot of value if it can be implemented in an AI. But for now, it would be more useful to have a decently good but popular code of ethics, and that probably means valuing "our own" more than "others". Thought experiment: what's the social status of someone who follows up a fundraising drive for the benefit of a community member by reminding people how many lives in Africa could be saved with that same money?
    [anonymous] (17y):

    Also, human lives can have different instrumental value but the same inherent value, such that (for instance) a researcher in area X that has the potential to save many, many lives is worth more instrumentally than a random man-on-the-street.

    Joshua, what kind of scenarios could those be? (But I would do a straightforward expected-lives-saved calculation, keeping in mind the uncertainty of whether it would actually move the Singularity forward, and whether bad PR and having the police on my tail could delay the Singularity. The actual action would depend qu... (read more)

    "Certainly, people in our immediate community are more valuable than people we have never met in other continents. However, I don't think "community" should include beyond those who we actually interact with. It shouldn't include abstract groupings such as "state" or "nation". Supporting your high-school sports team is fine."

    Yeah, wouldn't the world be a great place if everyone thought like this... screw helping the world... let's just help ourselves and those whom we interact with. Oh yeah, and while we are at it, ... (read more)

    Capla (9y):
    You're not making your point. It's a prisoner's dilemma: you can't control how the other party acts.

    The choice between an "averagist" and "totalist" model of optimal human welfare is a tough one. The averagist wants to maximize average happiness (or some such measure of welfare); the totalist wants to maximize total happiness. Both lead to unfortunate reductio arguments. Average human welfare can be improved by eliminating everyone who is below average. This process can be repeated successively until we have only one person left, the happiest man in the world. The totalist would proceed by increasing the population to the very edge of... (read more)

    CronoDAS (13y):
    "Obvious" counterargument: If you kill everyone else, the happiest man in the world will become less happy.
    Normal_Anomaly (12y):
    Except that if they still believe their lives are worth living, then you are causing them disutility by violating their preference to survive. It also causes everyone else disutility, because they don't want other people killed, and because they become worried about themselves or their families dying if they become unhappy. It also eliminates the future possibility of the killed people's lives improving.

    I believe some models of physics require the universe to be infinite

    These are dependent on certain assumptions, the most general of which is that the laws of physics are the same everywhere (the universe is seen as "isotropic and homogeneous"). But those sorts of principles arise from observation.

    And we can never be entirely sure that they are true. Now, normally this doesn't matter - the probability of them being false is so tiny that we can consider them true. But infinity is nasty. Let's put a probability estimate on "There are mo... (read more)

    christopherj (10y):
    The idea that the universe is "isotropic and homogeneous" does not require it to be infinite. For example, the universe could be shaped like a sphere's surface, which is closed and finite. The size and shape of the universe, and its ultimate fate, are answered by the question "What is the sum of the angles of a very large triangle?" (this turns out to be equivalent to measuring Omega, the density parameter of the universe). If Omega > 1, then the universe is closed, shaped like a sphere, finite, and will collapse in a Big Crunch. If Omega = 1, then the universe is flat (but could still be finite, e.g. doughnut-shaped, or infinite, like the Cartesian plane), and end in heat death. If Omega < 1, then the universe is open, shaped like an infinite saddle, and will end in a Big Rip. To the best of our knowledge, Omega appears to be 1 within the margin of error, so it might actually be slightly bigger or smaller than 1. (See: Shape of the Universe; Fate of the Universe.)

    I'd be curious to know if there is a principled model for optimal human happiness which does not conflict so violently with our moral instincts.

    Seems we need to take "creating" and "destroying" humans out of the equation - total or average happiness can work fine in a fixed population (and indeed are the same). We can tweak the conditions maybe, and count the dead and the unborn as having a certain level of happiness - but it will still lead to assumptions that violate our instincts; there will always be moments where creating a new lif... (read more)

    Stuart, as far as the infinities go, I can imagine arguments that suggest that an infinite universe is more likely than a finite one, especially a finite one that is extremely large. For example, if the laws of physics were to turn out to be much simpler for an infinite universe, given our observations, that would be evidence in that direction. Conceptually, infinity is a simpler concept than particular very large numbers, so Occam's razor might lead us to choose infinity.

    In fact I would argue that if your prior has a non-zero probability for infinite size... (read more)

    Joe, that wasn't my point. I believe ethical theories can and should try to capture the spirit of morality. A truer way to appreciate the value of a stranger's life is to understand that to many people close to her, she is not merely a stranger. I was mainly giving a possible reason for not DRASTICALLY sacrificing your money for donation.

    And I fully agree with Stuart that it should be an exception and not a rule.

    But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?

    This isn't a problem with the claim that a human life is of infinite value as such. It's a problem with the claim that it's morally appropriate to attach the concept of comparable value to human lives at all. It's what happens when you start taking most u... (read more)

    Paul, since my background is in AI, it is natural for me to ask how a "duty" gets cashed out computationally, if not as a contribution to expected utility. If I'm not using some kind of moral points, how do I calculate what my "duty" is?

    How should I weigh a 10% chance of saving 20 lives against a 90% chance of saving one life?

    If saving life takes lexical priority, should I weigh a 1/googolplex (or 1/Graham's Number) chance of saving one life equally with a certainty of making a billion people very unhappy for fifty years?

    Such questions form the base of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.
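    For concreteness, a minimal sketch of the first question's arithmetic. Treating each life saved as one unit of value is a simplifying assumption here, not part of the theorems cited:

    ```python
    # Weighing uncertain rescues by expected lives saved,
    # treating each life as one unit of utility (the simplifying assumption).
    def expected_lives(p, lives):
        return p * lives

    option_a = expected_lives(0.10, 20)  # 10% chance of saving 20 lives
    option_b = expected_lives(0.90, 1)   # 90% chance of saving one life

    print(option_a, option_b)  # 2.0 0.9 -> the 10%-of-20 gamble saves more in expectation

    # Under lexical priority for lives, even a 1/googolplex chance of saving one
    # life would outweigh any finite amount of unhappiness - the kind of
    # consequence the expected-utility theorems are pointing at.
    ```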

    Eliezer: that's a good point as far as it goes. But the answer many contemporary deontologists would give is that you can't expect to be able to computationally cash out all decision problems, and particularly moral decision problems. (Who said morality was easy?) In hard cases, it seems to me that the most plausible principles of morality don't provide cookie-cutter determinate answers. What fills in the void? Several things kick in under various versions of various theories. For example, some duties are understood as optional rather than necessary,... (read more)

    how a "duty" gets cashed out computationally, if not as a contribution to expected utility. If I'm not using some kind of moral points, how do I calculate what my "duty" is?

    We humans don't seem to act as if we're cashing out an expected utility. Instead we act as if we had a patchwork of lexically distinct moral codes for different situations, and problems come when they overlap.

    Since current AI is far from being intelligent, we probably shouldn't see it as compelling argument for how humans do or should behave.

    Such questions form the b... (read more)

    I don't see the relevancy of Mr. Burrows' statement (correct, of course) that "Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts."

    This is certainly of concern if our goal is to maximize the virtue of rich people. If it is to maximize general welfare, it is of no concern at all. The recipients of charity don't need a percentage's worth of food, but a certain absolute amount.

    Is there anyone else who reads this and thinks, "but my altruism is ultimately grounded in the emotional effect that altruism has on myself; it cannot be otherwise. I'm only deluding myself to think that more lives are better, since from my perspective, they feel the same (and I'm trapped in my perspective; my perspective is the only one that can matter in my decision making)"? That is, I don't actually try to maximize utility generally, just my own utility. It just so happens that the primary way to maximize my utility in most situations is to he... (read more)

    lackofcheese (9y):
    There is no need for morality to be grounded in emotional effects alone. After all, there is also a part of you that thinks that there is, or might be, something "horrible" about this, and that part also has input into your decision-making process. Similarly, I'd be wary of your point about utility maximisation. You're not really a simple utility-maximising agent, so it's not like there's any simple concept that corresponds to "your utility". Also, the concept of maximising "utility generally" doesn't really make sense; there is no canonical way of adding your own utility function together with everyone else's. Nonetheless, if you were to cash out your concepts of what things are worth and how things ought to be, then in principle it should be possible to turn them into a utility function. However, there is a priori no reason that that utility function has to only be defined over your own feelings and emotions. If you could obtain the altruistic high without doing any of the actual altruism, would it still be just as worthwhile?
    hyporational (9y):
    The high is a mechanism by which values are established. Reward or punishment in the past but not necessarily in the present is sufficient for making you value something in the present. Because of our limited memories introspection is pretty useless for figuring out whether you value something because of the high or not.
    lackofcheese (9y):
    If you have the values already and you don't have any reason to believe the values themselves could be problematic, does it matter how you got them? It may be that an altruistic high in the past has led you to value altruism in the present, but what matters in the present is whether you value the altruism itself over and above the high.

    The idea is that valuing a life as that important is what guides the HOW to save the nation. The how is with utmost regard for all people's existence - especially their exposure to suffering.

    By valuing people, in this case human life, to that great a degree, it establishes respectful acknowledgement of the great forces which were set in motion to create such a marvel as a human being.

    Plus, there IS always the accountability for having devalued a human life when that is the beginning of the end of good policy, behavior, ethics, decency. To value one human so much? Makes you valuable to humans. Etc..

    I know I'm way behind for this comment, but still: this point of view makes sense on a level - that saving additional people is always(?) virtuous and you don't hit a ceiling of utility. But, and this is a big one, this is mostly a very simplistic model of virtue calculus, and the things it neglects turn out to have a huge and dangerous impact.

    Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.

    First case in point: can a ... (read more)