Max Tegmark observed that we have three independent reasons to believe we live in a Big World: a universe which is large relative to the space of possibilities.  For example, on current physics, the universe appears to be spatially infinite (though I'm not clear on how strongly this is implied by the standard model).

If the universe is spatially infinite, then, on average, we should expect that no more than 10^(10^29) meters away is an exact duplicate of you.  If you're looking for an exact duplicate of a Hubble volume - an object the size of our observable universe - then you should still on average only need to look 10^(10^115) lightyears.  (These are numbers based on a highly conservative counting of "physically possible" states, e.g. packing the whole Hubble volume with potential protons at maximum density given by the Pauli exclusion principle, and then allowing each proton to be present or absent.)
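Those double exponentials can be sanity-checked with a few lines of arithmetic (a rough sketch only; the ~10^115 proton-slot count per Hubble volume is the assumption from the paragraph above):

```python
import math

# Conservative state count: ~10^115 proton-sized slots per Hubble volume,
# each either occupied or empty, giving 2^(10^115) distinct configurations.
log10_slots = 115

# log10(log10(number of states)) = log10(10^115 * log10(2))
#                                = 115 + log10(log10(2))
log10_log10_states = log10_slots + math.log10(math.log10(2))
print(log10_log10_states)  # ~114.5, i.e. roughly 10^(10^114.5) states
```

In an infinite universe with effectively random configurations, a duplicate of a given volume is expected within roughly as many volumes as there are states, so the expected search distance has the same double-exponential form as the state count - which is why the exponent barely moves between "number of states" and "distance to a duplicate".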

The most popular cosmological theories also call for an "inflationary" scenario in which many different universes would be eternally budding off, our own universe being only one bud.  And finally there are the alternative decoherent branches of the grand quantum distribution, aka "many worlds", whose presence is unambiguously implied by the simplest mathematics that fits our quantum experiments.

Ever since I realized that physics seems to tell us straight out that we live in a Big World, I've become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.

If your decision to not create a person means that person will never exist at all, then you might, indeed, be moved to create them, for their sakes.  But if you're just deciding whether or not to create a new person here, in your own Hubble volume and Everett branch, then it may make sense to have relatively lower populations within each causal volume, living higher-quality lives.  It's not like anyone will actually fail to be born on account of that decision - they'll just be born predominantly into regions with higher standards of living.

Am I sure that this statement, that I have just emitted, actually makes sense?

Not really.  It dabbles in the dark arts of anthropics, and the Dark Arts don't get much murkier than that.  Or to say it without the chaotic inversion:  I am stupid with respect to anthropics.

But to apply the test of simplifiability - it seems, in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to "ensure they get born".

Imagine taking a survey of the whole universe.  Every plausible baby gets a little checkmark in the "exists" box - everyone is born somewhere.  In fact, the total population count for each baby is something-or-other, some large number that may or may not be "infinite" -

(I should mention at this point that I am an infinite set atheist, and my main hope for being able to maintain this in the face of a spatially infinite universe is to suggest that identical Hubble volumes add in the same way as any other identical configuration of particles.  So in this case the universe would be exponentially large, the size of the branched decoherent distribution, but the spatial infinity would just fold into that very large but finite object.  And I could still be an infinite set atheist.  I am not a physicist so my fond hope may be ruled out for some reason of which I am not aware.)

- so the first question, anthropically speaking, is whether multiple realizations of the exact same physical process count as more than one person.  Let's say you've got an upload running on a computer.  If you look inside the computer and realize that it contains triply redundant processors running in exact synchrony, is that three people or one person?  How about if the processor is a flat sheet - if that sheet is twice as thick, is there twice as much person inside it?  If we split the sheet and put it back together again without desynchronizing it, have we created a person and killed them?

I suppose the answer could be yes; I have confessed myself stupid about anthropics.

Still:  I, as I sit here, am frantically branching into exponentially vast numbers of quantum worlds.  I've come to terms with that.  It all adds up to normality, after all.

But I don't see myself as having a little utility counter that frantically increases at an exponential rate, just from my sitting here and splitting.  The thought of splitting at a faster rate does not much appeal to me, even if such a thing could be arranged.

What I do want for myself is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy.  This is the "probability" of a good outcome in my expected utility maximization.  I'm not concerned with having more of me - really, there are plenty of me already - but I do want most of me to be having fun.

I'm not sure whether or not there exists an imperative for moral civilizations to try to create lots of happy people so as to ensure that most babies born will be happy.  But suppose that you started off with 1 baby existing in unhappy regions for every 999 babies existing in happy regions.  Would it make sense for the happy regions to create ten times as many babies leading one-tenth the quality of life, so that the universe was "99.99% sorta happy and 0.01% unhappy" instead of "99.9% really happy and 0.1% unhappy"?  On the face of it, I'd have to answer "No."  (Though it depends on how unhappy the unhappy regions are; and if we start off with the universe mostly unhappy, well, that's a pretty unpleasant possibility...)
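The fractions in that example can be checked directly (a trivial sketch of the arithmetic exactly as stated, nothing more):

```python
# Start with 1 unhappy baby for every 999 really-happy babies.
happy, unhappy = 999, 1
frac_unhappy = unhappy / (happy + unhappy)
print(frac_unhappy)  # 0.001 -> "99.9% really happy and 0.1% unhappy"

# The happy regions create ten times as many babies at one-tenth the quality:
happy_diluted = happy * 10
frac_unhappy_diluted = unhappy / (happy_diluted + unhappy)
print(frac_unhappy_diluted)  # ~0.0001 -> "99.99% sorta happy and 0.01% unhappy"
```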

But on the whole, it looks to me like if we decide to implement a policy of routinely killing off citizens to replace them with happier babies, or if we lower standards of living to create more people, then we aren't giving the "gift of existence" to babies who wouldn't otherwise have it.  We're just setting up the universe to contain the same babies, born predominantly into regions where they lead short lifespans not containing much happiness.

Once someone has been born into your Hubble volume and your Everett branch, you can't undo that; it becomes the responsibility of your region of existence to give them a happy future.  You can't hand them back by killing them.  That just makes their average lifespan shorter.

It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.

And that's why, when there is research to be done, I do it not just for all the future babies who will be born - but, yes, for the people who already exist in our local region, who are already our responsibility.

For the good of all of us, except the ones who are dead.

72 comments

I'm completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?

If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?

Also "standard model" doesn't mean what you think it means and "unpleasant possibility" isn't an argument.

the most important adaptation an ideology can make to improve its inclusive fitness for consumption by the human brain is to

  1. refrain from making falsifiable claims
  2. convince its followers to aggressively expand

1 is accomplished by making the ideology rest on a priori claims. everything that rests on top of that claim can be perfectly logical given the premise. since most people don't examine their beliefs axiomatically, few will question the premise as long as they are provided the bare minimum of comfort. 2 is accomplished by activating the "mor... (read more)

The data you point to only seem to suggest the universe is large; how do they also suggest it "is large relative to the space of physical possibilities"? The likelihood ratio seems pretty close as far as I can see.

With steven, I don't see how, on your account, any of your actions can in fact affect the "proportion of my future selves to lead eudaimonic existences". If people in your past couldn't affect the total chance of your existing, how is it that you can affect the total chance of any particular future you existing? And how can there be a differing relative chance if the total chances all stay constant?

Thanks for the Portal reference. That was great.

Steven, I call the little continuous dials the "amount of reality-fluid" to remind myself of how confused I am.

"Unpleasant possibility" isn't an argument but I didn't feel like going into the rather complex issues involved (probability of UnFriendly AI running ancestor simulations, how many of them, versus probability of Friendly AI, versus probability of hitting the Unhappy Valley with a near-miss FAI or a meddling-dabbler AGI trained on smiling faces, versus probability of inhuman aliens creating minds that we care about, plus going into the issues of QTI).

Nazgul, you can act swiftly to capture all resources in your immediate vicinity regardless of whether you plan to share them out among few or many individuals.

Robin, spatial infinity would definitely be large relative to the volume of physical possibilities (infinite versus finite). With many-worlds and a mangling cutoff... then not every physical possibility would be realized, but I would expect most possible babies would be. All the babies worth making could be duplicated many times over among the Everett branches of all moral civilizations, even if any given branch kept their populations low and living standards high. Does it look different to you?

Most of the concepts here are ethical. Whether some contraption has the same personal identity as you do, and whether it's good to have that contraption copied/destroyed, is a moral question, in a case when the unnatural concept of what's right gets extended to very strange situations. Whether we cut this question in terms of personal identity or patterns of elementary particles is a matter of cognitive algorithm used to determine the decision. It doesn't matter whether an upload is called "the same person" as its biological preimage, it only mat... (read more)

Eliezer, our data only show that the universe looks pretty flat, not that it is exactly flat. And it could be finite and exactly flat with a non-trivial topology. On if all babies are duplicated in MWI, it seems to depends on exactly what part of the local physical state is required to be the same.

Vladimir, many of these anthropic-sounding questions can also translate directly into "What should I expect to see happen to me, in situations where there are a billion X-potentially-mes and one Y-potentially-me?" If X is a kind of me, I should almost certainly expect to see X; if not, I should expect to see Y. I cannot quite manage to bring myself to dispense with the question "What should I expect to see happen next?" or, even worse, "Why am I seeing something so orderly rather than chaotic?" For example, saying "I only care about people in orderly situations" does not cut it as an explanation - it doesn't seem like a question that I could answer with a utility function.

I have not been able to dissolve "the amount of reality-fluid" without also dissolving my belief that most people-weight is in ordered universes and that most of my futures are in ordered universes, without which I have no explanation for why I find myself in an ordered universe and no expectation of a future that is ordered as well.

In particular, I have not been able to dissolve reality-fluid into my utility function without concluding that, by virtue of carin... (read more)

Eliezer, I don't think your reality fluid is the same thing as my continuous dials, which were intended as an alternative to your binary check marks. I think we can use algorithmic complexity theory to answer the question "to what degree is a structure (e.g. a mind-history) implemented in the universe" and then just make sure valuable structures are implemented to a high degree and disvaluable structures are implemented to a low degree. The reason most minds should expect to see ordered universes is because it's much easier to specify an ordered ... (read more)

and where I just said "universe" I meant a 4D thing, with the dials each referring to a 4D structure and time never entering into the picture.

I was going to make about the same objection steven makes -- if you take this stuff (MWI, anthropic principle, large universes) seriously as a guide to practical, everyday ethical decision-making, it seems to lead inexorably to nihilism -- no decision you make matters very much. That doesn't sound at all desirable, so my instinct is to suspect that there is something wrong either with the physics ideas, or (more likely) with the way they are being applied. But maybe not! Maybe nihilism is valid, but then why are we bothering to be rational or to do any... (read more)

mtraven, why are we "bothering to be rational or to do anything at all" (rather than being nihilists), if nihilism seems likely to be valid? Well, as long as there is a chance, say, only a .0000000000000001 chance, that nihilism is invalid, there is nothing to lose and possibly something to gain from assuming that nihilism is invalid. This refutes nihilism completely as a serious alternative.

I think basically the same is true about Yudkowsky's fear that there are infinitely many copies of each person. Even if there is only a .0000000000000001 chance that there are only finitely many copies of each of us, we should assume that that is the case, since that is the only type of scenario where there can be anything to gain or lose, and thus the only possible type of scenario that might be a good idea to assume to be the case. That is, given the assumption that one cannot affect infinite amounts by adding, no matter how much one adds. To this, I am an agnostic, if not an atheist. For example, adding an infinite amount A to an infinite amount A can, I think, make 2A rather than 1A.

Ask yourself which you would prefer: 1) Being happy one day per year and suffering the rest of each year, for an infinite number of years, or 2) The other way around? Would you really not care which of these two would happen? You would. Note that this is the case even when you realize that a year is only finitely more than a day, meaning that each of alternatives 1 and 2 would give you infinitely much happiness and infinitely much suffering. This strongly suggests that adding an infinite amount A to an infinite amount A produces more than A.

Then why wouldn't also adding a finite amount B to an infinite amount A produce more than A? I would actually suggest that, even given classical utilitarianism, my life would not be worthless just because there are infinitely much happiness and infinitely much suffering in the world with or without me. Each person's finite amount of happiness mu

"It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular."

This doesn't make sense to me. A superintelligence could:

  1. create a semi-random plausible human brain emulation de novo, and whatever this emulation was, it would be the continuation of some set of human lives.

  2. conduct simulations to explore the likely distribution of minds across the multiverse, as wel

... (read more)

Carl, that assumes QTI, i.e., no subjective conditional probability ever contains a Death event. Things do get strange then.

Eliezer: I'm not sure you'd really get much in the way of interference effects between indistinguishable Hubble volumes.

What I mean is you'd need some event that has in its causal history stuff from two "equivalent" Hubble volumes, right?

Otherwise, well, how would any nontrivial interference effects related to the indistinguishability between multiple Hubble volumes manifest? Configuration space isn't over the Hubble volumes but over the entirety of the universe, right?

I still see no adequate answer to the question of how you can change P(A|B) if you can't change P(A) or P(B). If every possible mind exists somewhere, and if all that matters about a mind is that it exists somewhere, then no actions make any difference to what matters.

The idea is that you can't change whether a mind exists but you can, possibly, change how much of it exists, or perhaps, how much of different futures it has. By multiply instantiating it? I guess so. It doesn't seem to make much sense, but if I don't presume something like this, I have to weight Boltzmann brains the same as myself.

I'm not trying to rest this argument on the details of the anthropics. Something more along the lines of - in a Big World, I don't have to worry as much about creating diversity or giving possibilities a chance to exist, rel... (read more)


Eliezer, it seems you are just expressing the usual intuition against the "repugnant conclusion", that as long as the universe has a lot more creatures than are on Earth now, having even more creatures can't be very important relative to each one's quality of life.

But in technical terms if you can talk about how much of a mind exists, and can promote more of one kind of mind relative to another, then you can talk about how much they all exist, and can want to promote more minds existing to a larger degree.

Well, this is morality we're talking about, right? So in that case we should ask ourselves what we want.

Let's say that there are already 10^10^20 people out there, and you're suddenly blessed with a thousand times the resources. Would you rather have 10^(10^20 + 3) people in existence, or raise the standard of living by a factor of a thousand?

To look at it another way, let's say that you recently glanced up out of the corner of your eye and saw a dust speck. I have a thousand units of resource. Would you prefer that I simulate a thousand different versions of Robin who saw the dust speck in slightly different locations in a 10 x 10 x 10 grid, or would you rather have a thousand times as much money?

For me, the value of creating new existences is linked to their diversity; as you create more people, you run out of diversity, and so it becomes more important to create the best people rather than to create new people.

Suppose that Earth were the only planet, the only branch, and the only region in all of existence. Then we might want to have mathematicians share all possible developments with each other, in order to prevent them from duplicating each other's work and let them prove... (read more)

"So in that case we should ask ourselves what we want."


The standard problem is that people have incoherent preferences over various population scenarios. They prefer to substantially increase the population in exchange for a small change in QOL, but they reject the result of many such tradeoffs in sequence. Critical-level views, or ones that weight both QOL and total independently, all fail to resolve this.

Carl is right; this is a minefield in terms of misleading intuitions. Also, there is already a substantial philosophy literature dealing with it; best to start with what they've learned.


Vladimir, many of these anthropic-sounding questions can also translate directly into "What should I expect to see happen to me, in situations where there are a billion X-potentially-mes and one Y-potentially-me?" If X is a kind of me, I should almost certainly expect to see X; if not, I should expect to see Y. I cannot quite manage to bring myself to dispense with the question "What should I expect to see happen next?" or, even worse, "Why am I seeing something so orderly rather than chaotic?" For example, saying
... (read more)

I'm familiar with Parfit's Repugnant Conclusion, and was actually planning to do a post on it at some point or another, because I took one look and said "Isn't that just scope insensitivity?" But I also automatically translated the problem into Small World terms so that new people were actually being brought into existence; and, in retrospect, even then, visualized it in terms of a number of people small enough that they could have reasonably unique experiences (that is, not a thousand copies of Robin Hanson looking at a dust speck in slightly different places).

With those provisos in place, the Repugnant Conclusion is straightforwardly "repugnant" only because of scope insensitivity. By specification, each new birth is something to celebrate rather than to regret - it can't be an existence just marginally good enough to avoid mercy-killing after being born, with the disutility of the death taken into account. It has to be an existence containing enough joys to outweigh any sorrows, so that we celebrate its birth. If each new birth is something to celebrate, then the "repugnance" of the Repugnant Conclusion is just because we're tossing the thousand... (read more)

I'm just incredibly skeptical of attempts to do moral reasoning by invoking exotic metaphysical considerations such as anthropics, even if one is confident that ultimately one will have to do so. Human rationality has enough trouble dealing with science. It's nice that we seem to be able to do better than that, but THIS MUCH better? REALLY? I think that there are terribly strong biases towards deciding that "it all adds up to normality" involved here, even when it's not clear what 'normality' means. When one doesn't decide that, it seems that the tendency is to decide that it all adds up to some cliche, which seems VERY unlikely. I'm also not at all sure how certain we should be of a big universe, but personally I don't feel very confident of it. I'd say it's the way to bet, but I'm not sure at what odds it remains the way to bet. I rarely find myself in practical situations where my actions would be different if I had some particular metaphysical belief rather than another, though it does come up and have some influence on e.g. my thoughts on vegetarianism.


Good lives versus many lifeforms? Yes please.

I confessed myself confused! Really, I did! But even being confused, I've got to update as best I can. In a sufficiently large universe, I care more about better lives and less about creating more people. Is that really so complicated?

You might be interested in the last section of Motion Mountain, the free online physics textbook. It presents absolute limits for various measures of the universe, derived from quantum mechanics and general relativity. It appears that we live in a finite universe, though all of this stuff is pretty speculative.

I find it suspicious that people's preferences over population, lifespan, standard of living, and diversity seem to be "kinked" near their familiar world. A world with 1% of the population, standard of living, lifespan, or diversity of their own world seems to most a terrible travesty, almost a horror, while a world with 100 times as much of one of these factors seems to them at most a small gain, hardly worth mentioning. I suspect a serious status quo bias.

Couldn't this argument cut the other way? Maybe the only reason we think a small population with an average utility of 100 is worse than a billion people with an average utility of 99 is that we're "kinked" to a world inhabited by billions.

Personally, when I read "The City and the Stars," which takes place on a very sparsely populated future Earth, I agreed with the author that it was a bad thing that the local population was less ambitious and curious than the humans of the past. But I did not think it was a horrible travesty that there were so few people. I assume that for the duration of my reading I empathized with the inhabitants, and hence found their current population levels desirable. I've noticed the same thing when reading other books set in sparsely populated settings. I wish the inhabitants were better off, but don't think there need to be more of them.

A typical argument against "quality"-focused population ethics is that they favor much smaller populations with higher qualities of life than we currently have, while an argument against "quantity"-focused population ethics is that they favor much larger populations with lower qualities of life than we currently have. Both of these seem counter-intuitive, but which intuition should be kept and which should be rejected? Considering that our moral intuitions developed in small hunter-gatherer bands, I wouldn't be surprised if the quality-focused population ethics was actually the correct one.
... huh. I started to disagree with you, and found all the examples I came up with didn't actually seem that bad - up to and including a lone loner roaming an empty universe. On the other hand, they do seem a bit ... dull? Lacking the sort of explosive variety I picture in the Good Future.
I agree. I think that the reason that sparsely populated scenarios seem repugnant to us isn't because we want to maximize total utility, and they have a lower total utility level. Rather it's because we value things like diversity, friendship, love, and interpersonal entanglements, and we find the idea of a future where these things do not exist to be repugnant.

One argument hardcore total utilitarians use to claim people have inconsistent preferences about population ethics is that when ranking the following populations:

A) Ten billion people with ten thousand utility each, for a total utility of 100 trillion.
B) 200 trillion people with one utility each, for a total utility of 200 trillion.
C) One utility monster with 50 trillion utility.

people consider A to be better than both B and C. "Aha!" cry the total utilitarians. "So in one scenario utility is too heavily concentrated, and in another it isn't concentrated enough! Intransitive preferences! Status quo bias!"

What the hardcore total utilitarians fail to realize is that the reason people find C repugnant isn't because utility is heavily concentrated, it's that in order to have such high utility when it is the lone being in the universe, the utility monster must place no value at all on diversity, friendship, love, and interpersonal entanglements, and so forth. C isn't repugnant because utility is too concentrated, or because of status quo bias; it's repugnant because the lone inhabitant of C lacks a large portion of the gifts we give to tomorrow.

To test this theory I decided to compare populations A, B, and C again, with the stipulation that the multitudes inhabiting A and B were all hermits who never saw each other, and instead of diverse individuals they were repeated genetic duplicates of the same person. Sure enough, I found all three populations repugnant. But I might have found C to be a little less repugnant than A and B.
It's possible I'm more of a loner than you, so I find the idea of hermits less repugnant. On the other hand, clones tend to really mess up my intuitions regardless of the hypothetical. I'm pretty sure they should be penalized for lacking diversity, but as for the actual amount ... EDIT: also, be careful you're not imagining these hermits not doing anything fun. Agents getting utility from things we don't value is a surefire way to suck the worth out of a number.
Maybe I was using too strong a word when I said I found it "repugnant." I took your advice and tried to imagine the hermits doing things I like doing when I am alone. That was hard at first, since most of the things I like doing alone still require some other person at some point (reading a book requires an author, for instance). But imagining a hermit studying nature, interacting with plants and animals (the animals obviously have to be bugs and other nonsapient, nonsentient animals to preserve the purity of the scenario, but that's fine with me), doing science experiments, etc., that doesn't seem repugnant at all.

But I still prefer, or am indifferent to, one utility monster hermit vs. many normal hermits, especially if the hermits are all clones living in very similar environments. I'm not sure how much I value diversity that isn't appreciated. I think I'd prefer a diverse group of hermits to a nondiverse group, but the fact that the hermits never meet and are unable to appreciate each other's diversity seems to make it less valuable to me, the same way a painting that's locked in a room where no one will ever see it is less valuable. That may come back to my belief that value usually needs both an objective and subjective component. On the other hand I might value diversity terminally as well; as I said, the fact that no one appreciated the hermits' diversity made it less valuable to me, but not valueless.


Some brute preferences and values may be inculcated by connected social processes. Social psychology seems to point to flexible moral learning among young people (e.g. developing strong moral feelings about ritual purity as one's culture defines it through early exposure to adults reacting in the prescribed ways). Sexual psychology seems to show similar effects: there is a dizzying variety of learned sexual fetishes, and they tend to be culturally laden and connected to the experiences of today, but that doesn't make them wrong. Moral education dedic... (read more)

Be honest, how many of you finished the Portal Song at the end of this post?

Robin, I think I'm being consistent in caring about lifespan, standard of living, and diversity while not caring about population. (Diversity will look like concern for population but it will run into diminishing returns; still, if our Earth were the only civilization, then indeed there would be lots of experiences as-yet unrealized and the diversity motive would be strong. In other words, I'd consistently want a hundred times as much diversity as what we see in the immediate world around us.)

Suppose that instead of talking about people, we were just tal... (read more)

Not sure global diversity, as opposed to local diversity or just sheer quantity of experience, is the only reason I prefer there to be more (happy) people.

Since I probably don't care about abstract existence of music, but about experiencing music, this is correct for music for the wrong reasons, namely limited attention bandwidth. Analogy seduces, but doesn't seem to carry over...

in a Big World, I don't have to worry as much about creating diversity or giving possibilities a chance to exist, relative to how much I worry about average quality of life for sentients.

Can't say fairer than that.

Eliezer, given the proportion of your selves that get run over every day, have you stopped crossing the road? Leaving the house?

Or do you just make sure that you improve the standard of living for everyone in your Hubble Sphere by a certain number of utilons and call it a good day on average?

Eliezer, you know perfectly well that the theory you are suggesting here leads to circular preferences. On another occasion when this came up, I started to indicate the path that would show this, and you did not respond. If circular preferences are justified on the grounds that you are confused, then you are justifying those who said that dust specks are preferable to torture.

it seems in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to "ensure they get born".

That's an interesting intuition, but one that I don't share. I concur with Steven and Vladimir. The whole point of the classical-utilitarian "Each to count for one and none for more than one" principle is that the identity of the collection of atoms experiencing an emotion is irrelevant. What matters is increasing the num... (read more)

I'm finding Eliezer's view attractive, but it does have a few counterintuitive consequences of its own. If we somehow encountered shocking new evidence that MWI, &c. is false and that we live in a small world, would weird people suddenly become much more important? Did Eliezer think (or should he have thought) that weird people are more important before coming to believe in a big world?

I think many value the quality of life of their friends and loved ones more than they value hypothetical far-future abstractions. This has to do with evolution's impact on psychology - and doesn't have much to do with how big the universe is.

Eliezer, whenever you start thinking about people who are completely causally unconnected with us as morally relevant, alarm bells should go off.

What's worse though, is that if your opinion on this is driven by a desire to justify not agreeing with the "repugnant conclusion", it may signify problems with your morality that could annihilate humanity if you give your morality to an AI. The repugnant conclusion requires valuing the bringing into existence of hypothetical people with total utility x by as much as reducing the utility of existing peop... (read more)

Eliezer, also consider this: suppose I am a mad scientist trying to decide between making one copy of Eliezer and torturing it for 50 years, or on the other hand, making 1000 copies of Eliezer and torturing them all for 50 years.

The second possibility is much, much worse for you personally. For in the first possibility, you would subjectively have a 50% chance of being tortured. But in the second possibility, you would have a subjective chance of 99.9% of being tortured. This implies that the second possibility is much worse, so creating copies of bad expe... (read more)
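The anthropic arithmetic here can be made explicit. A minimal sketch (the function name, and the counting rule it encodes, are the commenter's premise rather than an established result):

```python
from fractions import Fraction

# The comment's scenario: a mad scientist makes n tortured copies of you,
# while the one original is spared.  On a naive subjective-probability
# count over the n + 1 instances, your chance of finding yourself
# tortured is n / (n + 1).

def p_tortured(n_copies):
    return Fraction(n_copies, n_copies + 1)

print(p_tortured(1))     # 1/2 -- one tortured copy, one spared original
print(p_tortured(1000))  # 1000/1001, about 99.9%
```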


You shouldn't waste your time figuring out how to act in an expanding multiverse, as opposed to a simple, single, unitary world. The problem of how to act and live even in the latter case is tough enough. Conditioning your choices on the former perspective is trying to think like a god, when you're in fact an animal.

Ever since I realized that physics seems to tell us straight out that we live in a Big World, I've become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.

I don't like that reasoning. If you create an interesting person here, in our Hubble volume, their interestingness can reflect back to you. The other "copies" 10^(10^50) or so light years away will never have anything to do with you.

I noticed you changed units between the average distance of another you and the average distance of another identical universe. That seems rather pointless. A lightyear is only 16 orders of magnitude larger than a meter, and is lost in rounding compared to 10^115 orders of magnitude.
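The "lost in rounding" point can be checked directly. A small sketch in Python (the constants and variable names are illustrative, not from the original comment):

```python
import math

# The post puts the nearest duplicate Hubble volume at ~10^(10^115)
# lightyears.  Working with log10 of the distance, switching from
# lightyears to meters only adds ~16 to an exponent of size 10^115.

log10_m_per_ly = math.log10(9.4607e15)  # meters per lightyear, ~15.98

log10_dist_ly = 10.0 ** 115             # log10(distance in lightyears)
log10_dist_m = log10_dist_ly + log10_m_per_ly

# The shift is so small that double-precision arithmetic literally
# rounds it away:
print(log10_dist_m == log10_dist_ly)    # True
print(log10_m_per_ly / log10_dist_ly)   # about 1.6e-114
```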

You mentioned a portion of people. I don't think there's any reason to believe that the universe is this big but still finite, and if it is infinite, there's no way to measure a fraction of people. There are infinitely many people whose lives are worth living and infinitely many whose lives ... (read more)


What I do want for myself, is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy. This is the "probability" of a good outcome in my expected utility maximization. I'm not concerned with having more of me - really, there are plenty of me already - but I do want most of me to be having fun.

Are you attracted to quantum suicide to win the lottery then? (Put to one side for a moment the consequences for your friends, etc who would have to deal with your passing away)

How does quantum suicide increase the proportion of one's future selves who are happy?
You could, for example, play the lottery and correlate your survival with winning...
As long as you don't count the future selves who die in the other worlds in the denominator. It's not clear to me that they shouldn't count. Using that logic, though, you could just commit painless suicide anytime you're slightly unhappy, and your only surviving selves would never be unhappy!
And what's wrong with this idea? Evolution gave us a strong instinct to not die, but evolution also gave us the false impression that our progression through time resembled a line rather than a tree, and that there's only one planet earth. Knowing now that you are (the algorithm of) a tree, perhaps it is worth rethinking the dying=bad idea? Death, if used selectively, could mean a very happy (if less dense) tree. If we live in a big world, this logic becomes very compelling. Who cares about killing 99% of yourself if you're infinite anyway, and the upside is that you end up with an infinite amount of happiness rather than an infinite sad/happy mixture?
I can't tell if you're playing devil's advocate or not... Surely you've heard of the categorical imperative and can predict the radical decrease in the happiness density of the universe if that were the reasoning employed by all sapient beings.
Sure, if everyone realized what a great idea quantum suicide was. But I think you can rest assured that that's not going to happen. Assuming, that is, that it is actually a good idea... Also I don't govern my action with the categorical imperative. It works in some cases, but in general it is awful.
You have to assume that everyone will join in on this scheme, if you're trying to argue in favor of it. If only a limited subset of people kill themselves when they're unhappy, then that leaves a huge number of people mourning the (to them) meaningless death of their loved ones. You'd have to not only kill yourself, but also make sure that anyone who was hurt by your death died as well.
I was assuming that you were unconcerned with the sadness/mourning of those around you, or were prepared to make that tradeoff for some reason. (For example, egoism, or perhaps lack of friends/relations, or extreme need for the money)
Eliezer Yudkowsky:
To be precise, the argument would run that the universe will end up being dominated by beings that care more about their measure, and so there is a categorical imperative for happier beings to care more about their measure.
I'm not following. If all sapient beings applied this reasoning, only the most happy would decide not to die, and the happiness density would increase.
Wrote this and hit reload, but Kaj beat me to it. I'm thinking most intelligences would kill themselves a lot in this scenario, leading to a very empty universe for any particular one of them. The relevant density is "super happy entity per cubic parsec", not "super happy entity per total entities". Consider, right now, if all members of some religion killed themselves unless their miracles started coming true. From the perspective of almost all the measure of non-members of the religion, it would look like a simple suicide cult. Or imagine the LHC really could create a black hole and destroy the earth. Everyone votes on a low probability positive event and we trigger the LHC if it doesn't happen. From the perspective of the measure of almost all the aliens in the universe (if they exist), our sun has a black hole orbiting at 93 million miles. If this sort of process was constantly happening among all intelligent species on all planets, we'd be in an empty universe (well, one with a lot of little black holes anyway). The probability of running into other intelligent life "post anthropic principle" would be their practically non-existent measure times our practically non-existent measure. Something I've actually wondered about is whether the first replicating molecule with the evolutionary potential to generate intelligent life was radically unlikely (requiring a feat of quantum chicanery), and that's why the universe appears empty to us. I don't know of anyone who published this first, but I assume someone beat me to it because it often seems to me that all thinkable thoughts have generally been generated by someone else decades or centuries ago :-P
Huh. That's the most interesting explanation for the Fermi paradox in a while. (Not exactly plausible, mind you, but an interesting idea nevertheless.)
I've read something like this here.
Huh. The Copenhagen interpretation of quantum mechanics isn't pretty, but I'm not ready to die for it.

Do you have any pointers on why you believe so firmly in an infinite universe? Reading books on physics (from mainstream authors like Stephen Hawking or Christian Magnan, or from less conventional books like Julian Barbour's The End of Time) I got the impression that the current consensus is that the universe is expanding but currently finite. There may be no limit to its size if, as it seems now, the expansion rate is growing - but right now it has a finite size.

And from a purely theoretical point of view, infinity doesn't seem very coherent to me. I... (read more)

Try this or this or this. Popular physics books are really bad about these things.

... But there's no sense crying over every mistake, you just keep on trying till you run out of negentropy.

May I suggest 'But there's no sense crying over every inaccuracy / you just keep on trying till you use up your negentropy'? Rhymes and balances the syllable count.
Brb, writing rationalist hymn. :P

I'm worried this is just an elaborate justification to not have as many children as possible. But I'm not convinced that I'm obligated to help all other 'beings', of any class or category, instead of merely not harming (most of) them.

I don't think "infinite space" is enough to have infinite copies of me. You'd also need infinite matter, no?


[putting aside "many worlds" for a moment]