3 Answers

Ruby


An argument sometimes given for colonizing space is as a measure against existential risk. Human settlements beyond Earth might offer some measure of redundancy and backup in the event of catastrophe on Earth.

Whether one finds this a strong argument for space colonization will depend on what one thinks the likely catastrophes on Earth might be and how well space colonization protects against them. Notably, many would consider space colonization (certainly "nearby" settlement) to offer at best limited protection from unsafe AI.

I do expect the nature of this argument to shift depending on the timescale. On very short timescales, in which humanity is at most aspiring to colonize Mars, constructing refuges on Earth might be a better investment. On longer timescales (those over which we might aspire to interstellar and intergalactic colonization), we might imagine human civilization has matured past any significant existential risk. If not, there could certainly be "safety" in sending some of our civilization out at speeds that place those refuges safely out of reach due to the expansion of the universe.

Ruby


The assumption that we can colonize the stars is core to the Astronomical Waste Argument made in favor of working on existential risk reduction. If this assumption is weakened, so is the case for prioritizing work on existential risk reduction.

Most things are impossible. Perhaps our belief that we could possibly colonize the stars is based only on our ignorance. If we actually tried to colonize the stars (or simply tried to seriously look into the possibility), we would find that we shouldn't take it for granted at all that space colonization is a realistic possibility.

Summary of the Astronomical Waste Argument

Nick Bostrom's 2003 paper, Astronomical Waste: The Opportunity Cost of Delayed Technological Development:

With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large.

Bostrom arrives at different estimates of the potential number of human minds depending on whether we are satisfied with running "human" minds on computers or wish to stick with biological instantiation.

Using digital instantiation:

As a rough approximation, let us say the Virgo Supercluster contains 10^13 stars. One estimate of the computing power extractable from a star and with an associated planet-sized computational structure, using advanced molecular nanotechnology, is 10^42 operations per second. A typical estimate of the human brain’s processing power is roughly 10^17 operations per second or less. Not much more seems to be needed to simulate the relevant parts of the environment in sufficient detail to enable the simulated minds to have experiences indistinguishable from typical current human experiences. Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
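Bostrom's figures here are simple order-of-magnitude arithmetic, and it can help to see them computed explicitly. A minimal sketch in Python, using only the constants from the quoted passage:

```python
# Reproduce Bostrom's digital-instantiation estimate (order of magnitude only).
STARS_IN_VIRGO = 10**13        # stars in the Virgo Supercluster
OPS_PER_STAR = 10**42          # ops/sec from one star + planet-sized computer
OPS_PER_MIND = 10**17          # ops/sec to run one human mind

SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600  # ~3.16e9 seconds

# Concurrent digital minds the supercluster could support: 10^38.
minds = STARS_IN_VIRGO * OPS_PER_STAR // OPS_PER_MIND

# Each century of delay forgoes 10^38 century-long lives, i.e. ~3e28 lives
# per second -- which Bostrom rounds to "about 10^29".
lives_per_second = minds / SECONDS_PER_CENTURY

print(f"{minds:.0e} concurrent minds")         # 1e+38
print(f"{lives_per_second:.0e} lives/second")  # 3e+28
```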

Using biological instantiation:

Suppose that about 10^10 biological humans could be sustained around an average star. Then the Virgo Supercluster could contain 10^23 biological humans. This corresponds to a loss of potential of over 10^13 potential human lives per second of delayed colonization.
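The biological figure follows from the same one-line arithmetic:

```python
# Reproduce Bostrom's biological-instantiation estimate the same way.
HUMANS_PER_STAR = 10**10
STARS_IN_VIRGO = 10**13
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600

biological_humans = HUMANS_PER_STAR * STARS_IN_VIRGO  # 10^23 humans

# 10^23 century-long lives per century of delay -> ~3e13 per second,
# i.e. "over 10^13" as quoted above.
print(f"{biological_humans / SECONDS_PER_CENTURY:.0e}")  # 3e+13
```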

Bostrom clarifies that not only utilitarians should care about this immense potential value which might be reached:

Utilitarians are not the only ones who should strongly oppose astronomical waste. There are many views about what has value that would concur with the assessment that the current rate of wastage constitutes an enormous loss of potential value. For example, we can take a thicker conception of human welfare than commonly supposed by utilitarians (whether of a hedonistic, experientialist, or desire-satisfactionist bent), such as a conception that locates value also in human flourishing, meaningful relationships, noble character, individual expression, aesthetic appreciation, and so forth. So long as the evaluation function is aggregative (does not count one person’s welfare for less just because there are many other persons in existence who also enjoy happy lives) and is not relativized to a particular point in time (no time-discounting), the conclusion will hold.
These conditions can be relaxed further. Even if the welfare function is not perfectly aggregative (perhaps because one component of the good is diversity, the marginal rate of production of which might decline with increasing population size), it can still yield a similar bottom line provided only that at least some significant component of the good is sufficiently aggregative. Similarly, some degree of time discounting future goods could be accommodated without changing the conclusion.

Clearly, the extent to which we can actually colonize star systems beyond our own affects how strong the argument from astronomical waste is (or, as I would rather call it, our astronomical potential). If we can in fact be confident of colonizing the entire reachable universe, that might be 10^17 stars instead of the 10^13 in just the Virgo Supercluster, an even stronger argument than Bostrom states. On the other hand, if we can't colonize beyond our own star system, we're at just 10^0 = 1 star, and there'd be no astronomical argument at all.
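Since both of Bostrom's estimates are linear in the number of stars, the sensitivity to reachability is easy to make explicit. A sketch, assuming the star counts above (10^13 for the Virgo Supercluster, 10^17 for the reachable universe):

```python
# How the digital estimate scales with the number of stars we can reach.
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600

def digital_lives_per_second(stars, ops_per_star=10**42, ops_per_mind=10**17):
    """Century-long digital lives foregone per second of delayed colonization."""
    return stars * ops_per_star / ops_per_mind / SECONDS_PER_CENTURY

for label, stars in [("Virgo Supercluster", 10**13),
                     ("reachable universe", 10**17)]:
    print(f"{label}: {digital_lives_per_second(stars):.0e} lives/second")
# Virgo Supercluster: 3e+28 lives/second
# reachable universe: 3e+32 lives/second
```

The factor of 10^4 between the two scenarios carries straight through to the bottom line, which is why the reachability question bears directly on the argument's strength.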

Addendum:

In an Open Philanthropy Project blog post, The Moral Value of the Far Future, Holden Karnofsky mentions Nick Bostrom's Astronomical Waste argument, saying that he does not consider it robust enough to play an overwhelming role in his beliefs and actions.

In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral
[...]

Ruby


The kinds of numbers thrown around in the astronomical waste argument are sometimes accused of being a Pascal's Mugging. Even if one has doubts about whether to work on existential risk reduction, it could be argued that, because the Far Future has such overwhelming and immense value, the expected value of working on existential risk outweighs all other opportunities, e.g. near-term altruistic projects like global poverty, global health, and animal welfare.
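To see the Pascal's Mugging flavor concretely, here is an illustrative expected-value comparison. The probability and the near-term figure are invented purely for illustration; only the 10^38 comes from Bostrom's estimate:

```python
# Illustrative only -- the probability and near-term figures are made up
# to show the structure of the argument, not to estimate anything real.
FUTURE_LIVES = 10**38          # Bostrom's digital-instantiation estimate
P_WORK_MATTERS = 10**-15       # hypothetical: minuscule chance our work helps
NEAR_TERM_LIVES = 10**6        # hypothetical near-term intervention

ev_far_future = P_WORK_MATTERS * FUTURE_LIVES  # 10^23 lives in expectation
ev_near_term = NEAR_TERM_LIVES                 # 10^6 lives

# Even an absurdly small probability, multiplied by 10^38, swamps the
# near-term option -- this dominance-by-huge-numbers is the step that
# draws the Pascal's Mugging accusation.
print(ev_far_future > ev_near_term)  # True
```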

Having sharper estimates of the potential of the Far Future, bounded by how much of the universe we can actually reach, could help us relate to astronomical waste arguments with far more principle than "aahhh, these are such big numbers!!"

They're big numbers, but not all numbers are equally big.