This is a linkpost for the EA Forum cross-post.

This article is the script of the YouTube video linked above: an animated introduction to the idea of longtermism. The video briefly covers "The Case for Strong Longtermism" by Hilary Greaves and William MacAskill, then goes over Nick Bostrom's "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". If you would like a refresher on longtermism, you can read this script, and if you are curious about the animations, head to the video. This time the narrator is not me, but Robert Miles.

Consider this: scientific progress and the collective well-being of humanity have been on an upward trajectory for centuries, and if it continues, then humanity has a potentially vast future ahead in which we might inhabit countless star systems and create trillions upon trillions of worthwhile lives.

This is an observation that has profound ethical implications, because the actions we take today have the potential to impact our vast future, and therefore influence an astronomically large number of future lives.

In their paper “The Case for Strong Longtermism”, Hilary Greaves and William MacAskill define strong longtermism as the thesis that, simplifying a little: “in a wide class of decision situations, the best action to take is the one that has the most positive effects on the far future”. It’s easy to guess why in light of what I just said: it is a consequence of the fact that the far future contains an astronomically large number of lives.

The question is: what are the actions that have the most positive effect on the far future? There are a few positions one could take, and if you want a deep dive into all the details I suggest reading the paper I mentioned [note: see section 3 of “The Case for Strong Longtermism”].

In this video, I will consider two main ways in which we could most positively affect the far future. They have been brought forward by Nick Bostrom in his paper “Astronomical Waste: The Opportunity Cost of Delayed Technological Development”.

Bostrom writes: “suppose that 10 billion humans could be sustained around an average star. The Virgo supercluster could contain 10^23 humans”.

For reference, 10^23 is 10 billion humans multiplied by 10 trillion stars! And remember that one trillion is one thousand times a billion.
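As a quick sanity check on the arithmetic, the multiplication above works out; the per-star and star-count figures are Bostrom's assumptions, not independent estimates:

```python
# Bostrom's assumptions: 10 billion humans sustained per star,
# roughly 10 trillion stars in the Virgo Supercluster.
humans_per_star = 10**10          # 10 billion
stars_in_supercluster = 10**13    # 10 trillion
total_humans = humans_per_star * stars_in_supercluster
print(total_humans == 10**23)     # True
```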

In addition, you also need to consider that future humans would lead vastly better lives than today’s humans, due to the enormous technological progress humanity would have made by then.

All things considered, the assumptions made are somewhat conservative: estimating 10 billion humans per star is pretty low, considering that Earth alone already hosts almost 8 billion humans, and who knows what may be possible with future technology?

You might be skeptical that humanity has the potential to reach this level of expansion, but that’s a topic for another video. And the fact that this scenario is not completely guaranteed doesn’t harm the argument (we will come back to this later).

Anyway, considering the vast amount of human life that the far future may contain, it follows that delaying technological progress has an enormous opportunity cost. Bostrom estimates that just one second of delayed colonization equals roughly 100 trillion human lives lost. Therefore, taking action today to accelerate humanity’s expansion into the universe yields an impact of about 100 trillion human lives saved for every second that colonization is brought closer to the present.
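One back-of-the-envelope way to recover the order of magnitude of that figure, under the extra assumption (not stated in the script) that each life lasts roughly a century, so a steady-state population of 10^23 turns over about once per 100 years:

```python
# Rough check of the "~100 trillion lives per second of delay" figure.
# Assumption (mine, for illustration): ~100-year lifespans, so the
# sustained population of 10**23 is replaced once per century.
population = 10**23
seconds_per_century = 100 * 365.25 * 24 * 3600   # ~3.16e9 seconds
lives_per_second = population / seconds_per_century
print(f"{lives_per_second:.1e}")   # ~3.2e+13, i.e. tens of trillions per second
```

This lands within a factor of a few of Bostrom's 100 trillion, which is all an order-of-magnitude argument needs.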

You can picture the argument like this. Here’s a graph with time on the horizontal axis and value (human lives, for example) on the vertical axis. We assume progress to be positive over time. If we accelerate key technological discoveries early on, we shift the whole curve to the left. See the area between the two curves? That is all the value we save.
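This picture can be sketched numerically. The model below is a toy illustration of mine, not anything from the paper: value is assumed to grow linearly, and the horizon and shift are arbitrary numbers:

```python
# Toy model of "shift the curve left, gain the area between the curves".
def value(t, rate=1.0):
    return rate * t   # assumption: value grows linearly with time

horizon = 1000   # years considered (arbitrary)
d = 10           # years by which progress is accelerated (arbitrary)
dt = 0.1
ts = [i * dt for i in range(int(horizon / dt))]
# Area between the shifted curve value(t + d) and the original value(t):
area = sum((value(t + d) - value(t)) * dt for t in ts)
print(area)      # ~ d * horizon = 10,000 value-years for a linear curve
```

For a linear curve the gain is simply the shift times the horizon, which is why even a small acceleration sustained over a long future adds up.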

But don’t get the idea that this is even close to the most impactful thing we could do today.

Consider this: what if humanity suffers a catastrophe so large that it either wipes us out or curtails our potential forever? This would annihilate all of humanity’s future value in one go, which potentially means billions of years of lost value. Since the impact of accelerating scientific progress is realistically measured in mere years or decades of acceleration, from an impact perspective, reducing the risk of an existential catastrophe trumps hastening technological development by many orders of magnitude. Bostrom estimates that a single percentage point of existential risk reduction yields the same expected value as advancing technological development by 10 million years.

Now, I mentioned “expected value”. What is that? It’s an idea at the heart of decision theory and game theory, and here we’re using it to convert risk reduction into impact. It basically works like this: you multiply an impact by its probability of occurring in order to estimate the impact in the average case. For example, if there is a 16% probability of human extinction, as estimated by Toby Ord, then to get a number for how much value we are losing in expectation, we multiply 0.16 by the number of future human lives. Note that when measuring the impact of accelerating technological progress, or even of distributing malaria nets in Africa, we must also calculate the expected value of our impact: we can’t use raw impact without correcting it with a probability, because no intervention has a 100% probability of working.
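The calculation described above is a one-liner; the 16% figure is Ord's estimate as cited in the text, and the 10^23 lives figure is Bostrom's from earlier:

```python
# Expected value: impact multiplied by its probability of occurring.
p_extinction = 0.16       # Toby Ord's estimate, as cited in the text
future_lives = 10**23     # Bostrom's Virgo Supercluster figure
expected_loss = p_extinction * future_lives
print(f"{expected_loss:.2e}")   # 1.60e+22 lives lost in expectation
```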

In his paper, Bostrom also makes the case that trying to prevent existential risk is probably the most impactful thing that someone who cares about future human lives (and lives in general) could do. But there is another ethical point of view to take into consideration, called the “person-affecting view”. If you don’t care about giving life to future humans who wouldn’t have existed otherwise, but only about present humans and humans who will come to exist anyway, then preventing existential risk and advancing technological progress have a similar impact. Here is the reasoning:

All the impact would derive from increasing the chance that current humans get to experience extremely long and fulfilling lives as a result of hastening transformative technological development, potentially enabling virtually unlimited lifespans and enormously improving the quality of life for many people alive today or in the relatively near future.

If we increase this chance, either by reducing existential risk or by hastening technological progress, our impact will be more or less the same, and someone holding a person-affecting view ought to balance the two.

4 comments

Bostrom estimates that just one second of delayed colonization equals roughly 100 trillion human lives lost. Therefore, taking action today to accelerate humanity’s expansion into the universe yields an impact of about 100 trillion human lives saved for every second that colonization is brought closer to the present.

I don't much care for this rhetorically sneaky way of smudging the way we feel the import of "lives lost" and "lives saved" so as to try to make it also cover "lives that never happen" or "lives that might potentially happen." There's an Every Sperm is Sacred silliness at work here. Do you mourn the millions of lives lost to vasectomy?

I kind of have similar feelings. I'd need an answer to the Mere Addition Paradox (the repugnant conclusion) before I could compare these. I do find the conclusion repugnant, so I must take issue with the premises somehow. My current inclination is to reject the first step: the idea that a universe with more lives worth living is better than one with fewer, but I'm not especially confident that I've entirely resolved it that way.

"Living in Many Worlds" has really influenced my thinking about future population sizes. It's more important to me that quality of life is high than that we maximize lives barely worth living. That could also be taken to extremes: why not have a population of one? But I think there are good reasons not to take it that far.

Well, there was some love for the person-affecting view at the end of the video. Note that someone who subscribes to the totalist view might mourn not only every sperm but every potential worthwhile mind.


The view of most people, arguably a rational one, is that unless an event has a nonzero chance of being something we can personally experience, it doesn't matter.

This is likely the reason most major civilizations have adopted religion. Most religions contain some promised form of accounting for our actions.

Moreover, this is why this community wouldn't exist if it were not for cryonics and AI, potential developments that have a nonzero chance of allowing at least some of us to personally see this future. If there is no hope, there can be no progress.