This article is the script of the Rational Animations video linked above. It is based on William MacAskill's new book "What We Owe the Future". I've had the opportunity to read it in advance thanks to the Forethought Foundation, which reached out asking if we could make a video on the occasion of the book launch. I was happy to collaborate.

Here we focus on the question, "Can we make the long-run future go better?", which is at the heart of one of the three claims at the basis of longtermism:

1. Future people matter. 

2. There could be a lot of them. 

3. We can make their lives go better.

In this video, we also correct some claims of our previous longtermism video and continue laying out the consequences of our previous video about why we might be living in the most important century in history.

Crossposted to the EA Forum.

--------------------------------------------------------------------

In our previous video about longtermism, we said that humanity might have a vast future ahead, containing trillions upon trillions of lives and lasting trillions of years. We said that the vastness of our future implies an enormous ethical responsibility for present humans, because the actions we take today have the potential to impact countless future lives.

Currently, the most liked comment under that video raises an objection:

It seems to be impossible to predict if a certain event will have a negative or positive impact on the future.

The world is an extremely chaotic system, so the idea that we can discern the value of our actions a million years from now seems naive.

This is a reasonable objection. If we had no way of being confident our actions are on net good for the long-term future, then the longtermist philosophy would be moot.

In fact, the potential vastness of humanity’s future is not in itself enough to justify the central tenet of longtermism: that concern for the long-term future should be a key priority of our time.

In his new book “What We Owe the Future”, the philosopher William MacAskill identifies three fundamental claims as the core justifications for longtermism. First: future people matter. Second: there could be a lot of them. Third: we can make their lives go better.

The first claim concerns what we value: the long-term future of humanity is a key priority of our time only if we care about the beings who will inhabit it.

The second claim is that the future is potentially huge. If humanity lasts as long as the typical mammalian species — around one million years — then future generations will outnumber our generation by a thousand to one. And we could last a lot longer than that!

But the first two claims would be of no practical importance without the third: we can make future lives go better. This is the thesis that humans today have the power to positively influence the far future in a predictable way.

In his book, William MacAskill explores the three claims at length. In this video, we will focus on the third.

One reason to think that we can positively and predictably influence the far future comes from the past. There are examples of historical figures deliberately aiming to influence the long-term future in a particular way and, to varying degrees, succeeding.

William MacAskill identifies a few:

Shakespeare, in his sonnet “Shall I compare thee to a summer’s day?”, notes that through his poetry he can preserve the memory of a young man he admires through all eternity.

In the fifth century BC, Thucydides wrote “History of the Peloponnesian War” so that the generations to come could clearly understand what happened during that war. Thucydides wrote: “My work is not a piece of writing designed to meet the taste of an immediate public, but was done to last forever.”

A more recent example comes from the United States Founding Fathers. They were aware that the norms in the constitution, once set in place, would stick unchanged for a long time. John Adams, the second president of the United States, commented: “The institutions now made in America will not wholly wear out for thousands of years. It is of the last importance, then, that they should begin right. If they set out wrong, they will never be able to return, unless it be by accident, to the right path.”

Benjamin Franklin, founding father and celebrated polymath, was famous for his concern about the long-term health of the United States. In 1784 the French mathematician Charles-Joseph Mathon de la Cour wrote a friendly satire of Franklin suggesting that he should invest his money for 500 years, letting it accrue compound interest to then be used on social projects centuries later. Benjamin Franklin thanked the mathematician for the great idea and implemented it. In 1790 he invested £2,000, about $135,000 in today’s value: £1,000 for the city of Boston and £1,000 for Philadelphia. After one hundred years, three-quarters of the funds would be paid out, and the rest after 200 years. In 1990 the final funds were distributed. The donation had grown to $2.3 million for Philadelphia and $5 million for Boston.
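Franklin’s scheme is simply compound interest at work. As a rough sketch (the 4.5% annual return is an assumption for illustration, not the funds’ actual historical performance, and the real funds also made partial payouts):

```python
def compound(principal: float, rate: float, years: int) -> float:
    """Value of `principal` after `years` of annual compounding at `rate`."""
    return principal * (1 + rate) ** years

# Hypothetical illustration: £1,000 compounding at an assumed 4.5% per year.
# (The real Boston and Philadelphia funds had varying returns and payouts.)
after_first_century = compound(1_000, 0.045, 100)
print(f"After 100 years: ~£{after_first_century:,.0f}")
```

Even a modest rate multiplies the principal roughly eighty-fold over a century, which is why a small early investment can fund large projects much later.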

Another reason why we should expect to be able to influence the far future in a positive way is that we can already think of actions that will very likely have positive effects: actions and policies we expect to have a positive influence across a wide range of possible futures.

William MacAskill suggests decarbonisation, which means replacing fossil fuels with cleaner energy sources, as one such policy.

Decarbonisation is usually considered good for its positive shorter-term effects. Pollution increases the risk of chronic health conditions such as cancer, heart disease, and respiratory disease, resulting in 3.6 million premature deaths every year. Decarbonisation would therefore cause immediate health benefits for the population and ameliorate the longer-term effects of climate change.

But perhaps the biggest effects of decarbonisation are on the far future. By promoting the development of clean energy, it would speed up a good kind of technological progress and therefore help prevent technological or economic stagnation. Moreover, by keeping some fossil fuels in the ground, we make scenarios of unrecoverable collapse less likely. If humanity ever suffers a catastrophe that causes civilisational collapse and reduces its population to a small percentage of what it was before, it would be extremely useful to still have access to even a small amount of fossil fuels. This resource makes it more likely that humanity would be able to reindustrialize and rise again to its previous level of development.

Decarbonisation would therefore constitute a win on many fronts, and it's really hard to see in what future it might turn out to be bad.

If the proportion of futures in which an action turns out good vastly outweighs the proportion of futures in which it turns out bad, then we can be pretty sure that the action is good to take.

For William MacAskill, decarbonisation is a proof of concept of robust longtermist action and a baseline against which to evaluate other actions aimed at improving the long-term future.

There are other actions that may be at least as robustly positive, which are discussed at length in the book. For example, policies that would reduce the chance of a deadly pandemic and other existential risks that could wipe out humanity and annihilate its future.

Another powerful reason to think we can probably positively affect the far future is that we are particularly well-positioned to do so. If you have watched our last video, you are familiar with the arguments for why we are potentially living in the most critical century in human history. Our condition is changing dramatically, and economic growth could explode with the advent of transformative artificial intelligence, upholding a long-run historical trend of super-exponential growth.

An important insight is that the current rate of change, even without any acceleration, cannot continue. If the current growth of about 2 percent per year continued unabated for the next ten thousand years, the economy would be 10^86 times larger than it is today. That means we would produce ten million trillion times as much economic output as our current world produces for every atom that we could in principle access. This hardly seems possible.
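The 10^86 figure is easy to verify: the total growth factor is 1.02^10,000, whose base-10 logarithm is 10,000 × log10(1.02) ≈ 86. A quick check:

```python
import math

growth_rate = 0.02   # 2% annual economic growth
years = 10_000

# 1.02**10_000 overflows a float, so work with its base-10 logarithm instead.
exponent = years * math.log10(1 + growth_rate)
print(f"The economy would be ~10^{exponent:.0f} times larger than today")
```

Dividing 10^86 by the text’s “ten million trillion” (10^19) of output per atom implies roughly 10^67 accessible atoms, which is the scale of matter our descendants could in principle reach.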

That means that our current period of extreme growth is probably just a small page of humanity’s history, but an extraordinarily unusual and consequential one. The technologies we develop this century might spell our destruction or set the values and form of future humanity. Influencing technological development and society’s values affects the shape of humanity in the far future, and whether we’ll live to see that future in the first place.

Another point of uniqueness of our time, other than extremely fast growth, is our unprecedented interconnectedness, which is also poised to end. For over fifty thousand years, humanity was split into groups across the globe, without any way for people on different continents to communicate with each other. Today we can communicate almost instantly no matter how far apart we are. In the future, if we spread through the stars, communication between different star systems will take an extremely long time due to the speed-of-light limit: sending a single message would take years, at the very least. But most importantly, communication between causally disconnected groups of galaxies will become impossible. The Local Group, the group of galaxies containing the Milky Way and us, will become forever separated from the millions of other galaxy groups in the rest of the universe, because the speed of light won’t be enough to overcome the accelerating expansion of space. The parts of humanity that will have spread to other groups of galaxies will be cut off from one another, and humanity as a whole will become disconnected again.

Humanity’s temporary condition of connectedness means that small groups of people can influence the whole of it, a unique opportunity that might never arise again.

Together, the rapid pace of progress and our level of interconnectedness constitute an unstable state. They both amplify the magnitude of the effects of our actions. That means that humanity is temporarily out of equilibrium. Tiny changes we make today might influence the stable state we’ll reach after this unique period of transformation is over.

William MacAskill uses this metaphor: 
“Imagine a giant ball rolling rapidly over a rugged landscape. Over time it will lose momentum and slow, settling at the bottom of some valley or chasm. Civilisation is like this ball: while still in motion, a small push can affect in which direction we roll and where we come to rest”.

And this was the last of our arguments. To wrap up, our main arguments for why we should be able to positively influence the far future are:

1. There are precedents of historical figures aiming for long-term impact and, to varying degrees, succeeding.

2. Plausible robustly positive actions exist, such as decarbonisation.

3. We live in a unique historical time in which influencing the long-term future of humanity is unusually tractable.

If the insights we have presented inspire you to do your part in steering humanity in a good direction, then this video might have positively influenced the long-term future and achieved its goal. But you might still disagree with our arguments, so if you still aren’t convinced or even if you think you have something important to add, please leave a comment.

Comment by Shmi:

I find the arguments extremely unconvincing; they are very much cherry-picked. If you think for 5 minutes, you can find equally good examples of good intentions leading to unexpected disastrous consequences in the long or medium term. Give it a try. In addition, there is nothing to compare these "positive influence" actions against. They tend to be implicitly compared against a hypothetical counterfactual world where no action is taken, even though we have no way of knowing how such a world would develop.

Here are a couple of counter-examples where doing medium- and long-term good backfires, after 30 sec of thinking:

  • Colonization, an obvious long-term good for Europeans, ended up wiping out most of the American indigenous population.
  • Dissolving the Soviet Union led to several bloody wars in Europe.
  • Spreading the word of Jesus or Mohammad resulted in the extermination of millions of people over the millennia.

And one fictional but illuminating example is Asimov's The End of Eternity.

Basically, the claim that "We can make their lives go better" long-term holds no water. Your predictability horizon dies off pretty quickly with time.