Why I am not a longtermist

[Apologies for yet another “philosophizing” blog post, hope to get back to posts with more equations soon… Should also mention that one response to this piece was that “anyone who writes a piece called “Why I am not a longtermist” is probably more of a longtermist than 90% of the population” 🙂 –Boaz]

“Longtermism” is a moral philosophy that places much more weight on the well-being of all future generations than on the current one. It holds that “positively influencing the long-term future is a key moral priority of our time,” where “long term” can be really long term, e.g., “many thousands of years in the future, or much further still.” At its core is the belief that each one of the potential quadrillion or more people who may exist in the future is as important as any single person today.

Longtermism has recently attracted attention, some of it in alarming tones. The reasoning behind longtermism is natural: if we assume that human society will continue to exist for at least a few millennia, many more people will be born in the future than are alive today. However, since predictions are famously hard to make, especially about the future, longtermism invariably gets wrapped up with probabilities. Once you do these calculations, preventing an infinitely bad outcome, even if it would only happen with tiny probability, will have infinite utility. Hence longtermism tends to focus on so-called “existential risk”: the risk that humanity will go through an extinction event, like the one suffered by the Neanderthals or the dinosaurs, or another type of irreversible humanity-wide calamity.

This post explains why I do not subscribe to this philosophy. Let me clarify that I am not saying that all longtermists are bad people. Many “longtermists” have given generously to improve people’s lives worldwide, particularly in developing countries. For example, none of the top charities of GiveWell (an organization associated with the effective altruism movement, of which many prominent longtermists are members) focus on hypothetical future risks. Instead, they all deal with current pressing issues, including malaria, childhood vaccinations, and extreme poverty. Overall, the effective altruism movement has done much to benefit currently living people. Some of its members have donated their kidneys to strangers: these are good people, morally better than me. It is hardly fair to fault people who are already contributing more than most others for caring about issues that I think are less significant.

[Figure: Benjamin Todd’s estimates of Effective Altruism resource allocations]

This post critiques the philosophy of longtermism rather than the particular actions or beliefs of “longtermists.” In particular, the following are often highly correlated with one another:

  1. Belief in the philosophy of longtermism.
  2. A belief that existential risk is not just a low-probability concern for the far-off future, but that there is a very significant chance of it happening in the near future (the next few decades or at most a century).
  3. A belief that the most significant existential risk could arise from artificial intelligence and that this is a real risk in the near future.

Here I focus on (1) and explain why I disagree with this philosophy. While I might quibble with the specific calculations behind (2) and (3), I fully agree with the need to think and act regarding near-term risks. Society tends to err on the side of being too myopic. We prepare too little even for risks that are not just predictable but are also predicted, including climate change, pandemics, nuclear conflict, and even software hacks. It is hard to motivate people to spend resources on safety when the successful outcome (a bad event not happening) is invisible. It is also true that over the last decades, humanity’s technological capacities have grown so much that, for the first time in history, we are capable of doing irreversible damage to our planet.

In addition to the above, I agree that we need to think carefully about the risks of any new technology, particularly one that, like artificial intelligence, can be very powerful but not fully understood.  Some AI risks are relevant to the shorter term: they are likely over the next decade or are already happening. There are several books on these challenges. None of my critiques apply to such issues. At some point, I might write a separate blog post about artificial intelligence and its short and long-term risks. 

My reasons for not personally being a “longtermist” are the following:

The probabilities are too small to reason about.

Physicists know that there is no point in writing a measurement to 3 significant digits if your measurement device has only one-digit accuracy. Our ability to reason about events that are decades or more into the future is severely limited. At best, we could estimate probabilities up to an order of magnitude, and even that may be optimistic. Thus, claims such as Nick Bostrom’s, that “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives,” make no sense to me. This is especially the case since these “probabilities” are Bayesian, i.e., correspond to degrees of belief. If, for example, you evaluate the existential-risk probability by aggregating the responses of 1000 experts, then what one of these experts had for breakfast is likely to have an impact larger than 0.001 percentage points (which, according to Bostrom, would correspond to much more than 10²⁰ human lives). To the extent we can quantify existential risks in the far future, we can only say something like “extremely likely,” “possible,” or “can’t be ruled out.” Assigning numbers to such qualitative assessments is an exercise in futility.
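To make concrete why such numbers strike me as meaningless, here is a back-of-the-envelope sketch of the arithmetic (in Python). The exchange rate is taken from the Bostrom quote above; the 1000-expert aggregation and the one-percentage-point “breakfast shift” are illustrative assumptions of mine, not anyone’s actual estimates.

    # Bostrom's exchange rate: reducing existential risk by 1e-18 percentage
    # points (a 1e-20 change in probability) is valued at
    # 100 billion * 1 billion = 1e20 human lives.
    risk_reduction = 1e-18 / 100                  # as a probability: 1e-20
    lives_equivalent = 100e9 * 1e9                # 1e20 lives
    lives_per_unit_probability = lives_equivalent / risk_reduction   # 1e40

    # Hypothetical aggregation: average the estimates of 1000 experts.
    # If a bad breakfast shifts one expert's answer by a single percentage
    # point, the average shifts by 0.001 percentage points, i.e., 1e-5.
    breakfast_shift = 0.01 / 1000                 # 1e-5 in probability

    # At the exchange rate above, that noise alone is "worth" about 1e35
    # lives, vastly more than the 1e20 lives in the original quote.
    print(f"{breakfast_shift * lives_per_unit_probability:.1e}")      # 1.0e+35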

I cannot justify sacrificing currently living humans for abstract probabilities.

Related to the above, rather than focusing on specific, measurable risks (e.g., earthquakes, climate change), longtermism is often concerned with extremely hard-to-quantify risks. In truth, we cannot know what will happen 100 years into the future, or what the impact of any particular technology will be. Even if our actions will have drastic consequences for future generations, the dependence of that impact on our choices is likely to be chaotic and unpredictable. To put things in perspective, many of the risks we are worried about today, including nuclear war, climate change, and AI safety, only emerged in the last century or even the last few decades. It is hard to overstate how limited our ability is to predict even a decade into the future, let alone a century or more.

Given that there is so much suffering and need in the world right now, I cannot accept a philosophy that prioritizes abstract armchair calculations over actual living humans. (This concern is not entirely hypothetical: Greaves and MacAskill estimate that $100 spent on AI safety would, in expectation, correspond to saving a trillion lives, and hence would be “far more than the near-future benefits of bednet distribution [for preventing malaria],” and recommend that individuals “fund AI safety rather than developing world poverty reduction.”)
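For concreteness, the generic expected-value arithmetic behind such claims looks roughly as follows (a minimal sketch; the numbers are purely illustrative placeholders of my own, not Greaves and MacAskill’s actual figures):

    # Generic longtermist expected-value arithmetic, with made-up numbers.
    future_lives = 1e24                  # hypothetical future population at stake
    risk_reduction_per_dollar = 1e-14    # hypothetical drop in extinction
                                         # probability bought by one dollar
    donation = 100.0

    expected_lives = donation * risk_reduction_per_dollar * future_lives
    print(f"{expected_lives:.0e} lives saved in expectation")   # 1e+12, a trillion

The structure of the argument is that almost any tiny, unmeasurable value plugged into risk_reduction_per_dollar, multiplied by an astronomically large future population, will swamp any concrete near-term benefit such as bednet distribution; my objection is precisely that neither factor can be meaningfully estimated.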

Moorhouse compares future humans to ones living far away from us. He says that just as “something happening far away from us in space isn’t less intrinsically bad just because it’s far away,” we should care about humans in the far-off future as much as we care about present ones. But I think that we should care less about very far-away events, especially if they are so far away that we cannot observe them. For example, as far as we know, there may well be trillions of sentient beings in the universe right now whose welfare could somehow be impacted by our actions; yet it would be strange to let such unobservable beings dominate our moral priorities over the people in front of us.

We cannot improve what we cannot measure.

An inherent disadvantage of low-probability events is that they are invisible until they occur. We have no direct way to measure whether the probability of an event X has increased or decreased, so we cannot tell whether our efforts are working or not. The scientific revolution involved moving from armchair philosophizing to making measurable predictions, and I do not believe we can make meaningful progress without concrete goals. For some risks, we do have quantifiable goals (carbon emissions, number of nuclear warheads). Still, there are significant challenges to finding a measurable proxy for very low-probability and far-off events. Hence, even if we accept that the risks are real and important, I do not think we can directly do anything about them before finding such proxies.

Proxies do not have to be perfect: Theoretical computer science made much progress using the imperfect measure of worst-case asymptotic complexity. The same holds for machine learning and artificial benchmarks. It is enough that proxies encourage the generation of new ideas or technologies and achieve gradual improvement. One lesson from modern machine learning is that the objective (aka loss function) doesn’t have to perfectly match the task for it to be useful.
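As a small illustration of that last point, here is a minimal sketch (using scikit-learn; the dataset and model choices are arbitrary assumptions of mine) in which the quantity being optimized is the log-loss proxy, while the quantity we actually care about, accuracy, is only measured:

    # Minimal sketch: optimize a proxy objective (log-loss), measure the
    # actual goal (accuracy). Dataset and model choices are arbitrary.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, log_loss
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Logistic regression minimizes the differentiable cross-entropy
    # (log-loss) surrogate, not accuracy, which is what we ultimately want.
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    print("proxy objective (log-loss):",
          log_loss(y_test, model.predict_proba(X_test)))
    print("actual goal (accuracy):   ",
          accuracy_score(y_test, model.predict(X_test)))

The proxy is imperfect, but optimizing it still yields steady, verifiable improvement on the task we care about, which is exactly the role I think measurable proxies should play for long-term risks.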

Long-term risk mitigation can only succeed through short-term progress.

Related to the above, I believe that addressing long-term risks can only be successful if it’s tied to shorter-term advances that have clear utility. For example, consider the following two extinction scenarios:

1. The actual Neanderthal extinction.
2. A potential human extinction 50 years from now due to total nuclear war.

I argue that in both cases the only realistic way to avoid extinction is a sequence of actions that improve some measurable outcome. While extinction could sometimes theoretically be avoided by a society making a huge sacrifice to eliminate a hypothetical scenario, this could never actually happen.

While the reasons for the Neanderthal extinction are not fully known, most researchers believe that Neanderthals were out-competed by our ancestors, modern humans, who had better tools and ways to organize society. The crucial point is that the approaches that could have prevented the Neanderthals’ extinction were the same ones that would have improved their lives in their environment at the time. They may not have been capable of doing so, but it wasn’t because they were working on the wrong problems.

Contrast this with the scenario of human extinction through total nuclear war. In such a case, our conventional approaches for keeping nuclear arms in check, such as international treaties and sanctions, would have failed. Perhaps in hindsight, humanity’s optimal course of action would have been a permanent extension of the Middle Ages, stopping the scientific revolution from happening through restricted education, religious oppression, and the vigorous burning of scientists at the stake. Or perhaps humanity could even now make a collective decision to go back and delete all traces of post-17th-century science and technology.

I cannot rule out the possibility that, in hindsight, one of those outcomes would have had more aggregate utility than our current trajectory. But even if that is the case, such an outcome is simply not possible. Humanity cannot and will not halt its progress, and solutions to significant long-term problems have to arise as a sequence of solutions to shorter-range, measurable ones, each of which shows positive progress. Our only hope of avoiding a total nuclear war is through piecemeal, quantifiable progress. We need to use diplomacy, international cooperation, and monitoring technologies to reduce the world’s nuclear arsenal one warhead at a time. This piecemeal, incremental approach may or may not work, but it’s the only one we have.

Summary: think of the long term, but act and measure in the short term.

It is appropriate for philosophers to speculate on hypothetical scenarios centuries into the future and wonder whether actions we take today could influence them. However, I do not believe such an approach will, in practice, lead to a positive impact on humanity, and, if taken to the extreme, it may even have negative repercussions. We should maintain epistemic humility. Statements about probabilities involving fractions of percentage points, or human lives in the trillions, should raise alarm bells. Such calculations can be particularly problematic since they can lead to a “the end justifies the means” attitude, one that accepts any harm to currently living people in the name of the practically infinite multitudes of hypothetical future beings.

We need to maintain the invariant that, even if motivated by the far-off future, our actions “first do no harm” to living, breathing humans. Indeed, as I mentioned, even longtermists don’t wake up every morning thinking about how to reduce by 0.001% the chance that something terrible happens in the year 1,000,000 AD. Instead, many longtermists care about particular risks because they believe these risks are likely in the near-term future. If you manage to make a convincing case that humanity faces a real chance of near-term total destruction, then most people would agree that this is very, very bad, and that we should act to prevent it. It doesn’t matter whether humanity’s extinction is two times or a zillion times worse than the death of half the world’s population. Talking about trillions of hypothetical beings thousands of years into the future only turns people off. There is a reason that Pascal’s Wager is not such a winning argument, and I have yet to meet someone who converted to a particular religion because it had the grisliest version of hell.

This does not mean that thinking about and preparing for longer-term risks is pointless. Maintaining seed banks, monitoring asteroids, researching pathogens, designing vaccine platforms, and working toward nuclear disarmament are all essential activities that society should undertake. Whenever a new technology emerges, artificial intelligence included, it is crucial to consider how it can be misused or lead to unintended consequences. By no means do I argue that humanity should spend all of its resources only on actions that have a direct economic benefit. Indeed, the whole enterprise of basic science is built on pursuing directions that, in the short term, increase our knowledge but do not have practical utility. Progress is not measured only in dollars, but it should be measured somehow. Epistemic humility also means that we should be content with working on direct, measurable proxies, even if they are not perfect matches for the risk at hand. For example, the probability of extinction via total nuclear war might not be a direct function of the number of deployed nuclear warheads. However, the latter is still a pretty good proxy for it.

Similarly, even if you are genuinely worried about long-term risk, I suggest you spend most of your time in the present. Try to think of short-term problems whose solutions can be verified, which might advance the long-term goal. A “problem” does not have to be practical: it can be a mathematical question, a computational challenge, or an empirically verifiable prediction. The advantage is that even if the long-term risk stays hypothetical or the short-term problem turns out to be irrelevant to it, you have still made measurable progress. As has happened before, to make actual progress on solving existential risk, the topic needs to move from philosophy books and blog discussions into empirical experiments and concrete measures.

Acknowledgments: Thanks to Scott Aaronson and Ben Edelman for commenting on an earlier version of this post.

6 thoughts on “Why I am not a longtermist”

  1. I appreciate that more academics are engaging in an important philosophical debate. Brief thoughts:
    – I found it interesting that you are willing to bite the bullet and say that you do think further-away humans should be discounted, without explicitly rejecting the analogy between physically and temporally far-away humans. Does this mean you think that if people were far enough away in space (e.g., on Jupiter), we here on Earth should care less about their lives?
    – Related: you say you can’t justify sacrificing living humans for abstract probabilities, but you don’t explain why longtermism means sacrificing living humans. It also seems like we are implicitly making tradeoffs like this all the time, e.g., with speed limits on highways. If we take your statement seriously, then maybe we should ban all cars on highways, because otherwise we are sacrificing human lives for abstract probabilities, right?
    – As for the third point, “we cannot improve what we cannot measure”: I get and appreciate the concern with worrying about something that is very difficult to measure (it’s easy to fool ourselves), but I consider this a very weak critique of longtermism, because there are obviously many things we improve without measuring (the success of a marriage, the happiness of employees, etc.).

    I think there are other critiques to be made for longtermism, but I personally didn’t find the argument above very convincing for the aforementioned reasons.
