Lenses
Techno-optimism is the belief that the advancement of technology is generally good and has historically made society better. Techno-pessimism is the opposite belief, that technology has generally made the world worse. Both are lenses, or general ways of looking at the world that bring some aspects of reality into focus while obscuring others. Judging which narrative is correct "on balance" is less useful than understanding what each has to offer.
Our World in Data is one of many sources making the case for techno-optimism. Development in Progress makes a (balanced) case for techno-pessimism. This post is an attempt to steelman techno-pessimism.
Boundaries
One can question techno-pessimism in terms of its point of reference. Does its critique of "modernity" only apply to industrial tools, or could it extend all the way back to the plough, or even fire? I see this as a mirror of the challenge faced by those who are optimistic about technology in general but pessimistic about AI.
For the pessimist, the harms of technology are structural and continuous, not the result of some particular technology being exceptional. But this continuity need not be total. I assume that most pessimists draw a cutoff somewhere. For me, the most natural boundary is the agricultural revolution, since that is when the most powerful mechanisms of unintended consequences seemed to begin taking on a life of their own. Others might place it earlier or later.
Three Pillars of Techno-Pessimism
Techno-pessimism rests on three core arguments, each of which is sufficient to make the overall narrative's case if accepted fully. They can also combine if each is accepted only partially.
1. Moral Atrocity
Modern humans live at the expense of domestic animals tortured in factory farms, wild animals driven to extinction, and indigenous cultures genocided out of existence. The harms from these mass killings outweigh any benefits to the "winners."
2. Self Termination
Quite a few extinction-level (or at least catastrophe-level) threats have emerged in the last 100 years, including nuclear war, global warming, biodiversity loss, and runaway AI. The time since the industrial (or even agricultural) revolution is a historical eyeblink in the context of ~300,000 years of Homo sapiens' existence, so the timing of these threats is not a coincidence. The elevated risk of self-termination negates any temporary benefits of humanity's present conditions.
3. Subjective Wellbeing
It's not even clear that present-day humans are better off than our distant ancestors. Yes, there are quite a few metrics on which things have improved. And yes, if one adds up all of the obvious things to measure, the balance looks positive. But how do we know the metrics aren't cherry-picked? Or perhaps the selection process is biased because the positives are for some reason systematically easier to notice than the negatives? The most meaningful measures must be holistic, and the best available data for such a holistic assessment are subjective measures of wellbeing. The most obvious of these include rates of depression and suicide. It's hard to get data on this from pre-modern and especially pre-civilizational times, but I would be surprised if these are massively down in the modern era. Put simply, ask yourself: how much happier and more satisfied with life are you than a pre-colonial Native American or a modern Pirahã? One can object that diminished (or merely non-elevated) subjective wellbeing is irrational, but this is changing the subject to the cause of the problem rather than its existence.
Is it Really Different this Time?
One can object to each of the pillars by arguing that "the only way out is through," or that future technology will solve the problems resulting from past technology. Genetic engineering could bring back extinct species. Synthetic meat could replace factory farms. Nuclear power could replace fossil fuels. But there are reasons to be skeptical.
First, no one set out to commit moral atrocity or diminish subjective wellbeing, and certainly no one set out to trigger self-termination. These are all unintended consequences of people pursuing other goals, so why should we expect by default that new "solutions" won't have unintended consequences of their own?
Second, technological improvements don't displace harmful practices when market dynamics absorb the gains while leaving externalized costs intact. For example, the argument that nuclear energy will replace fossil fuels assumes that these two energy sources are substitutes for each other. But if one instead assumes that societies find ways to use as much energy as they can get, then one should expect the two sources to add to each other, with the environment suffering the full consequences of both. The latter assumption is supported by the Jevons paradox, in which gains in efficiency make new industries profitable, increasing energy demand by more than the efficiency gains save.
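As a minimal illustration of how much hangs on the substitution-versus-addition assumption, here is a toy sketch with made-up numbers (not drawn from any dataset):

```python
# Toy comparison of the substitution vs. addition assumptions about a new
# clean energy source. All numbers are made up purely for illustration.

fossil_supply = 100.0   # arbitrary units of energy currently supplied by fossil fuels
nuclear_supply = 60.0   # new clean capacity coming online

# Substitution assumption: the clean source displaces fossil fuels one-for-one.
fossil_if_substitute = max(fossil_supply - nuclear_supply, 0.0)

# Addition assumption: society absorbs all the energy it can get, so fossil use
# stays where it was and total consumption (and total impact) grows.
fossil_if_additive = fossil_supply
total_if_additive = fossil_supply + nuclear_supply

print(f"Substitution: fossil use falls to {fossil_if_substitute} units")
print(f"Addition:     fossil use stays at {fossil_if_additive} units, "
      f"total consumption grows to {total_if_additive} units")
```

Under the substitution assumption the new source is an environmental win; under the addition assumption it simply enlarges the total footprint.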
Population Ethics
One can challenge each of the pillars on the basis of totalist population ethics. In my back-of-the-envelope calculation, the utilitarian benefit of human population growth since pre-agricultural times outweighs the cost of both wild animal population reduction and domestic animal suffering, given defensible assumptions about the relative value of human vs. animal life.[1] I haven't run the math, but I can imagine self-termination working out similarly, given that humanity (and life on Earth generally) would eventually die off in the absence of technology (when the Sun dies, at the very latest), so a giant spike in utility could potentially compensate for an early cutoff, especially given the possibility of space colonization. More people being alive today could also potentially compensate for subjective wellbeing going down, as long as the result isn't negative. When one combines all three of these, I expect the math to get assumption-dependent and uncertain, but to hold up under conservative estimates.
Leaning on population ethics is a legitimate move, but it’s also a highly unintuitive one, and should be made explicit. From other moral systems, the story looks different. A deontologist might argue that killing other populations to expand one’s own is seriously not cool, math be damned. A virtue ethicist might see species and cultural loss as a failure of stewardship and a sacrifice of our moral integrity.
Mechanisms of Techno-Pessimism
Population ethics aside, one reason that techno-pessimism can seem implausible is the difficulty of seeing a viable mechanism. Moral atrocity and self-termination don't require much explanation, since the potential causes are relatively obvious: immorality, shortsightedness, and coalition politics. Whether you find these pillars persuasive depends largely on your moral values and your forecasts.
The third pillar requires more unpacking. Optimists can point to clear, local improvements, so for techno-pessimism to make sense, something else must be worsening enough to offset those improvements. These offsets may fully counter the gains or simply make them appear more modest, depending on how strongly one weighs the third pillar.
But why would people collectively choose to make their lives worse? Techno-pessimists don’t actually need to answer this question to justify their worldview, since it’s possible to observe a pattern without knowing its cause. Considering potential mechanisms is still worthwhile, however, because identifying causes is the first step toward finding solutions.
Self-Fulfilling Prophecy
On this view, issues of moral atrocity and self-termination are overblown, and subjective wellbeing would be fine if it weren't for the alarmist narratives fueling misguided policy and public anxiety. Implied solution: dispute the techno-pessimist lens to interrupt its self-fulfilling nature. Stop playing the victim and be grateful for what you have!
Externalized Cost
The benefits of technologies tend to be direct (clearly measurable and accruing to the people using the tech), whereas the downsides are often indirect and externalized. Because the former are easier to see and to incentivize, people can take actions that cause more harm than good globally while causing more good than harm locally. Implied solution: internalize the externalized costs.
Adapt or Die
Tech that is adaptive becomes obligate. Once a technology exists that provides a relative benefit to the people who choose to use it, anyone who doesn't use it is at a competitive disadvantage, with the end result that everyone has no choice but to use it even if the resulting equilibrium makes everyone worse off. Implied solution: coordinate to prevent anyone from benefitting from anti-social actions, under threat of majority punishment.
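A minimal two-player sketch of this dynamic, with hypothetical payoffs chosen purely to illustrate the structure, might look like this:

```python
# Minimal two-player adoption game with hypothetical payoffs.
# Key: (my_choice, their_choice) -> my payoff.
payoff = {
    ("abstain", "abstain"): 3,  # pre-tech equilibrium: both do fine
    ("adopt",   "abstain"): 4,  # lone adopter gains a relative edge
    ("abstain", "adopt"):   0,  # lone holdout is outcompeted
    ("adopt",   "adopt"):   1,  # obligate use: worse than where everyone started
}

for theirs in ("abstain", "adopt"):
    best = max(("abstain", "adopt"), key=lambda mine: payoff[(mine, theirs)])
    print(f"If the other player chooses '{theirs}', my best response is '{best}'")

# Both best responses are "adopt", so mutual adoption is the only stable outcome,
# even though mutual abstention would leave both players better off (3 > 1).
```

The numbers are arbitrary; the point is that adoption can dominate individually while the adoption equilibrium is collectively worse than the starting point.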
Unintended Consequences
Technology may be designed with a specific use case in mind, but its effect is to make certain types of action easier, which often facilitates a whole range of other use cases. All of these uses in aggregate shift more meta things like the societal equilibrium and people's experience of the world, all of which has ripple effects that, among other things, influence the trajectory of which types of tech are built next, creating all kinds of unpredictable feedback loops. Whether one expects the overall result of such feedback loops to be good or bad depends on one's beliefs regarding techno-optimism/pessimism. Implied solution: be more cautious about what you create, using tools like system dynamics to at least try to approximate second-order effects.
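As a minimal sketch of the kind of feedback structure this describes, here is a toy simulation with arbitrary coefficients (it does not model any real technology):

```python
# Toy reinforcing loop: a technology makes some action cheaper, usage grows,
# usage funds further development, and each use leaks some cost onto third parties.
# Coefficients are arbitrary; only the feedback structure is the point.

capability = 1.0    # how easy the tech makes the action
usage = 1.0         # how much the action is performed
externalities = 0.0 # accumulated costs borne by third parties

for year in range(10):
    usage *= 1 + 0.3 * capability      # easier action -> more use (first-order effect)
    capability += 0.1 * usage          # more use -> more investment -> more capability
    externalities += 0.05 * usage      # each use leaks some cost (second-order effect)
    print(f"year {year}: capability={capability:.1f}, "
          f"usage={usage:.1f}, externalities={externalities:.1f}")
```

Even this crude loop grows nonlinearly, which is why second-order effects are so easy to underestimate at design time.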
Out of Distribution Effects
Technologies have shifted the societal equilibrium of the world in a way that tends to take us further from the conditions of our ancestral environment. Agriculture, for example, led to societies with populations far exceeding Dunbar's number, which then required people to consciously design government structures. Moving out of distribution like this resulted in a series of nonintuitive challenges, in turn leading to countless "dumb" mistakes and local minima, in the form of fragile and exploitative political systems. Implied solution: treat complexity as having a cost. Design future technologies with an aim towards making daily life and larger systems more intuitive to navigate. Consider also (incrementally) eliminating systems that introduce a lot of complexity for relatively small gains.
Solutions
A major objection to techno-pessimism takes the form: "OK, so what if things are getting worse? What do you want, to go back to living in caves?!" This is what I call buffer reasoning, or refusal to engage with a question out of dislike for an assumed solution. But it is entirely consistent to recognize a problem while rejecting the most obvious solutions. Going back to pre-agricultural ways of living, for example, is obviously untenable for the simple reason that the world population is far larger than can be supported by pre-modern production methods. Such a transition, if it occurred rapidly, would involve mass death.
Real solutions require deep, comprehensive understanding of the relevant problems and often involve trade-offs. As can be seen from the Mechanisms section above, each diagnosis comes with a different implied solution. Most of these require some form of restraint on development, which has a cost. This is why it is worth being deliberate about how we balance the techno-optimist and pessimist lenses: our assumptions about the overall balance of harms and benefits anchor our sense of which trade-offs are worthwhile.
Relevance to AI Safety
Narratives about technology inform default assumptions about new technologies, which in turn inform policy beliefs. For example, given a techno-optimist narrative, believing that governments should pause AI development requires accepting the seemingly extraordinary claim that this particular technology is exceptional. Alternatively, one can argue that AI is better framed as a new form of intelligence than as a new form of technology (and also that the former is dangerous). These are by no means insurmountable hurdles, but their presence starts AI safety advocates off at a disadvantage in conversations about policy. In contrast, if one holds a more techno-pessimistic worldview, then AI being dangerous is a natural and default expectation. This is not to say that one should choose a narrative based on whether it outputs the political conclusions one likes, only that narratives are worth noticing. The lens you choose shapes the futures you see, and the paths you take to realize them.
[1] The linked spreadsheet is a back-of-the-envelope calculation for the change in the value of life since 10,000 BCE (pre-agriculture). For humans, I start by multiplying population by average life expectancy to get a year-adjusted population. I set the life expectancy of early humans to 30 to include infant mortality. One could defensibly ignore this factor and set life expectancy to 55, but this has a negligible impact on the overall calculation. Next, I multiply the year-adjusted population by quality of life (qol) to get a qol-adjusted population. I subtract the 10,000 BCE result from the modern result and then multiply that difference by the moral value of a human to get the change in the total value of humanity.
I set both the moral value of a human and the average quality of life for humans in 10,000 BCE to 1, because these are reference values that anchor the other judgements. If one believes that quality of life has doubled in modern times (ignoring life expectancy increases, since those are already accounted for), then modern qol would be 2. If one believes that a wild animal has one hundredth the moral value of a human, then the moral value fields for animals should be set to 0.01. Numbers in bold are meant to be changed by the reader based on their beliefs.
I make a similar calculation for wild animals, domestic animals, and fish. These could have been lumped into one group, but I wanted to distinguish between animals whose populations have been reduced by habitat destruction (wild animals and fish) but who otherwise live as they used to, and animals who have been brought into an existence that involves great suffering (domestic animals) and whose qol is set to a negative value (also intended to be adjusted by the reader). Within domestic animals, I don't distinguish between animals in factory farms and house pets because the latter are a much smaller population. I also wanted to distinguish between land animals and fish so that I could give different groups of animals very different moral values (take that, fish!).
Finally, I add the totals for each group together, where humans are positive and wild animals, domestic animals, and fish are negative. I notice that for reasonable estimates of populations and intuitive free-variable values (modern human qol, moral value of animals and fish, and domestic animal qol) the balance comes out positive.
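For readers who prefer code to spreadsheet cells, here is a sketch of the same structure. The population and life-expectancy figures below are rough placeholders of my own, not the spreadsheet's actual entries; the free variables (qol and moral value) are the ones meant to be adjusted, and fish are omitted for brevity.

```python
# Sketch of the spreadsheet's structure. All numeric inputs below are placeholder
# guesses for illustration; substitute your own estimates (and add a fish group)
# to reproduce the actual calculation.

def group_value(population, life_expectancy, qol, moral_value):
    """Year-adjusted population (population * life expectancy), weighted by
    quality of life and moral value."""
    return population * life_expectancy * qol * moral_value

def change_in_value(ancient, modern):
    """Modern total minus the 10,000 BCE total for one group."""
    return group_value(**modern) - group_value(**ancient)

# Humans: qol = 1 and moral_value = 1 in 10,000 BCE are the reference anchors.
humans = change_in_value(
    ancient=dict(population=4e6, life_expectancy=30, qol=1.0, moral_value=1.0),
    modern=dict(population=8e9, life_expectancy=70, qol=2.0, moral_value=1.0),
)

# Wild animals: population reduced by habitat destruction, qol otherwise unchanged.
wild = change_in_value(
    ancient=dict(population=1e11, life_expectancy=5, qol=1.0, moral_value=0.01),
    modern=dict(population=5e10, life_expectancy=5, qol=1.0, moral_value=0.01),
)

# Domestic animals: brought into an existence with negative qol (suffering).
domestic = change_in_value(
    ancient=dict(population=0, life_expectancy=1, qol=0.0, moral_value=0.01),
    modern=dict(population=3e10, life_expectancy=1, qol=-1.0, moral_value=0.01),
)

balance = humans + wild + domestic  # positive favors the totalist case for modernity
print(f"humans={humans:.2e}, wild={wild:.2e}, domestic={domestic:.2e}, balance={balance:.2e}")
```

With these placeholder inputs the human term dominates and the balance comes out positive, mirroring the spreadsheet's result; the reader can swap in their own numbers to test how sensitive that conclusion is.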
This is not what I expected! My intention in creating this spreadsheet was to demonstrate just the opposite, that one would have to assume incredibly low moral values for animals for the balance to come out positive, but this is not where my math led me. This doesn't mean my argument for techno-pessimism is necessarily wrong, but I shouldn't ground it in utilitarian math.
Please feel free to copy this spreadsheet to change the empirical or variable numbers, change the categories or whatever else, and let me know if you come to different or otherwise interesting conclusions.