Book review: Deep Utopia: Life and Meaning in a Solved World, by Nick Bostrom.

Bostrom's previous book, Superintelligence, triggered expressions of concern. In his latest work, he describes his hopes for the distant future, presumably to limit the risk that fear of AI will lead to a Butlerian Jihad-like scenario.

While Bostrom is relatively cautious about endorsing specific features of a utopia, he clearly expresses his dissatisfaction with the current state of the world. For instance, in a footnoted rant about preserving nature, he writes:

Imagine that some technologically advanced civilization arrived on Earth ... Imagine they said: "The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads ... What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony." ... this would be appallingly callous.

The book begins as if addressing a broad audience, then drifts into philosophy that seems obscure, leading me to wonder if it's intended as a parody of aimless academic philosophy.

Future Technology

Bostrom focuses on technological rather than political forces that might enable utopia. He cites the example of people with congenital analgesia, who live without pain but often suffer unnoticed injuries and related health problems. That dilemma could be mitigated by designing a safer environment.

Bostrom emphasizes more ambitious options:

But another approach would be to create a mechanism that serves the same function as pain without being painful. Imagine an "exoskin": a layer of nanotech sensors so thin that we can't feel it or see it, but which monitors our skin surface for noxious stimuli. If we put our hand on a hot plate, ... the mechanism contracts our muscle fibers so as to make our hand withdraw

Mass Unemployment

As technology surpasses human abilities at most tasks, we may eventually face a post-work society. Hiring humans would become less appealing when robots can better understand tasks, work faster, and cost less than feeding a human worker.

That conclusion shouldn't be controversial given Bostrom's assumptions about technology. Unfortunately, those assumptions are highly controversial, at least among people who haven't paid close attention to trends in AI capabilities.

The stereotype of unemployment is that it's a sign of failure. But Bostrom points to neglected counterexamples, such as retirement and the absence of child labor. Reframing technological unemployment in this light makes it appear less disturbing. Just as someone in 1800 might have struggled to imagine the leisure enjoyed by children and retirees today, we may have difficulty envisioning a future of mass leisure.

If things go well, income from capital and land could provide luxury for all.

Bostrom notes that it's unclear whether automating most, but not all, human labor will increase or decrease wages. The dramatic changes from mass unemployment might occur much later than the automation of most current job tasks.

Post-Instrumentality Purpose

Many challenges that motivate human action can, in principle, be automated. Given enough time, machines could outperform humans in these tasks.

What will be left for humans to care about accomplishing? To a first approximation, nothing: automation would eliminate what currently gives our lives purpose. Bostrom calls this the age of post-instrumentality.

Much of the book describes how social interactions could provide adequate sources of purpose.

He briefly mentions that some Eastern cultures discourage attachment to purpose, which seems like a stronger argument than his main points. It's unclear why he treats this as a minor detail.

As Robin Hanson puts it:

Bostrom asks his question about people pretty close to him, leftist academics in rich Western societies.

If I live millions of years, I expect that I'll experience large changes in how I feel about having a purpose to guide my life.

Bostrom appears too focused on satisfying the values of the current culture of those who debate utopia and AI. These values mostly represent adaptations to recent conditions. The patterns of cultural and value change suggest we're far from achieving a stable form of culture that will satisfy most people.

Bostrom seems to target critics whose arguments often amount to proof by failure of imagination. Their true objections might be:

  • Arrogant beliefs that their culture has found the One True Moral System, so any culture adapting to drastically different conditions will be unethical.
  • Fear of change: critics belonging to an elite that knows how to succeed under current conditions may be unable to predict whether they'll retain their elite status in a utopia.

The book also dedicates many pages to interestingness, asking whether long lifespans and astronomical population sizes will exhaust opportunities to be interesting. This mostly convinced me that I'm confused about what kind of interestingness I value.

Malthus

A large cloud on the distant horizon is the pressure to increase population to the point where per capita wealth is driven back down to non-utopian levels.

We can solve this by ... um, waiting for his next book to explain how? Or by cooperating? Why did he bring up Malthus and then leave us with too little analysis to guess whether there's a good answer?

To be clear, I don't consider Malthus to provide a strong argument against utopia. My main complaint is that Bostrom leaves readers confused as to how uncomfortable the relevant trade-offs will be.

Style

The book's style sometimes seems more novel than the substance. Bostrom is the wrong person to pioneer innovation in writing styles.

The substance is valuable enough to deserve a wider audience. Parts of the book attempt to appeal to a broad readership, but the core content is mostly written in a style aimed at professional philosophers.

Nearly all readers will find the book too long. The sections (chapters?) titled Tuesday and Wednesday contain the most valuable ideas, so maybe read just those.

Concluding Thoughts

Bostrom offers little reassurance that we can safely navigate to such a utopia. However, it's challenging to steer in that direction if we only focus on dystopias to avoid. A compelling vision of a heavenly distant future could help us balance risks and rewards. While Bostrom provides an intellectual vision that should encourage us, it falls short emotionally.

Bostrom's utopia is technically feasible. Are we wise enough to create it? Bostrom has no answer.

Many readers will reject the book because it relies on technology too far from what we're familiar with. I don't expect those readers to say much beyond "I can't imagine ...". I have little respect for such reactions.

A variety of other readers will object to Bostrom's intuitions about what humans will want. These are the important objections to consider.

I'll rate this book 3.5 on the nonstandard scale used for this Manifold bounty.

P.S. While writing this review, I learned that Bostrom's Future of Humanity Institute has shut down for unclear reasons, seemingly related to friction with Oxford's philosophy department. This deserves further discussion, but I'm unsure what to say. The book's strangely confusing ending, where a fictional dean unexpectedly halts Bostrom's talk, appears to reference this situation, but the message is too cryptic for me to decipher.

Comments

OP quoting Bostrom:

Imagine that some technologically advanced civilization arrived on Earth ... Imagine they said: "The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads ... What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony." ... this would be appallingly callous.

I have some sympathy with that technologically advanced civilisation. I mean, what would you rather they do? Intervene to remould humans into their preferred form? Or only if their preferred form just happened to agree with yours?

I would go further, and say that replacing human civilization with “a monoculture of healthy, happy, well-fed people living in peace and harmony” does in fact sound very bad. Never mind these aliens (who cares what they think?); from our perspective, this seems like a bad outcome. Not by any means the worst imaginable outcome… but still bad.

Doing nothing might be preferable to intervening in that case. But I'm not sure if the advanced civilization in Bostrom's scenario is intervening or merely opining. I would hope the latter.

If they’re merely opining, then why should we be appalled? Why would we even care? Let them opine to one another; it doesn’t affect us.

If they’re intervening (without our consent), then obviously this is a violation of our sovereignty and we should treat it as an act of war.

In any case, one “preserves” what one owns. These hypothetical advanced aliens are speaking as if they own us and our planet. This is obviously unacceptable as far as we’re concerned, and it would behoove us in this case to disabuse these aliens of such a notion at our earliest convenience.

Conversely, it makes perfect sense to speak of humans as collectively owning the natural resources of the Earth, including all the animals and so on. As such, wishing to preserve some aspects of it is entirely reasonable. (Whether we ultimately choose to do so is another question—but that it’s a question for us to answer, according to our preferences, is clear enough.)

J:

This is a major theme in Star Trek: The Next Generation, where they refer to it as the Prime Directive. It always bothered me when they violated the Prime Directive and intervened because it seemed like it was an act of moral imperialism. But I guess that's just my morals (an objection to moral imperialism) conflicting with theirs.

A human monoculture seems bad for many reasons analogous to the ones that make an agricultural monoculture bad, though. Cultural diversity and heterogeneity should make our species more innovative and more robust to potential future threats. A culturally heterogeneous world would also seem harder for a central entity to gain control of. Isn't this largely why the British, Spanish, and Roman empires declined?

But another approach would be to create a mechanism that serves the same function as pain without being painful. Imagine an "exoskin": a layer of nanotech sensors so thin that we can't feel it or see it, but which monitors our skin surface for noxious stimuli. If we put our hand on a hot plate, … the mechanism contracts our muscle fibers so as to make our hand withdraw

I recommend Gwern’s discussion of pain to anyone who finds this sort of proposal intriguing (or anyone who is simply interested in the subject).

Given utopian medicine, Gwern's points seem not very important.

Does Bostrom address human modification/amplification? I'd think he would, but I'm not sure he actually did, at least not in any depth.

A world in which we all get sad because we can't make new philosophy breakthroughs and don't bother to engineer out that sadness seems quite implausible. Yet I didn't hear this addressed in his interview with Liv Boeree.

And I'm not going to buy and read it just to find out.

He predicts that it will be possible to do things like engineer away sadness. He doesn't devote much attention to convincing skeptics that such engineering will be possible. He seems more interested in questions of whether we should classify the results as utopian.

Thanks! I'm also uninterested in the question of whether it's possible. Obviously it is. The question is how we'll decide to use it. I think that answer is critical to whether we'd consider the results utopian. So, does he consider how we should or will use that ability?

I can't recall any clear predictions or advice, just a general presumption that it will be used wisely.

J:

I've only skimmed it, but so far I'm surprised Bostrom didn't discuss a possible future where AI "agents" act as both economic producers and consumers. Human population growth would seem to be bad in a world where AI can accommodate human decline (i.e., protecting modern economies from the loss of consumers and producers), since finite resources are a pie that gets divided into smaller or larger slices depending on the number of humans to allocate them to. And larger slices would seem to increase average well-being. Maybe he addressed this, but I missed it in my skim.

You seem to assume we should endorse something like average utilitarianism. Bostrom and I consider total utilitarianism to be closer to the best moral framework. See Parfit's writings if you want a deeper discussion of this topic.

J:

Thanks! Just read some summaries of Parfit. Do you know any literature that addresses this issue in the context of a) impacts on other species, or b) using artificial minds as the additional population? I assume total utilitarianism presupposes arbitrarily growing physical space for populations to expand into, and would not apply to finite spaces or resources (I think I recall Bostrom addressing that).

Reading up on Parfit also made me realize that Deep Utopia really has prerequisites, and you were right that it's probably more readily understood by those with a philosophy background. I didn't really understand what he was saying about utilitarianism until just reading about Parfit.