Today's post, Harmful Options, was originally published on 25 December 2008. A summary (taken from the LW wiki):


Offering people more choices that differ along many dimensions may diminish their satisfaction with their final choice. Losses are more painful than the corresponding gains are pleasurable, so people think of the dimensions along which their final choice was inferior, and of all the other opportunities passed up. If you can only choose one dessert, you're likely to be happier choosing from a menu of two than from a menu of fourteen. Refusing tempting choices consumes mental energy and decreases performance on other cognitive tasks. A video game that contained an always-visible easier route through it would probably be less fun to play, even if that easier route were deliberately foregone. You can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken. And what if a worse option is taken due to a predictable mistake? There are many ways to harm people by offering them more choices.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Imaginary Positions, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

3 comments

Barry Schwartz's The Paradox of Choice—which I haven't read, though I've read some of the research behind it—talks about how offering people more choices can make them less happy.

A simple intuition says this shouldn't happen to rational agents: If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn't do worse by having more options. The more available actions you have, the more powerful you become; that's how it ought to work.
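To make that intuition concrete, here is a minimal sketch (my own illustration with made-up payoff numbers, not anything from the post): for an idealized utility maximizer choosing in isolation, enlarging the option set can never lower the best attainable utility.

```python
# Minimal sketch: adding a worse option Y never lowers the maximum achievable utility.

def best_utility(options, utility):
    """Return the highest utility attainable from a set of options."""
    return max(utility(o) for o in options)

utility = {"X": 10, "Y": 3}.get   # hypothetical payoffs: Y is strictly worse than X

without_y = best_utility({"X"}, utility)
with_y = best_utility({"X", "Y"}, utility)

assert with_y >= without_y   # adding Y cannot lower the attainable maximum
print(without_y, with_y)     # 10 10 -- the agent just keeps choosing X
```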

This is false if the agent has to deal with other agents and the agents do not have complete knowledge of each other's source code (and memory). Agents can learn about how other agents work by observing what choices they make; in particular, if you offer Agent A additional alternatives and Agent A rejects them anyway, then Agent B can learn something about how Agent A works from this. If Agent A values Agent B not knowing how it works, then Agent A should in general value not being given more options even if it would reject them anyway.
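As a toy illustration of that information leak (my own sketch, with hypothetical agent "types" and payoff numbers that are not part of the original comment): suppose Agent B starts with a 50/50 prior over two possible preference profiles for Agent A, then does a Bayes update on whichever option it sees A take.

```python
# Hypothetical "types" Agent B considers for Agent A, each with its own payoffs.
TYPES = {
    "cautious": {"X": 10, "Y": 2},
    "greedy":   {"X": 10, "Y": 20},
}
PRIOR = {"cautious": 0.5, "greedy": 0.5}

def choice(payoffs, offered):
    """A picks whichever offered option it values most."""
    return max(offered, key=lambda option: payoffs[option])

def posterior(offered, observed):
    """B's Bayes update after watching which option A took from the offered menu."""
    likelihood = {t: 1.0 if choice(TYPES[t], offered) == observed else 0.0
                  for t in PRIOR}
    evidence = sum(PRIOR[t] * likelihood[t] for t in PRIOR)
    return {t: PRIOR[t] * likelihood[t] / evidence for t in PRIOR}

# With only X on the table, A's choice reveals nothing: both types would pick X.
print(posterior({"X"}, "X"))       # {'cautious': 0.5, 'greedy': 0.5}

# Offered X and Y, a cautious A still picks X -- and B now knows which type it faces.
print(posterior({"X", "Y"}, "X"))  # {'cautious': 1.0, 'greedy': 0.0}
```

In this sketch, offering A the extra option Y is exactly what makes its choice informative; the rejection itself is the leak.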

That's only because you're not just adding an option; you're changing the choice you already have. You're not going from X to a choice between X and Y, but rather from X to a choice between X+(B knows A prefers this choice to the other) and Y+(B knows A prefers this choice to the other). This has nothing to do with there being another agent about; it's just that you're changing the existing options at the same time as you add extra ones.
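One way to see that repackaging point is a rough sketch (again my own illustration, with an invented "leak penalty" folded into the payoffs): once the cost of B's inference is attached to the outcome, "X chosen from {X, Y}" and "X offered on its own" are simply different options with different values, so the dominance argument quoted above no longer applies.

```python
BASE_PAYOFF = {"X": 10, "Y": 3}   # hypothetical raw payoffs for Agent A
LEAK_PENALTY = 4                  # hypothetical cost to A of B learning its type

def realized_utility(offered):
    """A's payoff for its best pick, minus the leak cost whenever the menu is
    informative (more than one option for B to watch A pass over)."""
    pick = max(offered, key=BASE_PAYOFF.get)
    penalty = LEAK_PENALTY if len(offered) > 1 else 0
    return BASE_PAYOFF[pick] - penalty

print(realized_utility({"X"}))        # 10 -- X offered on its own
print(realized_utility({"X", "Y"}))   # 6  -- A still takes X, yet is worse off for being offered Y
```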

EDIT: As a general principle, there shouldn't be any results that hold except when there are other agents about, because other agents are just part of the universe.

Aha. Thanks for the correction.