Barry Schwartz's The Paradox of Choice—which I haven't read, though I've read some of the research behind it—talks about how offering people more choices can make them less happy.
A simple intuition says this shouldn't happen to rational agents: If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn't do worse by having more options. The more available actions you have, the more powerful you become—that's how it should work.
This is false if the agent has to deal with other agents and the agents do not have complete knowledge of each other's source code (and memory). Agents can learn about how other agents work by observing what choices they make; in particular, if you offer Agent A additional alternatives and Agent A rejects them anyway, then Agent B can learn something about how Agent A works from this. If Agent A values Agent B not knowing how it works, then Agent A should in general value not being given more options even if it would reject them anyway.
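The information leak described above can be sketched as a one-line Bayesian update. This is a hypothetical toy model, not from the post: Agent B holds a prior over two possible "types" of Agent A, and observing A reject the new option Y shifts B's beliefs toward the type that prefers X. The type names and probabilities are illustrative assumptions.

```python
# Toy model (hypothetical): B's beliefs about A's type, updated by
# Bayes' rule after observing A reject the newly offered option Y.
from fractions import Fraction

# B's prior: A is equally likely to be a type that prefers X or a type
# that prefers Y (illustrative numbers).
prior = {"prefers_X": Fraction(1, 2), "prefers_Y": Fraction(1, 2)}

# Likelihood of observing "A rejects Y" under each type: a type that
# prefers X always rejects Y; a type that prefers Y rarely does.
likelihood_reject_Y = {"prefers_X": Fraction(1, 1), "prefers_Y": Fraction(1, 10)}

def posterior_after_rejection(prior, likelihood):
    """Bayes' rule: P(type | reject) is proportional to P(reject | type) * P(type)."""
    unnorm = {t: likelihood[t] * p for t, p in prior.items()}
    total = sum(unnorm.values())
    return {t: v / total for t, v in unnorm.items()}

post = posterior_after_rejection(prior, likelihood_reject_Y)
print(post)  # posterior mass shifts sharply toward "prefers_X"
```

Even though A "just keeps doing X", B's posterior on A's type moves from 1/2 to 10/11, which is exactly why A may disvalue being handed the extra option at all.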
That's only because you're not just adding an option; you're changing the choice you already have. You're not going from X to a choice between X and Y, but rather from X to a choice between X+(B knows A prefers this choice to the other) and Y+(B knows A prefers this choice to the other). This has nothing to do with there being another agent about; it's just the fact that you're changing the existing options at the same time as you add extra ones.
EDIT: As a general principle, there shouldn't be any results that hold except when there are other agents about, because other agents are just part of the universe.
Today's post, Harmful Options, was originally published on 25 December 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Imaginary Positions, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.