Strategyproof Mechanisms: Impossibilities


## Comments (9)

> However, since Arrow deals with social welfare functions, which take a profile of preferences as input and output a full preference ranking, it really says something about aggregating a set of preferences into a single group preference.

I'm going to nitpick here -- it's possible to write down forms of Arrow's theorem where you do get a single output. Of course, in that case, unlike in the usual formulation, you have to make assumptions about what happens when candidates drop out -- considering what you have as a voting system that yields results for an election among any subset of the candidates, rather than just that particular set of candidates. So it's a less convenient formulation for proving things. Formulated this way, though, the IIA condition actually becomes the thing it's usually paraphrased as -- "If someone other than the winner drops out, the winner stays the same."


Since Arrow and GS are equivalent, it's not surprising to see intermediate versions. Thanks for pointing that one out. I still stand by the statement for the common formulation of the theorem. We're hitting the fuzzy lines between what counts as an alternate formulation of the same theorem, a corollary, or a distinct theorem.

Arrow's theorem doesn't apply to rating systems like approval or range voting. However, Gibbard-Satterthwaite still holds. If anything, it holds more strongly, since agents have more ways to lie: now you have to worry about someone saying their favorite is ten times better than their second favorite rather than just three times better, in addition to lying about the order.

In which the limits of dominant-strategy implementation are explored. The Gibbard-Satterthwaite dictatorship theorem for unrestricted preference domains is stated, showing no universal strategyproof mechanisms exist, along with a proof for a special case.

Due to the Revelation Principle, most design questions can be answered by studying incentive-compatible mechanisms, as discussed in the last post. Incentive compatibility comes in many different flavors corresponding to different solution concepts, with dominant-strategy IC and Bayes-Nash IC being the two most common. In this post, I'll delve into what's possible under dominant-strategy incentive compatibility.

Recall that a strategy is *dominant* if playing it always leads to (weakly) higher payoffs for an agent than any other strategy would, no matter what strategies other agents play. A social choice function is *dominant-strategy incentive compatible* if honest revelation is a dominant strategy in the direct mechanism for that SCF^{1}. The appeal of implementation in dominant strategies is that an agent doesn't need to think about what other agents will do. Even in the worst case, a dominant-strategy IC social choice function leaves an agent better off for being honest. Since the need for strategic thinking is eliminated, dominant-strategy IC is also referred to as *strategyproofness*.

## Gibbard-Satterthwaite: universal strategyproof mechanisms are out of reach

Arrow's theorem is well-known for showing that dictatorships are the only aggregators of ordinal rankings satisfying a particular set of criteria. The result is commonly interpreted as saying there is no perfect voting system for more than two candidates. However, since Arrow deals with *social welfare functions*, which take a profile of preferences as input and output a full preference ranking, it really says something about aggregating a set of preferences into a single group preference. Most of the time, though, a full ranking of candidates will be superfluous: all we really need to know is who wins the election. Although Arrow doesn't give social welfare functions much room to maneuver, maybe there are still some nice social choice functions out there.

Alas, it's not to be. Alan Gibbard and Mark Satterthwaite have shown that, in general, the only strategyproof choice from three or more alternatives that is even slightly responsive to preferences is a dictatorship by one agent.

Stated formally:
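In its standard form, with agents reporting strict rankings over the outcome set:

```latex
\textbf{Theorem (Gibbard-Satterthwaite).}
Let $f$ be a social choice function mapping profiles of strict
preference rankings over a finite set $A$ of alternatives, with
$|A| \geq 3$, to outcomes in $A$, and suppose $f$ is onto$^{2}$.
If $f$ is strategyproof, then $f$ is dictatorial: there is a single
agent $i$ such that, at every profile, $f$ selects agent $i$'s
top-ranked alternative.
```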

Being strategyproof seems intuitively more desirable to me than the properties in Arrow's theorem (especially the much-critiqued *independence of irrelevant alternatives* criterion). As it turns out, though, Gibbard-Satterthwaite and Arrow are equivalent! Weakening our ambition from social welfare functions to social choice functions didn't give us anything.

Gibbard-Satterthwaite seems a great blow to mechanism design at first glance. On reflection, perhaps we were asking for too much. A completely generic strategyproof mechanism that can be used in any situation does sound too good to be true.

There are three ways to proceed from this point:

On that last path, note that switching from ordinal to cardinal preferences can’t save us unless we offset that generalization with other assumptions—cardinal information expands the ordinal type space, which can only make implementation more difficult.

Since dictatorships are the only universal strategyproof mechanisms for choosing between three options, it’s worth investigating why life is pleasant with only two options. In this case, lots of non-trivial strategyproof mechanisms exist. For example, suppose a committee is restricted to painting a bike shed either red or blue due to zoning restrictions. The social choice function that paints the shed red if and only if a majority have red as their first choice is strategyproof, as is the social choice to paint the shed red if and only if the committee unanimously lists red as their top choice.
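As a sanity check, the majority rule's strategyproofness can be verified by brute force. In this sketch, the committee size and the encoding of a member's preference as just their top choice are my own illustrative choices:

```python
from itertools import product

# With two alternatives, an agent's ranking is determined by their top
# choice, so a preference profile is a tuple of top choices.
ALTERNATIVES = ("red", "blue")

def majority_rule(profile):
    """Paint red iff a strict majority top-ranks red."""
    reds = sum(1 for top in profile if top == "red")
    return "red" if reds > len(profile) / 2 else "blue"

def is_strategyproof(rule, n):
    """No agent should ever gain by misreporting their top choice."""
    for profile in product(ALTERNATIVES, repeat=n):
        for i, honest in enumerate(profile):
            for lie in ALTERNATIVES:
                misreport = profile[:i] + (lie,) + profile[i + 1:]
                # A manipulation succeeds if lying moves the outcome
                # from the agent's bottom choice to their top choice.
                if rule(profile) != honest and rule(misreport) == honest:
                    return False
    return True

print(is_strategyproof(majority_rule, 3))  # True
```

Swapping in the unanimity rule from the example above (red iff every member top-ranks red) passes the same check.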

Of course, not every voting rule for binary outcomes will be strategyproof. Take the rule that paints the shed red if and only if an odd number of committee members top-rank red. While I can't imagine anyone arguing in favor of this rule, what's going wrong here? The problem is that this odd rule isn't *monotonic*: raising red in someone's preference ranking can cause the outcome to switch away from red.

Here are the formal definitions of strategyproofness and monotonicity:
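One standard way to write them, with $\theta_i$ agent $i$'s reported ranking, $\theta_{-i}$ everyone else's reports, and $\succ_{\theta_i}$ the strict preference encoded by $\theta_i$:

```latex
\textbf{Strategyproofness.} For every agent $i$, every profile
$(\theta_i, \theta_{-i})$, and every possible misreport $\theta_i'$,
\[
  f(\theta_i, \theta_{-i}) \succeq_{\theta_i} f(\theta_i', \theta_{-i}),
\]
i.e.\ reporting honestly is weakly better under the agent's true ranking.

\textbf{Monotonicity.} Suppose $f(\theta) = a$, and let $\theta'$ be a
profile in which no agent lowers $a$: for every agent $i$ and every
alternative $b$, $a \succ_{\theta_i} b$ implies $a \succ_{\theta_i'} b$.
Then $f(\theta') = a$.
```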

Take a moment to convince yourself that monotonicity is just a slight rephrasing of strategyproofness and hence equivalent. Though they are identical at heart, monotonicity carries two useful interpretations as an invariance property. First, if an agent submits a new ranking in which the previous outcome only moves up, the outcome can't change. Second, submitting a new ranking in which the rank of the previous outcome relative to any other option is unchanged must leave the outcome unchanged as well.
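The odd rule's failure can be exhibited concretely. A small sketch with a hypothetical three-member committee, again encoding a member's preference as their top choice:

```python
def odd_rule(profile):
    """Paint red iff an odd number of members top-rank red."""
    reds = sum(1 for top in profile if top == "red")
    return "red" if reds % 2 == 1 else "blue"

# Monotonicity failure: member 2 raises red in their ranking, yet the
# outcome switches away from red.
print(odd_rule(("red", "blue", "blue")))  # red
print(odd_rule(("red", "red", "blue")))   # blue

# Strategyproofness failure: in the second profile, member 2 truly
# top-ranks red but gets blue. Misreporting blue flips the parity and
# delivers their favorite.
print(odd_rule(("red", "blue", "blue")))  # red
```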

When there are only two alternatives, we can think of the implicit outcome space as one dimensional, with one outcome on the left and the other on the right. Going towards one outcome corresponds exactly with going away from the other. With three or more alternatives, we don’t have the same nice structure, leading to the impossibility of a non-trivial monotonic rule.

In the next post, I’ll describe domains where we can enrich the outcome space beyond a binary choice and still get strategyproofness. Since the outcome space won’t naturally have a nice order structure, we’ll have to ensure it does by restricting the preferences agents can have over it. Even though we don’t have universal strategyproof mechanisms other than dictatorships, we can uncover strategyproof mechanisms for specific applications. In the meantime, here’s a proof of Gibbard-Satterthwaite for the curious.

## Proof of Gibbard-Satterthwaite (in a special case)

Suppose the committee consists of two people, Betty and Bill. In addition to red and blue, the city recently approved yellow as an option to paint bike sheds. Each person has six possible rankings over the three colors, and there are 36 preference profiles containing one ranking from each. The 36 will fall into six relevant classes. Here is an example of each class along with some shorthand to describe it:

- r>b>y: Both agents agree on the ranking of red over blue over yellow.
- r>b^y: Both agents agree red is best. Betty puts blue as her second choice, while Bill has yellow as his second choice.
- b^y>r: Both agree red is worst. Betty thinks blue is better than yellow, while Bill thinks yellow is better.
- r ∣ (b>y): Both agents agree blue is better than yellow. Betty thinks red is better than them both, while Bill thinks red is worse than both.
- (b>y) ∣ r: Both agree blue is better than yellow. Betty thinks red is worst, while Bill thinks red is best.
- r^b^y: Betty has the ranking red over blue over yellow, while Bill has the reverse.

For the notation summarizing a profile, r>b indicates both agree that red is better than blue, and r^b says there is a conflict between the two, with Betty preferring red. r ∣ (b>y) says there are two preference conflicts, with Betty preferring red over the other two options, so we can alternatively think of this profile as r^y, r^b, and b>y.

Now we'll assign the 36 profiles a color, using each color at least once, in a way that is monotonic. This will happen in six steps, as depicted in the following diagram.

Consider b^y>r at the bottom of the diagram. The profile y>b>r got yellow in 1f, so if Betty starts liking blue better from 2, the outcome has to stay yellow or switch to blue. Since monotonicity can't tell us more than that, we have to make a choice. Let's decide in favor of Betty and pick blue. Having resolved b^y as b, three more colorings follow, since we must resolve this particular conflict the same way everywhere. Keep in mind that b^y is a different conflict than y^b, since it might matter who prefers which color. Consider 3b. This can't be yellow, since yellow increases between this profile and 2, but 2 isn't yellow. It also can't be red, since 1k isn't red, so we conclude 3b must be blue. Now consider 3a. Even though the conflict b^y resolves in favor of blue, the outcome can't be blue, since 1c isn't blue. Hence 3a must be red. From 3a, we conclude that r^y was resolved as r, so this new rule must apply everywhere. From 3c, we get a third rule, that b^r=b. Continuing in the same way gives r^b=r, y^b=y, and y^r=y.

We've found a monotonic, onto coloring! Notice that this is a dictatorship by Betty, choosing her top-ranked color for each profile. Everything comes down to favoring Betty over Bill in step two. Since Betty was pivotal once, she ends up having complete control. Of course, we could have resolved b^y as y instead, which would have given us a dictatorship by Bill. That choice in step two was the only degree of latitude we had, so these are the only two monotonic, onto colorings.

This special-case proof of Gibbard-Satterthwaite was inspired by Hansen (2002), "Another graphical proof of Arrow's impossibility theorem". Full proofs of the theorems, done simultaneously side-by-side, are given in Reny (2001), "Arrow's theorem and the Gibbard-Satterthwaite theorem: a unified approach".
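The coloring can also be double-checked mechanically. A sketch (the encoding of rankings is my own) confirming that the Betty dictatorship is onto and strategyproof across all 36 profiles:

```python
from itertools import permutations, product

COLORS = ("red", "blue", "yellow")
RANKINGS = list(permutations(COLORS))  # the 6 strict rankings

def prefers(ranking, a, b):
    """True if `ranking` places color a above color b."""
    return ranking.index(a) < ranking.index(b)

def betty_dictatorship(profile):
    """The coloring found above: always Betty's (agent 0's) top color."""
    return profile[0][0]

# Onto: every color is the outcome at some profile.
outcomes = {betty_dictatorship(p) for p in product(RANKINGS, repeat=2)}
assert outcomes == set(COLORS)

# Strategyproof: at each of the 36 profiles, neither agent can reach an
# outcome they truly prefer by submitting a false ranking.
manipulable = False
for profile in product(RANKINGS, repeat=2):
    honest_outcome = betty_dictatorship(profile)
    for i in range(2):
        for lie in RANKINGS:
            misreport = list(profile)
            misreport[i] = lie
            new_outcome = betty_dictatorship(tuple(misreport))
            if prefers(profile[i], new_outcome, honest_outcome):
                manipulable = True
print(manipulable)  # False
```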

Previously on: Incentive-Compatibility and the Revelation Principle
Next up: Strategyproof Mechanisms: Possibilities

1. Recall that the direct mechanism for an SCF is where we simply ask the agents what type they are and then assign the outcome prescribed by the SCF for that type profile. ↩

2. The function *f* is *onto* if every outcome in *A* occurs for at least one input. The main role of this assumption is to prevent *f* from covertly restricting its image to two elements, so it's almost without loss of generality. ↩