# 36

Suppose that we, a group of rationalists, are trying to come to an agreement on which course of action to take. All of us have the same beliefs about the world, as Aumann's Agreement Theorem requires, but everyone has different preferences. What decision-making procedure should we use? We want our decision-making procedure to satisfy a number of common-sense criteria:

- Non-dictatorship: No one person should be able to dictate what we should do next. The decision-making process must take multiple people's preferences into account.

- Determinism: Given the same choices, and the same preferences, we should always make the same decisions.

- Pareto efficiency: If every member of the group prefers action A to action B, the group as a whole should also prefer A to B.

- Independence of irrelevant alternatives: If we, as a group, prefer A to B, and a new option C is introduced, then we should still prefer A to B, regardless of what we think about C.

Arrow's Theorem says the following: Suppose we have a list of possible courses of action, {A, B, C, D, ..., Q}. Everyone gets a pencil and a piece of paper, and writes down what their preferences are. E.g., if you thought that Q was the best, D was the second best, and A was the worst, you would write down Q > D > (...) > A. We then collect all the slips of paper and put them in a big bin. The theorem shows that we cannot take all these slips of paper and produce a set of preferences for the group as a whole, e.g., {H > E > I > C > ... > D > A}, in a way that satisfies all of the above criteria.

Therefore (so the story goes), much woe be unto us, for there is no way we can make decisions that satisfy this entirely reasonable list of requirements. Except this isn't actually true. Suppose that, instead of writing down a list of which actions are preferred to which other actions, we write down a score next to each action, on a scale from 0 to 10. (For simplicity, we allow any real number as a score, not just integers, so, say, sqrt(2) or 7/3 are perfectly valid scores.) We add up the numbers that each person puts down for each action. Say, if person 1 puts down 5 for action A, and person 2 puts down 7 for action A, then A has a total score of 12. We then decree: we will, as a group, prefer any action with a higher total score to any action with a lower total score. This procedure does satisfy all of the desired criteria:

- Non-dictatorship: Suppose there are two possible actions, A and B, and ten people in the group. Each person gives a score of zero to A and a ten to B, so B has a total score of 100, as compared to A's score of zero. If any one person changes their score for A to X, and their score for B to Y, then the total score for A is now X, and the score for B is now 90 + Y. Since we know that 0 <= X <= 10 and 0 <= Y <= 10, we can derive that 90 + Y >= 90 > 10 >= X, so B is still preferable to A. Therefore, no one person can switch the group's decision from B to A, so no one person is a dictator.

- Determinism: All we do is add and rank the scores, and adding and ranking are both deterministic operations, so we know this procedure is deterministic.

- Pareto efficiency: Suppose that there are N people in the group, and two possible actions, A and B. Every person prefers A to B, so person X will assign some score to B, B_X, and some score to A, A_X, where A_X > B_X. Let C_X = A_X - B_X; we know that C_X > 0. The total score for B is (B_1 + B_2 + B_3 + ... + B_N), and the total score for A is (A_1 + A_2 + A_3 + ... + A_N) = (B_1 + B_2 + B_3 + ... + B_N) + (C_1 + C_2 + C_3 + ... + C_N) > (B_1 + B_2 + B_3 + ... + B_N), so the group as a whole also prefers A to B.

- Independence of irrelevant alternatives: If we prefer A to B as a group, and we all assign the same scores to A and B when C is introduced, then the total score of A must still be higher than that of B, even after C is brought in. Hence, our decision between A and B does not depend on C.
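The score-summing procedure itself takes only a few lines to sketch. This is a minimal illustration; the function name and the ballot format (a dict per voter) are my own choices, not anything canonical:

```python
# Minimal sketch of the score-summing procedure described above.
# Ballot format and function name are illustrative, not canonical.

def group_ranking(ballots):
    """ballots: list of dicts mapping each action to a score in [0, 10].
    Returns the actions sorted from highest to lowest total score."""
    totals = {}
    for ballot in ballots:
        for action, score in ballot.items():
            assert 0 <= score <= 10
            totals[action] = totals.get(action, 0) + score
    return sorted(totals, key=totals.get, reverse=True)

# The worked example from the text: one person scores action A at 5,
# another at 7, so A totals 12 and beats B's total of 11.
print(group_ranking([{"A": 5, "B": 10}, {"A": 7, "B": 1}]))  # ['A', 'B']
```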

What happened here? Arrow's Theorem isn't wrong, in the usual sense; the proof is perfectly valid mathematically. However, the theorem only covers a specific subset of decision algorithms: those where everyone writes down a simple preference ordering, of the form {A > B > C > D ... > Q}. It doesn't cover all collective decision procedures.

I think what's happening here, in general, is that (a) there is a great deal of demand for theorems which show counterintuitive results, as these tend to be the most interesting, and (b) there are a large number of theorems which give counterintuitive results about things in the world of mathematics or physics. For instance, many mathematicians were surprised when Cantor showed that the number of all rational numbers is equal to the number of all integers.

Therefore, every time someone finds a theorem which appears to show something counterintuitive about real life, there is a tendency to stretch the definition: to say, wow, this theorem shows that thing X is actually impossible, when it usually just says "thing X is impossible if you use method Y, under conditions A, B and C". And it's unlikely you'll get called on it, because people are used to theorems which show things like 0.999... = 1, and apparently treat this as evidence that a theorem which (supposedly) says "you can't design a sane collective decision-making procedure" or "you can't do electromagnetic levitation" is more likely to be correct.

More examples of this type of fallacy:

- Earnshaw's theorem states that no stationary collection of charges or magnets is stable. Hence, you cannot put a whole bunch of magnets in a plate, put the plate over a table with some more magnets in it, and expect the plate to stay where it is, regardless of the configuration of the magnets. This theorem was taken to mean that electromagnetic levitation of any sort was impossible, which caused a great deal of grief, as scientists at the time still thought that atoms were governed by classical electromagnetism rather than quantum mechanics, and so the theorem would appear to imply that no collection of atoms is stable. However, we now know that atoms stay stable by virtue of the Pauli exclusion principle, not classical E&M. And, it turns out, it is possible to levitate things with magnets: you just have to keep the thing spinning instead of leaving it in one place. Or you can use diamagnetism, but that's another discussion.

- Heisenberg's Student Confusion Principle, which does not say "we can't find out what the position and momentum of a particle is", has already been covered earlier, by our own Eliezer Yudkowsky.

- Conway's Free Will Theorem says that it's impossible to predict the outcomes of certain quantum experiments ahead of time. This is a pretty interesting fact about quantum physics. However, it is only a fact about quantum physics; it is not a fact about this horribly entangled thing that we call "free will". There are also numerous classical, fully deterministic computations whose output we can't predict ahead of time; e.g., if you write a Turing machine that halts once it has output every twin prime, will it halt after some finite number of steps? If we could predict the behaviour of all such computations ahead of time, it would violate the undecidability of the halting problem, and so there must be some that we can't predict.

- There are numerous laws of economics which appear to fall into this category; however, so far as I can tell, few of them have been mathematically formalized. I am not entirely sure whether this is a good thing or a bad thing: formalizing them would make them more precise, and clarify the conditions under which they apply, but it might also cause people to believe in a law "because it's been mathematically proven!", even well outside of the domain it was originally intended for.

## Comments

The procedure you're proposing collapses into approval voting immediately.

Nobody has any reason to vote anything other than 10 (possibly minus epsilon) or 0 (possibly plus epsilon). If you like A slightly more than B, and estimate A will get 81 points while B will get 90 points, the optimal behaviour is to vote 10 for A (and everything better than A), and 0 for B (and everything worse than B); any other vote is strictly worse for you. There is no scenario under which voting anything other than 0 or 10 beats extreme votes.

Approval voting is prone to tactical voting (this should be added to the requirements; Arrow's Theorem talks about preferences, as it assumes votes are uniquely determined by them). You never need to order candidates in a way that reverses your preferences, but your approval threshold depends on what you think others will vote. If you think A > B > C, and think C is leading, you vote 10, 10, 0. If you think B is leading, you vote 10, 0, 0. It also fails at determinism.

If we give people a predefined allowance of points (like requiring n alternatives to get scores 1 to n, or giving each voter n points to distribute arbitrarily), it fails independence immediately, and is prone to preference reversal in tactical voting, which is even worse (if A > B > C, and C is leading, but B has some chance, you vote 0, 10, 0, with A < B).

"The procedure you're proposing collapses into approval voting immediately."

This only holds when you already know the outcome of every other vote, and not otherwise. (Of course, in the real world, you don't normally know the outcome of every other vote). Suppose, for instance, that you have three possible outcomes, A, B and C, where A is "you win a thousand dollars", B is "you get nothing", and C is "you lose a thousand dollars". Suppose that (as a simple case) you know that there's a 50% chance of the other scores being (92, 93, 90) and a 50% chance of the other scores being (88, 92, 99).

If you vote 10, 0, 0, then 50% of the time you win a thousand dollars, and 50% of the time you lose a thousand dollars, for a net expected value of \$0. If you vote 10, 10, 0, you always get nothing, for a net expected value of \$0. If you vote 10, 8, 0, however, you win \$1000 50% of the time and get nothing 50% of the time, for a total expected value of \$500.
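That arithmetic can be checked mechanically. A quick sketch, with the outcome values and scenario totals taken from the comment above (the function name is my own):

```python
# Expected-value check for the three candidate ballots discussed above.
# Outcome A = win $1000, B = nothing, C = lose $1000.
values = {"A": 1000, "B": 0, "C": -1000}
# Two equally likely profiles for everyone else's totals (A, B, C):
scenarios = [(92, 93, 90), (88, 92, 99)]

def expected_value(my_vote):
    ev = 0.0
    for others in scenarios:
        totals = [t + v for t, v in zip(others, my_vote)]
        winner = "ABC"[totals.index(max(totals))]  # highest total wins
        ev += 0.5 * values[winner]
    return ev

print(expected_value((10, 0, 0)))   # 0.0
print(expected_value((10, 10, 0)))  # 0.0
print(expected_value((10, 8, 0)))   # 500.0
```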

taw is correct for most realistic situations. If there is a large voting population, and your probability distribution over vote outcomes is pretty smooth, then the marginal expected utility of each +1 vote shouldn't change much as a result of your minuscule contribution. In that case, if you vote anything other than 0, you may as well vote 10.

Doesn't this assume that you're a causal decision theorist?

Doesn't this assume that you're a causal decision theorist?

No. (That is, to make taw incorrect you have to assume much more than that you are not a CDT agent. For example, making assumptions about what the other agents are, what they want and what they know about you.)

It seems to me that the same sort of decision-theoretic considerations that motivate me to vote at all in a large election would make it valid for me to vote my actual (relative) preference weighting in that election.

That's true, in the limit as the number of voters goes to infinity, if you only care about which preference is ranked the highest. However, there are numerous circumstances where this isn't the case.

Specifically, it isn't the case if you believe that the disparity between potential winners is smaller than the maximum vote number minus the minimum and that other people do not believe the same and adjust their votes accordingly.

Relevant section from Steve Rayhawk's nursery effect link:

If an 0-99 range voter has given 0s and 99s to the candidates she considers most likely to win, and now asks herself "how should I score the remaining no-hope candidates?", the strategic payoff for exaggerating to give them 0s or 99s can easily be extremely small, because the probability of that causing or preventing them winning can easily be below 10^-100. Supposing a voter gets even a single molecule worth of "happiness neurotransmitter" from voting honestly on these candidates, that happiness-payoff is worth more to that voter than the expected payoff from exaggerating about these candidates via "approval style" range-voting. Therefore, range voters will often cast substantially-honest range votes, even those inclined to be "strategic."

This system is a reinvention of range voting.

Like taw says, for causally rational voters this procedure collapses into approval voting. (Though Warren D. Smith of the Center for Range Voting argues that range voting is still better. In computer simulations (technical report), under many conditions, voters who (causally irrationally) choose non-extreme scores collectively reduce aggregated Bayesian regret. Non-extreme scores may help voters simulate non-causal rationality by a "nursery effect".)

I've thought about another voting system that might not collapse. In this system, each voter has the same power as in approval voting. Each voter can choose their approvals themselves, if they have the resources to choose tactically good votes themselves. The system also has a "public voting tactician", which is to voting what a public defender is to legal defense. Voters can give their preferences to the public voting tactician in the form of a utility function, and the tactician finds an equilibrium approval voting strategy for all the voters given their preferences.

(The real reason I am interested in this system is that it might get around problems of incommensurability and scaling of utility functions while still using full utility information.)

The tactician should be chosen to minimize the advantage which voters or groups of voters can get by using any other procedure to choose their votes, or by reporting their preferences non-honestly.

Parts of this system that the description above doesn't define:

• What information does the tactician have? Does it have the preferences of the other voters? Does it have the votes of the voters who did not give preferences to the tactician? If yes, then this may not be a secret ballot, and non-public-tactician voting tactics may be at an unfair disadvantage, preventing competitive pressure to change the public tactician. If no, then where do its beliefs about other voters' votes come from?
• What kind of equilibrium? A voter only has a causal incentive to vote sincerely and admissibly (i.e. monotonic non-decreasing and non-constant in their preferences) when their vote can make a difference. What prevents "all the voters vote for a least collectively preferred candidate" from being an equilibrium, if, in this equilibrium, no voter has a causal incentive to change their vote? Are mixed strategies or uncertainty needed to define the equilibrium? I think the system reduces the voters' utility functions to simple preference orderings unless it models more than infinitesimal uncertainty about other voters' strategies.
• Will there be voters or groups of voters who can do better by using a non-deterministic method to choose their input to the voting system? If so, should the tactician sacrifice determinism?
• Is the best public voting tactician computationally intractable?

I wrote a blog post a while back about equilibrium strategies in elections. I defined a "cabal equilibrium" as one in which, not only does no single player wish to change strategies, but no group of players wishes to change strategies together.

For example, (D,D) is not a cabal equilibrium in PD, because both players would prefer to change strategies together to (C,C). But (C,C) is not a cabal equilibrium either, because either player would prefer to change to D. PD has no cabal equilibria.
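The claim that the one-shot Prisoner's Dilemma has no cabal equilibrium can be verified by brute force. A sketch using standard PD payoffs (the specific payoff numbers, 3/0/5/1, are my assumption; any payoffs with the usual PD ordering give the same result):

```python
from itertools import product

# Symmetric one-shot Prisoner's Dilemma: payoff[(my_move, their_move)].
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def is_cabal_equilibrium(profile):
    """True if no nonempty group of players can jointly switch strategies
    so that every member of the group is strictly better off."""
    u = (payoff[profile], payoff[profile[::-1]])
    for alt in product("CD", repeat=2):
        v = (payoff[alt], payoff[alt[::-1]])
        movers = [i for i in (0, 1) if alt[i] != profile[i]]
        if movers and all(v[i] > u[i] for i in movers):
            return False
    return True

# Every strategy profile in the PD fails the test: no cabal equilibria.
print([p for p in product("CD", repeat=2) if is_cabal_equilibrium(p)])  # []
```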

Elections have cabal equilibria iff there's a Condorcet winner, and a cabal equilibrium elects the Condorcet winner.

It sounds like you rediscovered the "Core".

Related: "Strategic approval voting in a large electorate" by Jean-François Laslier. Smith's report also considers tactical voting, and is unrefereed but more thorough.

Even if you can only state a preference ordering and nothing else, there's no reason to keep IIA, as the ranking of irrelevant alternatives is evidence of preference strength.

Apparently Arrow's theorem and the Gibbard-Satterthwaite theorem are the same thing, so there's a real impossibility theorem in there, with the real lesson being that you can't avoid tactical voting.

The Gibbard-Satterthwaite theorem is like Arrow's theorem, in that it only applies to voting systems which work solely based on preference ordering. Under my system, there's no incentive for "tactical voting", in the sense of giving a higher score to a candidate who you think is worse; a candidate can only do better if they're ranked more highly.

But there's lots of incentive to misstate preference strengths. (I think you can prove this always has to be true by applying G-S to preferences over gambles. Rejecting determinism here means saying the winner can depend on something other than anyone's preference strengths, which is bad.)

[anonymous]:

True, but there's lots of incentive for misstating preference strengths.

Great post, voted up. I would add Amartya Sen's Liberal Paradox to the list, particularly relevant because it's regarded as a corollary of Arrow's Theorem.

It purports to show a problem with liberalism (actually, what most people would now call libertarianism). However, it does so by equating rights with obligations -- in other words, preventing people from waiving or otherwise trading away their rights, which is pretty much the essence of (what its proponents consider) the chief mechanism by which libertarianism achieves Pareto-optimality.

So, to Sen, a property right to an apple means you can never transfer the right to the apple to another person, which is of questionable applicability to real scenarios. He uses this constraint to show that libertarianism isn't Pareto-optimal because pesky rights will get in the way of Pareto-improvements. But if a move is really Pareto-optimal, then the relevant parties will waive the relevant rights! (Neat tongue-twister there, by the way.)

Strangely, in developing the alleged problem, Sen passes right by a much stronger argument against libertarianism, that property rights can hinder the well-being of disproportionately many people, even if it wouldn't be Pareto-optimal to make the move (e.g. the one hold-out is made worse off).

I can only guess that his purpose in presenting the Liberal Paradox was to get that argument across to readers' minds, but without having to step out as a defender of that view. That would actually have been a clever move.

I've never understood why this issue is taken seriously.

I'm not sure how critical it is, but your method fails the IIA test described in Formal Statement of the Theorem on the wiki page. Changing the strengths assigned to outcomes can alter the social choice without changing individual rankings.

And it appears Tyler Cowen agrees.

"There are numerous law of economics which appear to fall into this category; however, so far as I can tell, few of them have been mathematically formalized."

For better or worse, just about everything in economics has been mathematically formalized. If you are wondering about a specific "law" or concept I can try to dig up examples.

I don't see how this can possibly be correct. Suppose we require that no person can give the same score to 2 different options. Your reasoning would indicate that we can then combine these scores in a way that produces a total score for every option that satisfies the four conditions. Conditions 1,2, and 4 are stated in ways that don't depend on whether we use rankings or scores; condition 3 explicitly assumes the use of a score. So if we take the output scores, and list them in order, we get an output ranking, which must still meet those conditions, hence violating Arrow's theorem.

"So if we take the output scores, and list them in order, we get an output ranking, which must still meet those conditions, hence violating Arrow's theorem."

If we only take into account rankings as data, and only output rankings, then of course; the point is that we don't, because it's an artificial constraint, and there's no reason for us to adhere to it.

You can take your input rankings, map them into scores, use your process on the numbers, and get output scores. These output scores satisfy the 4 criteria. You can then map these output scores back into rankings. Because the 4 criteria don't depend on whether you use rankings or scores, they must still be satisfied, except for determinism. Determinism might not be satisfied because the mapping of rankings into scores is one-to-many.

Does Arrow's theorem allow us to find solutions to the original ranking problem if we throw out the determinism criterion? I don't see how determinism is useful.

By Arrow's Theorem, any process by which rankings are mapped to scores must violate one of these criteria, once the scores are processed using my algorithm. This is not terribly surprising, as you lose all the information about relative importance in doing so.

Yes; but can we find a process which violates only determinism? Because we don't need determinism.

but can we find a process which violates only determinism?

That's exactly what tommccabe's process does. It violates determinism in Arrow's sense of the word, but it satisfies the other conditions. Moreover, it violates Arrow-style determinism in a way that we arguably want anyway.

Note: according to the Wikipedia article cited, Arrow's theorem is valid "if the decision-making body has at least two members and at least three options to decide among". This makes me suspicious of the Pareto-efficiency counter-example, as it assumes we have only two options.

It doesn't matter if there are ten thousand other options. If you sum numbers A-1 through A-N, and you sum numbers B-1 through B-N, and A-X > B-X for all X, then A must be larger than B; it doesn't matter how many alternatives there are.

Fair enough. Although in considering the implications of more than two options for the other conditions, I noticed something else worrisome.

The solution you present weakens the notion of a social welfare function: after all, if I have two voters, and they vote (10,0,5) and (0,10,5), the result is an ambiguous ordering, not the strict ordering required by Arrow's theorem (which is really a property of very particular endomorphisms on permutation groups).

It seems like a classic algorithmic sacrifice of completeness for power. Was that your intent?

[...] - Independence of irrelevant alternatives: If we prefer A to B as a group, and we all assign the same scores to A and B when C is introduced, then the total score of A must still be higher than that of B, even after C is brought in. Hence, our decision between A and B does not depend on C.

Wrong: this assumes we assign scores to alternatives in a way that doesn't depend on the others. This is wrong for two reasons:

• Tactical voting, as taw pointed out

• We value things as compared to alternatives: if A is good and B is so-so, you'll rate B differently if there's a third alternative C that's downright awful.

You didn't "beat" arrow's theorem by finding an alternate way to express preferences, but by making the fourth constraint less strict. Once you release that constraint, I'm sure you can find an algorithm that uses preference ordering instead of preference rating that fulfills the constraints just as well (i.e. not very well).

(Still, I agree with the general point, that we should be wary of over-interpreting what mathematical models tell us)

That's not what independence of irrelevant alternatives means; it means that, given the same relative rankings, the voting algorithm will always make the same decision, with or without the alternative. It doesn't mean that the voters will make the same decisions.

The independence of irrelevant alternatives ("pairwise independence") can also apply to individuals; it is just that Arrow's theorem doesn't directly require it. However, a better statement of the independence of irrelevant alternatives for Arrow is "if all individuals have preferences satisfying pairwise independence, then the group's decision function should also satisfy it". So for group IIA to matter, we should have individual IIA.

I believe that Arrow's Theorem has been widely misinterpreted, and with a slightly different interpretation it makes more sense.

Here is my argument:

For every collection of preferences, there must be some outcome that satisfies Arrow's Theorem. Like, for voting there must be some winner or there is some sort of tie. Either there is a winner or there is not. Arrow's Theorem can't demand that there has to be a winner and there cannot be a winner.

Since for any collection of preferences there must be some outcome that satisfies Arrow's Theorem, we can make an acceptable voting system by simply looking at each collection of preferences and going through the finite list of outcomes until we find one that satisfies Arrow's Theorem.

The reason people say that no voting system can always work, is that they imagine that voting systems cannot have ties. Obviously, no voting system that never results in a tie can be completely fair, nor will it satisfy Arrow's Theorem.

When there are more than two candidates and more than one voter, you can have a tie in new ways, it isn't necessary to have two candidates with exactly the same number of votes. You can also have a tie when -- one way or another -- the voters collectively prefer A to B, B to C, and C to A. Then of course there is no valid way to say which they prefer most. If the voting system prefers A then it's wrong because they prefer C to A, etc.

So a good voting system should not declare a winner in that case.

Here's a further conclusion. A voting system which does not collect enough information, cannot detect that A > B > C > A and so it will not know to declare a tie. So voting systems which satisfy Arrow's Theorem must somehow ask the right questions.
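The A > B > C > A situation described above is the classic Condorcet cycle; a minimal sketch with three voters (the helper name and ranking lists are my own illustration):

```python
# Three voters whose sincere rankings produce a majority cycle.
voters = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

# Each pairwise contest is won 2-1, so the group "prefers"
# A to B, B to C, and C to A: no candidate beats every other.
print(majority_prefers("A", "B"),
      majority_prefers("B", "C"),
      majority_prefers("C", "A"))  # True True True
```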

This voting system fails Pareto efficiency. Given three alternatives A, B and C, which I love, like, and hate respectively, I would vote 10-10-0. Add a second person like me, and options A and B are tied even though we both prefer A.

And yes, this is a rational way of voting, if my dislike of C trumps any difference between A and B, and if I didn't know the preferences of the other voter.

Again, that's not what Pareto efficiency means; it means that the voting algorithm will rank A higher if everyone votes A higher than B. Going by your definition, no voting algorithm could ever be Pareto efficient, because everyone could just lie and say they preferred B to A when it's actually the reverse.

Actually, it similarly fails the independence of irrelevant alternatives. In the example where there are three options, A, B and C, which I love, like and hate respectively, I would assign 10-10-0 to the three options, but would assign 10-0 if A and B were the only two options.

That's not what independence of irrelevant alternatives means; it means that, given the same relative ranking between A and B, the voting algorithm will always make the same decision, with or without the alternative. It doesn't mean that the voters will make the same decisions.

Why do you expect the voting algorithm to make the same decision when the voters make different decisions? If the addition of Option C causes supporters of Option A over Option B to decrease the difference in their scores between A and B more than the supporters of Option B over Option A, it can shift the algorithms output from Option A to Option B.

It says that if the voters make the same decisions once the alternative is added, then the voting algorithm does. See Arrow's definition of IIA on page 27 of the 2nd edition of Social Choice and Individual Values (you can find this part for free on Amazon).

[anonymous]:

Which seems to me obviously a bad criterion. You can add a compromise option that none of the voters would choose by way of maximizing their individual utilities, but which would maximize the sum of their expected utilities, for example.

I stand corrected. Your post does indeed do what it claims to do. Upvoting.

Another example: Cox's theorem.

Which laws of economics are you thinking of?

Your system fails the first two criteria:

Dictatorship: (ETA: My point here was wrong. I missed that the voters could only select numbers from 0 to 10.) Suppose that one of the voters can write numbers much, much larger than any number that the other voters can write. (Maybe this voter can write faster and knows clever notations for writing very large numbers.) Then this voter can be dictator by writing a number next to its choice that exceeds the sums of the largest numbers that the other voters can write.

Determinism: The same preference orders, but expressed with different numbers, can lead to different outcomes. (Examples available upon request.)

For the first: You can only write numbers up to 10.

For the second: Yes, that's the point, and that's not what determinism means; determinism just means that there is no randomness involved. Relative distances between preferences matter! Suppose the vote was between option A, you get a thousand dollars, option B, you get nothing, and option C, your country gets destroyed in nuclear war. You would want a way to express that you dislike C much, much more than you like A.

For the first: You can only write numbers up to 10.

I had noticed that and edited my post, but too late for you to see, I guess. Sorry for the hasty conclusion.

For the second: Yes, that's the point, and that's not what determinism means; determinism just means that there is no randomness involved.

No, "determinism" in Arrow's theorem means that the preference orders assigned by the algorithm cannot change unless the preference orders of the voters change. Randomness is one way that an algorithm could be nondeterministic, but it isn't the only way. Your system gives another way to be nondeterministic in the sense of Arrow's Theorem.

If you want to use the word "determinism" in that sense, then a far better definition would be "the voting outcome is not affected by anything other than the votes of the voters", which my system does hold to. As I said above, I haven't claimed to have found a flaw in the mathematical proof of Arrow's Theorem, just a mismatch between the content of the theorem and how voting systems work in real life. Certainly, in real life, we should want to distinguish between "a vote between the Democrats, the Greens, and the Republicans", and "a vote between the Democrats, the Greens, and a superintelligent UFAI", even if our preference order is the same in both cases.

If you want to use the word "determinism" in that sense, then a far better definition would be "the voting outcome is not affected by anything other than the votes of the voters", which my system does hold to.

Fair enough, but it's not a matter of how I want to use the word "determinism". That word in Arrow's Theorem has a certain technical meaning, and your system does not qualify as deterministic under that technical meaning.

You're making the case that determinism, in Arrow's sense, is not such the desideratum that it's usually made out to be. FWIW, I'm coming around to your point there.

How do you break numerical ties without randomness?

You can just choose the first one in alphabetical order, or some equivalent.

I volunteer to choose the language and wording of the preferences.

Okay, then, we all agree before the vote to break ties based on some arbitrary binary digit of pi, which we don't look at until after the vote is done.

Or, equivalently, we can agree to break ties based on the outcome of the rand() function seeded with the time in milliseconds at the time of agreement.

Neither of those fulfills the relevant functional definition of determinism. The same inputs either do or do not lead to the same outputs; if they do, then it is deterministic and subject to Arrow, and if they do not, then they are functionally random.

Neither of those fulfills the relevant functional definition of determinism.

Exactly. Both do or both don't. As (I assume) you were implying with your earlier question, ties in this system are broken either arbitrarily (a limited-dictator situation) or functionally randomly. Which of these categories "alphabetical" and "pi picking" fall into could vary by assumption, but one of the two is implied.

The same inputs either do or do not lead to the same outputs; if they do, then it is deterministic and subject to Arrow, and if they do not, then they are functionally random.

Yes.

The suggested methods for tiebreaking are therefore not helpful, because the voters have to know the voting function beforehand, without even epistemic randomness. I was just noting that this system is extremely sensitive to tactical voting on the margin.

The suggested methods for tiebreaking are therefore not helpful, because the voters have to know the voting function beforehand, without even epistemic randomness. I was just noting that this system is extremely sensitive to tactical voting on the margin.

So was I. (I was expanding grandparent somewhat along these lines as you were replying.)

Eg., if you thought that Q was the best, D was the second best, and A was the worst, you would write down Q > B > (...) > A.

Did you mean to write "B was the second best"?

Yes, fixed.

I may have misunderstood something, but I don't think your system excludes pivotal voters. For example, with the aforementioned ten people, it is possible that everyone gave every alternative except B score 0, and alternative B got score 1 from every member of the group. If one of the members then changes her score for A to 10 and for B to 0, we have A(10) > B(9).

This does appear fixable with a scale tailored to the number of people and alternatives.

[anonymous]:

You've misunderstood the axiom of non-dictatorship: it requires that there not be a person who is always capable of determining the outcome.

It is impossible, given Pareto efficiency, for there to be no person sometimes capable of determining the outcome (i.e. pivotal voter), because a unanimously pro-A group, which must elect A, can be turned into a unanimously pro-B group, which must elect B, by repeatedly changing the preference of just one individual. (But not the same individual every time, of course.)

Oh, right. I should check before posting.

I don't quite see the second part, but thank you for the explanation.

[anonymous]:

In a world of black and white, if there's both a black place and a white place, then somewhere, black and white are right next to each other. If you can change an election where Epsilon wins to an election where Delta wins by changing a whole bunch of votes, then for some set of votes, you can do it by changing just one vote.
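That argument can be illustrated with a plain majority vote: flipping voters from B to A one at a time must, at some point, flip the winner, and whoever's flip does it is the pivotal voter for that profile. A toy sketch (ten voters and tie-breaking in favour of B are my arbitrary assumptions):

```python
# Ten voters, simple majority rule; ties arbitrarily broken for B.
def winner(prefs):
    return "A" if prefs.count("A") > prefs.count("B") else "B"

prefs = ["B"] * 10  # the all-B end of the spectrum
for i in range(10):
    before = winner(prefs)
    prefs[i] = "A"  # flip one voter at a time toward all-A
    if winner(prefs) != before:
        print(f"voter {i} was pivotal: {before} -> {winner(prefs)}")
# prints: voter 5 was pivotal: B -> A
```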