Jameson Quinn

[not ongoing] Thoughts on Proportional voting methods

*V0.7.3 Still tweaking terminology. Now, Vote Power Fairness, Average Voter Choice, Average Voter Effectiveness. Finished (at least first draft) of closed list/Israel analysis.*

[not ongoing] Thoughts on Proportional voting methods

*V 0.7.2: A terminology change. New terms: Retroactive Power, Effective Voting Equality, Effective Choice, Average Voter Effectiveness. (The term "effective" is a nod to* *Catherine Helen Spence). The math is the same except for some ultimately-inconsequential changes in when you subtract from 1. Also, started to add a closed list example from Israel; not done yet.*

[not ongoing] Thoughts on Proportional voting methods

*V 0.7.1: added a digression on dimensionality, in italics, to the "*Measuring "Representation quality", separate from power*" section. Finished converting the existing examples from RF to VW.*

[not ongoing] Thoughts on Proportional voting methods

*V 0.7.0: Switched from "Representational Fairness" to the more-interpretable "Vote Wastage". Wrote enough so that it's possible to understand what I mean by VW, but this still needs revision for clarity/convincingness. Also pending, change my calculations for specific methods from RF to VW.*

[not ongoing] Thoughts on Proportional voting methods

*I am rewriting the overall "*XXX: a xxx proportionality metric*" section because I've thought of a more-interpretable metric. So, where it used to be "*Representational fairness: an overall proportionality metric*", now it will be "*Vote wastage: a combined proportionality metric*". Here's the old version, before I erase it:*

Since we've structured RQ_d as an "efficiency" — 100% at best, 0% at worst — we can define each voter's "quality-weighted voter power" (QWVP) as the sum, over winning candidates, of that voter's responsibility for electing the candidate, times their RQ_1 for that candidate. Ideally, QWVP would be 1 for every voter; so we can define the overall "quality-weighted proportionality" (QWP) of an outcome as the average of the squared differences between each voter's QWVP and 1, shifted and scaled so that no difference at all gives a QWP of 100% and uniform zeros give a QWP of 0. (Note that in principle, a dictatorship could score substantially less than 0, depending on the number of voters.)

(To do: better notation and LaTeX)
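In the meantime, the definition above can be sketched in code. The responsibility and RQ_1 numbers here are hypothetical placeholders, not drawn from any worked example in this document; the point is just the shape of the formula:

```python
# Sketch of QWP as defined above:
#   QWVP_v = sum over winners w of responsibility[v][w] * RQ1[v][w]
#   QWP    = 1 - mean over voters v of (QWVP_v - 1)^2
# so all-QWVPs-equal-to-1 gives 100% and uniform zeros give 0.

def qwp(responsibility, rq1):
    """responsibility[v][w] and rq1[v][w] are parallel per-voter,
    per-winner tables; returns the overall QWP of the outcome."""
    sq_diffs = []
    for resp_row, rq_row in zip(responsibility, rq1):
        qwvp = sum(r * q for r, q in zip(resp_row, rq_row))
        sq_diffs.append((qwvp - 1) ** 2)
    return 1 - sum(sq_diffs) / len(sq_diffs)

# Toy single-winner outcome: two voters share responsibility for the
# winner (with perfect RQ_1), a third voter's ballot is wasted.
print(qwp([[1.5], [1.5], [0.0]],   # responsibility for the one winner
          [[1.0], [1.0], [0.0]]))  # RQ_1 for that winner; prints 0.5
```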

Since realistic voting methods will usually have at least 1 Droop quota of wasted votes (or, in the case of Hare-quota-based methods, just over half a Hare quota of double-powered votes and just under half of wasted votes; which amounts to much the same thing in QWP terms), the highest QWP you could reasonably expect for a voting method would be S/(S+1).
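To see where the S/(S+1) figure comes from, take the simplifying assumption that exactly one Droop quota of voters is wasted (QWVP of 0) and every other voter has a QWVP of exactly 1:

```python
# Worked check of the S/(S+1) ceiling: with S seats, one Droop quota
# (a 1/(S+1) fraction of the voters) is wasted, contributing a squared
# difference of 1 each; everyone else contributes 0. The mean squared
# difference is then 1/(S+1), so QWP = 1 - 1/(S+1) = S/(S+1).

def max_qwp(seats, voters_per_quota=1000):
    n = voters_per_quota * (seats + 1)          # total voters
    wasted = voters_per_quota                   # one Droop quota, QWVP = 0
    mean_sq_diff = (wasted * (0 - 1) ** 2) / n  # others contribute nothing
    return 1 - mean_sq_diff

for s in (1, 2, 9):
    print(s, max_qwp(s), s / (s + 1))  # the last two columns agree
```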

(show the math for the QWP of the IRV example above. Key point: the D>C>A>B voters have zero responsibility for electing A, so all they do is lower the average RQ of the B>C>A>D and C>B>A>D voters)

Note that this QWP metric, in combining the ideas of overall equality and representation quality, no longer perfectly optimizes either of those aspects by itself. That is to say, in some cases it will push methods to sacrifice proportionality in search of better representation, in a way that would tend to hurt the MSE from God's perspective. I think those cases are likely to be rare enough, especially for voting methods that weren't specifically designed to optimize this metric, that I'm OK with this slight misalignment. In other words: I think the true ideal quality ordering would be closer to a lexical sort with priority on proportionality ("optimize for proportionality, then optimize RQ only insofar as it doesn't harm proportionality"); but I think most seriously-proposed, practically-feasible voting methods are far enough from the Pareto frontier that "optimize the product of the two" is fine as an approximation of that ideal goal.

One more note: in passing, this rigorous framework for an overarching proportionality metric also helps define the simple concept of a "wasted vote": any vote with zero responsibility for electing any winner. Although "wasted votes" are already commonly discussed in the political science literature, I believe this is actually the first time the idea has been given a general definition, as opposed to ad-hoc definitions for each voting method.

[not ongoing] Thoughts on Proportional voting methods

*V 0.6.0: Coined the term "*Representational Fairness*" for my metric. Did a worked example of* Single transferrable vote (STV), *and began to discuss the example. Bumping version because I'm now beginning to actually discuss concrete methods instead of just abstract metrics.*

[not ongoing] Thoughts on Proportional voting methods

*V 0.5.5: wrote a key paragraph about* NESS: similar outcomes *just before* Pascal's Other Wager? (The Problem of Points). *Added the obvious normalizing constant so that average voter power is 1. Analyzed some simple plurality cases in* Retrospective voting power in single-winner plurality.

[not ongoing] Thoughts on Proportional voting methods

*V 0.5.4: Meaningful rewrite of "*Shorter "solution" statement*", which focuses not on the power to elect an individual, but the power to elect some member of a set, of whom only one won in reality.*

[AN #109]: Teaching neural nets to generalize the way humans would

Finding "Z-best" is not the same as finding the posterior over Z, and in fact differs systematically. In particular, because you're not being a real Bayesian, you're not getting the advantage of the Bayesian Occam's Razor, so you'll systematically tend to get lower-entropy-than-optimal (aka more-complex-than-optimal, overfitted) Zs. Adding an entropy-based loss term might help — but then, I'd expect that H already includes entropy-based loss, so this risks double-counting.
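A toy illustration of that systematic difference (my own example, not one from the newsletter): a point estimate like "Z-best" (here, the MAP) can be more extreme and lower-entropy than what the full posterior implies. Estimating a coin's heads-probability after seeing 3 heads in 3 flips, with a uniform Beta(1, 1) prior:

```python
# Beta-Bernoulli toy model: uniform Beta(1, 1) prior, 3 heads in 3 flips.
heads, flips = 3, 3
a, b = 1 + heads, 1 + (flips - heads)   # Beta posterior parameters

map_estimate = (a - 1) / (a + b - 2)    # posterior mode ("Z-best"): 1.0
posterior_mean = a / (a + b)            # full-posterior prediction: 0.8

print(map_estimate, posterior_mean)
```

The MAP says "always heads" (a zero-entropy, overfitted prediction), while the full posterior hedges; optimizing a point estimate forfeits exactly the averaging that produces the Bayesian Occam's razor.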

The above critique is specific and nitpicky. Separately from that, this whole schema feels intuitively wrong to me. I think there must be ways to do the math that favor a low-entropy, causally-realistic, quasi-symbolic likelihood-like function, which can be combined with a predictive, uninterpretably-neural learned Z to give a posterior that is better at intuitive leaps than the former but better at generalizing than the latter. All of this would be intrinsic, and human alignment would be a separate problem. Intuitively, it seems to me that trying to do human alignment and generalizability using the same trick is the wrong approach.

You seem to be comparing Arrow's theorem to Lord Vetinari, implying that both are undisputed sovereigns? If so, I disagree. The part you left out about Arrow's theorem — that it only applies to ranked voting methods (not "systems") — means that its dominion is far more limited than that of the Gibbard-Satterthwaite theorem.

As for the RL-voting paper you cite: thanks, that's interesting. Trying to automate voting strategy is hard; since most voters, most of the time, are not pivotal, the direct strategic signal for a learning agent is weak. To deal with this, you have to give the agents some ability, implicit or explicit, to reason about counterfactuals. Reasoning about counterfactuals requires making assumptions, or having information, about the generative model the counterfactuals are drawn from; and so that model is super-important. And frankly, I think the model used in the paper bears very little relationship to any political reality I know of. I've never seen a group of voters who believe "I would love it if any two of these three laws passed, but I would hate it if all three of them passed or none of them passed" for any set of laws that is seriously proposed and argued for.