interesting. altho what we really care about is social utility efficiency (voter satisfaction efficiency), and it's famously hard to model that in multi-winner systems. SPAV is already very nice tho for being simple, and defaulting to the excellent approval voting method in the single-winner case.
i would advocate using spav with the 1/(2m+1) rule instead of 1/(m+1) because the standard formula (jefferson/d'hondt) systematically biases results in favor of larger coalitions. the webster method (1/(2m+1)) is unbiased, ensuring that a group with x% of the vote receives as close to x% of the seats as possible, regardless of whether they are a large majority or a small minority. strictly speaking, webster minimizes the total error of representation via standard rounding, whereas jefferson effectively rounds down to the detriment of smaller factions.
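The large-coalition bias is easiest to see in the party-list analogues of the two divisor rules. Below is a minimal highest-averages allocator (a sketch, not from the post; the function name and example vote totals are mine): with divisors m+1 (Jefferson/D'Hondt) a 50% party takes two of three seats, while with divisors 2m+1 (Webster/Sainte-Laguë) the seats split one apiece.

```python
def highest_averages(votes, n_seats, divisor):
    """Allocate seats by a highest-averages divisor method (sketch).

    votes:   list of party vote totals
    n_seats: number of seats to fill
    divisor: function m -> divisor for a party already holding m seats,
             e.g. Jefferson/D'Hondt uses m+1, Webster/Sainte-Lague uses 2m+1
    """
    seats = [0] * len(votes)
    for _ in range(n_seats):
        # each party's quotient; the highest quotient wins the next seat
        quotients = [v / divisor(s) for v, s in zip(votes, seats)]
        seats[quotients.index(max(quotients))] += 1
    return seats

votes = [50, 30, 20]
print(highest_averages(votes, 3, lambda m: m + 1))      # Jefferson -> [2, 1, 0]
print(highest_averages(votes, 3, lambda m: 2 * m + 1))  # Webster  -> [1, 1, 1]
```

The same divisor sequences become the voter reweighting factors in SPAV, which is why swapping 1/(m+1) for 1/(2m+1) there removes the same bias.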
Low-ish effort post just sharing something I found fun. No AI-written text outside the figures.
I was recently nerd-sniped by proportional representation voting, so while playing around with claude code I had it build a simulation.
Hot take:
Other key points:
The voter model:
The metrics:
The contenders:
Just averaging everything into two numbers:
Why I think PAV is the tentative winner:
Sequential Proportional Approval Voting. At each step, add to the list of winners the candidate who most increases the total 'satisfaction' of all voters, where if I already approve of N winners, another winner I approve of gives me only 1/(N+1) units of 'satisfaction.' Repeat until you have enough winners.
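The procedure above can be sketched in a few lines of numpy (my own formulation, assuming boolean approval ballots; the post's actual simulation code may differ):

```python
import numpy as np

def spav(approvals, n_seats):
    """Sequential Proportional Approval Voting (sketch).

    approvals: (n_voters, n_candidates) boolean matrix;
               approvals[v, c] is True if voter v approves candidate c.
    Returns winning candidate indices in the order elected.
    """
    approvals = np.asarray(approvals, dtype=float)
    n_voters, _ = approvals.shape
    elected_count = np.zeros(n_voters)  # approved winners each voter has so far
    winners = []
    for _ in range(n_seats):
        # marginal satisfaction from one more approved winner: 1/(N+1)
        weights = 1.0 / (elected_count + 1.0)
        scores = weights @ approvals   # total marginal satisfaction per candidate
        scores[winners] = -np.inf      # already-seated candidates can't win again
        best = int(np.argmax(scores))
        winners.append(best)
        elected_count += approvals[:, best]
    return winners
```

For example, with 6 voters approving candidates {0, 1} and 4 voters approving {2}, filling 2 seats elects candidate 0 first, after which the majority's ballots are down-weighted to 1/2 and the minority's candidate 2 beats candidate 1, 4 to 3.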
(not the party-based methods or the random baseline)
In fact, STV lands very slightly beyond the Pareto frontier that PAV traces out as you vary voter strategy. The closest point in the sweep I ran to check this had an average distance to the nearest winner of 0.170 (STV) vs. 0.178 (PAV), and an average distance to the median winner of 0.808 (STV) vs. 0.807 (PAV) (in arbitrary units of the simulated voter preference space).
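For concreteness, here is one plausible reading of the two metrics (my guesses at the post's definitions, which may differ in detail; in particular I take "median winner" to mean the coordinate-wise median of the winners' positions):

```python
import numpy as np

def representation_metrics(voters, winners):
    """Two summary metrics for a multi-winner outcome (assumed definitions).

    voters:  (n_voters, dim) voter positions in preference space
    winners: (n_seats, dim) positions of the elected candidates
    Returns (mean distance to each voter's nearest winner,
             mean distance to the coordinate-wise median of the winners).
    """
    voters = np.asarray(voters, dtype=float)
    winners = np.asarray(winners, dtype=float)
    # pairwise voter-to-winner distances, shape (n_voters, n_seats)
    d = np.linalg.norm(voters[:, None, :] - winners[None, :, :], axis=-1)
    mean_nearest = d.min(axis=1).mean()
    median_winner = np.median(winners, axis=0)
    mean_to_median = np.linalg.norm(voters - median_winner, axis=1).mean()
    return mean_nearest, mean_to_median
```

Roughly, the first metric rewards having *some* winner near every voter (diversity of representation), while the second rewards keeping the winning slate's center near the electorate's center, which is why a method can trade one off against the other.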