i'm trying to understand the point about the vitamin d example. if a futarchy market is set up to predict whether "increasing vitamin d consumption" will "increase average lifespan," wouldn't participants who believe that wealth (or another confounder) is the actual causal factor, and not vitamin d itself, be incentivized to bet against the vitamin d policy leading to increased lifespan?
the market's incentive structure seems designed to filter for beliefs about the causal efficacy of the proposed intervention, not merely correlations. if people believe wealth is the cause, they wouldn't expect a vitamin d policy to succeed in raising lifespans, and would bet accordingly. it appears there might be a slight confusion between correlation in observational data and the causal impact of a direct intervention as assessed by a prediction market.
ethics is just the set of heuristics genes use to get themselves copied. we're all trying to maximize our own expected utility, but since none of us wants to let anyone else become a dictator, there's a game-theoretic equilibrium where we agree to rules like "murder is illegal": even though the rule stops me from murdering you, it also stops you from murdering me. our rational goal is to shrink the circle of people included in this bargain to the smallest group that still includes ourselves, which is why we wouldn't sacrifice our own interests for future humans.
this evolutionary view of ethics also avoids classic paradoxes like the repugnant conclusion.
often at this point the philosophy student will interject something about the is-ought fallacy. but that's not really a fallacy, since there's no such thing as an "ought"; there are just preferences.
extensive computer simulations show that approval voting works extremely well, especially when voters are tactical (i.e., when they game the system). and it has worked phenomenally well so far in fargo and st. louis.
https://rpubs.com/Jameson-Quinn/vse6
Utilitarianism is certainly correct. You can observe this by watching people make decisions under uncertainty. Preferences aren't merely ordinal.
But yes, doing the math has its own utility cost, so many decisions are better off handled with approximations. This is how you get things like the Allais paradox.
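To make the Allais paradox concrete, here is a minimal sketch using the standard formulation of the gambles (payoffs in millions of dollars). Most people prefer 1A over 1B but also 2B over 2A, even though the two pairs differ only by a common 89% chance of $1M versus $0 added to both options, so no single expected-utility function can produce both choices:

```python
def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

# The classic Allais gambles (payoffs in $ millions).
gamble_1a = [(1.00, 1)]                        # $1M for certain
gamble_1b = [(0.10, 5), (0.89, 1), (0.01, 0)]  # small chance of nothing
gamble_2a = [(0.11, 1), (0.89, 0)]
gamble_2b = [(0.10, 5), (0.90, 0)]

for name, g in [("1A", gamble_1a), ("1B", gamble_1b),
                ("2A", gamble_2a), ("2B", gamble_2b)]:
    print(name, expected_value(g))
# 1A = 1.00, 1B = 1.39, 2A = 0.11, 2B = 0.50:
# the popular 1A-and-2B pattern picks the lower expected value in
# the first pair and the higher one in the second.
```

The point is that the common choice pattern is a cheap heuristic (certainty feels special), not a consistent valuation of the lotteries.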
I'm not sure what "moral" means here. The goal of a gene is to copy itself. Ethics isn't about altruism.
Rational utilitarianism means maximizing your own expected utility. (Technically from the gene's perspective, which is why caring for your children is still selfish.) Social contracts (voting, laws against killing, etc.) are just the game-theoretic result of everyone acting selfishly.
It's about selfishness not altruism.
Your point about minimum wage is exactly the point I made about price controls more generally. Bravo.
https://medium.com/effective-economics/the-problem-with-price-controls-14b4ee116bf7
This is just a variation of asset voting. I like it too. I could see an argument that you should start by redistributing the surplus votes from candidates who are already guaranteed a seat, because that could change the elimination order. There are a bunch of different heuristics you could use.
It's not at all clear this is a problem. If all the winners are the ones closest to the centroid, the group's overall ideological center will be statistically about the same whether or not a proportional method is used. You might expect the lack of diverse perspectives to cause problems, but a body of centrists can solicit expertise from multiple perspectives, which makes sense given that they are vying for every vote. Not to mention that a body of centrists will tend to get along a lot better than quarreling extremists like American leftists and Trumpists.
But as was noted, there are things like sequential proportional approval voting if you really want PR.
Quadratic Voting is a very bad idea. Score Voting (aka Range Voting) or STAR Voting are better.
interesting. altho what we really care about is social utility efficiency (voter satisfaction efficiency), and it's famously hard to model that in multi-winner systems. SPAV is already very nice tho for being simple, and defaulting to the excellent approval voting method in the single-winner case.
i would advocate using spav with the 1/(2m+1) rule instead of 1/(m+1), where m is the number of a ballot's approved candidates who have already won seats, because the standard formula (jefferson/d'hondt) systematically biases results in favor of larger coalitions. the webster method (1/(2m+1)) is unbiased, ensuring that a group with x% of the vote receives as close to x% of the seats as possible, regardless of whether it is a large majority or a small minority. strictly speaking, webster minimizes the total error of representation via standard rounding, whereas jefferson effectively rounds down, to the detriment of smaller factions.
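a minimal sketch of spav with both reweighting rules (the ballot data is made up for illustration; ties are broken alphabetically just to keep the example deterministic). with 50 voters approving {A1, A2}, 30 approving {B1}, and 20 approving {C1}, jefferson gives the large faction a second seat while webster gives that seat to the small faction, mirroring the d'hondt vs sainte-laguë difference:

```python
def spav(ballots, seats, webster=False):
    """Sequential proportional approval voting.

    ballots: list of sets of approved candidates.
    Each round elects the candidate with the highest weighted approval.
    A ballot's weight is 1/(m+1) (jefferson) or 1/(2m+1) (webster),
    where m is how many of that ballot's approvals are already seated.
    """
    winners = []
    for _ in range(seats):
        scores = {}
        for ballot in ballots:
            m = len(ballot & set(winners))  # seated approvals on this ballot
            weight = 1 / (2 * m + 1) if webster else 1 / (m + 1)
            for c in ballot - set(winners):
                scores[c] = scores.get(c, 0) + weight
        if not scores:
            break
        # break ties alphabetically so the example is deterministic
        winners.append(max(sorted(scores), key=scores.get))
    return winners

ballots = [{"A1", "A2"}] * 50 + [{"B1"}] * 30 + [{"C1"}] * 20
print(spav(ballots, 3))                # ['A1', 'B1', 'A2']  (jefferson)
print(spav(ballots, 3, webster=True))  # ['A1', 'B1', 'C1']  (webster)
```

in the third round the A ballots are deweighted to 1/2 under jefferson (25 weighted votes for A2, beating C1's 20) but to 1/3 under webster (about 16.7, losing to C1), which is exactly the large-coalition bias described above.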