abramdemski

Comments

Deconstructing 321 Voting

Nice find!

From the wikipedia article:

Each voter rank-orders all of the candidates on their ballot. If at any time one candidate is ranked first (among non-eliminated candidates) by an absolute majority of the voters, that candidate wins. Otherwise, the candidate ranked last (again among non-eliminated candidates) by the largest number of (or a plurality of) voters is eliminated.

The first condition, "if at any time one candidate is ranked first by an absolute majority", is different from ED -- I only included the second clause. I'm guessing Coombs' method is probably an improvement in some sense, although I haven't thought through any details yet.

But wikipedia also says my variant has been discussed in the literature:

In some sources, the elimination proceeds regardless of whether any candidate is ranked first by a majority of voters, and the last candidate to be eliminated is the winner.[2] This variant of the method can result in a different winner than the former one (unlike in instant-runoff voting, where checking to see if any candidate is ranked first by a majority of voters is only a shortcut that does not affect the outcome).
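
For concreteness, a quick sketch of the two variants (a toy implementation, assuming complete rankings and arbitrary tie-breaking):

```python
from collections import Counter

def coombs(ballots, check_majority=True):
    """ballots: list of rankings, each listing all candidates from most to
    least preferred.  check_majority=True is the standard Coombs rule (stop as
    soon as some candidate is ranked first by an absolute majority);
    check_majority=False just keeps eliminating until one candidate remains,
    as in the variant quoted above.  Ties are broken arbitrarily."""
    remaining = set(ballots[0])
    while len(remaining) > 1:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, top_count = firsts.most_common(1)[0]
        if check_majority and top_count > len(ballots) / 2:
            return top
        lasts = Counter(next(c for c in reversed(b) if c in remaining) for b in ballots)
        remaining.discard(lasts.most_common(1)[0][0])
    return remaining.pop()
```

The two runs diverge precisely when the early-majority winner would have been eliminated in some later round, which is why the variants can pick different winners.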

Thanks for the pointer!

Deconstructing 321 Voting

OK, so approval and score don't exhibit the trade-off, but other methods do. So my question is whether there's a real trade-off -- is this just an artifact of poor voting methods, or is it something that quality voting methods have to deal with?

If the trade-off were only ever exhibited by voting methods that are worse than score voting, then it would in some sense not be a real trade-off.

But another point to recognize is: under honest voting, score voting with high granularity (a big range of possible scores) is literally as good as you can possibly get, at least in VSE terms. So, any advantage over score has to be in dealing with strategic voting (IE incentivizing honest voting, or making outcomes good even under strategy).
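
One way to see it: if each honest ballot is the voter's utility rescaled onto the score range, then summing ballots is summing rescaled utilities, so the score winner maximizes that (normalized) utilitarian total, and a coarser score range can only lose information. A toy sketch (hypothetical code, taking utilities as a voters-by-candidates array):

```python
import numpy as np

def honest_score_winner(utilities, levels=None):
    """utilities: (n_voters, n_candidates) array.  Each honest ballot rescales
    the voter's utilities so their worst candidate scores 0 and their best
    scores the maximum.  With unlimited granularity (levels=None) the winner
    maximizes the sum of rescaled utilities; a coarse scale (e.g. levels=6 for
    0-5 scores) rounds the ballots and can lose that guarantee."""
    u = np.asarray(utilities, dtype=float)
    lo = u.min(axis=1, keepdims=True)
    hi = u.max(axis=1, keepdims=True)
    ballots = (u - lo) / np.where(hi - lo == 0, 1, hi - lo)
    if levels is not None:
        ballots = np.round(ballots * (levels - 1))  # integer scores 0..levels-1
    return int(np.argmax(ballots.sum(axis=0)))
```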

Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare

Does this have a doomsday-argument-like implication that we're close to the end of the Earth's livable span, because if life takes a long time to evolve observers then it's overwhelmingly probable that consciousness arises shortly before the end of the livable span?
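
(For reference, the usual hard-steps calculation behind this worry: if each of n critical steps has an expected waiting time much longer than the habitable window T, then conditional on all n steps finishing within the window, the completion times are distributed roughly like the order statistics of n uniform draws on [0, T]. So

$$\mathbb{E}\big[t_{(n)} \,\big|\, \text{all $n$ steps complete by } T\big] \approx \frac{n}{n+1}\,T, \qquad \mathbb{E}[\text{time remaining}] \approx \frac{T}{n+1},$$

i.e. with several hard steps, observers typically find themselves in roughly the last 1/(n+1) fraction of their planet's habitable span -- which is the doomsday-flavored implication being asked about.)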

Transparent Technologies

I recently finished the book The Dictator's Handbook. One thing that surprised me at the end: after many chapters examining (dys)functions of governments, the book suddenly applied the same concepts to corporations.

The main thrust of the book is that we can put countries on a spectrum from dictatorships to democracies by examining the number of key supporters needed for the government to stay in place. Democracies predictably deliver much more value to citizens. It's a classic story of incentives.

If we look at corporate incentives, most (but not all) corporations have a small number of major shareholders. They do a pretty good job of delivering value to those shareholders.

If we look at the corporations that are more customer-owned, they do a better job of delivering value to customers. (Unlike many of the other claims in the book, this claim was not backed up with a bunch of data, only with a case study AKA anecdote. But it makes sense!)

Customer ownership is implemented by EG setting limits on how many shares any one individual can own, so that you can't get a bunch of shares accumulating in a few hands.

This was an aha moment for me: it's not economically inevitable that, as you put it, 

The Invisible Hand compels companies to extract maximum profit from whatever leverage they have.

A prepackaged commercial product designed to make things easier for consumers tends to contain anti-features.

Instead, it's a consequence of typical corporate ownership structure.

Pontor's Shortform

Seconded. Also Skyrms' "Evolution of the Social Contract".

Is Success the Enemy of Freedom? (Full)

The contrast between this post and lsusr's rejoinder reminds me of the Objectivist idea that success within the system always corrupts (to put it in LW terminology, you can't feed Moloch without feeding Moloch), and of Moral Mazes:

Its thesis is that this is the result of a vicious cycle arising from competitive pressures among those competing for their own organizational advancement. Over time, those who focus more on and more value such competitions win them, gain power and further spread their values, unless they are actively and continuously opposed.

Once things get bad in an organization they tend to only get worse, but things in general get better because such organizations then decay and are replaced by new ones. Unfortunately, our society now slows or prevents that process, with these same organizations and their values increasingly running the show. 

Investment and flexibility become impossible. Even appearing to care about anything except the competition itself costs you your allies. Thus things inevitably decay and then collapse, flexibility returns, cycle repeats.

Involvement with such patterns is far more destructive to humans than is commonly known.

Is Success the Enemy of Freedom? (Full)

I don't think that's all there is to it. Big firms have R&D. (Indeed, I have been told that big firms aim to spend specific percentages on R&D, because spending too much or too little looks bad and would make their stock go down, or something like that.)

I think big firms get eaten by Moloch in some fashion (lost purposes turn everything fake?), whereas startups have nicely aligned incentives (because it's worth nothing unless it succeeds, so weird internal power dynamics are not that valuable to fight over compared to fighting for the common goal).

Success Buys Freedom

I'm enjoying the counter-alkjash series! In contrast to my comment on the previous installment, I think this installment has a structured model with gears, and told me something about the world. (Still not much mental tech to counter failure modes.)

Thoughts on Voting Methods

OK, I basically don't like the voting system.

Scott pointed out to me that the Condorcet criterion makes more sense if we include stochastic outcomes. In the cases where the Condorcet winner is the utilitarian-worst candidate, a mixture of other candidates will win over the Condorcet winner. (So that candidate won't really be the Condorcet winner, if we include stochastic outcomes as "candidates".)

But that's not what's going on here, because this technique always selects a Condorcet winner, if there is one.

So (apparently) it's not including stochastic outcomes in the right way.

We can do better by modifying the game:

We specify a symmetric two-player zero-sum game where each player selects a distribution over candidates. You score points based on how many more votes your proposed distribution would get against the other player's. The game's Nash equilibrium (a distribution over distributions over candidates) is the output distribution.
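
A rough computational sketch of that modified game (hypothetical code: it assumes voters are given as utility functions over candidates, that a voter "votes for" whichever lottery gives them higher expected utility, and it approximates the continuous strategy space with a grid of lotteries, solving the resulting finite zero-sum game by linear programming):

```python
import itertools

import numpy as np
from scipy.optimize import linprog

def lifted_lottery_equilibrium(utilities, grid=4):
    """utilities: (n_voters, n_candidates) array of voter utilities.
    Strategies are lotteries over candidates, discretized to weights that are
    multiples of 1/grid.  The payoff of lottery i against lottery j is the
    margin of voters whose expected utility is strictly higher under i.
    Returns (lotteries, mixture): the grid lotteries and the equilibrium
    mixture over them -- a distribution over distributions over candidates."""
    u = np.asarray(utilities, dtype=float)
    n_cand = u.shape[1]
    lotteries = np.array([np.array(w) / grid
                          for w in itertools.product(range(grid + 1), repeat=n_cand)
                          if sum(w) == grid])
    ev = u @ lotteries.T                                   # (n_voters, n_lotteries)
    margins = np.sign(ev[:, :, None] - ev[:, None, :]).sum(axis=0)
    # Maximin LP for the symmetric zero-sum matrix game: maximize v subject to
    # (margins.T @ x)[j] >= v for every opposing lottery j, x a probability vector.
    m = len(lotteries)
    c = np.concatenate([np.zeros(m), [-1.0]])              # minimize -v
    A_ub = np.hstack([-margins.T, np.ones((m, 1))])
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return lotteries, res.x[:m]
```

The returned mixture is only an approximation (because of the discretization), but it is the right kind of object: a distribution over candidate-distributions.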

However, I'm a bit suspicious of this, since I didn't especially like the basic proposal and this is the same thing one level up.

Since this is the unique voting system satisfying certain consistency conditions, I must disagree with at least one of those conditions, although I'm not sure which.
