The current karma system tracks one number for each user. This number is used both as an indicator of "is this user's writing worth paying attention to" and as a voting weight (using the new powers-of-five system). I think this conflation is hugely broken, and splitting the two roles apart could make the karma system more effective and, in some sense, simpler.

One important way in which the current system is broken is that it forces a painful tradeoff between two goals: easily delegating moderation power (which should ideally have a more local scope, and be used up rather than taken away), and conservatively signaling the importance of contributions (which should aggregate both positive and negative votes over long time horizons).

I expect the idea to not be taken seriously enough, but meh, sometimes I just feel like trying anyway. Here are some (tentative) rules for how such a system could work (a rough code sketch follows the list):

  • "voting" is a transaction that takes some positive amount X of dynamic karma from one user, and to another user gives:
    • upvote: X of static karma, as well as αX of dynamic karma (for some fixed α < 1; α = 0.5 in the example below)
    • downvote: -X amount of static karma, no change in dynamic karma
  • every user's dynamic and static karma is the lifetime sum of what they received by being voted on, starting from zero
    • however, moderators receive a constant stream of dynamic karma for free (e.g. 50 per day)
  • static karma is publicly visible, and dynamic karma is private
  • how much dynamic karma to use for a single vote is left for the voter to decide
    • the default (one click) could be 1% of currently owned dynamic karma (or 1 if a user has less than 100 dynamic karma)
    • it should probably be capped at some fixed value to reduce abuse (e.g. max 10 points per post and 5 per comment from the same account)
  • if a moderator is not using their dynamic karma, then above some threshold (e.g. 500) their "free" dynamic karma should instead be distributed to the community as "dividends", paid proportionally to (positive) stakes of static karma - this prevents stagnation from having too little karma in circulation
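To make the transaction concrete, here is a minimal Python sketch of the rules above. It assumes α = 0.5 and omits the per-post/per-comment caps; all names (`User`, `vote`, `daily_tick`) are hypothetical illustrations, not an existing implementation.

```python
ALPHA = 0.5             # fraction of spent dynamic karma an upvote passes on
MOD_DAILY_STREAM = 50   # free dynamic karma per day for moderators
DIVIDEND_CAP = 500      # unused moderator karma above this is paid out

class User:
    def __init__(self, is_moderator=False):
        self.static = 0.0    # public, lifetime signal of contribution quality
        self.dynamic = 0.0   # private, spendable moderation power
        self.is_moderator = is_moderator

    def default_vote(self):
        # One click spends 1% of owned dynamic karma, but at least 1 point.
        return max(1.0, 0.01 * self.dynamic)

def vote(voter, target, amount, up=True):
    """Move `amount` of the voter's dynamic karma into the target's karma."""
    if amount <= 0 or amount > voter.dynamic:
        raise ValueError("vote must be positive and within the voter's budget")
    voter.dynamic -= amount                 # voting power is used up, not kept
    if up:
        target.static += amount             # full amount lands as public signal
        target.dynamic += ALPHA * amount    # only a fraction stays spendable
    else:
        target.static -= amount             # downvote: negative public signal,
                                            # no dynamic karma passed on

def daily_tick(users):
    """Moderator stream plus the anti-stagnation dividend."""
    surplus = 0.0
    for u in users:
        if u.is_moderator:
            u.dynamic += MOD_DAILY_STREAM
            if u.dynamic > DIVIDEND_CAP:    # unused power spills over
                surplus += u.dynamic - DIVIDEND_CAP
                u.dynamic = DIVIDEND_CAP
    total_stake = sum(max(u.static, 0.0) for u in users)
    if total_stake > 0:                     # dividends proportional to
        for u in users:                     # positive static stakes
            u.dynamic += surplus * max(u.static, 0.0) / total_stake
```

Note that the downvote branch burns the voter's dynamic karma without passing any on, which is what makes downvoting a costly signal rather than a free one.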

A quick example for α = 0.5 (replayed in code after the list):

  • M is moderator, A and B are two new users on the site
  • M upvotes A's post (+5):
    • now A has +5 static (publicly visible karma) and +2.5 dynamic karma in their private account
  • A likes B's comment, and upvotes it twice:
    • now A has +5 static and +0.5 dynamic
    • B has +2 static and +1 dynamic
  • B upvotes A's reply:
    • A has +6 static, +1 dynamic
    • B has +2 static, 0 dynamic
  • actually it turns out both A and B were sock puppets of a spammer, who tries to shuffle more upvotes between the two accounts. A has one point of dynamic karma left to upvote B:
    • A has +6 static, 0 dynamic
    • B has +3 static, 0.5 dynamic
  • that's it, there's nothing more the spammer can do until someone else upvotes them (bringing in external karma)
  • indeed, no individual (or clique) can increase their static karma more than 100% above the amount they were trusted with by other users, regardless of how many accounts they open (recycling an initial vote of X yields at most X + αX + α²X + ... = X/(1−α) static karma in total, which is 2X for α = 0.5)
  • also, as a price for trying to cheat, they lose moderation power, which seems fair game
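Replaying the example with the hypothetical `User` and `vote` sketch from the rules section:

```python
M = User(is_moderator=True)
M.dynamic = 50                 # moderator with some stream karma accumulated
A, B = User(), User()          # two new accounts (the sock puppets)

vote(M, A, 5)                  # M upvotes A's post (+5)
assert (A.static, A.dynamic) == (5.0, 2.5)

vote(A, B, 1); vote(A, B, 1)   # A upvotes B's comment twice
assert (A.static, A.dynamic) == (5.0, 0.5)
assert (B.static, B.dynamic) == (2.0, 1.0)

vote(B, A, 1)                  # B upvotes A's reply
assert (A.static, A.dynamic) == (6.0, 1.0)
assert (B.static, B.dynamic) == (2.0, 0.0)

vote(A, B, 1)                  # the spammer's last transferable point
assert (A.static, A.dynamic) == (6.0, 0.0)
assert (B.static, B.dynamic) == (3.0, 0.5)
# The clique is now stuck: with ALPHA = 0.5 the static karma it can mint
# from M's vote is bounded by the geometric series 5 + 2.5 + 1.25 + ... = 10,
# twice the amount M originally entrusted to it.
```

Any further `vote` call between A and B raises `ValueError`, which is exactly the "nothing more the spammer can do" step above.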

Appreciate the thought here. I don't see any of this as intrinsically objectionable, but I'm still not really sure what the root issues you're trying to resolve are. What are examples of cases where you're worried about the current system specifically failing, or areas where we just don't have anything even trying to handle a particular use-case?

In the previous thread, you mentioned "voting weight rewards people who invest time on the site, which may not be the thing you want", and I'm not sure if this was meant to help with that case or not.

I'm still not really sure what the root issues you're trying to resolve are. What are examples of cases where you're worried about the current system specifically failing, or areas where we just don't have anything even trying to handle a particular use-case?

Sure, I can list some examples, but first note that while I agree examples are useful, focusing on them too much is, in general, not a good way to think about designing systems.

A good design can preempt issues that you would never have predicted could happen; a bad design will screw you up in similarly unpredictable ways. What you want to look out for is designs which reflect the computational structure of the problem you are trying to solve. If you cannot think about it in these terms, I don't think you'll be persuaded by any specific list of benefits of the proposed system, such as:

  • multiple voting is natural, safe, and easy to implement
  • reduced room for downvote abuse, spamming and nepotism (see example in the post)
  • it's possible to change how static karma translates into voting power without disrupting the ecosystem (because the calculations you change affect the marginal voting power, not the total voting power)
  • users can choose their voting strategy (e.g. many low-impact votes or few high-impact ones) without being incentivized to waste time (in the current system, more clicks = more impact)
  • moderation power is quantitative, allowing things like partial trust, trial periods and temporary stand-ins without bad feelings afterward (instead of "become moderator" and then "unbecome moderator" we have "I trust you enough to give you 1000 moderation power, no strings attached - let's see what you do with it")
  • each user is incentivized to balance downvotes against upvotes (in the current system, the incentive is to downvote everything you don't upvote to double your signal - and this would be stopped only by some artificial intervention)

Etc. etc. Does it feel like you could generate more pros (and cons) if you set a 5 minute timer? Ultimately, there is nothing I can say to replace you figuring these out.

I like this idea. Ultimately the primary constraint on almost any feature on LessWrong is UI complexity, and so there is a very strong prior against any specific feature passing the very high bar to make it into the final UI, but this is a pretty good contender.

I am particularly interested in more ideas for communicating this to the user in a way that makes intuitive sense, and that they can understand with existing systems and analogies they are already familiar with.

The other big constraint on UI design is hedonic gradients. While a system can be economically optimal, because of hyperbolic discounting and naive reinforcement learning you often end up with really bad equilibria if one of the core interactions on your site is not fun to use. This in particular limits the degree to which you can force the user to spend a limited resource: it strongly increases mental overhead (instead of just asking themselves "yay or nay?" after reading a comment, they now need to reason about their limited budget and compare it to alternative options), and people hate spending limited resources (which results in me having played through 4 Final Fantasy games and using only two health potions in over 200 hours of gameplay, because I really hate giving up limited resources, and I might really need them in the next fight, even though I never, ever will).

Ultimately the primary constraint on almost any feature on LessWrong is UI complexity, and so there is a very strong prior against any specific feature passing the very high bar to make it into the final UI

On the low end, you can fit the idea entirely inside of the existing UI, as a new fancy way of calculating voting weights under the hood (and allowing multiple clicks on the voting buttons).

Then, in rough order from least to most surprising to users:

  • showing the user some indication of how many points their one click is currently worth
  • showing how many unused "voting points" they still have (privately)
  • showing a breakdown of received feedback into positive and negative votes
  • some simple configuration for changing the default budget allocated to one click (e.g. a percentage, or a fixed value)

And that's probably all you ever need?

This in particular limits the degree to which you can force the user to spend a limited resource: it strongly increases mental overhead (instead of just asking themselves "yay or nay?" after reading a comment, they now need to reason about their limited budget and compare it to alternative options)

This should be much less of an issue if the configuration of this is global and has reasonable defaults. Then it's pretty much reduced to a "new fancy way of calculating voting weights", and users should be fine with just being roughly aware that if they vote a lot, or don't post anything of their own, their individual votes will have less weight.

in the current system, the incentive is to downvote everything you don't upvote to double your signal

For some reason, I don't do that. The interesting comments I upvote, the few annoying ones I downvote, but half or more of the comments fall into "meh" territory, where I neither appreciate them nor mind them, so I simply do not vote on those.

Not voting is not exactly halfway between an upvote and a downvote. Upvote costs you a click, downvote costs you a click, but no-vote costs nothing.

Sure, and that's probably what almost all users do. But the situation is still perverse: the broken incentives of the system are fighting against your private incentive to not waste effort.

This kind of conflict is especially bad if people differ in the strength of that internal incentive, but it is bad even if they don't, because on the margin it pushes everyone to act slightly against their preferences. (I don't think this particular case is really so bad, but the more general phenomenon is, and that's what you get if you design systems with poor incentives.)

To me the proposed system seems complicated enough that I would expect most users not to fully understand it.

I like people thinking up ways to improve the karma system in general. I'm concerned that this system as described weights the preferences of the moderators (and people that the moderators like, etc.) too heavily.

Suppose, hypothetically, that a clique moved in that wrote posts that the moderators (and the people they like) dislike, but that are otherwise extremely good. I would want that clique to be able to gain karma corresponding to how good their posts are anyway, and it seems like in this system that's harder than I'd want it to be.

Your point can partially be translated to "make α reasonably close to 1" - this makes the decisions less about what the moderators want, and allows longer chains of passing the "trust buck".

However, to some degree "a clique moved in that wrote posts that the moderators (and the people they like) dislike" is pretty much the definition of a spammer. If you say their posts "are otherwise extremely good", what is the standard by which you wish to judge that?