Summary: We probably can't just hand out a dollar per upvote and expect things to end well. But maybe there are other, less-corrosive ways to encourage Forum contributions:
The idea of putting a dollar value on LessWrong and EA karma points has occurred to many people over the years. Clearly, LW / EAF / AF are producing things of value to the world, in a way that (if the total value is divided by the total number of upvotes) usually works out to at least a few dollars per Karma point in aggregate:
It would be great if we could incentivize people to produce more of this value. But of course we can’t just start handing out money in exchange for Karma, as the Good Heart Project demonstrates. It threatens to create an incentive to mindlessly churn out endless comments purely for dollars, or (assuming the people of LessWrong will always put in the effort to search out and destroy Good-Heart-abusing mutual-upvoting societies wherever they arise) to degrade quality in other ways by encouraging people to produce a higher volume of lower-quality drafts.
So, Good Heart Tokens as currently implemented (direct payments for newly-created karma) are probably insane. But are there other, less-crazy ideas about how to use Forum Karma to evaluate impact or reward content creation? So help me God, I honestly believe there might be a few:
Sometimes we might want to estimate the relationship between Karma and dollars even if we never intend to pay anybody anything:
Any attempt to use Karma as a proxy for value will immediately run into some problems, even if we aren’t creating Goodhart problems by paying money for upvotes. Nuño Sempere on the EA Forum lists at least 11 problems, which I will reproduce here just to emphasize that Karma is only a loose proxy for value:
1. More easily accessible content, or more introductory material, gets upvoted more.
2. Material which gets shared more widely gets upvoted more.
3. Content which is more prone to bikeshedding gets upvoted more.
4. Posts which are beautifully written are upvoted more.
5. Posts written by better-known authors are upvoted more (once you've seen this, you can't unsee it).
6. The time at which a post is published affects how many upvotes it gets.
7. Other random factors, such as whether other strong posts are published at the same time, also affect the number of upvotes.
8. Not all projects are conducive to having a post written about them.
9. The function from value to upvotes is concave (e.g., like a logarithm or a square root), in that a project which results in a post with 100 upvotes is probably more than 5 times as valuable as 5 posts with 20 upvotes each. This is what you'd expect if the supply of upvotes were limited.
10. Upvotes suffer from inflation as the EA Forum gets more populated, so a post which would have gathered 50 upvotes two years ago might gather 100 upvotes now.
11. Upvotes may not take into account the relationship between projects, or other indirect effects. For example, projects which contribute to existing agendas are probably more valuable than otherwise-equal standalone projects, but this might not be obvious from the text.
It might be possible to correct for some of these factors mathematically (#9 and #10, for instance), while other problems seem small enough that they wouldn't be devastating for most purposes (#6 and #7 are semi-random effects that should wash out over the course of many posts). Others, though, are quite serious: sometimes they would torpedo an idea completely. For some of my ideas below, it might help to use an "adjusted karma-based score" that attempts to correct for a few of the problems Nuño mentions.
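As an illustration, a crude adjusted score correcting for just #9 (concavity) and #10 (inflation) might look like the sketch below. The annual inflation rate and the square-root concavity assumption are illustrative guesses, not measured values:

```python
def adjusted_score(karma: int, year_posted: int, current_year: int = 2022,
                   annual_inflation: float = 0.4) -> float:
    """Rough karma adjustment for two of Nuno's problems:
    #10 (karma inflation as the forum grows) and #9 (the concave
    value-to-upvotes function, assumed here to be a square root).
    Both parameter choices are illustrative guesses."""
    # Deflate karma back to a common baseline year (a ~40%/year rate
    # roughly matches "50 upvotes two years ago = 100 upvotes now").
    deflated = karma / ((1 + annual_inflation) ** (current_year - year_posted))
    # Undo the assumed concavity: if upvotes ~ sqrt(value), then value ~ upvotes^2.
    return deflated ** 2
```

Under these assumptions, a 100-karma post from two years ago and a 100-karma post from today get very different adjusted scores, and a single 100-karma post scores far above five 20-karma posts, as #9 suggests it should.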
Between the extremes of “completely fake internet points” versus “1 Karma = $1”, there is probably a landscape of different applications that an improved version of Karma could be used for. The goal would be to give Karma points some additional power / influence / usefulness, without incurring as many side effects as Good Heart Tokens:
Karma could be used as the basis for a voting system.
Karma could be used as a pan-rationalist reputation system.
How close can we fly to the “1 Karma = $1” flame? Maybe pretty close:
Karma could be used to assign vote power for a quadratic-funding system devoted to Rationalist / EA community-building efforts, and other community public goods. With this system, users could quadratically vote on how much funding should be directed to different goals: Forum website improvements, hosting conferences, providing in-person services like childcare to Bay Area rationalists, beefing up the community's ability to support independent researchers, and so on.
Under this system, you could use your LessWrong karma points to influence a pot of public-goods funding towards projects that benefit you (like in-person Bay-Area services if you live in the Bay). But this is a pretty indirect effect, and the self-serving financial incentive is laundered through a process of feel-good community participation and broad-based public-goods production, so I think it would be a lot less dangerous than literal Good Heart Tokens.
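A minimal sketch of the quadratic part, assuming karma is spent as a voting budget and that the votes a user contributes to a project grow only as the square root of the karma spent (the project names and the karma-as-budget rule are my own illustrative assumptions):

```python
import math

def allocate_votes(ballots: dict[str, dict[str, float]]) -> dict[str, float]:
    """Quadratic-voting sketch: each user spends karma across projects,
    and the votes they contribute to a project equal the square root of
    the karma spent on it. The sqrt cost curve is what limits how much
    any single large karma-holder can dominate one project."""
    totals: dict[str, float] = {}
    for user, spends in ballots.items():
        for project, karma_spent in spends.items():
            totals[project] = totals.get(project, 0.0) + math.sqrt(karma_spent)
    return totals

# Hypothetical ballot: bob spends 4x alice's karma on the same project,
# but only contributes 2x the votes.
ballots = {
    "alice": {"forum-improvements": 100, "childcare": 25},
    "bob":   {"forum-improvements": 400},
    "carol": {"childcare": 100},
}
print(allocate_votes(ballots))  # forum-improvements: 30.0, childcare: 15.0
```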
Karma could probably somehow be used in combination with an Impact Certificate program.
The current idea behind Impact Certificates looks something like this:
In order to kick-start this market, it might be useful to airdrop impact-certs credit to people as a function of their EA & LessWrong forum reputation — you get free money, but you have to use it to buy impact certificates (which you can either hold forever as a form of charitable donation, or sell immediately at a low price to cash out, or wait until 2025 in the hopes of being bought out by OpenPhil for a higher price). Providing the initial airdrop could help in two ways: it would create lots of initial trading interest (getting a bunch of smart people looking at the certificates and doing grant evaluations), and it would help create a new cultural norm that holding impact certificates is a cool way of donating to charity, showcasing your values, and supporting the rationalist community.
With an impact-certs airdrop, there would be a danger that many people immediately cash out and don't engage with the system, although it might be possible to disincentivize this somehow. Even if not, I think the danger of creating bad incentives would still be much lower than with Good Heart Tokens (perhaps even low enough that it would be a good idea), precisely because the money would be given out as a one-time grant based on people's retrospective community contributions. Any incentive to start churning out trash on the forum would be very weak, since it would rest only on the vague hope that some future project might someday do a similar airdrop.
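A sketch of how the airdrop allocation itself might be computed, assuming the credit is simply split pro rata by existing forum karma (the pot size and the pro-rata rule are my own illustrative assumptions; an adjusted karma score could be substituted for raw karma):

```python
def airdrop_credits(karma_by_user: dict[str, int], pot: float) -> dict[str, float]:
    """Hypothetical impact-cert airdrop: split a fixed pot of
    certificate-purchase credit pro rata by existing forum karma.
    Because the credit rewards retrospective contributions, it creates
    little incentive to churn out new low-quality posts."""
    total = sum(karma_by_user.values())
    return {user: pot * k / total for user, k in karma_by_user.items()}

# With a hypothetical $1000 pot split between two users:
print(airdrop_credits({"alice": 100, "bob": 300}, 1000.0))
# alice gets $250 of credit, bob gets $750
```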
I think there are some legit ways that karma could possibly be used as part of interesting experiments in decentralized grantmaking and community decisionmaking. Despite the fact that the Good Heart Project is hilarious and obviously insane, the promise of sweet, sweet $1 upvotes was the very thing that motivated me to write and publish this post, so I would like to thank the organizers of the Good Heart Project for doing all the work necessary to run this wacky but intensely thought-provoking experiment.
Austin from Manifold here - thanks for the shoutout! I would also note on a personal level that Good Heart tokens led me to read/post a lot more on LessWrong than I do on a normal day.
Manifold's already kind of evolving into a forum/discussion site stapled to a prediction market, and spending our currency kind of looks like an upvote if you squint (especially in Free Response markets; placing a bet on an answer is very very similar to upvoting an answer on LessWrong/StackOverflow/Quora).
Incidentally, I've also had the same idea for combining impact certs with karma. See here: https://manifold.markets/Austin/will-manifold-implement-retroactive . Would love to find time to chat more on these ideas; feel free to find a time here!
There have been experiments with attack-resistant trust metrics. One notable project was Advogato (I'm not sure why it was archived); its trust metric might be worth looking into.
On the topic of weird voting systems, I like EigenTrust and friends.
Basic idea: a trustworthy agent is someone who upvotes other trustworthy agents, and who downvotes untrustworthy agents.
For EigenTrust, essentially:
(This iterative approach somewhat approximates calculating the eigenvalues/vectors of the vote matrix, if you're wondering where the name comes from)
(There are similar approaches that rely on explicitly calculating the eigenvalues/vectors of the vote matrix instead of doing an iterative approach.)
(There are ambiguous cases, such as if A and B are entirely disconnected from each other. I'm not actually sure if this is a problem?)
Unfortunately, this is likely infeasible to compute at scale.
(And it has failure modes similar to PageRank link farms...)
If you did this system, you could do something Good Heart like with the primary eigenvector...
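The iterative scheme described in the parentheticals above can be sketched as follows. The damping term (borrowed from PageRank) is my own addition to sidestep the disconnected-components ambiguity, and the vote matrix is hypothetical:

```python
import numpy as np

def eigentrust(votes: np.ndarray, alpha: float = 0.85, iters: int = 50) -> np.ndarray:
    """Iterative EigenTrust-style sketch. votes[i, j] holds user i's
    non-negative local trust in user j (e.g. net upvotes, clipped at
    zero). Repeated propagation approximates the principal eigenvector
    of the damped, normalized vote matrix."""
    n = votes.shape[0]
    row_sums = votes.sum(axis=1, keepdims=True)
    # Row-normalize so each user's outgoing trust sums to 1; users who
    # voted for nobody are treated as trusting everyone equally.
    C = np.where(row_sums > 0,
                 votes / np.where(row_sums > 0, row_sums, 1.0),
                 1.0 / n)
    t = np.full(n, 1.0 / n)  # start from uniform trust
    for _ in range(iters):
        # Each step propagates trust along votes, with PageRank-style
        # damping so disconnected components still get defined scores.
        t = (1 - alpha) / n + alpha * (C.T @ t)
    return t
```

With two users who trust only each other, this converges to equal trust for both; as noted above, a link-farm clique can still inflate its members' scores, just as with PageRank.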
I’m apparently going to make about $70 from today. I don’t think I have created anything remotely near $70 of value, but I note that the idea of getting money did in fact incentivize me to put effort into LessWrong. If I expected to continue to get $1/karma in the future, I think I would spend multiple hours a week putting actual effort into hopefully higher-value LessWrong content.
I am not very good at directing my monkey brain, so it helped a lot that my System 1 really anticipated getting money from spending time on LessWrong today. Offering monetary rewards for especially high-value posts doesn't motivate me, because I don't System-1 anticipate being able to make those in the near future (even if I think I should be able to develop the skill). Voting power or whatever also doesn't make me feel motivated; money is uniquely good at that.
There are probably better systems than "literally give out $1/karma", but it's surprisingly effective at motivating me in particular, in ways that other things which have been tried very much aren't.