In online discussions, the number of upvotes or likes a contribution receives often correlates strongly with the social status of the author within that community. This makes the community less epistemically diverse, and can contribute to groupthink or hero worship.

Yet both the author of a contribution and its degree of support carry Bayesian evidence about its value, arguably an amount that should overwhelm your own inside view.

We want each individual to invest the socially optimal amount of resources into critically evaluating other people’s writing (which is higher than the amount that would be optimal for individual epistemic rationality). Yet each of us also wants to give sufficient weight to authority in forming our all-things-considered views.

As Greg Lewis writes:

> The distinction between ‘credence by my lights’ versus ‘credence all things considered’ allows the best of both worlds. One can say ‘by my lights, P’s credence is X’ yet at the same time ‘all things considered though, I take P’s credence to be Y’. One can form one’s own model of P, think the experts are wrong about P, and marshal evidence and arguments for why you are right and they are wrong; yet soberly realise that the chances are you are more likely mistaken; yet also think this effort is nonetheless valuable because even if one is most likely heading down a dead-end, the corporate efforts of people like you promise a good chance of someone finding a better path.
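
To make the two credences concrete, here is a toy numerical illustration (mine, not Lewis’s): pool your inside-view credence with the consensus credence in log-odds space, weighting the consensus more heavily.

```latex
% Toy log-odds pooling; the numbers and the weight are made up for illustration.
% Inside view: x = 0.7. Consensus: y = 0.3. Weight on consensus: w = 0.8.
\[
  \operatorname{logit}(p) = (1 - w)\,\operatorname{logit}(x) + w\,\operatorname{logit}(y)
\]
% logit(0.7) = ln(0.7/0.3) is about 0.85 and logit(0.3) is about -0.85,
% so logit(p) is about -0.51 and p is about 0.38: "by my lights, 0.7;
% all things considered, roughly 0.4".
```

The weight w is doing all the work here; the quoted passage is precisely about holding both x and p in mind at once.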

Full blinding to usernames and upvote counts is great for critical thinking: if all you see is the object level, you can’t be biased by anything else. The downside is that you lose a lot of relevant information. A second downside is that anonymity reduces the selfish incentives to produce good content (we socially reward high-quality, civil discussion and punish rudeness).

I have a suggestion for capturing (some of) the best of both worlds:

  • first, do all your reading, thinking, upvoting and commenting with full blinding
  • once you have finished, un-blind yourself and use the new information to
    • form your all-things-considered view of the topic at hand
    • update your opinion of the people involved in the discussion (for example, if someone was a jerk, you lower your opinion of them).

To enable this, there are now two Stylish user styles that hide usernames and upvote counts on (1) the EA Forum and (2) LessWrong 2.0. You’ll need to install the Stylish browser extension to use them.
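
For a sense of how little machinery this takes, here is a minimal sketch of such a user style. The `@-moz-document` wrapper is standard Stylish syntax, but the class names are hypothetical placeholders rather than the sites’ real selectors (the linked styles target the actual markup):

```css
/* Minimal sketch of a blinding user style.
   NOTE: .author-name and .karma-score are hypothetical placeholder
   selectors; the real user styles target the sites' actual markup. */
@-moz-document domain("lesswrong.com"), domain("forum.effectivealtruism.org") {
  .author-name,   /* usernames */
  .karma-score {  /* upvote counts */
    visibility: hidden !important;
  }
}
```

Using `visibility: hidden` rather than `display: none` keeps the page layout stable while hiding the text.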

Cross-posted here (clicking the link will unblind you!).

6 comments

Nice! Eventually it might make sense to have this functionality integrated into LW2, similar to what we had on LW1.0 with the anti-kibitzer, but this is a pretty good solution for now.

Perhaps instead the karma of a post ought not to be linear in the number of upvotes it receives? If the karma of a post is best used as a signal of the goodness of the post, then it is less noisy as more data points appear, but not linearly so.

There is perhaps still a place for karma as a linear reward mechanism - that is, pleasing 10 people enough to get them to upvote is, all other things being equal, 10 times as good as pleasing 1 person - but this might be best separated from the signal aspect.
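
As one illustration of how the two roles could be separated (a toy rule of my own, not a worked-out proposal): display a shrunken estimate of the upvote probability as the quality signal, while keeping the familiar linear sum as the author’s reward.

```latex
% Toy split of karma's two roles; the prior (alpha, beta) is made up.
% u = upvotes, d = downvotes.
\[
  \mathrm{signal}(u, d) = \frac{u + \alpha}{u + d + \alpha + \beta},
  \qquad
  \mathrm{reward}(u, d) = u - d
\]
% The signal is a Beta-posterior mean: its standard error shrinks roughly
% like 1/sqrt(u + d), so each additional vote adds information sublinearly,
% while the reward stays linear in votes.
```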

Which of the things that karma is used for do you think would benefit from nonlinearity, and which nonlinearity?

I've updated the style to work at the new URL lesswrong.com.

> A second downside is that anonymity reduces the selfish incentives to produce good content (we socially reward high-quality, civil discussion and punish rudeness).

It also weakens some selfish incentives against producing content at all (say, the fear that you might be wrong and face humiliation).

I don’t think those selfish incentives compare at all to the selfish incentives to produce good content. Looking at the sites that do enforce anonymity (the chans, for example), I am very sceptical of its success.