The LessWrong Review is a pretty big experiment, and I am still very much uncertain what the best form of reviewing is. We were pretty vague on how to review, listing these bullet points on topics a review should consider:

  • How has this post been useful?
  • How does it connect to the broader intellectual landscape?
  • Is this post epistemically sound?
  • How could it be improved?
  • What further work would you like to see on top of the ideas proposed in this post?

I'm interested in seeing more of the last two. I'd like a key piece of the Review to be "authors rewriting posts if they can be improved", and "people writing followup posts that either attempt a better explanation of whatever the post was about, or do followup work exploring more of the post's ideas."

So, for negative reviews, if a post's central concept seems important but the post itself feels inadequate, I'd be interested in seeing more of "how could this post be improved, such that it makes sense to include it in a 'Best Of', or otherwise enter it into LessWrong's longterm memory.*"

*A concern I have about the current Review process is that it focuses primarily on the best-of book, but not all important works make sense to include in a public-facing artifact. Examples of this might include a) dry, technical posts, b) posts that are important but somehow controversial, and c) posts that are doing a lot of "schlep" work (such as replications), where you don't necessarily care that everyone reads them, but you care that they exist and can be referred to when doing later meta-analyses or whatnot.

I'm not sure about the best solution for this, and it's still the subject of some debate on the LessWrong team. But I'd personally prefer if reviewers somewhat distinguished between "whether it makes sense for a post to be in the Best Of book" and "whether the post is somehow important to the LessWrong intellectual project."


I have thought about a problem related to this very often. There was an Amazon shareholders letter written by Jeff Bezos that elaborates on their culture of high standards. In particular, it talks about the cost of high standards when writing Amazon's "six-page memos." The idea of having teams with high standards on their written memos resonated with me, but I have not been able to apply it that much in my professional career.

My standards are higher than those of the organization around me, but when it came down to spending the relationship capital to criticize people's documents to the level I felt would make them really high-quality, I just couldn't do it. Some documents achieve the standard already, so it's not unachievable in general. The real issue is that to provide criticism that feels specific and kind, I would have to understand the underlying issue at the depth I want that person to explain it to me.

Basically, to get to that level of quality, I have to put in a large fraction of the effort of drafting the document, which I don't have time to do. In some cases, I can point out areas that I feel could offer more elaboration, but sometimes the document feels inadequate and I can't explain why without several hours of concentrated effort.

I feel the same issue could come up here. You can tell a really high-quality post because it offers insights that are brilliant yet unexpected, or it uncovers primary-source data that is neglected and unknown, or it's just a really compelling written explanation. But explaining how to create that from an average post feels like it requires me to become the expert I want the author to be.