LessWrong is currently doing a major review of 2018 — looking back at old posts and considering which of them have stood the tests of time. It has three phases:
- Nomination (ends Dec 1st at 11:59pm PST)
- Review (ends Dec 31st)
- Voting on the best posts (ends January 7th)
Authors will have a chance to edit posts in response to feedback, and then the moderation team will compile the best posts into a physical book and LessWrong sequence, with $2000 in prizes given out to the top 3-5 posts and up to $2000 given out to people who write the best reviews.
- Top 2018 posts sorted by karma
- 2018 posts aggregated by month
- You can see nominated posts here
- Voting Results
This is the first week of the LessWrong 2018 Review – an experiment in improving the LessWrong Community's longterm feedback and reward cycle.
This post begins by exploring the motivations for this project (first at a high level of abstraction, then getting into some more concrete goals), before diving into the details of the process.
Improving the Idea Pipeline
In his LW 2.0 Strategic Overview, habryka noted:
We need to build on each other’s intellectual contributions, archive important content, and avoid primarily being news-driven.
We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing.
Modern science is plagued by severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas.
The physics community has this system where the new ideas get put into journals, and then eventually if they’re important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them.
Over the past couple years, much of my focus has been on the early-stages of LessWrong's idea pipeline – creating affordance for off-the-cuff conversation, brainstorming, and exploration of paradigms that are still under development (with features like shortform and moderation tools).
But, the beginning of the idea-pipeline is, well, not the end.
I want LessWrong to encourage extremely high quality intellectual labor. I think the best way to go about this is through escalating positive rewards, rather than strong initial filters.
Right now our highest reward is getting into the curated section, which... just isn't actually that high a bar. We only curate posts if we think they are making a good point. But if we set the curated bar at "extremely well written and extremely epistemically rigorous and extremely useful", we would basically never be able to curate anything.
My current guess is that there should be a "higher than curated" level, and that the general expectation should be that posts should only be put in that section after getting reviewed, scrutinized, and most likely rewritten at least once.
I still have a lot of uncertainty about the right way to go about a review process, and various members of the LW team have somewhat different takes on it.
I've heard lots of complaints about mainstream scientific peer review: reviewing is often a thankless task, review quality varies dramatically, and the process is often entangled with weird political games.
Meanwhile: LessWrong posts cover a variety of topics – some empirical, some philosophical. In many cases it's hard to directly evaluate their truth or usefulness. LessWrong team members had differing opinions on what sort of evaluation is most useful or practical.
I'm not sure if the best process is more open/public (harnessing the wisdom of crowds) or private (relying on the judgment of a small number of thinkers). The current approach involves a mix of both.
What I'm most confident in is that the review should focus on older posts.
New posts often feel exciting, but a year later, looking back, you can ask whether a post has actually become a helpful intellectual tool. (I'm also excited about the idea that, in future years, the process could include reconsidering previously-reviewed posts, if there's been something like a "replication crisis" in the intervening time.)
Regardless, I consider the LessWrong Review process to be an experiment, which will likely evolve in the coming years.
Before delving into the process, I wanted to go over the high level goals for the project:
1. Improve our longterm incentives, feedback, and rewards for authors
2. Create a highly curated "Best of 2018" sequence / physical book
3. Create common knowledge about the LW community's collective epistemic state regarding controversial posts
Longterm incentives, feedback and rewards
Right now, authors on LessWrong are rewarded essentially by comments, voting, and other people citing their work. This is fine, as far as it goes, but has a few issues:
- Some kinds of posts are quite valuable, but don't get many comments (and these disproportionately tend to be posts that are more proactively rigorous, because there's less to critique, or critiquing requires more effort, or building off the ideas requires more domain expertise)
- By contrast, comments and voting both nudge people towards posts that are clickbaity and controversial.
- Once posts have slipped off the frontpage, they often fade from consciousness. I'm excited for a LessWrong that rewards Long Content that stands the test of time and is updated as new information comes to light. (In some cases this may involve editing the original post. But if you prefer old posts to serve as a time capsule of your past beliefs, adding a link to a newer post would also work.)
- Many good posts begin with an "epistemic status: thinking out loud", because, at the time, they were just thinking out loud. Nonetheless, they turn out to be quite good. Early-stage brainstorming is good, but if 2 years later the early-stage-brainstorming has become the best reference on a subject, authors should be encouraged to change that epistemic status and clean up the post for the benefit of future readers.
The aim of the Review is to address those concerns by:
- Promoting old, vetted content directly on the site.
- Awarding prizes not only to authors, but to reviewers. It seems important to directly reward high-effort reviews that thoughtfully explore both how the post could be improved, and how it fits into the broader intellectual ecosystem. (At the same time, not having this be the final stage in the process, since building an intellectual edifice requires four layers of ongoing conversation)
- Compiling the results into a physical book. I find there's something... literally weighty about having your work in printed form. And because it's much harder to edit books than blogposts, the printing gives authors an extra incentive to clean up their past work or improve the pedagogy.
A highly curated "Best of 2018" sequence / book
Many users don't participate in the day-to-day discussion on LessWrong, but want to easily find the best content.
To those users, a "Best Of" sequence that includes not only posts that seemed exciting at the time, but also distilled reviews and followup, seems like a good value proposition. It also helps move the site away from being a time-sensitive newsfeed.
Common knowledge about the LW community's collective epistemic state regarding controversial posts
Some posts are highly upvoted because everyone agrees they're true and important. Other posts are upvoted because they're more like exciting hypotheses. There's a lot of disagreement about which claims are actually true, but that disagreement is crudely measured in comments from a vocal minority.
The end of the review process includes a straightforward vote on which posts seem (in retrospect) useful, and which seem "epistemically sound". This is not the end of the conversation about which posts make true claims that carve reality at its joints, but my hope is for it to ground that discussion in a clearer group-epistemic state.
Nomination Phase: 1 week (Nov 20th – Dec 1st)
- Users with 1000+ karma can nominate posts from 2018, describing how they found the post useful over the longterm.
- The nomination button is in the post dropdown-menu (available at the top of posts, or to the right of their post-item)
- For convenience, you can browse 2018 posts via the lists linked at the top of this post (sorted by karma, or aggregated by month)
Review Phase: 4 weeks (Dec 1st – Dec 31st)
- Authors of nominated posts can opt-out of the review process if they want.
- They also can opt-in, while noting that they probably won't have time to update their posts in response to critique. (This may reduce the chances of their posts being featured as prominently in the Best of 2018 book)
- Posts with sufficient* nominations are announced as contenders.
- We're aiming to have 50-100 contenders, and the nomination threshold will be set to whatever gets closest to that range.
- For a month, people are encouraged to look at them thoughtfully, writing comments (or posts) that discuss:
- How has this post been useful?
- How does it connect to the broader intellectual landscape?
- Is this post epistemically sound?
- How could it be improved?
- What further work would you like to see people do with the content of this post?
- A good frame of reference for the reviews is shorter versions of LessWrong or SlateStarCodex book reviews (which combine epistemic spot checks, summarizing, and contextualizing)
- Authors are encouraged to engage with reviews:
- Noting where they disagree
- Discussing what sort of followup work they'd be interested in seeing from others
- Ideally, updating the post in response to critique they agree with
Voting Phase: 1 week (Jan 1st – Jan 7th)
Posts that got at least one review proceed to the voting phase. The details of this are still being fleshed out, but the current plan is:
- Users with 1000+ karma rate each post on a 1-10 scale, where 6+ means "I'd be happy to see this included in the 'Best of 2018' roundup" and 10 means "this is the best I can imagine"
- Users are encouraged to (optionally) share the reasons for each rating, and/or share thoughts on their overall judgment process.
Books and Rewards
Public Writeup / Aggregation
Soon afterwards (hopefully within a week), the votes will all be publicly available. A few different aggregate statistics will be available, including the raw average, and potentially some attempt at a "karma-weighted average."
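To make the distinction between the two statistics concrete, here is a minimal sketch of how a karma-weighted average might differ from the raw average. The square-root weighting below is purely illustrative — the post leaves the actual weighting scheme undecided, and this is not the LessWrong team's formula.

```python
import math

def karma_weighted_average(votes):
    """votes: list of (rating, karma) pairs.

    Hypothetical scheme: weight each voter's 1-10 rating by the
    square root of their site karma, so high-karma voters count
    more, but not linearly more.
    """
    weights = [math.sqrt(karma) for _, karma in votes]
    total = sum(weight * rating for (rating, _), weight in zip(votes, weights))
    return total / sum(weights)

# Example: three voters with different karma levels (made-up numbers)
votes = [(8, 1600), (4, 2500), (9, 10000)]
raw_average = sum(rating for rating, _ in votes) / len(votes)      # 7.0
weighted = karma_weighted_average(votes)                           # ~7.47
```

With these made-up numbers, the high-karma voter's 9 pulls the weighted result above the raw mean, which is the kind of effect any karma-weighting scheme would produce.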
Best of 2018 Book / Sequence
Sometime later, the LessWrong moderation team will put together a physical book (and online sequence) of the best posts and most valuable reviews.
This will involve a lot of editor discretion – the team will essentially take the public review process and use it as input for the construction of a book and sequence.
I have a lot of uncertainty about the shape of the book. I'm guessing it'd include anywhere from 10-50 posts, along with particularly good reviews of those posts, and some additional commentary from the LW team.
Note: This may involve some custom editing to handle things like hyperlinks, which may work differently in printed media than online blogposts. This will involve some back-and-forth with the authors.
- Everyone whose work is featured in the book will receive a copy of it.
- There will be $2000 in prizes divided among the authors of the top 3-5 posts (judged by the moderation team)
- There will be up to $2000 in prizes for the best 0-10 reviews that get included in the book. (The distribution of this will depend a bit on what reviews we get and how good they are)
- (note: LessWrong team members may be participating as reviewers and potentially authors, but will not be eligible for any awards)