LessWrong is currently doing a major review of 2018 — looking back at old posts and considering which of them have stood the tests of time. It has three phases:

  • Nomination (ends Dec 1st at 11:59pm PST)
  • Review (ends Dec 31st)
  • Voting on the best posts (ends January 7th)

Authors will have a chance to edit posts in response to feedback, and then the moderation team will compile the best posts into a physical book and LessWrong sequence, with $2000 in prizes given out to the top 3-5 posts and up to $2000 given out to people who write the best reviews.

This is the first week of the LessWrong 2018 Review – an experiment in improving the LessWrong Community's longterm feedback and reward cycle.

This post begins by exploring the motivations for this project (first at a high level of abstraction, then getting into some more concrete goals), before diving into the details of the process.

Improving the Idea Pipeline

In his LW 2.0 Strategic Overview, habryka noted:

We need to build on each other’s intellectual contributions, archive important content, and avoid primarily being news-driven.

We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing.


Modern science is plagued by severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. 

The physics community has this system where the new ideas get put into journals, and then eventually if they’re important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them.

Over the past couple years, much of my focus has been on the early-stages of LessWrong's idea pipeline – creating affordance for off-the-cuff conversation, brainstorming, and exploration of paradigms that are still under development (with features like shortform and moderation tools).

But, the beginning of the idea-pipeline is, well, not the end.

I've written a couple times about what the later stages of the idea-pipeline might look like. My best guess is still something like this:

I want LessWrong to encourage extremely high quality intellectual labor. I think the best way to go about this is through escalating positive rewards, rather than strong initial filters.

Right now our highest reward is getting into the curated section, which... just isn't actually that high a bar. We only curate posts if we think they are making a good point. But if we set the curated bar at "extremely well written and extremely epistemically rigorous and extremely useful", we would basically never be able to curate anything.

My current guess is that there should be a "higher than curated" level, and that the general expectation should be that posts should only be put in that section after getting reviewed, scrutinized, and most likely rewritten at least once. 

I still have a lot of uncertainty about the right way to go about a review process, and various members of the LW team have somewhat different takes on it.

I've heard lots of complaints about mainstream scientific peer review: reviewing is often a thankless task; the quality of reviews varies dramatically; and the process is often entangled with weird political games.

Meanwhile: LessWrong posts cover a variety of topics – some empirical, some philosophical. In many cases it's hard to directly evaluate their truth or usefulness. LessWrong team members had differing opinions on what sort of evaluation is most useful or practical.

I'm not sure if the best process is more open/public (harnessing the wisdom of crowds) or private (relying on the judgment of a small number of thinkers). The current approach involves a mix of both.

What I'm most confident in is that the review should focus on older posts. 

New posts often feel exciting, but a year later, looking back, you can ask whether a post has actually become a helpful intellectual tool. (I'm also excited for the idea that, in future years, the process could include reconsidering previously-reviewed posts, if there's been something like a "replication crisis" in the intervening time.)

Regardless, I consider the LessWrong Review process to be an experiment, which will likely evolve in the coming years. 


Before delving into the process, I wanted to go over the high level goals for the project:

1. Improve our longterm incentives, feedback, and rewards for authors

2. Create a highly curated "Best of 2018" sequence / physical book

3. Create common knowledge about the LW community's collective epistemic state regarding controversial posts

Longterm incentives, feedback and rewards

Right now, authors on LessWrong are rewarded essentially by comments, voting, and other people citing their work. This is fine, as things go, but has a few issues:

  • Some kinds of posts are quite valuable, but don't get many comments (and these disproportionately tend to be posts that are more proactively rigorous, because there's less to critique, or critiquing requires more effort, or building off the ideas requires more domain expertise)
  • By contrast, comments and voting both nudge people towards posts that are clickbaity and controversial.
  • Once posts have slipped off the frontpage, they often fade from consciousness. I'm excited for a LessWrong that rewards Long Content that stands the tests of time and is updated as new information comes to light. (In some cases this may involve editing the original post. But if you prefer old posts to serve as a time capsule of your past beliefs, adding a link to a newer post would also work.)
  • Many good posts begin with an "epistemic status: thinking out loud", because, at the time, they were just thinking out loud. Nonetheless, they turn out to be quite good. Early-stage brainstorming is good, but if 2 years later the early-stage-brainstorming has become the best reference on a subject, authors should be encouraged to change that epistemic status and clean up the post for the benefit of future readers.

The aim of the Review is to address those concerns by: 

  • Promoting old, vetted content directly on the site.
  • Awarding prizes not only to authors, but to reviewers. It seems important to directly reward high-effort reviews that thoughtfully explore both how the post could be improved, and how it fits into the broader intellectual ecosystem. (At the same time, not having this be the final stage in the process, since building an intellectual edifice requires four layers of ongoing conversation)
  • Compiling the results into a physical book. I find there's something... literally weighty about having your work in printed form. And because it's much harder to edit books than blogposts, the printing gives authors an extra incentive to clean up their past work or improve the pedagogy.

A highly curated "Best of 2018" sequence / book

Many users don't participate in the day-to-day discussion on LessWrong, but want to easily find the best content. 

To those users, a "Best Of" sequence that includes not only posts that seemed exciting at the time, but distilled reviews and followup, seems like a good value proposition. Meanwhile, it helps move the site away from being a time-sensitive newsfeed.

Common knowledge about the LW community's collective epistemic state regarding controversial posts

Some posts are highly upvoted because everyone agrees they're true and important. Other posts are upvoted because they're more like exciting hypotheses. There's a lot of disagreement about which claims are actually true, but that disagreement is crudely measured in comments from a vocal minority.

The end of the review process includes a straightforward vote on which posts seem (in retrospect) useful, and which seem "epistemically sound". This is not the end of the conversation about which posts are making true claims that carve reality at its joints, but my hope is for it to ground that discussion in a clearer group-epistemic state.

Review Process

Nomination Phase 

1 week (Nov 20th – Dec 1st)

  • Users with 1000+ karma can nominate posts from 2018, describing how they found the post useful over the longterm.
  • The nomination button is in the post dropdown-menu (available at the top of posts, or to the right of their post-item)

Review Phase 

4 weeks (Dec 1st – Dec 31st)

  • Authors of nominated posts can opt-out of the review process if they want.
    • They also can opt-in, while noting that they probably won't have time to update their posts in response to critique. (This may reduce the chances of their posts being featured as prominently in the Best of 2018 book)
  • Posts with sufficient* nominations are announced as contenders.
    • We're aiming to have 50-100 contenders, and the nomination threshold will be set to whatever gets closest to that range
  • For a month, people are encouraged to look at them thoughtfully, writing comments (or posts) that discuss:
    • How has this post been useful?
    • How does it connect to the broader intellectual landscape?
    • Is this post epistemically sound?
    • How could it be improved?
    • What further work would you like to see people do with the content of this post?
  • A good frame of reference for the reviews is shorter versions of LessWrong or SlateStarCodex book reviews (which do a combination of epistemic spot checks, summarizing, and contextualizing)
  • Authors are encouraged to engage with reviews:
    • Noting where they disagree
    • Discussing what sort of followup work they'd be interested in seeing from others
    • Ideally, updating the post in response to critique they agree with

Voting Phase

1 week (Jan 1st – Jan 7th)

Posts that got at least one review proceed to the voting phase. The details of this are still being fleshed out, but the current plan is:

  • Users with 1000+ karma rate each post on a 1-10 scale, with 6+ meaning "I'd be happy to see this included in the 'Best of 2018' roundup" and 10 meaning "this is the best I can imagine"
  • Users are encouraged to (optionally) share the reasons for each rating, and/or share thoughts on their overall judgment process.

Books and Rewards

Public Writeup / Aggregation

Soon afterwards (hopefully within a week), the votes will all be publicly available. A few different aggregate statistics will be available, including the raw average, and potentially some attempt at a "karma-weighted average."
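The post doesn't specify how a "karma-weighted average" would be computed, so here is a minimal illustrative sketch under an assumed scheme: the `aggregate_votes` function, the log-of-karma weighting, and the sample vote data are all hypothetical, not the site's actual implementation.

```python
import math

def aggregate_votes(votes):
    """Compute a raw and a karma-weighted average of 1-10 ratings.

    votes: list of (rating, voter_karma) pairs.
    """
    ratings = [rating for rating, _ in votes]
    raw_average = sum(ratings) / len(ratings)

    # Assumed weighting: log10 of karma, so a 10,000-karma voter counts
    # about twice as much as a 100-karma voter, rather than 100x as much.
    # (All voters here have 1000+ karma per the eligibility rule.)
    weights = [math.log10(max(karma, 10)) for _, karma in votes]
    weighted_average = (
        sum(r * w for r, w in zip(ratings, weights)) / sum(weights)
    )
    return raw_average, weighted_average

# Hypothetical votes: (rating on the 1-10 scale, voter karma)
raw, weighted = aggregate_votes([(9, 15000), (4, 1200), (6, 1000)])
```

Under this assumed scheme, the high-karma voter's rating of 9 pulls the weighted average above the raw average; a sublinear weight like log-karma is one way to reward established contributors without letting any single voter dominate.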

Best of 2018 Book / Sequence

Sometime later, the LessWrong moderation team will put together a physical book (and online sequence) of the best posts and most valuable reviews.

This will involve a lot of editor discretion – the team will essentially take the public review process and use it as input for the construction of a book and sequence. 

I have a lot of uncertainty about the shape of the book. I'm guessing it'd include anywhere from 10-50 posts, along with particularly good reviews of those posts, and some additional commentary from the LW team.

Note: This may involve some custom editing to handle things like hyperlinks, which may work differently in printed media than online blogposts. This will involve some back-and-forth with the authors.


  • Everyone whose work is featured in the book will receive a copy of it.
  • There will be $2000 in prizes divided among the authors of the top 3-5 posts (judged by the moderation team)
  • There will be up to $2000 in prizes for the best 0-10 reviews that get included in the book. (The distribution of this will depend a bit on what reviews we get and how good they are)
  • (note: LessWrong team members may be participating as reviewers and potentially authors, but will not be eligible for any awards)
91 comments

I've added support for this on GreaterWrong; you can view nominated posts here and all 2018 posts here.


What is the plan for incorporation of comments into the book?

I'm guessing for most posts they'll just be omitted, and it'll be fine (or perhaps some curated selection of comments will make it into the book). But I notice that Unreal's Circling seems to be a historically relevant post that I would only want to endorse if it came along with a substantial fraction of the discussion in the comments (in a way that would dramatically lengthen its section, possibly 'taking over' the book).

I hunted your comment down here and upvoted it strongly.

I basically only write comments, and when I write "comments for the ages" that I feel proud of, I consider it a good sign if they (1) get many upvotes (especially votes that arrive after lots of competing sibling comments already exist) and (2) do not get any responses (except "Wow! Good! Thanks!" kind of stuff).

Looking at "first level comments" to worthwhile OPs according to a measure like this might provide some interesting and reasonably brief postscripts.

Applying the same basic measure to posts themselves, if an OP gets a large number of direct replies that are highly upvoted that OP may not be dense with relatively useful and/or flawless content. (Though there are probably exceptions that could be detected by thoughtful curating... for example, if the OP is a request for ideas then a lot of highly voted comments are kinda the point.)

A lot of the details are up in the air – over the next week I plan to write out a lot of my thoughts and open questions about the review process, and how it should feed into the overall end product.

One option is to include a curated selection of comments from the post. Another is to leave that up to reviewers, distilling those comments down into a more succinct encapsulation of them. In some cases it might be that the commenters "got it right the first time", and basically wrote a fine "review-like comment" back in 2018, and there should be some way of marking an old comment as a review, retroactively. A middle ground might be something like "in addition to summarizing key points from the previous discussion, reviewers can point to particular comments that seem worth including".

In the end, the editors will make some judgment calls about how much fits – we definitely wouldn't include the entire comment section of Circling. My guess is that the upper bound of "amount of comments and/or reviews from a given post to include" is roughly the same as "the upper bound for a post." (In some cases posts are quite long, but I'd expect the median comments/reviews length to be comparable to the median post length.)
I would like to see some comments considered for inclusion – those that expand on a post in some way (the Circling post is a good example). Also, I read Slack; I liked it, and then G Gordon Worley's comment brought a new dimension to the concept and expanded my 'knowledge base' about things I've not really thought about.
We have a draft book that tried to do this for some posts on LessWrong. If you ping Ben you can probably take a look at it if you want. 

Occasionally I think about writing a review, but then feel like I'm too confused to do so.

Some of my open questions:

  • I'm unsure of what to write. The post says that "A good frame of reference for the reviews are shorter versions of LessWrong or SlatestarCodex book reviews (which do a combination of epistemic spot checks, summarizing, and contextualizing)", but this feels like weird advice for reviewing a blog post, which is much shorter than a book. Especially the "summarizing" bit - for most posts the content is already too short for further summarizing to make sense. This guideline confuses me more than it helps.
  • If I just ignore the guideline and think about what would make sense to me, it would be... something like my longer nomination comments. But I already posted those as nominations. Should I re-post some of them as reviews? That seems silly.
  • I don't know which posts I should review. I won't have the chance to review all of them, so I should pick just a few. But which ones? The post says "Posts that got at least one review proceed to the voting phase", which makes it sound like reviews are like nominations / votes; a post won't be included unless it gets at least one vote.
... (read more)
There should be a post coming up soon that goes into more examples of how to do Reviews. It's a bit of a tough question because different posts benefit from different types of reviews. A thing that I think is commonly useful is asking "what are the actual claims this post is making", listing them succinctly, and writing up some thoughts about how we could actually empirically check whether those claims are true. (Even if we don't actually run the experiment, I think operationalizing what observations we'd expect in the world is helpful for evaluating when/why/whether the post is valid.)
One of the key ideas here is that I'd like posts to have gotten someone to "look into the dark". If the post wasn't as useful as it seemed, how would we know? If 10 years from now you no longer endorsed the post, why might that be?
Here's a review of mine that I think is pretty representative of the sort of review that I, personally, am most excited about.

Perhaps worth noting (ironically)

I just went to begin looking over the 2018 posts, thinking about my own nominations. I was immediately hit with a bit of paralysis of "aaah but I don't even know what standards to employ here – I feel like I want to take a long time to think about all the posts I might want to nominate and how they compare and how they fit into the big picture" (plus, a bit of Pat Modesto whispering in my ear saying "who are youuuu to decide what posts are good!?")

And, well, if I'm experiencing that it seemed like others might be as well. 

So, wanted to explicitly note: I think this process will be more fruitful (as well as more fun) if it's more like an evolving conversation than a bunch of people silently thinking independently. A lot of the value is in getting old posts back into the public spotlight in a concentrated way.

So, I'd err on the side of going ahead and nominating things that seem good – you can retract the nomination later if you feel like it was a mistake. You can also start with a relatively brief nomination-endorsement-message that gives the rough gist of why a post was valuable, and later follow it up with a more extensive message when you have time.

So, I don't necessarily think that all the details of this belong in the 2019 books, but... y'know, this is LessWrong, things just don't feel complete without a few levels of meta thrown in.

god damn it

...not even obviously wrong. If you're not gonna review your review process during your review process, when ARE you going to review it? (JK. We already reviewed it. #hashtagOzymandias) That said I'm kinda hoping we don't have to review Welcome to LessWrong
I dunno, if our About page makes it into the next book, that'll save us effort writing the preface.

Update: Posts need at least 2 nominations to proceed to the Review Phase.

I initially left this requirement as a somewhat vague "sufficient nominations", because I wasn't sure how many people would be engaging with the process and how thoroughly. I'm less worried about that now, and meanwhile I think there's a fairly substantial shift between "at least one person liked this and took time to say so" and "at least two people liked it."

(The goal is still to have the Review Phase include 50-100 posts, which could potentially mean the nomination-requirement

... (read more)

Is there some way I can see all the posts I upvoted in 2018 so I can figure out which I think are worthy of nomination?

Compiling the results into a physical book. I find there's something... literally weighty about having your work in printed form. And because it's much harder to edit books than blogposts, the printing gives authors an extra incentive to clean up their past work or improve the pedagogy.

Physical books are also often read in a different mental mode, with a longer attention span, etc. You could also sell it as a Kindle book to get the sa

... (read more)
Not currently – I agree that'd be a good feature, although there are probably a few other comparably good features worth building to improve the nomination UI experience, and I'm not sure if I'd get to them all this year. I'm not sure exactly, but I'd at least want clearer epistemic flags on things. (I can imagine a case where there are some posts that seem clearly important, but still have some questionable claims, and the author hasn't had time to update them. In that case, one option might be to include the work as-is, but follow it up with some commentary, either from a reviewer during the Review Phase or from one of the moderation team members.)

Quick update: the Review UI is almost ready but has a few kinks to work out before we merge it into production. Apologies for the delay.

Some open questions and meandering thoughts on 'What exactly do we want out of the Review Phase?'

There's a few different goals one might have for Review. I think ideally I'd like all of them, but I'm not sure how much bandwidth people will have.

I see two broad ontologies for "what I want reviewers to do"

Ontology A – What information do we want?

Different posts call for different types of evaluation. 

A post that makes a bunch of empirical claims should have at least some of those claims epistemic-spot-checked

A post that proposes ontologies and ca

... (read more)

I'm optimistic about the review process incentivizing high-quality intellectual engagement by means of "upping the stakes." Normally, if someone writes a bad post, I'm likely to just downvote or ignore it if I have better things to do with my time that day than argue on the internet.

But if someone writes a bad post and it gets multiple nominations to be included in a paper book allegedly representing the best my stupid robot cult has to offer, then that forces me to write a rebuttal, even though I'd kind of rather not, because I was planning on spending all of my spare energy this month on memoir-writing to help me process trauma and stop being so emotionally attached to this stupid robot cult that's bad for me. If other people feel the same way (higher stakes spur more effort), we could have some fruitful discussions that we otherwise wouldn't.

Thanks so much for organizing this! (Not sarcasm, actual sincere and enthusiastic thanks despite negative-valence words in previous paragraph.)

*expression of empathy for energy bottlenecks that force unfortunate tradeoffs.* One of my hopes for this process relates to a tradeoff: arguing on the internet often consumes a lot of time and energy that could be better spent on other things, but you do in fact need to argue on the internet (or something similar) in order to have healthy group epistemics. I'm hoping that concentrating overton-window fights by a) condensing them into a relatively brief month, and b) narrowing them down to "concepts that multiple longterm community members actually want to make bids for community attention/endorsement of", can get us a better cost/benefit ratio going forward.

overton-window fights

So, sorry in advance if I'm reading way too much into a casual choice of words, but—this is an incredibly ominous metaphor, right? (I'm definitely not blaming you for anything, because I've also used it in just this context, and it took me a while to notice how incredibly ominous it is.)

Maybe my rationality realism is showing, but I thought the premise and promise of the website is that there are laws of systematically correct reasoning as objective as mathematics—different mathematicians from different cultures might have different interests (like analysis or algebra or combinatorics) or be accustomed to different notations, but ultimately, they're all on the same cooperative quest for Truth—even if that cooperative process may occasionally involve some amount of yelling and crying.

("And being universals," said the Lady 3rd, "they bear no distinguishing evidence of their origin.")

The Overton window concept describes a process of social-pressure mind control, not rational deliberation: an idea is said to be "outside the Overton window" not on account of its being wrong, but on account of its being unacceptably unpopular. If a mathematician were to describe a

... (read more)
I think it's ominous if Raemon used the word with that intended meaning, but I'm guessing he didn't (and most people around here don't?). When I think "Overton window", I just think "what is considered reasonable to discuss without it being regarded as weird or extreme or requiring extreme evidence to overcome a very low prior", and think of the term as agnostic to how that got decided. In this sense, our community has an Overton window that definitely includes physics and history, presently really excludes Reiki and astrology, and perhaps has meditation/IFS on the border. I think overall the process by which we've ended up with this window has been much better than what most of broader society uses.

My understanding of Ray's comments about "concentrating Overton window fights" was just that now is a period when we'd communally debate (using the correct and normative laws of reasoning), more than usual, ideas which were as yet still contentious within the community, increasing consensus on whether they were good or not – based on their epistemic merits. ... It's a separate question what the best way to use the term "Overton window" is, on which I don't have a strong opinion at present.
This is roughly how I intended it. But, it's not a coincidence that the word has the history that it does, and did seem worth reflecting on at least briefly.
(note: I think this conversation is important, but part of the point of the review is to have a large number of similarly important conversations. I will probably reply a couple more times. My current guess is that my budget for such conversations this month is going to be better spent on the object-level review process, and/or building code that's "meta-level" to support the object-level process)

My off-the-cuff thought is that I agree with you about the shape of how this is worrisome, but probably disagree about its magnitude. (But, I notice as I say that that my brain is compressing magnitude into a region it can easily compare. i.e. It seems quite plausible the absolute magnitude of how "worrisome" this should be is 100, but my brain has 12 settings for importance and I've already compressed things down in a way optimized for comparing relevant plans and actions. i.e. if the fire alarm is always ringing, there's not much point in having a fire alarm.)

I think this depends on how the machine compares to other tools that calculate – if there are obviously better tools, you should probably use those. If those tools are strictly better, then the calculator should be abandoned. But if the calculator is currently the most accurate tool for calculating numbers, it probably makes more sense to continue using it (while looking for better tools). You can re-name it to "aspiring calculator", but in practice long names are clunky and hard to use on a day-to-day basis.

Sometimes you don't actually have a better option than implementing FizzBuzz in TensorFlow, or implementing rationality on mental architecture that's at least partially optimized for politics. There is a certain sense in which this should have you sitting bolt upright in alarm, but, again, a constant fire alarm isn't very useful. It's definitely an instance of Goodhart's law (which subtype(s) probably depends on the particular discussion). The question is "do we actually have better ideas
(actually, the thing I'm worried about here is that I expect this subthread to be much more enticing than figuring out the best answers to "how should the Review (and Voting) Phases be structured", despite the latter being much more actionably useful. And this seems like a concrete instance of "human brains are architected around politics, finding it easier to fight than to build, with 'overton-window-fight' being an unfortunately accurate description of what's going on a lot of the time")
I agree that focusing on the object-level review process is a much better use of your time than reacting to my perma-panicked cultural commentary. Happy to end this subthread here.
FWIW, I would be quite excited for you to devote thought to the "how to do a good review process?" question, if that's something you have in your motivation budget.

I also note that I'm looking afresh at many of my backburner post ideas, since getting them out before the end of December would mean they'd be available for review in 2020 instead of 2021. 

Hah. That's surprisingly amusing. One of the original seeds of the review idea came from a blogpost I once read arguing that the Oscars should be given out multiple years (preferably more like a decade) after a movie comes out, rather than the immediately following year. This would give the awards the benefit of hindsight – "okay, but which movies do you actually still like a decade later?" It'd also remove the weird incentive for "Oscarworthy" movies to come out in Nov/Dec.

I didn't think it made sense to do a full decade for the LW Review, because then you'd either go all the way back to the golden age (where, well, you have the sequences), or, if you did a half-decade, you'd have the Dark Times, where there wasn't all that much interesting stuff going on. But I still thought doing a full year would be enough to get some of the same effect.

But if, around December, people are like "oh shit, my blogpost that I think is actually going to be really good... I should write that now so I only have to wait one year!", that's a (somewhat) amusing way to accidentally reintroduce "Oscar Season Bias".
Note that the gap of a year cuts out a lot of recency bias, and I think availability favors posts in January (since some people will think they're going to go through all of 2018 in chronological order, and then maybe run out of steam at some point). So if all you cared about was winning, I think you'd actually want it to come out in January instead.
Yeah, I think getting rid of the recency bias is still good for exactly the reasons I'd intended, just amusing if there still turned out to be an "Oscar Season" anyway.

A couple of comments on nomination UI:

  • On mobile at least, the "nominate" pop-up has a "submit" button but not a "cancel" button, which is a little inconvenient in cases where I realize I'd like to go back to check some detail about the post that I'm nominating before I nominate it
  • It would be nice if existing nominations would have a button saying "endorse this nomination" or something, so if I essentially just agree with an existing nomination and don't have anything to add, I have an easy way to add another vote to it. Making a top-level comment saying
... (read more)
Yeah, I think something in that space makes sense for endorsing nominations.
Hmm, upon further reflection – I agree that better UI here would be helpful, but am wary of investing too much time into a UI element that will only be used for a week. (If we do this again next year, I'd definitely want to invest more in UI, but I put sizable probability on 'if we do it next year the whole process may be different'.) So, for the immediate future: my suggestion would be to just make a nomination that says "What Alice said, [link]". Especially because you might want your nomination to include "What Alice said [here], and what Bob said [here]", and it's a bit tricky to figure out how to count multiple endorsements. (This is a bit weird as a commenting experience but I think pretty fine for now – in any case you have my blessing to do a slightly weird commenting thing.)

I got an email about this, so I decided to check whether the quality of content here has really increased enough to claim to have material for a new Sequence (I stopped coming here after the, in my opinion, botched execution of LW 2.0).

I checked the posts, and I don't see anywhere near enough quality content to publish something called a Sequence, without cheapening the previous essays and what 'The Sequences' means in a LessWrong context.

(First, noting that if the site content isn't exciting, no worries. Thanks for at least checking it out and giving it another look – I appreciate it.)

I'd add to Habryka's comment that my longterm plan here is something like:

  • This year, we review the best posts of 2018. This turns into a fairly simple sequence that clusters relevant posts around each other, and helps people get a sense of the overall major conversation threads that happened in 2018. This sequence is meant to be "highly curated", but not meant to be thought of in the same terms as "The Sequences™". Sequence is just a generic term meaning "a collection of posts."
  • In the coming years, there's an additional step where some older posts are considered for something more like canonization, where they are actually added to a Major Updates sequence that's more in the genre of "The Sequences™", i.e. that everyone participating on the site is supposed to have read. This process is something I'd want to put a lot of care into, and my expectation is that there'd typically be 1-5 posts in any given year that I wanted to add to the site's common-knowledge-pool, and that I'd want multiple years to reflect on it.
Not even Local Validity? Note also that you can view this on GreaterWrong, with 2018 posts and nominated posts.
There are definitely some decent posts, but calling a couple of good posts an official LessWrong Sequence still seems to cheapen what that used to mean. Not to mention that I read about this on Facebook, so I barely associate it with here. Thanks – GreaterWrong still seems like an improvement over the redesign for me. I'm back to using it.
Huh, there must be some confusion going on. The goal is not to add another sequence to Rationality: A-Z, the goal is just to compile a sequence of the type of which we already have many (like Luke's sequence on the neuroscience of happiness, or Kaj's multiagent sequence, or Anna's game theory sequence, etc.). 
I note that in my mind R:AZ is a different thing from The Sequences; it's abridged, and in a different order, and there's a big difference between "posts arranged in an order" and "Eliezer unrolling and serializing the dependency tree for a concept."
Yeah, I agree with that, but it seemed like the best way to disambiguate in the above context. Though note that "The Sequences" itself refers to at least three different orderings and collections of posts, because the order of the posts was being actively edited on the wiki. So I don't think even that has a single coherent referent.

I'm curious about negative or neutral endorsements. That is, I'm going through and looking at posts and thinking "should this be in the review? Why or why not?", and sometimes the answer comes back "no" for somewhat interesting reasons.

The example that prompted this question is Write a Thousand Roads to Rome. It's a clear statement of an important pedagogical point, but it's an exhortation to action that I don't think moved the community all that much (from my vantage point). If I want people to read it now, it's more because "hey, here's some advice we st


Personal/meta/process note:

I've particularly liked looking for posts to nominate, because it's revealed to me ideas that I now think should inform my thinking, but did not at the time. As such, it's somewhat sad that these posts are (as I understand it) not the sort of things I should nominate for the "Best of 2018", and I wish I had another way to signal-boost them, perhaps by nominating them for "Under-rated of 2018". (I guess I could just comment on them, but that doesn't seem like the sort of thing comments are for).

I'm uncertain about the best process here (this entire review is a bit of an experiment, and I think it's fine to tweak rules on the fly). I do think there's particular value in checking which things have actually been employed in some fashion, as opposed to just "seeming good."

I think it's probably fine to go ahead and nominate them, and in the nomination, note specifically if you haven't directly made use of them. One possible outcome is that this process reminds other people who have used them, and those people then write up their actual experiences. Another possible outcome is that we decide it's fine to include things that feel time-tested-if-not-actually-used, the way Vaniver described. Another option is simply that you bump them into people's public consciousness, and then they aren't included this year, but next year people have the opportunity to suggest older posts that had previously fallen through the cracks. (If we do this again next year, my current guess is that it'd involve not just "Best of 2019" but sort of an ongoing appraisal of the LW-o-sphere's intellectual landscape, where "Best of 2019" is the primary new focus but at least some thought is dedicated to older stuff.)

That all said... why ever not? That seems like a totally valid use of a comment.
I certainly agree, and think that it makes a lot of sense to reward posts that have been valuable to their readers, as well as spreading them so that they can provide that same value to those who haven't yet read them. Understood. I think that comments should be used for advancing discussions and/or providing info that can't be provided other ways. To me, a comment saying "this is a good post that you should read" communicates an upvote plus the identity of the upvoter, and therefore seems primarily a social move.
That sounds about right, but I think there are a few aspects that make that social move valuable:

a) On regular posts in regular circumstances, since comments are a bit higher effort than votes, and comments are at least somewhat more rewarding than votes (at least for me, as an author), I think it's good for at least a couple people to respond "this was great!". Writing a flawless excellent post and then receiving upvotes-but-crickets-chirping is a sort of sad experience. I think if 2-3 people have already written such a comment it gets a bit repetitive, but I think it's a fine norm.

b) There's a practical element to replying to a post, which is that it bumps the post to the top of recent discussion and gives it a bit more life. I think this is bad in excess, but fine in moderation – if a post is still good 2 years later, it's good to give it periodic spikes of attention.

c) In particular, a comment two years after the fact that says "I just found this after two years and it still seems good" conveys additional information beyond "I liked it" – it says something about how time-tested the content is.
I'd be interested in a poll on this, since I don't have this experience for comments that don't build on the content of the post.
Nod. This makes sense as a thing people might vary quite a bit on. (To be clear, I certainly get dramatically more value out of comments that actually engage.) It'd be pretty reasonable for you to throw up a question-post about it or something.
Have thrown up the question post.
I am most excited about this as a sort of "things that stood the test of time," whether by being sleeper hits or by being good then and good now.
Curious to hear examples of this.
  • This post on what academia is and isn't good at describes a true and important thing well (if somewhat verbosely), but didn't influence me, partly because I already believed it and partly because I didn't pay it much attention or thought at the time. There are quite a few examples of these.
  • This post on the complete class theorems described in a clear way some foundational arguments about the wisdom of using probability theory and decision theory, and how they could be extended. It hasn't made its way into my thought about other things, and I don't think about it that much, but I'm glad I have the concept, and the post is a good reference for it.

The link to the 2018 posts sorted by karma is not working correctly for me; it redirects me to /allPosts for some reason.

I've updated some formatting in the links, see if it works now.
Still not sure what's causing the problem, but here are the direct links to the pages in question. (People who are having trouble – if you enter these directly, does it work?)

https://www.lesswrong.com/allPosts?after=2018-01-01&before=2019-01-01&limit=20&timeframe=monthly&includeShortform=false&reverse=true

https://www.lesswrong.com/allPosts?after=2018-01-01&before=2019-01-01&limit=100&timeframe=allTime
Those both work.
That's correct. It uses a set of URL parameters (all the weird stuff after the "/allPosts") to restrict the posts to the year 2018. We maybe should make the UI for that a bit clearer. 
No, I mean, it redirects me to https://www.lesswrong.com/allPosts with the weird stuff stripped out, and shows me all posts, not sorted by karma and including the one that was posted eight hours ago and so on.
This is also happening to me
Hmm. Super weird. Can you guys all share browser information? (either here or in a PM/intercom would be fine) Meanwhile, if you right-click on the link and choose "copy link" and paste it into your browser, does that work?
Chrome, MacBook.
Google Chrome for Android, 78.0.3904.108.
(This should by the way be fixed now, let us know if you still experience this problem)
I had this happen to me as well. Firefox 70 on Ubuntu 18.04
It does not - it still strips it all out and redirects me. Chrome on a Macbook Pro.
I got that when I first followed the links from this page, then re-opened them and then it took me to the right version. No idea what made the difference.

Could the LW team clarify how long and in-depth the nomination text should be?

The perhaps somewhat-annoying answer is "at least some text is better than none, but longer and more in-depth is better." Part of the hope here is to build out a pretty comprehensive picture of how a concept went on to get used. The nomination I just wrote for Decoupled vs Contextualization is probably what I'm expecting/hoping for in terms of median length. (Having more details about specific conversations where the concept was useful would make it a more useful nomination, though. If you have pages worth of thoughts, go for it.) (But again, just writing a sentence or two is preferable to not-that, if you're busy, and you can edit it or reply to it later if you have time.)

For what it's worth, I bid for the review prizes to be based off of people voting for which reviews were useful. The alternatives, and why I think they're worse:

  • Karma mixes "I found this review useful" and "I already agree with this review but am glad somebody said it", which can reward things which everybody already knew (but I guess both components are important).
  • Moderator's picks have the problem that moderators suffer from the curse of knowledge, and may not be in touch with what's useful for the average voter.
I'm a bit worried that if you let users vote on reviews, you'll mostly get something identical to karma.
Hopefully it's different if you explicitly say "vote for helpful reviews, not just reviews that you agree with", or if you have one button for "I agree with this review" and a different button for "This review was helpful for my assessment of the post" (and it's possible to select both buttons).
Mmm, interesting point.

I alluded to this in this comment, but wanted to put it a bit more clearly:

I think it makes sense to think of The 2018 Review as like "an academic journal", where you submit ideas, and if the ideas seem valuable they get included in a curated work – but not a work that everyone is expected to have read.

By contrast, Rationality A-Z is more like "a textbook", which is foundational to the field. My current best guess it'll make sense for next year's review process to include considering which things make sense to add to a sequence that's similar in scope to R


What do you think about doing this for 2017 and years previous?

That was actually the original plan, but we decided that this process was complicated enough (at least as a first attempt) with the (relatively) narrow target of 2018. My guess is that in future years, once this process has gelled into something everyone understands and the kinks are ironed out, it will include some kind of mechanism for including older works.

I think I have enough karma, but I can't figure out where the nomination button is. Could someone share a screenshot?

Also available:  
Hmmm, am I doing something wrong? My karma: What I see when I click the three dots on a page:
That post isn't from 2018
Ack! My error! I see now.

Is the intent, in the review phase, to display the number of nominations received (which will impact which posts get reviewed), or not (which would fail to surface information I'd find useful for forming a reading list from the posts that have been nominated by enough people)?

Number-of-nominations will probably be added as a UI element within the next day or so, and the fact that it's not there right now is mostly because of time-budgeting problems.

It would be nice if you could link your "Best of 2018 sequence" text to the actual results of this process...

I am assuming that this outcome was actually reached?

Searching in the searchbox for "best of" gets me best quotes and this article, but not the "best of 2018" sequence.

Ben Pace:
Hey, actually no, we're currently reviewing 2018's posts. We've waited a year in order to give everyone the power of hindsight to figure out what was actually good. Btw, I think you might be the same user as user taryneast. That account is eligible to vote. I suggest logging in with that account, or contacting us via intercom (bottom right corner of the screen) if you'd like to reset your password.
Taryn East:
Yes, that is my account, but I no longer have access to that email address, so I can't get a standard password reset. I've been out of the LW community for a bit due to a baby (single parenthood is hard). Maybe there's a way to get back my access anyway – I'll look into it, but it's not high priority ;)
The process isn't finished yet – it'll hopefully complete sometime in January of next year.