We're halfway through the second annual review, and 121 posts have been nominated.

We've had more than double the number of individual nominations as last year, but on reviews we're still playing catch-up. Last year we had 118 reviews, yet this year we've only had 51 so far.

When there are so many posts, it can be daunting to figure out which ones to review, so to help out, I'm making this thread. Every comment on this thread will correspond to a post, and you should vote on which ones you would like to read a review of.

Ideally, a review puts a post in the context of a broader conversation, describes its key contributions, its strengths and flaws, and where more work can be done.

(Or something else. Many people who write self-reviews give a different flavor of review. And I've read many great short reviews; e.g., Jameson Quinn and Zvi last year did a lot of short reviews that communicated their impressions of the posts quite clearly.)

So I'm going to leave 122 comments on this post. 121 comments will each contain just a post title, and the other one will be for thread meta. (Search "Meta Thread".) I will remove my own votes from them, so they all start at zero.

Please vote on the comments to show how much you'd like to see reviews of different posts! Feel free to add a comment about what sort of review you'd like to see.

(Yes, I will probably get a lot of karma from this thread. Mwahaha you have fallen for my evil trap.)

(Also, my thanks to reviewers magfrump and Zvi with 5 reviews each, johnswentworth with 6, and fiddler with 10 (!), all thoughtful and valuable.)

I'm specifically interested in a review of this post by someone who found these scenarios novel.

Risks from Learned Optimization: Introduction by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant.

Matt Goldenberg (2y):
This is an excellent review. One thing it could do better is in the vein of epistemic spot checks: pointing out places where the authors' conjectures are far ahead of the science. For instance, AFAICT, memory reconsolidation has only been demonstrated for up to a few weeks in mouse models, but the book talks about using it to reconsolidate childhood memories.
Matt Goldenberg (2y):
I think there's a lot of value in this post articulating a certain frame. I don't know how this works since it's a link post, but I would love to see a review that more explicitly points at the frame, how it's useful and not useful, and points out the ways the author could make the frame more explicit.

Thoughts on Human Models by Ramana Kumar, Scott Garrabrant.

Matt Goldenberg (2y):
I made a comment when that post first came out saying that I thought it was missing the mark on what contextualizing is actually trying to get at (in particular, focusing on the meanings of language rather than the consequences of language use). I think this whole sequence by Zach is at its best when it focuses on the upsides of decoupling, and at its worst when it tries to explain contextualizing, and I would like to see a review that covers both those strengths and weaknesses.
Matt Goldenberg (2y):
I would love to see someone review this post! In particular, there was a critique about "falsifiability"; I would love to hear exactly what's unfalsifiable, and see those problems addressed in a subsequent edit.
I'm surprised by the presence of negative scores on some posts. Should this be interpreted as "please do not review this post" or as "please get to the others first"? All three seem to be good explanatory pieces in and of themselves; I wonder if there is a kind of performance penalty where, if a post does not seem like it would benefit much from the review process, it gets pushed to the back of the line. This isn't really bad, in fact it seems fairly efficient; I just didn't expect it.
Ben Pace (2y):
+1, was a bit surprised. Don't think it matters too much, except I mildly think it increases the chance those posts get reviewed.
Specifically interested in reviews covering related work, replication, follow-up, etc.
I (weakly) think this one is probably more important than the progress studies post on concrete.
Matt Goldenberg (2y):
I made some critiques of this sequence when it first came out, related to the implicit framing that specificity is always good and generality is always sloppy (this is a frame; I don't think it's stated explicitly). Similar to my comment about Zach's sequence, I think this sequence is at its best when talking about the benefits of specificity, and at its worst when talking about the problems with non-specificity. I'd like to see a review that highlights these merits while pointing out the missed perspectives. Bonus points if it relates to some of the notions about withholding specificity discussed in alkjash's post on Babble, which was included in last year's review.

Circle Games by sarahconstantin.