LessWrong is currently doing a major review of 2018 — looking back at old posts and considering which of them have stood the test of time. Info about what features we added to the site for writing reviews is in December's monthly updates post.

There are three phases:

  • Nomination (completed)
  • Review (ends Dec 31st [EDIT: Jan 13th])
  • Voting on the best posts (ends January 7th [EDIT: Jan 13th])

We’re now in the Review Phase, and there are 75 posts that got two or more nominations. The full list is here. Now is the time to dig into those posts, and for each one ask questions like “What did it add to the conversation?”, “Was it epistemically sound?” and “How do I know these things?”.

The LessWrong team will award $2000 in prizes to the reviews that are most helpful to them for deciding what goes into the Best of 2018 book. 

If you’re a nominated author and for whatever reason don’t want one or more of your posts to be considered for the Best of 2018 book, contact any member of the team - e.g. drop me an email at benitopace@gmail.com.

Creating Inputs For LW Users' Thinking

The goal for the next month is for us to try to figure out which posts we think were the best in 2018. 

Not which posts were talked about a lot when they were published, or which posts were highly upvoted at the time, but which posts, with the benefit of hindsight, you're most grateful were published and which are well suited to be part of the foundation of future conversations.

This is in part an effort to reward the best writing, and in part an effort to solve the bandwidth problem (there were more than 2000 posts written in 2018) so that we can build common knowledge of the best ideas that came out of 2018.

With that aim, when I'm reviewing a post, the main question I'm asking myself is

What information can I give to other users to help them think clearly and accurately about whether a given post should be added to our annual journal?

A large part of the review phase is about producing inputs for our collective thinking. With that in mind, I’ve gathered some examples of things you can write that help others understand posts and their impacts.

1) Personal Experience Reports

There were a lot of these in the nomination phase; I found them really useful and would love to read more. Here are some examples:

Raemon:

This post... may have actually had the single-largest effect size on "amount of time I spent thinking thoughts descending from it."

johnswentworth:

This post (and the rest of the sequence) was the first time I had ever read something about AI alignment and thought that it was actually asking the right questions. It is not about a sub-problem, it is not about marginal improvements. Its goal is a gears-level understanding of agents, and it directly explains why that's hard. It's a list of everything which needs to be figured out in order to remove all the black boxes and Cartesian boundaries, and understand agents as well as we understand refrigerators.

Swimmer963:

Used as a research source for my EA/rationality novel project, found this interesting and useful.

David Manheim:

Until seeing this post, I did not have a clear way of talking about common knowledge. Despite understanding the concept fairly well, this post made the points more clearly than I had seen them made before, and provided a useful reference when talking to others about the issue.

Eli Tyre:

One of my favorite posts, that encouraged me to rethink and redesign my honesty policy.

ryan_b:

I have definitely linked this more than any other post.

More detail is also really great; I'd definitely encourage the above users to be more thorough about how the ideas in these posts impacted them. Here's a nomination that went into a lot more depth on that.

jacobjacob:

In my own life, these insights have led me to do/considering doing things like:
• not sharing private information even with my closest friends -- in order for them to know in future that I'm the kind of agent who can keep important information (notice that there is the counterincentive that, in the moment, sharing secrets makes you feel like you have a stronger bond with someone -- even though in the long-run it is evidence to them that you are less trustworthy)
• building robustness between past and future selves (e.g. if I was excited about and had planned for having a rest day, but then started that day by work and being really excited by work, choosing to stop work and decide to rest such that different parts of me learn that I can make and keep inter-temporal deals (even if work seems higher ev in the moment))
• being more angry with friends (on the margin) -- to demonstrate that I have values and principles and will defend those in a predictable way, making it easier to coordinate with and trust me in future (and making it easier for me to trust others, knowing I'm capable of acting robustly to defend my values)
• thinking about, in various domains, "What would be my limit here? What could this person do such that I would stop trusting them? What could this organisation do such that I would think their work is net negative?" and then looking back at those principles to see how things turned out
• not sharing passwords with close friends, even for one-off things -- not because I expect them to release or lose it, but simply because it would be a security flaw that makes them more vulnerable to anyone wanting to get to me. It's a very unlikely scenario, but I'm choosing to adopt a robust policy across cases, and it seems like useful practice

A special case here is data from the author themselves, e.g. “Yeah, this has been central to my thinking” or “I didn’t really think about it again” or “I actually changed my mind and think this is useful but wrong”. I would generally be excited for users to review their own posts now that they've had ~1.5 years of hindsight, and I plan to do that for all the posts I've written that were nominated.

If a post had a big or otherwise interesting impact on you, consider writing that up.

2) Big Picture Analysis (e.g. Book Reviews)

There are lots of great book reviews on the web that really help the reader understand the context of the book, and explain what it says and adds to the conversation.

Some good examples on LessWrong are the reviews of Pearl's Book of Why, The Elephant in the Brain, The Secret of Our Success, Consciousness Explained, Design Principles of Biological Circuits, The Case Against Education (part 2, part 3), and The Structure of Scientific Revolutions.

Many of these reviews do a great job of things like

  • Talking about how the book fits into the broader conversation on that topic
  • Trying to pass the ITT of the author by explaining how they see the world
  • Looking at that same topic through their own worldview
  • Pointing out places they see things differently and offering alternative hypotheses.

An example of a review of LessWrong posts is that time Scott Alexander reviewed Inadequate Equilibria. Oh, and don’t forget that time Scott Aaronson reviewed Inadequate Equilibria.

Many of the posts we’re reviewing are shorter than most of the reviews I linked to, so the format doesn’t apply literally, but much of the spirit of these reviews carries over. Also check out other short book reviews and consider writing something in that style (e.g. SSC, Thing of Things).

Consider picking a book review style you like and applying it to one of the nominated posts. 

3) Testing Subclaims (e.g. Epistemic Spot Checks)

Elizabeth Van Nostrand has written several posts in this style.

For another example, in Scott's review of Secular Cycles, one way he tried to think about the ideas in the book was to gather a bunch of alternative data sets on which to test some of the author’s claims.

These aren't meant to be full reviews of the entire book or paper, or advice on how to judge it overall. They take narrower questions that are definitively answerable, like whether a random sample of testable claims is literally true, and answer them as fully as possible.

If there is an important subclaim of a post that you think you can check, consider trying to verify or falsify it and writing up your results, even partial ones.

Go forth and think out loud!

Comments

Update: We knew the Review Phase would need at least a month. Upon reflection, it was pretty silly to expect a lot of work during that month to happen while people were traveling for holidays.

So, we've decided to extend the Review Phase another two weeks.

By the end of this week we also aim to launch the voting system (see this post and some of its comments for some current ideas about that). The plan now is for the Review Phase and the Voting Phase to overlap each other (both ending on January 13th).

The voting phase will be 2 weeks long, and halfway through we'll announce a snapshot of what the results would be if the current votes were tallied. (People can update their votes as often as they want until the 13th.)

People are welcome to write new reviews and edit posts in response to that halfway voting snapshot. (The idea is for the final two weeks of the Review Phase, and the two weeks of Voting, to be more like a conversation people can respond to than an immediate, final decision.)

The Review Phase is a bit of an evolving process – I'm expecting us to learn over the course of the month what sort of reviews are most helpful.

One explicit update I made since last week is shifting the Review Phase from "write up whether you think this post should be included in the book" to "focus on providing information to other people who are evaluating the post."

The "judge" mindset seemed to be outputting less useful content than the "provide information to help evaluate" mindset. 

I do think including notes about what you think should be included in the book is still valuable, but is something it makes more sense to do after you've spent some time in "evaluate and add information" mode.

Yeah, that's the central question for the voting phase, which comes after the reviewing phase.