I agree it would be good to add a note about push polling, but it's also good to note that the absence of information is itself a choice! The most spare possible survey is not necessarily the most informative. The question of what counts as a neutral framing is a tricky one, and a question about the future that deliberately does not draw attention to responsibilities is not necessarily less push-poll-y than one that does.
One good idea to take out of this is that other people's ability to articulate their reasons for their belief can be weak—weak enough that it can distract from the strength of evidence for the actual belief. (More people can catch a ball than explain why it follows the arc that it does).
I agree that including just the final paragraph would be a good idea; otherwise, I don't think this passes my bar for "worth including as best-of."
Building off Raemon's review, this feels like an attempt to make a 101-style point that everyone needs to understand if they don't already (not as rationalists, but as people in general). But it seems to me that it fails, because its readers will fall into two categories: (1) those who already got it, and (2) those who need to get it but won't.
This is a very important point to have intuitively integrated into one's model, and I charge a huge premium to activities that require this kind of reliability. I hope it makes the cut.
I also note that someone needs to write The Costs of Unreliability and I authorize reminding me in 3 months that I need to do this.
Given all of the discussion around simulacra, I would be disappointed if this post weren't updated in light of it.
I've already written a comment with a suggestion that this post needs a summary so that you can benefit from it, even if you don't feel like wading through a bunch of technical material.
S-curves are a concept that I use frequently.
I would love to see a more concise version of this.
This is an excellent post - my only question is how accurately this translates the Buddhism, which is not something I'm qualified to have a strong opinion on. Nonetheless, it matches my limited understanding of meditation.
In addition to my general comments when I curated this piece... it turns out that understanding how distributed teams work was pretty important in 2020.
The people around me reason this way a lot, and I think it's for some reason really unintuitive for most people to start doing. This post is clearly written and I like it as an artifact I can point people to, rather than explaining the thing from scratch myself every time.
So, this was apparently in 2019. Given how central the ideas have become, it definitely belongs in the review.
I don't particularly like dragging out the old coherence discussions, but the annual review is partly about building common knowledge, so it's the right time to bring it up.
This currently seems to be the canonical reference post on the subject. On the one hand, I think there are major problems/missing pieces with it. On the other hand, looking at the top "objection"-style comment (i.e. Said's), it's clear that the commenter didn't even finish reading the post and doesn't understand the pieces involved. I think this is pretty typical among people who object...
I think one of my favorite things about LW is that it has a clear-eyed view of the future: things will be different, and we should pick which way to make them different. While I don't think the theory of change underlying this specific proposal is there, I think it's important to have these sorts of proposals around, and to be the sort of people who share these proposals instead of writing them off. I think I've moved more in this direction over the intervening year, in part because of how positive my reaction was to this post.
Everybody knows this post belongs in the 2019 Review.
I use this concept often, including explicitly thinking about which (roughly) five words I want to be the takeaway, or that would deliver the payload, or that I expect to be the takeaway from something. I also think I've linked to it quite a few times.
I've also used it to remind people that what they are doing won't work because they're trying to communicate too much content through a medium that does not allow it.
A central problem is how to create building blocks that have a lot more than five words, but where the five words in each block can do a reasonable substitute job when needed.
This post steps into a larger picture than what I see as normal rationality-style optimization of life. I think on the margin people do far too little of this sort of dive into their motivations.
Really what I want is for Kaj's entire sequence to be made into a book. Barring that, I'll settle for nominating this post.
Vaniver has said most of the things I want to say here, but there are some additional things I want to say:
I think building models of the mind is really hard. I also notice that, in myself, building models of the mind feels scary in a way that often prevents me from thinking sanely in many important situations.
I think the causes of why it feels scary are varied and complicated, but a lot of it boils down to the fact that in order to model minds, a purely physically reductionistic approach is often difficult, and my standards for evidence often...