I think the truthseeking norms on LW are specifically useful for collective sense-making and sharing ideas about things where there isn't already a very confident consensus. As an example, during the COVID pandemic, many of the other groups of intellectuals who were trying to figure things out had various limitations:
That makes it difficult to use their output, at least if you are a layperson. Those constraints are less common in well-argued LW discussions about most topics.
Yeah, "collective sense-making" feels right to me. Individual aspiring rationalists sometimes say crazy things, but the rest of the group usually corrects them when they do.
As opposed to (in my opinion typical) situations outside of Less Wrong where:
So either the truths do not appear, or the falsehoods do not disappear. Basically a group of normies is usually approximately as smart/sane as its highest-status member. Aspiring rationali...
The edge of LW over a specialist textbook is, in my experience, about the audience that content is written for. All writing is shaped to some extent by the profile of the expected reader. The expected reader on LW is likely to be bright, curious, and epistemically exacting, while not being a specialist nor intending to specialize in the field of any particular piece of writing.
Intermediate and advanced textbooks get to assume that the reader has already invested hours and years into comprehending the foundational materials of their field, so insights in them are less accessible to bystanders. Also, textbooks tend to prioritize sharing all of the relevant information about a topic, instead of only the novel information or only the useful information.
News articles, by contrast, over-index on novelty, and are written for a target audience that is expected to value entertainment and validation over technical precision.
I suspect that the process of distilling specialist knowledge into posts appealing to this expected reader is itself a good sieve for capturing which specialist insights lie in the intersection of novelty, explainability, and usefulness.
LW's other edge over textbooks is timeliness of information -- the lower expectations for a blog post vs. a peer-reviewed article allow faster publication and a greater volume of candidate posts, from which the community's voting can then filter and highlight the ones that look great to the most people.
Lesswrong is very good at taking known facts to their predictable conclusion, even when there isn't a society-wide consensus on them. Especially when the conclusion is outside the norm. Examples include:
Currently unresolved examples might include:
How long in advance did they anticipate COVID?
If you want to learn something, usually the best sources are far from Lesswrong. If you're interested in biochemistry, you should pick up a textbook. Or if you're interested in business, find a mentor who gets the business triad and throw stuff at the wall till you know how to make money.
And yet, Lesswrong has had some big hits. For instance, if you had just invested in everything Lesswrong thought might be big over the past 20 years, you'd probably have outperformed the stock market. And while no one got LLMs right, the people who were the least wrong seemed to cluster around Less Wrong. Heck, even superforecasters kept underestimating AI progress relative to Lesswrong. There's also Covid, where Lesswrong picked up on signs unusually early.
So Lesswrong plausibly has got some edge. Only, what is it? And why that?
__________________________________________________________________
Potential answers from a conversation I had recently:
Theory 1: Lesswrong stacked all its points into general epistemic rationality, and relatively few into instrumental rationality.
This is not a good fit for areas which have stable structures, low complexity, and fast, cheap, low-noise feedback loops. E.g. computer programming, condensed matter physics, etc. Neither is it a good fit for areas which require focusing on what's useful, rather than what's true. E.g. business, marketing, politics, etc.
It is useful for: things that have never happened before, are socially taboo to talk about, or require general reasoning ability.
I think this theory has some merit. It explains the aforementioned hits and misses of Lesswrong fairly well. And other hits like the correspondence theory of truth, subjective view of probability, bullishness on prediction markets etc. And, perhaps, also failures involving getting the details right, as that involves tight coupling to reality (?).
But one must beware the man of one theory.
Theory 2: Selection effects. Lesswrong selected for smart people.
This implies other smart groups should've done as well as Lesswrong. Did they? Take forecasters. I don't think forecasters outperformed Lesswrong on big AI questions, like whether GPT-4 would be so capable. That said, they do mostly match or exceed Lesswrong in the details. Or take physicists. As far as I'm aware, the physics community didn't circulate early warnings about Covid. (A potential test: did CS professors notice the impact and import of crypto early on?)
Conversely, Lesswrong had some fads that typical smart people didn't. Like nootropics, which basically don't work besides stimulants.
Theory 2.1: Theory 2 + Lesswrong selected for interest in big questions, the future, and reasoning.
In other words, Lesswrong is a bunch of smart people with idiosyncratic interests, and they do better than other groups at guessing what is going to happen in those areas. Likewise, other groups of smart folks will do better than the norm at their own autistic special interests. E.g. a forum of smart body builders would know the best ways to get huge.
Consider Covid in this context. Lesswrong, and EAs, are very interested in existential risks. Pandemics are one such risk. So Lesswrong was primed to pay attention to signs of a big potential pandemic and take action accordingly. One nice feature of this theory is that it doesn't predict Lesswrong would do better at predicting how the stock market would react to Covid. IIRC, we were all surprised at how well it did.
So it isn't so much a matter of "being more sane" as of actually bothering to pay attention. Like crypto. Wei Dai, Hal Finney and others were important early contributors to Lesswrong, and got the community interested in the topic. Lesswrong noticed it and had a chance to go "yeah, this makes sense" when other groups didn't. Yes, many didn't. But relatively speaking, I think Lesswrong did well. Though this was before my time on this site, and I'm relying on hearsay.
One remaining issue: why did Lesswrong pay attention to the big questions in the first place? Perhaps that's because of founder effects. EY and Robin Handsome emphasized big, important questions, which shaped the community's interests accordingly.
Which theory is right? I'm not sure. For one, these theories aren't mutually exclusive. Personally, I am a bit doubtful of theory 1, in part because it plays to my ego. Plus, it's suspicious that I can only point to a few clear, big epistemic wins.
Of course, I could spend 5 minutes actually thinking about tests that discriminate between these theories. But I've got to get this post done soon, and I think you all probably have more ideas and data that I'm missing. So, what is Lesswrong good for, and why?