I think the truthseeking norms on LW are specifically useful for collective sense-making and sharing ideas about things where there isn't already a very confident consensus. As an example, during the COVID pandemic, many of the other groups of intellectuals who were trying to figure things out operated under various limitations. Those limitations make it difficult to use their output, at least if you are a layperson. Such constraints are less common in well-argued LW discussions about most topics.
If you want to learn something, usually the best sources are far from Lesswrong. If you're interested in biochemistry, you should pick up a textbook. Or if you're interested in business, find a mentor who gets the business triad and throw stuff at the wall till you know how to make money.
And yet, Lesswrong has had some big hits. For instance, if you had just invested in everything Lesswrong thought might be big over the past 20 years, you'd probably have outperformed the stock market. And while no one got LLMs right, the people who were the least wrong seemed to cluster around Less Wrong. Heck, even superforecasters kept underestimating AI progress relative to Lesswrong. There's also Covid, where Lesswrong picked up on the signs unusually early.
So Lesswrong plausibly has some edge. But what is it? And why that edge in particular?
__________________________________________________________________
Potential answers from a conversation I had recently:
Theory 1: Lesswrong stacked nearly all its points into general epistemic rationality, and relatively few into instrumental rationality.
This is not a good fit for areas which have stable structure, low complexity, and fast, low-noise, cheap feedback loops. E.g. computer programming, condensed matter physics, etc. Neither is it a good fit for areas which require focusing on what's useful rather than what's true. E.g. business, marketing, politics, etc.
It is useful for: things that have never happened before, are socially taboo to talk about, or require general reasoning ability.
I think this theory has some merit. It explains the aforementioned hits and misses of Lesswrong fairly well. And other hits like the correspondence theory of truth, the subjective view of probability, bullishness on prediction markets, etc. It may also explain the failures at getting the details right, since that requires tight coupling to reality (?).
But one must beware the man of one theory.
Theory 2: Selection effects. Lesswrong selected for smart people.
This implies other smart groups should've done as well as Lesswrong. Did they? Take forecasters. I don't think forecasters outperformed Lesswrong on big AI questions, like whether GPT-4 would be so capable. That said, they do mostly match or exceed Lesswrong in the details. Or take physicists. As far as I'm aware, the physics community didn't circulate early warnings about Covid. (A potential test: did CS professors notice the impact and import of crypto early on?)
Conversely, Lesswrong had some fads that typical smart people didn't. Like nootropics, which basically don't work, aside from stimulants.
Theory 2.1: Theory 2 + Lesswrong selected for interest in big questions, the future, and reasoning.
In other words, Lesswrong is a bunch of smart people with idiosyncratic interests, and they do better than other groups at guessing what is going to happen in those areas. Likewise, other groups of smart folks will do better than the norm at their own autistic special interests. E.g. a forum of smart bodybuilders would know the best ways to get huge.
Consider Covid in this context. Lesswrong, and EAs, are very interested in existential risks. Pandemics are one such risk. So Lesswrong was primed to pay attention to signs of a big potential pandemic and take action accordingly. One nice feature of this theory is that it doesn't predict Lesswrong would be any better at predicting how the stock market would react to Covid. IIRC, we were all surprised at how well the market did.
So it isn't so much a matter of "being more sane" as of actually bothering to pay attention. Like crypto. Wei Dai, Hal Finney and others were important early contributors to Lesswrong, and got the community interested in the topic. Lesswrong noticed, and had a chance to go "yeah, this makes sense" when other groups didn't. Yes, many didn't. But relatively speaking, I think Lesswrong did well. Though this was before my time on this site, and I'm relying on hearsay.
One possible issue: why did Lesswrong pay attention to the big questions in the first place? Perhaps that's because of founder effects. EY and Robin Hanson emphasized big, important questions, which shaped the community's interests accordingly.
Which theory is right? I'm not sure. For one, these theories aren't mutually exclusive. Personally, I am a bit doubtful of theory 1, in part because it plays to my ego. Plus, it's suspicious that I can only point to a few clear, big epistemic wins.
Of course, I could spend 5 minutes actually thinking about tests that discriminate between these theories. But I've got to get this post done soon, and I think you all probably have more ideas and data that I'm missing. So, what is Lesswrong good for, and why?