There is a general theme to a lot of the flashpoints I've witnessed so far on LW 2.0. We're having the usual disagreements and debates, like "what is and isn't allowed here" or "is it worse to be infantile or exclusive?". I think underlying a lot of these arguments are mistaken notions about how ideas get made. There is also a lack of self-awareness about how the open nature of LessWrong fundamentally caps how much trust can be expected among participants. These are systemic issues; they're not something Oliver and Ben can easily fix.

Let's start with the bit everyone knows: changing your mind, really changing your mind, is hard. It's stupid hard. A large part of why LW 2.0 might be useful is that it promises to help people think better. But in addition to its native difficulty, people are usually in a state of active defense against others getting them to change their minds. Even on LessWrong, people are generally more interested in reading things that come with the pleasant feeling of insight rather than homework. In one of his recent posts Eliot expressed nothing less than outright contempt for such people. The problem is that this behavior makes sense.

See, not only is changing your mind hard, it's dangerous. There are people who make their living off getting others to change their minds in dangerous ways, and a lot of our reluctance to perform weird mental gymnastics is totally reasonable in the face of that. Strange formal systems of interaction, weird fake frameworks that are supposed to work well anyway, bizarre ideas about what the real threats facing humanity are: look, I'm not saying this stuff is bad, but you have to be acutely aware of what you're really asking of people. If you dress in rags and look vaguely threatening, people won't want to follow you into the dark alley of the human psyche.

Combine these two things and it's fairly obvious why CFAR finds that their methods translate well in person but are hard to teach over the internet. They seem convinced that the problem is in the explanation, the operative explanation. They're giving advice at the level of 'use this bolt', which entirely misses the point. What's missing isn't instructions but trust. This is a high-effort, weird thing being pushed by secretive private-sector psychologists. Double Crux probably doesn't need another drop of ink spilled on explanation until the mission and strategy levels of explanation are covered. Not 'how this' but 'why this'.

A necessary consequence of this is that at in-person training camps, where instructors have the full use of their charisma and physicality to establish trust, where people have explicitly set aside time to focus on this thing, have paid good money, and are willing to tolerate cramped conditions for it, you're going to see better results than you will as a semi-anonymous internet stranger. Another necessary consequence is that you're just not going to get the high-trust buy-in you need for this on an open forum with loose incentives. The kind of work needed to develop an idea in its early stages looks completely different from the kind needed to develop an idea in its adolescence. Taking these principles seriously suggests the following.

When Ideas Are In Their Infancy

Before an idea has really been developed, it's very vulnerable. The author(s) won't have smooth answers to everyone's probing questions, they won't have perfect rigor, some stuff will be just flat wrong; the seed might be good but the execution stupid. If you have to get it perfect on the first iteration, then you're not going to try to have very many ideas. When people write a blog, you don't see all the effort they put into developing ideas before putting hands to keyboard. To me the poster child for this sort of thing is Dragon Army. Dragon Army was an idea that Duncan explicitly wanted workshop-style feedback on, but the first draft was so horrifying to some people that it became a disaster. There are not only serious epistemic but also public-relations benefits to adopting sane norms around baby ideas. These ideas need:

  • A Space For People To Workshop Ideas With People They Trust - Right now I'm using various chat systems such as IRC and Discord for this purpose, combined with a wiki designed to be edited with version control and the ability to add other people to the editor list. Blog posts have a sense of permanency to them; they're meant to record a moment in time without being revisited for a while. This is stifling to the creation of new ideas. When I first think of something, it's not ready to be entered into the permanent record. I don't want it scrutinized by a bunch of unruly people with limited time. I want friends and peers who will give my ideas the time they deserve even if they wouldn't do it for someone they don't know as well. Conor, for example, complained about how people aren't giving him the trust he would expect after writing so much content for the site, and that's a necessary consequence of only having the mass mode of communication. Instead of pretending that everyone on LessWrong will ever be a trustworthy audience for most members, acknowledge reality and provide space for a small number of people to develop ideas and store data.
  • A Space For People To Privately Discuss Sketched Ideas With People They Trust - There's a big difference between starting from zero and starting from your outline or first draft. Once the workshopping is done and you want feedback from a wider range of people, you're still not ready to publish to a blog. Generally it makes sense to run the draft by a handful of trustworthy reviewers who will catch things you missed or raise fundamental objections and concerns before you're publicly committed to defending it. Right now I manually send copies of my stuff to people, but a mailing list could work well for this. The key thing is that at this stage I usually don't have a particular general audience in mind so much as specific people who I want to read the thing and comment. The 'personal blog' model does not capture this; it just provides a slightly more selective version of the mass audience for me to wrestle with.

When Ideas Are In Adolescence

Once an idea hits the blog post format, it's open to incremental improvement at best. Further, because the crowd is so large, it will think of many things the author did not, and ruthlessly point them out. That's not necessarily malicious; it's just a natural consequence of scale and the desire/need to improve. Therefore authors should be prepared for criticism by the time it comes, and readers need to not have their time wasted. In the service of that:

  • Have A Noise Filter - This would presumably be some kind of team of people who will seriously, for-real look at mostly developed concepts and judge whether they might be interesting or not. Upvotes do this only roughly, but they appear to be mostly working for now.
  • Test Promising Ideas - Early adopters need to be encouraged and supported in the ecosystem. Does Double Crux empirically work? What would it even mean for Double Crux to 'empirically work'? I don't know, and I don't really think a discussion about it can work until someone does. What we need isn't 100 comments, it's 10 people who will get off their ass, try the thing out in some kind of standard fashion for a month, and tell us what they think.
  • Only Present The Reluctant With The Best - Once you've tested ideas, make a repository of the stuff that people who really put in effort agree has value. Do not resent people who are reluctant to go on wild epistemic goose chases. For one thing, many people simply don't have the time, and if your community optimizes against people who don't have a ton of free time, you'll lose the membership of extraordinarily high-value people who happen to have other things to do with their day. Influencing their behavior is a huge lever to move the world. For another, the people who don't want to try Double Crux are being essentially reasonable.
  • Continue To Evaluate And Test Ideas Throughout Their Life - LessWrong lionizes empiricism and science, but then never seems to produce any. If you're all serious about 'improving the art' then you have to work on it, really work on it. The Sequences are nearly a decade old now, and as a 'rationality textbook' they're beginning to seriously show their age. A lot of the core studies cited are victims of the replication crisis; tons of good work has been done in the meantime that isn't included, such as Tetlock's bibliography; and there are very few inline citations to help enthusiastic readers learn more about the source material drawn from. We can do a lot better, but it's not going to happen on its own.

(This post was originally published at https://namespace.obormot.net/Main/GuidedMentalChangeRequiresHighTrust, thanks to friends who gave feedback.)

Comments

"LessWrong lionizes empiricism and science, but then never seems to produce any."

I think you hit on something really important. More things in the vein of "So, I had this idea on how to do better at X, and I tried it out and (it didn't work at all)/(it seems promising)/(I'm still uncertain about...)" could make this a place where ideas are grown and forged.

Yes, this would help. BUT. What we have tried to bring into the culture is things like steelmanning: encouraging the reader to make sense of the writing. That would help too.