There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece.

This post is about how to avoid that, without sacrificing good epistemics.

There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it.

I claim that this promotes better epistemics overall than always researching everything in depth.

Why?

It’s About The Process, Not The Conclusion

Suppose I have a box, and I want to guess whether there’s a cat in it. I do some tests - maybe shake the box and see if it meows, or look for air holes. I write down my observations and models, record my thinking, and on the bottom line of the paper I write “there is a cat in this box”.

Now, it could be that my reasoning was completely flawed, but I happen to get lucky and there is in fact a cat in the box. That’s not really what I’m aiming for; luck isn’t reproducible. I want my process to robustly produce correct predictions. So when I write up a LessWrong post predicting that there is a cat in the box, I don’t just want to give my bottom-line conclusion with some strong-sounding argument. As much as possible, I want to show the actual process by which I reached that conclusion. If my process is good, this will better enable others to copy the best parts of it. If my process is bad, I can get feedback on it directly.

Correctly Conveying Uncertainty

Another angle: describing my own process is a particularly good way to accurately communicate my actual uncertainty.

An example: a few years back, I wondered if there were limiting factors on the expansion of premodern empires. I looked up the peak size of various empires, and found that the big ones mostly peaked at around the same size: ~60-80M people. Then, I wondered when the US had hit that size, and if anything remarkable had happened then which might suggest why earlier empires broke down. Turns out, the US crossed the 60M threshold in the 1890 census. If you know a little bit about the history of computers, that may ring a bell: when the time came for the 1890 census, it was estimated that tabulating the data would be so much work that it wouldn’t even be done before the next census in 1900. It had to be automated. That sure does suggest a potential limiting factor for premodern empires: managing more than ~60-80M people runs into computational constraints.

Now, let’s zoom out. How much confidence should I put in this theory? Obviously not very much - we apparently have enough evidence to distinguish the hypothesis from entropy, but not much more.

On the other hand… what if I had started with the hypothesis that computational constraints limited premodern empires? What if, before looking at the data, I had hypothesized that modern nations had to start automating bureaucratic functions precisely when they hit the same size at which premodern nations collapsed? Then this data would be quite an impressive piece of confirmation! It’s a pretty specific prediction, and the data fits it surprisingly well. But this only works if I already had enough evidence to put forward the hypothesis, before seeing the data.
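To make that path-dependence concrete, here’s a minimal toy sketch. The prior probabilities and likelihood ratio below are made up purely for illustration (none of them come from the empire data), and the posterior helper is just Bayes’ rule in odds form; the only point is that the same observation moves the needle very differently depending on how much support the hypothesis had before the data showed up.

```python
# Toy Bayes calculation: same evidence, different paths to the hypothesis.
# All numbers here are invented for illustration only.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose the 1890-census observation is 10x more likely if the
# "computational constraints" theory is true than if it is false.
likelihood_ratio = 10.0

# Path 1: I cooked up the hypothesis after seeing the data, so it was one of
# many stories that could have fit, and starts with little prior support.
print(posterior(prior=0.02, likelihood_ratio=likelihood_ratio))  # ~0.17

# Path 2: I had predicted in advance that bureaucratic automation would kick in
# right around the size where premodern empires collapsed, so the hypothesis
# already had substantial prior support before the data came in.
print(posterior(prior=0.30, likelihood_ratio=likelihood_ratio))  # ~0.81
```

Same observation, same likelihood ratio; the difference in the bottom line comes entirely from the path by which the hypothesis was reached.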

Point is: the amount of uncertainty I should assign depends on the details of my process. It depends on the path by which I reached the conclusion.

This carries over to my writing: if I want to accurately convey my uncertainty, then I need to accurately convey my process. Those details are relevant to how much certainty my readers should put in the conclusion.

So Should I Stop Researching My Claims?

No. Obviously researching claims still has lots of value. But you should not let uncertainty stop you from writing things up and sharing them. Just try to accurately convey your uncertainty, by communicating the process.

Bad Habits

It’s been pointed out before that most high schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.

There’s a closely related sub-habit in which people try to only claim things with very high certainty. This makes sense in a persuasion/debate frame - any potential loophole could be exploited by “the other side”. Arguments are soldiers; we must show no weakness.

Good epistemic habits include living with uncertainty. Good epistemic discourse includes making uncertain statements, and accurately conveying our uncertainty in them. Trying to always research things to high confidence, and never sharing anything without high confidence, is a bad habit.

Takeaway

So you have some ideas which might make cool LessWrong posts, or something similar, but you’re not really confident enough that they’re right to put them out there. My advice is: don’t try to persuade people that the idea is true/good. Persuasion is a bad habit from high school. Instead, try to accurately describe where the idea came from, the path which led you to think it’s true/plausible/worth a look. In the process, you’ll probably convey your own actual level of uncertainty, which is exactly the right thing to do.

… and of course don’t stop researching interesting claims. Just don’t let that be a bottleneck to sharing your ideas.

Addendum: I'm worried that people will read this post, think "ah, so that's the magic bullet for a LW post", then try it, and be heartbroken when their post gets like one upvote. Accurately conveying one's thought process and uncertainty is not a sufficient condition for a great post; clear explanation and novelty and interesting ideas all still matter (though you certainly don't need all of those in every post). Especially clear explanation - if you find something interesting, and can clearly explain why you find it interesting, then (at least some) other people will probably find it interesting too.

Comments

I really like this post for two reasons:

  1. I've noticed that when I ask someone "why do you believe X", they often think that I'm asking them to cite sources or studies or some such. This can put people on the defensive, since we usually don't have ready-made citations in our heads for every belief. But that's not what I'm trying to ask; I'm really just trying to understand what process actually caused them to believe X, as a matter of historical fact. That process could be "all the podcasters I listen to take X as a given", or "my general life experience/intuition has shown X to be true". You've put this concept into words here and solidified the idea for me: that it's helpful to communicate why you actually believe something, and let others do with that what they will.
  2. The point about uncertainty is really interesting. I'd never realized before that if you present your conclusion first, and then the evidence for it, then it sure looks like you already had that hypothesis for some reason before getting a bunch of confirming evidence. Which implies that you have some sort of evidence/intuition that led you to the hypothesis in addition to the evidence you're currently presenting.

I've wondered why I enjoy reading Scott Alexander so much, and I think that the points you bring up here are a big reason why. He explains his processes really well, and I usually end up feeling that I understand what actually caused him to believe his conclusions.

On the first point: lately I have been shifting to asking people to tell me the story of how they came to hold a belief. This is doubly useful, because only a tiny fraction of the population actually has their belief-formation process explicit enough in their heads to tell me.

Curated. This is a great post. We (the mods) generally struggle to get people to write up thoughts worth hearing, because they fear those thoughts aren't yet defensible enough. Until now, I'd have encouraged people to share all their thoughts at various stages of development/research/refinement/etc., just with appropriate epistemic statuses attached. This post goes further and provides an actual, specific approach that one can follow to write up ideas at any level of development. More than that, it provides a social license of which I approve.

The ideal I endorse (as a mod) is something like: LessWrong is a place where you can develop and share your ideas wherever they're at. It's great to publish a well-researched or well-considered post that you're confident in, but it can also be incredibly valuable to share nascent thoughts. Doing so allows you, the writer, to get early feedback, and it can often give readers something good enough to learn from and build upon. And it's definitely much better to publish early than not at all!

A challenging aspect of posting less well-developed thoughts is that they can elicit more negative feedback than a post that's had a lot of effort invested in countering objections, etc. This feedback can be hard and unpleasant to receive, especially if it's worded bluntly. My ideal here, which might take work to achieve, is that our culture is one where commenters calibrate their feedback (or at least its tone) to be appropriate to the kind of post being made. If someone's exploring an idea, encourage the exploration even as you point out a flaw in the process.

For people who are especially concerned that their thoughts aren't ready for general publication, we built Shortform to be the home for earlier-stage material. The explicit purpose of Shortform is that you can share thoughts which only took you a relatively short amount of time to write. [However, posting as a regular LessWrong post can also be fine, if you're comfortable. And mods can help you decide where to post if you're unsure.]

In academia-land, there's a norm of collegiality - at least among my crowd. People calibrate their feedback based on context and relationship. Here, relationship is lacking and context is provided only by the content of the post itself. I think we're missing a lot of the information and incentives that motivate care in presentation and care in response. On the other hand, commenters' opinion of me matters not at all for my career or life prospects. To me, the value of the forum is in providing precisely this sort of divergence from real-world norms. There's no need to constantly pay attention to the nuances of relationships or try and get some sort of leverage out of them, so a very different sort of conversation and way of relating can emerge. This is good and bad, but LessWrong's advantage is in being different, not comfortable.

This is good and bad, but LessWrong's advantage is in being different, not comfortable.

Personally: If LessWrong is not comfortable for me to post on, I won't post. And, in fact, my post volume has decreased somewhat because of that. That's just how my brain is wired, it seems.

I have a lot of thoughts about a lot of things but my post history reveals that I'm like you and a lot like the people this post is geared towards; I don't share my thoughts because I never have much of an idea how to back them up. Worse though, I can't even follow this post's advice, as I mostly have no idea how I come up with any of the things I do, either; I've never bothered to pay attention to the process. :/

Secret secondary goal of this post: get people to pay attention to the process-which-generates their ideas/beliefs/etc.

Yeah, I’m only speaking for myself here :)

I think a norm of “somewhat comfortable by default; solicit maximally frank feedback with an end-of-post request” might be good? It may be easier to say “please be harsher on my claims” than “please be courteous with me.”

Oh yeah, I mean I don’t love the discomfort! I just feel like it’s more efficacious for me to just thicken my skin than to hope LW’s basic social dynamic improves notably. Like, when I look back at the tone of discussion when the site was livelier 10 years ago, it’s the same tone on the individual posts. It just comes off differently because of how many posts there are. Here, you get one person’s comment and it’s the whole reaction you experience. Then, there was a sort of averaging thing that I think made it feel less harsh, even though it was the same basic material. If that makes any sense at all :D

My advice is: don’t try to persuade people that the idea is true/good. Persuasion is a bad habit from high school. Instead, try to accurately describe where the idea came from, the path which led you to think it’s true/plausible/worth a look.

Wow, this makes a ton of sense. I can't believe I never realized/internalized it before.

Liked this essay and upvoted, but there's one part that feels a little too strong:

There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it. [...]

It’s been pointed out before that most high schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.

Suppose that I have studied a particular field X, and this has given me a particular set of intuitions about how things work. They're not based on any specific claim that I could cite directly, but rather a more vague feeling of "based on how I understand things to generally work, this seems to make the most sense to me".

I now have an experience E. The combination of E and my intuitions gathered from studying X cause me to form a particular belief. However, if I had not studied X, I would have interpreted the experience differently, and would not have formed the belief.

If I now want to communicate the reasons behind my belief to LW readers, and expect many readers to be unfamiliar with X, I cannot simply explain that E happened to me and therefore I believe this. That would be an accurate account of the causal history, but it would fail to communicate many of the actual reasons. I could also say that "based on studying X, I have formed the following intuition", but that wouldn't really communicate the actual generators of my belief either.

But what I can do is to try to query my intuition and try to translate it into the kind of a framework that I expect LW readers to be more familiar with. E.g. if I have intuitions from psychology, I can find analogous concepts from machine learning, and express my idea in terms of those. Now this isn't quite the same as just writing the bottom line first, because sometimes when I try to do this, I realize that there's some problem with my belief and then I actually change my mind about what I believe. But from the inside it still feels a lot like "persuasion", because I am explicitly looking for ways of framing and expressing my belief that I expect my target audience to find persuasive.

This is definitely the use-case where "explain how you came to think Y" is hardest; there's a vague ball of intuitions playing a major role in the causal pathway. On the other hand, making those intuitions more legible (e.g. by using analogies between psych and ML) tends to have unusually high value.

I suspect that, from Eliezer's perspective, a lot of the Sequences came from roughly this process. He was trying to work back through his own pile of intuitions and where they came from, then serialize and explain as much of it as possible. It's been a generator for a lot of my own writing as well - for instance, the Constraints/Scarcity posts came from figuring out how to make a broad class of intuitions legible, and the review of Design Principles of Biological Circuits came from realizing that the book had been upstream of a bunch of my intuitions about AI. It's not a coincidence that those were relatively popular posts - figuring out the logic which drives some intuitions, and making that logic legible, is valuable. It allows us to more directly examine and discuss the previously-implicit/intuitive arguments.

I wouldn't quite liken it to persuasion. I think the thing you're trying to point to is that the author does most of the work of crossing the inferential gap. In general, when two people communicate, either one can do the work of translating into terms the other person understands (or they can split that work, or a third party can help, etc. - the point is that someone has to do it). When trying to persuade someone, that burden is definitely on the persuader. But that's not exclusively a feature of persuasion - it's a useful habit to have in general, to try to cross most of the inferential gap oneself, and it's important for clear writing. The goal is still to accurately convey some idea/intuition/information, not to persuade the reader that the idea/intuition/information is right.

Addendum (also added to the post): I'm worried that people will read this post, think "ah, so that's the magic bullet for a LW post", then try it, and be heartbroken when their post gets like one upvote. Accurately conveying one's thought process and uncertainty is not a sufficient condition for a great post; clear explanation and novelty and interesting ideas all still matter (though you certainly don't need all of those in every post). Especially clear explanation - if you find something interesting, and can clearly explain why you find it interesting, then (at least some) other people will probably find it interesting too.

This post is excellent, and should be part of the 'Welcome to LessWrong' introductory content IMO. :)

This is a very important point, and I’m happy someone made it and it’s been upvoted so quickly. I do have a million ideas for LW posts that I hesitate to contribute for many of the reasons above.

Thank you for writing this so there's common knowledge that it's okay to write this way on LW, or that it would be okay with the many upvoters. I certainly got the feeling that posts have to be well-researched dissertations or persuasive arguments on here sometimes. 

I thought this was fantastic, very thought-provoking. One possibly easy thing that I think would be great would be links to a few posts that you think have used this strategy with success.

Drawing from my own posts:

This reminds me of Julia Galef's video Something I like about Richard Dawkins.

I was trying to put my finger on what it was, and I think I've pinned it down. Basically, Richard would bring up topics of conversation not because he had a well-articulated, definitive opinion that he wanted to share about that topic, but because he thought it was interesting. And he didn't yet know what he thought about it.

So just for example, we ended up on the topic of communication styles. And he noted, with a hint of curiosity in his voice, that it actually comes across as kind of aggressive, or confrontational when someone speaks very clearly and to the point, without adding a lot of qualifying phrases and statements around their point.

And he mused aloud, "I wonder why that is? Why would that be?" And we thought about it together.

And I think that's very rare. Most people -- even intellectually curious people -- in conversation, will make points that they've basically already thought about, already decided how they feel about. And it's actually quite rare for someone to introduce topics of conversation and want to just figure it out together, on the spot.

This is great! Very related to Open Philanthropy's concept of reasoning transparency, especially the section on how to indicate different kinds of support for your view. 

I vividly remember learning about this within my first few weeks working at GiveWell: when I submitted an early draft of some research on malaria, my manager took one look at the footnotes and asked me to redo them. "Don't just find some serious-looking source - what actually led you to make the claim you're footnoting?" I was mindblown, both that this was such a novel thing to do and that I hadn't even really noticed I wasn't doing it.

Dunbar's Number: 150

Wentworth's Number: 60m

It’s been pointed out before that most high schools teach a writing style in which the main goal is persuasion or debate

The goal of high school writing isn't to write anything that persuades the reader. It's to present an argument in a way that fits the expectations of the teacher.

When writing about intuitions or the not-fully-researched thought process that led you to believe something, could someone perceive that as an implicit meta-claim that your reasoning process is unique, or better than other reasoning processes or ways of approaching the same thing?

 

I agree that many people's reasoning processes and unique perspectives are valuable; I just don't see that as a way to sidestep the fact that you only tend to post when you believe you have something valuable to say. Some reasoning processes are just bad, and hence any site will have to enforce some quality bar on "is this reasoning process valuable".

It’s been pointed out before that most high schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong

Even in a debate? I can't see how things can be said to be bad or good out of context.

In any case, there are worse things: if you make a one-sided point, you at least make a point. Unfocussed rambling is worse.