There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece.

This post is about how to avoid that, without sacrificing good epistemics.

There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it.

I claim that this promotes better epistemics overall than always researching everything in depth.

Why?

It’s About The Process, Not The Conclusion

Suppose I have a box, and I want to guess whether there’s a cat in it. I do some tests - maybe shake the box and see if it meows, or look for air holes. I write down my observations and models, record my thinking, and on the bottom line of the paper I write “there is a cat in this box”.

Now, it could be that my reasoning was completely flawed, but I happen to get lucky and there is in fact a cat in the box. That’s not really what I’m aiming for; luck isn’t reproducible. I want my process to robustly produce correct predictions. So when I write up a LessWrong post predicting that there is a cat in the box, I don’t just want to give my bottom-line conclusion with some strong-sounding argument. As much as possible, I want to show the actual process by which I reached that conclusion. If my process is good, this will better enable others to copy the best parts of it. If my process is bad, I can get feedback on it directly.

Correctly Conveying Uncertainty

Another angle: describing my own process is a particularly good way to accurately communicate my actual uncertainty.

An example: a few years back, I wondered if there were limiting factors on the expansion of premodern empires. I looked up the peak size of various empires, and found that the big ones mostly peaked at around the same size: ~60-80M people. Then, I wondered when the US had hit that size, and if anything remarkable had happened then which might suggest why earlier empires broke down. Turns out, the US crossed the 60M threshold in the 1890 census. If you know a little bit about the history of computers, that may ring a bell: when the time came for the 1890 census, it was estimated that tabulating the data would be so much work that it wouldn’t even be done before the next census in 1900. It had to be automated. That sure does suggest a potential limiting factor for premodern empires: managing more than ~60-80M people runs into computational constraints.

Now, let’s zoom out. How much confidence should I put in this theory? Obviously not very much - we apparently have enough evidence to distinguish the hypothesis from entropy, but not much more.

On the other hand… what if I had started with the hypothesis that computational constraints limited premodern empires? What if, before looking at the data, I had hypothesized that modern nations had to start automating bureaucratic functions precisely when they hit the same size at which premodern nations collapsed? Then this data would be quite an impressive piece of confirmation! It’s a pretty specific prediction, and the data fits it surprisingly well. But this only works if I already had enough evidence to put forward the hypothesis, before seeing the data.

Point is: the amount of uncertainty I should assign depends on the details of my process. It depends on the path by which I reached the conclusion.

This carries over to my writing: if I want to accurately convey my uncertainty, then I need to accurately convey my process. Those details are relevant to how much certainty my readers should put in the conclusion.

So Should I Stop Researching My Claims?

No. Obviously researching claims still has lots of value. But you should not let uncertainty stop you from writing things up and sharing them. Just try to accurately convey your uncertainty, by communicating the process.

Bad Habits

It’s been pointed out before that most high-schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.

There’s a closely related sub-habit in which people try to only claim things with very high certainty. This makes sense in a persuasion/debate frame - any potential loophole could be exploited by “the other side”. Arguments are soldiers; we must show no weakness.

Good epistemic habits include living with uncertainty. Good epistemic discourse includes making uncertain statements, and accurately conveying our uncertainty in them. Trying to always research things to high confidence, and never sharing anything without high confidence, is a bad habit.

Takeaway

So you have some ideas which might make cool LessWrong posts, or something similar, but you’re not really confident enough that they’re right to put them out there. My advice is: don’t try to persuade people that the idea is true/good. Persuasion is a bad habit from high school. Instead, try to accurately describe where the idea came from, the path which led you to think it’s true/plausible/worth a look. In the process, you’ll probably convey your own actual level of uncertainty, which is exactly the right thing to do.

… and of course don’t stop researching interesting claims. Just don’t let that be a bottleneck to sharing your ideas.

Addendum: I'm worried that people will read this post, think "ah, so that's the magic bullet for a LW post", then try it, and be heartbroken when their post gets like one upvote. Accurately conveying one's thought process and uncertainty is not a sufficient condition for a great post; clear explanation and novelty and interesting ideas all still matter (though you certainly don't need all of those in every post). Especially clear explanation - if you find something interesting, and can clearly explain why you find it interesting, then (at least some) other people will probably find it interesting too.

Comments (39)

I really like this post for two reasons:

  1. I've noticed that when I ask someone "why do you believe X", they often think that I'm asking them to cite sources or studies or some such. This can put people on the defensive, since we usually don't have ready-made citations in our heads for every belief. But that's not what I'm trying to ask; I'm really just trying to understand what process actually caused them to believe X, as a matter of historical fact. That process could be "all the podcasters I listen to take X as a given", or "my general life experience/intuition has shown X to be true". You've put this concept into words here and solidified the idea for me: that it's helpful to communicate why you actually believe something, and let others do with that what they will.
  2. The point about uncertainty is really interesting. I'd never realized before that if you present your conclusion first, and then the evidence for it, then it sure looks like you already had that hypothesis for some reason before getting a bunch of confirming evidence. Which implies that you have some sort of evidence/intuition that led you to the hypothesis in addition to the evidence you're currently presenting.

I've wondered why I enjoy reading Scott Alexander so much, and I think that the points you bring up here are a big reason why. He explains his processes really well, and I usually end up feeling that I understand what actually caused him to believe his conclusions.

Regarding the first point: I have lately been trying to shift to asking people to tell me the story of how they came to hold a belief. This is doubly useful because only a tiny fraction of the population actually has the process of belief formation explicit enough in their heads to tell me.

Ruby:

Curated. This is a great post. We (the mods) generally struggle to get people to write up thoughts worth hearing because they fear that they're not yet defensible enough. Until now, I'd have encouraged people to share all their thoughts at various stages of development/research/refinement/etc, just with appropriate epistemic statuses attached. This post goes further and provides an actual specific approach that one can follow to write up ideas at any level of development. More than that, it provides a social license of which I approve.

The ideal I endorse (as a mod) is something like: LessWrong is a place where you can develop and share your ideas wherever they're at. It's great to publish a well-researched or well-considered post that you're confident in, but it can also be incredibly valuable to share nascent thoughts. Doing so both allows you, the writer, to get early feedback, and often provides readers something good enough to learn from and build upon. And it's definitely much better to publish early than not at all!

A challenging aspect of posting less well-developed thoughts is that they can elicit more negative feedback than a post that's had a lot of effort invested in countering objections, etc. This feedback can be hard and unpleasant to receive, especially if it's worded bluntly. My ideal here, which might take work to achieve, is that our culture is one where commenters calibrate their feedback (or at least its tone) to be appropriate to the kind of post being made. If someone's exploring an idea, encourage the exploration even as you point out a flaw in the process.

For people who are especially concerned that their thoughts aren't ready for general publication, we built Shortform to be the home for earlier-stage material. The explicit purpose of Shortform is that you can share thoughts which only took you a relatively short amount of time to write. [However, posting as a regular LessWrong post can also be fine, if you're comfortable. And mods can help you decide where to post if you're unsure.]

In academia-land, there's a norm of collegiality - at least among my crowd. People calibrate their feedback based on context and relationship. Here, relationship is lacking and context is provided only by the content of the post itself. I think we're missing a lot of the information and incentives that motivate care in presentation and care in response. On the other hand, commenters' opinion of me matters not at all for my career or life prospects. To me, the value of the forum is in providing precisely this sort of divergence from real-world norms. There's no need to constantly pay attention to the nuances of relationships or try and get some sort of leverage out of them, so a very different sort of conversation and way of relating can emerge. This is good and bad, but LessWrong's advantage is in being different, not comfortable.

> This is good and bad, but LessWrong's advantage is in being different, not comfortable.

Personally: If LessWrong is not comfortable for me to post on, I won't post. And, in fact, my post volume has decreased somewhat because of that. That's just how my brain is wired, it seems.

I have a lot of thoughts about a lot of things but my post history reveals that I'm like you and a lot like the people this post is geared towards; I don't share my thoughts because I never have much of an idea how to back them up. Worse though, I can't even follow this post's advice, as I mostly have no idea how I come up with any of the things I do, either; I've never bothered to pay attention to the process. :/

Secret secondary goal of this post: get people to pay attention to the process-which-generates their ideas/beliefs/etc.

Yeah, I’m only speaking for myself here :)

I think a norm of “somewhat comfortable by default; solicit maximally frank feedback with an end-of-post request” might be good? It may be easier to say “please be harsher on my claims” than “please be courteous with me.”

Oh yeah, I mean I don’t love the discomfort! I just feel like it’s more efficacious for me to just thicken my skin than to hope LW’s basic social dynamic improves notably. Like, when I look back at the tone of discussion when the site was livelier 10 years ago, it’s the same tone on the individual posts. It just comes off differently because of how many posts there are. Here, you get one person’s comment and it’s the whole reaction you experience. Then, there was a sort of averaging thing that I think made it feel less harsh, even though it was the same basic material. If that makes any sense at all :D

Liked this essay and upvoted, but there's one part that feels a little too strong:

> There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it. [...]

> It’s been pointed out before that most high-schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.

Suppose that I have studied a particular field X, and this has given me a particular set of intuitions about how things work. They're not based on any specific claim that I could cite directly, but rather a more vague feeling of "based on how I understand things to generally work, this seems to make the most sense to me".

I now have an experience E. The combination of E and my intuitions gathered from studying X cause me to form a particular belief. However, if I had not studied X, I would have interpreted the experience differently, and would not have formed the belief.

If I now want to communicate the reasons behind my belief to LW readers, and expect many readers to be unfamiliar with X, I cannot simply explain that E happened to me and therefore I believe this. That would be an accurate account of the causal history, but it would fail to communicate many of the actual reasons. I could also say that "based on studying X, I have formed the following intuition", but that wouldn't really communicate the actual generators of my belief either.

But what I can do is to try to query my intuition and translate it into the kind of framework that I expect LW readers to be more familiar with. E.g. if I have intuitions from psychology, I can find analogous concepts from machine learning, and express my idea in terms of those. Now this isn't quite the same as just writing the bottom line first, because sometimes when I try to do this, I realize that there's some problem with my belief and then I actually change my mind about what I believe. But from the inside it still feels a lot like "persuasion", because I am explicitly looking for ways of framing and expressing my belief that I expect my target audience to find persuasive.

This is definitely the use-case where "explain how you came to think Y" is hardest; there's a vague ball of intuitions playing a major role in the causal pathway. On the other hand, making those intuitions more legible (e.g. by using analogies between psych and ML) tends to have unusually high value.

I suspect that, from Eliezer's perspective, a lot of the Sequences came from roughly this process. He was trying to work back through his own pile of intuitions and where they came from, then serialize and explain as much of it as possible. It's been a generator for a lot of my own writing as well - for instance, the Constraints/Scarcity posts came from figuring out how to make a broad class of intuitions legible, and the review of Design Principles of Biological Circuits came from realizing that the book had been upstream of a bunch of my intuitions about AI. It's no coincidence that those were relatively popular posts - figuring out the logic which drives some intuitions, and making that logic legible, is valuable. It allows us to more directly examine and discuss the previously-implicit/intuitive arguments.

I wouldn't quite liken it to persuasion. I think the thing you're trying to point to is that the author does most of the work of crossing the inductive gap. In general, when two people communicate, either one can do the work of translating into terms the other person understands (or they can split that work, or a third party can help, etc... the point is that someone has to do it.). When trying to persuade someone, that burden is definitely on the persuader. But that's not exclusively a feature of persuasion - it's a useful habit to have in general, to try to cross most of the inductive gap oneself, and it's important for clear writing in general. The goal is still to accurately convey some idea/intuition/information, not to persuade the reader that the idea/intuition/information is right.

> My advice is: don’t try to persuade people that the idea is true/good. Persuasion is a bad habit from high school. Instead, try to accurately describe where the idea came from, the path which led you to think it’s true/plausible/worth a look.

Wow, this makes a ton of sense. I can't believe I never realized/internalized it before.

This post is excellent, and should be part of the 'Welcome to LessWrong' introductory content IMO. :)

Addendum (also added to the post): I'm worried that people will read this post, think "ah, so that's the magic bullet for a LW post", then try it, and be heartbroken when their post gets like one upvote. Accurately conveying one's thought process and uncertainty is not a sufficient condition for a great post; clear explanation and novelty and interesting ideas all still matter (though you certainly don't need all of those in every post). Especially clear explanation - if you find something interesting, and can clearly explain why you find it interesting, then (at least some) other people will probably find it interesting too.

This is a very important point, and I’m happy someone made it and it’s been upvoted so quickly. I do have a million ideas for LW posts that I hesitate to contribute for many of the reasons above.

Thank you for writing this so there's common knowledge that it's okay to write this way on LW, or that it would be okay with the many upvoters. I certainly got the feeling that posts have to be well-researched dissertations or persuasive arguments on here sometimes. 

I thought this was fantastic, very thought-provoking. One possibly easy thing that I think would be great would be links to a few posts that you think have used this strategy with success.

Drawing from my own posts:

This is great! Very related to Open Philanthropy's concept of reasoning transparency, especially the section on how to indicate different kinds of support for your view. 

I vividly remember learning about this within my first few weeks working at GiveWell: when I submitted an early draft of some research on malaria, my manager took one look at the footnotes and asked me to redo them. "Don't just find some serious-looking source - what actually led you to make the claim you're footnoting?" I was mindblown, both that this was such a novel thing to do and that I hadn't even really noticed I wasn't doing it.

This reminds me of Julia Galef's video Something I like about Richard Dawkins.

I was trying to put my finger on what it was, and I think I've pinned it down. Basically, Richard would bring up topics of conversation not because he had a well-articulated, definitive opinion that he wanted to share about that topic, but because he thought it was interesting. And he didn't yet know what he thought about it.

So just for example, we ended up on the topic of communication styles. And he noted, with a hint of curiosity in his voice, that it actually comes across as kind of aggressive, or confrontational when someone speaks very clearly and to the point, without adding a lot of qualifying phrases and statements around their point.

And he mused aloud, "I wonder why that is? Why would that be?" And we thought about it together.

And I think that's very rare. Most people -- even intellectually curious people -- in conversation, will make points that they've basically already thought about, already decided how they feel about. And it's actually quite rare for someone to introduce topics of conversation and want to just figure it out together, on the spot.

I upvoted this highly for the review. I think of this as a canonical reference post now for the sort of writing I want to see on LessWrong. This post identified an important problem I've seen a lot of people struggle with, and writes out clear instructions for it. 

I guess a question I have is "how many people read this and had it actually help them write more quickly?". I've personally found the post somewhat helpful, but I think mostly already had the skill.

> how many people read this and had it actually help them write more quickly?

I think I sorta implicitly already knew what this post is saying, and thus the value of this post for me was in crystallizing that implicit knowledge into explicit knowledge that I could articulate / reflect on / notice / etc.

I can’t recall a situation where this post “actually helped me write more quickly”. I vaguely recall that there were times that this post popped into my head while thinking about whether or not to write something at all, and maybe how to phrase and structure it.

I think in my case it's more likely the post helped me write more rigorously, rather than quickly. i.e. by default I write quickly without much rigor, and this post pointed out a cheap-ish way to include more epistemic handholds.

Dunbar's Number: 150

Wentworth's Number: ~60M

> It’s been pointed out before that most high-schools teach a writing style in which the main goal is persuasion or debate.

The goal of high school writing isn't to write anything that persuades the reader. It's to present an argument in a way that fits the expectations of the teacher.

I think this post does two things well:

  • helps lower the internal barrier for what is "worth posting" on LW
  • helps communicate the epistemic/communication norms that define good rationalish writing

Writing up your thoughts is useful. Both for communication and for clarification to oneself. Not writing for fear of poor epistemics is an easy failure mode to fall into, and this post clearly lays out how to write anyway. More writing equals more learning, sharing, and opportunities for coordination and cooperation. This directly addresses a key point of failure when it comes to groups of people being more rational. 

Good idea, and works in Medicine.

If you see a patient and say:

"You have X, this is what it is, take this medicine and this is what should happen but if this happens then do this",

They may not believe you ("But I just want antibiotics") and/or not follow advice (+/- trust in you). (It's more complicated in real life, of course: we explore the patient's ideas/concerns/expectations and make sure these are also satisfied, which is usually more important than anything else, as well as making a plan together - fun to do in 10 min)

If you describe your logical thinking to the patient and say:

"Because you have said X and Y, and on examination I have found Z, and with these findings these are the possible diagnoses, and these are the diagnoses that aren't likely, so take this..."

They are much more likely to believe you and do as you suggest, especially if there is a lot of uncertainty (as there often is):

If you're not sure what is happening, rather than lying (which the patient can probably tell), I find that explaining one's thinking and describing why there is uncertainty often leads to more confidence and trust:

"You have told me X, and you have Y symptoms, which is odd as they don't point to a particular condition. These symptoms may mean A, and those B. Although it is very unlikely that there is anything serious, I think we should do Q and P, and review a week later. If M or N happens, tell me sooner".

Hi! I wrote a summary with some of my thoughts in this post as part of an ongoing effort to stop sucking at researching stuff. This article was a big help, thank you!

This has influenced how I write things and how I give disclaimers (I read it a few months ago)

Thanks!

We should sort reasoning into the inductive and deductive types: inductive provides a working model, deductive provides a more consistent (less contradictory) model. Deductive conclusions are guaranteed to be true, as long as their premises are true. Inductive conclusions are held with a degree of confidence, and depend on how well the variables in the study were isolated. For the empire example in the original post, there are many variables other than computing power that affect the rise and fall of empires. Computing power is only one of many technologies, and besides technology, there is finance, military, culture, food, health, education, natural disaster, religion, etc. Adding to the uncertainty is the small sample size, relative to the number of variables.

However, we can more easily isolate the effect of computing power on census taking, as mentioned, just as we can draw a more confident conclusion between the printing press and literacy rates. Everything has its scale. Relate big to big, medium to medium, small to small. Build up a structure of microscopic relations to find macroscopic patterns.


TAG:

> It’s been pointed out before that most high-schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.

Even in a debate? I can't see how things can be said to be bad or good out of context.

In any case, there are worse things: if you make a one sided point, you at least make a point. Unfocussed rambling is worse.

I would make the assumption that we are talking about communication situations where all parties want to find out the truth, not to 'win' an argument. Rambling that makes 0 points is worse than making 1 point, but making 2+ "two-sided" points that accurately communicate your uncertainty on the topic is better than selectively giving out only one-sided points from all the points that you have.

TAG:

Finding the truth, and winning arguments, are not disjoint.

I agree that finding the truth and winning arguments are not disjoint by definition, but debate and finding the truth are mostly disjoint (I would not expect the optimal way to debate and the optimal way to seek truth to align much).

Also, I did not think you would mean "debate" as in "an activity where 2+ people try to find the truth together by honestly sharing all the information"; what I think "debate" means is "an activity where 2+ people form opposing teams with preassigned sides and try to use all means to win the argument". In a debate, I expect teams to use methods that are bad for truth-seeking, such as intentionally hiding important information that supports the other side. In this sense, debate is not a good example of a truth-seeking activity.

In the end, my point is that in essentially all truth-seeking contexts, arguing one side is not optimal. I find it conceivable that some edge cases exist, but debate is not one of them, because I don't think it is truth-seeking in the first place.

TAG:

How well debate works in practice depends on the audience. If the audience have good epistemology, why would they be fooled by cheap tricks?

Debate is part of our best epistemological practices. Science is based on empiricism and a bunch of other things, including debate. If someone publishes a paper, and someone else responds with a critical paper arguing the opposite view, that's a debate. And one that's judged by a sophisticated audience.

You have an objection to the rule that debaters should only argue one side. One-sidedness is a bad thing for individual rationality, but debate isn't individual rationality: it involves at least two people, and often an audience. Each of the two debaters will hear the other side's view, and the audience will hear both.

Representatives in a trial are required to argue from one side only. This is not considered inimical to truth seeking, because it is the court that is seeking the truth, as a whole. If you create a "shoulder" prosecutor and defender to argue each side of a question, is that not rationality?
