I’m not famous or successful, so why should you care what I think? Well, I have some observations about the dynamics of writing on the internet that I think my (even more non-famous and non-successful) self would have benefited from when I started.

Human experience is vast.

The whole idea of writing is crazy: You have a pattern in your brain-meat, which you try to encode into a linear series of words. Then someone else reads those words and tries to reconstruct the pattern in their brain-meat. But in this dance, how much of the work is being done by the words versus the lifetime of associations each person has built up around them?

Rather than a full blueprint for an idea, writing is often more like saying “Hey, look at concept #23827! Now look at concept #821! Now look at concept #112234! Are your neurons tingling in the way mine are? I hope so because there’s no way to check, bye!”

We have different personalities and spend our lives getting exposed to different information and thinking about different things. What concept #821 triggers for you may be vastly different than what it triggers for me.

I suspect that even when writing works, readers are often taking quite a “different trip” than the writer intended. Personally, I figure that’s fine and it’s better to just let people take their own trip, rather than going to insane lengths in a hopeless quest to make everything precise.

So: No matter what you do, sometimes your writing will fail. It’s impossible to predict all the ways it will fail. Really, it’s amazing that it works at all. But still, it’s possible to reduce the frequency of failure.

Failures can seem baffling.

Here are two examples of how things I’ve written have failed:

  1. I wrote an article suggesting ultrasonic humidifiers might put particulates into the air and harm health. The median response was something like this:

    "Please stop polluting the internet with speculation. If you can’t support your argument with peer-reviewed research papers you shouldn’t write it at all."

    This was… puzzling, because I had dozens of citations starting very early in the article. But many people in different forums independently left comments like this.
  2. I wrote an article about gas stoves and nitrogen dioxide where I stated in the overview that a normal range hood might not solve the problem. A top comment was that a high-quality hood would solve the problem and the fact that I claimed this without providing any data showed I was biased and bad (so bad).

    That’s fair, although after the overview finished there was a link to the “Can a range hood fix this?” section where I discuss an article that tested hoods in various houses and found that some worked well but most of them reduced nitrogen dioxide by 30% or less, and wait did I say that was fair?

Please holster your tiny violin, I know this is all normal. Many people get fed up with stuff like this and resolve to ignore all comments. That’s not my position. My position is that I failed and when I fail I’d like to know about it.

First, though, why does this happen?

What happens in forums isn’t always about you.

Why do people go to forums? Partly for links, yes, but also because they like the community. When an article on an interesting topic comes up, some people decide they’d rather see what people they trust think before investing time reading the article. And once they are there, maybe they see a comment they want to respond to.

I’d love to see statistics for this, but I’d guess that only a small fraction of people finish most articles before commenting, and many don’t even look at the article. This is not necessarily a bad thing! Sometimes the discussion detaches and goes into all sorts of interesting and unexpected directions.

Sometimes this can be funny, too. Once I wrote about a study by Pierson et al. (2020) who investigated how the racial mix of drivers stopped by police changes at different times of the year when it may be harder to see the drivers. The first comment was basically, “This is all wrong. There’s a study by Pierson et al. (2020) that investigated…”

Engagement has a sample bias.

The first few times I saw my posts discussed, I was shaken by the amount of negativity. This bothered me until I tried going to posts from others that I thought were great and reading the comments as if I were the author. Sometimes, umm, the comments were all glowing. But other times—with no clear pattern—most dismissed the article as pointless/obvious/wrong/bad. After this, negativity still bothered me, but I had a new variant of the old “Einstein was bad at math, I’m bad at math” fallacy to distract me.

Here’s something I’ve noticed about myself: If I read something great, I’ll sometimes write a short comment like “This was amazing, you’re the best!” Then I’ll stare at it for 10 seconds and decide that posting it would be lame and humiliating, so I delete it and go about my day. But on the rare occasions that I read something that triggers me, I get a strong feeling that I have important insights. Assuming that I’m not uniquely broken in this way, it explains a lot.

Listening to criticism is a superpower.

Do you have a friend who works in user-facing software? Sometime after they’ve had a few drinks, ask about the first time they saw one of their creations being tested on real people. Observe how somber their face becomes as they express their feelings of frustration and impotence. The things we build are no match for the might of human ingenuity to do everything wrong in unexpected ways.

It’s puzzling that there isn’t a stronger tradition of “user testing” for writing. Occasionally I’ll give a friend something I’ve written and implore them, “Please circle anything that makes you feel even slightly unhappy for any reason whatsoever.” Then I’ll ask them what they were thinking at each point. There are always “bugs” everywhere: Belaboring of obvious points, ambiguous phrases, unnecessary antagonistic language, tangential arguments about controversial things that don’t matter, etc.

Fixing these is great but your friends (let’s hope) don’t want to hurt your feelings. This makes it almost impossible to get them to say things like, “your jokes aren’t funny” or “you should delete section 3 because it’s horrendous and unsalvageable”. Good editors are gold.

Comments from the internet are the opposite. A downside is that you have much longer feedback loops, which makes it hard to figure out cause and effect. But you get feedback at a much higher scale and people are… substantially less worried about offending you.

Take the humidifiers example from before. Technically, the complaints were wrong. How could I “fix” the problem of not citing any papers when I had already cited dozens? That’s what I thought for months, during which people continued to read the post and have the same damned reaction. Eventually, I had to confront that even if they were “wrong”, something about my post was causing them to be wrong. Viewed that way, the problem was obvious: The idea that a humidifier could be bad for you is weird and disturbing, and weird and disturbing things are usually wrong so people are skeptical and tend to find ways to dismiss them.

Should they do that?

[Insert long boring polemic on Bayesian rationality]

It’s debatable—but it’s a fact that they do it. So I rewrote the post to be “gentle”. Previously my approach was to sort of tackle the reader and scream “HUMIDIFIERS → PARTICLES! [citation] [citation] [citation] [citation]” and “PARTICLES → DEATH! [citation] [citation] [citation]”. I changed it to start by conceding that ultrasonic humidifiers don’t always make particles and it’s not certain those particular particles cause harm, et cetera, but PEER-REVIEWED RESEARCH PAPERS says these things are possible, so it’s worth thinking about.

After making those changes, no one had the same reaction anymore.

Part of me feels like this is wrong, that it’s disingenuous to tune writing to make people have the reaction you want them to have. After all, I could be wrong, in which case it’s better if my wrongness is more obvious.

Maybe there’s a slippery slope here, but I think most people operate very close to the top of that hill. The goal of writing is to communicate, and it’s silly to ignore the effects it has on the actual people who read it.

So my advice is this: When you hear criticism, you need to guess if people even looked at the post. If they did, some negative reactions are inevitable. But if you repeatedly hear the same complaint, you should have a strong presumption that there is a problem, though it might be very different from the problem people state.

No one is better than the combined efforts of a large group of people.

If comments are often bad, does that mean you shouldn’t read them? If you’re very fragile, maybe. But you’ll be missing out. For one thing, you can often trace back the causal chain, as with the humidifier example above.

A bigger reason is just that sometimes comments are insanely great. Here’s a comment from __blockcipher__ on a post about methamphetamines:

There’s a common myth among tweakers about “n-iso”, which is structurally very similar to methamphetamine - similar enough that it will join the crystal lattice - but it is at best inert, but might actually cause undesirable side effects. The fact that n-iso exists is real, but if you look online you’ll see tons of tweakers convinced that they’ve been smoking n-iso and that it’s why they smoke meth and just get a headache and other bad physical side effects but don’t get the stimulation or the pleasurable rush. What’s actually happening is that they’ve spiked their tolerance so high that they’re getting almost exclusively the bad effects. It’s analogous to how if someone takes MDMA for 4 days straight, by the end of it they’re not going to “roll” at all because they’ve acutely downregulated their serotonin (and dopamine) receptors, and furthermore that they’ve literally (almost) exhausted their current pool of neurotransmitters, which need to be re-synthesized by the body.

Or here’s a comment from svat on a post about the proper usage of analogies:

In Indian/Sanskrit literary theory (poetics), in the discussion of figures of speech (rhetoric, etc), similes are called upamā (“her face is like the moon”, etc). The discussion of it in the literature is extensive and would fill several volumes (and I hardly know anything), but one thing recognized early is that in a simile/analogy, there needs to be a sādharaṇa-dharma, a shared property: the point is that there’s something in common (“her face is beautiful, like the moon”) while of course there is going to be a lot that is not (the intended meaning is not “like the moon, her face is pockmarked, full of craters”, etc). In any given instance, this intended shared property may either be stated explicitly, in which case the simile is called “complete”, or left implicit, in which case it’s called “partial”. Both can be highly effective.

Or here’s a comment from Nameless1995 on a post about if selfhood is real:

I think we often tend to conflate our lived experience of unity with the notion that the whole body has some centralized unitary consciousness. The lived experience is a momentary duration, and it doesn’t appear to me as a centralized and exclusive instance of consciousness — there could be multiple others (in the same body) that are inaccessible to “this” consciousness. Considered as such, mental disorders, DIDs, and split brains are not violations of unity of an instance of consciousness, but would be a result of “de-harmonization” of different instances of consciousness (due to information blockage and other reasons).

The idea of sitting down and finding the One Eternal Truth about anything is a fantasy. The universe has fractal levels of detail in every direction. There are a lot of ridiculously smart and well-informed people out there, and some of them will have deeper knowledge and insight about basically every facet of every thought you ever have. If you can motivate the collective hivemind to pay attention to something you care about, you’d be crazy not to listen.

Oddly, it seems to me that discussions that fully detach from the original article are on average better. If that doesn’t happen, it’s often because people got stuck arguing about minutia. This also happens for detached discussions of course, but they seem to have a better chance of reaching interesting places.

Aside: Techno-optimism is unfashionable at the moment, but I suspect we still haven’t come close to realizing the potential of even the internet technology of the 1990s. When thousands of people converge on a topic, the collective knowledge far exceeds any one person, but our current interaction models don’t do a great job of synthesizing it. It’s a difficult problem, but it’s hard to imagine that in a hundred years we won’t have more effective ways to interact.

The people who will pleasantly engage with you clearly signal their intention to be pleasant.

Sometimes I read a comment and I get a weird feeling, but then I convince myself, “They weren’t rude. They are making a sincere comment, and people shouldn’t have to humble themselves and stoke my fragile ego.” So I’ll try to respond.

As far as I can recall, this has never worked. Once after I wrote about the Monty Hall problem, someone curtly stated that I clearly didn’t understand it because I didn’t mention that Monty must choose a non-car door to open randomly. I thought about this and replied with an argument that if, say, Monty always chose the leftmost non-car door, everything was still fine. They responded that clearly I didn’t even try to read their comment, I’m just like everyone else who doesn’t get it, plus a lot of math that I found incoherent. I wondered—Am I stupid? Was I missing something? So I wrote 25 lines of Python code to simulate it and verified that this didn’t change the probabilities at all. After I posted that code, my correspondent changed nothing, acknowledged nothing, and stopped responding.
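A minimal sketch of that kind of simulation (the door numbering, function name, and trial count here are illustrative choices, not the original 25 lines):

```python
import random

def win_rate(switch, trials=100_000, seed=0):
    """Monty always opens the LEFTMOST door that is neither the
    player's pick nor the car -- no randomness on Monty's part."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # car placed uniformly behind door 0, 1, or 2
        pick = rng.randrange(3)  # player picks uniformly
        # The leftmost door Monty is allowed to open.
        opened = min(d for d in range(3) if d != pick and d != car)
        if switch:  # move to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(win_rate(switch=True))   # close to 2/3
print(win_rate(switch=False))  # close to 1/3
```

Switching still wins about 2/3 of the time, just as in the standard version where Monty chooses randomly.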

There have been many instances where someone wrote to me to say I was wrong, we had a productive back and forth, and they convinced me I was indeed partly or entirely wrong. But in every case, their first message looked like this:

Hello friend, I enormously enjoyed your recent fevered rant on [topic]. However, if I may be so bold I wish to point out errors in paragraphs 1, 2, 3, 5, 12, 17, 20, and 21. [errors] Sadly, these issues render your conclusion not just wrong but incoherent and arguably illegal. Still, you’ve done a great service by writing it and creating a stimulating discussion. Generations to come will admire you! Yours sincerely, Internet Person.

I exaggerate, but it was always overwhelmingly obvious from first contact that they were going to be nice. And people who seemed nice always were nice.

The tricky situation is cases where someone is mildly (or un-mildly) rude but also makes an intriguing point. After many failures, my policy is now to take their comments into account as much as I can and maybe reply with “thanks for your input”, but not to engage or ask follow-up questions.

I’m not sure why things are like this or if this pattern generalizes to other people. But I think everyone needs to build some pattern recognition for this and figure out a policy for when they want to engage.

Distribution, Pareto optimality, and quantity vs. quality.

There’s an argument that most writing has no value. It goes like this: Every hour, more text is produced than you could read in a lifetime. If you can write the best piece on a given topic, great, but otherwise we don’t need more content. And don’t kid yourself—to write the best piece, you’d need to pick a single topic, become a world expert, and spend months polishing the writing. Most writing is just people yelling over each other for their own reasons.

The standard response is to gesture towards Pareto optimality: There’s no “best” article on a given topic because there are many dimensions of quality, which people prioritize in their own ways. Unless another article is better than yours in every dimension simultaneously you have the potential to be the best article for someone.

That’s a nice thought. But surely it’s significant that we have no mechanism for that person to actually find the article that’s optimal for them? (Or maybe Google is really onto something and when you think you want a recipe what you really need is pages of SEO-optimized autogenerated gibberish.) To contribute value in practice, an article needs to be better than everything else for a decently large slice of the population.

That counter-counter-argument seems strong. Yet, I follow a lot of people who write about lots of different topics and it feels like I get value from this. Am I delusional?

I don’t think so, but even if I get value, there could be something else that provides more value. Still, I can’t shake the feeling that the people I follow truly are brightening my life on net. I have several hypotheses for why:

First, there are a lot of topics, and it’s not that hard to be the best. Often this is achieved by virtue of being the only article on a topic.

Second, it’s easier to understand writing by people you’re familiar with. They can get to the point without wasting time establishing context.

Third, people have qualities that are fairly consistent across the stuff they write. If I’m familiar with the concepts someone uses and I get their sense of humor and like the way they choose examples, then lots of the stuff they write can immediately become the best article on a topic for me.

Fourth, the distribution problem works both ways. Take a model of the internet as millions of people screaming into the night, with readers just bumping into them at random. In this model, you only need to be above “average” to contribute value. Similarly, because distribution is so poor, writers help with “unknown unknowns”. I had no idea I wanted to learn about Ryszard Kapuscinski before Matt Lakeman wrote about him.

So here’s a thought experiment: What would things be like if you could plug your brain into a robot and automatically get whatever content is closest to your needs? On the margin, there would be less need to “follow” people, and more opportunity for “weirdness”. But it’s unclear what effect this would have on the reach of domain experts versus generalists. I think that comes down to how much we value information versus other qualities like shared context, readability, familiarity, and aesthetics.

10 comments

Just to be clear, when talking about how people behave in forums, I mean more "general purpose" places like Reddit. In particular, I was not thinking about Less Wrong where in my experience, people have always bent over backwards to be reasonable!

Often when I see someone puzzled about why their factually and logically sound post or comment is responded to so negatively it's because their tone does not encourage people to take their thoughts seriously.  Unfortunately, the tone of your post matters and the tone you need varies based upon your target audience.


(Not saying this applies to you, dynomight!)

I'm curious, because this may be a mistake I often make. How specifically do you "encourage people to take your thoughts seriously"? Do you mean the style of the text, or inserting links to supportive papers, or some kind of disclaimer like "hey everyone, this is serious", or...?

I don't really have any specific advice on how to write in this way. I don't think I consistently write in this way either, but not through lack of trying.

I'm talking about getting people to respond to the content of your post rather than responding to more nebulous social or emotional stuff that they take away from the phrasing, tone and subject matter of what you write...specifically if the social/emotional context is not intentional.

Some quick thoughts:

  1. Is the style of your text confrontational?  I think some people are able to get away with being confrontational and not turn people off from the content, but I think that's far from the norm.
  2. If you can frame things more like you're trying to discover something with your reader rather than "here's the thing that I figured out and you've been so wrong" it can make people less likely to feel like they're losing status by agreeing with you.
  3. In general try to not be a jerk!  Make it easier for others to respond to you in ways that you're not going to irrationally react negatively to.  

If you can frame things more like you're trying to discover something with your reader

Thanks, will try this.

I don't understand your Monty Hall example. If Monty always reveals the leftmost non-car door that you didn't pick (which I guess is what you mean), then he will reveal either (A) the leftmost door that you didn't pick or (B) the rightmost door that you didn't pick. In case (B) you can be sure that switching will result in a win, because if Monty passed over the leftmost door then it must be because it contained a car. On the other hand, in case (A) the chance is actually 50% that you win by switching, because Monty's algorithm gives no information as to whether the rightmost (unrevealed) door contains a car. In both cases the conclusion is different from the standard (random) Monty Hall problem, where you have a 2/3 chance to win by switching. What am I missing?

I might not have described the original debate very clearly. My claim was that if Monty chose "leftmost non-car door" you still get the car 2/3 of the time by always switching and 1/3 by never switching. Your conditional probabilities look correct to me. The only thing you might be "missing" is that (A) occurs 2/3 of the time and (B) occurs only 1/3 of the time. So if you always switch your chance of getting the car is still (chance of A)*(prob of car given A) + (chance of B)*(prob of car given B)=(2/3)*(1/2) + (1/3)*(1) = (2/3).

One difference (outside the bounds of the original debate) is that if Monty behaves this way there are other strategies that also give you the car 2/3 of the time. For example, you could switch only in scenario B and not in scenario A. There doesn't appear to be any way to exploit Monty's behavior and do better than 2/3 though.
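Both claims are easy to check numerically. Here's a sketch (the strategy names and door numbering are my own):

```python
import random

def strategy_win_rate(strategy, trials=100_000, seed=1):
    """Monty opens the leftmost door that is neither the pick nor the car.
    Scenario B = Monty opened the RIGHTMOST unpicked door, meaning he
    skipped the leftmost unpicked door because the car is behind it."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        opened = min(d for d in range(3) if d != pick and d != car)
        in_b = opened == max(d for d in range(3) if d != pick)
        if strategy == "always-switch" or (strategy == "switch-in-b" and in_b):
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

for s in ("always-switch", "switch-in-b", "never-switch"):
    print(s, strategy_win_rate(s))  # both switching strategies land near 2/3
```

"Switch only in scenario B" wins for sure in B (1/3 of games) and wins in A exactly when the original pick was right (another 1/3), so it also totals 2/3.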

Ah, I see, fair enough.

There are 3 doors

1, 2, 3

You pick one of them.

(1), (2), (3)


If Monty always chooses the leftmost door (among those you didn't pick) that doesn't have the car behind it, then: if he opens the rightmost of the doors you didn't pick, you should switch, since he must have skipped the leftmost one because the car is behind it. If he opens the leftmost one, switching doesn't matter.


This is true. However, if you don't know in advance if Monty has the rule:

  • 'open the leftmost door that doesn't have a car behind it'
  • or
  • 'open the rightmost door that doesn't have a car behind it'

Then even if you knew that Monty had one of those rules, which door Monty opens doesn't tell you anything (until after you find out the result of your choice).

This is a great article.

There are always “bugs” everywhere

I didn't see a lot of that here.

(This comment is currently a work in progress.)

The idea of sitting down and finding the One Eternal Truth about anything is a fantasy.

Is the world flat?

Sometimes much certainty may be possible - perhaps even easily. Overall, this may be 'the exception rather than the rule' (for now).

acknowledged nothing, and stopped responding.

Perhaps this follows the same rule as:

Here’s something I’ve noticed about myself: If I read something great, I’ll sometimes write a short comment like “This was amazing, you’re the best!” Then I’ll stare at it for 10 seconds and decide that posting it would be lame and humiliating, so I delete it and go about my day. But on the rare occasions that I read something that triggers me, I get a strong feeling that I have important insights. Assuming that I’m not uniquely broken in this way, it explains a lot.

No one is better than the combined efforts of a large group of people.

I'd say it's more rare than impossible. There are also times when a group of people are all convinced of something and they're all wrong. Here, the nuance may be less 'the exception rather than the rule' and more 'different groups of people in disagreement, and

  • they can't all be right
  • or
  • what's correct draws from the groups in disagreement, but is complicated (and may be unpopular both due to the groups and the complexity)
  • (empirical tests remain to be done to determine, etc.)'

Good editors are gold.

Fascinatingly, they may also be free. Other arrangements involving 'editors' are possible - paid professionals, or writers exchanging drafts and giving each other feedback - and these can be less unpleasant, if only because they involve higher-quality criticism. This may not entirely beat being 'exposed to the internet' - more eyes can catch more bugs - but if groups of people working together are better than individuals, then why should posts so often be written by one person (modulo non-unitary selfhood, or whatever)?

Looking through comments can also take time and effort. One person doing that...sounds like a bottleneck.*

*It might not always reach capacity and be an issue but...there are limits.

There’s an argument that most writing has no value. It goes like this: Every hour, more text is produced than you could read in a lifetime. If you can write the best piece on a given topic, great, but otherwise we don’t need more content. And don’t kid yourself—to write the best piece, you’d need to pick a single topic, become a world expert, and spend months polishing the writing. Most writing is just people yelling over each other for their own reasons.

Quality is not a dice roll. Improvements - and iteration - are possible.