In polite society it's currently fashionable to be in favor of Evidence-based Medicine and to proclaim that we don't have enough of it. In this article I want to argue that this preference isn't backed by good reasons. The paradigm of Evidence-based Medicine rests not on evidence that proves its virtues but on faith.

What's Evidence-based Medicine in the first place? The term was defined by Guyatt et al in their 1992 paper "Evidence-Based Medicine - A New Approach to Teaching the Practice of Medicine". According to the paper, Evidence-based Medicine requires new skills of the physician, including efficient literature searching and the application of formal rules of evidence in evaluating the clinical literature. Evidence-based Medicine was supposed to replace theory-based medicine with empirically backed medicine.

The paper assumes that physicians who learn the skills of literature searching and of applying formal rules of evidence will produce better clinical results for their patients. Theoretically, there are valid reasons why someone might believe this assumption. For a community who sincerely believes in evidence-based thinking instead of practicing belief-in-belief, I would however expect that they test their assumptions.

It would be possible to run a controlled study in which some doctors get extra classes in those efficient literature-searching skills and in applying formal rules of evidence. If the cost of such an experiment were too high, it would even be possible to look for correlational evidence. To my knowledge, nobody has tried to run either kind of study.

I opened a question on Skeptics.StackExchange to find out whether anybody could find studies that prove the core assumptions of Evidence-Based Medicine, and nobody replied with studies validating the idea that teaching doctors more of those evidence-based skills improves patient outcomes.

Brienne Yudkowsky wrote on Facebook that she thinks the Hamming question for epistemic rationality might be "To which topics, or under what circumstances, do you apply different epistemic laws?". For many people, medicine is such a topic. The majority of supposed defenders of Evidence-based Medicine accept without evidence from controlled studies that those Evidence-based methods of practicing medicine are better. At the same time, they fight alternative medicine paradigms for not providing enough studies to back up their claims.

A core assumption of Evidence-based Medicine is that results found in one patient population generally generalize to other patient populations. If that were true, it should be easy to replicate studies. In reality, replication often fails even when a lot of attention is invested in getting comparable patient populations.

In real-world clinical settings the patient population is more diverse than the carefully chosen patient population of a trial. In a clinical trial, patients often take only one drug, while in normal clinical practice patients often take multiple drugs to fight multiple diseases.

Another part of the core Evidence-based Medicine dogma is the dualistic notion that doctors should focus on creating clinical effects for their patients through proper interventions and not through placebo effects. This means that while patients don't care whether they get better because of mind or matter, doctors are primarily focused on the matter. An alternative therapist who might get clinical effects for his patients by spending an hour talking to them gets rejected in favor of a doctor who interviews a patient for 5 minutes and then gives them a pill. These dogmatic beliefs about how to think about the placebo effect are also largely formed without scientific investigation of the placebo effect. There's a strong double standard about which kinds of beliefs need studies to back them up and which can be accepted without empirical evidence because they make theoretical sense.

There's a belief that placebo blinding procedures generally result in patients not knowing whether they got the placebo or the verum. Rabkin et al investigated in their paper "How blind is blind?" how well patients can tell what they got: 78% of the patients and 87% of the doctors could correctly distinguish between placebo and verum when asked. A research community that sincerely believed in the tenets of evidence-based medicine would start asking patients in every trial for their subjective belief about whether they got the placebo. The behavior of the community we do have, which keeps following its established rituals without questioning them, looks more like belief-in-belief.
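Such a check would be cheap to add to any trial: record each participant's guess about their assignment and test whether the guesses beat chance. Below is a minimal sketch of what that could look like, assuming 1:1 allocation; the counts are hypothetical and only loosely echo the 78% figure above, not the actual numbers from Rabkin et al.

```python
from scipy.stats import binomtest

def blinding_check(n_correct: int, n_asked: int) -> None:
    """Test whether trial participants guess their assignment better than chance.

    Under successful blinding (and 1:1 allocation), roughly half of the
    guesses should be correct; a large excess suggests unblinding.
    """
    result = binomtest(n_correct, n_asked, p=0.5, alternative="greater")
    print(f"{n_correct / n_asked:.0%} correct guesses, "
          f"p = {result.pvalue:.3g} against the chance-level null")

# Hypothetical counts, for illustration only.
blinding_check(n_correct=78, n_asked=100)
```

More refined blinding indices exist in the methods literature, but even a crude check like this would expose trials whose placebo fails to blind.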

One hypothesis is that patients know whether they are taking a placebo or the verum because the verum has side effects. In an environment where the placebo-controlled effect of antidepressants, as Kirsch et al describe in their paper, amounts on average to only 1.8 points on a 50-point scale, there's the question of whether antidepressants with strong side effects unblind themselves and therefore come out ahead in direct comparisons with antidepressants that have fewer side effects. Unfortunately, the ethical review boards don't care about those issues and rather focus on preventing consent forms from getting signed with pencils.

There's one Evidence-based Medicine belief that will look very strange to future students who want to make sense of our beliefs. It's the belief that the blind man sees better, that it's bad to clearly see the object under investigation in all its details. It's true that the practice of blinding can keep us from falling victim to various biases, but having access to less data also prevents us from seeing real patterns. Ironically, this blindness leads to researchers not being interested in the subjective experience of their patients, to the point that they don't gather data about whether the patients think they got the verum.

Why do we think we need Evidence-based Medicine in the first place? We don't want to trust in human authorities. We want science to free us of the need to trust authorities. Instead of asking how we can develop justified trust in human authorities, we dream of objective knowledge that transcends them.

In my post about Prediction-based Medicine I proposed a system in which we let doctors make predictions about the outcomes of their treatments and use the quality of those predictions to establish authority. Once we solve the problem of trust, knowledge production itself can become more diverse. One scientist might understand a disease better by doing a phenomenological investigation of the subjective experience of patients. Another scientist might use a lot of sensors and run machine-learning algorithms to better understand the disease. Both profit if they don't have to fit inside the bureaucracy of Evidence-based Medicine and can focus on producing knowledge that helps doctors make better predictions about how to treat their patients.
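The Prediction-based Medicine post doesn't prescribe a particular scoring rule, but a proper scoring rule such as the Brier score is one natural way to turn a doctor's track record of predictions into a trust signal. Here is a minimal sketch with made-up numbers; the two "doctors" and their predictions are purely hypothetical.

```python
def brier_score(predictions: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and observed outcomes.

    Lower is better; always predicting 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Each entry: (predicted probability of recovery, whether the patient recovered).
doctor_a = brier_score([0.9, 0.8, 0.7, 0.6], [1, 1, 1, 0])  # 0.125
doctor_b = brier_score([0.9, 0.8, 0.7, 0.6], [0, 0, 1, 1])  # 0.425
print(f"Doctor A: {doctor_a:.3f}, Doctor B: {doctor_b:.3f}")  # lower score -> more earned trust
```

A doctor who consistently scores well has demonstrated calibrated knowledge about treatment outcomes, regardless of which paradigm produced that knowledge.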

It won't be, as Hahnemann said, "Wer heilt, hat Recht" ("He who cures is right") but "He who can predict in advance that he will cure the patient and then actually cures the patient is right".

37 comments

It seems like, to replace Evidence-based Medicine, you're proposing More-evidence-based Medicine. I think all of your object level ideas are good, but I find the title and the tone of the post weird. Surely nobody thinks that the current system is without flaws. Maybe you're strawmanning someone?

Also, there are obvious reasons why we'd want to replace a doctor's authority and experience with a standardized and shared body of knowledge, even if that doesn't really improve the outcomes. You seem to suggest some pipeline that would transfer the experience of every doctor into this shared knowledge. This is a good idea and a natural next step for the system, but it should also be obvious why it would be hard to implement.

I think all of your object level ideas are good, but I find the title and the tone of the post weird.

When drug trials don't measure whether their efforts at blinding are actually successful, that behavior is very similar to that of the rat psychologists Feynman talks about in his famous piece on Cargo Cult Science.

There's a common argument that practitioners of alternative medicine X are practicing Cargo Cult science while mainstream psychiatrists mostly aren't, and that simply isn't the case.

Kirsch et al 2008 showed that if you try to control for placebo, antidepressants lose most of their effectiveness and are left with a 1.8-point effect on a 50-point scale.

Pills that have a lot of side effects are inherently difficult to blind, and as a result they create placebo effects even when you try to control for placebo. Kirsch's hypothesis is that the remaining 1.8 points can be explained this way.

If that's true, the homeopath who gives out sugar pills to create his placebo effects is better than the psychiatrist who uses drugs with strong side effects, because Moloch made the drugs with strong side effects win by being harder to blind.

I don't know whether that hypothesis is true, but Evidence-based Medicine advocates don't seem to do the science to find out whether it is; they rather do what Feynman described.

Also, there are obvious reasons why we'd want to replace a doctor's authority and experience with a standardized and shared body of knowledge, even if that doesn't really improve the outcomes.

I don't think that monocultures are in general better than diversity. Laws that standardize a body of knowledge make innovation a lot harder. Medical education is very expensive because there are no systemic pressures to focus on teaching a doctor exactly the skills he needs to help his patients.

If a system focused on pay-by-performance (via Prediction-based Medicine) instead of pay-for-clever-arguments-for-interventions, I think that would produce much better outcomes.

When drug trials don't measure whether their efforts at blinding are actually successful, that behavior is very similar to that of the rat psychologists Feynman talks about in his famous piece on Cargo Cult Science.

Running trials in a flawed way is bad, but not as bad as running no trials at all. Nobody has ever suggested that the current system is perfect and can't be improved in any way, so who are you arguing with?

I don't know whether that hypothesis is true, but Evidence-based Medicine advocates don't seem to do the science to find out whether it is; they rather do what Feynman described.

Presumably, Evidence-based Medicine advocates have decided to spend their limited resources on developing treatments using the current system, rather than on improving the system itself. Their position is not inconsistent, irrational or objectively worse than yours, as you may be suggesting.

I don't think that monocultures are in general better than diversity. Laws that standardize a body of knowledge make innovation a lot harder. Medical education is very expensive because there are no systemic pressures to focus on teaching a doctor exactly the skills he needs to help his patients.

Mono-cultures have their benefits. Without a standardized body of knowledge most innovation is just reinventing the wheel. Was medical education very cheap and short before Evidence-based Medicine?

More generally, I don't understand your position. On one hand you're arguing that current tests aren't rigorous enough, and that the doctors don't understand statistics well enough. And on the other hand, that we don't need to run tests or teach statistics at all?

Running trials in a flawed way is bad, but not as bad as running no trials at all.

How do you know? In a scenario where all the effects of antidepressant interventions are "placebo", you will get worse outcomes with a system that tests different pills against each other, because the pill with the most obvious side effects is going to win (because it unblinds itself).

Presumably, Evidence-based Medicine advocates have decided to spend their limited resources on developing treatments using the current system, rather than on improving the system itself. Their position is not inconsistent, irrational or objectively worse than yours, as you may be suggesting.

You can argue that Cargo Cults aren't objectively worse than non-Cargo Cults because they just choose to use their limited resources on something other than checking whether their methods work, but that doesn't change the fact that it's a Cargo Cult.

More generally, I don't understand your position. On one hand you're arguing that current tests aren't rigorous enough, and that the doctors don't understand statistics well enough. And on the other hand, that we don't need to run tests or teach statistics at all?

This post is focused on criticism and not on advocating alternatives. My post on Prediction-based Medicine lays out one approach to an alternative system that also uses statistics, but very different statistics.

Kaj Sotala recently wrote about how he used the NLP book Transforming Your Self by Steve Andreas to cure his depression. I see such a report as a reason to read the book and learn the underlying skills even though it's not Evidence-based Medicine.

On the other hand there are people who reject that kind of decision making and want to outlaw it.

How do you know?

To show that Evidence-based Medicine is a worse system than what we had before, you'd have to show that health outcomes are declining (in a way that can't be explained by external changes). I'm not aware that this is true, and you haven't argued for it. It's possible that the current system does fail miserably for depression treatments, but is still worth keeping for other benefits.

Kaj Sotala recently wrote about how he used the NLP book Transforming Your Self by Steve Andreas to cure his depression.

Again, on one hand you're criticizing trials for not having perfect placebos, and on the other hand you're pointing to a single self-selected self-reported claim with no control at all, as a positive example. To be fair, I haven't read your other post; maybe you have devised a statistically sound mechanism that can make use of such claims. However, that's not what your position looks like from here.

outlaw it.

What exactly?

I'm not aware that this is true, and you haven't argued for it.

I haven't argued that it's true. I have argued that we don't know whether it's true, and you claim to know that it's true. Do you take that claim back?

If I got you to the point where you agree that we don't know whether or not those trials help, my post had a decent impact.

Again, on one hand you're criticizing trials for not having perfect placebos, and on the other hand you're pointing to a single self-selected self-reported claim with no control at all, as a positive example.

No. I'm criticizing them for pretending that they have perfect placebos when they don't. Gathering data about whether patients think they got a placebo isn't expensive. The only reason not to do it is that it's awkward and exposes the messiness of the underlying reality.

One issue with Evidence-based Medicine is that people start trusting what other people wrote down instead of trusting their own empirical experience and ability to reason.

Eliezer wrote in Inadequate Equilibria about how he cured his wife's SAD through a combination of his own empiricism and reasoning. He argued that our health care system is dysfunctional enough that the present system has no good way to provide an effective market. Eliezer proposes his own system of how health care should work, which also features pay-by-performance. I would be happy with Eliezer's system getting implemented.

What exactly?

There are people out there who advocate for various legal instruments to forbid the practice of medicine in ways that don't follow Evidence-based Medicine standards. But I don't want to go too much into the details of existing health regulation here.

I have argued that we don't know whether it's true, and you claim to know that it's true.

Maybe I was unclear; I was referring to general health outcomes such as life expectancy, cancer survival rates, etc. Those are being measured and seem to be moving in positive directions. If medicine had really become a Cargo Cult, I would have expected them all to be stagnant or falling.

I'm criticizing them for pretending that they have perfect placebos when they don't.

Are you also the sort of person who criticizes economists for assuming that humans are rational, when they aren't? It sounds similar. Both criticisms are fair, but their significance is dubious.

Inadequate

Yes, the system is inadequate, always has been, EBM has nothing to do with it. To clarify, I'm under the impression that you claim the system before EBM to be superior to EBM. Maybe that's not true?

There are people out there who advocate for various legal instruments to forbid the practice of medicine in ways that don't follow Evidence-based Medicine standards.

Are you talking about regulatory capture? Again, that's not a new problem, EBM has nothing to do with it.

Maybe I was unclear; I was referring to general health outcomes such as life expectancy, cancer survival rates, etc.

There's an easy way to improve cancer survival rates: diagnose more people whose cancers would never have caused them any problems. You might harm them by taking out some of their organs, but they won't die from that, and you have increased cancer survival rates as a result.

Sarah Constantin writes on her blog: "Cancer deaths have only fallen by 5% since 1950"

In the last 50 years we have spent a lot of effort on putting more filters into our power plants and getting cleaner air. That might be one way to explain the 5% fall in cancer deaths. People also smoke less, thanks to insights gathered from correlational studies rather than well-controlled trials. EBM as commonly practiced would suggest trusting those correlational studies less.

Are you also the sort of person who criticizes economists for assuming that humans are rational, when they aren't? It sounds similar. Both criticisms are fair, but their significance is dubious.

Economists are interested in the ways in which humans aren't rational. There's a lot of research money invested in behavioral economics. I can't see the medical community giving Kirsch its Nobel Prize in the same way the economics community gave Kahneman its Nobel Prize.

As far as I know, there are no simple steps available to economists, analogous to recording which patients actually think they got the real drug, that they fail to take.

Practically, another problem with treating placebos this way is that doctors in our system don't attempt to maximize positive placebo effects and minimize nocebo effects.

To clarify, I'm under the impression that you claim the system before EBM to be superior to EBM.

I'm not making a claim that strong. A lot of the ways in which I think EBM reduces innovation are about providing justifications for regulatory capture.

From my perspective it's also more important to ask how we could create a system that's better than what we have now than whether the system of the past was better. A system like the one I proposed under the title Prediction-based Medicine wouldn't have been possible before the internet.

Sarah Constantin writes on her blog: "Cancer deaths have only fallen by 5% since 1950"

The post seems reasonable. It points out some stagnation and some isolated wins (also, the 3x reduction in heart disease?). It could be used to claim that cancer research is inadequate, but it absolutely does not defend your "EBM is a Cargo Cult" rhetoric.

There's a lot of research money invested in behavioral economics.

That's relatively recent. There are also a number of obvious reasons why medicine would lag behind economics in rethinking its dubious assumptions. Still, I predict that in the coming decades some of those issues will be addressed.

how we could create a system that's better than what we have now

I fully support your advocacy for better placebos. However, your PBM has issues, as all simple solutions to hard problems do.

I don't think that the rat psychologists didn't create any valid knowledge, and I don't think Feynman thought so either. He called them Cargo Cultists because they don't care about investigating the assumptions on which their research rests. I think the same is true of EBM.

Feynman says that what's needed to avoid Cargo Cultism is:

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

When researchers fail to report information about how well their attempts at blinding actually blind people, they aren't living up to that standard.

However, your PBM has issues, as all simple solutions to hard problems do.

It's not perfect, but it doesn't have to be perfect to be better than the status quo. I'm also not calling for a monoculture or demanding that everything has to be done via PBM.

they don't care about investigating the assumptions on which their research rests

That sounds like a problem with the researchers and not with the system. I understand that you want to solve this problem with better incentives, but I don't actually see how PBM helps with that.

It's not perfect, but it doesn't have to be perfect to be better than the status quo.

The problem with novel ideas is that we're often not clever enough to predict all the ways they will go wrong. Therefore, if a novel idea looks only slightly better than status quo, it's probably going to be worse than the status quo by the time we've implemented it. And that's before we consider switching costs.

I'm also not calling for a monoculture or demanding that everything has to be done via PBM.

That's weird; you criticized cancer research a lot, but it seems that PBM wasn't intended for that? It's okay to have partial solutions. But while reading your comments about cancer, I assumed that you did have better ideas.

That's partly what I meant by "weird tone" in the first comment. From my point of view the system is almost the best thing humans could reasonably make, with any flaws relatively minor and with some solutions presumably on the way. And from your point of view, presumably, it's fatally flawed and near useless. But you aren't providing much evidence that would make me change my view at all.

That sounds like a problem with the researchers and not with the system.

I don't think that the placebo problem is caused by individual researchers being stupid, but by the incentives that the system sets.

Researchers don't care about investigating the foundations because they can't get grants for that purpose. On the other hand, they get grants for doing research that might lead to new drugs that make billions in profit.

Therefore, if a novel idea looks only slightly better than status quo, it's probably going to be worse than the status quo by the time we've implemented it.

The solution I outline in my post is not to start by competing with hospitals but by going for treatments that are currently provided by hypnotists and bodyworkers, who don't practice much EBM anyway. It's a class of people where individual skill differences between practitioners matter a great deal, and studies are therefore less likely to generalize than studies about giving out pills.

If you take chiropractors, who happen to be a numerous class of bodyworkers, it took until 2008 before we had a Cochrane meta-study according to which chiropractic provides a working treatment for lower back pain.

Moving from a system that takes decades to come to that conclusion and that can't distinguish between the skill levels of different chiropractors, to a system that can do so within a year, is more than just "slightly better".

But you aren't providing much evidence that would make me change my view at all.

A lot of the value of changing the system depends on how inadequate you believe the present system to be. It's inherently difficult to provide evidence about the amount of low-hanging fruit that's out there, because by their very nature cases like EY's SAD treatment come with only anecdotal evidence for those treatments.

This post from Sarah Constantin might give some indication that there is a lot of low-hanging fruit out there that isn't being picked by our current system.

In The Legend of Healthcare, Michael Vassar uses the inability of our system to get doctors to use mirrors to treat phantom limb pain as evidence that we don't really have a healthcare system. Prediction-based Medicine would make it easy for one provider of mirror treatment to offer his treatment to all the people with phantom limb pain who seek treatment.

Researchers don't care about investigating the foundations because they can't get grants for that purpose. On the other hand, they get grants for doing research that might lead to new drugs that make billions in profit.

I don't see why EBM is to blame or PBM would help.

Cochrane meta-study according to which chiropractic provides a working treatment for lower back pain

Reading the abstract, it doesn't look all that positive.

more than just "slightly better".

If you'll let me be witty, I'll suggest that your claims are lacking a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. You're not stating the assumptions that lead you to this conclusion, not explaining possible ways the assumptions could be wrong, and more generally, you don't seem to try to find possible negative consequences or limitations of PBM. To be honest though, I don't really want to discuss these possible problems. This thread already has enough going on.

It's inherently difficult to provide evidence about the amount of low-hanging fruit that's out there

For example, some of this fruit involves wasted money (especially in the US). I can agree that it's low-hanging, because there are other medical systems providing similar outcomes in similar circumstances for less money, and I can vaguely imagine that policies could be adopted in the US to reduce costs. By the way, cost problems have very little to do with EBM. The talk you linked to also seems less about how EBM is inadequate and more about how doctors are failing to make good use of EBM.

More generally, I don't need a list of problems that kind of look easy to solve. I need you to show me why exactly the problems aren't already solved, how much benefit there would be if we did solve them, and that there exists a concrete and simple plan to solve them without assuming that we live in full communism.

What I'm hearing is that EBM as described can't be very effective. Doctors do not have the mental energy to spare to be hunting through the literature. For one thing, the literature is a mess: there's a massive amount of dead wood, conflicting findings, studies that didn't replicate, and so on. It is probably beyond the ability of a human to synthesize it all into a decision matrix to actually use to treat patients optimally.

Obviously the basic principle of using evidence gathered from lots and lots of places to make decisions instead of just doing whatever the attending physician you learned under does is sound. As long as the laws of the universe are the same between different medical clinics, this is a better method to make decisions with.

But yeah, this is a field waiting for AI to crack it open.

Errata:

who sincerely beliefs

who sincerely believes

Ah, but what's the evidence that choosing which system of medicine to use based on a study improves medical outcomes?

We shall call it, evidence-based-evidence-based-evidence-based-medicine.

What’s a “Hamming question”? (A quick googling didn’t turn up a Wikipedia page for the term.)

Edit: Also:

Unfortunately, the ethical review boards care about issues like that and rather focus on preventing consent forms getting signed with pencils.

Did you mean “don’t care”?

It's a CFAR term inspired by the story in which Hamming went up to a bunch of people and asked them "what's the most important problem in your field?" and then, after hearing their answers, "why aren't you working on it?" It means, roughly, the most important problem in your life.

Thanks.

(It is probably best to link to a definition of terms like this, when using them.)

(There actually isn't any standard explanation online one can link to, not even a bad one.)

(Maybe someone could write a sequence of the most useful ideas CFAR has come up with and then - if it's well-written, by an actual CFAR tutor - I can curate it and thus encourage everyone to read it, and give users badges when they've read it and then you can see at a glance if another reader has the background knowledge. And then we won't have to have conversations like this one ever again.)

(But whatever. I'm sure we'll get to it.)

Ah. Well, in that case, it’s probably best to explain the term when using it (until such time as a linkable explanation is written)…

I expect such a norm to basically prevent 80% of conversations in almost any organization or local community from being written down. I am strongly in favor of minimizing the costs to people writing stuff down.

I… don’t understand this comment at all. :(

(a) What does what you said have to do with this post? This doesn’t look like a conversation, it looks like… just, a blog post. I am confused about why you are bringing up conversations.

(b) What organization or community has anywhere close to 80% of its conversations being written down?! (Certainly not the rationalist community, or CFAR… I mean, that’s… that’s the whole problem here! That no one has written down what I assume have been conversations about this term “Hamming question”!)

(c) How does the practice of adding explanations of unusual terms when making public posts, prevent you (or anyone else) from writing things down…?

I am very, very confused.

Response to c and b:

I believe Oli's point is that a norm for giving full explanations for every non-standard term you wish to use is an added burden to the task of writing one's thoughts down, and while you might think it is very important, it is going to reduce the total amount of conversations that get turned into insightful blog posts.

You correctly point out the problem of people not writing down their interesting conversations - but if you want to help that happen you want to reduce the barriers to people writing things, not increase them.

Response to a:

As I understand Oli's comment, you're correct that it's not a direct response to this post. It's a response to your proposed norm. Right now, most of the insights in our community from the past five years have not been written up, and this was salient to Oli as an important source of value, and the situation is such that if you want those ideas written up, you want to do everything you can to remove barriers to writing, and make it easier for people to write the ideas up.

But the argument also applies in the example of this blog post - having norms about extra work you have to do before posting is going to increase the cost of writing regardless of the source of the ideas, and thus is a disincentive to writing (while nonetheless increasing the immediate understandability of whatever writing still gets published).

Thank you, that clarifies things a lot. So, when you (and Oliver) say “written down”, you meant “… and posted publicly”, I guess. So here’s my question:

Is the current situation, that most things of interest have been (literally) written down but not published (meaning, posted as posts on LW, etc.)? Or, have they physically not actually even been written down?

These seem like distinct situations. If the former, well, go ahead and publish them, and then we can all collectively work on adding whatever explanations need to be added. If the latter, then the norm of "add explanations to unusual terms when making a public post" can hardly be at fault!

(As an aside: the proportion of CFAR-sourced and CFAR-adjacent ideas that have been publicly posted is so low (I think… I mean, I can’t be sure, obviously, but that’s the impression I get) that it’s hard to see how much of a difference adding or not adding explanations could make. It’s not like we’re starting from a place of many things being published, and worrying about reducing that to few…)

the norm of “add explanations to unusual terms when making a public post” can hardly be at fault!

That norm is probably not a major reason why they don't get written, but I would guess it is a contributing factor. And since right now very few things get written down, the marginal value of increasing that is particularly high.

But the value is dramatically reduced if most of the potential audience doesn't understand what's been written down due to unexplained jargon.

I think you are missing my point, maybe because my wording was a little bit convoluted. I am saying:

The norm of “add explanations when publishing” cannot possibly affect whether conversations get written down in the first place (regardless of whether or not they then get published).

(As for the marginal value of increasing how many things get published, yes, perhaps it is high, but we are not talking about that, right? We’re talking about not decreasing it, which is not quite the same thing. After all—as evidenced by the OP—the said norm clearly does not exist, as things stand…)

The norm of “add explanations when publishing” cannot possibly affect whether conversations get written down in the first place (regardless of whether or not they then get published).

That seems wrong to me. A lot of the payoff of writing things down comes from publishing the things I've written and getting recognition for that. So increasing the cost of publishing reduces the likelihood of me writing things down in the first place, since it reduces the total cost/benefit ratio of writing things down (whose positive term in large parts consists of the benefits of publishing them).

True! This is certainly a good point.

The problem I have with your model is that it doesn’t seem to predict reality. Consider:

Eliezer wrote the Sequences, despite explaining everything (usually multiple times, and at length), linking everything to everything, and generally taking tremendous effort.

People of the Less Wrong (and adjacent) community today (CFAR included, but certainly not exclusively) are (apparently? [1]) for the most part neither writing anything down nor publishing it (despite the norm of “explain and/or link things” clearly not being nearly strong enough to result in “Yudkowskian levels of hyperlinking” or anything close to it).

I am all for more things being written down and then published! So your general point—that barriers to writing/publishing ought to be minimized—is one which I wholeheartedly endorse. But I question whether it makes sense to object to this particular barrier, because it seems to already be absent, and yet the thing we are worried about not stifling—dissemination of the community’s current ideas and so on—is mostly not there to be stifled in the first place. Clearly, some larger problem obtains, and it’s only in the context of a discussion of that larger problem that we can discuss norms about explaining and/or linking. (One might call it an isolated demand for rigor.)

[1] I say “apparently” because, obviously, I have no real way of knowing how much (ideas, concepts, techniques, whatever) there is to be written down and published; though I do get the vague sense, from things said here and there, that there’s quite a bit of it.

Yep, I think we agree on the broader picture then. I actually think this specific requirement has a pretty decent effect size, and so exploring that specific disagreement about effect size and impact in the larger context seems like a good next thing to do, though probably in meta and not here.

Yep, that’s an accurate summary. Sorry for being unclear.

You shouldn't expect Wikipedia pages for LessWrong jargon. If you enter "Hamming question" in the search bar of this website, it shows you the post Unofficial Canon on Applied Rationality. That post does a decent job of explaining the term in sufficient depth to understand what I'm saying. It also links to Hamming questions and bottlenecks for further information.

When writing a post like this, where the target audience is LWers, I use community jargon even when that means that outsiders have a harder time following along. In this case, not understanding the term doesn't prevent you from getting the main point the paragraph is about.

Having to explain community jargon every time it's used means that you lose the advantage of jargon providing a short way of pointing to a concept.

If Facebook allowed easy linking of wall posts, I would have linked here to the Brienne Yudkowsky post to which I'm responding, but unfortunately it's not easy to link to Facebook.

There's also Kaj's writeup: https://www.lesserwrong.com/posts/aRNxWnqnrz3FdNpfi/dark-arts-defense-in-reputational-warfare/CQL2WkNB8u8tK6T5r

Well, the problem is that it is not really “Lesswrong jargon”; it is CFAR jargon, which is different. I am no stranger to LW jargon myself (having been reading Less Wrong since long before it was Less Wrong), but this term was unfamiliar to me. (If you’re assuming that everyone associated with Less Wrong is also associated with CFAR, or knows CFAR jargon, etc., that assumption is clearly mistaken and, in my view, rather problematic. The story would be different if there were, say, a CFAR knowledge base / wiki / something; but there is not.)

Two further points:

  1. If I don’t know the term, then I also don’t know that it’s a piece of CFAR jargon, so I cannot anticipate the failure of a Google search for it, nor the reason for that failure. Maybe it’s from CFAR, maybe it’s from some esoteric field in which you are an expert, maybe it’s simply idiosyncratic to you (or perhaps, a reference to your own previous writing)—who can know?

  2. If there is indeed a Less Wrong post that explains the term (as you have just shown that there is), then might I suggest that it’s easy, and helpful, simply to hyperlink the term to it in the first place!

Recall, after all, what Eliezer did, when writing the Sequences; he linked the ever-living heck out of things! He linked things so much that people started speaking of “Yudkowskian levels of hyperlinking”; and let me tell you, that practice tremendously increased the readability and usefulness of his posts.

Edit: But I forgot to add: thanks for the links!

I live in Berlin, have never been to CFAR, and don't have access to the CFAR alumni mailing list. It seems to me like a concept that has gotten around a bit.

I will consider hyperlinking more in future posts.

My quick google turns up explanations in the first two hits, and some others on the first page. A primary source is here.

Did you mean “don’t care”?

Yes, I corrected it.