Scott Alexander wrote yet more words defending his decision to write two posts totaling 25,000 words about Ivermectin. Then he wrote a second post trying again.

More centrally, his first post, of which I very much approve, is defending the most important idea of all: Think for yourself, shmuck!

I want to make clear my violent agreement with all of the following.

  1. Think for yourself, shmuck!
  2. When it seems worthwhile, do your own research.
  3. The ones telling you not to ‘do your own research’ are probably the baddies.
  4. Also applies to the ones telling you to ‘trust us and Trust the Science™’ and calling you an idiot or racist or calling for you to be censored if you disagree.
  5. Baddies or not, those people still are more likely to be more right about any given point than those saying they are wrong or lying to you, unless you have seen them lying or being wrong a lot about closely related things previously. And your own research will often not seem worthwhile if you consider opportunity costs.
  6. When people draw wrong conclusions like Ivermectin being effective or that Atlantis used to exist or whatever, telling people that they are idiots or racists for drawing that conclusion is not going to be super effective.
  7. Pointing out ‘the other side are conspiracy theorists’ or ‘the people who believe this also believe these other terrible things’ does not prove the other side is wrong, nor is it going to convince anyone on the other side that they are wrong.
  8. If you instead explain and work through the evidence, there is a chance someone might be convinced. That is God’s work; you are providing a public service.
  9. There are not ‘legitimate’ and ‘illegitimate’ places to Do Science. You can virtuously Do Science to It, for all values of It and of You.
  10. No, we cannot assume that the medical establishment, or any other establishment, will always get such questions right. That is not how any of this works. Even the best possible version of the medical (or other) establishment will sometimes get it wrong, if no one points it out without being dismissed as a conspiracy theorist or racist then the establishment will keep getting it wrong and so will you, and criticism is the only known antidote to error in such situations.

I would also add, from Kavanagh’s response to Scott in a comment, my disagreement with this particular thing, regarding scuba diving to purported Atlantean ruins:

I also don’t think I would have the same intuition you have that personally exploring the ruins would be informative. I think that would actually be likely to skew my perspective as it feels like it would deliver potentially inaccurate intuitions and that it would require already having the expertise to properly assess what you are seeing.

Actually getting the skills, running experiments, seeing the evidence for yourself? That’s all great stuff in my book. It’s not cheap to do, but if you care enough to learn to scuba dive, by all means scuba dive and see the primary evidence with your own eyes. It seems crazy to me to think this would not be a helpful thing to do – to me it is the most virtuous thing to do, if you care a lot.

Alas, Scott then backtracks a bunch in this second post.

He is afraid others will see him saying not to ‘trust the experts,’ so he wants to reiterate that you should trust the experts, that reasoning is hard, and that you probably shouldn’t try doing it yourself. Then he says this:

To a first approximation, trust experts over your own judgment. If people are trying to confuse you about who the experts are, then to a second approximation trust prestigious people and big institutions, including professors at top colleges, journalists at major newspapers, professional groups with names like the American ______ Association, and the government.

If none of this rings true, figure out whether you really need to have an opinion.

To a first approximation, you should never suspend the first approximation.

At its best this behavior is free riding. It will not often be at its best.

That whole speech, to me, is a Lovecraftian horror. If we tell young people to (almost) always trust the American [X] Association on X, and journalists about the news, dismiss anything that the authorities call a conspiracy theory, and never get any practice thinking for themselves on such matters, we deserve what we get.

I love that this is the top comment on the post, note inside the parenthesis:

Another objection I don’t buy is the idea that if you are seen giving too much credibility to conspiracy theories, you risk making people generally more vulnerable to conspiracy theories, by increasing their priors on conspiracy theories.

I have several objections to this objection.

  1. You’re saying we should engage in a conspiracy to discredit conspiracy theories?
  2. It is a very bad principle to avoid providing Bayesian evidence if you think this would move someone’s posterior in the wrong direction due to other mistakes.
  3. This is a lot like (at least self-) censoring information that goes against things you believe to be true, on the theory that it is true and therefore it would be bad if people got evidence that made them believe it less.
  4. What do you think about a field if every time they find evidence of X they publish it and shout from the rooftops, and every time they find evidence against X they put it in a drawer? What do you believe about X? Does X being true make this better? (A minimal simulation of this dynamic appears after this list.)
  5. I am not convinced that ACX readers tend to give too much credence to conspiracy theories, or put too little trust in the establishment’s claims.
  6. I am not convinced that considering and then robustly dismissing well-known conspiracy theories will give more credence to such theories.
  7. Scott’s deep dive lowered my credence in such theories, since I now have a clear data point on one of them. I expect many people, especially those who had previously had doubts about the particular theory in question, would react the same way.
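
To make the file-drawer dynamic in item 4 concrete, here is a minimal sketch in Python. The per-study likelihoods, the number of studies, and the stipulation that X is false are all made-up assumptions for illustration, not estimates about any real literature:

```python
import random

random.seed(0)

# Illustrative assumptions (not real data): X is actually false, and each
# study favors X 30% of the time just by noise, and goes against X 70% of
# the time. If X were true, the rates would be reversed.
P_PRO_GIVEN_TRUE = 0.7    # chance a study favors X if X were true
P_PRO_GIVEN_FALSE = 0.3   # chance a study favors X even though X is false

def posterior(prior, results):
    """Naive Bayesian update on a list of study outcomes ('pro' or 'con')."""
    odds = prior / (1 - prior)
    for r in results:
        if r == "pro":
            odds *= P_PRO_GIVEN_TRUE / P_PRO_GIVEN_FALSE
        else:
            odds *= (1 - P_PRO_GIVEN_TRUE) / (1 - P_PRO_GIVEN_FALSE)
    return odds / (1 + odds)

# Run 40 studies in a world where X is false.
studies = ["pro" if random.random() < P_PRO_GIVEN_FALSE else "con" for _ in range(40)]

published_all = studies                                  # the honest field
published_filtered = [s for s in studies if s == "pro"]  # the file-drawer field

print("Posterior if every study is published:  %.3f" % posterior(0.5, published_all))
print("Posterior if only pro-X studies appear: %.3f" % posterior(0.5, published_filtered))
```

The observer who only sees the filtered literature ends up nearly certain of X, even though X is false and no individual published study is fabricated; the distortion comes entirely from what stays in the drawer.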

Scott’s discussion of the three ways to view conspiracy theories – Idiocy (dumb things people fall for), Intellect (same as other theories, only worse, which is mostly the way I see them), and Infohazard (deadly traps that lie in wait) – has this same assumption that the goal is to have fewer people believe things in the category ‘conspiracy theory.’ Which is why them being low status would seem good. That… doesn’t seem like the right goal, unless you think all conspiracy-theory-shaped theories are always false?

The objection of Kavanagh that I do buy, that I think is important, is that you need to read the contextual clues in a situation to know whether sources are worth treating as credible and worthy of your time. Otherwise, you’re going to waste a lot of time on questions where you already know the answer.

Was the full analysis of Ivermectin a good use of Scott’s readers’ time? If everyone who reads some of Scott’s writing read the whole thing, then no. If everyone made a reasonable personal decision on whether they found such an analysis of value before reading, then yes. The output was quite valuable to the right people, especially those who could be convinced. I also found it of value.

Was it a good use of Scott’s time? Less clear to me. My guess is that the first analysis plausibly was, and the second one probably wasn’t.

I was given the same test, here. In addition to Scott Alexander, Alexandros talked extensively to me. His initial claims were thoughtful and interesting, and I engaged. It quickly became clear that he was applying asymmetric standards to evidence. A bunch of his claims weren’t checking out when I looked further. It then became clear he was also bringing in other standard Covid-authority-skeptical takes, in particular questioning Pfizer and the vaccine, in ways I had looked into and knew to be mistaken.

I was confident he would have continued to talk to me and raise additional questions on this, and on vaccinations, and on other things, for as long as I was willing to listen. And I was convinced that he was not about to quit, or be convinced, no matter the response.

So after spending several hours on this, I concluded continued engagement was not a good use of my time, and I stopped doing it. I think that was a good decision process.


A key point underpinning my thoughts, which I don't think this really responds to, is that scientific consensus actually is really good, so good I have trouble finding anecdotes of things in the reference class of ivermectin turning out to be true (reference class: things that almost all the relevant experts think are false and denounce full-throatedly as a conspiracy theory after spending a lot of time looking at the evidence).

There are some, maybe many, examples of weaker problems. For example, there are frequent examples of things that journalists/the government/professional associations want to *pretend* is scientific consensus, getting proven wrong - I claim if you really look carefully, the scientists weren't really saying those things, at least not as intensely as they were saying ivermectin didn't work. There are frequent examples of scientists being sloppy and firing off an opinion on something they weren't really thinking hard about and being wrong. There are frequent examples of scientists having dumb political opinions and trying to dress them up as science. I can't give a perfect necessary-and-sufficient definition of the relevant reference class. But I think it's there and recognizable.

I stick to my advice that people who know they're not sophisticated should avoid trying to second-guess the mainstream, and people who think they might be sophisticated should sometimes second-guess the mainstream when there isn't the exact type of scientific consensus which has a really good track record (and hopefully they're sophisticated enough to know when that is).

I'm not sure how you're using "free riding" here. I agree that someone needs to do the work of forming/testing/challenging opinions, but I think if there's basically no chance you're right (eg you're a 15 year old with no scientific background who thinks they've discovered a flaw in E=mc^2), that person is not you, and your input is not necessary to move science forward. I agree that person shouldn't cravenly quash their own doubt and pretend to believe, they should continue believing whatever rationality compels them to believe, which should probably be something like "This thing about relativity doesn't seem quite right, but given that I'm 15 and know nothing, on the Outside View I'm probably wrong." Then they can either try to learn more (including asking people what they think of their objection) and eventually reach a point where maybe they do think they're right, or they can ignore it and go on with their lives.

reference class: things that almost all the relevant experts think are false and denounce full-throatedly as a conspiracy theory after spending a lot of time looking at the evidence

This seems to be the wrong reference class for Ivermectin.

There are some, maybe many, examples of weaker problems. For example, there are frequent examples of things that journalists/the government/professional associations want to *pretend* is scientific consensus, getting proven wrong - I claim if you really look carefully, the scientists weren't really saying those things, at least not as intensely as they were saying ivermectin didn't work.

In the beginning, Ivermectin seemed to be exactly such a case: the journalists/the government/professional associations wanted to pretend there was a consensus against it, while "the scientists" published Hariyanto et al (and others at the time) in favor of Ivermectin. The LessWrong consensus from looking at the meta-analyses at the time was that the pro-Ivermectin meta-analysis seemed to be the higher-quality one.

It might be that the scientists who published the pro-Ivermectin meta-analysis changed their minds later as more evidence came to light, but it's hard to know from the outside whether or not that's the case. Given the political charge that the topic had, it's hard to know whether the scientists who later spoke on the topic actually spent a lot of time looking at the evidence. 

It's worth noting that the logical conclusion from your post on Ivermectin is that in areas with high worm prevalence, it's valuable to give COVID-19 patients Ivermectin, which conflicts with the WHO position. 

General relativity is in a very different reference class where you actually have a lot of experts in every physics department in the world who looked at the evidence. 

qjh:

I think the Ivermectin debacle actually is a good demonstration for why people should just trust the 'experts' more often than not. Disclaimer of sorts: I am part of what people would call the scientific establishment too, though junior to Scott I think (hard to evaluate different fields and structures). However, I tend to apply this rule to myself as well. I do not think I have particular expertise outside of my fields, and I tend to trust scientific consensus as much as I can if it is not a matter I can have a professional opinion on.

As far as I can tell, the ivermectin debate is largely settled by this study: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2790173

It seems quite reasonable that ivermectin doesn't do much for covid-19, but kills the intestinal worms (strongyloidiasis) it is typically prescribed for. Oh, and the prevalence in some countries exceeds 15%, and those worms can cause deadly hyperinfection when patients are given the corticosteroids that are also used to manage covid-19. This would explain why in developed countries ivermectin trials appear to do essentially nothing, and why it shouldn't be used as a first-line covid-19 treatment, but instead can be used as preventative medication against worms before use of immunosuppressive medication if there is a significant risk of strongyloidiasis.

There was also a large trial in the US, with about as many patients as the original meta-study had, but in a single double-blinded randomised placebo-controlled trial. It didn't find a significant effect: https://jamanetwork.com/journals/jama/fullarticle/2797483

The key here is this: there is a resolution to this debacle, but it ultimately still came from experts in the field! I think everyone has significant cognitive biases, me included. One typical bias I've seen on LW and many other haunts of rationality and EA-types is the belief that humans are lean, mean bayesian machines. I know I am not, and I'm a scientist, so I'd like to think I put a lot of effort into integrating evidence objectively. Even then, I find it extremely difficult to do comprehensive reviews of all available information outside of my field of study. Ultimately, there is just so much information out there, and the tools to evaluate which papers are good and which are bad are very domain-specific. I have a pretty decent stats background, I think; I actually do statistics in my field day to day. Yet I just don't know how to evaluate medical papers at all beyond the basics of sample size, because of all the field-specific jargon especially surrounding metastudies. Even for the large trial I linked, I figure it is good because experts in the field said so.

In short, we would all like to believe the simple picture of our brains taking evidence in and doing bayesian inference. However, being exposed to all the information you need in order to get a good picture of a field requires understanding all the jargon, being up to date on past studies and trends, understanding differences in statistical procedure between fields, understanding the major issues that could plague studies typical to the field, etc., in addition to just the evidence. This is because both synthesising "evidence" into actual bayesian evidence and being so steeped in the literature that one can have a bird's-eye view essentially require one to be an expert in the field.

Yet I just don't know how to evaluate medical papers at all beyond the basics of sample size, because of all the field-specific jargon especially surrounding metastudies. Even for the large trial I linked, I figure it is good because experts in the field said so.

So it basically boils down to "there's a resolution to this debacle because experts said so". 

I haven't looked into the Ivermectin evidence recently and, like Zvi, thought that engaging more with Alexandros isn't worth my time. 

One problem with trusting the experts is that there doesn't seem to really be experts at the question of how the knowledge gained in clinical trials translates into predicting treatment outcomes for patients. 

If you ask the kind of people who do those trials simple questions like "Does blinding as it's commonly done mean that the patients don't know whether they are taking the placebo or not?", you will likely get a lot of them falsely answering that it does, because they are ignorant of the literature finding that if you ask patients, they frequently have some knowledge of which arm they are in. 

There are billion-dollar Big Pharma companies that regularly pay billions of dollars in fines for all sorts of ways they manipulate the epistemic environment to their ends. Those companies profit from medical studies being read a certain way. The fact that they historically engaged in conspiracies where they bribed doctors to recommend their treatments is on the open record. 

You could imagine a field where doctors make predictions of treatment outcomes and study how knowledge gained from academic papers about the treatments helps make better predictions but we don't have that field. That field doesn't exist. 

It might very well be that the experts actually know what they are doing but it's hard to tell from the outside. 

This is an important point that is often ignored.

"Does blinding as it's commonly done mean that the patients don't know whether they are taking the placebo or not?", you will likely get a lot of them falsely answering that it does, because they are ignorant of the literature finding that if you ask patients, they frequently have some knowledge of which arm they are in.

Accurate - and obvious on reflection, particularly with the COVID vaccines. I knew multiple people in the COVID vaccine trials. Just over half confidently said they got the real vaccine, and they knew it because of side effects. The rest were in most cases less certain but suspected they got the placebo, because so many participants had the side effects and they didn't. 

Example: Moderna And Pfizer Vaccine Studies Hampered As Placebo Recipients Get Real Shot : Shots - Health News : NPR "Mott, who lives in the Overland Park, Kan., got a strong reaction to the second shot, so she correctly surmised she had received the Moderna vaccine, not the placebo."

Our blind and double-blind methodology is nowhere near the perfect black box we pretend.
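
As a rough illustration of how much information side effects leak, here is a back-of-the-envelope Bayes calculation. The side-effect rates below are hypothetical round numbers chosen for illustration, not figures from any actual trial:

```python
# Hypothetical illustrative rates, not data from any actual trial:
p_side_effects_given_vaccine = 0.70   # assumed rate of noticeable reactions on verum
p_side_effects_given_placebo = 0.25   # assumed rate of noticeable reactions on placebo
p_vaccine = 0.5                       # 1:1 randomization

# P(got vaccine | side effects) via Bayes' rule
p_side_effects = (p_side_effects_given_vaccine * p_vaccine
                  + p_side_effects_given_placebo * (1 - p_vaccine))
p_vaccine_given_side_effects = (p_side_effects_given_vaccine * p_vaccine) / p_side_effects

# P(got vaccine | no side effects)
p_none = 1 - p_side_effects
p_vaccine_given_none = ((1 - p_side_effects_given_vaccine) * p_vaccine) / p_none

print(f"P(got vaccine | side effects):    {p_vaccine_given_side_effects:.2f}")  # ~0.74
print(f"P(got vaccine | no side effects): {p_vaccine_given_none:.2f}")          # ~0.29
```

Under those assumptions, a participant who notices a strong reaction can be roughly three-to-one confident they received the real shot, a long way from the 50/50 ignorance that "blinded" is often taken to imply.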

Yes, it's worth noting here that if researchers cared about whether patients know whether they are getting verum or the placebo, they could easily add another question to the forms that the patients fill out and report the answers in their papers. 

The status quo of how trials are run is that researchers are willfully ignorant about the extent to which patients know whether they are taking verum or placebo. 

qjh:

One problem with trusting the experts is that there doesn't seem to really be experts at the question of how the knowledge gained in clinical trials translates into predicting treatment outcomes for patients. 

I mean, kinda? But at the same time, translation of clinical trials into patient outcomes is something that the medical community actively studies and thinks about from time to time, so it's really not like people are standing still on this. (Examples: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6704144/ and https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-017-1870-2)

If you ask the kind of people who do those trials simple questions like "Does blinding as it's commonly done mean that the patients don't know whether they are taking the placebo or not?", you will likely get a lot of them falsely answering that it does, because they are ignorant of the literature finding that if you ask patients, they frequently have some knowledge of which arm they are in. 

You could probably find some specific examples of common misconceptions in most fields. However, I'd really wager that most people outside the fields have more. That is in addition to the breadth of exposure to studies, which is a point you completely ignored. Ultimately, even if you can integrate evidence perfectly, bayesian thinking relies on 2 things: interpreting evidence correctly, and having a lot of evidence. We disagree regarding interpreting evidence correctly. Sure, maybe you think you can do as well as the average medical researcher; I still think that is pretty bad advice in general. I don't think most people will. In addition, I doubt people outside of a field would consider the same breadth of evidence as someone in the field, simply because in my experience just reading up enough to keep up to date, attending talks, conferences, etc. ends up being half of a full-time job.

This is not an argument against bayesian thinking, obviously. This is me saying if someone has integrated evidence for you, you should evaluate that evidence with a reasonable prior regarding how trustworthy that person is, and one should also try to be objective regarding their own abilities outside of their field. A reasonable prior is to not think of oneself as exceptional, and to look at how often people who venture outside of their fields of expertise fall flat on their faces.

Someone inside a field is generally able to see a larger breadth of evidence, but at the same time, they have incentives not to bite the hand that feeds them. Big Pharma spends a lot of money to shape the system in a way where they can earn a lot of money by having patent-protected drugs that went through the expensive clinical trials be seen as the gold standard.

Misconceptions about the nature of blinding don't exist because the involved researchers are stupid. Researchers that do well in STEM academia tend to be very intelligent. They exist because there are incentive pressures to keep those misconceptions alive. 

When thinking about whether to look more at Ivermectin, the question isn't "Do I as an outsider know more?" but "Are the billions that big pharma spends to bias research toward finding that patent-pending drugs are better than generics strong enough to affect the expert judgments on Ivermectin?"

qjh:

Your model of medical research could be true, if only countries with extensive investments in pharmaceuticals do clinical trials, all funding is controlled by "Big Pharma", and scientists are highly corruptible. Even then, it only takes one maverick research hospital to show the existence of a strong effect, if there is one. Thus, at best, you can argue that there's a weak effect which may or may not be beneficial compared to side-effects.

I don't think your view seems correct anyway. Many clinical trials, including those that found no significant effect, came from countries that seem relatively unlikely to be controlled by American and European pharmaceutical companies. A lot of them are also funded by government sources, etc. Perhaps your view is correct in the US; however, much of the rest of the world doesn't suffer from extreme medication prices. If "Big Pharma" can't even effect that change, which they are much more likely to be able to do given their market power and which affects their bottom line more directly, why would we still think there are tentacles throughout research all over the world? 

At the end of the day, your worldview involves so much worldbuilding and imagination that it's likely highly penalised by Occam's razor considerations (or whatever quantitative version of your choice) that you'd need a lot of evidence to justify it. Just "saying things" isn't evidence of a worldwide conspiracy. And if there's no worldwide conspiracy, there's really nothing to your argument, since there are clinical trials from all over the world saying similar things.

Even then, it only takes one maverick research hospital to show the existence of a strong effect, if there is one. 

In the Ivermectin case, you have a bunch of hospitals doing that. Enough that you could publish scientific meta-reviews in respected journals that came out in favor of Ivermectin. At the same time, you had the establishment organizations talk about how Ivermectin certainly can't work. That was the reason to look closer.

At the time I thought: well, here are those published meta-reviews, and then there are the institutions that say that Ivermectin doesn't work; that's odd. So I started a LessWrong thread to look together at the different papers.

much of the rest of the world doesn't suffer from extreme medication prices. If "Big Pharma" can't even effect that change, which they are much more likely to be able to do given their market power and which affects their bottom line more directly, why would we still think there are tentacles throughout research all over the world? 

Prices are relatively legible to political stakeholders. It's easy for politicians to push in the direction of lower drug prices because they want to balance healthcare budgets.

Poorer countries can say something along the lines of "Either you give us the drugs at a price that our citizens can afford, or we allow our own companies to produce the drugs in violation of your patent". Big Pharma companies are okay with making their profits mostly in rich countries and selling drugs at lower prices to poorer countries in exchange for their patents not being violated. 

Directly after COVID-19 emerged, it would have been possible for governments to fund studies to investigate all the generics that were candidates for possible treatments. To me it was surprising that this didn't happen. While there are multiple different things that went wrong in the COVID response, it caused me to update toward the existing institutions being more flawed than I previously assumed. 

Just "saying things" isn't evidence of a worldwide conspiracy.

There's tons of evidence. Take https://www.theguardian.com/business/2012/jul/03/glaxosmithkline-fined-bribing-doctors-pharmaceuticals, which is about a Big Pharma company having to pay $3bn in fines because they conspired with doctors by bribing them to encourage the prescription of unsuitable antidepressants to children.

I think you need plenty of imagination to have a world where companies regularly have to pay billions in fines because they illegally bribed people and those bribes also had no effect.

Plagiarism is generally considered an academic crime and yet plenty of researchers are willing to put their names on papers written by Big Pharma that they themselves did not write. Researchers regularly conspire in that process with Big Pharma to mislead the public about who wrote papers. 

qjh:

It's interesting that you assume I'm talking about poorer countries. What about developed Asia? They have a strong medical research corps, and yet they are not home to companies that made covid-19 medication. Even in Europe, many countries are not host to relevant companies. You do realise that drug prices are much lower in the rest of the developed world compared to the US, right? I am not talking about 'poorer countries', I am talking about most of the developed world outside of the US, where there are more tightly regulated healthcare sectors, and where the government isn't merely the tail trying to wag the proverbial dog.

Also, your evidence is not evidence of a worldwide conspiracy. Everything you've mentioned is essentially America-centric.

Plagiarism is generally considered an academic crime and yet plenty of researchers are willing to put their names on papers written by Big Pharma that they themselves did not write.

I'm starting to feel like you really don't know much about how these things work. People put their names on papers they don't write all the time. Authorship of a paper is an attribution of scientific work, but not necessarily words on paper. In many cases, even minor scientific contribution can mean authorship. This is why in physics author lists can stretch to the thousands. At any rate, improper authorship isn't the same 'academic crime' as plagiarism at all. The problem isn't plagiarism, it is non-disclosure (of industry links) and improper authorship. It's also important to note that this is not some kind of unique insight. Even a quick Google search brings up myriad studies on the topic in medical journals, such as: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2800869

Studies on ghostwriters in 'establishment' journals, by 'establishment' authors, recommending more scrutiny!

What about developed Asia? 

India and China can actually make credible threats that they will just let their own companies break patents if Big Pharma doesn't sell them drugs at prices they consider reasonable. 

When it comes to reducing prices paid, if you look at the UK for example, they have politicians who care about the NHS budget being manageable. If drugs don't provide enough benefits for the price they cost, they don't approve them, so there's pressure to name reasonable prices.

It's also important to note that this is not some kind of unique insight. 

Yes, the fact that there's a lot of conspiracy going on in Big Pharma is not a unique insight. That's just business as usual for Big Pharma in a way that should be obvious to any observer who pays attention.

Ghost authorship isn't just about putting your name on a paper to which you contributed little; it's also about the real authors not appearing on the paper. Ghostwriters are people who wrote something and don't appear on the author list. 

If a student goes to Upwork, has someone write him an essay, makes a few minor changes, and then turns it in under his own name while leaving out the real author of the paper, that's seen as plagiarism by every academic department out there. 

qjh:

India and China can actually make credible threats that they will just let their own companies break patents if Big Pharma doesn't sell them drugs at prices they consider reasonable. 

When it comes to reducing prices paid, if you look at the UK for example, they have politicians who care about the NHS budget being manageable. If drugs don't provide enough benefits for the price they cost, they don't approve them, so there's pressure to name reasonable prices.

Sure, but that doesn't address why you think researchers in these countries would be so affected by American pharma that there aren't enough people to do convincing studies that would affect American bottom lines. In other words, still the same thing: why you think there is evidence of a worldwide conspiracy.

Ghost authorship isn't just about putting your name on a paper to which you contributed little; it's also about the real authors not appearing on the paper. Ghostwriters are people who wrote something and don't appear on the author list. 

If a student goes to Upwork, has someone write him an essay, makes a few minor changes, and then turns it in under his own name while leaving out the real author of the paper, that's seen as plagiarism by every academic department out there. 

I don't think that's right. I think it would be considered academic dishonesty but not plagiarism per se, because for students the expectation for graded work is that they are submitting their own work (or work with fellow students in the same class, for some classes and types of work). However, for papers, works are supposed to be collaborative, so just having additional contributors isn't itself a problem. The requirements instead are that all contributors are listed and that all listed authors contributed. In terms of industry research, disclosure of industry links is another problem.

I looked up a few articles on the subject, and it really doesn't seem like ghostwriting is plagiarism (though it depends on the definition and who you ask!), but it certainly can violate ethical codes or journal guidelines:

https://www.insidehighered.com/blogs/sounding-board/ethics-authorship-ghostwriting-plagiarism

https://www.turnitin.com/blog/ghostwriting-in-academic-journals-how-can-we-mitigate-its-impact-on-research-integrity

https://www.plagiarismtoday.com/2015/03/02/why-is-ghostwriting-not-always-considered-plagiarism/

 

I think this is my last post on this thread. I've made several arguments that were ignored, because you seem to be in favour of raising new points as opposed to addressing arguments. I don't think it's quite a Gish Gallop, but unfortunately I also don't have unlimited time, and I think I've already made a strong case here. Readers can make their own decisions on whether to update their beliefs, and feel free to get a last word in.

I looked up a few articles on the subject, and it really doesn't seem like ghostwriting is plagiarism (though it depends on the definition and who you ask!), but it certainly can violate ethical codes or journal guidelines:

Whether or not you use the word plagiarism, it's an ethical violation where people are paid money to do something in secret to further the interest of pharma companies. 

That's what conspiring in private to mislead the public is about. The ghostwriting case is one that's well-documented. It's evidence that a lot of conspiracy exists in the field. 

I've made several arguments that were ignored, because you seem to be in favour of raising new points as opposed to addressing arguments. 

Your argument is basically "if they have power to do X, why don't they also have power to do Y". The only way to address that is to get into the details of how the power works. That means making new points. 

  1. The ones telling you not to ‘do your own research’ are probably the baddies.
  2. Also applies to the ones telling you to ‘trust us and Trust the Science™’ and calling you an idiot or racist or calling for you to be censored if you disagree.

It's bad enough to use the same epistemology habits for in-person communication and social media (lesswrong is a third category, not social media). But implying that people should run the exact same bayesian algorithms on social media and in-person communication, without distinguishing between the two at all, without implying there is a distinction, is really a serious issue.

Like, there are no baddies in person. Given that you're in person, you're probably talking with them, and given that you're talking with them, you're probably at least a little invested in that particular conversation being not awful. On social media, basically nothing is worth doing, and everyone is only there because they've vastly overestimated the value that social media gives them.

The impact of social media discussion depends a lot on who you are talking to. 

One of the most impactful things I did in the last few years was how I reacted to a social media conversation in which someone from RaDVaC participated: I asked them whether they had room for funding. Having established that this was the case, I complained a few times on LessWrong that they didn't have funding, which got someone to look into providing them funding, and now they are decently funded. 

I definitely agree that there are exceptional cases where it turns out that you had a lot more control over your environment than you thought. But it seems like the vast majority of the influence is in the opposite direction, where the system goodharts them into vastly overestimating how much control they have.

lc:

On social media, basically nothing is worth doing, and everyone is only there because they've vastly overestimated the value that social media gives them.

Mostly correct and strong upvoted, although one aspect of it is that it's also a pyramid scheme, like every other social phenomenon nowadays. The way it works is, social media companies 

  1. Hand out a relatively small amount of influence and prestige to an absurdly small fraction of users, the "influencers"
  2. These "influencers" encourage their followers to see Twitter as a path to status, by dropping very occasional stories about how, as a top 0.01% Twitter user, they got introduced to ${high_status_user} once, or maybe a story about how they landed a job
  3. Their unfunny readers think to themselves not "wow, I care about these hot takes so much", but "wow, everyone else seems to care so much about this Twitter thing", and also "man, I bet I could be as famous as ${high_profile_twitter_user}, his posts don't look so hard to replicate"
  4. Convinced that the shortest path to success is crafting viral content for FaceGoog (other paths, like earning a Nobel Prize or becoming President, being legibly difficult instead of illegibly difficult), the "grunt users" inadvertently contribute to the meme that everybody in their social circle cares about Twitter, and on it goes

In this sense you could compare it to, say, joining the mafia during the 70s and 80s. The mafia may in fact have provided an avenue to comfortable wealth and microfame for like, a dozen or so people. It was still one of the worst life decisions you could ever make.

This is actually not what I was referring to, but it's very close and it's also very helpful.

Very much not asking that anyone write a new post on Climate Change, since I assume a good discussion on that topic exists, but ... does anyone have a recommended link to a critical analysis of those questions comparable to Scott Alexander's discussion of Ivermectin, one that neither assumes a priori that the environmental science community will always get such questions right nor that those who question the approved science are idiots?

And, yes, I have read Climate Change 2022: Mitigation of Climate Change (ipcc.ch), and several previous editions, but that is not at all what I am looking for. 

Note: My intended audience is the occasional college student who asks in good faith, generally almost in a whisper because of the (inappropriate) stigma of even asking. 

^^^ ditto on this; such a resource would be very valuable.

qjh:

You might want to look into Berkeley Earth and Richard Muller (the founder). They have a sceptics' guide to climate change: https://berkeleyearth.org/wp-content/uploads/2022/12/skeptics-guide-to-climate-change.pdf

For context, Richard is a physicist who wasn't convinced by the climate change narrative, but actually put his money where his mouth was and decided to take on the work needed to prove his suspicions right. However, his work ended up convincing him instead, as his worries about the statistical procedures and data selection turned out to have little effect on the measured trend. He says (and I quote):

When we began our study, we felt that skeptics had raised legitimate issues, and we didn't know what we'd find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections.

The linked PDF was not terribly detailed, but it more-or-less confirmed what I've long thought about climate change. Specifically: the mechanism by which atmospheric CO2 raises temperatures is well-understood and not really up for debate, as is the fact that human activity has contributed an enormous amount to atmospheric CO2. But the detailed climate models are all basically garbage and don't add any good information beyond the naive model described above.

ETA: actually, I found that this is exactly what the Berkeley Earth study found:

The fifth concern related to the over-reliance on large and complex global climate models by the Intergovernmental Panel on Climate Change (IPCC) in the attribution of the recent temperature increase to anthropogenic forcings.

We obtained a long and accurate record, spanning 250 years, demonstrating that it could be well-fit with a simple model that included a volcanic term and, as an anthropogenic proxy, CO2 concentration. Through our rigorous analysis, we were able to conclude that the record could be reproduced by just these two contributions, and that inclusion of direct variations in solar intensity did not contribute to the fit.

I feel doubly vindicated, both in my belief that complex climate models don't do much, but also that you don't need them to accurately describe the data from the recent past and to make broad predictions.
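
To make the shape of that "simple model" concrete, here is a minimal sketch of fitting temperature as a constant plus a CO2 term plus a volcanic term. The data are synthetic stand-ins generated inside the script, not the Berkeley Earth record, and using the logarithm of CO2 for the anthropogenic proxy is my choice here; only the structure of the model is meant to match the quote, not any of the numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (NOT the Berkeley Earth data): 250 "years" of CO2
# rising from 280 to 410 ppm, plus a few random volcanic forcing spikes.
years = np.arange(250)
co2 = 280 + 130 * (years / 249) ** 2
volcanic = np.zeros(250)
volcanic[rng.choice(250, size=8, replace=False)] = rng.uniform(0.5, 1.5, size=8)

# "True" temperature anomaly used to generate the fake record.
temp = 0.1 + 3.0 * np.log2(co2 / 280) - 0.3 * volcanic + rng.normal(0, 0.1, 250)

# The simple two-term model: T = a + b * log2(CO2/280) + c * volcanic
X = np.column_stack([np.ones(250), np.log2(co2 / 280), volcanic])
coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
a, b, c = coeffs
print(f"fit: a={a:.2f}, warming per CO2 doubling={b:.2f} C, volcanic coefficient={c:.2f}")
```

Recovering the coefficients here only shows the regression machinery works on data built to fit it; the substantive claim in the quote is that the real 250-year record is reproduced about as well by the same two terms.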

I agree on not terribly detailed. It's more of an "I checked, and Climate Change is correct" than a critical analysis. [I'll reread it more carefully in a few weeks, but that was my impression on a first reading, admittedly while drugged up after surgery.]

Perhaps I'm looking for the impossible, but I'm not comfortable with the idea that climate is so esoteric that no one outside the field can understand anything between "CO2 traps infrared" at one extreme ... and, at the other extreme, the entire model with its conclusion that the planet will warm by x degrees this century unless we eliminate fossil fuels. That alone has not satisfied many who ask - and it shouldn't. I have more respect for my students (math-based but a different field) who search for more detail than for those who accept doctrine.

I can explain fusion on many levels: from hydrogen-becomes-helium to deuterium-and-tritium become helium to this is the reaction cross section for D-D or D-T or D-He3 and ____ MeV are released in the form of ____ .... Similarly for the spectrum from lift/drag to the Navier–Stokes equations ..., and similarly for dynamic stability of structures. I am disappointed that climate scientists cannot communicate their conclusions at any intermediate level. Where is their Richard Feynman or (preferably) Carl Sagan?

qjh:

But the detailed climate models are all basically garbage and don't add any good information beyond the naive model described above.

That's a strange conclusion to draw. The simple climate models basically have a "radiative forcing term"; that was well estimated even in the first IPCC reports in the late 80s. The problem is that "well-estimated" means to ~50%, if I remember correctly. More complex models are primarily concerned with the problem of figuring out the second decimal place of the radiative forcing and whether it has any temperature dependence or tipping points. These are important questions! In simple terms, the question is just whether the simple model shown breaks down at some point.
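
For anyone who wants that simple model one level more concrete: the standard simplified expression for CO2 radiative forcing is roughly dF = 5.35 ln(C/C0) W/m2, and multiplying by a climate sensitivity parameter gives an equilibrium temperature response. The sensitivity value below is a commonly quoted ballpark, not a precise figure, and the plus-or-minus 50% band just applies the uncertainty mentioned above naively:

```python
import math

# Simplified CO2 forcing formula (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

# Ballpark climate sensitivity parameter, K per (W/m^2); its sizable
# uncertainty is exactly what the more complex models try to pin down.
LAMBDA = 0.8

doubling_forcing = co2_forcing(560.0)   # forcing for doubled CO2, ~3.7 W/m^2
print(f"forcing for doubled CO2: {doubling_forcing:.1f} W/m^2")
print(f"implied equilibrium warming: {LAMBDA * doubling_forcing:.1f} C")
print(f"with +/-50% on the forcing: {0.5*LAMBDA*doubling_forcing:.1f} to {1.5*LAMBDA*doubling_forcing:.1f} C")
```

Even with the forcing formula pinned down, a fifty-percent uncertainty spans roughly 1.5 to 4.5 C per doubling, which is why narrowing it (and checking for temperature dependence and tipping points) is what the more complex models are for.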

I don't think actually reading the literature should convince anyone otherwise, the worst charge you could levy is one regarding science communication. I mean, I don't think anyone from the climate community would dispute the fact that the early IPCC reports, which were made before we had access to fancy computers, did actually predict the climate of the 21st century so far remarkably well: https://www.science.org/cms/asset/a4c343d8-e46a-4699-9afc-983fea62c745/pap.pdf

The other aspect is that the ~50% (ballpark) uncertainty in the forcing, back then, allows for good near-term projections but the projections diverge after more than a couple decades, and we really want to have a better handle on things with a longer time horizon.

Finally, you can see that sea-level projections weren't quite as good. Detailed modelling is a bit more important there.

When I was about ten years old, the experts told me that animal fat was poisonous to a top predator. I did try eating margarine, but it seemed kind of nasty.

Ever since, I have had a certain amount of trouble trusting the experts. I wonder if the methods the experts are using are really the sorts of methods that reliably find the truth.

But as a very wise man once said:

Three can keep a secret, if two be dead. And if I thought my hat knew my counsel I would cast it into the fire.

If a theory relies on there being a conspiracy, then that is a priori a very high burden on it. Conspiracies are hard and unstable.

Once there's an actual conspiracy theory about your conspiracy, the cat is pretty much out of the bag, and it becomes even harder and less stable.

I do think that it can be a good heuristic to pay attention to the thoughts that are unwise to express in polite company. But it's usually not so much a conspiracy as the usual human tendency to try to silence political enemies, which needs little co-ordination.

Three can keep a secret, if two be dead. And if I thought my hat knew my counsel I would cast it into the fire.

If a theory relies on there being a conspiracy, then that is a priori a very high burden on it. Conspiracies are hard and unstable.

Some social structures can evolve that allow secrets to be kept with larger numbers of people. For example, intelligence agencies are not only compartmentalized, but the employees making them up all assume that if someone approaches them offering to buy secrets, it's probably one of the routine counterintelligence operations within the agency that draw out and prosecute untrustworthy employees. As a result, the employees basically two-box their agency and never accept bribes from foreign agents, no matter how large the payout. And any that fall through the cracks are hard to disentangle from disinformation by double/triple agents posing as easily-bribed people.

It's much more complex than that, but that's just one example of a secret-keeping system evolving inside institutions, effective enough not just to keep secrets, but to thwart or misinform outside agents intelligently trying to rupture secret-keeping networks (emerging almost a hundred years ago or earlier).

the goal is to have fewer people believe things in the category ‘conspiracy theory.’ 

Depends how we define the term — a "conspiracy theory" is more than just a hypothesis that a conspiracy took place. Conspiracy theories tend to come with a bundle of suspicious behaviors.

Consider: as soon as three of four Nord Stream pipelines ruptured, I figured that Putin ordered it. This is an even more "conspiratorial" thought than I usually have, mainly because, before it happened, I thought Putin was bluffing by shutting down Nord Stream 1 and that he would (1) restore the gas within a month or two and (2) finally back down from the whole "Special Military Operation" thing. So I thought Putin would do X one week and decided that he had done opposite-of-X the next week, and that's suspicious—just how a conspiracy theorist might respond to undeniable facts! Was I doing something epistemically wrong? I think it helped that I had contemplated whether Putin would double down and do a "partial mobilization" literally a few minutes before I heard the news that he had done exactly that. I had given a 40% chance to that event, so when it happened, I felt like my understanding wasn't too far off base. And, once Putin had made another belligerent, foolish and rash decision in 2022, it made sense that he might do a third thing that was belligerent, foolish and rash; blowing up pipelines certainly fits the bill. Plus, I was only like 90% sure Putin did it (the most well-known proponents of conspiracy theories usually seem even more certain).

When I finally posted my "conspiracy theory" on Slashdot, it was well-received, even though I was mistaken in my mind about the Freeport explosion (it only reduced U.S. export capacity by 16%; I expected more). I then honed the argument a bit for the ACX version. I think most people who read it didn't pick up on it being a "conspiracy theory". So... what's different about what I posted versus what people recognize as a "conspiracy theory"?

  • I didn't express certainty
  • I just admitted a mistake about Freeport. Conspiracy theorists rarely weaken their theory based on new evidence. Also note that I found the 16% figure by actively seeking it out, and I updated my thinking based on it, though it didn't shift the probability by a lot. (I would've edited my post, but Slashdot doesn't support editing.)
  • I didn't "sound nuts" (conspiracy theorists often lack self-awareness about how they sound)
  • It didn't appear to be in the "conspiracy theory cluster". Conspiracy theorists usually believe lots of odd things. Their warped world model usually bleeds into the conspiracy theory somehow, making it "look like" a conspiracy theory.

My comment appears in response to award-winning[1] journalist Seymour Hersh's piece. Hersh has a single anonymous source saying that Joe Biden blew up Nord Stream, even though this would harm the economic interests of U.S. allies. He shows no signs of having vetted his information, but he solicits official opinions and is told "this is false and complete fiction". After that, he treats his source's claims as undisputed facts — so undisputed that claims from the anonymous source are simply stated as raw statements of truth, e.g. he says "The plan to blow up Nord Stream 1 and 2 was suddenly downgraded" rather than "The source went on to say that the plan to blow up Nord Stream 1 and 2 was suddenly downgraded". Later, OSINT investigator Oliver Alexander pokes holes in the story, and then finds evidence that NS2 ruptured accidentally (which explains why only one of the two NS2 lines was affected) while NS1 was blown up with help from the Minerva Julie, owned by a Russia-linked company. He also notes that the explosives destroyed low points in the pipelines that would minimize corrosion damage to the rest of the lines. This information doesn't affect Hersh's opinions, and his responses are a little strange [1][2][3]. Finally, Oliver points out that the NS1 damage looks different from the NS2 damage.

If you see a theory whose proponents have high certainty, refuse to acknowledge data that doesn't fit the theory (OR: enlarge the conspiracy to "explain" the new data by assuming it was falsified), speak in the "conspiracy theory genre", and sound unhinged, you see a "conspiracy theory"[2]. If it's just a hypothesis that a conspiracy happened, then no.

So, as long as people are looking for the right signs of a "conspiracy theory", we should want fewer people to believe things in that category. So in that vein, it's worth discussing which signs are more or less important. What other signs can we look for?

  1. ^ He won a Pulitzer Prize 53 years ago.

  2. ^ Hersh arguably ticks all these boxes, but especially the first two, which are the most important. Hersh ignores the satellite data, and assumes the AIS ship location data is falsified (on both the U.S. military ship(s) and the Russia-linked ship?).

I dislike this definition of a conspiracy theory. It tacks on way more meaning to the phrase than it contains on its own, forcing someone to know the definition you're using, and allowing motte and bailey behavior (you call a conspiracy theory a conspiracy theory to discredit it because by definition it is not epistemically sound, but then when provided evidence for it you say 'well it's a theory about a conspiracy, so it's a conspiracy theory'. I'm not saying you would do that, just that defining it like so allows that.)

It's better to keep "conspiracy theory" as "a theory about a conspiracy", and then discuss which ones are legitimate and which ones aren't.

That's a very reasonable concern. But I don't think your proposal describes how people use the term "conspiracy theory" most of the time. Note that the reverse can happen too, where people dismiss an idea as a "conspiracy theory" merely because it's a theory about a conspiracy. Perhaps we just have to accept that there are two meanings and be explicit about which one we're talking about.

(Mod note: The TM in Science™ wasn't supposed to be gigantic, that happened during crossposting. I have edited it back to normal size.)

Do you happen to have any idea of why that keeps happening in Zvi's posts? I'm probably misunderstanding something, but I would think that the "TM" in the Wordpress version of this essay is a unicode character, so why does it turn into an embedded image on LW import, gigantic or otherwise?

If you think you can beat the American __ Association over a long run average, that's great news for you!  That means free money!

Being right is super valuable, and you should monetize it immediately.

---

Anything else is just hot air.

dxu:

The entire argument of Inadequate Equilibria is that there are ways in which systems can be inadequate without being exploitable by relevant actors. Adequacy is a stronger condition than inexploitability, because all the latter requires is that all of the available free energy in the system has been consumed for any reason at all, whereas the former requires also that the energy consumed specifically translates to results in the real world.

In other words: no, it's not always the case that being able to see things a large organization is doing wrong means "free money". That line of thought is naive to the difference between adequacy and inexploitability.

It is a useful exercise to try to map any complaint that something sucks, or that "these people who are in charge are wrong", into a business plan for making a profit by doing something better, and see what obstacles you hit.  However, sometimes the obstacle is "The law forbids you from doing something better".

For example, my contention is that the requirements for becoming a doctor in America are too high; that one could train slightly less intelligent people to do doctor-work, and certainly drop the requirement for a bachelor's degree before going to medical school, and still get doctors who are more than good enough.  (I'm not 100% sure that this requirement ultimately comes from the American Medical Association, but let's say 75% sure.)  How would I monetize it?

TAG:

When it seems worthwhile, do your own research.

If you leave it at "seems", it is going to be bias driven [*]. How about "develop the ability to reason objectively about when you should do your own research"?

[*] Note how contrarianism about climate change and evolution is driven by politics and religion.

No, we cannot assume that the medical establishment, or any other establishment, will always get such questions right. That is not how any of this works. Even the best possible version of the medical (or other) establishment will sometimes get it wrong, if no one points it out without being dismissed as a conspiracy theorist or racist then the establishment will keep getting it wrong and so will you, and criticism is the only known antidote to error in such situations

But remember that just because the official version of X is imperfect, that doesn't mean the alternative version is better. Distinguish the absolute and the relative.

Idiocy, Intellect, and Infohazard

Formulating these illustrates the key technique for productively engaging the obviously false, or the hopelessly muddled: make up simple frames (it doesn't matter if they are apt and capture the intended nuance), develop them enough to restate some of the claims in a coherent way, and gain the ability to think clearly about something related, even if it's not the intended thing.

At best, you'll capture a kernel of a truth in all the nonsense, even if it's not actually there in the intended senses of the claims being made; and if it's there, a seed for framing it should be easy to notice. At worst, you'll give too much credence to some nonsense, so it's best to give priority to understanding (what the claims you made up mean, which distinctions and arguments should be natural to consider important), not to knowing (whether you understand the intended claims and not just your own made up claims correctly, if the intended claims are true, which arguments are well-supported).

As always, the hard part is not saying "Boo! conspiracy theory!" and "Yay! scientific theory!"

The hard part is deciding which is which.