Sometimes programming is like that, but then I get all anxious that I just haven’t checked everything thoroughly!
My guess is this has more to do with whether you're doing something basic or advanced, in any discipline. It's just that you run into ambiguity a lot sooner in the humanities.
Just glancing at that Many Labs paper, it's looking specifically at psych studies replicable through a web browser. Who knows to what extent that generalizes to psych studies more broadly, or to biomedical research?
I don't think that paper allows any such estimate because it's based on published results, which are highly biased toward "significant" findings.
So it sounds like you're worried that a bunch of failed replication attempts got put in the file drawer, even after there was a published significant finding for the replication attempt to be pushing back against?
From another of Ioannidis's own papers:
Of 49 highly cited original clinical research studies, 45 claimed that the intervention was effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged.
If 44% of those unchallenged studies in turn replicated, then the total replication rate would be about 55%. Of course, Ioannidis himself gives a possible reason why some of these haven't been replicated: "Sometimes the evidence from the original study may seem so overwhelming that further similar studies are deemed unethical to perform." So perhaps we should think that more than 44% of the unchallenged studies would replicate.
If we count the 16% that found weaker but still statistically significant effects as replications rather than failures to replicate, and assume the 24% of unchallenged studies replicate at that same 60% rate, then we might expect that a total of 74% of biomedical papers in high-impact journals with over 1,000 citations have found a real effect. Is that legit? Well, it's his binary, not mine, and in WMPRFAF he's talking about the existence, not the strength, of relationships.
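The arithmetic above is easy to check mechanically. A minimal Python sketch, using the rounded percentages from the quoted study (variable names are mine, and the extrapolation to unchallenged studies is the same assumption made in the text):

```python
# Rounded shares of the 45 studies that claimed an effective intervention:
contradicted = 0.16   # contradicted by subsequent studies
weaker       = 0.16   # weaker (but still significant) effects later
replicated   = 0.44   # replicated
unchallenged = 0.24   # never retested

# Strict reading: only the 44% count as replications, and the
# unchallenged studies are assumed to replicate at that same rate.
strict = replicated + replicated * unchallenged

# Lenient reading: weaker-but-significant results also count (60% total),
# again extrapolated to the unchallenged studies.
lenient = (replicated + weaker) + (replicated + weaker) * unchallenged

print(f"strict: {strict:.1%}, lenient: {lenient:.1%}")
```

The strict reading comes out to about 55% and the lenient one to about 74%, matching the figures in the comment (up to rounding of the input percentages).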
Although this paper looked at highly-cited papers, Ioannidis also notes that "The current analysis found that matched studies that were not so highly cited had a greater proportion of “negative” findings and similar or smaller proportions of contradicted results as the highly cited ones." I.e. less-highly-cited findings have fewer problems with lack of replication. So that 74% is, if anything, most likely a lower bound on replication rates in the biomedical literature more broadly.
Ioannidis has refuted himself.
You could think of it this way: if R is the ratio of (combinations that total N on two dice) to (combinations that don't), then the chance of rolling N on two dice is R/(R+1). For example, there are 2 ways to roll a 3 ((1,2) and (2,1)) and 34 ways to not roll a 3, so R = 2/34, and the probability of rolling a 3 is (2/34)/(1+2/34) = 2/36.
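The odds-to-probability conversion in the dice example can be verified by brute-force enumeration; a short Python sketch:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 ordered rolls of two dice.
rolls = list(product(range(1, 7), repeat=2))
hits = sum(1 for a, b in rolls if a + b == 3)  # 2 ways: (1,2) and (2,1)
misses = len(rolls) - hits                     # 34 ways

R = Fraction(hits, misses)  # odds of rolling a 3, i.e. 2:34
p = R / (R + 1)             # odds converted to probability

print(p)  # 1/18, i.e. 2/36
```

The same conversion is what takes Ioannidis's R (odds of a true relationship) to R/(R+1) (probability of a true relationship).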
Question re: "Why Most Published Research Findings are False":
Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field... The pre-study probability of a relationship being true is R/(R + 1).
What is the difference between "the ratio of the number of 'true relationships' to 'no relationships' among those tested in the field" and "the pre-study probability of a relationship being true"?
I get the point of view that we should be forthright about our goals, practices, and community affiliations. Nothing wrong with using a label to cultivate a sense of belonging. After all, Christians call themselves after their ideal of perfection, so why shouldn't we?
I think part of the reason is that just about everybody wants to be rational. Not everybody wants to be a guitarist, Christian, perfectionist, or idealist.
Also, most groups have some way of telling whether somebody's "doing the thing" or not. Catholics have the sacraments, and you have to call him Jesus, not Frank. Guitarists practice or have chops. Just about everybody tries to think rationally from time to time, even if they fail, so what's the thing that somebody would have to do to not be a rationalist?
Why don't we call ourselves epistemologists? At least it's one syllable shorter than "aspiring rationalist." Plus, it implies that we're interested in rational thought, not that we're experts at it.
Funnily enough, I feel more trepidation about referring to myself as an epistemologist than as a "rationalist." I think it sounds too much like a professional title. But heck, I'm an author even though I've never published a book. I'm a musician even though I don't play professionally. Why can't I be an epistemologist?
How should we weight and relate the training of our mind, body, emotions, and skills?
I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It's a stochastic process that seems to work OK, both individually and as a species.
We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?
In programming, that's true at first. But as projects increase in scope, there's a risk of using an architecture that works when you’re testing, or for your initial feature set, but will become problematic in the long run.
For example, I just read an interesting article on how a project used a document store database (MongoDB), which worked great until their client wanted the software to start building relationships between data that had formerly been “leaves on the tree.” They ultimately had to convert to a traditional relational database.
Of course there are parallels in math, as when you try a technique for integrating or parameterizing that seems reasonable but won’t actually work.
Math is training for the mind, but not like you think
Just a hypothesis:
People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:
“Mathematics is food for the brain,” says math professor Dr. Arthur Benjamin. “It helps you think precisely, decisively, and creatively and helps you look at the world from multiple perspectives . . . . [It’s] a new way to experience beauty—in the form of a surprising pattern or an elegant logical argument.”
But math doesn't obviously seem to be the only way to practice precision, decisiveness, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.
As I've studied calculus, coming from a humanities background, I'd argue it differently.
Mathematics shares with only a few related disciplines and games the quality of unambiguous objectivity. It also has the nearly unique quality that you cannot bullshit your way through it: miss any link in the chain and the whole thing falls apart.
It can therefore serve as a more reliable signal, to self and others, of one's own learning capacity.
Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.
It was the silence of sullen agreement.