Interesting question. I found this article https://arxiv.org/abs/1802.07740 together with the papers that cite it https://ui.adsabs.harvard.edu/abs/2018arXiv180207740R/citations as a good starting point.
A way to subscribe to, or get notifications for, new comments on a post. I already have enough tabs open in my browser to keep track of all the interesting posts :)
The title doesn't seem to fit the question well: p-hacking detection does not map neatly onto replicability, even though the presence of p-hacking usually means that a study will not replicate.
I'm interested in automatic summarization of papers' key characteristics (PICO, sample size, methods), and I plan to start building something soon.
You're referring to the general population, I guess, so it could be a reusable device you blow your nose into, after which a (manual?) vacuum system sucks the remaining mucus from the nostrils. To avoid contact between the hands and the pathogens, the device would be pressed against the base of the nose, perhaps with the thumb and middle finger under the nostrils and the index finger on the bridge of the nose. To fit in a pocket, it should be about the size of a vaporizer pen, with mini plastic bags that are thrown away when full, transparent so the contents can be examined for medical purposes, and the device should be reloadable with fresh bags.
I can see a lot of engineering problems with that, but the function itself would be performed efficiently, unless I'm missing something.
"That's useful when you have many professionals who need a common language but which disagree about the causes of mental illnesses."
Using the proposed framework, this means that the field lacks Foundational Understanding. Thus I wouldn't feel comfortable calling the DSM an ontology, though there is, e.g., the Mental Disease Ontology, which sometimes maps to the DSM.
This post is so good! I was just wondering whether this framework could be useful for the prediction business, where the Foundational Understanding is crowd-sourced through e.g. academic literature, open data, and manual curation. Ontologies might be created and curated by public consortia, and evaluation could be a public-private endeavour.
We don't really have a metric for meaning or impact though.
And even if we had decent metrics, they would only gain value with time, since the impact of a discovery becomes evident only after a while (think of patents, landmark papers, or new disciplines).
For the most part, it seems to me that people are scared to work on problems that are actually meaningful.
It appears to me that the incentive system is the real issue here. UBI, or some form of basic job guarantee, might release a lot of people from their publishing cages, allowing them to work on research fundamentals: gathering good data, working on theory and methodology, and replicating studies.