It just takes time. Your diaphragm will eventually stop being so annoying.
Not in my experience, which is substantial, but perhaps specific to me.
Good advice, but note that it applies specifically to street or track running, not to all running. Trail is different in a few notable ways.
Major differences:
Epistemic status: generally aligned with pro recommendations; injury-free for 15 years, starting once I adopted these practices.
I'm really glad to see taste enter the discourse! Taste is a meta-rational skill, and meta-rationality (the art of relevant realization: how not to get stuck optimizing toward a local maximum that isn't actually where you want to be) has very much been lacking from the public conversation about how to get more of the good AI can bring while also doing common-sense things to manage risk.
Why have I been concerned about this? Optimizing toward a local maximum that isn't actually where you want to be is EXACTLY the problem that so many content platforms (Facebook, Twitter, Quora) have gotten themselves into, and my impression has been that the people building AI products are drawn from the same population. So seeing taste get discussed is very welcome and, if it continues, will definitely prompt a positive update to my expectations of how things are going to turn out.
A useful direction for further research would be to analyze fiction and non-fiction as separate corpora. Longer sentences may reflect a tendency towards hypotaxis, and they may also be a deliberate stylistic choice for fiction writers, perhaps setting a more languid and relaxed tone not appropriate for much non-fiction.
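If anyone wants to pick this up, here is a minimal sketch of the comparison, assuming a hypothetical corpus layout (two directories of plain-text files, fiction/ and nonfiction/) and a naive regex sentence splitter; a real analysis would want a proper sentence tokenizer such as NLTK's punkt:

```python
import re
from pathlib import Path

def mean_sentence_length(text: str) -> float:
    """Average sentence length in words, splitting naively on ., !, and ?."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

# Hypothetical layout: one .txt file per document, in fiction/ and nonfiction/.
for corpus in ("fiction", "nonfiction"):
    lengths = [
        mean_sentence_length(p.read_text(encoding="utf-8"))
        for p in Path(corpus).glob("*.txt")
    ]
    avg = sum(lengths) / len(lengths) if lengths else 0.0
    print(f"{corpus}: {avg:.1f} words/sentence across {len(lengths)} docs")
```

Mean sentence length is only a rough proxy for hypotaxis, of course; counting subordinating conjunctions or parsing clause depth would get closer to the actual construct.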
Fanelli is a good, if dated, reference for this. Another important point is that there are levels of misconduct in research, ranging from bad authorship practices to outright fabrication of results, with the less severe practices being relatively more common: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4269469/
Aside from all that, there's irreproducibility, which doesn't arise from any kind of deliberate misconduct, but still pollutes the epistemic commons: https://www.cos.io/rpcb
As someone with experience in BSL-3 labs, I find BSL a good metaphor. The big issue with the RSP proposal is that it's still just a set of voluntary commitments, which could undermine progress on real risk management by giving policymakers a way to make it look like they've done something without really doing anything. It would be much better with input from risk management professionals.
I'm confused. The protest is now listed at 4. Have you coordinated with them?
I like it, but it feels like you could have worked snakes in there somehow: https://www.vectorsofmind.com/p/the-snake-cult-of-consciousness
On the topic of capabilities, agentic research assistants have come a long way in 2025. Elicit & Edison Scientific are ones to watch, but they still struggle to adequately cite the sources for the claims in the reports they generate or the conclusions they come to. Contrast this with Deep Research models, which include nonsense from press releases and blog posts even when explicitly asked to exclude these, though they have improved significantly too. Progress, but we're still a long way from what an average PhD could put together. They sure are quicker, but moving more quickly in the wrong direction isn't that helpful in research! One encouraging direction I see some tools (notably Moara.io) moving in is automating the drudgery of systematic data extraction, freeing up experts to do the high-context analysis required for synthesizing the literature.
If you're not an expert, you'll no doubt be impressed, but be careful. Gell-Mann amnesia is still as much a thing with LLM-assisted research as it ever was with science journalism.