waveman's Posts

waveman's Comments

Causal Abstraction Toy Model: Medical Sensor

It looks like I have to read the whole post to see whether it is of interest to me, because there is no summary. Instead you seem to just wade into the detail.

I tried reading the first sentences of each paragraph but that was useless because they are almost all opaque references to the previous material.

I suggest you add a summary and start paragraphs with a sentence encapsulating the key idea of the paragraph.

A letter on optimism about human progress

I downvoted because, in brief:

a) This article is very one-sided.

b) When you read human history, the plethora of collapses IMHO puts a strong onus of proof on those who argue it won't happen again.

c) There are many warning signs of huge problems ahead: global warming, resource depletion (soils, fresh water, phosphates, oil, coal, uranium, numerous other minerals), overpopulation, increasing proliferation of nuclear weapons.

d) Our so-clever civilization depends utterly on cheap energy, and this looks likely to end fairly soon.

e) There is no clear evidence that technological progress is rapid enough to solve these problems.

How do you assess the quality / reliability of a scientific study?

On bias see here https://www.bmj.com/content/335/7631/1202 and references. There is a lot of research about this. Note also that you do not even need to bias a particular researcher; you can just fund the researchers producing the answers you like, or pursuing the avenues you are interested in, e.g. Coke's sponsorship of exercise research, which produces papers suggesting that perhaps exercise is the answer.

One should not simply dismiss a study because of sponsorship, but be aware of what might be going on behind the scenes. And also be aware that people are oblivious to the effect that sponsorship has on them. One study of primary care doctors found a large effect on prescribing from free courses, dinners, etc, but the doctors adamantly denied any impact.

The suggestions of things to look for are valid and useful but often you just don't know what actually happened.

How do you assess the quality / reliability of a scientific study?

Mostly belatedly realizing that studies I took as gospel turned out to be wrong. This triggered an intense desire to know why and how.

How do you assess the quality / reliability of a scientific study?

Mostly medicine, nutrition, metabolism. Also finance and economics.

In Defense of Kegan

For people wanting to understand Kegan's key ideas without too much pain, I suggest "The Discerning Heart" by Philip Lewis. It is a concise and excellent introduction to the topic.

Climate technology primer (1/3): basics

Excellent post.

One oversight I see often in this space, and here, relates to a carbon tax. It is stated that the revenue from a carbon tax can be used to compensate people, especially lower income people, for the increased cost of living resulting from the tax. The fatal problem with this is that in a zero emissions world, there will be no emissions and therefore no carbon tax revenue.

Of course it may be possible to compensate people via other means such as other taxes. But a carbon tax is only required because it is otherwise cheaper to emit carbon. This means costs will go up overall and that there will be a net loss (in the short term at least). There is no free lunch and someone will have to pay.
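The shrinking-revenue point can be sketched numerically. This is a toy illustration with made-up numbers (the tax rate and emissions path are assumptions, not from any real proposal):

```python
# Toy model: carbon tax revenue as emissions decline.
# All numbers are hypothetical, chosen only for illustration.

TAX_PER_TONNE = 50.0  # assumed carbon tax in $/tonne of CO2

def tax_revenue(emissions_tonnes: float) -> float:
    """Revenue available to compensate households for higher costs."""
    return TAX_PER_TONNE * emissions_tonnes

# As emissions fall toward zero over the transition, so does the
# revenue that was supposed to fund the compensation.
for emissions in [100.0, 50.0, 10.0, 0.0]:
    print(f"emissions={emissions:6.1f} t  revenue=${tax_revenue(emissions):,.0f}")
```

Whatever the assumed tax rate, revenue is proportional to emissions, so it is exactly zero once emissions reach zero.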

How do you assess the quality / reliability of a scientific study?

One of the most miserable things about the LW experience is realizing how little you actually know with confidence.

How do you assess the quality / reliability of a scientific study?

I've probably read about 1000 papers. Lessons learned the hard way...

1. Look at the sponsorship of the research and of the researchers (previous sponsorship, "consultancies", etc. are also important for up to 10-15 years). This creates massive bias. E.g. a lot of medical bodies and researchers are owned by pharmaceutical companies.

2. Look at ideological biases of the authors. E.g. a lot of social science research assumes as a given that genes have no effect on personality or intelligence. (Yes, really).

3. Understand statistics very deeply. There is no pain-free way to get this knowledge, but without it you cannot win here. E.g. a) The assumptions behind all the statistical models b) the limitations of alleged "corrections". You need to understand both Bayesian and Frequentist statistics in depth, to the point that they are obvious and intuitive to you.

4. Understand how researchers rig results, e.g. undisclosed multiple comparisons, peeking at the data before deciding what analysis to do, failing to pre-register the design and end points and to follow that pre-registration, "run-in periods" for drug trials, sponsor-controlled committees to review and change diagnoses... There are papers about this, e.g. "Why Most Published Research Findings Are False".

5. After checking sponsorship, read the methods section carefully. Look for problems. Have valid and appropriate statistics been used? Were the logical end points assessed? Maybe then look at the conclusions. Do the conclusions match the body of the paper? Has the data from the study been made available to all qualified researchers to check the analysis? Things can change a lot when that happens, e.g. Tamiflu. If the data is only available to commercial interests and their stooges, this is a bad sign.

6. Has the study been replicated by independent researchers?

7. Is the study observational? If so, does it meet generally accepted criteria for valid observational studies? (Large effect, dose-response gradient, well-understood causal model, well-understood confounders, confounders smaller than the published effect, etc.)

8. Do not think you can read abstracts only and learn much that is useful.

9. Read some of the vitriolic books about the problems in research, e.g. "Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare" by Peter C. Gøtzsche. Not everything in this book is true, but it will open your eyes about what can happen.

10. Face up to the fact that 80-90% of studies are useless or wrong. You will spend a lot of time reading things only to conclude that there is not much there.
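The multiple-comparisons problem from point 4 is easy to demonstrate with a minimal simulation, assuming 20 independent true-null endpoints each tested at p < 0.05 (the endpoint count and threshold are illustrative assumptions):

```python
# Sketch: if a study quietly tests 20 independent endpoints where no real
# effect exists, the chance of at least one "significant" result at
# alpha = 0.05 is 1 - 0.95**20, roughly 64%.
import random

random.seed(0)

def run_study(n_endpoints: int, alpha: float = 0.05) -> bool:
    """Simulate one study testing n_endpoints true-null endpoints.

    Under the null hypothesis each endpoint's p-value is uniform on
    [0, 1], so a 'significant' result occurs with probability alpha.
    Returns True if any endpoint comes out significant.
    """
    return any(random.random() < alpha for _ in range(n_endpoints))

n_studies = 10_000
false_positive_studies = sum(run_study(20) for _ in range(n_studies))
print(false_positive_studies / n_studies)  # close to 1 - 0.95**20 ≈ 0.64
```

This is why undisclosed multiple comparisons, without any outright fraud, are enough to fill journals with spurious findings.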

SSC Meetups Everywhere: Brighton, UK

I am thinking that "Australia" is not correct and should read "England".
