I've been looking at papers involving a lot of 'controlling for confounders' recently and am unsure about how much weight to give their results.
Does anyone have recommendations on how to judge the robustness of these kinds of studies?
Also, I was considering doing some tests of my own based on random causal graphs: testing what happens to regressions when you control for only a limited subset of confounders, varying the size/depth of the graph, and so on. I can't seem to find any similar papers, but I don't know the area; does anyone know of similar work?
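To make the idea concrete, here's a minimal sketch of the kind of experiment I have in mind, under very simplified assumptions (a single linear model rather than a random graph, with five confounders that each affect both the treatment and the outcome, and unit weights chosen so the omitted-variable bias is predictable). The names `simulate` and `ols_effect_of_x` are mine, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100_000, n_conf=5, true_effect=1.0):
    # n_conf confounders, each raising both the treatment X and the outcome Y
    Z = rng.normal(size=(n, n_conf))
    X = Z.sum(axis=1) + rng.normal(size=n)
    Y = true_effect * X + Z.sum(axis=1) + rng.normal(size=n)
    return X, Y, Z

def ols_effect_of_x(X, Y, controls):
    # OLS of Y on X, an intercept, and the chosen controls; return X's coefficient
    design = np.column_stack([X, np.ones(len(X)), controls])
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return coef[0]

X, Y, Z = simulate()
for k in range(Z.shape[1] + 1):
    est = ols_effect_of_x(X, Y, Z[:, :k])
    print(f"controlling for {k}/{Z.shape[1]} confounders: estimate = {est:.3f}")
```

With all five confounders controlled, the estimate recovers the true effect of 1.0; with none controlled, it is biased upward (roughly 1 + m/(m+1) for m omitted confounders in this setup). The interesting extension would be doing this over random DAGs rather than one fixed structure.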
Robust statistics is a field. Wikipedia links to http://lagrange.math.siu.edu/Olive/ol-bookp.htm, which has chapters such as Chapter 7 (Robust Regression) and Chapter 8 (Robust Regression Algorithms).
Thanks, I'll give it a read.
Maybe reading Gelman's self-contained comments on SSC's "More Confounders" post would make you more confused, in a good way.
Cheers, glad I'm not dealing with 300 variables. Luckily, I don't think my situation is quite as dire as the one with the sleeping pills.
Question about error-correcting codes that's probably in the literature but I don't seem to be able to find the right search terms:
How can we apply error-correcting codes to logical *algorithms*, as well as bit streams?
If we want to check that a bit-stream is accurate, we know how to do so with manageable overhead. But what happens if there's an error in the hardware that does the checking? I can't see how to construct a system with no single point of failure: you can run the correction algorithm multiple times, but how do you compare the results without ending up back at a single point of failure?
Anyone know any relevant papers or got a cool solution?
Interested for the stability of computronium-based futures!
At the risk of pointing out the obvious, the "typical" method historically used in military and space applications is hardware redundancy (often x3).