I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

If I take a "non-sentient" chicken, cut off its wings, and watch it repeatedly and helplessly try to fly, this strikes me as a form of harm to the chicken and its values, even if the chicken is not having a subjective experience of its condition.

I'm curious how you would distinguish between entities that can be harmed in a morally relevant way and entities that cannot. I use subjective experience to make this distinction, but it sounds like you're using something like -- thwarted intentions? telos-violation? I suspect we'd both agree that chickens are morally relevant and (say) pencils are not, and that snapping a pencil in half is not a morally-relevant action. But I'm curious what criterion you're using to draw that boundary.

One could make similar inquiries into 'dissociation'. If a person is regularly dissociated and doesn't feel things very intensely, does it make it more okay to hurt them? 

This is an interesting point; will think about it more.

How much should you update on a COVID test result?

Most people are tested for cancer because they have one or more symptoms consistent with cancer. So the base rate of 1% "for the patient's age and sex" isn't the correct prior, because most of the people in the base rate have no symptoms that would provoke a test.

To clarify, the problem that Gigerenzer posed to doctors began with "A 50-year-old woman, no symptoms, participates in a routine mammography screening". You're right that if there were symptoms or other reasons to suspect having cancer, that should be factored into the prior. (And routine mammograms are in fact recommended to all women of a certain age in the US.) 
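For concreteness, the arithmetic for that screening problem looks like this. The numbers below (1% prevalence, 90% sensitivity, 9% false-positive rate) are the ones commonly quoted for Gigerenzer's version of the problem; I'm assuming them here rather than taking them from the original study.

```python
# Assumed numbers from the standard statement of Gigerenzer's problem:
prevalence = 0.01          # base rate of cancer in asymptomatic women of this age
sensitivity = 0.90         # P(positive | cancer)
false_positive_rate = 0.09 # P(positive | no cancer)

# Total probability of a positive mammogram:
p_pos = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' theorem: probability of cancer given a positive result
p_cancer_given_pos = sensitivity * prevalence / p_pos
print(f"P(cancer | positive) = {p_cancer_given_pos:.1%}")  # ≈ 9.2%
```

This is the punchline of the problem: even with a positive result, the posterior is only about 9%, because the false positives from the 99% of healthy patients swamp the true positives.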

 We really need a computation whose result is a probability.

I agree - it would be ideal to have a way to precisely calculate your prior odds of having COVID. I try to estimate this using microCOVID, which sums my risk based on my recent exposure level, the prevalence in my area, and my vaccination status. I don't know a good way to estimate my prior if I do have symptoms.

My prior would just be a guess, and I don't see how multiplying a guess by 145x is helpful.

I don't fully agree with this part, because regardless of whether my prior is a guess or not, I still need to make real-world decisions about when to self-isolate and when to seek medical treatment. If I have a very mild sore throat that might just be allergies, and I stayed home all week, and I test negative on a rapid test, what should I do? What if I test negative on a PCR test three days later? Regardless of whether I'm using Bayes factors, test sensitivity, or just my intuition, I'm still using something to determine at which point it's safe to go out again. Knowing the Bayes factors for the tests I've taken helps that reasoning be slightly more grounded in reality.
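As a sketch of what that update looks like in odds form (the 145x Bayes factor is the one mentioned above; the 0.1% prior below is a made-up placeholder, not a number from any study):

```python
def update_odds(prior_prob, bayes_factor):
    """Convert a prior probability to odds, multiply by the Bayes
    factor, and convert back to a probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Hypothetical 0.1% prior, updated by the 145x Bayes factor:
posterior = update_odds(0.001, 145)
print(f"{posterior:.1%}")  # ≈ 12.7%
```

Even a rough guess of a prior changes the decision a lot after a 145x update: a 0.1% guess lands around 13%, while a 1% guess would land above 50%.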

Edit: I've updated my post to make it clearer that the Gigerenzer problem specified that the test was a routine test on an asymptomatic patient.

How much should you update on a COVID test result?

An earlier draft of this actually mentioned vaccination status, and I only removed it for sentence flow reasons. You're right that vaccination status (or prior history of COVID) is an important part of your prior estimate, along with prevalence in your area, and your activities/level of exposure. The microCOVID calculator I linked factors in all three of these. I've also edited the relevant sentence in the "Using Bayes factors" section to mention vaccination status.

How much should you update on a COVID test result?

Wow, that is surprising, thanks for sharing.  Am I reading correctly that you got no positive NAAT/PCR tests, and only got positives from antigen tests? 

I took 13 rapid tests in total, 5 of which were positive, and 4 of those positives came from a single brand. In other words, 4 out of the 5 tests of that brand that I took were positive.

Would you be up for sharing what brand that was?

I don't yet know enough about what causes false positives and false negatives in either antigen tests or NAATs to speculate much, but I appreciate this datapoint! (Also, glad you're feeling well and didn't develop any symptoms)

How much should you update on a COVID test result?

Thanks for linking the meta-analysis and the other papers; will read (and possibly update the post afterwards)! I especially appreciate that the meta-analysis includes studies of BinaxNOW, something I'd been looking for.

Sensitivity for Ct < 25: 94%, Ct > 30: 30%. (I'll be writing more about these results in a bit, but the short version is that this strongly supports the belief that test sensitivity depends strongly on viral load and will be highest during peak infectivity).

Nice, I'd been hearing/reading about using cycle count to determine how much a test's results track infectiousness, and it's really nice to see the results so starkly supporting that. Looking forward to your writeup!

How much should you update on a COVID test result?

I haven't had time to read up about Beta distributions and play with the tool you linked, but I just wanted to say that I really appreciate the thorough explanation! I'm really happy that posting about statistics on LessWrong has the predictable consequence of learning more statistics from the commenters :)

How much should you update on a COVID test result?

Thanks, I was wondering if the answer would be something like this (basically that I should be using a distribution rather than a point estimate, something that @gwillen also mentioned when he reviewed the draft version of this post).

If the sensitivity and specificity are estimated with data from studies with large (>1000) sample sizes it mostly won’t matter.

That's the case for the antigen test data; the sample sizes are >1000 for each subgroup analyzed (asymptomatic, symptoms developed <1 week ago, symptoms developed >1 week ago).  

The sample size for all NAATs was 4351, but the sample sizes for the subgroups of Abbott ID NOW and Cepheid Xpert Xpress were only 812 and 100 respectively. Maybe that's a small enough sample size that I should be suspicious of the subgroup analyses? (@JBlack mentioned this concern below and pointed out that for the Cepheid test, there were only 29 positive cases total.)
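As a rough illustration of how much uncertainty 29 positive cases leaves, here's a Beta-posterior sketch. The 28-of-29 detection count below is hypothetical (chosen just to give a point estimate near the reported sensitivity), not the review's actual count:

```python
from scipy.stats import beta

# Hypothetical: 28 of 29 positive cases detected by the test.
detected, positives = 28, 29

# Posterior over the true sensitivity, starting from a uniform prior:
# Beta(1 + successes, 1 + failures).
posterior = beta(detected + 1, positives - detected + 1)
lo, hi = posterior.ppf([0.025, 0.975])

point = detected / positives
print(f"point estimate {point:.1%}, 95% credible interval ({lo:.1%}, {hi:.1%})")
```

With only 29 positives, the interval spans well over ten percentage points, so a headline sensitivity figure from this subgroup hides a lot of uncertainty.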

How much should you update on a COVID test result?

Thanks, I appreciate this explanation!

The other problem is that the positive sample size must have been only 29 people. That's disturbingly small for a test that may be applied a billion times, and seriously makes me question their validation study that reported it.

Thanks for flagging this. The review's results table ("Summary of findings 1") says "100 samples" and "29 SARS-COV-2 cases"; am I correctly interpreting that as 100 patients, of which 29 were found to have COVID? (I think this is what you're saying too, just want to make sure I'm clear on it)

If I had to pick a single number based only on seeing their end result, I'd go with 96% sensitivity under their study conditions, whatever those were.

Can you say more about how you got 96%?

How much should you update on a COVID test result?

Yeah, based on the Cochrane paper I'd interpret "one positive result and one negative result" as an overall update towards having COVID. In general, both rapid antigen tests and NAATs are more specific than they are sensitive (more likely to return false negatives than false positives).
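As a sketch of why the two results net out as an update toward COVID (the Bayes factors below are illustrative placeholders, not numbers from the Cochrane review):

```python
# Illustrative Bayes factors (assumed): a positive result is strong
# evidence because false positives are rare; a negative result is
# weaker evidence because false negatives are common.
bf_positive = 100  # assumed BF for a positive result
bf_negative = 0.3  # assumed BF for a negative result

# Treating the two tests as independent, the updates multiply:
combined = bf_positive * bf_negative
print(combined)  # ≈ 30: still a substantial net update toward COVID
```

The asymmetry is the whole story: because the positive result's Bayes factor is much farther from 1 than the negative result's, the product stays well above 1.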

Though also see the "Caveats about infectiousness" section, which suggests that NAATs have a much higher false positive rate for detecting infectiousness than for detecting illness. I don't have numbers for this, unfortunately, so I'm not sure whether 1 positive NAAT + 1 negative NAAT is overall an update toward or away from infectiousness.

How much should you update on a COVID test result?

I'm not super sure; I wrote about this a little in the section "What if you take multiple tests?":

If you get a false negative because you have a low viral load, or because you have an unusual genetic variant of COVID that's less likely to be amplified by PCR*, presumably that will cause correlated failures across multiple tests. My guess is that each additional test gives you a less-significant update than the first one.

*This scenario is just speculation, I'm not actually sure what the main causes of false negatives are for PCR tests.

but that's just a guess. I'd love to hear from anyone who has a more detailed understanding of what causes failures in NAATs and antigen tests.

Naively, I'd expect that if the test fails due to low viral load, that would probably cause correlated failures across all tests taken on the same day. Waiting a few days between tests is probably a good idea, especially if you were likely to be in the early-infection stage (and so likely low viral load) during your first test. The instructions for the BinaxNOW rapid antigen test say that if you get a negative result, you shouldn't repeat the test until 3 days later.
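Here's a toy model of what correlated failures would do to a second same-day test (all numbers are made up for illustration):

```python
# Toy model: suppose 20% of infections are "low viral load", and the
# test misses those 90% of the time versus 10% of the time otherwise.
p_low = 0.2
miss_low, miss_high = 0.9, 0.1

# One test: P(negative | COVID)
p_neg = p_low * miss_low + (1 - p_low) * miss_high              # 0.26

# Two same-day tests: misses are correlated through viral load,
# so we square the miss rate *within* each subgroup.
p_two_neg = p_low * miss_low**2 + (1 - p_low) * miss_high**2    # 0.17

# Naive independence assumption: just square the overall miss rate.
p_two_neg_naive = p_neg**2                                      # 0.0676

print(p_two_neg, p_two_neg_naive)
```

In this toy model, two negatives are more than twice as likely under the correlated model than under naive independence, which is exactly the "each additional test gives a less-significant update than the first one" effect: after one miss, you're disproportionately likely to be in the low-viral-load group where the test keeps missing.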
