I haven't looked into this, no. I'm quite confident that I have COVID; I have all the classic symptoms (fever, cough, shortness of breath). I also re-tested using an antigen test today with a nasal-only swab and got a positive.

As a datapoint, I tested positive on 3 antigen tests of two different kinds and negative on a Cue test in the same hour, on my first day of having COVID. My suspicion is that this was because I swabbed my throat for the antigen tests, but not for the Cue test, because I wasn't sure if saliva worked for Cue. As further supporting evidence, I lightly brushed my throat for antigen test #1 and got an extremely faint line, and then vigorously swabbed it less than an hour later and got a very clear dark line (both on BinaxNOW tests).

Edit: These tests were all performed yesterday, May 2 2022.

If I take a "non-sentient" chicken and cut off its wings, and I watch it repeatedly and helplessly try to fly, this strikes me as a form of harm to the chicken and its values, even if the chicken is not having a subjective experience of its condition.

I'm curious how you would distinguish between entities that can be harmed in a morally relevant way and entities that cannot. I use subjective experience to make this distinction, but it sounds like you're using something like -- thwarted intentions? telos-violation? I suspect we'd both agree that chickens are morally relevant and (say) pencils are not, and that snapping a pencil in half is not a morally-relevant action. But I'm curious what criterion you're using to draw that boundary.

One could make similar inquiries into 'dissociation'. If a person is regularly dissociated and doesn't feel things very intensely, does it make it more okay to hurt them? 

This is an interesting point; will think about it more.

Most people are tested for cancer because they have one or more symptoms consistent with cancer. So the base rate of 1% "for the patient's age and sex" isn't the correct prior, because most of the people in the base rate have no symptoms that would provoke a test.

To clarify, the problem that Gigerenzer posed to doctors began with "A 50-year-old woman, no symptoms, participates in a routine mammography screening". You're right that if there were symptoms or other reasons to suspect having cancer, that should be factored into the prior. (And routine mammograms are in fact recommended to all women of a certain age in the US.) 
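For concreteness, here is the arithmetic behind the Gigerenzer-style result, using illustrative numbers close to the ones usually quoted for that problem; treat all three figures below as assumptions for the sketch, not as numbers from this thread:

```python
# Bayes-rule arithmetic for a routine screen on an asymptomatic patient.
# All three numbers are illustrative assumptions, not from this thread.
prevalence = 0.01      # P(cancer) for the screened population
sensitivity = 0.90     # P(positive | cancer)
false_pos_rate = 0.09  # P(positive | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos_rate
ppv = prevalence * sensitivity / p_positive  # P(cancer | positive)
print(f"P(cancer | positive screen) = {ppv:.1%}")  # → 9.2%
```

Even with a fairly accurate test, a positive routine screen leaves the probability of cancer under 10%, because the few true positives are swamped by false positives from the much larger healthy group.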

 We really need a computation whose result is a probability.

I agree - it would be ideal to have a way to precisely calculate your prior odds of having COVID. I try to estimate this using microCOVID, which sums my risk based on my recent exposure level, the prevalence in my area, and my vaccination status. I don't know a good way to estimate my prior if I do have symptoms.

My prior would just be a guess, and I don't see how multiplying a guess by 145x is helpful.

I don't fully agree with this part: regardless of whether my prior is a guess, I still need to make real-world decisions about when to self-isolate and when to seek medical treatment. If I have a very mild sore throat that might just be allergies, and I stayed home all week, and I test negative on a rapid test, what should I do? What if I test negative on a PCR test three days later? Whether I'm using Bayes factors, test sensitivity, or just my intuition, I'm still using something to determine at which point it's safe to go out again. Knowing the Bayes factors for the tests I've taken helps that reasoning be slightly more grounded in reality.
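As a sketch of how that grounding works, here is the odds-form update with a hypothetical prior (the 1% prior is a made-up guess for illustration; 145 is the Bayes factor quoted above):

```python
# Odds-form Bayesian update: posterior odds = prior odds × Bayes factor.
# The 1% prior is a hypothetical guess; 145 is the Bayes factor quoted above.
prior_p = 0.01
prior_odds = prior_p / (1 - prior_p)
posterior_odds = prior_odds * 145
posterior_p = posterior_odds / (1 + posterior_odds)
print(f"posterior probability = {posterior_p:.1%}")  # → 59.4%
```

Even a rough prior becomes useful here: the Bayes factor tells you how much a test result should move that guess, which is more than intuition alone provides.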

Edit: I've updated my post to make it clearer that the Gigerenzer problem specified that the test was a routine test on an asymptomatic patient.

An earlier draft of this actually mentioned vaccination status, and I only removed it for sentence flow reasons. You're right that vaccination status (or prior history of COVID) is an important part of your prior estimate, along with prevalence in your area, and your activities/level of exposure. The microCOVID calculator I linked factors in all three of these. I've also edited the relevant sentence in the "Using Bayes factors" section to mention vaccination status.

Wow, that is surprising, thanks for sharing.  Am I reading correctly that you got no positive NAAT/PCR tests, and only got positives from antigen tests? 

I took 13 rapid tests in total, 5 of which were positive. Four of those five positives came from the same brand; of the five tests of that brand that I took, four were positive.

Would you be up for sharing what brand that was?

I don't yet know enough about what causes false positives and false negatives in either antigen tests or NAATs to speculate much, but I appreciate this datapoint! (Also, glad you're feeling well and didn't develop any symptoms)

Thanks for linking the meta-analysis and the other papers; will read (and possibly update the post afterwards)! I especially appreciate that the meta-analysis includes studies of BinaxNOW, something I'd been looking for.

Sensitivity for Ct < 25: 94%; for Ct > 30: 30%. (I'll be writing more about these results in a bit, but the short version is that this strongly supports the belief that test sensitivity depends strongly on viral load and will be highest during peak infectivity.)

Nice, I'd been hearing/reading about using cycle count to determine how much a test's results track infectiousness, and it's great to see the results so starkly support that. Looking forward to your writeup!

I haven't had time to read up about Beta distributions and play with the tool you linked, but I just wanted to say that I really appreciate the thorough explanation! I'm really happy that posting about statistics on LessWrong has the predictable consequence of learning more statistics from the commenters :)

Thanks, I was wondering if the answer would be something like this (basically that I should be using a distribution rather than a point estimate, something that @gwillen also mentioned when he reviewed the draft version of this post).

If the sensitivity and specificity are estimated with data from studies with large (>1000) sample sizes it mostly won’t matter.

That's the case for the antigen test data; the sample sizes are >1000 for each subgroup analyzed (asymptomatic, symptoms developed <1 week ago, symptoms developed >1 week ago).  

The sample size for all NAATs was 4351, but the sample sizes for the Abbott ID NOW and Cepheid Xpert Xpress subgroups were only 812 and 100 respectively. Maybe that's a small enough sample size that I should be suspicious of the subgroup analyses? (@JBlack mentioned this concern below and pointed out that for the Cepheid test, there were only 29 positive cases total.)
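As a rough sketch of why 29 positive cases is worrying, one can model the sensitivity estimate with a Beta posterior (uniform Beta(1, 1) prior) and compare its spread to a study with ~1000 positives at a similar detection rate. The 28-of-29 split below is hypothetical, chosen only as an illustration:

```python
import math

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Hypothetical: 28 of 29 positive cases detected -> posterior Beta(29, 2)
small = beta_sd(28 + 1, 1 + 1)
# Similar detection rate with ~1000 positives -> posterior Beta(967, 35)
large = beta_sd(966 + 1, 34 + 1)
print(f"sd with 29 cases: {small:.3f}; sd with 1000 cases: {large:.3f}")
# → sd with 29 cases: 0.043; sd with 1000 cases: 0.006
```

With only 29 positives, the uncertainty on sensitivity is several percentage points either way, so a point estimate from that subgroup should be taken loosely.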

Thanks, I appreciate this explanation!

The other problem is that the positive sample size must have been only 29 people. That's disturbingly small for a test that may be applied a billion times, and seriously makes me question their validation study that reported it.

Thanks for flagging this. The review's results table ("Summary of findings 1") says "100 samples" and "29 SARS-COV-2 cases"; am I correctly interpreting that as 100 patients, of which 29 were found to have COVID? (I think this is what you're saying too, just want to make sure I'm clear on it)

If I had to pick a single number based only on seeing their end result, I'd go with 96% sensitivity under their study conditions, whatever those were.

Can you say more about how you got 96%?
