Do you mean that the half-day projects have to be in sequence relative to the other half-day projects, or within a particular half-day project, its contents have to be in sequence (so you can't for instance miss the first step then give up and skip to the second step)?

In general, if things have to be done in sequence, I often make the tasks non-specific. For example, if I want to read a set of chapters in order, I might make each task 'read a chapter' rather than 'read the first chapter', etc. Then if I were to fail at the first one, I would keep reading the first chapter to grab the second item, and when I eventually rescued what would have been the first one, I would collect it by reading whatever chapter I was up to. (This is all hypothetical; I never read chapters that fast.)

Second sentence: 

  • People say very different things depending on framing, so responses to any particularly-framed question are presumably not accurate, though I'd still take them as some evidence.
  • People say very different things from one another, so any particular person is highly unlikely to be accurate. An aggregate might still be good, but if, for example, people say such different things that three-quarters of them have to be totally wrong, then I don't think it's that much more likely that the remaining quarter is about right than that the answer is something almost nobody said.

First sentence: 

  • In spite of the above, and the prior low probability of this being a reliable guide to AGI timelines, our paper was the 16th most discussed paper in the world. On the other hand, something like Ajeya's timelines report (or even AI Impacts' cruder timelines BOTEC earlier) seems more informative, and gets way less attention. (I didn't mean 'within the class of surveys, interest doesn't track informativeness much', though that might be true; I meant 'people seem to have substantial interest in surveys beyond what is explained by them being informative about e.g. AI timelines'.)

We didn't do rounding though, right? Like, these people actually said 0?

I did ask about it, data here (note that n is small): https://www.lesswrong.com/posts/iTH6gizyXFxxthkDa/positly-covid-survey-long-covid

Yeah, I meant that early on in the vaccinations, officialish-seeming articles said or implied that breakthrough cases were very rare (even calling them 'breakthrough cases', to my ear, makes them sound more unexpected than they should be, but perhaps that's just what such things are always called). That seemed false even at the time, before later iterations of covid made it more blatantly so. I think it was probably motivated partly by a desire to convince people that the vaccine was very good, rather than just error, which I think is questionable behavior.

I agree that I'm more likely to be concerned about in-fact-psychosomatic things than average, and on the outside view, thus probably biased in that direction in interpreting evidence. Sorry if that colors the set of considerations that seem interesting to me. (I didn't mean to claim that this was an unbiased list; sorry if I implied it.)

Some points regarding the object level:

  1. The scenario I described was to illustrate a logical point (that the initially tempting inference from that study wasn't valid). So I wouldn't want to take the numbers from that hypothetical scenario and apply them across the board to interpreting other data. I haven't thought through what range of possible numbers is really implied, or whether there are other ways to make sense of these prima facie weird findings (especially re the lack of connection between having covid and thinking you have covid). If I put a lot of stock in that study, I agree there is some adjustment to be made to other numbers (and probably some adjustment anyway: surely some amount of misattribution is going on, and even some amount of psychosomatic illness).
  2. My description was actually of how you would get those results if approximately none of the illness was psychosomatic but a lot of it was other illnesses. (The description would also work with psychosomatic illnesses, but I worry that you misread my point: you say that in that world most things are psychosomatic, whereas my point was that you can't infer that anything was psychosomatic.)
  3. If the scenario I described was correct, the rates of misattribution implied would be specific to that population and their total ignorance about whether they had covid, rather than a fact intrinsic to covid in general and applicable to all times and places. I do find it very hard to believe that in general there is not some decently strong association between having covid and thinking you have covid, even if there are also a lot of errors.
  4. It's a single study, and single studies find all kinds of things. I don't recall seeing other evidence supporting it. In such a case, I'm inclined to treat it as worthy of adding some uncertainty, but not worthy of a huge update about everything. 
  5. If this consideration reduced real long covid cases by a factor of two, it doesn't feel like that changes the story very much (there's a lot of factor-of-two-level uncertainty all over the place, especially in guessing what the rate is for a specific demographic), so I guess it doesn't seem cruxy enough to give a lot of attention to.
  6. I agree that mostly it isn't salient to me that some fraction of cases are misattributions, and that maybe I should keep it in mind more, and say things like 'it looks like many people who think they had covid can no longer do their jobs' instead of taking things at face value. Though in my defense, this was a list of considerations, so I'm also not flagging all of the other corrections one might want to make to numbers throughout, as I might if I were doing a careful calculation. 
  7. It's true that I don't really believe that at least half of the bad cases are misattributions or psychosomatic; the psychosomatic story seems particularly far-fetched (particularly for the bad cases). Perhaps I'm mis-imagining what this would look like. Is there other evidence for this that you are moved by?

I thought rapid tests were generally considered to have a much lower false negative rate for detecting contagiousness, though they often miss people who are infected but not yet contagious. I forget why I think this, and haven't been following possible updates on this story, but is that different from your impression? (Here's one place I think saying this, for instance: https://www.rapidtests.org/blog/antigen-tests-as-contagiousness-tests) On this story, rapid tests immediately before an event would reduce overall risk by a lot.
