hold_my_fish

Young kids catching COVID: how much to worry?

This seems reasonable, but I wonder whether "long-term complications" might be a bit underrated. It seems like there are a lot of viruses that have long-term effects or other non-obvious consequences. (I should add that I'm not a biologist, so this is not an informed opinion.)

The example I'm most familiar with is chicken pox causing shingles, decades after the initial illness. In that case, shingles is (I think) typically more severe than the original illness, and it's quite common: 1 in 3 people develop it in their lifetime, according to the CDC.

Other examples that come to mind are measles erasing immune memory (which IIRC wasn't known until recently) and, though not a childhood illness, HPV causing cervical cancer.

Each of these examples has some big differences from SARS-CoV-2, but there isn't much experience with severe coronaviruses, so I don't know how to do better. Maybe the ideal would be to go through a list of reasonably well-understood viruses and check what proportion have known long-term effects or non-obvious consequences (and the rate).

We can get a lower bound from chicken pox and measles. If there are 10-20 common childhood illnesses (based on a quick search), then, using 2 as the numerator, at least 10%-20% of them have consequences that are not immediately obvious. If we go with the 1/3rd rate for shingles (since I don't know for measles), that would translate into a 3%-7% lower bound for covid.
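The back-of-envelope arithmetic above can be sketched as follows (the illness count and the 1/3 shingles rate are the rough figures from the text, not established data):

```python
# Illustrative lower-bound estimate, using the rough figures from the text.
childhood_illness_count = (10, 20)  # rough range of common childhood illnesses
with_late_effects = 2               # chicken pox (shingles) and measles

# Fraction of common childhood illnesses with known non-obvious consequences
frac_low = with_late_effects / childhood_illness_count[1]   # 2/20 = 10%
frac_high = with_late_effects / childhood_illness_count[0]  # 2/10 = 20%

shingles_rate = 1 / 3  # CDC lifetime shingles rate after chicken pox

# Lower bound on P(serious non-obvious consequence) for a new virus
bound_low = frac_low * shingles_rate    # ~3.3%
bound_high = frac_high * shingles_rate  # ~6.7%
print(f"{bound_low:.1%} - {bound_high:.1%}")  # 3.3% - 6.7%
```

This is just the product of two crude estimates, so the 3%–7% range inherits all the uncertainty of both inputs.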

Would I actually put a >3% probability on a serious long-term effect or non-obvious consequence for a kid who catches covid? A persuasive counterargument that comes to mind is that the immediate experience of covid is milder for a kid than chicken pox or measles, which would suggest that the non-immediate effects are also milder.

All in all, my confidence here is extremely low, but hopefully this gives some food for thought.

Human instincts, symbol grounding, and the blank-slate neocortex

Thanks for your reply!

A few points where clarification would help, if you don't mind (feel free to skip some):

  • What are the capabilities of the "generative model"? In general, the term seems to be used in various ways. e.g.
    • Sampling from the learned distribution (analogous to GPT-3 at temp=1)
    • Evaluating the probability of a given point
    • Producing the predicted most likely point (analogous to GPT-3 at temp=0)
  • Is what we're predicting the input at the next time step? (Sometimes "predict" can be used to mean filling in missing information, but that doesn't seem to make sense in this context.) Also, I'm not sure what I mean by "time step" here.
  • The "input signal" here is coming from whatever is wired into the cortex, right? Does it work to think of this as a vector in ℝⁿ?
  • Is the contextual information just whatever is the current input, plus whatever signals are still bouncing around?

Also, the capability described may be a bit too broad, since there are some predictions that the cortex seems to be bad at. Consider predicting the sum of two 8-digit integers. Digital computers compute that easily, so it's fundamentally an easy problem, but for humans to do it requires effort. Yet for some other predictions, the cortex easily outperforms today's digital computers. What characterizes the prediction problems that the cortex does well?
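For concreteness, the addition example really is a one-liner for a digital computer (the particular integers are arbitrary):

```python
# Summing two 8-digit integers: trivial for a digital computer,
# effortful for an unaided human.
a, b = 73_418_596, 20_574_311
total = a + b
print(total)  # 93992907
```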

Human instincts, symbol grounding, and the blank-slate neocortex

I find myself returning to this because the idea of a "common cortical algorithm" is intriguing.

It seems to me that if there is a "common cortical algorithm" then there is also a "common cortical problem" that it solves. I suspect it would be useful to understand what this problem is.

(As an example of why isolating the algorithm and problem could be quite different, consider linear programming. To solve a linear programming problem, you can choose a simplex algorithm or an interior-point method, and these are fundamentally different approaches that are both viable. It's also quite a bit easier to state linear programming as a problem than it is to describe either solution approach.)
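To make the contrast vivid: the linear programming *problem* fits in one line (here in standard form), while either *solution approach* takes pages to describe:

```latex
\min_{x \in \mathbb{R}^n} \; c^\top x
\quad \text{subject to} \quad
Ax \le b, \; x \ge 0
```

The hope would be that a "common cortical problem" admits a similarly compact statement, even if the algorithm solving it does not.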

Do you have a view on the most plausible candidates for a "common cortical problem" (CCP)? The tricky aspects that come to mind: not being too narrow (i.e. the CCP should include (almost) everything the CCA can do), not being too broad (i.e. the CCA should be able to solve (almost) every instance of the CCP), and not being too vague (ideally precise enough that you could actually make a benchmark test suite to evaluate proposed solutions).