Thanks for the reply! I'm familiar with (and skeptical of) the basic information-theoretic argument for why genome size should constrain the complexity of whatever algorithm the brain is running, but my question here is more specific. What I'm not clear on is how those two numbers (20,000 genes and a few thousand neuron types) specifically relate to each other in your model of brain functioning. Is the idea that each neuron type roughly corresponds to the expression of one or two specific genes, and thus you'd expect <20,000 neuron types?
"They found low-thousands of neuron types in the mouse, which makes sense on priors given that there are only like 20,000 genes encoding the whole brain design and everything in it, along with the rest of the body."
I'm a bit puzzled by this statement; how would the fact that there are ~20,000 genes in the mouse/human genome constrain the number of neuron types to the low thousands? From a naive combinatorics standpoint, 20,000 genes seems more than large enough to place essentially zero meaningful constraint on the number of potential cell types. E.g....
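To make the naive-combinatorics point concrete, here is a back-of-envelope sketch (numbers are illustrative, not a claim about actual gene regulation): even under a very restrictive model where each cell type is defined by a unique pair of marker genes, the number of possible types vastly exceeds the low-thousands observed, so the raw gene count alone imposes no combinatorial ceiling anywhere near that range.

```python
from math import comb

N_GENES = 20_000  # approximate protein-coding gene count in mouse/human

# If a cell type could correspond to any on/off expression pattern
# across all genes, the state space is 2^20000 -- astronomically large.
# (We avoid materializing it; just note the exponent.)
binary_pattern_exponent = N_GENES

# Even a far more restrictive model -- one unique *pair* of marker
# genes per type -- allows ~2e8 distinct types:
pair_types = comb(N_GENES, 2)

# By contrast, a strict "one marker gene per type" model caps the
# count at N_GENES, which is the only reading that naturally yields
# a bound in the tens of thousands.
print(f"possible on/off expression patterns: 2^{binary_pattern_exponent}")
print(f"possible marker-gene pairs: {pair_types:,}")  # 199,990,000
```

So under any reading that allows combinatorial gene expression, the constraint has to come from somewhere other than the gene count itself.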
Interesting... I think I vaguely understand what you're talking about, but I'm doubtful that these concepts really apply to biology, especially since your example is about constraints on evolvability rather than functioning. In practice that is pretty much how everything tends to work, with absolutely wild amounts of pleiotropy and epistasis, but that's not a problem unless you want to evolve a new function. Which is probably why the strong evolutionary default is towards stasis, not change.
I guess my priors are pretty different because my background...