Neuroscience things that confuse me right now

by Steven Byrnes · 12 min read · 26th Jul 2021 · 6 comments


Neuroscience · World Modeling · Personal Blog

(Quick, poorly-written update, probably only of interest to neuroscientists.)

It’s not that I’m totally stumped on these; mostly these are things I haven’t looked into much yet. Still, I’d be very happy and grateful for any pointers and ideas.

Most of these are motivated by one or more of my longer-term neuroscience research interests: (1) “What is the “API” of the telencephalon learning algorithm?” (relevant to AGI safety because maybe we’ll build similar learning algorithms and we’ll want to understand all our options for “steering” them towards trying to do the things we want them to try to do), (2) “How do social instincts work, in sufficient detail that we could write the code to implement them ourselves?” (relevant to AGI safety for a couple reasons discussed here), (3) I’d also like to eventually have good answers to the “meta-problem of consciousness” and “meta-problem of suffering”, but maybe that’s getting ahead of myself.

1. Layout of autonomic reactions / “assessments” in amygdala, mPFC, ACC, etc.

In Big Picture of Phasic Dopamine, I talked about these areas under “Dopamine category #3: supervised learning”. In the shorter follow-up A Model of Decision-making in the Brain I talked about them as “Step 2”, the step where possible plans are “assessed” in dozens of genetically-hardcoded categories like “If I do this plan, would it be a good idea to raise my cortisol levels?”.

Anyway, at a zoomed-out level, I think I have a good story that explains a lot. At a zoomed-in level, I'm pretty unclear on exactly what's happening where.

What I have so far is:

  • I think the outputs in question are coming from (1) agranular prefrontal cortex, (2) agranular anterior cingulate cortex, (3) there’s a little piece of insular cortex which is also agranular; it’s right next to PFC (more specifically OFC) and for all intents and purposes we should lump it in with agranular PFC (see Wise 2017—"Although the traditional anatomical literature often treats the orbitofrontal and insular cortex as distinct entities, a detailed analysis of their architectonics, connections, and topology revealed that the agranular insular areas are integral parts of an “orbital prefrontal network”"), and (4) the amygdala, or at least part of it. My main interest / confusion is the division of labor among these things.
    • (I continue to believe that "agranular" = "output region", for reasons similar to this paper.)
  • There’s one very important special “assessment calculation”, namely the “Reward Prediction”. (Again see here.) The O’Reilly PVLV model says that this signal comes from vmPFC somewhere, but I’m not sure exactly where.
  • I just started reading Bud Craig’s book and he says: Y’know how motor cortex & somatosensory cortex are the output and feedback input (respectively) for the normal (musculoskeletal) motor control system? Well by the same token, we should think of cingulate cortex & insular cortex as the output and feedback input (respectively) of the autonomic control system. Or something like that. That’s an interesting idea for how to think about ACC specifically. But it still leaves the question of how the amygdala etc. fit in.
  • I think there's some indirect evidence that the amygdala outputs have 1:1 correspondence with the vmPFC assessments, but I'm quite unsure about that.
  • I sometimes think that I should think of neocortical assessment areas as being more “sophisticated” than amygdala assessments—more dependent on abstract context, less dependent on low-level sensory input. I’m not sure if that’s correct though. The original reason I was thinking this was (at least partly) because the neocortex has six layers and the amygdala doesn’t; it’s a lot simpler. But as I noted here, that’s not necessarily right: it may actually be correct to think of the amygdala as a “neocortex layer 6b” that happened to have physically separated from the other layers. (What are the other layers? Answer: The lateral nucleus of the amygdala is layer 6b of “ventral temporal cortex”, and the basomedial and posterior nuclei of the amygdala are layer 6b of “amygdalar olfactory cortex”. Or so says Swanson 1998.)
  • I sometimes think that neocortical assessments are projecting a bit farther into the future than amygdala assessments (e.g. maybe ACC says “it will be appropriate to raise cortisol levels one second from now”, while amygdala says “it will be appropriate to raise cortisol levels 0.2 seconds from now”), but I’m not sure if that’s right. Well, I'm pretty confident that it's right for the vmPFC “reward prediction” assessment I mentioned above, but I'm not sure it generalizes to other assessments.
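To make the “Reward Prediction” assessment above a bit more concrete, here is a toy temporal-difference (TD) learning sketch. This is just the generic textbook picture of a learned reward prediction, not a claim about what vmPFC specifically computes; the states, rewards, and numbers are all made up for illustration:

```python
# Toy TD-learning sketch of a "reward prediction" assessment.
# Purely illustrative; the state/reward structure is invented.

def td_learning(episodes, alpha=0.1, gamma=0.9):
    """Learn V(s), a prediction of discounted future reward, from experience."""
    V = {}  # state -> predicted future reward
    for episode in episodes:
        for (s, r, s_next) in episode:
            v_next = V.get(s_next, 0.0) if s_next is not None else 0.0
            # Prediction error: the "dopamine-like" teaching signal.
            delta = r + gamma * v_next - V.get(s, 0.0)
            V[s] = V.get(s, 0.0) + alpha * delta
    return V

# One repeated episode: cue -> wait -> reward arrives.
episodes = [[("cue", 0.0, "wait"), ("wait", 1.0, None)]] * 200
V = td_learning(episodes)
assert V["wait"] > 0.9  # state right before reward predicts it strongly
assert V["cue"] > 0.5   # the earlier cue comes to predict it too
```

The point of the toy: after training, the prediction fires earlier in time than the reward itself, which is the sense in which a “reward prediction” assessment can project into the future.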

1.1 What’s with S.M.?

As I mentioned here, S.M., a person supposedly missing her whole amygdala and nothing else, seems to have more-or-less lost the ability to have (and to understand in others) negative emotions, but not positive emotions. This seems to suggest that the amygdala triggers negatively-valenced autonomic outputs, and not positively-valenced ones. But my impression from other lines of evidence is that the amygdala can do both. So I'm confused by that.

1.2 The lesions that cause pain asymbolia are in the wrong place

I was reading a book about pain asymbolia (the ability to be intellectually aware of pain inputs, without caring about them or reacting to them). On a quick skim, I got the impression that this condition is caused by lesions of the insular cortex. Unless I’m confusing myself, that’s backwards from what I would have expected: I would have thought a lesion of the insular cortex should make a person intellectually unaware of the pain input (since the “primary interoceptive cortex” in the insula should presumably be what feeds that information into higher-level awareness / GNW), but still motivated by and reacting to the pain input (since that comes from these assessment areas, like maybe ACC, working in loops with the hypothalamus / brainstem that ultimately feeds into dopamine-based motivation signals).

I can kinda come up with a story that hangs together, but it has a lot of implausible-to-me elements.

…But then I saw a later paper arguing that the early studies didn’t replicate, and that maybe pain asymbolia is not caused by insula lesions after all. But their evidence isn’t that great either. Also, their proposed alternative lesion sites wouldn’t make it any easier for me to explain.

Well anyway, I guess I’m hoping that things will clear up when I read more about the insula, survey the literature better, etc. But for now I'm confused.

2. A few things about the superior colliculus

We have two sensory-processing systems, one in the cortex and one in the brainstem. I have a nice little story about how they relate:

I think the brainstem one needs to take incoming sensory data and use it to answer a finite list of genetically-hardcoded questions like “Is there something here that looks like a spider? Is there something here that sounds like a human voice? Am I at imminent risk of falling from a great height? Etc. etc.” And it needs to do that from the moment of birth, using I guess something like hardcoded image classifiers etc.

By contrast, the cortex one is a learning algorithm. It needs to take incoming sensory data and put it into an open-ended predictive model. Whatever patterns are in the data, it needs to memorize them, and then go look for patterns in the patterns in the patterns, etc. Like any freshly-initialized learning algorithm, this system is completely useless at birth, but gets more and more useful as it accumulates learned knowledge, and it’s critical for taking intelligent actions in novel environments.
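The contrast between those two systems can be caricatured in toy code. Everything here (the spider rule, the pattern predictor, the example data) is invented purely for illustration:

```python
# Caricature of the two sensory-processing systems: "hardcoded at
# birth" (brainstem) vs "learned from scratch" (cortex). Toy code only.

class BrainstemDetector:
    """Answers a fixed, genetically-hardcoded question from day one."""
    def looks_like_spider(self, features):
        # Hardcoded rule; works at birth, never changes.
        return features.get("legs", 0) >= 8 and features.get("small", False)

class CortexPredictor:
    """Open-ended predictive model: useless at birth, improves with data."""
    def __init__(self):
        self.counts = {}  # current pattern -> {next pattern: count}
    def observe(self, current, nxt):
        self.counts.setdefault(current, {})
        self.counts[current][nxt] = self.counts[current].get(nxt, 0) + 1
    def predict(self, current):
        options = self.counts.get(current)
        if not options:
            return None  # no experience yet: completely useless
        return max(options, key=options.get)

detector = BrainstemDetector()
assert detector.looks_like_spider({"legs": 8, "small": True})  # works at birth

cortex = CortexPredictor()
assert cortex.predict("lightning") is None  # useless at birth
for _ in range(10):
    cortex.observe("lightning", "thunder")
assert cortex.predict("lightning") == "thunder"  # useful after experience
```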

Well anyway, that’s a neat story, but there are other things going on with the superior colliculus too, and I’m hazy on the details of what they are and why.

2.1 Connections from neocortex sensory processing to superior colliculus

Let’s just talk about the case of vision, although I believe there are analogs for auditory cortex, somatosensory cortex, etc.

As far as I understand, there are connections from primary visual cortex (V1) to the superior colliculus (SC), arranged topographically—i.e. the parts that analyze the same part of the visual field are wired together.

One theory is that these connections are cortical motor control (superior colliculus is involved in moving the eyes / saccades, in addition to sensory processing). I heard the “motor control” theory from Jeff Hawkins (he didn’t really defend it in the thing I read; he just claimed it). I think Hawkins likes that theory because it fits in neatly with “cortical uniformity”—every part of the cortex is a sensorimotor processing system, he says. A new paper from S. Murray Sherman and W. Martin Usrey also says that these connections are motor commands. I don’t know who else thinks that; those are the only two places I’ve seen it.

I generally don’t like the “motor control” theory. For one thing, my understanding is that V1 is not set up with the cortico-basal ganglia-thalamo-cortical loops that the brain uses for RL, and I normally think you need RL to learn motor control. For another thing, aren’t the frontal eye fields in charge of saccades?? (At least, in charge at the cortical level.) For yet another thing, it seems to me that “V1 cortical column #832” is not in a good position to know whether saccading to the corresponding part of the visual field is a good or bad idea. The decision of where and when to saccade needs to incorporate things like “what am I trying to do”, “what’s going on in general”, “what has high value-of-information”, etc.—information that I don’t think a particular V1 column would have.

The closest thing to motor control theory that kinda makes sense to me is a “Confusing things are happening here” message. More specifically, each V1 column ought to “know” if it's the case that higher-level models keep issuing confident predictions about what’s gonna happen at that part of the visual field, and those predictions keep being falsified. So when that happens, it could send a "Confusing things are happening here" message to SC.

Those messages would not be exactly a motor command per se, but the SC could reasonably act on the information by saccading to the confusing area. So then the messages wind up being more-or-less a motor command in effect.
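Here is what that hypothetical message might look like as a toy sketch. All the names, thresholds, and structure are invented; this is just the prose above restated as code:

```python
# Toy sketch of a hypothetical "confusing things are happening here"
# signal from a V1 column to the superior colliculus. Invented for
# illustration; no claim about actual circuitry.

class V1Column:
    def __init__(self, field_location, window=20, threshold=0.5):
        self.field_location = field_location
        self.recent_errors = []  # 1 = a confident prediction was falsified
        self.window = window
        self.threshold = threshold

    def record_prediction(self, was_confident, was_falsified):
        if was_confident:
            self.recent_errors.append(1 if was_falsified else 0)
            self.recent_errors = self.recent_errors[-self.window:]

    def confusion_signal(self):
        """Message to SC: confident predictions keep failing at my location."""
        if not self.recent_errors:
            return False
        error_rate = sum(self.recent_errors) / len(self.recent_errors)
        return error_rate > self.threshold

col = V1Column(field_location=(3, 7))
for _ in range(10):
    col.record_prediction(was_confident=True, was_falsified=True)
assert col.confusion_signal()  # SC could act on this by saccading to (3, 7)
```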

That's not bad, but I'm still not entirely happy about this theory. For one thing, it seems not to match which neocortical layer these messages are coming out of. Also, I think that "the saccade target that best resolves a confusion" is not necessarily "the saccade target where incorrect predictions keep happening", and my introspection tentatively says that I would tend to saccade to the former, not the latter, when they disagree.

So here’s one more theory I was thinking about. There’s a thing where if there’s a sudden flashing light, we immediately saccade to it, and maybe do other orienting reactions like moving our head and body (and maybe also releasing cortisol etc.). My impression is that it’s the SC that decides this reaction is appropriate, and then orchestrates it.

But if we expect the flashing light, we’ll be less likely to orient to it.

So maybe the V1 → SC axons are saying: “Hey SC, there’s about to be motion in this particular part of the visual field. So if you see something there, it’s fine, chill out, we don’t have to orient to it.”
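As a toy sketch (entirely hypothetical, just restating the idea above as a gating rule):

```python
# Toy gate for the "expected motion -> suppress orienting" idea.
# Invented names and structure; illustration only.

def should_orient(motion_detected_at, predicted_motion_at):
    """SC orients to sudden motion, unless V1 predicted motion there."""
    return {loc for loc in motion_detected_at if loc not in predicted_motion_at}

# A flash at (1, 2) is unexpected, so orient there; the motion at (5, 5)
# was predicted by V1, so chill out.
targets = should_orient(motion_detected_at={(1, 2), (5, 5)},
                        predicted_motion_at={(5, 5)})
assert targets == {(1, 2)}
```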

I don’t know which of those ideas (or something else entirely) is the real explanation, and haven’t looked into it too much.

2.2 Connections from superior colliculus to neocortex sensory processing

I think these exist too. Why?

I guess I always have my go-to cop-out answer of “They provide ‘context’ that the neocortical learning algorithm can exploit to make better predictions”. But maybe there’s something else going on.

2.3 Learning in the superior colliculus

Contrary to my neat theorizing, there does seem to be some learning that happens in SC. I mean, I guess there kinda has to be, insofar as SC has some role in orchestrating motor commands, and the body keeps changing as it grows. I’m just generally hazy on what is being learned and where the ground truth comes from. I'll return to this in a later section below.

3. Why are there (a few) dopamine receptors in primary visual cortex?

Dopamine receptors are stereotypically used for RL, although I happen to think they’re used for supervised learning too. But (see here) V1 doesn’t seem to me to have a use for either of those things. Predictive learning (augmented by top-down attention and variable learning rates) seems like the right tool for the visual-processing job, and I don’t see what could be missing.

Yet there are in fact dopamine receptors in V1, apparently. Very few of them! But some! That makes it even weirder, right?

This paper found that mice with no D2 receptors (anywhere, not just V1) had close-to-normal vision. The differences were small, and I presume indirect; in fact the D2-knockout mice had slightly sharper vision!

…So anyway, I'm at a loss, this doesn’t make any sense to me. I’m tempted to just shrug and say “there’s some process tangentially related to vision processing, and the circuits doing that thing happen to be intermingled with the normal V1 visual-processing circuits, and that’s what the dopamine is there for.” I’m not happy about this. :-/

As with everything else here, I haven't looked into it much.

4. Learning-from-scratch-ism in motor cortex

(For definition of “learning-from-scratch-ism” see here.)

I’ll start by saying that I really like Michael Graziano’s grand unified theory of motor cortex. He argues (e.g. here and his book, and see here for someone arguing against) that the textbook division of motor-related cortex into “primary motor cortex”, “premotor cortex”, “supplementary motor area”, “frontal eye field”, “supplementary eye field”, etc. etc., is all kinda arbitrary and wrongheaded. Instead all those areas are basically doing the same kinds of thing in the same way, namely orchestrating different species-typical actions. If you think about mapping a discontinuous multi-dimensional space of species-typical actions onto a 2D piece of cortical sheet, you’re gonna get some sharp boundaries, and that’s where those textbook divisions come from.

Anyway, all that is kinda neat, but the part I’m confused about is how the motor cortex learns to do this. Like, what are the training signals, and how are those signals calculated?

One hint is that the midbrain can apparently also perform species-typical actions. I’m very unclear on what’s the difference between when the midbrain orchestrates a species-typical action versus the cortex orchestrating (nominally) the same action. I doubt they’re redundant; that would be a big waste of space, compared to having a much smaller area of cortex that merely “presses go” on the midbrain motor programs. Or does motor cortex do a better job somehow? How do these two regions talk to each other? Does the cortex teach the midbrain? Does the midbrain teach the cortex? Does the midbrain “initialize” the cortex and then the cortex improves itself by RL? Does the midbrain motor system learn, and if so, how does it get ground truth?

I don’t know, and I haven’t really looked into it, I’m just currently confused about what’s going on here.

And certainly I can’t feel good about advocating the truth of “learning-from-scratch-ism” if I’m not confident that the theory is compatible with everything we know about motor cortex.

5. Every brainstem-to-telencephalon neuromodulator signal besides dopamine and acetylcholine: what do they do?

I feel generally quite happy about my big-picture understanding of dopamine (see here) and acetylcholine (see here), even if I have a few confusions around the edges. But I haven’t gotten a chance to look at serotonin, norepinephrine, and so on, or at least not much. I’ve tried a little bit and nothing I read made any sense to me at all. So I remain confused.


Comments

Various thoughts:

  • It would make a lot of sense to me if norepinephrine acted as a Q-like signal for negative rewards. I don't have any neuroscience evidence for this, but it makes sense to me that negative rewards and positive rewards are very different for animals and would benefit from different approaches. I once ran some Q-learning experiments on the classic Taxi environment to see if I could make a satisficing agent (one that achieves a certain reward less than the maximum achievable and then rests). The agent responded by taking illegal actions that give highly negative rewards in the Taxi environment and hustling as hard as possible the rest of the time to achieve the reward specified. So I had to add a Q-function solely for negative rewards to get the desired behavior. Given that actual animals need to rest in a way that RL agents don't have to in most environments, it makes sense to me that Q-learning on its own is not a good brain architecture.
  • Dopamine receptors in V1 kind of makes sense if you want to visually predict reward-like properties of objects in the environment. Like something could look tasty or not tasty, maybe.
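The separate-Q-function fix described in the first bullet above might look schematically like this. It's a made-up one-step environment, not the commenter's actual Taxi experiment; all names and numbers are invented:

```python
# Toy sketch of Q-learning with two separate value tables: Q_pos for
# positive rewards and Q_neg for negative rewards, in the spirit of the
# satisficing anecdote above. Schematic reconstruction only.
import random

ACTIONS = ["legal_move", "illegal_move", "rest"]
REWARDS = {"legal_move": 1.0, "illegal_move": -10.0, "rest": 0.0}

def train(steps=500, alpha=0.2):
    """Learn both tables from uniformly-random exploration."""
    random.seed(0)
    Q_pos, Q_neg = {}, {}
    for _ in range(steps):
        a = random.choice(ACTIONS)
        r = REWARDS[a]
        # Split the reward: positive and negative parts learned separately.
        Q_pos[a] = Q_pos.get(a, 0.0) + alpha * (max(r, 0.0) - Q_pos.get(a, 0.0))
        Q_neg[a] = Q_neg.get(a, 0.0) + alpha * (min(r, 0.0) - Q_neg.get(a, 0.0))
    return Q_pos, Q_neg

def best_action(Q_pos, Q_neg, pain_threshold=-1.0):
    # First veto any action whose learned negative-reward estimate is too
    # bad, then maximize expected positive reward among the survivors.
    safe = [a for a in ACTIONS if Q_neg.get(a, 0.0) >= pain_threshold]
    return max(safe or ACTIONS, key=lambda a: Q_pos.get(a, 0.0))

Q_pos, Q_neg = train()
assert best_action(Q_pos, Q_neg) == "legal_move"
assert Q_neg["illegal_move"] < -1.0  # illegal moves get vetoed outright
```

The design point: a single summed Q-function lets a large positive stream mask a large negative one, whereas tracking the negative stream separately lets the agent veto "painful" actions outright, no matter how the positives add up.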

That’s an interesting anecdote about the satisficing thing! I don’t think it quite applies to animals, because I don’t think animals are maximizing the sum of future rewards (see here). Anyway, the system is already set up with separate channels throughout for good things happening vs. bad things happening. (There’s a thing I haven’t written about but believe, where the striatum sends out a cost estimate and a benefit estimate separately rather than just adding them up; and also in the “assessor” zone here there are different channels, because good vs. bad things have different autonomic consequences, e.g. sympathetic vs. parasympathetic.) Also, this says norepinephrine is slow-acting, which suggests that it doesn’t implement a learning rule tied to particular thoughts, actions, and events.

But the article says it does affect learning rate, arousal, and whatnot. So maybe something like: NE and acetylcholine both signal “important things are happening now, let’s increase the learning rate, tune the dial towards fast reactions at the expense of energy efficiency, etc. etc.”, but acetylcholine is fast and local (“important things are happening at this particular part of the visual field right now”) and NE is slow and global (“I am in a generally important situation”)? Dunno, just speculating based on one abstract.

Maybe the V1 dopamine receptors are simply useless evolutionary leftovers (perhaps it’s easier from a developmental perspective).

LOL! The ultimate cop-out answer!!

Not that it's necessarily wrong. But I would be very surprised if that were the correct answer.

My vague impression is that there's an awful lot of genetic micro-management of cell types and receptors and so on for different areas of cortex. So "not expressing a receptor in a cortical area where it's unused" is (I assume) very easy evolutionarily, and these dopamine receptors are in lots of mammal species I think.

Also, "I'm confused about this" has a pretty high prior for me. I don't feel obligated to go looking very hard for ways for that not to be true. :-P

But thanks for the comment :)

LW paradigm right here. Interesting too.