5.1 Post summary / Table of contents
Part of the “Intro to brain-like-AGI safety” post series.
In the previous post, I discussed the “short-term predictor”—a circuit which, thanks to a learning algorithm, emits an output that predicts a ground-truth supervisory signal arriving a short time (e.g. a fraction of a second) later.
In this post, I propose that we can take a short-term predictor, wrap it up into a closed loop involving a bit more circuitry, and we wind up with a new module that I call a “long-term predictor”. Just like it sounds, this circuit can make longer-term predictions, e.g. “I’m likely to eat in the next 10 minutes”. This circuit is closely related to Temporal Difference (TD) learning, as we’ll see.
I will argue that there are a large collection of side-by-side long-term predictors in the brain, each comprising a short-term predictor in the telencephalon (involving specific areas such as ventral striatum, medial prefrontal cortex, and amygdala) that loops down to the Steering Subsystem (hypothalamus and brainstem) and then back via a dopamine neuron. These long-term predictors make predictions about biologically-relevant inputs and outputs—for example, one long-term predictor might predict whether I’ll feel pain in my arm, another whether I’ll get goosebumps, another whether I’ll release cortisol, another whether I’ll eat, and so on. Moreover, one of these long-term predictors is essentially a value function for reinforcement learning.
All these predictors will play a major role in motivation—a story which I will finish in the next post.
Table of contents:
- Section 5.2 starts with a toy model of a “long-term predictor” circuit, consisting of the “short-term predictor” of the previous post, plus some extra components, wrapped into a closed loop. Getting a good intuitive understanding of this model will be important going forward, and I will walk through how that model would behave under different circumstances.
- Section 5.3 relates that model to Temporal Difference (TD) learning, which is closely related to a “long-term predictor”. I’ll show two variants of the long-term predictor circuit, a “summation” version (which leads to a value function that approximates the sum of future rewards), and a “switch” version (which leads to a value function that approximates the next reward, whenever it should arrive, which may not be for a long time). The “summation” version is universal in AI literature, but I’ll suggest that the “switch” version is probably closer to what happens in the brain. Incidentally, these two models are equivalent in cases like AlphaGo, wherein reward arrives in a lump sum right at the end of each episode (= game of Go).
- Section 5.4 will relate long-term predictors to the neuroanatomy of (part of) the telencephalon and brainstem.
- For the “vertical” neuroanatomy, I’ll describe how the brain houses a huge number of parallel “cortico-basal ganglia-thalamo-cortical loops”, and I’ll suggest that some of these loops function as short-term predictors, with a dopamine signal as supervisor.
- For the “horizontal” neuroanatomy, I’ll propose that the supervised learning I’m talking about involves (for example) the medial prefrontal cortex, ventral striatum, anterior insular cortex, and amygdala.
- Section 5.5 will offer six lines of evidence that lead me to believe this story: (1) It’s a sensible way to implement a biologically-useful capability; (2) It’s introspectively plausible; (3) It’s evolutionarily plausible; (4) It offers a reconciliation between the “visceromotor” and “motivational” ways to describe the medial prefrontal cortex; (5) It explains the Dead Sea Salt experiment; and (6) It offers a nice explanation of the diversity of dopamine neuron activity.
5.2 Toy model of a “long-term predictor” circuit
A “long-term predictor” is ultimately nothing more than a short-term predictor whose output signal helps determine its own supervisory signal. Here’s a toy model of what that can look like:
- The blue box is the short-term predictor of the previous post. It optimizes its output signal such that it approximates what the supervisor signal will be in 0.3 seconds (as an example).
- The purple box is a 2-way switch. The toggle on the switch is controlled by genetically-hardwired circuitry (gray oval), according to the following rules:
- By and large, the switch is in the bottom setting (“defer-to-predictor mode”). This setting is akin to the genetically-hardwired circuitry “trusting” that the short-term predictor’s output is sensible, and in particular producing digestive enzymes in the amount that the predictor suggests.
- If the genetically-hardwired circuitry gets a signal that I’m eating something right now, and that I don’t have adequate digestive enzymes, it flips the switch to “override mode”. Regardless of what the short-term predictor says, it sends the signal to manufacture digestive enzymes.
- If the genetically-hardwired circuitry has been asking for digestive enzyme production for an extended period, and there’s still no food being eaten, then it again flips the switch to “override mode”. Regardless of what the short-term predictor says, it sends the signal to stop manufacturing digestive enzymes.
Note: You can assume that all the signals in the diagram can vary continuously across a range of values (as opposed to being discrete on/off signals), with the exception of the signal that toggles the 2-way switch. In the brain, smoothly-adjustable signals might be created by, for example, rate-coding—i.e., encoding information as the frequency with which a neuron is firing.
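For readers who find code clearer than diagrams, here is a minimal sketch of the toy model (entirely my own toy construction, not anything from the brain or any standard library; the class and function names are made up, and I’m ignoring the 0.3-second delay, which doesn’t matter when the context is static):

```python
class ShortTermPredictor:
    """Linear short-term predictor (cf. the previous post): learns to make
    its output match the supervisory signal."""
    def __init__(self, n_context, lr=0.1):
        self.w = [0.0] * n_context   # adjustable parameters ("synapses")
        self.lr = lr

    def predict(self, context):
        return sum(wi * ci for wi, ci in zip(self.w, context))

    def update(self, context, supervisor):
        # error = ground-truth supervisor minus the earlier prediction
        error = supervisor - self.predict(context)
        for i, ci in enumerate(context):
            self.w[i] += self.lr * error * ci
        return error

def long_term_predictor_step(predictor, context, override=None):
    """One tick of the closed loop. With override=None, the switch is in
    'defer-to-predictor mode': the output loops straight back as its own
    supervisor, so the error is exactly zero and nothing is learned. An
    override value becomes the ground truth and drives a model update."""
    output = predictor.predict(context)
    supervisor = output if override is None else override
    predictor.update(context, supervisor)
    return output
```

The key property to notice: in defer-to-predictor mode the loop closes on itself, so only the sporadic overrides ever move the weights.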
5.2.1 Toy model walkthrough, assuming static context
Let’s walk through what would happen in this toy model. To start with, assume that the “context” is static for some extended period of time. For example, imagine a situation where some ancient worm-like creature is digging in the sandy ocean bed for many consecutive minutes. Plausibly, its sensory environment would stay pretty much constant as long as it keeps digging, as would its thoughts and plans (insofar as this ancient worm-like creature has “thoughts and plans” in the first place). Or if you want another example of (approximately) static context—this one involving a human rather than a worm—hang on until the next subsection.
In the static-context case, let’s first consider what happens when the switch is sitting in “defer-to-predictor mode”: Since the output is looping right back to the supervisor, there is no error in the supervised learning module. The predictions are correct. The synapses aren’t changing. Even if this situation is very common, it has no bearing on how the short-term predictor eventually winds up behaving.
The times that do matter for the eventual behavior of the short-term predictor are those rare times that we go into “override mode”. Think of the overrides as like a sporadic “injection of ground truth”. They produce an error signal in the short-term predictor’s learning algorithm, changing its adjustable parameters (e.g. synapse strengths).
After enough life experience (a.k.a. “training” in ML terminology), the short-term predictor should have the property that the overrides balance out. There may still be occasional overrides that increase digestive-enzyme production, and there may still be occasional overrides that decrease digestive-enzyme production, but those two types of overrides should happen with similar frequency. After all, if they didn’t balance out, the short-term predictor’s internal learning algorithm would gradually change its parameters so that they did balance out.
And that’s just what we want! We’ll wind up with appropriate digestive enzyme production at appropriate times, in a way that properly accounts for any information available in the context data—what the animal is doing right now, what it’s planning to do in the future, what its current sensory inputs are, etc.
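To see the “overrides balance out” claim concretely, here’s a tiny simulation (my own construction; the numbers are arbitrary):

```python
import random

def train_to_balance(targets, lr=0.05):
    """Train a single scalar prediction on a stream of 'override' ground
    truths. (Toy model: each target plays the role of a sporadic
    'injection of ground truth'.)"""
    v = 0.0
    for t in targets:
        v += lr * (t - v)   # standard error-correcting update
    return v

rng = random.Random(0)
# sporadic overrides: ground-truth enzyme demand jittering around 0.5
targets = [rng.uniform(0.4, 0.6) for _ in range(5000)]
v = train_to_balance(targets)

# After training, the prediction sits near the middle of the override
# distribution, so upward and downward overrides are about equally common:
ups = sum(1 for t in targets[-1000:] if t > v)
downs = sum(1 for t in targets[-1000:] if t < v)
```

If the prediction sat anywhere else, one type of override would dominate, and the learning algorithm would keep pushing until the two types balanced, which is the point of the paragraph above.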
5.2.1.1 David-Burns-style exposure therapy—a possible real-life example of the toy model with static context?
As it happens, I recently read David Burns’s book Feeling Great (my review). David Burns has a very interesting approach to exposure therapy—an approach that happens to serve as an excellent example of how my toy model works in the static-context situation!
Here’s the short version. (Warning: If you’re thinking of doing exposure therapy on yourself at home, at least read the whole book first!) Excerpt from the book:
For example, when I was in high school, I wanted to be on the stage crew of Brigadoon, a play my school was putting on, but it required overcoming my fear of heights since the stage crew had to climb ladders and work near the ceiling to adjust the lights and curtains. My drama teacher, Mr. Krishak, helped me overcome this fear with the very type of exposure techniques I’m talking about. He led me to the theater and put a tall ladder in the middle of the stage, where there was nothing nearby to grab or hold on to. He told me all I had to do was stand on the top of the ladder until my fear disappeared. He reassured me that he’d stand on the floor next to me and wait.
I began climbing the ladder, step by step, and became more and more frightened. When I got to the top, I was terrified. My eyes were almost 18 feet from the floor, since the ladder was 12 feet tall, and I was just over 6 feet tall. I told Mr. Krishak I was in a panic and asked what I should do. Was there something I should say, do, or think about to make my anxiety go away? He shook his head and told me to just stand there until I was cured.
I continued to stand there in terror for about ten more minutes. When I told Mr. Krishak I was still in a panic, he assured me that I was doing great and that I should just stand there a few more minutes until my anxiety went away. A few minutes later, my anxiety suddenly disappeared. I couldn’t believe it!
I told him, “Hey, Mr. Krishak, I’m cured now!”
He said, “Great, you can come on down from the ladder now, and you can be on the stage crew of Brigadoon!”
I had a blast working on the stage crew. I absolutely loved climbing ladders and adjusting the lights and curtains near the ceiling, and I couldn’t even remember why or how I’d been so afraid of heights.
This story seems to be beautifully consistent with my toy model here. David started the day in a state where his short-term-predictors output “extremely strong fear reactions” when he was up high. As long as David stayed up on the ladder, those fear-reaction short-term-predictors kept on getting the same context data, and therefore they kept on firing their outputs at full strength. And David just kept feeling terrified.
Then, after 15 boring-yet-terrifying minutes on the ladder, some innate circuit in David’s brainstem issued an override—as if to say, “C’mon, nothing is changing, nothing is happening, we can’t just keep burning all these calories all day. It’s time to calm down now.” The short-term-predictors continued sending the same outputs as before, but the brainstem exercised its veto power, and forcibly reset David’s cortisol, heart-rate, etc., back to baseline. This “override” state immediately created error signals in the relevant short-term-predictors in David’s amygdala! And the error signals, in turn, led to model updates! The short-term predictors were all edited, and from then on, David was no longer afraid of heights.
This story kinda feels like speculation piled on top of speculation, but whatever, I happen to think it’s right. If nothing else, it’s good pedagogy! Here’s the diagram for this situation; make sure you can follow all the steps.
5.2.2 Toy model walkthrough, assuming changing context
The previous subsections assumed static context lines (constant sensory environment, constant behaviors, constant thoughts and plans, etc.). What happens if the context is not static?
If the context lines are changing, then it’s no longer true that learning happens only at “overrides”. If context changes in the absence of “overrides”, it will result in changing of the output, and the new output will be treated as ground truth for what the old output should have been. Again, this seems to be just what we want: if we learned something new and relevant in the last second, then our current expectation should be more accurate than our previous expectation, and thus we have a sound basis for updating our models.
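In code, that changing-context update might look something like this tabular sketch (my own construction; `bootstrap_update` and the example contexts are made up):

```python
def bootstrap_update(values, old_ctx, new_ctx, lr=0.5):
    """Changing-context update: the current output is treated as ground
    truth for what the previous output should have been (a TD-style
    bootstrapped update). `values` maps context -> predicted signal."""
    target = values.get(new_ctx, 0.0)           # new output = "ground truth"...
    error = target - values.get(old_ctx, 0.0)   # ...for the old output
    values[old_ctx] = values.get(old_ctx, 0.0) + lr * error
    return error

# e.g. the animal reliably smells food a moment before eating:
values = {'eating': 1.0}   # the "I'm eating" context already predicts the signal
for _ in range(10):
    bootstrap_update(values, 'smell', 'eating')
# the 'smell' context gradually inherits most of that prediction
```

No override ever touched the ‘smell’ context; the prediction propagated backward through the loop, which is exactly the “our current expectation is a sound basis for updating our previous expectation” logic above.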
5.3 Value function calculation (TD learning) as a special case of long-term prediction
At this point, ML experts will recognize a resemblance to Temporal Difference (TD) learning. It’s not quite the same, though. The differences are:
First, TD learning is usually used in reinforcement learning (RL) as a method for going from a reward function to a value function. By contrast, I was talking about things like “digestive enzyme production”, which are neither rewards nor values.
In other words, there is a generally-useful motif that involves going from some immediate quantity X to “long term expectation of X”. The calculation of a value function from a reward function is an example of that motif, but it’s not the only useful example.
(As a matter of terminology, it seems to be generally accepted that the term “TD learning” can in fact apply to things that are not RL value functions. However, empirically in my own experience, as soon as I mention “TD learning”, the people I’m talking to immediately assume I must be talking about RL value functions. So I want to be clear here.)
Second, to get something closer to traditional TD learning, we’d need to replace the 2-way switch with a 2-way summation—and then the “overrides” would be analogous to rewards. Much more on “switch vs summation” in the next subsection.
Third, there are many additional ways to tweak the circuit which are frequently used in AI textbooks, and some of those may be involved in the brain circuits too. For example, we can put in time-discounting, or different emphases on false-positives vs false-negatives (see my discussion of distributional learning in Section 5.5.6.1 below), etc.
To keep things simple, I will be ignoring all these possibilities (including time-discounting) in the discussion below.
5.3.1 Switch (i.e., value = expected next reward) vs summation (i.e., value = expected sum of future rewards)?
The figures above show two variants of our toy model. In one, the purple box is a two-way switch between “defer to the short-term predictor” and some independent “ground truth”. In the other, the purple box is a two-way summation instead.
The switch version trains the short-term-predictor to predict the next ground truth, whenever it should arrive.
The summation version trains the short-term-predictor to predict the sum of future ground truth signals.
The correct answer could also be “something in between switch and summation”. Or it could even be “none of the above”.
RL papers universally use the summation version—i.e., “value is the expected sum of future rewards”. What about biology? And which is actually better?
It doesn’t always matter! Consider AlphaGo. Like every RL paper today, AlphaGo was originally formulated in the summation paradigm. But it happens to have one and only one nonzero reward signal per game, namely +1 at the end of the game if it wins, or -1 if it loses. In that case, switch vs summation makes no difference. The only difference is one of terminology:
- In the summation case, we would say “each non-terminal move in the Go game has reward=0”.
- In the switch case, we would say “each non-terminal move in the Go game has a reward of (null)”.
(Do you see why?)
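Here’s the equivalence spelled out in code (a sketch; the helper functions are my own):

```python
def value_targets_summation(rewards):
    """Summation convention: value = expected sum of future rewards
    (no discounting). Non-terminal moves carry reward 0."""
    return [sum(rewards[t:]) for t in range(len(rewards))]

def value_targets_switch(rewards):
    """Switch convention: value = the next ground-truth signal, whenever
    it arrives. Non-terminal moves carry a null (None) reward, which is
    skipped rather than added."""
    return [next((r for r in rewards[t:] if r is not None), 0.0)
            for t in range(len(rewards))]

# AlphaGo-like episode: the only nonzero / non-null reward is at the end
summation_episode = [0, 0, 0, +1]
switch_episode = [None, None, None, +1]
assert value_targets_summation(summation_episode) == value_targets_switch(switch_episode)
```

With a single lump-sum reward, summing a tail of zeros and skipping a tail of nulls give the same training targets; the conventions only diverge when nonzero rewards can arrive more than once.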
But in other cases, it does matter. So back to the question: should it be switch or summation?
Let’s step back. What are we trying to do here?
One thing that a brain needs to do is make decisions that weigh cross-domain tradeoffs. If you’re a human, you need to decide whether to watch TV or go to the gym. If you’re some ancient worm-like creature, you need to “decide” whether to dig or to swim. Either way, this “decision” impacts energy balance, salt balance, probability of injury, probability of mating—you name it. The design goal in the decision-making algorithm is that you make the decision that maximizes inclusive genetic fitness. How might that goal be best realized?
One method involves building a value function that estimates the organism’s inclusive genetic fitness (compared to some arbitrary—indeed, possibly time-varying—baseline), conditional on continuing to execute a given course of action. Of course it won’t be a perfect estimate—real inclusive genetic fitness can only be calculated in hindsight, many generations after the fact. But once we have such a value function, however imperfect, we can plug it into an algorithm that makes decisions to maximize value (more on this in the next post), and thus we get approximately-fitness-maximizing behavior.
So having a value function is key for making good decisions that weigh cross-domain tradeoffs. But nowhere in this story is the claim “value is the expectation of a sum of future rewards”! That’s a particular way of setting up the value-approximating algorithm, a method which might or might not be well suited to the situation at hand.
I happen to think that brains use something closer to the switch circuit than to the summation circuit, not only for homeostatic-type predictions (like the digestive enzymes example above), but also for value functions, contrary to mainstream RL papers. As I said above, I suspect the truth is really “none of the above” in all cases; it’s just that switch is the closer approximation.
Why do I favor “switch” over “summation”?
An example: sometimes I stub my toe and it hurts for 20 seconds; other times I stub my toe and it hurts for 40 seconds. But I don’t think of the latter as twice as bad as the former. In fact, even five minutes later, I wouldn’t remember which is which. (See the peak-end rule.) This is the kind of thing I would naturally expect from switch, but is an awkward fit for summation. It’s not strictly incompatible with summation; it just requires a more complicated, value-dependent reward function. As a matter of fact, if we allow the reward function to depend on value, then switch and summation can imitate each other.
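As a toy numeric version of that toe-stubbing example (with a made-up pain signal of 1 unit per second while the toe hurts):

```python
def switch_target(signals, t):
    """Switch-style training target: the next nonzero ground-truth signal
    at or after time t (0.0 if none ever arrives)."""
    return next((s for s in signals[t:] if s), 0.0)

# hypothetical pain signals for the two stubbed toes
pain_20s = [1.0] * 20
pain_40s = [1.0] * 40

# summation convention: total "badness" doubles with duration
twice_as_bad = (sum(pain_40s) == 2 * sum(pain_20s))

# switch convention: the target is just the next signal's magnitude,
# which is the same 1.0 either way, regardless of duration
same_badness = (switch_target(pain_20s, 0) == switch_target(pain_40s, 0))
```

Under summation, the 40-second stub trains a value twice as negative; under switch, the two episodes train the same value, matching the introspective report above.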
Anyway, in upcoming posts, I’ll be assuming switch, not summation. I don’t think it matters very much for the big picture. I definitely don’t think it’s part of the “secret sauce” of animal intelligence, or anything like that. But it does affect some of the detailed descriptions.
The next post will include more details of reinforcement learning in the brain, including how “reward prediction error” works and so on. I am bracing for lots of confused readers, who will be disoriented by the fact that I’m assuming a different relationship between value and reward than what everyone is used to. For example, in my picture, “reward” is a synonym for “ground truth for what the value function should be right now”—both should account for not only the organism’s current circumstances but also its future prospects. Sorry in advance for any confusion! I will do my best to be clear.
5.4 An array of long-term predictors involving the telencephalon & brainstem
Here’s the long-term-predictor circuit from above:
I can lump together the 2-way switch with the rest of the genetically-hardwired circuitry, and then rearrange the boxes a bit, and I get the following:
Now, obviously digestive enzymes are just one example. Let’s draw in some more examples, add some hypothesized neuroanatomy, and include other terminology. Here’s the result:
Excellent! We’re halfway to my big picture of decision-making and motivation. The rest of the picture—including the “actor” part of actor-critic reinforcement learning—will come in the next post, and will fill in the hole in the top-left side of that diagram. (The term “Steering Subsystem” comes from Post #3.)
Here’s one more diagram and caption for pedagogical purposes.
In the next two subsections, I will elaborate on the neuroanatomy which I’m hinting at in this diagram, and then I’ll talk about why you should believe me.
5.4.1 “Vertical” neuroanatomy: cortico-basal ganglia-thalamo-cortical loops
In my post Big Picture of Phasic Dopamine, I talked about the theory (due originally to Larry Swanson) that the whole telencephalon is nicely organized into three layers (cortex, striatum, pallidum):
|  |  |  |  |
| --- | --- | --- | --- |
| Cortex-like part of the loops | Amygdala [basolateral part] | Medial prefrontal cortex | Motor & “planning” cortex |
| Striatum-like part of the loops | Amygdala [central part] |  |  |
| Pallidum-like part of the loops |  |  |  |
The entire telencephalon—neocortex, hippocampus, amygdala, everything—can be divided into cortex-like structures, striatum-like structures, and pallidum-like structures. If two structures are in the same column in this table, that means they’re wired together into cortico-basal ganglia-thalamo-cortical loops (see next paragraph). This table is incomplete and oversimplified; for a better version see Fig. 4 here.
This idea then connects to the earlier (and now widely accepted) theory, dating to Alexander et al. 1986, that these three layers of the telencephalon are interconnected into a large number of parallel “cortico-basal ganglia-thalamo-cortical loops”, which can be found in almost every part of the telencephalon.
Here’s a little illustration:
Given all that, here is a possible rough model for how this loop architecture relates to the short-term predictor learning algorithm that I’ve been talking about:
5.4.2 “Horizontal” neuroanatomy—cortical specialization
The previous subsection was about the “vertical” three-layer structure of the telencephalon. Now let’s switch to the “horizontal” structure, i.e. the fact that different parts of the cortex do different things (in cooperation with the corresponding parts of the striatum and pallidum).
This is oversimplified, but here’s my latest attempt at (part of) the cortex in a nutshell:
- The extended motor cortex (and corresponding striatum) is the cortex’s main output region for behaviors involving skeletal muscles, like reaching and walking.
- The medial prefrontal cortex (mPFC, which here also includes the anterior cingulate cortex), along with the corresponding (ventral) striatum, is the cortex’s main output region for behaviors involving autonomic / visceromotor / hormonal actions, like releasing cortisol, vasoconstriction, goosebumps, and so on.
- The amygdala (which has both cortex-like and striatum-like parts) is the cortex’s main output region for certain behaviors that involve both skeletal muscle actions and autonomic actions, like flinching-reactions, or freezing-reactions (when frightened), and so on.
- The insular cortex (and corresponding (ventral) striatum) is the cortex’s main input region for autonomic / homeostatic / body status information, like blood sugar levels, pain, cold, taste, muscle strain, etc.
I won’t talk about the motor cortex in this series, but I think the other three are all involved in these long-term prediction circuits. For example:
- I claim that if you look at a little subregion in the medial prefrontal cortex, you might find that it’s being trained to fire in proportion to the probability of upcoming cortisol release;
- I claim that if you look at a little subregion in the amygdala, you might find that it’s being trained to fire in proportion to the probability of upcoming freezing-reactions;
- I claim that if you look at a little subregion of the (anterior) insular cortex, you might find that it’s being trained to fire in proportion to the probability of upcoming cold feelings in your left arm.
5.5 Six reasons I like this “array of long-term predictors” picture
5.5.1 It’s a sensible way to implement a biologically-useful capability
If you start producing digestive enzymes before eating, you’ll digest faster. If your heart starts racing before you see the lion, then your muscles will be primed and ready to go when you do see the lion. Etc.
So these kinds of predictors seem obviously useful.
Moreover, as discussed in the previous post (Section 4.5.2), the technique I’m proposing here (based on supervised learning) seems either superior to or complementary with other ways to meet these needs.
5.5.2 It’s introspectively plausible
For one thing, we do in fact start salivating before we eat the cracker, start feeling nervous before we see the lion, etc.
For another thing, consider the fact that all the actions I’m talking about in this post are involuntary: you cannot salivate on command, or dilate your pupils on command, etc., at least not in quite the same way that you can wiggle your thumb on command.
(More on voluntary actions in the next post—they’re in a whole different part of the telencephalon.)
I’m glossing over a bunch of complications here, but the involuntary nature of these things seems pleasingly consistent with the idea that they are being trained by their own dedicated supervisory signals, straight from the brainstem. They’re slaves to a different master, so to speak. We can kinda trick them into behaving in certain ways, but our control is limited and indirect.
5.5.3 It’s evolutionarily plausible
As discussed in Section 4.4 of the previous post, the simplest short-term predictor is extraordinarily simple, and the simplest long-term predictor is only a bit more complicated than that. And these very simple versions are already plausibly fitness-enhancing, even in very simple animals.
Moreover, as I discussed a while back (Dopamine-supervised learning in mammals & fruit flies), there is an array of little learning modules in the fruit fly, playing a seemingly-similar role to what I’m talking about here. Those modules also use dopamine as a supervisory signal, and there is some genomic evidence of a homology between those circuits and the mammalian telencephalon.
5.5.4 It offers a reconciliation between “visceromotor” and “motivational” pictures of the medial prefrontal cortex (mPFC)
Take the mPFC (which also includes the anterior cingulate cortex—ACC), as an example. People talk about this region in two quite different ways:
- On the one hand, as mentioned above (Section 5.4.2), mPFC is described as a visceromotor / homeostatic / autonomic motor output region—it issues commands to control hormones, to execute sympathetic and parasympathetic nervous system reactions, and so on. For example, “electrical stimulation of the infralimbic cortex has been shown to affect gastric motility and to cause hypotension”, or this paper says stimulating mPFC caused “[bristling]; pupillary dilation; and changes in blood pressure, respiratory rate, and heart rate”, or see Bud Craig’s book which characterizes ACC as a homeostatic motor output center. This way of thinking also elegantly explains the fact that the region is agranular (missing layer #4 out of the 6 neocortex layers), which implies “output region” both for theoretical reasons and by analogy with the (agranular) motor cortex.
- On the other hand, mPFC is frequently described as being related to a host of vaguely-motivation-related activities. For example, Wikipedia mentions “attention allocation, reward anticipation, decision-making, ethics and morality, impulse control … and emotion” in regards to ACC.
I think my picture works for both:
For the first (visceromotor) perspective, if you look at Section 5.2 above, you’ll see that the predictors’ outputs do in fact cause homeostatic changes—at least, they do when the genetically-hardwired circuitry of the Steering Subsystem has set that signal in “defer-to-predictor mode” (as opposed to “override mode”).
For the second (motivation) perspective, this will make a bit more sense after the next post, but note my suggestive description of a “scorecard” in the diagram of Section 5.4. The idea is: The “context” lines going into the “Thought Assessors” contain the horrific complexity of everything in your conscious mind and more—where you are, what you’re seeing and doing, what you’re thinking about, what you’re planning to do in the future and why, etc. The relatively simple, genetically-hardcoded Steering Subsystem can’t make heads or tails of any of that!
But that’s a dilemma, because the Steering Subsystem is the source of rewards / drives / motivations! How can the Steering Subsystem issue rewards for a good plan, if it can’t make heads or tails of what you’re planning??
The “scorecard” is the answer. It takes all that horrific complexity and distills it into a nice standardized scorecard—exactly the kind of thing that genetically-hardcoded circuits in the Steering Subsystem can easily process.
Thus, whenever there’s an interaction between thoughts and drives—emotions, decision-making, ethics, aversions, etc.—the “Thought Assessors” need to be involved as an intermediary.
5.5.5 It explains the Dead Sea Salt Experiment
See my discussion in my old post Inner alignment in salt-starved rats. In brief, experimenters sporadically played a sound and popped an object into a rat’s cage, and immediately thereafter sprayed super-salty water directly into the rat’s mouth. The rat found the saltwater disgusting, and started reacting with horror to the sound and object. Then later, the experimenters made the rat feel salt-deprived. When they played the sound and popped the object this time, the rat got very excited—even though the rat had never been salt-deprived before in its life.
In our setup, this is exactly what we expect: when the sound and object appear, the “I anticipate tasting salt” predictor starts firing like crazy. Meanwhile the Steering Subsystem (hypothalamus & brainstem) has hardwired circuitry that says “If I’m salt-deprived, and if the ‘scorecard’ from the Learning Subsystem suggests that I will soon taste salt, then that’s awesome, and whatever thought the Learning Subsystem is thinking, it should pursue that idea with gusto!”
5.5.6 It offers a nice explanation for (some of) the diversity of dopamine neuron activity
Recall from Section 5.4.1 above that I’m claiming that dopamine neurons carry the supervisory signals of all these supervised-learning modules.
There’s a pop-science misconception that there is a (singular) dopamine signal in the brain, and it bursts when good things are happening. In reality, there are many different dopamine neurons doing many different things.
Thus we get the question: what are all these diverse dopamine signals doing? There’s no consensus; claims in the literature are all over the place. But I can throw my hat into the ring: in my picture described above, there are probably hundreds or thousands of short-term predictors in the telencephalon, predicting hundreds or thousands of different things, and they each need a different dopamine supervisory signal!
(And there are even more dopamine signals besides those! One such signal, associated with the brain’s “main” reward prediction error signal, will show up in the next post. Still others are off-topic for this series but discussed here.)
If my story is right, what would we expect to see in dopamine-measuring experiments?
Imagine a rat running through a maze. Moment by moment, its array of predictors are getting dopamine supervisory signals about its various hormone levels, its heart rate, its expectation of drinking and eating and having a sore leg and freezing and tasting salt, and on and on. In short, we expect dopamine neurons to be bouncing up and down in all kinds of different ways.
Thus, pretty much any instance where an experimenter has measured that a dopamine neuron is correlated with some behavioral variable, it’s probably consistent with my picture too.
Here are a couple examples:
- There are dopamine neurons that burst for salient stimuli like unexpected flashes of light (ref). Can I explain that? Sure, no problem! I say: they could be supervisory signals saying “this would have been a good time to orient”, or “to flinch”, or “to raise your heart rate”, etc.
- There are dopamine neurons that correlate with the velocity with which a mouse is running on a treadmill-ball (ref). Can I explain that? Sure, no problem! I say: they could be supervisory signals saying “expect sore muscles”, or “expect cortisol”, or “expect high heart rate”, etc.
Here’s another data point which seems reassuringly consistent with my picture. A few dopamine neurons burst when aversive things happen (ref). Four of the five regions in which such neurons can be found (according to the linked paper) are right where I expect that array of short-term predictors to be—namely, the cortex-like and striatum-like layers of amygdala, and medial prefrontal cortex (mPFC), and the ventromedial shell of the nucleus accumbens, which is (at least roughly?) the striatum stop of the mPFC cortico-basal ganglia-thalamo-cortical loops. This is exactly what I expect in my picture. For example, if a mouse gets shocked, then a “should-I-freeze-now” predictor would get a supervisory signal saying “Yes, you should have been freezing”.
Side note: Lammel et al. 2014 mentions so-called “‘non-conventional’ VTA [dopamine] neurons” in “medial posterior VTA (PN and medial PBP)”. These seem to project to exactly the non-value-function Thought Assessor areas, and it’s claimed that they have different firing patterns from other dopamine neurons. Maybe the firing pattern difference is reflective of the different requirements of supervised learning versus reinforcement learning? (I’m not an expert; I’m just flagging that it sounds intriguing and would be worth looking into more.)
UPDATE JAN 2023: Upon further investigation (thanks Nathaniel Daw), I think what I’m talking about here is basically the right explanation for the diverse dopamine signals on the fringes of VTA / SNc, or something like that, but the fine-grained dopamine diversity more typically measured has a different explanation which is at least spiritually closer to the “distributional” story next.
Aside: Distributional predictor outputs
I didn’t talk about it in the last post, but short-term predictors have hyperparameters in their learning algorithms, two of which are “how strongly to update upon a false-positive (overshoot) error”, and “how strongly to update upon a false-negative (undershoot) error”. As the ratio of these two hyperparameters varies from 0 to ∞, the resulting predictor behavior varies from “fire the output if there’s even the faintest chance that the supervisor will fire” to “never fire the output unless it’s all but certain that the supervisor will fire”.
Therefore, if we have many predictors, each with a different ratio of those hyperparameters, then we can (at least approximately) output a probability distribution for the prediction, rather than a point estimate.
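As a sanity check of this mechanism, here is a minimal sketch (my own illustration, not from the post; all numbers made up) of constant-output predictors trained with asymmetric update strengths. Each one converges to a different expectile of the supervisor's distribution, so an ensemble of them approximately encodes the distribution rather than a point estimate:

```python
import random

def train_expectile_predictor(samples, lr_pos, lr_neg, steps=20000, seed=0):
    """Train a constant-output predictor with asymmetric learning rates.

    lr_pos: update strength on undershoot errors (sample > prediction)
    lr_neg: update strength on overshoot errors (sample < prediction)
    """
    rng = random.Random(seed)
    pred = 0.0
    for _ in range(steps):
        sample = rng.choice(samples)
        err = sample - pred
        pred += (lr_pos if err > 0 else lr_neg) * err
    return pred

# A bimodal "reward" distribution: mostly 0, occasionally 10.
samples = [0.0] * 9 + [10.0]

# Three predictors with different undershoot/overshoot ratios.
pessimist = train_expectile_predictor(samples, lr_pos=0.001, lr_neg=0.009)
balanced  = train_expectile_predictor(samples, lr_pos=0.005, lr_neg=0.005)
optimist  = train_expectile_predictor(samples, lr_pos=0.009, lr_neg=0.001)

print(pessimist, balanced, optimist)
# pessimist < balanced (near the mean, 1.0) < optimist
```

The balanced predictor settles near the mean, while the skewed ones settle near low and high expectiles respectively, which is the same trick used in the distributional-RL interpretation of dopamine diversity.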
A recent set of experiments from DeepMind and collaborators found evidence (based on measurements of dopamine neurons) that the brain does in fact use this trick, at least for reward prediction.
I speculate that it may use the same trick for the other long-term predictors too—e.g. maybe the predictions of arm pain and cortisol and goosebumps etc. are all in the form of ensembles of long-term predictors that each sample a probability distribution.
I bring this up, first, because it’s another example where dopamine neurons are behaving in a way that seems pleasingly consistent with my worldview, and second, because it’s plausibly useful for AGI safety—and thus I was looking for an excuse to bring it up anyway!
Anyway, as usual I don’t pretend to have smoking-gun proof of my hypothesis (i.e. that the brain has an array of long-term predictors involving telencephalon-brainstem loops), and there are some bits that I know I’m still confused about. But considering the evidence in the previous subsection (and rest of the post), I wind up feeling strongly that I’m broadly on the right track. I’m happy to discuss more in the comments. Otherwise, onward to the next post, where we will finally put everything together into a big picture of how I think motivation and decision-making work in the brain!
‘Horizontal’ neuroanatomy versus ‘vertical’ neuroanatomy is my idiosyncratic terminology, but I’m hoping it’s intuitive. If you imagine stretching out the cortex into a sheet, oriented horizontally, then the ‘vertical’ neuroanatomy would include e.g. the interconnections between cortical and subcortical structures, and the ‘horizontal’ neuroanatomy would include e.g. the different roles played by different parts of the cortex. See also the table in Section 5.4.1.
To be clear, in reality, there probably isn’t a discrete all-or-nothing 2-way switch here. There could be a “weighted average” setting, for example. Remember, this whole discussion is just a pedagogical “toy model”; I expect that reality is more complicated in various respects.
I note that I’m just running through this algorithm in my head; I haven’t simulated it. I’m optimistic that I didn’t majorly screw up, i.e. that everything I’m saying about the algorithm is qualitatively true, or at least can be qualitatively true with appropriate parameter settings and perhaps other minor tweaks.
Examples of using the terminology “TD learning” for something which is not related to RL reward functions include “TD networks”, and the Successor Representations literature (example), or this paper, etc.
The classic attempt to reconcile “visceromotor” and “motivation” pictures of mPFC is Antonio Damasio’s “somatic marker hypothesis”. My discussion here has some similarities and some differences from the somatic marker hypothesis. I won’t get into that; it’s off-topic.
As in the previous post, when I say that “dopamine carries the supervisory signal”, I’m open to the possibility that dopamine is actually a closely-related signal like the error signal, or the negative error signal, or the negative supervisory signal. It really doesn’t matter for present purposes.
The fifth area where that paper found dopamine neurons bursting under aversive circumstances, namely the tail of the striatum, has a different explanation I think—see here.
I don't think I understand how the scorecard works.
The description in the post makes sense as far as it goes. But when I picture how it could actually work, I bump into an issue: is the scorecard learned, or hard-coded?
If the scorecard is learned, then it needs a training signal from Steering. But if it's useless at the start, it can't provide a training signal. On the other hand, since the "ontology" of the Learning subsystem is learned-from-scratch, then it seems difficult for a hard-coded scorecard to do this translation task.
The categories are hardcoded, the function-that-assigns-a-score-to-a-category is learned. Everybody has a goosebumps predictor, everyone has a grimacing predictor, nobody has a debt predictor, etc. Think of a school report card: everyone gets a grade for math, everyone gets a grade for English, etc. But the score-assigning algorithm is learned.
So in the report card analogy, think of a math TA ( = Teaching Assistant = Thought Assessor) who starts out assigning math grades to students randomly, but the math professor (=Steering Subsystem) corrects the TA when its assigned score is really off-base. Gradually, the math TA learns to assign appropriate grades by looking at student tests. In parallel, there’s an English class TA (=Thought Assessor), learning to assign appropriate grades to student essays based on feedback from the English professor (=Steering Subsystem).
The TAs (Thought Assessors) are useless at the start, but the professors aren't. Back to biology: If you get shocked, then the Steering Subsystem says to the “freezing in fear” Thought Assessor: “Hey, you screwed up, you should have been sending a signal just now.” The professors are easy to hardwire because they only need to figure out the right answer in hindsight. You don't need a learning algorithm for that.
I'm liking this theory more and more.
One solution for a ~300ms delay connected to its own input is a constant output. However, this is just one member of an infinite class of solutions: any function f(t mod 300ms) is a solution.
(Admittedly, under any error metric L_p with p > 1, the optimum solution is a constant output.)
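To illustrate the claim numerically, here is a quick sketch (my own, with hypothetical function names) that checks whether a candidate output function is a fixed point of the loop, i.e. equals its own 300ms-delayed copy:

```python
import math

def is_loop_solution(f, period=300, tol=1e-9):
    """Check that f(t) equals its own delayed copy f(t - period) at many
    sample times, i.e. f is a fixed point of the predictor-feeding-its-own-
    supervisor loop."""
    return all(abs(f(t) - f(t - period)) < tol for t in range(0, 3000, 10))

const = lambda t: 0.7
sine  = lambda t: math.sin(2 * math.pi * t / 300)  # period exactly 300ms
drift = lambda t: t / 1000.0                       # not periodic

print(is_loop_solution(const))  # True
print(is_loop_solution(sine))   # True
print(is_loop_solution(drift))  # False
```

Any function with period 300ms passes; anything else accumulates error each trip around the loop.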
Output stability here depends on the error gain through the loop. (Control theory is not my forte, but I believe to analyze this rigorously control theory is what you'd want to look into.)
If the error gain is sub-unity, the system is stable and will converge to a constant output.
The error gain being unity is the critical value where the system is on the edge of stability.
If the error gain is super-unity, the system is unstable and will go into oscillations.
Or, to bring this back to what this means for a predictor: if the current input is X and the predictor predicts the input will be Y in 300ms, then the predictor outputs X + (Y − X)·C, where:
- Sub-unity error gain means C < 1.
- Unity error gain means C = 1.
- Super-unity error gain means C > 1.
Super-unity error gain is 'obviously' suboptimal behavior for a human brain, so we'd probably end up with the error amplification tuned to under the critical value. Ditto, a predictor that systematically underestimated system changes is also "obviously" suboptimal. A 'perfect' predictor corresponds to unity error gain.
So all told you'd expect the predictors to be tuned to a gain that's as close to possible to unity without going over.
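A minimal simulation of the three regimes (my own sketch; the 3-step delay stands in for the ~300ms loop latency, and the seed values are arbitrary):

```python
def simulate_delay_loop(gain, delay=3, steps=30, seed=(0.0, 0.5, -0.5)):
    """x[n] = gain * x[n - delay]: a pure delay line with loop gain.

    Deviations from equilibrium recirculate every `delay` steps, scaled by
    `gain` per trip around the loop.
    """
    x = list(seed)
    for n in range(delay, steps):
        x.append(gain * x[n - delay])
    return x

decaying  = simulate_delay_loop(0.8)  # sub-unity: ripple dies out
sustained = simulate_delay_loop(1.0)  # unity: ripple persists forever
growing   = simulate_delay_loop(1.2)  # super-unity: ripple blows up
```

Sub-unity gain converges to a constant, unity gain sustains whatever pattern is already circulating, and super-unity gain produces growing oscillations at the loop period.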
...hm. Actually, predictors going haywire with a ~300ms (~3Hz) period sounds a lot like a seizure. Which would nicely explain why humans do occasionally get seizures. (Or rather, why they aren't evolved out.) For ideal prediction you want an error gain as close as possible to unity... but too close to unity and variations in effective error gain mean that you're suddenly overunity and get rampant 300ms oscillations.
In the sense of "it seems plausible and explains things that I haven't heard other good explanations for"
The predictor is a parametrized function output = f(context, parameters) (where "parameters" are also called "weights"). If (by assumption) context is static, then you're running the function on the same inputs over and over, so you have to keep getting the same answer. Unless there's an error changing the parameters / weights. But the learning rate on those parameters can be (and presumably would be) relatively low. For example, the time constant (for the exponential decay of a discrepancy between output and supervisor when in "override mode") could be many seconds. In that case I don't think you can get self-sustaining oscillations in "defer to predictor" mode.
Then maybe you'll say "What if it's static context except that there's a time input to the context as well? But I still don't see how you would learn oscillations that aren't in the exogenous data.
There could also be a low-pass filter on the supervisor side. Hmm, actually, maybe that amounts to the same thing as the slow parameter updates I mentioned above.
I think I disagree that "perfect predictors" are what's wanted here. The input data is a mix of regular patterns and noise / one-off idiosyncratic things. You want to learn the patterns but not learn the noise. So it's good to not immediately and completely adapt to errors in the model. (Also, there's always learning-during-memory-replay for genuinely important things that only happen only once and quickly.)
I disagree; let me try to work through where we diverge.
A 300ms predictor outputting a sine wave with period 300ms into its own supervisor input has zero error, and hence will continue to do so regardless of the learning rate.
Do you at least agree that in this scheme a predictor outputting a sine wave with period 300ms has zero error while in defer-to-predictor mode?
This is true for a standard function; this is not true once you include time. A neuron absolutely can spike every X milliseconds with a static input. And it is absolutely possible to construct a sine-wave oscillator via a function with a nonzero time delay connected to its own input.
Unfortunately, as long as the % of time spent in override mode is low you need a high learning rate or else the predictor will learn incredibly slowly.
If the supervisor spends a second a week in override mode, then the predictor is actively learning only ~0.0002% of the time (1 second out of ~604,800).
Unfortunately, as long as each override event is relatively short a low-pass filter selectively removes all of your learning signal!
You keep bouncing between a sufficiently-powerful-predictor and a simple exponential-weighted-average. Please pick one, to keep your arguments coherent. This statement of yours is only true for the latter, not the former. A powerful predictor can suddenly modeswitch in the presence of an error signal.
(For a simple example, consider a 300ms predictor trying to predict a system where the signal normally stays at 0, but if it ever goes non-zero, even by a very small amount, it will go to 1 100ms later and stay at 1 for 10s before returning to 0. As long as the signal stays at 0, the predictor will predict it stays a zero. The moment the error is nonzero, the predictor will immediately switch to predicting 1.)
And for something like a 'freeze in terror' predictor I absolutely could see a rate that low.
Or maybe predict spikes. Meh.
Hmm, I think we're mixing up two levels of abstraction here. At the implementation level, there are no real-valued signals, just spikes. But at the algorithm level, it's possible that the neuron operations are equivalent to some algorithm that is most simply described in a way that does not involve any spikes, and does involve lots of real-valued signals. For example, one can vaguely imagine setups where a single spike of an upstream neuron isn't sufficient to generate a spike on the downstream neuron, and you only get effects from a neuron sending a train of spikes whose effects are cumulative. In that case, the circuit would be basically incapable of "fast" dynamics (i.e. it would have implicit low-pass filters everywhere), and the algorithm is really best thought of as "doing operations" on average spike frequencies rather than on individual spikes.
Oh, sorry if I was unclear. I was never talking about an exponential weighted average. Let's say our trained model is f(context, θ) (where θ is the parameters a.k.a. weights). Then with static context, I was figuring we'd have a differential equation vaguely like:

∂θ/∂t ∝ −∇_θ (f(context, θ) − supervisor)²
I was figuring that (in the absence of oscillations) the solution to this differential equation might look like θ(t) asymptotically approaching a limit wherein the error is zero, and I was figuring that this asymptotic approach might look like an exponential with a timescale of a few seconds.
I'm not sure if it would be literally an exponential. But probably some kind of asymptotic approach to a steady-state. And I was saying (in a confusing way) that I was imagining that this asymptotic approach would take a few seconds to get most of the way to its limit.
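For concreteness, here is a sketch (my illustration, using the trivial model f(context, θ) = θ) of that differential equation under Euler integration, showing the asymptotic approach to zero error:

```python
def gradient_flow(theta0, supervisor, k=0.5, dt=0.01, T=10.0):
    """Euler-integrate d(theta)/dt = -k * d/dtheta (theta - supervisor)^2.

    With f(context, theta) = theta, the error decays as exp(-2*k*t): an
    asymptotic approach with time constant 1/(2k) seconds.
    """
    theta = theta0
    trace = []
    t = 0.0
    while t < T:
        grad = 2.0 * (theta - supervisor)
        theta -= k * grad * dt
        trace.append((t, theta))
        t += dt
    return trace

trace = gradient_flow(theta0=1.0, supervisor=0.0, k=0.5)
# After one time constant (t = 1/(2k) = 1s), the error is down to ~1/e of
# its initial value; after a few seconds it has mostly converged.
```

With k chosen so the time constant is a few seconds, the parameters drift smoothly toward agreement with the supervisor rather than oscillating.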
If we go back to the example of David on the ladder from earlier in the post, the learning is happening while he has calmed down, but is still standing at the top of the ladder. I figure he probably stayed up for at least 5 or 10 seconds after calming down but before climbing down.
For example, we can imagine an alternate scenario where David was magically teleported off the ladder within a fraction of a second after the moment that he finally started feeling calm. In that scenario, I would be a lot less confident that the exposure therapy would actually stick.
By the same token, when you're feeling scared in some situation, you're probably going to continue feeling scared in that same situation for at least 5 or 10 seconds.
(And if not, there's always memory replay! The hippocampus can recall both the scared feeling and the associated context 10 more times over the next day and/or while you sleep. And that amounts to the same thing, I think.)
Sorry in advance if I'm misunderstanding your comment. I really appreciate you taking the time to think it through for yourself :)
Alright, so we at least agree with each other on this. Let me try to dig into this a little further...
Consider the following (very contrived) example, for a 300ms predictor trying to minimize L2 norm:
Context is static throughout the below.
t=0, overrider circuit forces output=1.
t=150ms, overrider circuit switches back to loopback mode.
t=450ms, overrider circuit forces output=0.
t=600ms, overrider circuit switches back to loopback mode.
t=900ms, overrider circuit forces output=1.
Do you agree that the best a slow-learning predictor that's a pure function f(context, θ) can do is to output a static value 0.5, for an overall mean squared error of, what, 1/12 ≈ 0.083? (The exact value doesn't matter.)
Do you agree that a "temporal-aware" predictor that outputted a 300ms square wave as follows:
t=0, predictor switches output=1.
t=150ms, predictor switches output=0.
t=300ms, predictor switches output=1.
t=450ms, predictor switches output=0.
t=600ms, predictor switches output=1.
...would have zero error rate?
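For what it's worth, a quick simulation of this schedule (my own sketch, discretizing at 50ms and treating the loopback-mode error as prediction-now minus prediction-300ms-ago) reproduces the arithmetic:

```python
def override_target(t):
    """The overrider's forced output at time t (ms), or None in loopback."""
    t = t % 900
    if t < 150:  return 1.0   # t = 0..150ms: forced to 1
    if t < 450:  return None  # loopback
    if t < 600:  return 0.0   # t = 450..600ms: forced to 0
    return None               # loopback

def mean_sq_error(predict, total_ms=9000, dt=50):
    errs = []
    for t in range(0, total_ms, dt):
        target = override_target(t)
        if target is None:
            # loopback mode: the supervisor is the predictor's own output
            # from 300ms ago, so error = predict(t) - predict(t - 300)
            target = predict(t - 300)
        errs.append((predict(t) - target) ** 2)
    return sum(errs) / len(errs)

constant = lambda t: 0.5
square   = lambda t: 1.0 if (t % 300) < 150 else 0.0  # 300ms square wave

print(mean_sq_error(constant))  # 1/12 ≈ 0.083
print(mean_sq_error(square))    # 0.0
```

The constant-0.5 predictor eats a squared error of 0.25 during the third of each cycle spent in override, giving 1/12 on average, while the period-300ms square wave matches both the forced values and its own delayed copy.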
I can see why you'd say this. It's even true if you're just looking at e.g. a well-tuned PID controller. But even for a PID controller there are regimes where this behavior breaks down and you get oscillation... and worse, the regimes where this breaks down are regimes that you're otherwise actively tuning said controller for!
I think here is the major place we disagree. As you say, this model of these circuits is basically incapable of fast dynamics, and you keep leaning towards setups that forbid fast dynamics in general. But for something like a startle signal, you absolutely want it to be able to handle a step change in the context as a step change in the output.
I don't know of a general-purpose method of predicting fast dynamics that doesn't have mode-switching regions where seemingly-small learning rates can suddenly change the output.
Almost anything would work here, really. L1 is just annoying due to the lack of a unique solution.
I am making up this term on the spot. I haven't formalized it; I suspect one way to formalize it would be to include time % 300ms as an input like the rest of the context.
Please ignore clock skew for now.
"Normally" the feedback path is through the input->output path, not the PID parameters... but you can get oscillations in the PID-parameter path too
...and a 'step change' inherently has high frequency components.
Perhaps a better term might be 'high-bandwidth' dynamics. Predicting a 10MHz sine wave is easy. Predicting <=10kHz noise, less so.
Just to make sure we're on the same page, I made up the “300ms” number, it could be something else.
Also to make sure we're on the same page, I claim that from a design perspective, fast oscillation instabilities are bad, and from an introspective perspective, fast oscillation instabilities don't happen. (I don't have goosebumps, then 150ms later I don't have goosebumps, then 150ms later I do have goosebumps, etc.)
Sure. But to make sure we're on the same page, the predictor is trying to minimize L2 norm (or whatever), but that's just one component of a system, and successfully minimizing the L2 norm might or might not correspond to the larger system performing well at its task. So “zero error rate” doesn't necessarily mean “good design”.
Sorry, I'm confused. There's an I and a D? I only see a P.
It seems to me that you can start a startle reaction quickly (small fraction of a second), but you can't stop a startle quickly. Hmm, maybe the fastest thing the amygdala does is to blink (mostly <300ms), but if you're getting 3 blink-inducing stimuli a second, your brainstem is not going to keep blinking 3 times a second; instead it will just pinch the eyes shut and turn away, or something. (Source: life experience.) (Also, I can always pull out the “Did I say 300ms prediction? I meant 100ms” card…)
If the supervisor is really tracking the physiological response (sympathetic nervous system response, blink reaction, whatever), and the physiological response can't oscillate quickly (even if its rise-time by itself is fast), then likewise the supervisor can't oscillate quickly, right? Think of it like: once I start a startle-reaction, then it flips into override mode for a second, because I'm still startle-reacting until the reaction finishes playing out.
Hmm, I think I want to forbid fast updates of the adjustable parameters / weights (synapse strength or whatever), and I also want to stay very very far away from any situation where there might be fast oscillations that originate in instability rather than already being present in exogenous data. I'm open to a fast dynamic where “context suddenly changes, and then immediately afterwards the output suddenly changes”. If I said something to the contrary earlier, then I have changed my mind! :-)
And I continue to believe that these things are all compatible: you can get the “context suddenly changes → output suddenly changes” behavior, without going right to the edge of unstable oscillations, and also without fast (sub-second) parameter / weight / synapse-strength changes.
Sure; the further you get away from ~300ms the less the number makes sense for e.g. predicting neuron latency, as described earlier.
I absolutely agree that most of the time oscillations don't happen. That being said, oscillations absolutely do happen in at least one case: epilepsy. I remain puzzled that evolution "allows" epilepsy to happen, and "epilepsy is a breakdown that allows ~300ms oscillations to happen, akin to feedback in audio amplifiers" is a better explanation for this than any I've heard elsewhere.
A generic overdamped PID controller will react to a step-change in its input via (vaguely)-exponential decay towards the new value.
Even for a non-overdamped PID controller the magnitude of the tail decreases exponentially with time. (So long as said PID controller is stable at least.)
You are correct that all that is necessary for a PID controller to react in this fashion is a nonzero P term.
Absolutely; a step change followed by a decay still has high-frequency components. (This is the same thing people forget when they route 'slow' clocks with fast drivers and then wonder why they are getting crosstalk on other signals and high-frequency interference in general.)
Your slow-responding predictor is going to have a terrible effective reaction time, is what I'm trying to say here, because you're filtering out the high-frequency components of the prediction error, and so the rising edge of your prediction error gets filtered from a step change to something closer to a sigmoid that takes quite a while to get to full amplitude.... which in turn means that what the predictor learns is not a step-change followed by a decay. It learns the output of a low-pass filter on said step-change followed by a decay, a.k.a. a slow rise and decay.
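The filtering effect being described can be sketched in a few lines (my illustration; `alpha` is an arbitrary smoothing constant): a first-order low-pass turns a step edge into a gradual exponential rise.

```python
def low_pass(signal, alpha=0.1):
    """First-order low-pass (exponential smoothing):
    y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

step = [0.0] * 10 + [1.0] * 30   # a step change at sample 10
filtered = low_pass(step)
# The sharp edge becomes a slow rise: it takes roughly 1/alpha samples
# after the step to get most of the way to 1.
```

So a learning signal that only lasts a few samples would indeed be heavily attenuated by such a filter, which is the commenter's point.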
Right. Which brings me back to my puzzle: why does epilepsy continue to exist?
(Do you at least agree that, were there some mechanism where there was enough feedback/crosstalk such that you did get oscillations, it might look something like epilepsy?)
Can you please give an example of a general-purpose function estimator that, when plugged into this pseudo-TD system, satisfies both criteria 1 and 2? I know of schemes that achieve 1, and schemes that achieve 2. I don't know of any schemes offhand that achieve both.
Thank you again for going back and forth with me on this by the way. I appreciate it.
...or some offset from the new value, in some cases.
I'm not going to worry too much if e.g. there's a single unstable pathological case.
LReLU violates 2. LReLU with regularization violates 1. Etc.
I must have missed that part; can you point more specifically to what you're referring to?
I think practically anywhere in the brain, if A connects to B, then it's a safe bet that B connects to A. (Certainly for regions, and maybe even for individual neurons.) Therefore we have the setup for epileptic seizures, if excitation and inhibition are not properly balanced.
Or more generically, if X% of neurons in the brain are active at time t, then we want around X% of neurons in the brain to be active at time t+1. That means that we want each upstream neuron firing event to (on average) cause exactly one net downstream neuron to fire. But individual neurons have their own inputs and outputs; by default, there seems to be a natural failure mode where the upstream neurons excite not-exactly-one downstream neuron, and we get exponential growth (or decay).
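Here's a toy branching-process sketch of that failure mode (my own illustration with arbitrary numbers, not a neuron model): with per-step gain exactly 1, activity hovers; slightly below 1, it dies out; slightly above 1, it grows until the whole network saturates.

```python
import random

def simulate_activity(gain, steps=50, n=10000, p0=0.05, seed=0):
    """Toy branching model of network activity: each active neuron triggers
    (on average) `gain` downstream neurons at the next timestep, with the
    total capped at the network size n.
    """
    rng = random.Random(seed)
    active = int(n * p0)
    trace = [active]
    for _ in range(steps):
        expected = active * gain
        # Poisson-ish noise around the expected count, clipped to [0, n]
        noisy = expected + rng.gauss(0, max(expected, 1.0) ** 0.5)
        active = max(0, min(n, int(round(noisy))))
        trace.append(active)
    return trace

balanced = simulate_activity(1.00)  # hovers near its starting level
runaway  = simulate_activity(1.10)  # climbs to the cap: seizure-like state
dying    = simulate_activity(0.90)  # decays toward silence
```

The balanced case is a knife edge: a random walk that any persistent bias in gain will eventually push into runaway excitation or silence, which is why active balancing mechanisms are needed.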
My impression is that there are lots of mechanisms to balance excitation and inhibition—probably different mechanisms in different parts of the brain—and any of those mechanisms can fail. I'm not an epilepsy expert by any means (!!) , but at a glance it does seem like epilepsy has a lot of root causes and can originate in lots of different brain areas, including areas that I don't think are doing this kind of prediction, e.g. temporal lobe and dorsolateral prefrontal cortex and hippocampus.
I still think you're incorrectly mixing up the time-course of learning (changes to parameters / weights / synapse strengths) with the time-course of an output following a sudden change in input. I think they're unrelated.
To clarify our intuitions here, I propose to go to the slow-learning limit.
However fast you've been imagining the parameters / weights / synapse strength changing in any given circumstance, multiply that learning rate by 0.001. And simultaneously imagine that the person experiences everything in their life with 1000× more repetitions. For example, instead of getting whacked by a golf ball once, they get whacked by a golf ball 1000× (on 1000 different days).
(Assume that the algorithm is exactly the same in every other respect.)
I claim that, after this transformation (much lower learning rate, but proportionally more repetitions), the learning algorithm will build the exact same trained model, and the person will flinch the same way under the same circumstances.
(OK, I can imagine it being not literally exactly the same, thanks to the details of the loss landscape and gradient descent etc., but similar.)
Your perspective, if I understand it, would be that this transformation would make the person flinch more slowly—so slowly that they would get hit by the ball before even starting to flinch.
If so, I don't think that's right.
Every time the person gets whacked, there's a little interval of time, let's say 50ms, wherein the context shows a golf ball flying towards the person's face, and where the supervisor will shortly declare that the person should have been flinching. That little 50ms interval of time will contribute to updating the synapse strengths. In the slow-learning limit, the update will be proportionally smaller, but OTOH we'll get that many more repetitions in which the same update will happen. It should cancel out, and it will eventually converge to a good prediction, F(ball-flying-towards-my-face) = I-should-flinch.
And after training, even if we lower the learning rate all the way down to zero, we can still get fast flinching at appropriate times. It would only be a problem if the person changes hobbies from golf to swimming—they wouldn't learn the new set of flinch cues.
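This claim can be checked on a toy model (my own sketch; the "predictor" is just a single trainable constant and the supervisor values are made up): training with learning rate lr for k passes over the data lands nearly on the same trained weight as lr/1000 with 1000·k passes.

```python
def train(lr, n_passes, data, w0=0.0):
    """Constant-output predictor trained by gradient descent on squared
    error; `data` is a list of supervisor values, repeated n_passes times."""
    w = w0
    for _ in range(n_passes):
        for target in data:
            w -= lr * 2.0 * (w - target)
    return w

data = [1.0, 0.0, 1.0, 1.0]           # 'should have flinched?' labels
fast = train(lr=0.05,    n_passes=10,    data=data)
slow = train(lr=0.00005, n_passes=10000, data=data)

print(fast, slow)   # nearly identical trained weights, near the mean 0.75
```

Both runs converge to roughly the mean of the supervisor values; lowering the learning rate by 1000x while multiplying the repetitions by 1000x approximately cancels out, as claimed.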
Sorry if I'm misunderstanding where you're coming from.
If you take any solution to 1, and multiply the learning rate by 0.000001, then it would satisfy 2 as well, right?
It feels wrong to refer you back to your own writing, but much of part 4 was dedicated to talking about these short-term predictors being used to combat neural latency and to do... well, short-term predictions. A flinch detector that goes off 100ms in advance is far less useful than a flinch detector that goes off 300ms in advance, but at the same time a short-term predictor that predicts too far in advance leads to feedback when used as a latency counter (as I asked about/noted in the previous post).
(It's entirely possible that different predictors have different prediction timescales... but then you've just replaced the problem with a meta-problem. Namely: how do predictors choose the timescale?)
1x the training data with 1x the training rate is not equivalent to 1000x the training data with 1/1000th of the training rate. Nowhere near. The former is a much harder problem, generally speaking.
(And in a system as complex and chaotic as a human there is no such thing as repeating the same datapoint multiple times... related data points yes. Not the same data point.)
(That being said, 1x the training data with 1x the training rate is still harder than 1x the training data with 1/1000th the training rate, repeated 1000x.)
You appear to be conflating two things here. It's worth calling them out as separate.
Putting a low-pass filter on the learning feedback signal absolutely does cause something to learn a low-passed version of the output. Your statement "In that case, the circuit would be basically incapable of "fast" dynamics (i.e. it would have implicit low-pass filters everywhere)," doesn't really work, precisely because it leads to absurd conclusions. This is what I was calling out.
A low learning rate is something different. (That has other problems...)
My apologies, and you are correct as stated; I should have added something on few-shot learning. Something like a flinch detector likely does not fire 1,000,000x in a human lifetime, which means that your slow-learning solution hasn't learnt anything significant by the time the human dies, and isn't really a solution.
I am aware that 1M is likely just you hitting '0' a bunch of times; humans are great few-shot (and even one-shot) learners. You can't just drop the training rate, or else your examples like 'just stand on the ladder for a few minutes and your predictor will make a major update' don't work.
My flinch reflex works fine and I'd put a trivial upper-bound of 10k total flinches (probably even 1k is too high). (I lead a relatively quiet life.)
Oh, hmm. In my head, the short-term predictors in the cerebellum are for latency-reduction and discussed in the last post, and meanwhile the short-term predictors in the telencephalon (amygdala & mPFC) are for flinching and discussed here. I think the cerebellum short-term predictors and the telencephalon short-term predictors are built differently for different purposes, and once we zoom in beyond the idea of “short-term prediction” and start talking about parameter settings etc., I really don't lump them together in my mind, they're apples and oranges. In the conversation thus far, I thought you were talking about the telencephalon (amygdala & mPFC) ones. If we're talking about instability from the cerebellum instead, we can continue the Post #4 thread.
I think I said some things about low-pass filters up-thread and then retracted it later on, and maybe you missed that. At least for some of the amygdala things like flinching, I agree with you that low-pass filters seem unlikely to be part of the circuit (well, depending on where the frequency cutoff is, I suppose). Sorry, my bad.
A common trope is that the hippocampus does one-shot learning in a way that vaguely resembles a lookup table with auto-associative recall, whereas other parts of the cortex learn more generalizable patterns more slowly, including via memory recall (i.e., gradual transfer of information from hippocampus to cortex). I'm not immediately sure whether the amygdala does one-shot learning. I do recall a claim that part of PFC can do one-shot learning, but I forget which part; it might have been a different part than we're talking about. (And I'm not sure if the claim is true anyway.) Also, as I said before, with continuous-time systems, “one shot learning” is hard to pin down; if David Burns spends 3 seconds on the ladder feeling relaxed, before climbing down, that's kinda one-shot in an intuitive sense, but it still allows the timescale of synapse changes to be much slower than the timescale of the circuit. Another consideration is that (I think) a synapse can get flagged quickly as “To do: make this synapse stronger / weaker / active / inactive / whatever”, and then it takes 20 minutes or whatever for the new proteins to actually be synthesized etc. so that the change really happens. So that's “one-shot learning” in a sense, but doesn't necessarily have the same short-term instabilities, I'd think.
To add to this a little: I think it's likely that the gain would be dynamically tuned by some feedback system or another. In order to tune said gain, however, you need a non-constant signal so you can measure the gain and adjust it.
...Hm. That sounds a lot like delta waves during sleep. Switch to open-loop operation, disable learning, suppress output, input a transient, measure the response, and adjust gain accordingly. (Which would explain higher seizure risk with a lack of sleep...)
Too much variance to be able to hardcode the gain, I'd imagine.
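The transient-probe calibration procedure described in the comments above can be sketched in a few lines. Everything here (the target response, learning rate, number of probes) is an illustrative assumption, not a claim about any neural implementation:

```python
# Open the loop, inject a test transient, measure the response, and
# nudge the gain toward a target response. All names and numbers are
# hypothetical, for illustration only.

def measure_response(gain, transient=1.0):
    """Open-loop response of the circuit to a test transient."""
    return gain * transient

def calibrate(gain, target=1.0, lr=0.2, steps=50):
    """Probe repeatedly and nudge the gain toward the target response."""
    for _ in range(steps):
        error = target - measure_response(gain)
        gain += lr * error  # too quiet -> raise gain; too loud -> lower it
    return gain
```

Starting from a badly miscalibrated gain, e.g. `calibrate(3.0)`, this settles near the target; the point is just that a simple feedback loop can tune the gain without it being hardcoded.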
This makes too much sense. I'm somewhat concerned that I'm off in the weeds as a result.
Do these comments make sense to people that aren't me?
I don't completely get this.
Let's call the short-term predictor (in the long-term predictor circuit) P. If P tries to predict [what P predicts in 0.3s], then the correct prediction would be to immediately predict the output at whatever point in the future the process terminates (the next ground-truth injection?). In particular, P would always predict the same thing until the ground truth comes in. But if I understand correctly, this is not what's going on.
So, second try: is P really still only trying to predict 0.3s into the future, making it less of a "long-term predictor" and more of an "ongoing-process predictor"? And then you get, e.g., the behavior of predicting a little less enzyme production with every step?
Or, third try: is P just trying to minimize something like the sum of squared differences between adjacent predictions, and is thus trying to minimize the number of ground-truth injections, and we get the above as an emergent effect?
I’m advocating for the first one—P is trying to predict the next ground-truth injection. Does something trouble you about that?
No; it was just that something about how the post explained it made me think that it wasn't #1.
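For what it's worth, the "predict the next ground-truth injection" behavior in option #1 can be sketched as a toy "switch"-version long-term predictor: between injections, the short-term predictor P is trained against its own slightly-later output, so at convergence it predicts the next ground truth, however far away it is. All parameters here are illustrative assumptions, not the post's actual circuitry:

```python
# Toy "switch"-version long-term predictor: P bootstraps on its own
# next prediction except when ground truth is present. Hypothetical
# parameters throughout.

def train_long_term_predictor(ground_truth, lr=0.5, epochs=200):
    """ground_truth[t] is None except at injection times."""
    T = len(ground_truth)
    v = [0.0] * T  # P's prediction at each timestep
    for _ in range(epochs):
        for t in range(T):
            if ground_truth[t] is not None:
                target = ground_truth[t]   # switch: real supervisor
            elif t + 1 < T:
                target = v[t + 1]          # switch: bootstrap on P itself
            else:
                continue
            v[t] += lr * (target - v[t])   # short-term learning rule
    return v

# Ground truth of 1.0 injected only at t = 5; the prediction
# propagates backward so that P predicts it from t = 0 onward.
v = train_long_term_predictor([None] * 5 + [1.0] + [None] * 4)
print([round(x, 2) for x in v[:6]])  # prints [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

After the injection time, the prediction drops back toward zero, since the next ground truth (if any) lies beyond the horizon of this toy run.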
I'm not sure if this is the right post in the sequence to ask this question on, but: how does your model explain the differences in effects of different reinforcement schedules? Perhaps there's some explanation of them already in the literature, but I've always wondered why, for instance, variable-ratio scheduling is so much more motivating than fixed-ratio scheduling.
I haven’t read the literature on that, but it’s always fun to speculate off the top of my head. Here goes :)
You’re deciding whether or not to pull the lever.
In a 5%-win slot machine (variable-ratio schedule), if you pull the lever, there's a probability distribution for what will happen next, and that distribution has 5% weight on "immediate reward". Maybe that's sufficiently motivating to press the lever. (See the relevant section above.)
In a win-every-20-presses machine (fixed-ratio schedule), there are 20 different scenarios (depending on how close you are to the next reward). Probably the least motivating of those 20 scenarios is the one where you just won and you're 20 lever-presses away from the next win. Now, the probability distribution for what happens after the next press has 0% weight on "immediate reward". Instead, you might concoct the plan "I will press the lever 20 times and then I'll 100% get a reward". But that plan might not be sufficiently motivating, because it gets penalized by the boring exertion required, and the reward doesn't count for as much because it's distant in time.
So then I would say: a priori, it's not obvious which one would be more motivating, but there's no reason to expect them to be equally motivating. The winner depends on several innately-determined parameters, like how steep the hyperbolic time-discounting is, and exactly how the brain collapses the reward-prediction probability distribution into a decision. And I guess that, throughout the animal kingdom, these parameters are such that the 5%-win slot machine is more motivating. ¯\_(ツ)_/¯
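The comparison above can be sketched numerically: the hyperbolically discounted value of the next win under the 5%-win variable-ratio machine versus the worst case (just after a win) of the win-every-20-presses fixed-ratio machine. The discount steepness `k` and the one-second press time are made-up illustrative parameters:

```python
# Back-of-envelope comparison of the two schedules under hyperbolic
# time discounting. All parameters are hypothetical.

def hyperbolic(delay, k=1.0):
    """Discounted value of a unit reward `delay` seconds away."""
    return 1.0 / (1.0 + k * delay)

def variable_ratio_value(p=0.05, press_time=1.0, horizon=500):
    """Expected discounted value of the next win (geometric waiting time)."""
    return sum(p * (1 - p) ** (n - 1) * hyperbolic(n * press_time)
               for n in range(1, horizon + 1))

def fixed_ratio_worst_case(n=20, press_time=1.0):
    """Just after a win, the next reward is exactly n presses away."""
    return hyperbolic(n * press_time)
```

With these made-up parameters the variable-ratio machine comes out ahead (roughly 0.11 vs. 0.05), but, as the comment says, which one wins depends on the discounting parameters.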
My vague, uneducated intuition on the matter is that it has something to do with surprise. More specifically, that a pleasant event that is unexpected has intrinsically higher valence / is more rewarding, for some reason, than a pleasant event that is expected. I don't know why this would be the case or how it works in the brain, but it fits with my life experience pretty well, and likely yours too. (In the same way, an unexpected bad event feels far worse than an expected bad event in most cases.)
Then a fixed-ratio schedule is such that the entity will quickly learn to predict each reward and will find it less rewarding; meanwhile, in a variable-ratio schedule, the rewards are harder to predict and thus more compelling.
But that just pushes the question backwards a bit: why is unpredictability of an event a multiplicative factor in the equation determining its reward, magnifying highs and lows? What evolutionary purpose does that serve if it is true and how is it implemented in the brain? I'm not sure.
Hmm, maybe this (if accurate) is how curiosity and risk-aversion are implemented? Heck, maybe they're both the same drive: an emergent result of the amplification that uncertainty hypothetically causes. Since unexpected rewards are more rewarding, entities will seek out environments in which unexpected good events are more likely to occur, e.g. novel environments (but not so novel that they are predicted to be unsafe); meanwhile, entities will avoid environments in which unexpected bad events are likely to occur, and will tend to minimize risk. (Meaning that your prediction about the valence of novel things in general has a large effect on whether they are more or less compelling than familiar things, leading to the balance of sensitivities between good versus bad surprises being a hyperparameter, perhaps differing between individuals: bear versus bull, etc.) But that's all just conjecture.
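The "harder to predict" part of the speculation above can at least be illustrated mechanically: a bare-bones learner that predicts reward from presses-since-last-win ends up with near-zero surprise on a fixed-ratio schedule but persistent surprise on a variable-ratio one. The learner and all numbers here are illustrative assumptions:

```python
# Tabular reward predictor keyed by presses-since-last-win; "surprise"
# is just |prediction error|. Hypothetical parameters throughout.
import random

def mean_late_surprise(schedule, n_presses=5000, lr=0.2, seed=0):
    """Return mean |prediction error| over the second half of the run
    (i.e., after the predictor has had time to learn)."""
    rng = random.Random(seed)
    pred = {}            # presses-since-last-win -> predicted reward
    count, errs = 0, []
    for i in range(n_presses):
        count += 1
        r = schedule(count, rng)
        err = r - pred.get(count, 0.0)
        pred[count] = pred.get(count, 0.0) + lr * err
        if i >= n_presses // 2:
            errs.append(abs(err))
        if r:
            count = 0    # reset the press counter after a win
    return sum(errs) / len(errs)

fixed = mean_late_surprise(lambda c, rng: 1 if c == 20 else 0)
variable = mean_late_surprise(lambda c, rng: 1 if rng.random() < 0.05 else 0)
```

With these numbers, the fixed schedule's surprise falls to essentially zero while the variable schedule's stays around 0.1 per press; whether surprise then multiplies valence, as conjectured above, is a separate question.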
Here's a Twitter thread wherein Nathaniel Daw gently pushes back on my dopamine-neuron discussion in Section 5.5.6.
Minor editing issue: I think you have a dangling/unfinished sentence here:
Oops! I started to add that in a new bullet point, but then decided to put it in the figure caption instead, and then I forgot to go back and delete the bullet point. Thank you, fixed now.
5.5.7 should be 5.6, I think?
Yeah, I think I like that better. Thanks.