MadHatter

Neuroscience things that confuse me right now

Various thoughts:

  • It would make a lot of sense to me if norepinephrine acted as a Q-like signal for negative rewards. I don't have any neuroscience evidence for this, but it makes sense to me that negative rewards and positive rewards are very different for animals and would benefit from different approaches. I once ran some Q-learning experiments on the classic Taxi environment to see if I could make a satisficing agent (one that achieves a certain reward less than the maximum achievable and then rests). The agent responded by taking illegal actions, which give highly negative rewards in the Taxi environment, and hustling as hard as possible the rest of the time to hit the reward I had specified. I had to add a Q-function solely for negative rewards to get the desired behavior (see the sketch after this list). Given that actual animals need to rest in a way that RL agents in most environments don't, it makes sense to me that Q-learning on its own is not a good brain architecture.
  • Dopamine receptors in V1 kind of make sense if you want to visually predict reward-like properties of objects in the environment: something could look tasty or not tasty, maybe.
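
To make the satisficing-agent idea in the first bullet concrete, here is a minimal sketch (my reconstruction, not the original experiment code) of tabular Q-learning on Taxi-v3 with separate tables for positive and negative reward components. The target return, hyperparameters, and the particular satisficing rule are illustrative assumptions, not tuned values.

```python
# Sketch only: two Q-tables on Taxi-v3, one per reward channel, with an
# illustrative satisficing rule. Hyperparameters are guesses, not tuned.
import numpy as np
import gymnasium as gym

env = gym.make("Taxi-v3")
n_s, n_a = env.observation_space.n, env.action_space.n
q_pos = np.zeros((n_s, n_a))   # expected discounted positive reward
q_neg = np.zeros((n_s, n_a))   # expected discounted magnitude of negative reward

alpha, gamma, eps = 0.1, 0.99, 0.1
target_return = 5.0            # illustrative satisficing target

for episode in range(5000):
    s, _ = env.reset()
    done, ep_return = False, 0.0
    while not done:
        if np.random.rand() < eps:
            a = env.action_space.sample()
        elif ep_return < target_return:
            # below target: seek reward, but discount actions that predict punishment
            a = int(np.argmax(q_pos[s] - q_neg[s]))
        else:
            # target met: "rest", i.e. only avoid expected punishment
            a = int(np.argmin(q_neg[s]))
        s2, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        r_pos, r_neg = max(r, 0.0), max(-r, 0.0)    # split reward into two channels
        a2 = int(np.argmax(q_pos[s2] - q_neg[s2]))  # greedy combined action at s2
        q_pos[s, a] += alpha * (r_pos + (0.0 if done else gamma * q_pos[s2, a2]) - q_pos[s, a])
        q_neg[s, a] += alpha * (r_neg + (0.0 if done else gamma * q_neg[s2, a2]) - q_neg[s, a])
        ep_return += r
        s = s2
```

Here "resting" is expressed as switching to pure punishment-avoidance once the target return is met, which is the kind of behavior that is awkward to express with a single combined Q-function.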
Big picture of phasic dopamine

This was an amazing article, thank you for posting it!

  • Side tangent: There’s an annoying paradox that: (1) In RL, there’s no “zero of reward”, you can uniformly add 99999999 to every reward signal and it makes no difference whatsoever; (2) In life, we have a strong intuition that experiences can be good, bad, or neutral; (3) ...Yet presumably what our brain is doing has something to do with RL! That “evolutionary prior” I just mentioned is maybe relevant to that? Not sure … food for thought ...

The above isn't quite true in all senses for all RL algorithms. In policy gradient algorithms (see http://www.scholarpedia.org/article/Policy_gradient_methods for a good but fairly technical introduction), it is quite important in practice to subtract a baseline value from the reward that is fed into the policy gradient update. (Note that the baseline can be, and most profitably is, dynamic: it's a function of the state the agent is in, usually an estimate of the state value V(s), i.e. the expected return from that state under the current policy.) The algorithm will in theory converge to the right policy without the baseline, but subtracting it speeds convergence up significantly. If one guesses that the brain is using a policy-gradients-like algorithm, a similar principle would presumably apply.

This actually dovetails quite nicely with observed human psychology: good/bad/neutral is a thing, but it seems to be defined largely with respect to our expectation of what was going to happen in the situation we were in. For example, many people get grumpy when it turns out they aren't going to end up having sex they thought they were going to have. The theory here would be that the baseline value was quite high (they were anticipating a peak experience), so the policy gradient update essentially treats the outcome as an aversive stimulus, which makes no sense without the existence of a baseline.
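
As a toy illustration of the baseline point (my own sketch, not from the post or the Scholarpedia article), here is a softmax policy on a three-armed bandit whose rewards all hover around +100, trained with the REINFORCE update grad log pi(a) * (R - b). Setting b to a running average of recent reward leaves the expected update unchanged but removes the large constant offset; with b = 0 the updates are dominated by that offset and the policy tends to lock onto whichever arm it happened to favor early.

```python
# Toy REINFORCE on a 3-armed bandit: same update rule with and without a
# reward baseline. All rewards are around +100, so there is no natural "zero".
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([100.0, 101.0, 102.0])   # all rewards large and positive

def run(use_baseline, steps=5000, lr=0.01):
    prefs = np.zeros(3)        # softmax policy parameters ("preferences")
    baseline = 0.0
    for _ in range(steps):
        probs = np.exp(prefs - prefs.max())
        probs /= probs.sum()
        a = rng.choice(3, p=probs)
        r = true_means[a] + rng.normal()
        b = baseline if use_baseline else 0.0
        grad_logp = -probs                     # d/d_prefs log pi(a) = onehot(a) - probs
        grad_logp[a] += 1.0
        prefs += lr * (r - b) * grad_logp      # REINFORCE update with optional baseline
        baseline += 0.05 * (r - baseline)      # running average of observed reward
    return probs

print("final policy with baseline   :", np.round(run(True), 3))
print("final policy without baseline:", np.round(run(False), 3))
```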

It's closer to being true of Q-learning algorithms, but here too there is a catch: whatever value you assign to never-before-seen states can have a pretty dramatic effect on exploration dynamics, at least in tabular environments (i.e., environments with negligible generalization). So here too one would expect there to be an evolutionarily appropriate level of optimism to apply to genuinely novel situations about which it is difficult to form an a priori judgment, and the difference between that and the value you assign to known situations is at least probably known to evolution.
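
A small sketch of that initialization point (my own illustration, using a made-up chain environment rather than anything from the post): a purely greedy tabular Q-learner with optimistically initialized Q-values ends up trying nearly every state-action pair, while the same learner with zero initialization never leaves its starting state.

```python
# Optimistic vs. zero Q-value initialization in a tiny deterministic chain.
# The greedy learner with optimistic init explores; the zero-init one does not.
import numpy as np

n_states, n_actions = 10, 2        # chain: action 1 moves right, action 0 stays put

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else s
    r = 1.0 if s2 == n_states - 1 else 0.0     # reward only at the far end
    return s2, r, s2 == n_states - 1

def train(q_init, episodes=200, alpha=0.1, gamma=0.9):
    q = np.full((n_states, n_actions), q_init)
    tried = np.zeros((n_states, n_actions), dtype=bool)
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < 50:
            a = int(np.argmax(q[s]))           # purely greedy: no epsilon-exploration
            tried[s, a] = True
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * q[s2].max())
            q[s, a] += alpha * (target - q[s, a])
            s, t = s2, t + 1
    return int(tried.sum())

print("state-action pairs ever tried, optimistic init (+5):", train(5.0))
print("state-action pairs ever tried, zero init           :", train(0.0))
```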