Jan

PhD student in computational neuroscience @ MPI for Brain Research, Frankfurt. https://twitter.com/janhkirchner and https://universalprior.substack.com/

Sequences

Simulator seminar sequence

Comments

Jan · 1y

Neuroscience and Natural Abstractions

Similarities in structure and function abound in biology: individual neurons that activate exclusively to particular oriented stimuli exist in animals from Drosophila (Strother et al. 2017) via pigeons (Li et al. 2007) and turtles (Ammermueller et al. 1995) to macaques (De Valois et al. 1982). The universality of major functional response classes suggests that the neural systems underlying information processing in biology might be highly stereotyped (Van Hooser, 2007, Scholl et al. 2013). In line with this hypothesis, a wide range of neural phenomena emerge as optimal solutions to their respective functional requirements (Poggio 1981, Wolf 2003, Todorov 2004, Gardner 2019). Intriguingly, recent studies of artificial neural networks that approach human-level performance reveal surprising similarities between the representations emerging in artificial and biological brains (Kriegeskorte 2015, Yamins et al. 2016, Zhuang et al. 2020).
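(For readers wondering how such cross-system comparisons are made quantitative: a standard tool is representational similarity analysis (Kriegeskorte 2015). Below is a minimal, hypothetical sketch; the data, dimensions, and distance metric are made up for illustration.)

```python
# Minimal sketch of representational similarity analysis (RSA).
# All data here are random placeholders, purely for illustration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50

brain = rng.normal(size=(n_stimuli, 200))  # stimuli x neurons (hypothetical)
model = rng.normal(size=(n_stimuli, 512))  # stimuli x units (hypothetical)

# Representational dissimilarity matrix (RDM): pairwise distances between
# the response patterns evoked by each pair of stimuli.
rdm_brain = pdist(brain, metric="correlation")
rdm_model = pdist(model, metric="correlation")

# If the two systems represent the stimuli similarly, their RDMs correlate,
# even though the native dimensionalities (200 vs. 512) are incommensurable.
rho, _ = spearmanr(rdm_brain, rdm_model)
print(f"RDM similarity (Spearman rho): {rho:.3f}")
```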

Despite the commonalities across animal species, there is also substantial variability (Van Hooser, 2007). Prominent examples of functional neural structures that are present in some animals but absent in others include the orientation pinwheel in the primary visual cortex (Meng et al. 2012), synaptic clustering with respect to orientation selectivity (Kirchner et al. 2021), and the distinct three-layered cortex in reptiles (Tosches et al. 2018). These examples demonstrate that while general organizational principles might be universal, the details of how exactly and where in the brain those principles manifest depend strongly on anatomical factors (Keil et al. 2012, Kirchner et al. 2021), genetic lineage (Tosches et al. 2018), and ecological factors (Roeth et al. 2021). Thus, the universality hypothesis as applied to biological systems does not imply perfect replication of a given feature across all instances of the system. Rather, it suggests that broad principles or abstractions underlie the function of cognitive systems and are conserved across different species and contexts.

Jan · 1y

Hi, thanks for the response! I apologize, the "Left as an exercise" line was mine, and written kind of tongue-in-cheek. The rough sketch of the proposition we had in the initial draft did not spell out sufficiently clearly what it was I wanted to demonstrate here and was also (as you correctly point out) wrong in the way it was stated. That wasted people's time and I feel pretty bad about it. Mea culpa.

I think/hope the current version of the statement is more complete and less wrong. (Although I also wouldn't be shocked if there are mistakes in there). Regarding your points:

  1. The limit now shows up on both sides of the equation (as it should)! The dependence on  on the RHS does actually kind of drop away at some point, but I'm not showing that here. I'd previously just sloppily substituted "choose  as a large number" and then rewritten the proposition in the way indicated at the end of the Note for Proposition 2. That's the way these large deviation principles are typically used (see the generic sketch after this list).
  2. Yeah, that should have been an  rather than a . Sorry, sloppy.
  3. True. Thinking more about it now, perhaps framing the proposition in terms of "bridges" was a confusing choice; if I revisit this post again (in a month or so 🤦‍♂️) I will work on cleaning that up. 
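For readers who haven't seen them, large deviation principles are typically stated and applied in roughly the following textbook form (a generic sketch, not the exact proposition from the post):

```latex
% Generic large deviation principle (textbook form, for "nice" sets A):
\lim_{n \to \infty} \frac{1}{n} \log P_n(A) = - \inf_{x \in A} I(x)
% which, for large n, is used informally as
P_n(A) \approx \exp\Big(-n \inf_{x \in A} I(x)\Big)
% i.e. after taking the limit, the explicit dependence on n collapses
% into a single exponential rate; this is the sense in which the
% dependence "drops away" once n is taken large.
```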
Jan · 1y

Hmm, there was a bunch of back and forth on this point even before the first version of the post, with @Michael Oesterle and @metasemi arguing what you are arguing. My motivation for calling the token the state is that A) the math gets easier/cleaner that way and B) it matches my geometric intuitions. In particular, if I have a first-order dynamical system $\dot{x} = f(x)$, then $x$ is the state, not the trajectory of states $\{x(t)\}_{t \le t_0}$. In this situation, the dynamics of the system depend only on the current state (that's because it's a first-order system). When we move to higher-order systems, $\ddot{x} = f(x, \dot{x})$, the state is still just $x$, but the dynamics depend not only on the current state but also on the "direction from which we entered it". That's the first derivative (in a time-continuous system) or the previous state (in a time-discrete system).

At least I think that's what's going on. If someone makes a compelling argument that defuses my argument then I'm happy to concede!
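To make the geometric intuition concrete, here is a toy sketch (hypothetical numbers, not from the post) of the standard trick: a second-order discrete system becomes first-order once the state is augmented with the previous value, i.e. with the "direction from which we entered it":

```python
# Toy illustration: a second-order discrete system turns into a
# first-order one when the state carries the previous value along.
import numpy as np

def f(x_curr: float, x_prev: float) -> float:
    """Second-order update: the next value depends on the last two values."""
    return 1.5 * x_curr - 0.7 * x_prev

# Second-order form: the dynamics need a *pair* of past values.
x_prev, x_curr = 0.0, 1.0
trajectory = [x_prev, x_curr]
for _ in range(5):
    x_prev, x_curr = x_curr, f(x_curr, x_prev)
    trajectory.append(x_curr)

# Equivalent first-order form: the augmented state s_t = (x_t, x_{t-1})
# evolves under a map that depends on s_t alone.
def F(s: np.ndarray) -> np.ndarray:
    curr, prev = s
    return np.array([f(curr, prev), curr])

s = np.array([1.0, 0.0])  # (x_1, x_0)
aug_trajectory = [s]
for _ in range(5):
    s = F(s)
    aug_trajectory.append(s)

# The first components of the augmented run reproduce the trajectory above.
assert np.allclose([s[0] for s in aug_trajectory], trajectory[1:])
```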

Jan · 1y

Thanks for pointing this out! This argument made it into the revised version. I think that, because of finite precision, it's reasonable to assume that such an $\epsilon$ always exists in practice (if we also assume that the probability gets rounded to something < 1).

Jan · 1y

Technically correct, thanks for pointing that out! This comment (and the ones like it) was the motivation for introducing the "non-degenerate" requirement into the text. In practice, the proposition holds pretty well, although I agree it would be nice to have a deeper understanding of when to expect the transition rule to be "non-degenerate".

Jan · 1y

Hmmm, good point. I originally made that decision because loading the image from the server was actually kind of slow. But then I figured out asynchronicity, so I could totally change it... I'll see if I find some time later today to push an update! (to make an "all vs. all" mode in addition to the "King of the hill" mode)

Jan · 1y

Hi Jennifer!

Awesome, thank you for the thoughtful comment! The links are super interesting; they remind me of some of the research in empirical aesthetics I read forever ago.

On the topic of circular preferences: It turns out that the type of reward model I am training here handles non-transitive preferences in a "sensible" fashion. In particular, if you're "non-circular on average" (i.e. you only make accidental "mistakes" in your ratings), then the model averages them out. And if you consistently have a loopy utility function, then the reward model will map all the elements of the loop onto the same reward value.
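To illustrate the loop-flattening behavior, here is a minimal sketch, assuming a Bradley-Terry-style logistic loss on pairwise comparisons (the setup and hyperparameters are made up; the model in the post may differ):

```python
# Hypothetical sketch: fit scalar rewards to pairwise preferences with a
# Bradley-Terry / logistic loss. With perfectly circular preferences
# A>B, B>C, C>A, the fitted rewards collapse toward a single value.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

items = ["A", "B", "C"]
prefs = [(0, 1), (1, 2), (2, 0)]  # (winner, loser): a perfect loop

rng = np.random.default_rng(0)
rewards = rng.normal(size=len(items))  # random initial reward values

# Gradient descent on the loss -log sigmoid(r_winner - r_loser).
lr = 0.1
for _ in range(2000):
    grad = np.zeros_like(rewards)
    for w, l in prefs:
        p = sigmoid(rewards[w] - rewards[l])
        grad[w] -= 1 - p  # push the winner's reward up
        grad[l] += 1 - p  # push the loser's reward down
    rewards -= lr * grad

# The loop is "flattened": all three rewards converge to the same value.
print(np.round(rewards - rewards.mean(), 3))  # ~[0. 0. 0.]
```

With noisy but on-average-acyclic preferences, the same loss instead recovers the underlying ordering, which is the "averaging out" behavior described above.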

Finally: Yes, totally, feel free to send me the guest ID either here or via DM!

Jan · 1y

Hi Erik! Thank you for the careful read, this is awesome!

Regarding Proposition I: I think you're right; that counter-example disproves the proposition. The proposition we were actually going for was , i.e. the probability without the end of the bridge! I'll fix this in the post.

Regarding Proposition II: janus had the same intuition, and I tried to explain it with the following argument: When the distance between tokens becomes large enough, eventually all bridges between the first token and an arbitrary second token end up with approximately the same "cost". At that point, only the prior likelihood of the token decides which token gets sampled. So Proposition II implies something like , or that in the limit "the probability of the most likely sequence ending in  will be (when appropriately normalized) proportional to the probability of ", which seems sensible? (assuming something like ergodicity). Although I'm now becoming a bit suspicious about the sign of the exponent; perhaps there is a "log" or a minus missing on the RHS... I'll think about that a bit more.
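One way to sanity-check this numerically is with a toy Markov chain (a hypothetical sketch, not the formal setup of the post): compute the probability of the most likely n-step bridge from a fixed start token to each possible end token via a Viterbi-style max-product recursion, normalize, and compare against the chain's stationary distribution:

```python
# Toy check of the "long bridges" intuition on a small Markov chain.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])  # row-stochastic transition matrix

def max_path_prob(P: np.ndarray, start: int, n: int) -> np.ndarray:
    """Probability of the most likely n-step path from `start` to each state."""
    v = np.zeros(len(P))
    v[start] = 1.0
    for _ in range(n):
        # Max-product (Viterbi) update: v_j <- max_i v_i * P[i, j]
        v = (v[:, None] * P).max(axis=0)
    return v

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

v = max_path_prob(P, start=0, n=50)
print("normalized max-path probs:", np.round(v / v.sum(), 4))
print("stationary distribution:  ", np.round(pi, 4))
```

Comparing the two printed vectors (and how the comparison changes with n and with the exponent/log conventions) is a quick way to probe whether the proposition needs the suspected sign or log correction.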
