Top posts
Jan
PhD student in computational neuroscience @ MPI Brain Research Frankfurt. https://twitter.com/janhkirchner and https://universalprior.substack.com/
Previously in this series: Elementary Infra-Bayesianism 1. There’s this paper Late last week I got nerd-sniped by a paper called Condensation: a theory of concepts (Eisenstat 2025). It’s the kind of paper where the abstract makes a claim so clean you assume you must be misreading it: roughly, there is...
TL;DR: IAN v1 died expensive TPU death, IAN v2 rises from markdown ashes. Personal AI assistants, knowledge graphs, and the alignment problem when the AI is you. A vacation in 2021 Back in 2021, I used a two-week vacation to > [finetune] a large language model on the text I...
Update February 21st: After the initial publication of this article (January 3rd) we received a lot of feedback and several people pointed out that propositions 1 and 2 were incorrect as stated. That was unfortunate as it distracted from the broader arguments in the article and I (Jan K) take...
TL;DR: A holiday obsession turns into a deep meditation on all things pretty. Albatrosses and reward *models* included. Also, check out www.fashionator.xyz (no malware, I promise). Over the Christmas holiday, I became slightly obsessed with the Netflix show "Next in Fashion." It's (probably) only a temporary obsession, nothing to worry...
Meta: Over the past few months, we've held a seminar series on the Simulators theory by janus. As the theory is actively under development, the purpose of the series is to discover central structures and open problems. Our aim with this sequence is to share some of our discussions with...
TL;DR: We showed how Hebbian learning with weight decay could enable a) feedforward circuits (one-to-many) to extract the first principal component of a barrage of inputs and b) recurrent circuits to amplify signals which are present across multiple input streams and suppress signals which are likely spurious. Short recap In...
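The feedforward case above (Hebbian learning with weight decay extracting the first principal component) can be sketched with Oja's rule, where the weight-decay term keeps the weight vector bounded. This is a minimal illustration, not the circuit model from the post; the covariance matrix and learning rate below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D inputs whose leading principal component we know analytically.
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(size=2)   # random initial weights
eta = 0.01               # learning rate (arbitrary for this sketch)

for x in X:
    y = w @ x
    # Oja's rule: Hebbian term (y * x) minus weight decay (y**2 * w).
    w += eta * y * (x - y * w)

w_hat = w / np.linalg.norm(w)

# Compare the learned weights to the top eigenvector of the covariance.
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]
print(abs(w_hat @ pc1))  # alignment approaches 1
```

The decay term `-y**2 * w` is what prevents the pure Hebbian update from blowing up: at convergence it normalizes the weight vector to unit length, leaving it pointed along the direction of maximal input variance.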