Modern physics has been extraordinarily successful at describing the natural world, yet the process by which new physical theories are constructed remains largely artisanal. In this talk, Alexei Koulakov, Charles Robertson Professor of Neuroscience at Cold Spring Harbor Laboratory, will discuss principles of brain function and evolution that can offer tools for building new physics theories.
First, he will introduce the concept of a genomic bottleneck: the idea that neural systems are forced to compress vast sensory experience into representations that are simple, robust, and reusable across tasks. He suggests that similar bottlenecks may be essential for identifying abstractions that generalize across subfields of physics. Second, he will discuss how brains appear to construct internal imagination modules, generative models that allow organisms to simulate physical phenomena and test hypotheses without direct interaction with the world. Finally, Alexei will show how hierarchical reinforcement learning can provide a natural framework for organizing physical reasoning across scales, from low-level dynamics to high-level concepts.
By decomposing complex problems into nested objectives, hierarchical control offers a computational model for how intelligent systems, biological or artificial, can efficiently explore and solve hard physics problems. These ideas suggest a neuroscience-inspired roadmap for transforming theory building in physics: one that emphasizes distillation, imagination, and hierarchical control as core computational primitives.