Interpreting neural networks remains challenging, largely due to their dense parametrization, the global coupling of their parameters, and the polysemantic behavior of their neurons. These problems are ameliorated in Kolmogorov-Arnold Networks (KANs), which have fewer parameters overall, confine parameter changes to local regions, and exhibit fewer polysemantic neurons.
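To make the locality concrete, here is a minimal, hypothetical sketch of a KAN-style layer (not the talk's code): each edge carries its own learnable univariate function, and each output sums the edge functions of its inputs. Gaussian radial basis functions stand in for the B-splines typically used; because each basis function has local support, changing one coefficient only affects the layer's behavior near that basis center.

```python
import numpy as np

def kan_layer(x, coeffs, centers, width=1.0):
    """Illustrative KAN-style layer: y_j = sum_i f_ij(x_i).

    x:       (d_in,) input vector
    coeffs:  (d_in, d_out, n_basis) per-edge basis coefficients
    centers: (n_basis,) shared centers of the Gaussian basis functions
    """
    # phi[i, b] = exp(-((x_i - c_b) / width)^2): basis activations per input.
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    # Contract basis activations with per-edge coefficients, sum over inputs.
    return np.einsum("ib,iob->o", phi, coeffs)  # shape (d_out,)

# Tiny usage example: 2 inputs, 1 output, 3 basis functions per edge.
rng = np.random.default_rng(0)
y = kan_layer(rng.normal(size=2), rng.normal(size=(2, 1, 3)),
              np.linspace(-1.0, 1.0, 3))
```

Contrast this with a dense MLP weight, which scales its input everywhere: here each coefficient only matters where its basis function is active, which is one source of the localized, more interpretable parameter changes mentioned above.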
In the first part of this talk, Fabian will show how KANs can be viewed as neural networks that have undergone a principled sparsification, clarifying why they exhibit improved interpretability and parameter efficiency. He will then present a new framework for multivariate symbolic regression that couples KANs, LLMs, and genetic search strategies, akin to FunSearch, to discover compact analytic expressions from data. This approach enables scalable symbolic regression in high-dimensional settings, leverages the inductive biases inherent in KANs, and allows the LLM's regression proposals to be primed for different data domains.
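The FunSearch-like loop can be sketched as follows. This is a hedged toy skeleton, not the framework being presented: candidate expressions are evolved and scored by fit to data, and the `propose` function is a random-mutation stand-in for the LLM proposer (in the actual framework, an LLM primed on the data domain would suggest the edits).

```python
import math
import random

# Hypothetical term pool the mutation proposer draws from.
OPS = ["x", "x*x", "math.sin(x)", "1.0"]

def propose(parent):
    # Stand-in for the LLM proposer: append or replace a random term.
    term = random.choice(OPS)
    return f"{parent} + {term}" if random.random() < 0.5 else term

def score(expr, data):
    # Fitness = negative squared error of the candidate expression.
    try:
        f = eval(f"lambda x: {expr}")
        return -sum((f(x) - y) ** 2 for x, y in data)
    except Exception:
        return float("-inf")  # malformed candidates are discarded

def evolve(data, generations=200, seed=0):
    random.seed(seed)
    population = ["x"]
    for _ in range(generations):
        parent = max(population, key=lambda e: score(e, data))
        population.append(propose(parent))
        # Keep only the five fittest candidates.
        population = sorted(population, key=lambda e: score(e, data))[-5:]
    return max(population, key=lambda e: score(e, data))

# Usage: recover an expression fitting y = x + sin(x).
data = [(x / 10, x / 10 + math.sin(x / 10)) for x in range(-20, 21)]
best = evolve(data)
```

In the talk's framework, the KAN supplies the structural prior over which univariate pieces to compose, and the LLM replaces the random proposer, which is where the domain priming comes in.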