
Singular Learning Theory

Feb 16, 2023 by Alexander Gietelink Oldenziel

Singular Learning Theory (SLT) is a novel mathematical framework that extends and improves upon traditional statistical learning theory, using techniques from algebraic geometry, Bayesian statistics, and statistical physics. It shows great promise as a mathematical foundation for modern machine learning.
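To give a flavor of the theory: its central result is Watanabe's asymptotic expansion of the Bayes free energy. Stated informally (a sketch; see the gray book below for the precise hypotheses):

$$F_n = n L_n(w_0) + \lambda \log n - (m - 1) \log \log n + O_p(1),$$

where $F_n$ is the Bayes free energy after $n$ samples, $L_n(w_0)$ is the empirical loss at an optimal parameter $w_0$, $\lambda$ is the real log canonical threshold (RLCT) of the model, and $m$ is its multiplicity. For regular models $\lambda = d/2$ with $d$ the number of parameters, but for singular models such as neural networks $\lambda$ can be much smaller, so Bayesian learning effectively pays for fewer parameters than the model nominally has.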

From the metauni seminar on SLT, the canonical references are Watanabe’s two textbooks:

  • The gray book: S. Watanabe, “Algebraic Geometry and Statistical Learning Theory”, 2009.
  • The green book: S. Watanabe, “Mathematical Theory of Bayesian Statistics”, 2018.

Some other introductory references:

  • Matt Farrugia-Roberts’ MSc thesis, October 2022, Structural Degeneracy in Neural Networks.
  • Spencer Wong’s MSc thesis, May 2022, From Analytic to Algebraic: The Algebraic Geometry of Two Layer Neural Networks.
  • Liam Carroll’s MSc thesis, October 2021, Phase Transitions in Neural Networks.
  • Tom Waring’s MSc thesis, October 2021, Geometric Perspectives on Program Synthesis and Semantics.
  • S. Wei, D. Murfet, M. Gong, H. Li, J. Gell-Redman, and T. Quella, “Deep learning is singular, and that’s good”, 2022.
  • Edmund Lau’s blog Probably Singular.
  • Shaowei Lin’s PhD thesis, 2011, Algebraic Methods for Evaluating Integrals in Bayesian Statistics.
  • Jesse Hoogland’s blog posts: a general introduction to SLT, and the effects of singularities on dynamics.
  • Announcement of the devInterp (developmental interpretability) agenda.
Posts tagged with Singular Learning Theory:

  • Neural networks generalize because of this one weird trick, by Jesse Hoogland.
  • Interview Daniel Murfet on Universal Phenomena in Learning Machines, by Alexander Gietelink Oldenziel.
  • Spooky action at a distance in the loss landscape, by Jesse Hoogland and Filip Sondej.
  • Gradient surfing: the hidden role of regularization, by Jesse Hoogland.
  • The shallow reality of 'deep learning theory', by Jesse Hoogland.
  • Empirical risk minimization is fundamentally confused, by Jesse Hoogland.