On a quest, driven by curiosity, to understand the fundamental mathematics of intelligence and of the universe.
And if you want to literally do statistical physics on top of deep learning: https://deeplearningtheory.com
Dr. Jeff Beck is a lovely transdisciplinary galaxy brain: a mathematician turned computational neuroscientist doing physics+neuroscience-inspired machine learning.
"A mathematician builds the most explanatory model they can which they can still prove theorems about."
"A physicist builds the simplest model they can which still explains the key phenomena."
- Authors of the Principles of Deep Learning Theory book
"An engineer builds the most realistic/accurate model they can which can still be computed within budget."
- jez2718
"Claude Sonnet 4.5 was able to recognize many of our alignment evaluation environments as being tests of some kind, and would generally behave unusually well after making this observation."
How do you rate the lowered sycophancy of GPT-5, relatively speaking?
According to Jan Leike, Claude Sonnet 4.5 is the most aligned frontier model yet: https://x.com/janleike/status/1972731237480718734
I really like the definition of rationalist from https://www.lesswrong.com/posts/2Ee5DPBxowTTXZ6zf/rationalists-post-rationalists-and-rationalist-adjacents :
"A rationalist, in the sense of this particular community, is someone who is trying to build and update a unified probabilistic model of how the entire world works, and trying to use that model to make predictions and decisions."
I recently started saying that I really love Effective Curiosity:
Maximizing the total understanding of reality by building models of as many physical phenomena as possible, across as many scales of the universe as possible, models that are as comprehensive, unified, simple, and empirically predictive as possible.
And I see it more as a direction, and from a more collective-intelligence perspective. I think modelling the whole world in a fully unified way and with total accuracy is impossible, even with all of our science and all of our technology, because we're all finite, limited agents with limited computational resources, time, and modelling capability, and we get stuck in various models, from various perspectives, and so on. All we have are approximations that predict certain parts of reality to a certain degree, but never all of reality with perfect accuracy in all its complexity. And we have a lot of blind spots. All models are wrong, but some predictively approximate the extremely nuanced complexity of reality better than others.
And of all of this, intelligence and fundamental physics are the subsets most fascinating to me.
I like your definition of rationalism!
Also, I think a lot of people say a lot of things not as falsifiable factual claims, but as status signaling, values signaling, or ingroup-membership signaling.