Daniel Murfet

I just mean that it's relatively easy to prove theorems. More precisely, if you decide that the probability of a parameter is just determined by the data and the model via Bayes' rule, this is a relatively simple setup compared to, e.g., deciding that the probability of a parameter is an integral over all possible paths taken by something like SGD from initialisation. From this simplicity we can derive things like Watanabe's free energy formula, which currently has no analogue for the latter model of the probability of a parameter.
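To spell out what I mean, here is a minimal sketch in my own notation (the remainder term is stated loosely; see Watanabe for the precise hypotheses):

```latex
% Posterior over parameters w given data D_n = (x_1, \dots, x_n),
% with prior \varphi(w) and model p(x \mid w):
p(w \mid D_n) = \frac{\varphi(w)\, \prod_{i=1}^{n} p(x_i \mid w)}{Z_n},
\qquad
Z_n = \int \varphi(w)\, \prod_{i=1}^{n} p(x_i \mid w)\, dw .

% Watanabe's free energy formula: with F_n = -\log Z_n, L_n the empirical
% negative log-likelihood, w_0 an optimal parameter, and \lambda the
% learning coefficient (RLCT) of the model at w_0,
F_n = n L_n(w_0) + \lambda \log n + O_p(\log\log n) .
```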

That theorem is far from trivial, but still there seems to be a lot more "surface area" to grip the problem when you think about it first from a Bayesian perspective and then ask what the gap is from there to SGD (even if that's what you ultimately care about).

Occam's razor cuts the thread of life

Thanks Lucius. This agrees with my take on that paper and I'm glad to have this detailed comment to refer people to in the future.

Hehe. Yes, that's right: in the limit you can just analyse the singular values and vectors by hand, which is nice.

No general implied connection to phase transitions, but the conjecture is that if there are phase transitions in your development then you can, for general reasons, expect PCA to "attempt" to use the implicit "coordinates" provided by the Lissajous curves (i.e. a binary tree: the first Lissajous curve uses PC2 to split the PC1 range in half, and so on) to locate stages within the overall development. I got some way towards proving that by extending the literature I cited in the parent, but had to move on, so take the story with a grain of salt. It seems to make sense empirically in some cases (e.g. our paper).

To provide some citations :) There are a few nice papers looking at why Lissajous curves appear when you do PCA in high dimensions:

  • J. Antognini and J. Sohl-Dickstein. "PCA of high dimensional random walks with comparison to neural network training". In Advances in Neural Information Processing Systems, volume 31, 2018.
  • M. Shinn. "Phantom oscillations in principal component analysis". Proceedings of the National Academy of Sciences, 120(48):e2311420120, 2023.

It is indeed the case that the published literature has quite a few people making fools of themselves by not understanding this. On the flip side, just because you see something Lissajous-like in the PCA doesn't necessarily mean that the extrema are not meaningful. One can show that if a process has stagewise development, there is a sense in which performing PCA will tend to adapt PC1 to be a "developmental clock", such that the extrema of PC2 as a function of PC1 tend to line up with the midpoint of development (even if this is quite different from the midpoint in "wall time"). We've noticed this in a few different systems.
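As a toy illustration of the first point (just a sketch I'm including for concreteness, not code from either of the papers above): even a pure high-dimensional random walk, with no stagewise structure at all, produces cosine-like principal components and hence Lissajous figures.

```python
# Toy sketch: PCA of a high-dimensional random walk produces cosine-like
# principal components (and hence Lissajous figures when plotted against
# each other), with no underlying developmental structure whatsoever.
import numpy as np

rng = np.random.default_rng(0)
T, D = 1000, 2000                          # time steps, ambient dimension
walk = np.cumsum(rng.standard_normal((T, D)), axis=0)

# PCA via SVD of the time-centred trajectory.
X = walk - walk.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :3] * S[:3]                     # projections onto PC1, PC2, PC3

# If the cosine picture holds, PC_k(t) is approximately proportional to
# cos(k * pi * t / T), so PC2 vs PC1 traces an arch whose turning point
# sits near the middle of the trajectory.
t = np.arange(T)
for k in (1, 2, 3):
    template = np.cos(k * np.pi * (t + 0.5) / T)
    corr = np.corrcoef(pcs[:, k - 1], template)[0, 1]
    print(f"|corr(PC{k}, cos({k} pi t/T))| = {abs(corr):.2f}")

# To see the Lissajous figure itself:
# import matplotlib.pyplot as plt
# plt.plot(pcs[:, 0], pcs[:, 1]); plt.xlabel("PC1"); plt.ylabel("PC2"); plt.show()
```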

So one has to be careful in both directions with Lissajous curves in PCA (not to read tea leaves, and also not to throw out babies with bathwater, etc).

Thanks Jesse, Ben. I agree with the vision you've laid out here.

I've spoken with a few mathematicians about my experience using Claude Sonnet, o1, and o1-Pro for research, and there's an anecdote I have shared a few times which gets across one of the modes of interaction that I find most useful. Since these experiences inform my view on the proper institutional form of research automation, I thought I might share the anecdote here.

Sometime in November 2024 I had a striking experience with Claude Sonnet 3.5. At the end of a workday I regularly paste in the LaTeX for the paper I’m working on and ask for its opinion, related work I might be missing, and techniques it thinks I might find useful. I finish by asking it to speculate on how the research could be extended. Usually this produces enthusiastic and superficially interesting ideas, which are, however, useless.

On this particular occasion, however, the model proceeded to elaborate a fascinating and far-reaching vision of the future of theoretical computer science. In fact I recognised the vision, because it was the vision that led me to write the document. However, none of that was explicitly in the LaTeX file. What the model could see was some of the initial technical foundations for that vision, but the fancy ideas were only latent. In fact, I have several graduate students working with me on the project and I think none of them saw what the model saw (or at least not as clearly).

I was impressed, but not astounded, since I had already thought the thoughts. But one day soon, I will ask a model to speculate and it will come up with something that is both fantastic and new to me.

Note that Claude Sonnet 3.5/3.6 would, in my judgement, be incapable of delivering on that vision. o1-Pro is going to get a bit further. However, Sonnet in particular has a broad vision and "good taste", and a remarkable knack for "surfing the vibes" around a set of ideas. A significant chunk of cutting-edge research comes from just being familiar at a "bones deep" level with a large set of ideas and tools, and knowing what to use, and where, in the Right Way. Then there is the technical mastery to actually execute when you've found the way; put the vibe surfing and the technical mastery together and you have a researcher.

In my opinion the current systems have the vibe surfing, now we're just waiting for the execution to catch up.

I found this clear and useful, thanks. Particularly the notes about compositional structure. For what it's worth I'll repeat here a comment from ILIAD: there seems to be something in the direction of SAEs, approximate sufficient statistics / the information bottleneck, the work of Achille and Soatto, and SLT (Section 5, iirc), which I had looked into after talking with Olah and Wattenberg about feature geometry, but which isn't currently a high priority for us. Somebody might want to pick that up.

I like the emphasis in this post on the role of patterns in the world in shaping behaviour, the fact that some of those patterns incentivise misaligned behaviour such as deception, and further that our best efforts at alignment and control are themselves patterns that could have this effect. I also like the idea that our control systems (even if obscured from the agent) can present as "errors" with respect to which the agent is therefore motivated to learn to "error correct".

This post and the sharp left turn are among the high-level takes on the alignment problem that have most shaped my own views on where the deep roots of the problem lie.

Although to be honest I had forgotten about this post, and therefore underestimated its influence on me, until performing this review (which caused me to update a recent article I wrote, the Queen's Dilemma, which is clearly a kind of retelling of one aspect of this story, with an appropriate reference). I assess it to be a substantial influence on me even so.

I think this whole line of thought could be substantially developed, and with less reliance on stories, and that this would be useful.
