I'm actually curious about a related problem.
One of the big surprises of the deep learning revolution has been the universality of gradient descent optimization.
How large is the class of optimization problems that we can transform into a gradient descent problem of some kind? My suspicion is that it's a very large class; perhaps there is even a general way to transform any problem into a gradient descent optimization problem?
The natural thing that comes to mind is to consider gradient descent on Lagrangian energy functionals in (optimal) control theory.
I'll conjecture the following as a VERY SPECULATIVE, inflammatory, riff-on-vibes statement:
Minimising some term like Std[δ_t]/E[δ_t], with δ_t := ||h_t − h_{t+1}||_2^2, where the standard deviation and expectation are taken over the batch.
Why does this make δ_t tend to be small? Wouldn't it just encourage equally-sized jumps, without any regard for the size of those jumps?
Yes. The hope would be that step sizes are more even across short time intervals when you’re performing a local walk in some space, instead of jumping to a point with random distance to the last point at every update. There’s probably a better way to do it. It was just an example to convey the kind of thing I mean, not a serious suggestion.
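To make the "evenness of step sizes" idea concrete, here is a minimal sketch (my own illustration, not something from the exchange above) of the kind of statistic being gestured at: the coefficient of variation Std[δ_t]/E[δ_t] of squared step sizes along a trajectory. It is 0 for perfectly even steps and says nothing about their absolute size, which is exactly the objection raised.

```python
import numpy as np

def step_size_cv(H):
    """Std[delta_t] / E[delta_t] of squared step sizes along a trajectory.

    H: array of shape (T, k), one trajectory point per row.
    delta_t = ||h_t - h_{t+1}||_2^2, statistics taken over the trajectory.
    """
    deltas = np.sum(np.diff(H, axis=0) ** 2, axis=1)
    return deltas.std() / deltas.mean()

# A local walk with constant-size steps has CV exactly 0,
# regardless of how large those steps are:
walk = np.cumsum(np.full((50, 2), 0.1), axis=0)
print(step_size_cv(walk))  # → 0.0

# Independent random jumps have unevenly sized steps, hence a large CV:
jumps = np.random.default_rng(0).normal(size=(50, 2))
print(step_size_cv(jumps))
```

Note that scaling `walk` by any constant leaves the CV unchanged, which illustrates the point above: this term rewards even steps, not small ones.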
Checking my understanding: for the case of training a neural network, would S be the parameters of the model (along with perhaps buffers/state like moment estimates in Adam)? And would the evolution of the state space be local in S space? In other words, for neural network training, would S be a good choice for H?
In a recurrent neural network doing in-context learning, would S be something like the residual stream at a particular token?
Suppose we have a system that we suspect is some form of local optimiser. Maybe it's gradient descent training a neural network. Maybe it’s a neural network doing in-context learning. Maybe it's a mind, because we guess that the operation of minds in general is to an extent well-described as local movement in some 'abstraction space' a lot of the time. Maybe it’s something else.
Is there, on paper, a general way to recover that hidden space from the observed states?
Setup
Concretely, say a system has a state vector s, which we can see. s evolves over time steps t, so let's call it s_t. Our guess is that the system evolves s_t → s_{t+1} through some local optimisation method. But it's not really local in s-space. It's local in some other space we don't know about. Let's call the state vector in that hidden space h_t ∈ H, where H is some space. H seems guaranteed to be a topological space, else the idea of a local search in it wouldn't make much sense. Maybe in a lot of cases we can also assume that it is a metric space? I'm not sure about that one, but a lot of the local search algorithms I can think of seem to have some notion of distance between points in the space.
So, overall the system maps s_t → s_{t+1} as s_t → h_t → h_{t+1} → s_{t+1}, where the transformations s_t → h_t and h_{t+1} → s_{t+1} can be complicated and nonlinear.
h_t → h_{t+1} is assumed to be a step in some local optimisation method. That is to say, there is some evaluation function f(h_t) that the system is trying to minimise, and it tries to do this minimisation via something like a genetic algorithm, gradient descent, Adam, or something else that makes use of local information to take one step in the space H per time step t. The evaluation function f(h) should have the kind of loss landscape a local optimiser could reasonably navigate. No parity-learning problems or anything like that. It also shouldn't be completely trivial to find the minimum of f(h). E.g. if the optimiser is using Newton's method, f(h) shouldn't just be quadratic everywhere, else there isn't much of a search trajectory for us to look at because the system always finds the minimum in one step.
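As a toy instance of this setup, here is a sketch of a gradient-descent walk in a hidden 2-dimensional space H whose states we only ever observe through a fixed nonlinear map into a 10-dimensional s-space. Every detail here (the objective f, the tanh observation map, the learning rate) is an illustrative assumption, not part of the setup above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden-space objective: smooth and non-quadratic,
# so there is an actual search trajectory to look at.
def f(h):
    return np.sum(h ** 2) + 0.1 * np.sum(np.cos(3 * h))

def grad_f(h):
    return 2 * h - 0.3 * np.sin(3 * h)

# Fixed nonlinear observation map h_t -> s_t (also an assumption):
A = rng.normal(size=(10, 2))
def observe(h):
    return np.tanh(A @ h)

# One gradient-descent step per time step t in the hidden space H.
h = 2.0 * rng.normal(size=2)
traj_h, traj_s = [h], [observe(h)]
for _ in range(50):
    h = h - 0.05 * grad_f(h)
    traj_h.append(h)
    traj_s.append(observe(h))

# Step sizes in H shrink smoothly as the optimiser converges,
# even though an outside observer only gets to see the s_t trajectory.
step_h = [np.linalg.norm(b - a) for a, b in zip(traj_h, traj_h[1:])]
```

The question below then becomes: given only `traj_s`, could we recover something like the map `observe` (or its inverse) from the statistics of the trajectory?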
Now, what I’m wondering about is this: Is there some characteristic attribute of the trajectories local optimisations take in their search spaces which we can use to figure out the map s_t → h_t with reasonable confidence, provided we have enough states s_t from different searches to look at?
Example of what this could look like
I'm not claiming the following would actually work, I'm just trying to point to the sort of thing my brain is thinking about by giving a concrete example.
In a lot of cases I can think of, neighbouring points in the time series of a local search will tend to be close to each other in terms of Euclidean distance in the search space. So, we could try to find a map s_t → h_t such that δ_t := ||h_t − h_{t+1}||_2^2 will tend to be small. We could do this by training an autoencoder with a deep and expressive architecture on a loss of

L = ||s_t − dec(enc(s_t))||_2^2 + λ ||enc(s_t) − enc(s_{t+1})||_2^2,

where enc and dec are the encoder and decoder, h′_t := enc(s_t), and λ trades off reconstruction quality against small latent steps.
And then maybe the map s_t → h′_t that we can now explicitly see in the autoencoder would tend to roughly match the map s_t → h_t we're looking for, up to trivial transformations like affine transforms. And if the autoencoder training can't converge with good loss, then maybe that means the original system wasn't a local optimiser, or at least not a local optimiser taking small steps in a Euclidean space.
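For concreteness, here is a minimal numpy sketch of that training procedure, with a linear encoder/decoder standing in for the deep, expressive architecture. The synthetic data, the λ weight, and the learning rate are all placeholder assumptions, not a claim that these choices would work on a real system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations: a smooth hidden walk pushed through a fixed
# nonlinear map, standing in for trajectories s_t from the real system.
T, d, k, lam, lr = 64, 8, 2, 1.0, 0.05
h_true = np.cumsum(rng.normal(scale=0.1, size=(T, k)), axis=0)  # local walk in H
A = rng.normal(size=(d, k))
S = np.tanh(h_true @ A.T)                                       # observed s_t, shape (T, d)

# Linear encoder/decoder (a deep network would replace these in practice).
E = rng.normal(scale=0.1, size=(k, d))   # s -> h'
D = rng.normal(scale=0.1, size=(d, k))   # h' -> reconstructed s

def loss_and_grads(E, D):
    H = S @ E.T                    # latent trajectory h'_t
    R = H @ D.T - S                # reconstruction residuals
    dH = H[1:] - H[:-1]            # latent steps h'_t -> h'_{t+1}
    dS = S[1:] - S[:-1]
    loss = (R ** 2).sum() / T + lam * (dH ** 2).sum() / T
    gD = 2 * (R.T @ H) / T                               # d(rec loss)/dD
    gE = 2 * (D.T @ R.T @ S) / T + 2 * lam * (dH.T @ dS) / T
    return loss, gE, gD

# Plain gradient descent on the combined loss.
losses = []
for _ in range(300):
    loss, gE, gD = loss_and_grads(E, D)
    losses.append(loss)
    E -= lr * gE
    D -= lr * gD
```

If the observed states really were produced by a local walk, the loss should settle at a low value; if they were produced by, say, independent jumps, the step-size penalty and the reconstruction term fight each other and the achievable loss stays high.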
Summary
If a system's observable states s_t are generated by something like a local optimiser moving through a hidden space H, then maybe there are characteristic signatures of local-search trajectories that would let us recover the map s_t → h_t from enough observed states. Up to affine transformations and such, of course.