Suppose we have a system that we suspect is some form of local optimiser. Maybe it's gradient descent training a neural network. Maybe it’s a neural network doing in-context learning. Maybe it's a mind, because we guess that the operation of minds in general is to an extent well-described as local movement in some 'abstraction space' a lot of the time. Maybe it’s something else.

Is there, on paper, a general way:

  1. To check whether our guess that the system is a local optimiser is correct?
  2. To find the space in which the optimiser is taking local steps?

Setup

Concretely, say a system has a state vector $S$, which we can see. $S$ evolves over time steps $t$, so let’s call it $S_t$. Our guess is that the system evolves $S_t$ through some local optimisation method. But it’s not really local in $S$-space. It’s local in some other space we don’t know about. Let’s call the state vector in that hidden space we don’t know about $H_t \in \mathcal{H}$, where $\mathcal{H}$ is some space. $\mathcal{H}$ seems guaranteed to be a topological space, else the idea of a local search in it wouldn't make much sense. Maybe in a lot of cases we can also assume that it is a metric space? Not sure about that one, but a lot of the local search algorithms I can think of seem to have some notion of distance between points in the space.

So, overall the system maps $S_t \to H_t \to H_{t+1} \to S_{t+1}$, where the transformations $S_t \to H_t$ and $H_{t+1} \to S_{t+1}$ can be complicated and nonlinear.

$H_t \to H_{t+1}$ is assumed to be a step in some local optimisation method. That is to say, there is some evaluation function $f(H)$ that the system is trying to minimise, and it tries to do this minimisation via something like a genetic algorithm, gradient descent, Adam, or something else that makes use of local information to take one step in the space $\mathcal{H}$ per time step $t$. The evaluation function $f$ should have the kind of loss landscape a local optimiser could reasonably navigate. No parity learning problems or anything like that. It also shouldn’t be completely trivial to find the minimum of $f$. E.g. if the optimiser is using Newton’s method, $f$ shouldn’t just be quadratic everywhere, else there isn’t much of a search trajectory for us to look at because the system always finds the minimum in one step.
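To make the setup concrete, here is a minimal numpy sketch (my own toy example, not from the post — the function names `f_grad` and `observe` are invented): normalised gradient descent runs in a hidden 2-D space $\mathcal{H}$, and we only see the states through a fixed nonlinear map. Step sizes are perfectly even in $H$-space but uneven in the observed $S$-space.

```python
import numpy as np

def f_grad(h):
    # Gradient of the hidden objective f(h) = 0.5 * ||h||^2.
    return h

def observe(h):
    # Fixed nonlinear map H -> S that we pretend not to know.
    x, y = h
    return np.array([np.exp(0.5 * x), y ** 3 + x, np.tanh(x * y)])

# Normalised gradient descent in the hidden space: every step has length 0.05.
h = np.array([2.0, -1.5])
H_traj, S_traj = [h.copy()], [observe(h)]
for _ in range(30):
    g = f_grad(h)
    h = h - 0.05 * g / np.linalg.norm(g)   # one fixed-length local step in H
    H_traj.append(h.copy())
    S_traj.append(observe(h))

H_traj, S_traj = np.array(H_traj), np.array(S_traj)
dH = np.linalg.norm(np.diff(H_traj, axis=0), axis=1)
dS = np.linalg.norm(np.diff(S_traj, axis=0), axis=1)

# Relative spread of step sizes: ~0 in H-space, clearly positive in S-space.
print(dH.std() / dH.mean(), dS.std() / dS.mean())
```

The question is whether we could recover something like `observe`'s inverse from the $S_t$ trajectories alone.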

Now, what I’m wondering about is this: Is there some characteristic attribute of the trajectories local optimisations take in their search spaces which we can use to figure out the map $S \to H$ with reasonable confidence, provided we have enough states $S_t$ from different searches to look at?

Example of what this could look like

I'm not claiming the following would actually work, I'm just trying to point to the sort of thing my brain is thinking about by giving a concrete example.

In a lot of cases I can think of, neighbouring points in the time series of a local search will tend to be close to each other in terms of Euclidean distance in the search space. So, we could try to find a map $g: S \to H$ such that $\lVert g(S_{t+1}) - g(S_t) \rVert_2$ will tend to be small. We could do this by training an autoencoder with a deep and expressive architecture on a loss of:

  1. Reconstructing the original system outputs. So, learn some map $g^{-1}$ such that $g^{-1}(g(S_t)) \approx S_t$.
  2. Minimising some term like $\frac{\sigma(d_t)}{\mathbb{E}[d_t]}$, with $d_t = \lVert g(S_{t+1}) - g(S_t) \rVert_2$, where the standard deviation $\sigma$ and expectation $\mathbb{E}$ are taken over the batch.

And then maybe the map $g$ we can now explicitly see in the autoencoder would tend to roughly match the map $S \to H$ we're looking for, up to trivial transformations like affine transforms. And if the autoencoder training can't converge with good loss, then maybe that means the original system wasn't a local optimiser, or at least not a local optimiser taking small steps in a Euclidean space.
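As a rough numpy sketch of what this combined loss could look like (my own illustration; `trajectory_loss` is a hypothetical name, and the encoder/decoder are left as plug-in functions standing in for a trained autoencoder):

```python
import numpy as np

def trajectory_loss(g, g_inv, S, weight=1.0):
    """Reconstruction term plus the step-evenness term sigma(d)/E(d).

    S: observed trajectory, shape (T, dim); g, g_inv: candidate
    encoder/decoder (stand-ins for the two halves of an autoencoder).
    """
    Z = np.array([g(s) for s in S])                 # embedded trajectory
    recon = np.mean([(g_inv(z) - s) ** 2 for z, s in zip(Z, S)])
    d = np.linalg.norm(np.diff(Z, axis=0), axis=1)  # step sizes d_t
    return recon + weight * d.std() / d.mean()      # small iff steps are even

# Toy check: an identity "autoencoder" on an evenly-stepped straight-line
# trajectory gets ~zero loss; a cubing encoder makes the steps uneven.
S = np.linspace(0.0, 1.0, 11)[:, None]
loss_id = trajectory_loss(lambda s: s, lambda z: z, S)
loss_cube = trajectory_loss(lambda s: s ** 3, lambda z: z ** (1 / 3), S)
print(loss_id, loss_cube)
```

In an actual experiment $g$ and $g^{-1}$ would be deep networks and this quantity would be minimised over batches of trajectories.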

Summary

  1. Should we expect local search spaces to often be describable as metric spaces, for the kind of local searches we might often see in real life? Local searches run by in-context learning algorithms or other minds, for example.
  2. If a local optimiser maps $S_t \to H_t \to H_{t+1} \to S_{t+1}$, with $H_t$ being the states in the local search space, can we infer the map $S \to H$ from seeing enough states $S_t$,[1] by leveraging our knowledge that the time series $H_t$ has to look like the trajectory of a local search?
  [1] Up to affine transformations and such, of course.

Answers

I'm actually curious about a related problem. 

One of the big surprises of the deep learning revolution has been the universality of gradient descent optimization. 

How large is the class of optimization problems that we can transform into a gradient descent problem of some kind? My suspicion is that it's a very large class; perhaps there is even a general way to transform any problem into a gradient descent optimization problem? 

The natural thing that comes to mind is to consider gradient descent of Lagrangian energy functionals in (optimal) control theory. 
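One simple illustration of this kind of reduction (my example, not the answerer's): any equation-solving problem $g(x) = 0$ can be recast as gradient descent on the squared residual $g(x)^2$, assuming $g$ is differentiable and the resulting landscape is benign enough to descend.

```python
import math

def solve_by_gradient_descent(g, dg, x0, lr=0.1, steps=500):
    """Solve g(x) = 0 by descending the loss L(x) = g(x)**2.

    dg is the derivative of g, so dL/dx = 2 * g(x) * dg(x).
    """
    x = x0
    for _ in range(steps):
        x -= lr * 2.0 * g(x) * dg(x)
    return x

# Example: the fixed point of cosine, i.e. cos(x) - x = 0.
x = solve_by_gradient_descent(
    lambda x: math.cos(x) - x,
    lambda x: -math.sin(x) - 1.0,
    x0=0.0,
)
print(x)  # close to 0.739085..., the fixed point of cos
```

Of course, this reduction says nothing about whether the descent actually finds a global solution — that is where the landscape assumptions do all the work.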

robo

I'll conjecture the following as VERY SPECULATIVE, inflammatory, riff-on-vibes statements:

  • Gradient descent solves problems in the complexity class P[1].  It is P-complete.
  • Learning theory (and complexity theory) have for decades been pushing two analogous bad narratives about the weakness of gradient descent (and P).
  • These narratives dominate because it is easy to prove impossibility results like "Problem X can't be solved by gradient descent" (or "Problem Y is NP-hard").  It's academically fecund -- it's a subject aspiring academics can write a lot
...
Comments

Minimising some term like $\frac{\sigma(d_t)}{\mathbb{E}[d_t]}$, with $d_t = \lVert g(S_{t+1}) - g(S_t) \rVert_2$, where the standard deviation $\sigma$ and expectation $\mathbb{E}$ are taken over the batch.


Why does this make $d_t$ tend to be small? Wouldn't it just encourage equally-sized jumps, without any regard for the size of those jumps?

Yes. The hope would be that step sizes are more even across short time intervals when you’re performing a local walk in some space, instead of jumping to a point with random distance to the last point at every update. There’s probably a better way to do it. It was just an example to convey the kind of thing I mean, not a serious suggestion.
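A quick numeric check of that point (my own sketch, with invented names): the term $\sigma(d_t)/\mathbb{E}[d_t]$ is the coefficient of variation of the step sizes. It is near zero for perfectly even steps no matter how large they are, and clearly positive for random-sized jumps — so it rewards evenness, not smallness.

```python
import numpy as np

def step_cv(traj):
    # Coefficient of variation of consecutive step sizes along a trajectory.
    d = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    return d.std() / d.mean()

rng = np.random.default_rng(0)
even_small = np.cumsum(np.full((100, 1), 0.01), axis=0)  # tiny even steps
even_large = np.cumsum(np.full((100, 1), 10.0), axis=0)  # huge even steps
random_jumps = rng.uniform(-1, 1, size=(100, 3))         # teleporting "walk"

print(step_cv(even_small), step_cv(even_large))  # both ~0.0
print(step_cv(random_jumps))                     # clearly positive
```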

robo

Checking my understanding: for the case of training a neural network, would $S$ be the parameters of the model (along with perhaps buffers/state like moment estimates in Adam)?  And would the evolution of the state be local in $S$-space? In other words, for neural network training, would $S$ be a good choice for $H$?
In a recurrent neural network doing in-context learning, would $S$ be something like the residual stream at a particular token?