
It's probably worth going through the current deep learning theories that propose parts of gears-level models and seeing how they fit with this.  The first one that comes to mind is the Lottery Ticket Hypothesis.  It seems intuitive to me that certain tasks correspond to some "tickets" that are harder to find.
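For concreteness, here's a minimal numpy sketch of the iterative magnitude-pruning loop from the Lottery Ticket Hypothesis paper (Frankle & Carbin): train, prune the smallest-magnitude weights, rewind the survivors to their initialization, repeat. The `train` function is a made-up stand-in for real SGD on a real network, and the matrix size and pruning fraction are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a single weight matrix and a fake "train" step.
# The real procedure trains and prunes a full network.
init = rng.normal(size=(16, 16))

def train(w, mask):
    # Placeholder for SGD: pull surviving weights toward a fixed optimum.
    optimum = np.linspace(-1, 1, w.size).reshape(w.shape)
    return mask * (0.1 * w + 0.9 * optimum)

mask = np.ones_like(init)
prune_frac = 0.2  # prune 20% of the surviving weights each round

for round_ in range(5):
    # Train from the *original* initialization each round (the "rewind").
    trained = train(init, mask)
    # Prune the smallest-magnitude weights among the survivors.
    alive = np.abs(trained[mask == 1])
    threshold = np.quantile(alive, prune_frac)
    mask = np.where((np.abs(trained) > threshold) & (mask == 1), 1.0, 0.0)
    print(f"round {round_}: {int(mask.sum())} / {mask.size} weights remain")
```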

I like the taxonomy in the Viering and Loog paper, and it links to a bunch of other interesting approaches.

There's also a paper showing phase transitions as a function of data quality rather than data size, which is an angle I hadn't considered before.

There's the Google paper explaining neural scaling laws, which describes two regimes, variance-limited and resolution-limited, and the transitions between them.  Their theory seems to predict that the behavior at the boundary between the two resembles a phase boundary.
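As a cartoon of that picture (the exponents and constants below are invented for illustration, not the paper's fits), you can model the test loss as whichever of two power laws is worse at a given dataset size; the local log-log slope is then roughly constant inside each regime and kinks abruptly at the crossover:

```python
import numpy as np

# Illustrative only: the exponents and constants here are invented,
# not fits from the scaling-laws paper.
D = np.logspace(1, 7, 25)                # dataset sizes
resolution = 5.0 * D ** -0.3             # resolution-limited power law
variance = 2e3 * D ** -1.0               # variance-limited ~ 1/D decay
loss = np.maximum(resolution, variance)  # the worse mechanism dominates

# Local log-log slope: near -1 in one regime, near -0.3 in the other,
# with an abrupt change at the crossover (phase-boundary-like).
slope = np.gradient(np.log(loss), np.log(D))
for d, l, s in zip(D, loss, slope):
    print(f"D={d:12.0f}  loss={l:8.4f}  slope={s:6.2f}")
```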

I also think there should be a bit of a null hypothesis.  It seems like there are simple functional maps where, even if internal improvement on "what matters" (e.g. feature learning) is going smoothly, our performance metric is "sharp" in a way that hides the internal improvement until some transition, after which it doesn't.

Accuracy metrics seem like an example of this: you get 1 point if the correct answer has the highest probability, and 0 points otherwise.  It's easy to see why this produces a sharp transition in complex domains.
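Here's a toy demonstration (everything about the setup is made up: 100 answer choices, fixed Gaussian distractor logits, and the correct answer's logit rising linearly as a stand-in for smooth internal learning). Log-loss improves steadily the whole time, while top-1 accuracy sits near zero and then jumps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_examples = 100, 1000
# Made-up setup: distractor logits are fixed noise; "training" smoothly
# raises the logit of the correct answer (class 0).
distractors = rng.normal(size=(n_examples, n_classes - 1))

for step, correct_logit in enumerate(np.linspace(-2, 4, 13)):
    logits = np.concatenate(
        [np.full((n_examples, 1), correct_logit), distractors], axis=1)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    log_loss = -np.log(probs[:, 0]).mean()          # smooth internal metric
    accuracy = (logits.argmax(axis=1) == 0).mean()  # sharp external metric
    print(f"step {step:2d}  log-loss {log_loss:5.2f}  accuracy {accuracy:4.2f}")
```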

Personal take: I've been spending more and more time thinking about modularity, and it seems like modularity in learning could drive sharp transitions (e.g. "breakthroughs").