Yes, I really don't see how this would work right now. If I try doing Taylor series, which is what I'd start with for something like this, I very much get the opposite result.
I'm actually (hopefully) joining AI Safety Camp to work on your topics next month, so maybe we can talk about this more then?
The measure for peak broadness used near the end confuses me in many ways. It seems to imply that a large Hessian determinant means a broad peak. But wouldn't you expect the opposite, if anything? E.g. in one dimension, this would seem to imply that a larger second derivative would mean a broader peak. That just seems exactly false.
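To make the one-dimensional objection concrete, here's a quick sketch (my own illustration, not from the post): for a peak of the form f(x) = exp(-a·x²/2), the second derivative at the maximum is f''(0) = -a, so a larger |f''(0)| goes with a *narrower* peak, not a broader one.

```python
import numpy as np

def peak_width(a, level=0.5):
    """Full width of f(x) = exp(-a*x^2/2) at the given fraction of its maximum.

    Solving exp(-a*x^2/2) = level gives x = sqrt(2*ln(1/level)/a),
    so the full width shrinks as a (the magnitude of f''(0)) grows.
    """
    return 2 * np.sqrt(2 * np.log(1 / level) / a)

for a in (1.0, 4.0, 16.0):
    print(f"|f''(0)| = {a:5.1f}  ->  width at half max = {peak_width(a):.3f}")
```

Running this shows the width dropping monotonically as the second derivative grows, which is exactly the opposite of what a "large Hessian determinant = broad peak" reading would suggest.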
It seems like there's either something missing in this post, or in my head.
To clarify, my point was that, at least in my experience, this isn't always the hard step. I can easily see that being the case in a "top-down" field, like a lot of engineering, medicine, parts of materials science, biology and similar things. There, my impression is that once you've figured out what a phenomenon is all about, it often really is as simple as fitting some polynomial in your dozen variables to the data.
But in some areas, like fundamental physics, which I'm involved in, building your model isn't that easy or straightforward. For example, we've been looking for a theory of quantum gravity for ages. We know roughly what sort of variables it should involve. We know what data we want it to explain. But still, actually formulating that theory has proven hellishly difficult. We've been at it for over fifty years now and we're still not anywhere close to real success.
I don't know enough about neurology to make a statement on whether this is something human children learn, or whether it comes evolutionarily preprogrammed, so to speak. But in a universe where physics wasn't at least approximately local, I would expect there'd indeed be little point in holding the notion that points in space and time have given "distances" from one another.
I think this certainly describes a type of gears level work scientists engage in, but not the only type, nor necessarily the most common one in a given field. There's also model building, for example.
Even once you've figured out which dozen variables you need to control to get a sled to move at the same speed every time, you still can't predict what that speed would be if you set these dozen variables to different values. You've got to figure out Newton's laws of motion and friction before you can do that.
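A minimal sketch of this point (illustrative numbers, not from the original discussion): even knowing that slope angle, friction coefficient, and run length are *the* relevant variables doesn't predict the sled's speed. For that you need the dynamical law, here Newton's second law with dry friction, a = g·(sin θ − μ·cos θ).

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def sled_speed(theta_deg, mu, distance):
    """Speed (m/s) after sliding `distance` metres from rest down a slope.

    theta_deg: slope angle in degrees; mu: coefficient of kinetic friction.
    Uses v = sqrt(2*a*d) with a = g*(sin(theta) - mu*cos(theta)).
    """
    theta = math.radians(theta_deg)
    a = G * (math.sin(theta) - mu * math.cos(theta))
    if a <= 0:
        return 0.0  # friction wins; the sled never starts moving
    return math.sqrt(2 * a * distance)

# Same dozen-variables knowledge, new values -> a genuinely new prediction:
print(sled_speed(20.0, 0.10, 30.0))
```

The point of the sketch: the variable list alone only tells you what to measure; the functional form above is the extra, often harder, modeling step.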
Finding out which variables are relevant to a phenomenon in the first place is usually a required initial step for building a predictive model, but it's not the only step, nor necessarily the hardest one.
Another type of widespread scientific work I can think of is facilitating efficient calculation. Even if you have a deterministic model that you're pretty sure could theoretically predict a class of phenomena perfectly, that doesn't mean you have the computing power necessary to actually use it.
Lattice Quantum Chromodynamics should theoretically be able to predict all of nuclear physics, but employing it in practice requires coming up with all sorts of ingenious tricks and effective theories to reduce the computing power required for a given calculation. It's enough to have kept a whole scientific field busy for over fifty years, and we're still not close to actually being able to freely simulate every interaction of nucleons at the quark level from scratch.
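A rough illustration of the computing-power point (generic quantum many-body scaling, not anything lattice-QCD-specific): storing the exact state of N sites with d local states each takes d^N complex amplitudes, so "theoretically predictive" outruns any conceivable computer almost immediately.

```python
def amplitudes(n_sites, local_dim=2):
    """Number of complex amplitudes needed to store the exact quantum state
    of n_sites sites, each with local_dim basis states: local_dim ** n_sites."""
    return local_dim ** n_sites

for n in (10, 50, 100):
    # 100 two-state sites already needs ~1e30 amplitudes -- far beyond any machine.
    print(f"{n:4d} sites -> {amplitudes(n):.3e} amplitudes")
```

This exponential wall is why so much scientific effort goes into effective theories and approximation tricks rather than brute-force simulation.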
I remember my thought process going something like this:

P(Aliens in Milky Way) ≈ 0.75
P(Aliens) ≈ 1.00
P(Answer pulled from anus on basis of half-remembered internet facts is remotely correct) ≈ 0.8

So:

P(Aliens) × P(Anus) ≈ 0.8
P(Milky aliens) × P(Anus) ≈ 0.6