Programmers generally distinguish between “imperative” languages, in which you specify *how to do it* (e.g. C), and “declarative” languages, in which you specify *what you want* and let the computer figure out how (e.g. SQL). Over time, we generally expect programming to become more declarative, as more of the details are left to the compiler/interpreter. Good examples include the transition to automated memory management and, more recently, high-level tools for concurrent/parallel programming.

It’s hard to say what programming languages will look like in twenty or fifty years, but it’s a pretty safe bet that they’ll be a lot more declarative.

I expect that applied mathematics will also become much more declarative, for largely the same reasons: as computers grow in power and software expands its reach, there will be less and less need for (most) humans to worry about the details of rote computation.

What does this look like? Well, let’s start with a few examples of “imperative” mathematics:

- Most grade-school arithmetic: it’s explicitly focused on computation, and even spells out the exact steps to follow (e.g. long division).
- Gaussian elimination, as typically taught in a first-semester linear algebra class. It’s the undergrads’ version of grade-school arithmetic.
- Most of the computation performed by hand in physics, engineering and upper-level econ courses & research, i.e. algebra, ODEs, and PDEs.

Contrast with the declarative counterparts:

- Figure out what arithmetic needs to be done (i.e. what numbers to plug in), then use a calculator.
- Set up a system of linear equations, then have Python or Wolfram invert the matrix.
- Choose which phenomena to include in a model, set up the governing equations, then use either numerical simulation (for pretty graphs) or a computer algebra system (for asymptotics and scaling relations).
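The second bullet is the cleanest illustration of the division of labor. Here is a minimal sketch with a made-up 3×3 system: the human’s job ends once the equations are written down as a matrix, and `numpy` handles the elimination details.

```python
import numpy as np

# Hypothetical system (made up for illustration):
#    2x +  y -  z =  8
#   -3x -  y + 2z = -11
#   -2x +  y + 2z = -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

# Declarative step: state the problem as Ax = b and hand it off.
# np.linalg.solve runs the elimination under the hood.
x = np.linalg.solve(A, b)
print(x)  # → [ 2.  3. -1.]
```

All of the “imperative” row-reduction work from the first list is hidden inside the one `solve` call.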

In the declarative case, most of the work is in formulating the problem, figuring out what questions to ask, and translating it all into a language a computer can work with: numbers, or matrices, or systems of equations.

This is all pretty standard commentary at the level of mathematics education, but the real importance is in shaping the *goals* of applied mathematics. For the past century, the main objectives of mathematical research programs have been things like existence & uniqueness, exhaustive classification of some class of objects, or algorithms for solving some problem (a.k.a. constructive solutions/proofs). With the shift toward declarative mathematics, there will be more focus on *building declarative frameworks* for solving various kinds of problems.

The best example I know of is convex analysis, in the style taught by Stephen Boyd (__course__, __book__). Boyd’s presentation is the user’s guide to convex optimization: it addresses what kinds of questions can be asked/answered, how to recognize relevant applications in the wild, how to formulate problems, what guarantees are offered in terms of solutions, and of course a firehose of examples from a wide variety of fields. In short, it includes exactly the pieces needed to use the tools of convex analysis as a declarative framework. By contrast, the internals of optimization algorithms are examined only briefly, with little depth and a focus on things a user might need to tweak. Complicated proofs are generally omitted altogether, with the relevant results simply stated as tools available for use.

This is what a mature declarative mathematical framework looks like: it provides a set of tools for practitioners to employ on practical problems. Users don’t need to know what’s going on under the hood; the algorithms and proofs generally “just work” without the user needing to worry about the details. The user’s job is to understand the framework’s language (its interface) and translate their own problems into that language. Once they’ve expressed what they want, the tools take over and handle the rest.

That’s the big goal of future mathematical disciplines: provide a practical framework which practitioners can use to solve real-world problems in the wild, without having to know all the little details and gotchas under the hood.

One last example, which is particularly relevant to me and to ML/AI research. One of the overarching goals of probability/statistics/ML is to be able to code up a generative model, pass it into a magical algorithm, and get back parameter estimates and uncertainties. The “language” of generative models is very intuitive and generally easy to work with, making it an excellent interface to a declarative mathematical toolkit. Unfortunately, the behind-the-scenes part of the toolkit remains relatively finicky and inefficient. As of today, the “magical algorithm” part is usually MCMC, which is great in terms of universality but often super-exponentially slow for multimodal problems, especially in high dimensions, and can converge very slowly even in simple unimodal problems. It’s not really reliable enough to use without thinking about what’s under the hood. Better mathematical tools and guarantees are needed before this particular framework fully matures.
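To show the shape of that workflow (and why the back end is the weak link), here is a toy version with a made-up model: observations drawn from Normal(mu, 1) with a flat prior on mu, handed to a bare-bones random-walk Metropolis sampler. This is an illustrative sketch, not a production MCMC setup.

```python
import numpy as np

# Toy generative model (invented for illustration):
#   y_i ~ Normal(mu, 1), flat prior on mu.
rng = np.random.default_rng(1)
y = rng.normal(3.0, 1.0, size=50)  # "observed" data, true mu = 3

def log_posterior(mu):
    # Up to an additive constant: flat prior + Gaussian likelihood.
    return -0.5 * np.sum((y - mu) ** 2)

# The "magical algorithm": random-walk Metropolis.
samples = []
mu = 0.0
for _ in range(5000):
    proposal = mu + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

burned = np.array(samples[1000:])  # discard burn-in
print(burned.mean(), burned.std())  # parameter estimate & uncertainty
```

The interface is lovely: write down the model, get back estimates and uncertainties. But the caveats in the paragraph above live in the sampler: step size, burn-in length, and convergence all needed hand-tuning even for this unimodal one-dimensional toy, and a multimodal posterior would quietly break it.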

If anyone has other examples of maturing or up-and-coming declarative mathematical frameworks, I’d be very interested to hear about them.