In the halls of the math department at my old university there's a TV screen that loops through event announcements. One slide in that loop will stick with me forever. Paraphrasing:
"Pure mathematicians value the proof, applied mathematicians value the theorem."
I forget who said this, or whether it's attributed to anyone at all, but I've been mulling it over for years. What I take from it is that every theorem has a duality: there's everything that needs to come together to prove it, and there's everything the theorem implies. Different mathematicians attend to different sides of this duality depending on what they're working on. LLM autoproofs are the death of the pre-theorem side. If we can prove things with a quick LLM query, there's nothing left to wrestle with.
This thought makes me feel a deep sense of loss. I went through my whole math education without formal proof engines: just books and my brain for every proof I had to write over the ~5 years I was actually doing proofs. I think this is a dying way to do math, especially in the age of AI, and for some reason that hurts.
I think the act of actually grappling with a problem is far more important than knowing the solution, and I see this as a larger problem with AI that goes beyond just math. We are witnessing what may be the dying of the old way of learning. There's a new option we all have. Before, we had to solve our problems ourselves; now we are fast approaching a world where we can choose between solving our problems and outsourcing them to AI. This is terrifying to me, as knowing how to solve problems is probably the most important thing a human can learn.
The first few months I spent talking with LLMs were some of the most depressing of my life. It felt as if everything I loved about gathering and applying knowledge was saying goodbye.
It only dies if we want it to die. There's nothing stopping humans who actually value the act of gathering and applying knowledge from doing so. We just won't get there first that way, and there won't be countless lives and vast sums of money and glorious prestige tied up in it. But it might become like a sport, or poetry, or the kind of person who tries to recreate Bronze Age tech in his backyard using only tools he made himself. This seems like a meta-ethical or fun-theoretic problem.
The act of doing math the old way will, in the future, become akin to cultures practicing ancient, outmoded traditions. They still exist, sure, but not in the same way or at the same scale.
When I say the practice will 'die', I mean as a primary mode of action on a large scale. The same way cultural traditions die, so do academic traditions. This is what's depressing to me, and it's depressing in the same way it is for someone who watches their culture die.
Related: Ted Chiang's The Evolution of Human Science. (N.B. this is a work of fiction.)
From Chiang's story:
"But as metahumans began to dominate experimental research, they increasingly made their findings avail- able only via DNT (digital neural transfer), leaving journals to publish second-hand accounts translated into human language. Without DNT, humans could not fully grasp earlier developments nor effectively utilize the new tools needed to conduct research, while metahumans continued to improve DNT and rely on it even more. Journals for human audiences were reduced to vehicles of popularization, and poor ones at that, as even the most brilliant humans found themselves puzzled by translations of the latest findings."
A quote from your post:
"The likely outcome is that formalized mathematics will now develop in two separate layers, an intelligible layer embodied by Mathlib, and an unintelligible layer we might call Mathslop, a library of results that are known to be correct via proofs that no human has ever understood."
"an intelligible layer embodied by Mathlib, and an unintelligible layer we might call Mathslop"
That sounds transient too. If there are superhuman AIs around and humans are not dead, then it would likely be trivial to get them to legibilize those results up to the limits of what humans can take in, and/or to uplift the humans to slurp it up natively.
Don't worry, mathematicians, the AI is coming for every other discipline, too. Maybe you'll show the rest of us how to accept with grace that our societal and academic systems for assigning prestige were based on a contingent overlap of at least three distinct goals. Namely, knowing the answer, understanding the answer, and building the means for more effectively learning and understanding answers. Only the first has fallen in this example. The second and third might follow soon, or they might not. All three are valuable in their own right.
Sometimes you really do just want the answer. If AI offers you a provably correct cure for cancer for $5, you shut up and take it.
Sometimes knowing the answer, or even just knowing the answer is provable (within some system), is exactly the clue you need to find a better way to get the answer that actually enlightens you.
And sometimes you might really wish you lived in Brennan's world.
I found this post from mathematician David Bessis very interesting. It explains that while AI can be trained to prove mathematical theorems in Lean, AI-written proofs are often poor at conveying useful mathematical insights. Bessis argues that the human-usable intuitions gained in the process of proving a theorem are more important than the proof itself, and that AI provers do not address this need.
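To make that concrete, here's a toy sketch in Lean 4 of my own devising (it is not from Bessis's post, and real machine-generated proofs are far longer and more opaque): both proofs below pass the checker, but only the second exposes its reasoning in a form a reader can follow.

```lean
-- Toy illustration (my own, not from Bessis's post).
-- Machine-style: a bare proof term. The checker accepts it,
-- but it communicates nothing about why the statement holds.
example (p q : Prop) (h : p ∧ q) : q ∧ p :=
  ⟨h.2, h.1⟩

-- Human-style: the same fact, with intermediate steps named
-- so the reasoning can be read off the proof.
example (p q : Prop) (h : p ∧ q) : q ∧ p := by
  have hp : p := h.left    -- extract the first component
  have hq : q := h.right   -- extract the second component
  exact ⟨hq, hp⟩           -- reassemble in the opposite order
```

The gap between the two styles is tiny here; Bessis's point is that for serious theorems it becomes the whole difference between a proof you can learn from and one you can only trust.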
It's known that AIs do sloppy work on hard-to-verify tasks, but I was surprised that similar issues arise even in ungameable domains like math. Even when an AI's work is provably correct, its sloppiness may make it much harder for humans to benefit from it.
I definitely recommend reading the whole post, but I'll pull out a few key quotes here:
"Mathematics is meant to improve human understanding"
"Lean proofs from AI don't contain useful abstractions"
"Mathematicians aren't rewarded for writing better explanations of existing proofs"
"If AI proved the Riemann hypothesis, what would be the point?"
Personally, I'm hopeful that we can find ways to motivate AIs to make their work easier for humans to understand. For example, we could train AIs to prefer simple solutions over complex ones, or to start by producing general insights that apply to problems besides the specific one they're working on. I speculate that the math domain could be an especially nice place to start researching ways to make AI outputs more legible, before moving on to domains that are harder to verify.
Read the full post here.