Actually, this might be what Yoshua Bengio would welcome. He wants a non-agentic tool AI which is as competent in STEM subjects as possible. That’s his approach to AI existential safety.
The problem with this has always been: "What would prevent people from adding an agentic wrapper or otherwise reorienting a model in an agentic way?"
In this sense, gpt-oss might be a step in exactly the direction Bengio eventually wants: a model which is completely disabled in all things social and agentic, but very smart in all things STEM.
Perhaps people should ponder whether this model points at Bengio’s approach being more realistic than it is customary to think… (I still doubt that Bengio’s approach is feasible, but the existence of gpt-oss tells us we should ponder this more.)
Yes, this might help.
Recommendations in a recent OpenAI paper seem to resonate with some of the thoughts in this post:
Regarding the popularity of coding tools on Google Trends, could it be that we mostly see the dynamics of their competition with each other, and that Codex (which is not listed among the alternatives) started to eat a lot of relative share after the Aug 7 GPT-5 release?
When one looks at Google Trends, the curves for both Codex the word (“search term”) and OpenAI Codex the topic (“software”) look suggestive.
I wonder if this is an exhaustive list of clusters of notable doctrines. These three are the dominant ones, but we seem to have some important “minority viewpoint clusters”.
For example, a number of people talk about “merge with AI” as potentially the most straightforward and powerful form of human “intelligence augmentation”.
A number of people talk about the possibility of ASIs being robustly aligned to a non-human set of values which is “benign enough” to imply a reasonable future for humans.
There is some correlation between people being inclined to argue for the feasibility of 1 and being inclined to argue for the feasibility of 2, and people often think of 1 and 2 as going hand in hand, so I am not sure whether these should be considered a single cluster or two different clusters.
The difference between GenAI-2025 and GenAI-2023 in terms of their ability to assist software engineering efforts is quite drastic indeed.
Thanks!
I will certainly grant you the main point: we see tons of errors, even in texts written by leading mathematicians. There is still plenty of uncertainty about the validity of some core results.
But that’s precisely why it is premature to say that we have entered the formalist era. When people start routinely verifying their results in automated theorem provers, that’s when we’ll be able to say that we are in the formalist era.
changing the order a bit:
So, as a student (not as a researcher), I actually got acquainted with DCPOs and Scott domains, specifically in the context of denotational semantics.
One of my teachers was actually quite interested in general (non-Hausdorff) manifolds.
That's awesome! I'd love to understand more about manifolds in this context. We managed at some point to progress to reconciling Scott domains with linear algebra (with some implications for machine learning), but progressing to manifolds is something else.
I think this is precisely one of the realms where formalism shines.
I expect it would have been impossible for the field to take off without formalism.
There are so many broken intuitions in non-metric geometry! If we had been left with intuitions as the ground truth, the field could not have succeeded. (Even in introductory metric geometry, it is easy to get screwed when one is not clearly introduced to specific examples where the definitions of closed/bounded/compact come apart.)
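To make that parenthetical concrete, here is a minimal LaTeX sketch (my own textbook illustrations, not something from the original exchange) of two standard cases where closed and bounded do not give compactness:

```latex
% Two standard examples (amsmath assumed) where "closed and bounded"
% does not imply "compact".

% 1. In the incomplete metric space Q (the rationals), the set
\[
  A = \{\, q \in \mathbb{Q} : q^2 \le 2 \,\}
\]
% is closed and bounded in Q, yet not compact: a sequence of rationals
% converging to sqrt(2) has no subsequence converging inside Q.

% 2. In the infinite-dimensional Hilbert space l^2, the closed unit ball
% is closed and bounded, but the standard basis vectors satisfy
\[
  \lVert e_m - e_n \rVert = \sqrt{2} \quad (m \ne n),
\]
% so they have no convergent subsequence, and the ball is not compact.
```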
There are just too many different types of spaces/topologies/manifolds, with different axioms behind them. The "true objects" here have little to do with standard numbers, intuitive Euclidean spaces, and the like.
Yes, absolutely. But only when one is engaging in the interplay between the formal and the informal.
The informal and the formal have to go together, at least when humans practice math.
To the extent one has "true objects" in mind, these objects are of a purely formal nature.
The question is, what is formal. Are categories and toposes well defined? A follower of Brouwer and Heyting would tell you "no".
And can one work with the objects before their formalization is fully elucidated? One often has to; otherwise one would be unable to obtain that formalization in the first place.
When one is talking about cutting-edge math, math which is in the process of being created, the full formalization is often unavailable, yet progress needs to be made.
back to the beginning:
Regardless of how they feel about non-intuitive proofs, when these proofs contradict their intuitions, the non-intuitive proofs will still be the source of truth.
Yes, absolutely, but it will be a very incomplete truth.
The intuition about what is connected to what is an important part of the actual mathematical material even if that intuition is not fully reified into definitions, axioms, and theorems.
So yes, this is a valid source of mathematical truth, but all kinds of informal considerations also form important mathematical material and empower people's ability to obtain the formalized part of the truth as well. E.g., if one looks at the Langlands program, one clearly sees how much the informal part dominates and leads, and how the technical part gradually follows, with conjectures evolving and changing before they become theorems.
From the simple example of my own studies: if one wants to obtain a generalized metrization of the Scott topology, this task is obviously very informal. Who knows a priori what this generalized metrization might be... All one understands at the beginning is that ordinary metrics would not do, because the topology is non-Hausdorff.

Then one needs to search, and one first finds that the literature contains the "George Washington bridge distance" quasi-metrics, where an asymmetric distance is non-zero in one direction and zero in the opposite direction, just like the toll on that bridge. And then one proves that a topologically correct generalized metrization can be obtained this way, but when one also tries to figure out how to compute this generalized distance, one encounters obstacles.

And then one realizes that the axiom d(x,x)=0 is incompatible with the distance being monotonic with respect to both variables and, therefore, incompatible with the distance being Scott continuous with respect to both variables; and Scott continuity is the abstract version of computability in this field. And then one proceeds to (re)discover generalized metrics which don't have the d(x,x)=0 axiom, obtaining (symmetric) relaxed metrics and partial metrics which can be made Scott continuous with respect to both variables and can be made computable, and so on.
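To make the last two steps concrete, here is a minimal LaTeX sketch of the standard examples I have in mind (the interval formula is the usual Matthews-style partial metric; treat the exact formulas as my reconstruction rather than a quotation from the literature):

```latex
% The "George Washington bridge" flavor: an asymmetric distance on the reals,
% zero in one direction and positive in the other.
\[
  q(x,y) = \max(x - y,\, 0).
\]

% A standard partial metric on interval numbers, with intervals ordered by
% reverse inclusion: [a,b] is below [c,d] iff [c,d] is contained in [a,b].
\[
  p\big([a,b],[c,d]\big) = \max(b,d) - \min(a,c).
\]

% Self-distance is the length of the interval rather than zero:
\[
  p\big([a,b],[a,b]\big) = b - a,
\]
% vanishing only on the maximal (single-point) elements. Shrinking either
% argument (i.e., adding information) can only decrease p, which is the kind
% of monotonicity one needs before asking for Scott continuity and computability.
```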
The informal part always leads; formalization can't be fruitful without it (although perhaps different kinds of minds, which are not like the minds of human mathematicians, could fruitfully operate in an entirely formal universe, who knows).
You mention Hasse diagrams. They are a modern purely formal construct.
Only if the set is finite.
If one wants to represent an infinite partially ordered set in this fashion, one has to use an informal "Hasse-like sketch" which is not well-defined (as far as I know, although I would be super-curious if one could extend the formal construct of Hasse diagrams to those cases).
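For what it's worth, the formal content behind a Hasse diagram is the covering relation, and one can see directly why it says nothing about dense orders such as the interval numbers discussed below (a sketch of the standard definitions of my own, not taken from the linked text; amsmath/amssymb assumed):

```latex
% The covering relation of a partial order (P, <=):
\[
  x \lessdot y \iff x < y \ \text{and there is no } z \text{ with } x < z < y.
\]
% A formal Hasse diagram is just the graph of this relation; for a finite poset
% the order is recovered as the reflexive-transitive closure of the covering relation.

% For interval numbers ordered by reverse inclusion the order is dense:
% whenever [a,b] strictly contains [c,d], the interval
\[
  \Big[\tfrac{a+c}{2},\ \tfrac{b+d}{2}\Big]
\]
% lies strictly between them, so the covering relation is empty and the formal
% construct draws nothing; hence the informal "Hasse-like sketches" below.
```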
I am going to link to a few examples of these informal Hasse diagrams. I am going to use this text, originally from 2002 (the diagrams are all from the 1990s): https://arxiv.org/abs/1512.03868
On page 15 (page 30 of the PDF file), we see an informal "Hasse diagram" for interval numbers within a segment. On page 16 (page 31 of the PDF file), this "Hasse diagram" is used in an even more informal fashion (with dashed lines to indicate that those lines are not a part of the depicted upper set V_{a,b}).
On page 103 (page 118 of the PDF file), we see a more discrete infinite partial order, yet even this more discrete case is, I believe, beyond the power of formalized "Hasse diagrams" (it's not an accident that most elements have to be omitted and replaced by ellipsis-like dots looking like small letters "o").
That does not correspond to my observations of the math community (or to my introspection, as I have done some math research in my life and published some math papers).
On the one hand, mathematicians tend to downgrade proofs which are impossible to understand intuitively. They would say, "OK, we now know that the statement is true, but any potential for a better understanding of adjacent areas has not been gained yet, or has perhaps even been lost, because people are not going to study this problem now; so this achievement might even be detrimental to our field, as it is the interconnections between notions and subfields of study which truly matter the most, and they will not be discovered now, because the motivation to study this particular question is largely lost". (In reality, of course, this would depend; some statements are so important and central that it matters more that we know their status than any gain from their motivating further research.)
I think most mathematicians think about themselves as having a "Platonistic telescope" inside their minds, although this "telescope" is imperfect, similarly to a physical telescope.
Here I can try to fall back onto my introspection. As a mathematician, I have mostly studied various partial orders with additional structure ("Scott domains" in their various incarnations, e.g. continuous or algebraic lattices, and such). I certainly think those objects actually exist "out there" in the "abstract realm". An informal Hasse diagram does more for me than any syntactic description. So, although the field is trying to be algebraic (these structures are used as semantic domains in the field of denotational semantics of programming languages, and so can be useful in some approaches to formal proofs of software correctness), my intuition about them is mostly informal and geometric. In reality, I switch back and forth between those informal geometric ideas and the syntactic aspects which I need when I focus on more low-level technical work (e.g. how to actually make a proof go through).
I think most working mathematicians would not agree that we are presently in the formalist era.
Many would state rather emphatically that they study a "Platonic realm" of math structures and would express their beliefs that those structures don't have much to do with the linguistic and formal means used to study them, but exist objectively regardless of the "auxiliary" linguistic formalisms.
Others (e.g. V. I. Arnold) would emphasize the connections with physics and would actively promote the idea that excessive Bourbaki-style formalization is harmful.
Yet, we might be about to actually enter a formalist era, with both humans and AI systems gradually becoming more and more fluent with automated formal theorem provers like Lean.
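As a toy illustration of the kind of fluency in question, here is a minimal, fully machine-checked Lean 4 example of my own (the theorem name is hypothetical, and nothing here comes from the original discussion):

```lean
-- A minimal Lean 4 statement, checked by the kernel without extra libraries:
-- the composition of two monotone functions on Nat is monotone.
theorem comp_monotone (f g : Nat → Nat)
    (hf : ∀ a b, a ≤ b → f a ≤ f b)
    (hg : ∀ a b, a ≤ b → g a ≤ g b) :
    ∀ a b, a ≤ b → g (f a) ≤ g (f b) := by
  intro a b hab
  exact hg (f a) (f b) (hf a b hab)
```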
Yes, not only did they buy what remained of Windsurf in July, but they have just raised $400 million at a $10.2 billion valuation (Sep 8 news; quite a bit of further info at the link): https://cognition.ai/blog/funding-growth-and-the-next-frontier-of-ai-coding-agents