"I always remember, [Hamming] would come into my office and try to solve a problem [...] I had a very big blackboard, and he’d start on one side, write down some integral, say, ‘I ain’t afraid of nothin’, and start working on it. So, now, when I start a big problem, I say, ‘I ain’t afraid of nothin’, and dive into it."
—Bruce MacLennan
Agree with the last sentence. I think in a majority of fields, the lines of investigation with higher insight-per-effort, at the current margin, are those done with choice (or with even more controversial things like large cardinal axioms).
I think you misinterpreted me - my claim is that working without choice often reveals genuine hidden mathematical structures that AC collapses into one. This isn't just an exercise in foundations, in the same way that relaxing the parallel postulate to study the resulting inequivalent geometries (geometries that were previously collapsed into one - or rather, simply ruled out - under the postulate) isn't just an "exercise in foundations."
Insofar as [the activity of capturing natural concepts of reality into formal structures, and investigating their properties] is a core part of mathematics, the choice of working choice-free is just business as usual.
My understanding is that yes, the axiom of choice (or, more generally, non-constructive methods) is convenient and it "works", and if you naively take definitions and concepts from that realm and ask which results / properties still hold when you remove the axiom of choice (or use only constructive methods), many of the important ones no longer hold (as you mentioned: Tychonoff, existence of bases, ...).
But it is often the case that you can redevelop these concepts in a choice-free / constructive context in a way that captures the spirit of what they were originally intended to capture. Yes, it is harder this way, but 1) doing so often lets one recover "morally correct" equivalents of those results / properties that do in fact hold in this context, and, more importantly, 2) doing so has a lot of conceptual value.
For example, classically equivalent definitions become non-equivalent (such as the various notions of finiteness), each with a different, new interpretation; trying to do computable analysis and make sense of the intermediate value theorem in this context leads to new ideas like locale theory, abstract Stone duality, and overtness (the dual of compactness, which is trivial classically); and the role of computability and approximation is made explicit, which requires bringing in new / additional mathematical structures. Also, many classical theorems have choice-free / constructive counterparts, e.g. Tychonoff's theorem for locales (an arbitrary coproduct of compact frames is compact - no axiom of choice required to prove it!) - and all of this gives new and sometimes deep insight into concepts that would have been overlooked in the classical realm[1].
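To make the finiteness example concrete, here is a minimal sketch (my own summary, not something taken from the references above) of three classically-equivalent notions that come apart constructively:

$$\begin{aligned}
&\text{Bishop-finite:} && X \cong \{0, \dots, n-1\} \text{ for some } n \in \mathbb{N},\\
&\text{Kuratowski-finite:} && \text{there is a surjection } \{0, \dots, n-1\} \twoheadrightarrow X \text{ for some } n,\\
&\text{subfinite:} && \text{there is an injection } X \hookrightarrow \{0, \dots, n-1\} \text{ for some } n.
\end{aligned}$$

With excluded middle all three coincide; constructively, Bishop-finiteness implies the other two but the converses can fail, and each notion supports different theorems (e.g. Bishop-finite sets have decidable equality, while Kuratowski-finite ones need not).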
To put it differently: choice turns structure into property. Without choice, we can instead treat that structure as additional data. This lets various theorems re-emerge, often in several non-equivalent forms - and this is good.
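A standard illustration of the slogan (my example, not taken from the linked posts): with choice, "every surjection splits" is a mere property; without it, a splitting is data you keep track of explicitly,

$$\underbrace{\forall f : A \twoheadrightarrow B,\ \exists s : B \to A,\ f \circ s = \mathrm{id}_B}_{\text{property (needs AC in general)}}
\qquad\text{vs.}\qquad
\underbrace{(f : A \twoheadrightarrow B,\ s : B \to A,\ f \circ s = \mathrm{id}_B)}_{\text{a split surjection, carried as structure}}$$

and theorems that classically quantify over all surjections re-emerge constructively as theorems about split surjections, or about surjections equipped with some weaker extra structure.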
See also: Five Stages of accepting Constructive Mathematics, Expanding the domain of discourse reveals structure already there but hidden.
I have only heard of these examples (I am not at all familiar with them) in the context of constructive / computable analysis, but I expect this to be a lesson that holds broadly throughout mathematics (and more specifically: that it is possible to come up with a "morally correct" choice-free equivalent of a theory that in its current form crucially depends on choice, and that doing so yields new conceptual insights), and that this not having been done already for some subject X is more a matter of the lack of mathematician-years put into it.
Another solution: Practice dynamic visual acuity and predict the opponent's move via their hand shape.
The extreme version of this strategy looks like this robot.
The human version of this strategy (source) is to realize that rock is the weakest throw to conceal (since it is easy to recognize: there is no change in hand shape over time, given that the default hand-state is usually a fist), and so to conclude that the best strategy is to play paper if you recognize no change in hand shape, and to play scissors if you recognize any movement (because that means the opponent is throwing either paper or scissors, and scissors gives a win or a draw)[1].
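A minimal sketch of that decision rule (my own toy illustration, not the source's):

```python
# Toy sketch of the reaction heuristic: if the opponent's hand still looks like the
# default fist (no movement detected), assume rock and throw paper; if any movement
# is detected, assume paper or scissors and throw scissors (win or draw either way).

def reaction_heuristic(movement_detected: bool) -> str:
    """Return our throw given whether the opponent's hand shape is changing."""
    if not movement_detected:
        return "paper"      # no change from the default fist -> likely rock
    return "scissors"       # hand opening -> paper or scissors -> scissors wins or draws

if __name__ == "__main__":
    # quick check against each possible opponent throw
    for movement, opponent in [(False, "rock"), (True, "paper"), (True, "scissors")]:
        print(f"movement={movement}, opponent={opponent} -> we throw {reaction_heuristic(movement)}")
```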
This is of course vulnerable to exploitation once the opponent knows you're using this and they also have good dynamic visual acuity (e.g. the opponent can randomize their default hand-state, diagonalize against your heuristic by inserting deliberate twitches into their hand movement, etc.).
Update to my last shortform on "Why Homology?"
My current favorite frame for thinking about homology is as fixing Poincare's initial conception of "counting submanifolds up to cobordism". (I've learned this perspective from this excellent blog post, and I summarize my understanding below.)
In Analysis Situs, Poincare sought to count m-submanifolds of a given n-manifold up to some equivalence relation - namely, being the boundary of some (m+1)-submanifold, i.e. cobordism. I personally buy cobordism as a concept that is as natural as homotopy for one to have come up with (unlike the initially-seemingly-unmotivated definition of "singular homology"), so I am sold on this as a starting point.
Formally, given an n-manifold $M$ and disjoint m-submanifolds $V_1, \dots, V_k$ of $M$, being cobordant means there is an (m+1)-submanifold $W \subseteq M$ such that $\partial W = V_1 \cup \dots \cup V_k$. These may have an orientation, so we can write this relation as a formal sum $\epsilon_1 V_1 + \dots + \epsilon_k V_k \sim 0$ where $\epsilon_i \in \{+1, -1\}$. Now, if there are many such (m+1)-submanifolds for which the $V_i$ form a disjoint boundary, we can sum all of these formal sums together to get $n_1 V_1 + \dots + n_k V_k \sim 0$ where $n_i \in \mathbb{Z}$.
Now, this already looks a lot like homology! For example, $\partial W = V_1 \cup \dots \cup V_k$ above already implies that the $V_i$ themselves have empty boundary (because the manifold boundary of a manifold boundary is empty, and the $V_i$ are disjoint). So if we consider two formal sums $\sum_i n_i V_i$ and $\sum_i m_i V_i$ to be the same if $\sum_i (n_i - m_i) V_i \sim 0$, then 1) we are considering formal sums of m-submanifolds with empty boundary, 2) up to being the boundary of an (m+1)-dimensional submanifold. This sounds a lot like $H_m = \ker \partial_m / \operatorname{im} \partial_{m+1}$ - though note that Poincare apparently put none of this in group-theoretic language.
So Poincare's "collection of m-submanifolds of $M$ up to cobordism" is the analogue of $H_m(M)$!
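In modern notation, the dictionary (my summary of the above, not the blog post's) is roughly:

$$Z_m \leftrightarrow \{\text{formal sums of closed } m\text{-submanifolds}\}, \qquad
B_m \leftrightarrow \{\text{formal sums bounding an } (m+1)\text{-submanifold}\}, \qquad
H_m(M) = Z_m / B_m \leftrightarrow \{m\text{-submanifolds of } M \text{ up to cobordism}\}.$$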
But it turns out this construction doesn't really work, due to some subtle issues (pointed out by Heegaard). This led Poincare to a more combinatorial alternative to the cobordism idea that didn't face these issues, which became the birth of the more modern notion of simplicial homology.
(The blog post then describes how Poincare's initial vision of "counting submanifolds up to cobordism" can still be salvaged (which I plan to read more about in the future), but for my purpose of understanding the motivation behind homology, this is already very insightful!)
Perhaps relevant: An Informational Parsimony Perspective on Probabilistic Symmetries (Charvin et al 2024), on applying information bottleneck approaches to group symmetries:
... the projection on orbits of a symmetry group’s action can be seen as an information-preserving compression, as it preserves the information about anything invariant under the group action. This suggests that projections on orbits might be solutions to well-chosen rate-distortion problems, hence opening the way to the integration of group symmetries into an information-theoretic framework. If successful, such an integration could formalise the link between symmetry and information parsimony, but also (i) yield natural ways to “soften” group symmetries into flexible concepts more relevant to real-world data — which often lacks exact symmetries despite exhibiting a strong “structure” — and (ii) enable symmetry discovery through the optimisation of information-theoretic quantities.
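For intuition about why the orbit projection is "information-preserving", here is a quick gloss in my own notation (not a formula from the paper): if a group $G$ acts on $X$ and $T = G \cdot X$ denotes the orbit of $X$, then any $G$-invariant target $Y = f(X)$ factors through $T$, so

$$I(T; Y) = H(Y) = I(X; Y), \qquad \text{while in general } H(T) \le H(X),$$

which is why one might hope the orbit projection solves a well-chosen bottleneck / rate-distortion problem of the form $\min_{p(t \mid x)} \; I(X; T) - \beta\, I(T; Y)$: compress $X$ as much as possible while keeping everything relevant to the invariant target.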
Have you heard of Rene Thom's work on Structural Stability and Morphogenesis? I haven't been able to read this book yet[1], but my understanding[2] of its thesis is that the "development of form" (i.e. morphogenesis, broadly construed, e.g. biological or otherwise) depends on information from the structurally stable "catastrophe sets" of the potential driving (or derived from) the dynamics - structurally stable ones, precisely because what is stable under infinitesimal perturbations is the only kind of information observable in nature.
Rene Thom puts all of this in a formal model - and, using tools of algebraic topology, shows that these discrete catastrophes (under some conditions, e.g. on the number of variables) have a finite classification, and thus (in the context of this morphological model) form a sort of finitary "sufficient statistic" of the developmental process.
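For concreteness (standard textbook examples, not something I'm taking from Thom's book directly): the two simplest of the seven "elementary catastrophes" are, for a single state variable $x$ and one or two control parameters,

$$\text{fold: } V(x; u) = x^3 + ux, \qquad \text{cusp: } V(x; u, v) = x^4 + ux^2 + vx,$$

and the classification theorem says, roughly, that with at most four control parameters the structurally stable singularities of a smooth family of potentials fall into just seven such local normal forms.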
This seems quite similar to the point you're making: [insensitive / stable / robust] things are rare, but they organize the natural ontology of things because they're the only information that survives.
... and there seems to be a more speculative thesis of Thom's (presumably; again, I don't know this stuff), where geometric information about these catastrophes directly corresponds to functional / internal-structure information about the system (in Thom's context, the organism whose morphogenetic process we're modeling) - this is presumably one of the intellectual predecessors of Structural Bayesianism, the thesis that there is a correspondence between the internal structure of programs or the learning machine and the local geometry of some potential.
I don't think I have enough algebraic topology background yet to productively read this book. Everything in this comment should come with Epistemic Status: Low Confidence.
From discussions and reading distillations of Thom's work.
Thank you for the suggestion! That sounds like a good idea, this thread seems to have some good recommendations, will check them out.
Consider my vote to be placed on writing that megasequence, I know next to nothing about large cardinals and would be eager to know more about them!