To me, this is a clear example of there being no such thing as an "objective" truth about the validity of the parallel postulate: you are entirely free to assume either it or incompatible alternatives. You end up with equally valid theories; it's just that those theories are applicable to different models.
This is true, but there's an important caveat: Mathematicians accepted Euclidean geometry long before they accepted non-Euclidean geometry, because they took it to be intuitively evident that a model of Euclid's axioms existed, whereas the existence of models of non-Euclidean geometry was AFAIK regarded as non-obvious until such models were constructed within a metatheory assuming Euclidean space.
From the perspective of modern foundations, it's not so important to pick one kind of geometry as fundamental and use it to construct models of other geometries, because we now know how to construct models of all the classical geometries within more fundamental foundational theories such as arithmetic or set theory. But OP was asking about incompatible variants of the axioms of set theory. We don't have a more fundamental theory than set theory in which to construct models of different set theories, so we instead assume a model of set theory and then construct models of other set theories within it.
For example, one can replace the axiom of foundation of ZFC with axioms of anti-foundation postulating the existence of all sorts of circular or infinitely regressing chains of membership relations between sets. One can construct models of non-well-founded set theories within well-founded set theories and vice versa, but I don't know of anyone who claims that therefore both kinds of set theory are equally valid as foundations. The existence of models of well-founded set theories is natural to assume as a foundation, whereas the existence of models satisfying strong anti-foundation axioms is not intuitively obvious and is therefore treated as a theorem rather than an axiom, the same way non-Euclidean geometry was historically.
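For concreteness, here is a standard illustration of the contrast (not drawn from any particular anti-foundation axiom system discussed above): the axiom of foundation, and the simplest set it forbids, the Quine atom.

```latex
\textbf{Foundation:}\quad
\forall x\,\bigl(x \neq \emptyset \rightarrow \exists y \in x\ (y \cap x = \emptyset)\bigr)
\qquad\qquad
\textbf{Quine atom:}\quad \Omega = \{\Omega\}
```

Foundation rules out $\Omega$: the only element of $\{\Omega\}$ is $\Omega$ itself, and $\Omega \cap \{\Omega\} = \{\Omega\} \neq \emptyset$. Aczel's anti-foundation axiom (AFA), by contrast, guarantees that the equation $x = \{x\}$ has a unique solution.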
Set theories are particularly good and convenient for simulating other theories, but one can also simulate set theories within other, seemingly more "primitive" theories (e.g. simulating them in theories of basic arithmetic via Gödel numbering).
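As a toy illustration of the Gödel-numbering idea (the function names `encode`/`decode` are hypothetical, and real interpretations of ZFC inside arithmetic use far more elaborate codings), here is one classic way to code finite sequences of naturals as single naturals via prime exponents:

```python
# A toy Gödel numbering: a finite sequence [a0, a1, ..., ak] is mapped
# to the single natural 2^(a0+1) * 3^(a1+1) * ... * p_k^(ak+1); the
# "+1" keeps trailing zeros in the sequence recoverable.

def primes():
    """Yield 2, 3, 5, 7, ... by trial division against earlier primes."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    """Code a finite sequence of naturals as one natural number."""
    g = primes()
    result = 1
    for a in seq:
        result *= next(g) ** (a + 1)
    return result

def decode(n):
    """Recover the sequence by reading off prime exponents in order."""
    seq = []
    for p in primes():
        if n == 1:
            break
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        if exp == 0:
            break  # not a valid code beyond this point
        seq.append(exp - 1)
    return seq
```

For example, `decode(encode([3, 0, 2]))` returns `[3, 0, 2]`; the empty sequence codes to 1.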
Yes, there are ways of interpreting ZFC in a theory of natural numbers or other finite objects. What there is not, however, is any known system of intuitively obvious axioms about natural numbers or other finite objects, which makes no appeal to intuitions about infinite objects, and which is strong enough to prove that such an interpretation of ZFC exists (and therefore that ZFC is consistent). I don't think any way of reducing the consistency of ZFC to intuitively obvious axioms about finite objects will ever be found, and if I live to see a day when I'm proved wrong about that, I would regard it as the greatest discovery in the foundations of math since the incompleteness theorems.
I have spent a long time looking in vain for any reason to think ZFC is consistent, other than that it holds in the One True Universe of Sets (OTUS, henceforth). So far I haven't found anything compelling, and I am quite doubtful at this point that any such justification exists.
Just believing in the OTUS seems to provide a perfectly satisfactory account of independence and nonstandard models, though: They are just epiphenomenal shadows of the OTUS, which we have deduced from our axioms about the OTUS. They may be interesting and useful (I rather like nonstandard analysis), but they don't have any foundational significance except as a counterexample showing the limitations of what formal systems can express. I take it that this is more or less what you have in mind when you say
a view that maintains objective mathematical truth while explaining why we need to work with multiple models pragmatically.
It's disappointing that we apparently can't answer some natural questions about the OTUS, like the continuum hypothesis, but Gödel showed that our knowledge of the OTUS was always bound to be incomplete 🤷♂️.
Having said that, I still don't find the Platonist view entirely satisfactory. How do humans come to have knowledge that the OTUS exists and satisfies the ZFC axioms? Supposing that we do have such knowledge, what is it that distinguishes mathematical propositions whose truth we can directly perceive (which we call axioms) from other mathematical propositions (which we call conjectures, theorems, etc.)?
An objection more specific to set theory, as opposed to Platonism more generally: given a supposed "universe" of "all" sets, its proper classes are set-like objects, so why can't we extend the cumulative hierarchy another level higher to include them, and continue that process transfinitely? Or, if we can do that, then we can't claim to ever really be quantifying over all sets. But if that's so, then why should we believe that the power set axiom holds, i.e. that any of these partial universes of sets that we can quantify over is ever large enough to contain all subsets of ℕ?
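For reference, the cumulative hierarchy mentioned above is standardly defined by transfinite recursion on the ordinals:

```latex
V_0 = \emptyset, \qquad
V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \text{ for limit } \lambda, \qquad
V = \bigcup_{\alpha \in \mathrm{Ord}} V_\alpha
```

The objection trades on the fact that whatever stages $V_\alpha$ we countenance, the collection of all of them looks like one more stage we failed to include.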
But every alternative to Platonism seems to entail skepticism about the consistency of ZFC (or even much weaker foundational theories), which is pragmatically inconvenient, and philosophically unsatisfactory, inasmuch as the ZFC axioms do seem intuitively pretty compelling. So I'm just left with an uneasy agnosticism about the nature of mathematical knowledge.
Getting back to the question of the multiverse view, my take on it is that it all seems to presuppose the consistency of ZFC, and realism about the OTUS is the only good reason to make that presupposition. In his writings on the multiverse (e.g. here), Joel Hamkins seems to express skepticism that there is even a unique (up to isomorphism) standard model of arithmetic that embeds into all the nonstandard ones. I would say that if he thinks that, he should first of all be skeptical that the Peano axioms are consistent, to say nothing of ZFC, because the induction principle rests on the assumption that "well-founded" means what we think it means and is a property possessed by ℕ. I have never seen an answer to this objection from Hamkins or another multiverse advocate, but if anyone claims to have one, I'd be interested to see it.
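The induction principle at issue is the first-order schema (one instance per formula $\varphi$):

```latex
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr) \rightarrow \forall n\,\varphi(n)
```

Its usual justification is precisely that $\mathbb{N}$ is well-founded under $<$: if $\varphi$ failed anywhere, it would fail at a least witness, contradicting the two hypotheses.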
What’s more interesting is that I just switched medications from one that successfully managed the depression but not the anxiety to one that successfully manages the anxiety but not the depression
May I ask which medications?
For macroscopic rotation:
- Blood vessels cannot rotate continuously, so nutrients cannot be provided to the rotating element to grow it.
- Without smooth surfaces to roll on, rolling is not better than walking.
There are other uses for macroscopic rotation besides rolling on wheels, e.g. propellers, gears, flywheels, drills, and turbines. Also, how to provide nutrients to detached components, or build smooth surfaces to roll on so your wheels will be useful, seem like problems that intelligence is better at solving than evolution.
I'm middle-aged now, and a pattern I've noticed as I get older is that I keep having to adapt my sense of what is valuable, because desirable things that used to be scarce for me keep becoming abundant. Some of this is just growing up, e.g. when I was a kid my candy consumption was regulated by my parents, but then I had to learn to regulate it myself. I think humans are pretty well-adapted to that sort of value drift over the life course. But then there's the value drift due to rapid technological change, which I think is more disorienting. E.g. I invested a lot of my youth into learning to use software which is now obsolete. It feels like my youthful enthusiasm for learning new software skills, and comparative lack thereof as I get older, was an adaptation to a world where valuable skills learned in childhood could be expected to mostly remain valuable throughout life. It felt like a bit of a rug-pull how much that turned out not to be the case w.r.t. software.
But the rise of generative AI has really accelerated this trend, and I'm starting to feel adrift and rudderless. One of the biggest changes from scarcity to abundance in my life was that of interesting information, enabled by the internet. I adapted to it by re-centering my values around learning skills and creating things. As I contemplate what AI can already do, and extrapolate that into the near future, I can feel my motivation to learn and create flagging.
If, and to the extent that, we get a "good" singularity, I expect that it will have been because the alignment problem turned out to be not that hard, the sort of thing we could muddle through improvisationally. But that sort of singularity seems unlikely to preserve something as delicately balanced as the way that (relatively well-off) humans get a sense of meaning and purpose from the scarcity of desirable things. I would still choose a world that is essentially a grand theme park full of delightful experience machines over the world as it is now, with all its sorrows, and certainly I would choose theme-park world over extinction. But still ... OP beautifully crystallizes the apprehension I feel about even the more optimistic end of the spectrum of possible futures for humanity that are coming into view.
This all does seem like work better done than not done; who knows, usefulness could ensue in various ways, and the downsides seem relatively small.
I disagree about item #1, automating formal verification. From the paper:
9.1 Automate formal verification:
As described above, formal verification and automatic theorem proving more generally needs to be fully automated. The awe-inspiring potential of LLMs and other modern AI tools to help with this should be fully realized.
Training LLMs to do formal verification seems dangerous. In fact, I think I would extend that to any method of automating formal verification that would be competitive with human experts. Even if it didn't use ML at all, the publication of a superhuman theorem-proving AI, or even just the public knowledge that such a thing existed, seems likely to lead to the development of more general AIs with similar capabilities within a few years. Without a realistic plan for how to use such a system to solve the hard parts of AI alignment, I predict that it would just shorten the timeline to unaligned superintelligence, by enabling systems that are better at sustaining long chains of reasoning, which is one of the major advantages humans still have over AIs. I worry that vague talk of using formal verification for AI safety is in effect safety-washing a dangerous capabilities research program.
All that said, a superhuman formal-theorem-proving assistant would be a super-cool toy, so if anyone has a more detailed argument for why it would actually be a net win for safety in expectation, I'd be interested to hear it.
Correct. Each iteration of the halting problem for oracle Turing machines (called the "Turing jump") takes you to a new level of relative undecidability, so in particular true arithmetic is strictly harder than the halting problem.
The true first-order theory of the standard model of arithmetic has Turing degree 0^(ω). That is to say, with an oracle for true arithmetic, you could decide not only the halting problem, but also the halting problem for oracle Turing machines with a halting-problem-for-regular-Turing-machines oracle, and the halting problem for oracle Turing machines with a halting oracle for those oracle Turing machines, and so on for any finite number of iterations. Conversely, with a single oracle that solves the halting problem for all of these finitely-iterated-halting-problem-oracle Turing machines at once, you could decide true arithmetic.
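In the usual recursion-theoretic notation, the hierarchy being described is:

```latex
\emptyset^{(0)} = \emptyset, \qquad
\emptyset^{(n+1)} = \bigl(\emptyset^{(n)}\bigr)', \qquad
\emptyset^{(\omega)} = \{\langle n, x \rangle : x \in \emptyset^{(n)}\}
```

where $A'$ (the Turing jump of $A$) is the halting problem for Turing machines with oracle $A$. True arithmetic is Turing-equivalent to $\emptyset^{(\omega)}$, whose degree is written $0^{(\omega)}$, strictly above every finite jump $\emptyset^{(n)}$.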
That HN comment you linked to is almost 10 years old, near the bottom of a thread on an unrelated story, and while it supports your point, I don't notice what other qualities it has that would make it especially memorable, so I'm kind of amazed that you surfaced it at an appropriate moment from such an obscure place and I'm curious how that happened.
This is an appealingly parsimonious account of mathematical knowledge, but I feel like it leaves an annoying hole in our understanding of the subject, because it doesn't explain why practicing math as if Platonism were correct is so ridiculously reliable and so much easier and more intuitive than other ways of thinking about math.
For example, I have very high credence that no one will ever discover a deduction of 0=1 from the ZFC axioms, and I guess I could just treat that as an empirical hypothesis about what kinds of physical instantiations of ZFC proofs will ever exist. But the early set theorists weren't just randomly sampling the space of all possible axioms and sticking with whatever ones they couldn't find inconsistencies in. They had strong priors about what kinds of theories should be consistent. Their intuitions sometimes turned out to be wrong, as in the case of Russell's paradox, but overall their work has held up remarkably well, after huge amounts of additional investigation by later generations of mathematicians.
So where did their intuitions come from? As I said in my answer, I have doubts about Platonism as an explanation, but none of the alternatives I've investigated seem to shed much light on the question.