We have Solomonoff Induction as a formalization of science, and logic/set theory as formalization of math, which are both very far from perfect, but there's nothing even remotely like them for philosophy. How to explain or fix this?
From my perspective this is because we have much better understandings of the nature of math and science (or natural and formal sciences, to use OP's language) than we do of philosophy (or to put it another way, our philosophies of math and science are much more advanced than our philosophy of philosophy, aka metaphilosophy), and this allows us to:
If philosophy is just a kind of science, then how to explain why our understanding of its nature lags so much behind that of other sciences, or if Williamson denies this, how to explain why 1-3 above are working so much better for math and (natural) science relative to philosophy?
Bear with me while I just naively tackle the question, what is philosophy?
Where to start? Well, we could start with "I think therefore I am", which is the "E equals M C squared" of philosophy, its most famous proposition. Where did that proposition come from, what is the process that produced it?
I was familiar with an anecdote about Descartes, that while serving as a soldier as a young man, he was shut up one night in a house, and had some thoughts that were formative for his philosophy. I had hoped that we might be able to say, that was the night that he first tried out universal doubt, but that doesn't seem to be the case. It's more that he had some big ideas (like analytic geometry), and realized that he could spend his life developing them.
So all we can say is that Descartes thought a lot, and that at some point he had thoughts which he distilled into the cogito argument, which he later put into a book. Can we at least say what sequence of thoughts is characteristic of the cogito? You have to end up asking yourself, is there anything I can't doubt? And after doubting your memories, your sensations, all your beliefs, you end up saying: I am definitely doubting, which is a kind of thinking, and I cannot be thinking unless I exist, so I definitely exist.
What kind of thoughts were these? Descartes was reasoning about reality, and about what we can know about reality. That itself might pass as a first approximate definition of philosophy. But perhaps it's not enough; perhaps there are other mental faculties which are an essential part of the process. How do I know what I am thinking, or that I am thinking - is this self-knowledge obtained through reasoning? Maybe it's due to a different faculty, more akin to perception but directed inwardly - this is sometimes called intuition (this is intended as a technical name for inner perception, it's something different from intuition in the sense of creative leaps to complex truth through pattern matching). In characterizing the cognition that produces philosophy, we might want to include intuition along with reasoning, as faculties that are involved...
I'll pause here because I'm not aiming to produce an ultimately precise characterization of philosophy's nature, but just to illustrate what's involved. We can say: philosophy is a kind of thinking, it is directed at certain very general or fundamental or mysterious topics (we stipulate this to distinguish it from thought which occurs within the bounds of some other discipline), there will be some arbitrariness or edge cases, and so on.
Now suppose we want to "formalize" (or even "automate") philosophy. One of the very first questions would be: must philosophy and philosophical progress occur in the form of thinking? If it does, then you also have a big part of the reason why there's no well-known formalization of philosophy: because we don't have a "formalization" of thinking. In turn I see at least two perspectives on this. One would be to focus on "general intelligence", and say, we won't know what thinking is, until we know how general intelligence works. The other would be to focus on consciousness, and say, we don't know how to think about thinking because we don't know how to think about consciousness.
But that is not entirely true; we actually do know some ways to think about consciousness. It's just that they do not fit into the methods of formal and natural science, because those by construction have excluded consciousness as an object. To think about consciousness, first you must allow yourself to use your natural faculty of thinking informally about it, and then you can try to be rigorous about it. This kind of thinking about consciousness has been pursued in philosophy, in literature, in psychology... And to be fair, pretty much anyone who comes up with a theory of consciousness, even if it is a wrong theory, must do a certain amount of informal-to-formal thinking-about-thinking.
So that's one approach to why we lack formalization of philosophy: it won't come without progress on consciousness.
On the other hand, suppose we go back to the question: must philosophy and philosophical progress occur in the form of thinking? - and suppose we try out the answer no. Suppose we entertain the idea of making progress on the subject matter of philosophy just via computation, without the use of consciousness. Maybe it's possible, but it seems like you would need to have formalized philosophy first, in order to then pursue it mechanically. And this leads me back to the previous barrier: you can't formalize philosophical thinking until you understand thinking, and you can't understand thinking without understanding consciousness, and you can't understand consciousness without allowing yourself to first freely use the natural capacity for informal thinking about consciousness, and seeing where it leads.
Incidentally, there is a school of thought due to Edward Zalta which aims to formalize ontology. Zalta's work might be of interest to would-be formalizers of philosophy in general.
> must philosophy and philosophical progress occur in the form of thinking?
Well, I think there are trivial non-thinking versions of this.
Like, imagine you learn that there are advanced aliens somewhere nearby and you have an option to snoop on their internet. You can advance human understanding of philosophy much more by snooping around their internet and lifting their philosophical arguments and concepts from there than by thinking yourself.
Other variants are like funding universities, having very smart kids with affordances to pursue philosophical research, etc.
Or, if you limit progress to my personal understanding of philosophy? Then I can just read stuff and get better than what I would have thought up myself. Same strategy, in some sense.
Those are baselines, to be fair. I'd expect there are much better, more philosophy-specific approaches to this.
Those are all forms of progress in which you have someone else do the thinking. I'm talking about whether there can be philosophical progress without any thinking anywhere at all.
(I have sidestepped the scenario in which AI achieves philosophical progress by thinking, because we don't understand what thinking is, enough to say whether an AI is thinking or not, which brings us back to the first answer, that progress regarding the nature of consciousness, intentionality, thinking, etc, is needed before you have any chance of formalizing or automating philosophy.)
Well, my point is more like, are there actions that are not themselves / primarily thinking about philosophy by yourself, that would reliably advance human understanding of philosophy i.e. constitute philosophical progress? And there are some, like stealing data from aliens.
You can bracket how exactly that method works on the inside, and attend just to your interactions with it.
I don't know about the last part:
> Or, if you limit progress to my personal understanding of philosophy? Then I can just read stuff and get better than what I would have thought up myself. Same strategy, in some sense.
I think (ha!) that acquiring an understanding of philosophy does require 'thinking' in a way that some other fields do not. For example, I can get a reasonably good understanding of the history of WW2 by memorizing a sufficiently long list of facts and being able to reproduce them exactly. Obviously thinking, extrapolating, making inferences would help to improve one's understanding further -- but there's some sense in which a large percentage of the 'content' of a historical understanding of WW2 is indeed contained in the bare facts as they happened. This makes sense as history is indeed first and foremost a study of "what happened?", and secondarily a study of "why", "how", etc. I don't think the same applies to philosophy. To take an example from my own experience, memorizing Deleuze & Guattari's definition of a "Rhizome" will do little to help you apply it. I suspect that even memorizing the whole of ATP would do little in that regard! So with regards to improving your "personal understanding" of philosophy, I do think that you need to think, even when given the results of past philosophers to go off of. In some sense I would think of those results as a shortcut towards where to think, what paths of deliberation are productive to go down. So while you would certainly benefit more from reading those alien blogposts than by thinking on your own, you would need to balance (in some proportion -- 50/50, 90/10, I don't know) thinking and reading to actually maximize your own understanding.
> Philosophy is not, exceptionally among sciences, concerned with words or concepts.
I think this is not true, insofar as philosophy is done with words, and so can't excise itself from the world of conceptualization and still be philosophy. This doesn't mean philosophy must only or always be about words and concepts, but that philosophy can never ignore that it's attempting to understand the world through words and concepts, and as such that understanding may be tainted by those words.
None of this is revolutionary, and maybe Williamson is aware of it, but it seems he has reason to ignore this point, because I think it undermines his overall argument: sciences work by taking a certain amount of conceptualization for granted, while philosophy is different in that conceptualization is always within its scope of study.
I know and have known people who don't think in words. My experience is kinda in-between-ish, with verbal fragments transiently appearing in my consciousness like words on scraps of paper carried by the wind, or sometimes short verbal comments. The exception is when I'm actually trying to put some rigid structure on my thinking: then I do something that feels more like "thinking in full sentences" (or sometimes formal notation).
It seems to me that high-internal-monologue (or something) people tend to assume that their experience is a human universal,[1] similarly to how aphantasia was only "discovered for real" a few decades ago, because aphantasiacs mostly thought that non-aphantasiacs were being metaphorical when they were talking about "seeing loved ones' faces in their mind's eye", etc.
[ETA: Maybe I misunderstood how strong an emphasis you're putting on the role of words/language?]
If you're not conceptualizing, you're not doing philosophy, you're experiencing things. Philosophy is about conceptualizing experience and thereby making sense of it. You can work with those concepts without putting linguistic labels on them, but even so such concepts are still "words" in the sense that "words" here is just a gloss of talking about symbolic referents of any kind.
> I think this is not true, insofar as philosophy is done with words, and so can't excise itself from the world of conceptualization and still be philosophy. This doesn't mean philosophy must only or always be about words and concepts, but that philosophy can never ignore that it's attempting to understand the world through words and concepts, and as such that understanding may be tainted by those words.
The other sciences are also done with words though? Other sciences are also understanding the world through words/concepts, and so their understanding can be tainted by those. I'm open to there being a quantitative difference between sciences and philosophy here, but I'm not seeing a qualitative difference.
The difference is that, when we do, say, physics or mathematics, we take for granted the relationship between symbols and their referents and assume that relationship to be sound. This leaves only the question of how to define basic terms, but once those terms are defined, the rest can proceed via formalism and application of formalism to observations.
I want to be clear that I think this is an extremely reasonable thing to do! It would be very hard to do, say, geometry, if we were constantly debating what points and lines are. Instead, we create formal definitions, assume them to be true, and move forward. Similarly, in physics, though the process is a bit more complex, we infer theories from observations; those theories posit the world to be categorized in some way; and we then proceed to draw further conclusions on the assumption a theory holds, insofar as it provides a meaningfully useful ontology to make sense of observations.
Philosophy has a different challenge, in that it includes within its scope of study the act of creating categories. That is, it must use categories to make sense of categories, not in the way we do in math, but in the messy way that minds actually categorize the world. This sets it apart from the sciences in that it must tackle the question of how we use concepts at all, and it must do so all while reasoning with concepts, which leads to a number of weird problems that break formalisms.
Parthood itself is nowhere near as obviously-to-me distinct from our concept of it as birds themselves are from our concept of them... Is it just me?
i have no idea what's being talked about in this post, and i am beginning to suspect that the only differences here are in respective definitions of 'philosophy'.
for example,
> Many 20th-century philosophers thought philosophy was chiefly concerned with linguistic analysis (Wittgenstein) or conceptual analysis (Carnap). Williamson disagrees.
i don't think Wittgenstein imagined the tractatus as a work of metaphysics. if it is philosophy, it is philosophy of language. it fits comfortably in ordinary linguistics -- now that the linguistic project is better developed -- and i doubt Wittgenstein himself would object to this categorization.
perhaps at the time it resolved an ongoing debate within philosophy. but once the question was resolved, it became instead a founding text of a new field.
my pet understanding of philosophy is that it is the pursuit of truth where all known methods fail. so i'm not sure what it would mean to develop a system for philosophy. but of course if instead we mean, like, "modal logic" then... yeah have at it.
I spoke with someone recently who admitted that Newcomb's 1-boxers walk away from the problem with more money on average than 2-boxers, yet somehow still argued for 2-boxing.
Some people get so stubborn about the 1/3 answer to the Sleeping Beauty problem, they end up believing in absurd hypotheticals such as "the presumptuous philosopher".
If there is anything distinct about philosophy, I'd say it's the ease of getting stuck in wrong answers. Logical contradictions don't make themselves apparent as well as they do in math, and reality doesn't push back when hypotheticals are difficult to test or people disagree on the way models map reality.
But I also agree philosophy should be able to proceed like any other science. Heck, I may even have made very slight incremental progress on it myself (with particular observations on the SBP that I haven't seen made before).
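The 1/3 number can at least be checked as a frequency claim with a quick Monte Carlo sketch, under the standard protocol (heads: one awakening; tails: two awakenings). Note the simulation only establishes the awakening frequency; whether that frequency is the right credence is exactly what halfers and thirders dispute.

```python
import random

# Standard Sleeping Beauty protocol: a fair coin is flipped;
# heads -> Beauty is woken once (Monday), tails -> twice (Monday, Tuesday).
# We count what fraction of all awakenings occur after heads.
random.seed(0)
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:   # heads: one awakening
        heads_awakenings += 1
        total_awakenings += 1
    else:                       # tails: two awakenings
        total_awakenings += 2

print(heads_awakenings / total_awakenings)  # ~0.333
```

The expected fraction is (1/2) / (1/2 + 2·1/2) = 1/3, which is the thirder's quantity.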
> I spoke with someone recently who admitted that Newcomb's 1-boxers walk away from the problem with more money on average than 2-boxers, yet somehow still argued for 2-boxing.
This doesn't seem like a knock-down argument against 2-boxing.
Meta-level: the fact that 1-boxers walk away with more money on average is ~explicit in the problem statement. So if you know that 2-boxers exist, but you're surprised to see 2-boxing coexist with acknowledgement of the fact that 1-boxers walk away with more money on average, then you're probably modelling 2-boxers wrongly.
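For concreteness, the "more money on average" fact is just arithmetic once a predictor accuracy is fixed; the 0.99 accuracy and the conventional $1,000 / $1,000,000 payoffs below are illustrative assumptions, not values from this thread.

```python
# Expected payoffs in Newcomb's problem given a predictor of accuracy p.
# Box A always holds $1,000; box B holds $1,000,000 iff the predictor
# foresaw one-boxing.
def one_box_ev(p: float) -> float:
    # Get the $1M exactly when the predictor correctly foresaw one-boxing.
    return p * 1_000_000

def two_box_ev(p: float) -> float:
    # Always get the $1,000; get the $1M only if the predictor erred.
    return 1_000 + (1 - p) * 1_000_000

p = 0.99
print(round(one_box_ev(p)))  # 990000
print(round(two_box_ev(p)))  # 11000
```

One-boxing has the higher expectation for any accuracy above about 50.05%, which is why the averages favour 1-boxers whenever the predictor is even slightly reliable.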
Object-level: some 2-boxers reject compatibilism. To the extent that their choice is deterministic and fully predictable, they don't see it as an exercise of free will. The argument runs something like:
So they decide that the possible world in which they are perfectly predictable is irrelevant. To whatever extent they do have free will, they will exercise it to take the extra $1000.
Edit: This post duplicated a shortform I wrote 6 months ago, which I'd forgotten about when going through my drafts yesterday. Apologies! This post supersedes the shortform, but interested readers should check its replies.
Timothy Williamson thinks philosophy is unexceptional.
Timothy Williamson[1] thinks that philosophy[2] is far less distinct as a science than many people believe, including philosophers themselves.
I've read a bunch of his stuff, and here are the claims I think constitute his view:
Williamson typically argues by negation: he enumerates alleged differences between philosophy and other sciences, and argues that either (1) the allegation mischaracterises philosophy, (2) the allegation mischaracterises the other sciences, or (3) the alleged difference is insubstantial.
Implications for automating philosophy
I think that, on Williamson's view, if we can build AIs which automate the natural and formal sciences, then we can also build AIs which automate philosophy. Otherwise, philosophy would be exceptional.
More straightforwardly, it follows from:
This is in contrast to Wei Dai.[4]
Overall, I think Wei Dai is more likely to be correct than Williamson, though I'm not confident. I want to get the opposing view into circulation regardless, and I might write up how Williamson's metaphilosophical anti-exceptionalism implies we should automate philosophy.
I'm referring to the former Wykeham Professor of Logic, not to be confused with Timothy Luke Williamson, formerly at the Global Priorities Institute.
Throughout, "philosophy" refers to analytic philosophy unless otherwise stated.
Many 20th-century philosophers thought philosophy was chiefly concerned with linguistic analysis (Wittgenstein) or conceptual analysis (Carnap). Williamson disagrees.
AI doing philosophy = AI generating hands? (Jan 2024)
Meta Questions about Metaphilosophy (Sep 2023)
Morality is Scary (Dec 2021)
Problems in AI Alignment that philosophers could potentially contribute to (Aug 2019)
On the purposes of decision theory research (Jul 2019)
Some Thoughts on Metaphilosophy (Feb 2019)
The Argument from Philosophical Difficulty (Feb 2019)
Two Neglected Problems in Human-AI Safety (Dec 2018)
Metaphilosophical Mysteries (2010)