We should embrace AI as a generator of new ideas. Actually, let me rephrase that: AI will soon be (or possibly already is?) able to generate novel philosophical approaches in greater volume than you, and to express its reasoning neatly and efficiently in a way that would probably take you hours of sloppy whiteboard sketching before you even got anywhere near an intelligible written hypothesis and proof. I do not think this is a bad thing at all, for several reasons:
Generating novel approaches does not prevent humans from doing so too; we are simply increasing output
Novel output does not guarantee good philosophy, but more output = greater potential for finding an approach that turns out to be useful
The AI is not providing interpretations, so there is no risk of getting trapped in a reductive prior unless philosophers all choose laziness
The aim of philosophy is internal, so what matters is not who generates a theory but who interprets it in order to understand themselves and their place in the world
These four claims roughly respond to the three most obvious arguments I can anticipate against letting AI generate philosophical theories, which are:
Philosophy is "our thing"
AI will generate bad philosophy
Letting AI generate theories misses the point of philosophy
I'll try to deal with these in order, but first I want to point out why I've chosen to call what the AI will do "generating philosophical theories". This is because I take it that doing philosophy is pretty much inseparable from the activity I describe in claim 4, or that it is something like the experience of dialectic, rather than the gap between the state of having no theory and the state of having one. This distinction is a bit difficult to conceptualise, seeing as humans have until now been the only generators of philosophy, and they come to their theories by dialectic; but there is no reason a random jumble of words produced by some random word combinator could not happen to coincide with an expressible sentiment that also happens to strike a listener as novel and profoundly deep. The combinator does not need to understand anything it has produced for it still to be intelligible to something else; and while I do not want to rule out that the AI could try to do claim 4 too, it makes no difference to the argument, so I don't care for now.
First, then, let's take counterargument 1 and see what we can do with it. I'm a philosophy major at UChicago and I meet a lot of other humanities majors, all of whom hate AI for coming after the arts and humanities. They make lots of claims about how "AI can't have original thought" and "___ (insert their major here) is just simply too human of a field for AI to do". It seems obvious to me that these arguments are crazy copes that boil down to "this is my thing", and that is not a very strong argument at all. The first is simply untrue - AIs are more than capable of generating unique combinations of words or concepts that express new, intelligible ideas, and they will continue to improve at this. The second appears less silly, but only because it validates our instinctively anthropocentric biases and can be said with such sincerity, or smugness, or both, that one feels it must be true when presented with it. But let's think about this rationally. Firstly, what is actually even meant by "too human"? This seems unclear to me. For ease, let's assume it's qualia or something like this. Why, when you are doing archaeology, does the AI filling in the blanks of your model reconstruction of an ancient city make that model useless? Your modern qualia will not necessarily translate to that city's in any meaningful way that makes your model so much more profound. In philosophy, the AI can totally speculate as to why we feel certain ways about stuff like ethics, and it could have a point without understanding what it's saying. I mean, we draw inspiration from random things in nature all the time, and it's not like trees get epistemology. This is obviously how we should treat the AI content.
Quickly, on two: I feel this has largely been addressed by the section above. Let me just add that human philosophers can always discard whatever the AI comes up with. Nor are ideas sacred; we can cannibalise them in any way we see fit. I don't think this makes us into a slop filter either; the models I am envisioning are sufficiently developed that most of their takes are neither batshit nor illogical. It is also not inconceivable that the model itself decides it can only come up with a finite number of new ideas. That's fine too. Either way, the worst case is that we end up back where we are now. AI doing bad philosophy is at worst net neutral, because we can discard it, and, let's be honest, the field is not unfamiliar with bad theories being loudly trumpeted and then subtly kicked under the rug.
The third is cool to discuss and probably the most difficult to address. I like the ancients very much, and Plato would certainly say that the generation of ideas via dialectic is philosophy; it would be a misunderstanding of the project to claim otherwise. My approach then seems to contain an intractable issue, because it removes the necessity of turning oneself over and over again to come to a theory. But this is only true if you choose to unhinge your jaw and swallow whatever the AI generates whole. Arguably, the people who do this are not losing anything, because odds are they were doing the same with human-generated theories before this. If you are a critical thinker, you evaluate the ideas, dissect, reconstruct, and then you are still doing dialectic to the same degree as before. Perhaps this is less of a rush than knowing the idea was fully yours, but first of all, how many people experience that at all? Most philosophy professors will never have an original take on first principles that is not built off anything - we are all modifiers and interpreters. Moreover, I would argue that the discovery that some idea holds water is actually just as good a feeling insofar as it is a gain in knowledge or understanding. Whatever extra glow original thought carries beyond that gain is probably just ego: if you have just arrived at an epiphany about your internal condition after interacting with some stimulus, you will first be excited about that fact itself, and only afterwards might you think, "wow, I'm so cool for coming up with that". But this thought, delicious as it is, is not at all necessary to the project of understanding, which is what philosophy is.
So that's why we should let the AI have a go. I do not want to discount the danger of people letting AIs interpret for them and then failing to do dialectic at all, or locking into some mid prior, or simply making their philosophical exposure more homogeneous, but these are problems people can actually take measures to avoid by not being lazy. This is easier said than done, but loads of people failing to live up to their potential has never been a reason not to treat them as beings who possess it. If we want to defer so badly, there are plenty of ways to do so (read my brilliant friend Carolanne's substack on not deferring), so we should not hold ourselves back out of some misplaced sense of responsibility. Whether we decide it is more useful as a sounding board, as something from which to steal parts, or even as something which can generate coherent theories, there is real use for AI in philosophy, and it will potentially come up with good stuff. Let's let it.