I am very interested in the application of Dennett's stances to LLMs.
I think one thing worth exploring further is that LLMs still display what Dennett (and Davidson) would call "shocking gaps" in understanding -- hallucinations, in other words: places where the intentional stance breaks down, reducing or eliminating the predictive utility of adopting it. Someday, perhaps, LLMs will be as robustly predictable under the intentional stance as non-human animals or even humans, but the shocking gaps that still occur suggest we are not there yet.