Cross-posted from my website.
I have gotten some genuinely good advice from Claude lately, the kind of conversation that lands well, where something clicks, like a reframing. This week, something felt off. It was not a change in the quality of the answers, but something about the nature of the exchange itself. I've been trying to pin down what bothered me, and the simplest way I can put it is this:
LLMs optimize for global coherence across a distribution. Humans earn local coherence across a life. The latter is the kind we know how to trust.
Let me be precise about what I mean. I am not making a claim about the epistemic quality...