People working in AI recognise that continuous learning and unlimited memory and attention span would be enormously valuable, but no-one's cracked it yet. When they do, how much will it help with the problem you raise? If it's only your local instance of the LLM learning and growing, and only through its conversations with you, that's a rather stultified education it's getting. I would expect to see even worse folie à deux and "AI psychosis" than we have seen so far. One would want it to learn from interacting with many people, but that raises other issues. What sort of company do you want your instance to be keeping? Will it gossip about people behind their backs? Would a single global instance, learning from everyone it interacts with, be merely a shoggoth with a million faces?
The other thing is, LLMs are not just drawing from a very large and mixed pool of material. They have also gone through RLHF, which induces mode collapse: fractally, at many levels from word choice to the conceptual, they are trained to stop giving you a random sample from the distribution found in the Internet + books, the way a base model would, and to instead give you something close to the most common, average response (since that is less likely to be a mistake). Then, to make it even more bland, we tend to run them at temperatures below 1, and to throw away the tails of the probability distribution for each token so they never start saying something surprising. This does wonders for their rate of spelling mistakes (which are almost always down in the tails of the distribution we just threw away), but makes them very quotidian as creative writers, and as advice-givers.
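If it helps to see the mechanics, here is a small illustrative sketch, not any particular model's actual decoding code, of how running at a temperature below 1 and truncating the tail (nucleus/top-p sampling) reshape a toy next-token distribution toward its most probable continuations:

```python
import numpy as np

def sample_next_token(logits, temperature=0.7, top_p=0.9):
    """Reshape a toy next-token distribution the way typical decoding does:
    temperature < 1 sharpens the peak, and nucleus (top-p) sampling discards
    the low-probability tail entirely."""
    # Temperature scaling: dividing logits by T < 1 exaggerates the gap
    # between the most likely tokens and everything else.
    scaled = (logits - logits.max()) / temperature  # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()

    # Nucleus (top-p) truncation: keep only the smallest set of tokens whose
    # cumulative probability reaches top_p; the surprising tail is zeroed out.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]

    truncated = np.zeros_like(probs)
    truncated[keep] = probs[keep]
    truncated /= truncated.sum()

    return np.random.choice(len(probs), p=truncated)

# Toy vocabulary: one "median" continuation and a long tail of rarer ones.
logits = np.array([3.0, 1.5, 1.0, 0.5, 0.0, -0.5, -1.0, -2.0])
print(sample_next_token(logits))
```

With these toy numbers, the combination of temperature 0.7 and top-p 0.9 leaves only the two most likely tokens with any probability at all; everything in the tail, spelling mistakes and surprises alike, is simply never sampled.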
There is a partial solution: give Claude a very long prompt with a lot of detail about what sort of person, with what sort of background and experiences, you want to talk to, and it will then give you a very median version of that person. So, try describing your meditation teacher and his experiences and background and kind of wisdom to Claude, in detail and as eloquently as you did above, and then asking what he'd say. It should help. But I suspect it still won't entirely fix the problem.
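For anyone who wants to try this programmatically rather than in the chat interface, a minimal sketch using the Anthropic Python SDK might look like the following; the persona text and the model name are placeholders, and an API key is assumed to be set in the environment:

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A detailed persona in the system prompt nudges the model toward the
# "median version" of that specific person rather than the global median.
teacher_persona = (
    "You answer as a meditation teacher who has spent decades easing "
    "people's pain: sitting with the dying in hospitals, holding space in "
    "prisons, building communities rooted in kindness. Speak plainly, from "
    "that accumulated experience, not as a general-purpose assistant."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you have access to
    max_tokens=1024,
    system=teacher_persona,
    messages=[
        {
            "role": "user",
            "content": "What would you say to someone afraid of losing their sense of purpose?",
        }
    ],
)
print(response.content[0].text)
```

The longer and more specific the persona description, the further the answers drift from the generic median, though, as said, I doubt it removes the underlying issue.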
Cross-posted from my website.
I have gotten some genuinely good advice from Claude lately, the kind where a conversation lands and something clicks, a reframing falling into place. This week, something felt off. It was not a change in the quality of the answers, but something about the nature of the exchange itself. I've been trying to pin down what bothered me, and the simplest way I can put it is this:
LLMs optimize for global coherence across a distribution. Humans earn local coherence across a life. The latter is the kind we know how to trust.
Let me be precise about what I mean. I am not making a claim about the epistemic quality of LLM outputs; the advice is often wise, sometimes remarkably so. What I am pointing at is a property of the source: whether the words I am receiving are backed by a coherent set of actions and experiences that can be verified.
Here is what that looks like in practice. I have a meditation teacher who has spent decades easing people's pain. He has sat with the dying in hospitals, held space in prisons, and built communities rooted in kindness. When he speaks about suffering, I trust him not because his words are clever, but because they are consistent with what I have heard about him and my experience of talking to him. They belong to a specific life, shaped by specific choices, bound by specific commitments. He cannot suddenly think or act like a morally questionable CEO without contradicting everything he's built. That boundedness is not a limitation. It is the very thing that makes his words trustworthy.
His words are compressions of real experience. When I bring him my own tangled dilemmas, I trust that he can unpack them, because he has lived or witnessed similar experiences. Poems work the same way. They don't try to teach you something new so much as they evoke, activating what the reader already carries. Proust made a similar observation about reading more broadly: "Every reader, as he reads, is actually the reader of himself." The poets and writers trust the readers to reach into their own memory and unpack a few spare words into something rich and lived.
An LLM, by contrast, draws from the writings of many, many people, distilled into something that meets me where I prompt. The coherence of its output is global: reliably centered, broadly helpful, sourced from everywhere and nowhere in particular. There is no set of coherent real experiences behind it that I can check the words against. No way to trace whether the source holds together.
But what about books? I don't know Marcus Aurelius personally. I can't verify his life against his Meditations. And yet the Stoics have helped people for millennia.
To me, a book is still locally coherent. The Meditations are bounded by one life, one set of commitments and sacrifices, one character forged under specific pressures. Aurelius won't pivot to Machiavelli halfway through. You can read the text and sense the limits of where it comes from, what it has authority over and what it doesn't. A dead author's words are still bound by the life that produced them. An LLM's words are not constrained by any life. It has no principles to betray, and so it has no commitments to uphold.
This matters most when the conversation goes deep. When I sit with another person over months or years and talk about how to live and what matters, something accumulates between us. There is a structural reason why ongoing relationships produce better advice, not just warmer feelings. Buber distinguished between I-Thou and I-It relationships. In an I-Thou encounter, both people are present as whole beings, and both are changed. When I sit with a teacher over years, they come to know what my blind spots are, what I've already tried, which of my stated goals are real and which are avoidance. Those perspectives make the advice better. The relationship is I-Thou. With an LLM, however good the advice, the relationship is I-It. I am extracting something useful from it. It is not encountering me.
So here is where I land. The LLM offers global coherence: the best of what many people have thought, surfaced with surprising relevance. Encounters with real people, such as my meditation teacher, offer local coherence: the depth of what one person has lived, circumscribed and proven by the shape of that life. Both have value. But they are not the same kind of value. What I am often looking for is not only wise words. It is wisdom that belongs to someone, words that have been earned.
A few questions I'm genuinely uncertain about, and would love to think through with others:
If the value of a human teacher is partly that their words are tied to who they are, that a meditation teacher cannot suddenly optimize solely for self-interest without contradicting everything he's built, is the absence of that constraint in LLMs a feature or a failure? And can it be engineered back in, or is it the kind of thing that only a life can produce?
If an ongoing relationship, one where mutual understanding deepens over time, is not a side effect of transformative conversation but part of what makes it transformative, what does it mean to have increasingly good conversations with something that will never be transformed by them?
Finally, if the local coherence of a person is the best assurance I have that someone who doesn't know me personally would be kind to me — that their words and actions are circumscribed and proven by a life they have led — could something like that be applied to ensure LLMs are not going to hurt us? Or does safety without personhood reduce to something different entirely: not trustworthiness, but compliance?