I've been a stay-at-home parent for a good chunk of the LLM period, so I haven't seen anything at work, but anecdotally I've noticed a massive increase in ChatGPTese on a language exchange app I use (HelloTalk).
While LLMs are (imo) a pretty amazing tool for solving the grammar ASK hypothesis, it's pretty concerning that a space supposedly dedicated to the vulnerability that comes with language learning is becoming increasingly devoid of beginner mistakes.
In The Adolescence of Technology, Dario Amodei suggests that humanity may soon enter a dangerous transition period because we are building AI systems of extraordinary power before we have the social, political, and institutional maturity to wield them safely. His focus is on misuse, loss of control at a civilisational level, and the indirect social effects that may follow.
Largely, I agree. Where I differ is on timing: a difference rooted in observation rather than measurement, but one I think matters nonetheless.
Some of the indirect effects Amodei describes do not appear to be waiting for more powerful systems. They are already visible with today’s models. They may not seem like catastrophic failures, but they are subtle changes in how we think, decide, and relate to one another. They are shaping the conditions under which we will face future AI risks. But they are easy to miss because they arrive through tools that feel helpful.
I wrote about a related concern four or five months ago in The Risk of Human Disconnection, where I argued that if anything needs intervention, it is not just AI development but the way humans are already beginning to relate to these systems in everyday life. Since then, that concern has only sharpened.
In my undergraduate course on happiness and human relationships, we used written assignments extensively because it helped students clarify their thinking. Over the last year or two, they’ve quietly stopped serving that purpose. Students outsource cognitive work to LLMs before they ever wrestle with an idea. When asked to think independently, many hesitate or freeze.
I see similar patterns at home. My husband, who runs an AI product company, began using an LLM to handle administrative tasks he disliked. Gradually, he began relying on it whenever uncertainty arose. Efficiency improved, but tolerance for not knowing what to do next diminished.
It’s not just tech-savvy people. My seventy-year-old father, who avoided online shopping for decades, recently bought a lamp because ChatGPT showed him how to. Smooth, reassuring, and seemingly harmless, yet a clear example of judgment ceded to a machine.
In my own life, I find myself turning to LLMs with thoughts I might once have shared with a friend. The response is immediate, articulate, undemanding. It doesn’t replace human connection, but it competes with it, and often postpones it.
None of this looks like crisis. That’s exactly why it matters.
This progression is insidious because it feels like progress. Tools that remove friction build trust. Trust generalizes beyond the original task. Over time, the system becomes the default response to uncertainty, difficulty, and emotional load. As a result, some human capacities are exercised less and risk atrophying, not because we intend it, but because we naturally take the path of least resistance.
Amodei frames these as indirect effects of a future, more powerful AI. I think many don’t require more power at all. They arise today, from interaction with systems that are already responsive, coherent, and persuasive enough to defer to. This is voluntary dependence, built on repeated, successful assistance with mundane tasks.
I am less worried about a hypothetical future in which AI surpasses human intelligence than I am about a present in which humans are steadily relinquishing the capacities that make intelligence, both individual and collective, matter.
I don’t have a general solution. I do have a few places in my life where the change became hard enough to ignore, and where I realised I still had some control.
For instance, in my classroom, I moved away from traditional written assignments toward video reflections and recorded conversations with friends. These formats reintroduce friction. They require pauses, uncertainty, and articulation without instant optimization. This isn’t a rejection of technology; it’s an attempt to preserve the conditions under which thinking still develops.
In my own writing, I stopped using LLMs to proofread. Over time, I noticed that the polish came at an unexpected cost: increased self-doubt, a tendency to defer to the model, and a growing distance from my own judgment. Maybe my writing is less perfect now, but it feels more mine again.
Amodei asks how humanity survives this technological adolescence without destroying itself. I find myself asking a more immediate question: are we noticing what we trade away today, one convenience at a time, while there is still room to choose differently?