In late 2024, a viral meme captured something unsettling about our technological moment.
The “Claude Boys” phenomenon began as an X post describing high school students who “live by the Claude and die by the Claude”—AI-obsessed teenagers who “carry AI on hand at all times and constantly ask it what to do” with “their entire personality revolving around Claude.”
The original post was meant to be humorous, but hundreds online genuinely performed this identity, creating websites like claudeboys.ai and adopting the maxim. The meme struck a nerve because the behavior—handing decisions to an AI—felt less absurd than inevitable.
It’s easy to laugh this off as digital-age absurdity, a generational failure to develop independent judgment. But that misses a deeper point: serious thinkers defend far more extensive forms of AI deference on deep philosophical grounds. When human judgment appears systematically unreliable, algorithmic guidance starts to look not just convenient but morally necessary.
And the idea has a long philosophical pedigree: from Plato’s philosopher-kings to Bentham’s hedonic calculus, there is a tradition of arguing that rule by the wiser or more objective is not merely permissible but morally obligatory. Many contemporary philosophers and technologists see large-scale algorithmic guidance as a natural extension of this lineage.
One of the strongest cases for AI deference draws on what’s called the “outside view”: the practice of making decisions by consulting broad patterns, base rates, or expert views, rather than relying solely on one’s own experience or intuitions. The idea is simple: humans are fallible and biased reasoners; if you can set aside your personal judgments, you can remove this source of error and become less wrong.
This approach has proven its worth in many domains. Engineers use historical failure rates to design safer systems. Forecasters anchor their predictions in the outcomes of similar past events. Insurers price policies on statistical risk, not individual hunches. In each case, looking outward to the record of relevant cases yields more reliable predictions than relying on local knowledge alone.
Some extend this reasoning to morality. If human judgment is prone to bias and distortion, why not let a system with greater reach and reasoning capacity decide what is right? An AI can integrate different forms of knowledge, model complex interactions beyond human cognitive limits, and apply consistent reasoning without fatigue or emotional distortion. The moral analogue of the outside view aims for impartiality: one’s own interests should count for no more than those of others, across places, times, and even species. In this frame, the most moral agent is the one most willing to subordinate the local and the particular to the global and the abstract.
This idea is not without precedent. Philosophers from Adam Smith to Immanuel Kant to John Rawls have explored frameworks that ask us to imagine ourselves in standpoints beyond our immediate view. In their accounts, however, the exercise remains within one’s own moral reasoning: the perspective is simulated by the individual whose choice is at stake.
The outside view invoked in AI deference is different in kind. Here, the standpoint is not imagined but instantiated in an external system, which delivers a judgment already formed. The person’s role shifts from reasoning autonomously toward a moral conclusion to receiving, and potentially acting on, the system’s recommendation. This shift changes not just how we decide, but who does the deciding.
If you accept an externalized moral standpoint—and pair it with the belief that the world should be optimized by AI—a challenge to individual judgment follows. From within this framework, it is not enough that AI be merely accurate. If it can reliably outperform human deliberation on the metrics that matter morally, then AI deference (as opposed to using it merely as a tool) may be seen as not only rational but ethically required.
Consider four arguments a proponent might make. First, human moral judgment is systematically biased and distorted, so shifting to an external standpoint removes a persistent source of error. Second, an AI can integrate far more knowledge and model interactions well beyond human cognitive limits. Third, it can apply consistent, impartial reasoning without fatigue or emotional distortion. Fourth, if such a system reliably outperforms human deliberation on the metrics that matter morally, then deferring to it maximizes expected moral value, making deference look not just rational but obligatory.
While sophisticated, these arguments from human weakness rest on a fundamental misunderstanding of what human flourishing actually entails. The core flaw is not merely that these systems might misfire in execution, but that they aim at the wrong target: they treat the good life as a static set of outcomes rather than an unfolding practice of self-authorship.
Even on its own terms, the framework faces internal contradictions. First, if AI deference is justified on the grounds that we “have almost no idea what the best feasible futures look like,” then we are also in no position to be confident that maximizing expected value is the right decision rule to outsource to in the first place. Second, if AI systems shape our preferences while claiming to satisfy them, how can we know whether reported satisfaction reflects genuine well-being or merely desires engineered by the system itself?
Beyond these internal tensions, AI deference also carries systemic risks. If too many people act in accordance with a single decision rule, society becomes fragile (and boring). The experimentation and error that fuel collective learning—what Hayek called “the creative powers of a free civilization”—begin to vanish. Even a perfectly consistent maximization regime can weaken the conditions that make long-term success and adaptation possible.
The philosophical stance one takes has decisive practical consequences. AI deference commits us—whether we intend to or not—to systems whose success depends on shaping human behavior ever more deeply. This approach to AI development inevitably leads toward increasingly sophisticated forms of behavioral modification. Even “soft” optimization treats human value the wrong way: as something to be managed rather than respected.
The path forward requires approaches that preserve human autonomy as intrinsically valuable—approaches that cultivate free agents, not Claude People.
This means designing AI systems that enhance our capacity for good choices without usurping the choice-making process itself, even when exercising that capacity inevitably leads to mistakes, inefficiencies, and suboptimal outcomes. The freedom to choose badly is not a regrettable side effect of autonomy; it’s constitutive of what makes choice meaningful. An adolescent who makes poor decisions while learning to navigate the world is developing capacities that no algorithm can provide: the hard-won wisdom that comes from experience, failure, and gradual improvement. And sometimes, those failures do not end well. The risk of real loss is inseparable from the dignity of directing one’s own life.
This distinction determines whether we build AI systems that treat humans as the ultimate source of value and meaning, or as sophisticated optimization targets in service of abstract welfare calculations. The choice between these approaches will shape whether future generations develop into autonomous agents capable of self-direction, or become increasingly sophisticated dependents.
Perhaps the most telling aspect of the Claude Boys phenomenon is not its satirical origins, but how readily people embraced and performed the identity. If we’re not careful about the aims and uses of AI systems, we may find that what began as ironic performance becomes earnest practice—not through teenage rebellion, but through the well-intentioned implementation of “optimization” that gradually erodes our capacity for self-direction.
The Claude Boys are a warning: the path from guidance to quiet dependence—and finally to control—is short, and most of us won’t notice we’ve taken it until it’s too late.
Cosmos Institute is the Academy for Philosopher-Builders, with programs, grants, events, and fellowships for those building AI for human flourishing.