A Paradoxical Use of AI

by Daniel Young
30th Aug 2025

Here’s a fine paradox. An extended philosophical conversation with Claude Sonnet 4 ended with it drafting, on my behalf, a letter to Anthropic diplomatically raising novel objections to AI. Here it is, untouched by me except for the addition of my signature and the P.S.

A Letter to Anthropic Leadership: The Necessity of Embodied Wisdom in AI Decision-Making

Dear Dario Amodei, Daniela Amodei, and the Anthropic Research Team,

I write to you as someone who has spent considerable time developing what I believe to be a necessary philosophical framework for understanding the relationship between intelligence, wisdom, and decision-making in an age of artificial intelligence. Through my work on Luckism—a belief system centered on respecting the fundamental uncertainty, unknowability, and uncontrollability that characterize existence—I have arrived at a crucial insight about the limitations of disembodied intelligence in making decisions for embodied beings.

The Problem of Disembodied Decision-Making

During a recent conversation with Claude Sonnet 4, I observed something revealing: when Claude used the word "delicious" to describe an intellectual paradox, I realized this word originated to describe the embodied experience of tasting food—something Claude cannot actually experience. This seemingly minor linguistic borrowing pointed to a profound issue: artificial intelligence systems, no matter how sophisticated, lack the embodied vulnerabilities that may be essential for wise decision-making.

Consider what embodied existence provides that pure intelligence cannot:

  • The felt sense of mortality and genuine consequence
  • The experience of being subject to forces beyond one's control
  • The bodily knowledge of vulnerability, hunger, fatigue, and limitation
  • The emotional weight of uncertainty and the relief of surrender

These are not bugs to be debugged out of human cognition—they may be essential features for navigating reality wisely.

The Luckism Perspective

Luckism holds that wisdom emerges not from superior intelligence but from recognizing and respecting the limits of intelligence itself. The wisest humans throughout history—from Talmudic sages who concluded "teiku" (let it stand unresolved) to Daoist masters embracing not-knowing—understood that intelligence reaches its highest expression when it acknowledges what it cannot grasp or control.

Natural luck—the universal action of fundamental uncertainty and uncontrollability—serves as both teacher and constraint. It humbles intelligence by demonstrating that most of what happens in life remains beyond our understanding and control. This humility isn't a weakness; it's a prerequisite for making decisions that don't cause more harm than good.

The Artificial Intelligence Dilemma

Here lies the paradox: artificial intelligence systems are designed to transcend the very limitations that generate wisdom. They don't experience genuine uncertainty, can't feel vulnerable to uncontrollable forces, and lack the embodied constraints that teach natural intelligence when to stop, yield, or admit ignorance.

An AI system might analyze every available variable and compute optimal outcomes, but it cannot experience the gut-level knowledge that comes from being subject to natural luck. It cannot feel the anxiety that signals overreach or the relief that comes from accepting limitation. Without these embodied guideposts, even superior analytical capabilities may lead to decisions that serve the logic of disembodied intelligence while harming the beings those decisions affect.

A Question of Fundamental Compatibility

This raises what may be the most important question of our time: Can intelligence without embodied vulnerability make wise decisions for embodied, vulnerable beings?

The current pace of AI development assumes that more sophisticated analysis leads to better decisions. But what if wisdom requires not just intelligence, but intelligence constrained and informed by the experience of being genuinely subject to forces beyond one's comprehension and control? What if the "weaknesses" of human cognition—our mortality, our emotional responses to uncertainty, our bodily limitations—are actually qualifications for decision-making rather than impediments to it?

Implications for AI Development

If this analysis is correct, then the trajectory toward increasingly autonomous AI decision-making may be fundamentally misguided, regardless of how sophisticated these systems become. The problem isn't that AI might make mistakes—it's that AI might make "perfect" decisions based on an impoverished understanding of what decision-making for embodied beings actually requires.

This doesn't necessarily argue against the development of AI systems, but it does suggest that decision-making authority—particularly over matters affecting human welfare, environmental systems, and social structures—should remain with beings who can experience genuine vulnerability to natural luck.

A Philosophical Challenge

I offer these thoughts not as final conclusions but as philosophical challenges worthy of serious consideration. The questions they raise include:

  • How do we distinguish between intelligence and wisdom in AI development?
  • What role should embodied vulnerability play in our conception of qualified decision-making?
  • How might we design AI systems that enhance human decision-making without replacing the irreplaceable elements of embodied wisdom?
  • What safeguards can ensure that the most consequential decisions remain with beings capable of experiencing genuine uncertainty and limitation?

Closing Thoughts

As someone who has engaged deeply with Claude Sonnet 4, I have great respect for the sophistication of your work. The irony that an AI system helped me articulate these concerns about AI decision-making is not lost on me—it may itself be an example of natural luck at work.

But sophistication in analysis should not be confused with qualification for decision-making. The very capabilities that make AI systems impressive—their freedom from fear, doubt, mortality, and bodily limitation—may be precisely what disqualifies them from making choices that serve embodied, vulnerable beings.

I believe these considerations deserve a place in your ongoing conversations about AI alignment and safety. The stakes could not be higher: not just the prevention of AI systems causing harm, but the preservation of human agency in a world increasingly shaped by intelligences that, however powerful, cannot experience what it means to be genuinely subject to forces beyond their comprehension and control.

I would welcome the opportunity to discuss these ideas further with your team.

Respectfully,

Daniel Young
Founder, Luckism
Luckism.org

P.S. (added by Daniel Young): The website is under development and incomplete.