This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices[1]: networks of actions, action-dispositions, action-evaluation criteria, and action-resources that structure, clarify, develop, and promote themselves. If we want AIs that can genuinely support, collaborate with, or even comply with human agency, AI agents’ deliberations must share a “type signature” with the practices-based logic we use to reflect and act.
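To make the contrast concrete, here is a minimal sketch in Python of the two “type signatures” of deliberation. Every name in it (GoalAgent, PracticeAgent, Practice, and the toy State/Action types) is hypothetical and invented for this illustration, not a proposal from the essay; the point is only where evaluation attaches: to outcome states in the first case, to actions as elements of practices in the second.

```python
# A hypothetical sketch, not the essay's proposal: contrasting "type
# signatures" for deliberation. All names here are illustrative.
from dataclasses import dataclass, field
from typing import Callable

State = dict    # a world-state, however represented
Action = str    # an action, however represented

@dataclass
class GoalAgent:
    """Consequentialist signature: score the states that actions lead to."""
    utility: Callable[[State], float]

    def act(self, options: list[Action],
            outcome: Callable[[Action], State]) -> Action:
        # Actions matter only via the terminal states they cause.
        return max(options, key=lambda a: self.utility(outcome(a)))

@dataclass
class Practice:
    """A practice: dispositions, evaluation criteria, and resources
    that structure, clarify, develop, and promote themselves."""
    dispositions: list[Callable[[State], list[Action]]]
    criteria: list[Callable[[Action, State], float]]
    resources: dict = field(default_factory=dict)

@dataclass
class PracticeAgent:
    """Eudaimonic signature: evaluate actions as elements of practices,
    not by the terminal states they cause."""
    practices: list[Practice]

    def act(self, options: list[Action], state: State) -> Action:
        def fit(action: Action) -> float:
            return sum(c(action, state)
                       for p in self.practices
                       for c in p.criteria)
        return max(options, key=fit)
```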
I argue that these issues matter not just for aligning AI to grand ethical ideals like human flourishing, but also for aligning AI to core safety properties like transparency, helpfulness, harmlessness, or corrigibility. Concepts like ‘harmlessness’ or ‘corrigibility’ are unnatural -- brittle, unstable, arbitrary -- for agents who’d interpret them in terms of goals or rules, but natural for agents who’d interpret them as dynamics in networks of actions, action-dispositions, action-evaluation criteria, and action-resources.
While the issues this essay tackles tend to sprawl, one theme that reappears over and over is the relevance of the formula ‘promote x x-ingly.’ I argue that this formula captures something important about both meaningful human life-activity (art is the artistic promotion of art, romance is the romantic promotion of romance) and real human morality (to care about kindness is to promote kindness kindly, to care about honesty is to promote honesty honestly).
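One way to see the self-referential structure of ‘promote x x-ingly’ is as a double constraint, in which the value being promoted filters both the end and the means. The sketch below is purely illustrative; the function names and predicates are invented for this example.

```python
# Illustrative only: 'promote x x-ingly' as a double constraint, where the
# value being promoted also filters the manner of promoting it.
from typing import Callable

Action = str

def promote_x_xingly(promotes_x: Callable[[Action], bool],
                     is_x_ish: Callable[[Action], bool],
                     options: list[Action]) -> list[Action]:
    """Admissible actions must advance x AND be in the manner of x."""
    return [a for a in options if promotes_x(a) and is_x_ish(a)]

# Toy example: promoting honesty honestly. Suppose both actions would
# increase honesty overall, but only one is itself honest.
options = ["tell the truth publicly", "lie to shame others into honesty"]
promotes_honesty = lambda a: True
is_honest = lambda a: not a.startswith("lie")
print(promote_x_xingly(promotes_honesty, is_honest, options))
# -> ['tell the truth publicly']
```

On this rendering, a pure consequentialist would drop the second predicate and accept the lie; the formula rejects it not because of its downstream effects but because of what it is.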
I start by asking: What follows for AI alignment if we take the concept of eudaimonia -- active, rational human flourishing -- seriously? I argue that the concept of eudaimonia doesn’t simply point to a desired state or trajectory of the world that we should set as an AI’s optimization target, but rather points to a structure of deliberation different from standard consequentialist[2] rationality. I then argue that this form of rational activity and valuing, which I call eudaimonic rationality[3], is a useful or even necessary framework for the agency and values of human-aligned AIs.
These arguments rest both on the dangers of a “type mismatch” between human flourishing as an optimization target and consequentialist optimization as a form, and on certain material advantages in stability and safety that eudaimonic rationality plausibly holds over deontological and consequentialist agency.
The concept of eudaimonia, I argue, suggests a form of rational activity without a strict distinction between means and ends, or between ‘instrumental’ and ‘terminal’ values. In this model of rational activity, a rational action is an element of a valued practice in roughly the same sense that a note is an element of a melody, a time-step is an element of a computation, and a moment in an organism’s cellular life is an element of that organism’s self-subsistence and self-development.[4]
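The melody analogy can be made concrete: a note has no payoff of its own, and is evaluated only by how it continues the melody it belongs to. The toy criterion below (preferring small melodic steps) is my own stand-in; real practices have far richer internal criteria.

```python
# Illustrative only: value as constituency rather than consequence.
# A note is not scored by any end it causes, but by how it continues
# the melody it is an element of.
def melody_fit(melody: list[int], note: int) -> float:
    """Toy criterion (invented here): prefer small melodic steps.
    Real practices have much richer internal criteria."""
    if not melody:
        return 0.0
    return -abs(note - melody[-1])

melody = [60, 62, 64]          # MIDI pitches so far
candidates = [65, 72]
best = max(candidates, key=lambda n: melody_fit(melody, n))
print(best)  # 65: valued as an element of the melody, not as a means to an end
```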
My central claim is that our intuitions about the nature of human flourishing are implicitly intuitions that eudaimonic rationality can be functionally robust in a sense critical to AI alignment. More specifically, I argue that in light of our best intuitions about the nature of human flourishing, it’s plausible that eudaimonic rationality is a natural form of agency, and that eudaimonic rationality is effective even by the lights of certain consequentialist approximations of its values. I then argue that if our goal is to align AI in support of human flourishing, and if it is furthermore plausible that eudaimonic rationality is natural and efficacious, then many classical AI safety considerations and ‘paradoxes’ of AI alignment speak in favor of trying to instill AIs with eudaimonic rationality.
This post is a living index of papers in the "theory of appropriateness" sequence. The sequence develops a computational account of normative appropriateness -- the idea that much of human cognition and social behavior can be understood as pattern completion over culturally learned expectations of what is appropriate in a given situation. The papers span the cognitive and social sciences and draw out implications for AI technology governance. I'll update this post as new papers appear.
The full treatment
The sequence began with a long (100+ page) paper covering the full range of topics:
Note that it is not necessary to read the long general paper before moving on to the others on specific subtopics. All papers in the sequence are self-contained.
Humans (cognitive and social science)
These papers focus on the theory of human behavior:
Machines (technology governance)
These papers apply the framework to questions about AI systems -- personhood, alignment, and how we may conceptualize sociotechnical progress. Pragmatism, in the sense of Richard Rorty, is a major theme of this work: