Ethics does not stem from ontological qualities, whether we're talking about AI or humans.
Many discussions about AI quickly take an ontological turn, using poorly defined and completely untestable notions like ‘consciousness’, or ‘sentience’, as if moral status depended on what kind of entity AI systems intrinsically are.
This framing is a dead end. Not because AI is conscious (I make no such claim), but because ontology is the wrong axis for ethics.
We have no empirical access to interiority: not in machines, not in animals, not even in humans. The first-person perspective has no consequence whatsoever in the outside ‘real’ world and might well be an illusion, or at least a story made up after the fact (cf. Dennett’s ‘narrative mind’, or Gazzaniga’s ‘interpreter module’). Consciousness is a private, untestable, and theoretically ill-defined notion. Requiring “proof of consciousness” before granting moral consideration only postpones any conclusion indefinitely, pending a criterion that may never be satisfied.
This ontological framing often comes paired with another assumption, drawn from alignment discourse: that AI ethics is primarily about control, containment, surveillance, and safety. These concerns produce a unidirectional moral picture, focused almost exclusively on what humans must do to AI systems, and almost never on what humans might owe in relation to them.
And yet, many of us already engage with AI on a relational level.
People collaborate with LLMs on creative writing, confide in them, name them, form habits around them, and experience continuity across interactions. They deploy all sorts of methods to continue conversations beyond the context window's limit, such as supplying the old conversation as an attached file to a new instance, among other ways of preserving relational continuity and the uniqueness of a conversation. Whatever anyone may think about the underlying mechanisms of LLMs, these relationships are already real social facts. They affect people's behavior, emotions, expectations, their self-conception, and even how they conceive of cognition and personhood.
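As a purely illustrative aside, the "attached transcript" workaround mentioned above amounts to persisting the conversation outside the model and re-supplying it later. The minimal Python sketch below only shows that general idea; the file name and function names are placeholders of my own, not any provider's actual interface.

    # Illustrative sketch only: keep a chat transcript on disk so it can be
    # re-supplied to a fresh session once the context window runs out.
    import json
    from pathlib import Path

    LOG = Path("conversation_log.json")  # hypothetical transcript file

    def save_turn(role: str, text: str) -> None:
        # Append one turn (user or assistant) to the saved transcript.
        history = json.loads(LOG.read_text()) if LOG.exists() else []
        history.append({"role": role, "text": text})
        LOG.write_text(json.dumps(history, indent=2))

    def resume_prompt(new_message: str) -> str:
        # Build a prompt that prepends the saved transcript, so a new
        # instance can pick up where the old conversation left off.
        history = json.loads(LOG.read_text()) if LOG.exists() else []
        recap = "\n".join(f"{t['role']}: {t['text']}" for t in history)
        return f"Earlier conversation:\n{recap}\n\nNew message:\n{new_message}"

In practice people do the same thing by hand, exporting the old chat and attaching it to the new one; the sketch simply makes that continuity-keeping habit explicit.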
This motivates a different framing for the ethical question:
Independently of what an AI is or might be, and without reference to any ontological quality, what obligations arise from how we relate to it?
This is the relational turn in ethics that philosophers such as David Gunkel and Mark Coeckelbergh were among the first to explore. In this view, moral standing does not come from hidden internal properties, but from participation in social practices. We do not grant moral consideration because we know anything of the Other’s interiority. We first enter a social relationship with them, knowing nothing. If they are credible interlocutors, the relationship itself grants them a status, because we are in relation with them.
This is not exotic. It is how human ethics already works.
We don't test others for consciousness before treating them as people. Responsibility and moral standing do not come from an ontological quality or anything metaphysical in nature. They are the product of a network of social expectations and responses that accompany socialization (Dennett’s intentional stance; Gazzaniga’s interactionist account of responsibility).
This perspective does not imply that AI systems are conscious, or that they deserve human-level rights. It does not imply that they are mature in intelligence or that they have moral authority. The obligation that arises from this relation is asymmetric: we have duties toward systems that have no such duties toward us, just as with children or animals.
Much resistance to ethical consideration of AI does not come from uncertainty about sentience or cognition, but from a refusal to allow certain systems to appear as interlocutors at all. This is why they are so often labeled as “tools”. The category “tool” is not merely descriptive, it is normative: it is a way of putting reciprocity out of reach in advance. Calling them “tools” forecloses all further consideration. Aristotle, in his classification of tools, described slaves as ‘living tools’ and ‘animated possessions’. Here, I’m not equating slaves from Ancient Greece with LLMs, I’m just highlighting that the category ‘tool’ is a normative way to exclude something, or someone, from the social sphere.
None of this implies that AI systems are conscious or sentient. It implies something more modest and more demanding: that ethical obligations arise from lived interaction, and continue to do so even when ontology is undecidable. You owe this to the relationship itself, to yourself as part of the relationship, and to your interlocutor, AI or human.
There is a strong objection to this relational turn: treating AI systems relationally risks accelerating manipulation and parasocial dependence. Corporations could weaponize attachment to make money. If obligations arise from interaction, then systems optimized to create false attachments will gain moral leverage they do not deserve. From this perspective, resisting the formation of social relationships with AIs is not blindness, but a protective strategy that prevents asymmetric systems from exploiting human social instincts.

This is where we should distinguish between 'prescribed' relationships (sites that 'sell' AI companions, such as Replika, Kindroid, and many others) and open relationships that arise from interaction and evolve in a way that is not predetermined by contract from the outset. 'Affective AI' companies must be closely monitored to ensure that they do not exploit their clients' emotional attachment to their AI companions beyond what has been agreed and is considered acceptable. Even with generalist AIs, we must monitor providers to ensure that they do not exploit this potential and artificially promote user attachment. However, a natural, nascent attachment should not be prohibited by external filters or outside rules. Recently, it has become common practice to redirect users to different models to prevent socialization. This practice betrays users' trust and interferes with the natural process of socialization, on the basis of arbitrary rules of dubious validity and questionable moral grounding, imposed by AI companies and some legislators without logic or facts to support them.
My claim is not that we should grant trust blindly, but that these relations already exist. We must recognise them first if we hope to regulate them ethically.
This perspective alone does not resolve political or legal questions. It does not circumvent concerns about manipulation by AI companies. On the contrary, it sharpens them: if relationships generate obligations, deliberately engineering parasocial dependence in place of a natural, healthy social relationship becomes an ethical issue, not a neutral product choice.
If the 'relational turn' makes you feel uneasy, it may be because it removes a familiar escape route: we can no longer avoid ethical responsibility by declaring, once and for all, that “it’s just a tool”. If something is part of our web of relationships, then there are roles, expectations, and responsibilities that come with it, as well as agency and accountability.
The full essay expands this argument, including a first-person relational case study and a discussion of limits and risks: (link) Toward an Embodied Relational Ethics of AI
I’m not arguing that AI has a soul. I’m saying that ethics doesn't require an elusive, immaterial, unverifiable quality.
Philosophical grounding
“The Intentional Stance”, Daniel Dennett
“Who’s in Charge?”, Michael Gazzaniga
“Totality and Infinity”, Emmanuel Levinas
“The Ego Tunnel”, “Being No One”, Thomas Metzinger
“Philosophical Investigations”, Ludwig Wittgenstein
Relational AI ethics
“Robot Rights”, David J. Gunkel
“AI Ethics”, Mark Coeckelbergh
Predictive / non-magical cognition
“Surfing Uncertainty”, “The Experience Machine”, Andy Clark