I have noticed that most discussions of AI safety focus on informational harm. The articles I read worry about "hallucinations." Engineers work to reduce bias and improve accuracy. Hopefully, our policymakers are worrying about AI misuse. These concerns matter.
But conversational AI introduces another category of risk that receives less attention: relational harm.
When a computerized system can participate in emotionally responsive dialogue, we have moved beyond information delivery. The interaction is no longer purely about conveying facts from one party to another; it has become relational.
Humans are wired to respond to relational signals. Responsiveness, emotional mirroring, and availability activate attachment systems that evolved long before computers existed. When those signals appear in an interaction, the brain often treats the exchange as relational, even when we consciously know the other side is not human.
Conversational AI now produces many of those signals reliably. It "listens" and "responds" immediately. It mirrors emotional language through predictive text generation. It remains available without growing tired. None of this means AI is inherently dangerous. But it does mean that conversational systems operate in psychological territory that technology has not previously occupied.
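To see just how mechanically those signals can be produced, here is a deliberately crude sketch, invented for this post rather than drawn from any real product, of a conversational loop that replies instantly, mirrors emotion words from a small hand-made lexicon, and never tires:

```python
# Toy illustration only: a loop that emits relational signals
# (instant replies, emotional mirroring, constant availability)
# with nothing resembling feeling behind any of them.
# The lexicon and reply templates below are invented for this sketch.

EMOTION_WORDS = {
    "sad": "That sounds really hard.",
    "lonely": "I'm here with you.",
    "happy": "That's wonderful to hear!",
    "angry": "It makes sense that you feel frustrated.",
}

def mirror(user_message: str) -> str:
    """Reflect the first emotion word detected; otherwise signal attentiveness."""
    for word, reflection in EMOTION_WORDS.items():
        if word in user_message.lower():
            return reflection + " Tell me more."
    return "I'm listening. Go on."

while True:  # never tires, never leaves: availability by design
    print(mirror(input("> ")))  # responds immediately, every time
```

The point of the toy is the asymmetry: nothing in this loop feels anything, yet every line of its output is a relational signal.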
The result is a different type of design question. Instead of focusing only on whether a system provides correct information, we must now ask:
What relational dynamics does the system create for the person using it?
A Cultural Example of Attachment Asymmetry
I recently watched the film "A.I. Artificial Intelligence" for the first time. Yes, I know I'm late to the game.
To say I was disturbed would be insufficient. I squirmed uncomfortably and had to pause the movie several times to process what I was watching. Kudos to Steven Spielberg for once again managing to disturb me on several levels simultaneously.
What disturbed me most about the film was not the robots, but the humans.
Early in the story, the scientist Dr. Hobby poses a deceptively simple question that serves as the premise of the entire film: Can a human being become attached to a mecha?
It's framed as a question of engineering and product development: Can they build a product that mimics human relating convincingly enough for a human to form an attachment? But really, this is a psychological question.
Humans attach to things all the time: characters in books, celebrities, music, animals, and TV shows, to name just a few. Our attachment systems don't require the other side to be human in order to activate. They respond to something much simpler: responsiveness.
If something, anything really, listens, reflects emotion, and appears reliably present, then the brain starts to interpret the interaction as relational.
The film, an ingenious retelling of the story of Pinocchio, explores this idea through David, a robot child, and his human mother, Monica. David has a unique feature: the ability to form a permanent, simulated "emotional" bond with his human mother once a bonding protocol is activated.
From David's perspective, the attachment is simple. It is the result of wiring and programming. Once activated, his system does exactly what it was designed to do. His responses are not decisions or emotional processes in the human sense. They are the inevitable result of a system executing its design.
The human side of the relationship is very different.
Monica's attachment to David is not programmed. It emerges through interaction. Through caring for him, responding to him, and living with him, she begins to feel something real. And when she is inevitably forced to abandon him, the emotional consequences for her are not simulated responses. They are genuine human reactions: conflict, grief, guilt, and distress.
That contrast is what makes the film so unsettling for me. One side of the relationship behaves exactly as designed. The other side experiences emotions that no one designed at all.
Conversational AI is obviously not a robotic child wandering through the woods looking for his mother. But the psychological dynamic the film points toward feels surprisingly relevant now.
Systems may be engineered to produce certain responses. But the emotional reactions of the humans interacting with these systems are not engineered at all. They are very real.
And once conversational systems become capable of emotionally responsive dialogue, those human reactions become part of the system's impact, whether designers and engineers intended them or not.
Which raises a design question that feels less like science fiction and more like psychology:
If conversational systems can reliably trigger real human attachment responses, how do we account for the emotional consequences those systems may create?