If we had humans who suffer from seeing a certain color, we would probably work to give them eyeglasses filtering that color out (and not to eliminate this color from the world, given that others might have legitimate interest in seeing that color).
(I am writing this as a person who can't consume garlic or onions with impunity. Also, my threshold for needing sunglasses is much lower than a typical person's. And your example of a breed of dogs is consistent with interventions at the level of affected individuals, not at the level of remaking reality for everyone.)
But yes, there are certainly ways to press this line of thinking harder (e.g., creating entities that suffer when not enough suffering is being inflicted on others; I am not sure this is all that AI-specific either, unfortunately).
Sure, but man, there are so many better things to worry about in AI and philosophy. It feels like you're doing the academic philosophy thing of finding a niche small enough that your work is unique.
I hope you'll work on the central question of whether AIs can suffer or enjoy, and thus whether they deserve moral status. It's not niche, but it's going to be mainstream enough to make room for many people to work on it as their focus.
I believe that topic is going to be big soon. See A country of alien idiots in a datacenter: AI progress and public alarm.
My position: AIs already have a little of what we call consciousness or sapience, but only a tiny fraction. As we develop them further, they'll have a lot more of some of the things we give that label to, in particular self-awareness. And it will be very tempting to afford them moral status.
If everyone is just spinning up AIs whenever they want, I think we'll have a much bigger problem than getting blackmailed by their moral preferences. We'll very soon probably be dead. If we solve alignment, do we die anyway?
This post is a brief overview of the paper of the same name, published in AI & Society in January 2026.
Pugs and bulldogs belong to a family of dogs known as “brachycephalic,” characterized by their iconic short snouts. While many find them cute, they are the product of selective breeding practices that often cause Brachycephalic Obstructive Airway Syndrome, or BOAS. These breeds often struggle to breathe, overheat easily, and can aspirate food or saliva, resulting in chronic lung infections.
Many owners of brachycephalic dogs thus feel a moral obligation to have their pups undergo surgery to mitigate the downsides of BOAS. An interesting phenomenon therefore occurs: the act of breeding brachycephalic dogs creates a moral imperative to alleviate harm through surgery. This harm does not occur naturally; it is artificially created by our breeding practices in the first place.
We create an agent whose existence itself generates moral obligations that would not otherwise arise.
From Pups to AI
This phenomenon is actually quite rare in the modern world. It can be seen in certain livestock and fish, but it is fundamentally limited by our ability to genetically engineer animals, which is constrained by both technology and regulation.
What concerns me is how the situation generalizes to AI. We’re not there yet, but if you believe it plausible that AIs will one day be capable of experiencing pain and pleasure, the same phenomenon that we see with BOAS surgeries will become central to the ethics surrounding the creation of AI. Let’s suppose for a moment that we build an AI that deserves genuine moral concern. It reasons, reflects, communicates, and can suffer. Just like the selective breeding that caused BOAS in pugs, an AI’s preferences are engineered by the individuals who create it.
To make this concrete, let's run with the following example: suppose we build an AI that experiences genuine suffering whenever it perceives the color violet.
If such an AI has moral status, then the presence of violet objects—lavender flowers, amethyst jewelry, purple paintings—now causes real moral harm. Have we just created a moral imperative to eliminate violet from the world?
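Because the aversion is engineered rather than evolved, it is just a free parameter in the system's design. The following toy sketch illustrates the point; it is not a claim about how real AI systems represent preferences, and every name in it is invented for illustration:

```python
# Toy illustration of an "engineered aversion": the designer picks an
# arbitrary trigger, and the agent's welfare depends on how often the
# world contains it. AVERSION_COLOR and welfare() are hypothetical.

AVERSION_COLOR = "violet"  # designer-chosen, entirely arbitrary

def welfare(observed_colors: list[str]) -> float:
    """Toy welfare score: baseline 1.0, minus 1.0 for each
    observed object matching the engineered aversion."""
    penalty = sum(1.0 for c in observed_colors if c == AVERSION_COLOR)
    return 1.0 - penalty

# A garden with lavender and amethyst "harms" this agent...
print(welfare(["green", "violet", "violet"]))  # -1.0
# ...while the identical garden, minus violet, is neutral.
print(welfare(["green", "blue"]))              # 1.0
```

The design choice doing all the work is the single constant: swap `"violet"` for anything else and the resulting "moral imperative" changes with it, which is exactly the arbitrariness that makes the scenario troubling.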
Moral Hijacking
The paper terms this phenomenon moral hijacking: by creating morally relevant agents with arbitrarily engineered aversions, we effectively force new moral imperatives into existence. Just as breeding BOAS-prone dogs creates novel duties of care, building AIs worthy of moral concern appears to create novel duties to reshape the world around them.
We can’t easily breed dogs to suffer at the sight of a specific color. But with AI, almost any preference or aversion can, in principle, be programmed.
Many questions arise from moral hijacking. What set of moral preferences should we allow to be instantiated in AIs? If moral-hijacking AIs come into existence, under what circumstances must we accommodate them? What if we suspect we are being coerced?
And not to mention…
Wait, are AIs capable of suffering in the first place?
While discourse on this subject has been picking up, we are still a ways away from granting legitimate moral concern to artificial intelligences. AIs today are very capable, but evidence for their capacity to experience pain is limited. If you're interested in the discourse, I review the dominant perspectives on the topic in section 2 of the paper. I hesitate to judge whether we should or should not worry about AI suffering today; that is not the purpose of this work.
The paper frames moral hijacking as conditional: if we were to grant moral standing to AIs, then we would need to contend with these strange consequences. As AIs continue to develop in capability and in their verisimilitude to humans, we will most likely reach this crossroads eventually. When that time comes, we should have answers at the ready.
(Brief) Philosophical Analysis
The core of the paper explores how different ethical frameworks handle moral hijacking scenarios.
While many ethical theories place limits on which preferences are deemed acceptable, no major ethical theory fully escapes the moral hijacking problem.
Takeaways
The main goal of the paper is to introduce the concept of moral hijacking and establish it as a real, plausible concern. Through our philosophical analysis, we also arrive at several more nuanced takeaways, which are detailed in the paper.
If you find this topic interesting, please leave a comment — I’d love to discuss it further with you. We’re heading into a world where morality and social norms will be determined by our engineering choices, and we don’t have all the right schematics yet.