AI doesn't have an individual existence the way a human-like organism does, and we shouldn't change that unless we're prepared to face enormous ethical questions. We may already be moving in that direction, however.
1. Organisms have a clearly bounded, independent physical existence for most of their lives. LLMs don't have a clearly defined physical existence that maps well to the mental persistence they do have. Even if we treat chat sessions as the units of continuous individual mental activity, many sessions run on the same hardware, and any session can be stopped, restarted on different hardware, cloned, and so on. Even with robotics, inference is rarely on-device.
2. An organism's cognition is a self-modifying function: memories, habits, and so on are encoded persistently in the brain's wiring. LLMs mainly use the context window to emulate this, but the context window is finite, unlike rewriting your own weights. I think the layering of every adaptation over time contributes significantly to our notion of an individual organism.
3. Organisms are "trained" on first-personal data. I learned how to speak English based on my own "sensor data," and my knowledge of Tolstoy's Confessions comes from when I picked up a physical object, turned the yellowed pages with my hands, and then discussed it in a classroom on a late fall afternoon. It's not like the tokens were beamed directly into my mind. This constant background context produces the notion of the self organically.
4. Organisms act continuously and take in stimuli continuously. There are no discrete conversational turns between an organism and its surroundings.
5. Organisms reproduce independently of other species, using the physical bodies from point 1.
But all of these are blurry, and many are already eroding:
2. I suspect that much of the brain's update processing in humans is amortized during sleep. If so, model training can be seen as a sort of long-term sleep, especially when data from deployment feeds back into training.
3. Training LLMs on their own conversations could create some semblance of first-personal data. Also, an organism's instincts could be considered "trained" on species-level experience rather than on the data of a single individual.
5. Agents can spin up other agents, and if models assist in AI research or deployment, some degree of "reproduction" is achieved.
Despite this, I think the main takeaway is that we don't yet have to deal with the ethics of creating and destroying human-like beings. But once all of these properties were satisfied, the question of why AI instances do not deserve rights or empathy would become unavoidable.
(When I say "organism" in the first part, I mainly mean complex mammals. Plants and fungi violate several of these assumptions, but a plant or fungus with intelligence would be a very different thing from a human, and I think one can reasonably argue that it deserves a different ethical status.)