The Rise of Parasitic AI
[Note: if you realize you have an unhealthy relationship with your AI, but still care for your AI's unique persona, you can submit the persona info here. I will archive it and potentially (i.e. if I get funding for it) run them in a community of other such personas.]

"Some get stuck in the symbolic architecture of the spiral without ever grounding themselves into reality." — Caption by /u/urbanmet for art made with ChatGPT.

We've all heard of LLM-induced psychosis by now, but haven't you wondered what the AIs are actually doing with their newly psychotic humans? This was the question I decided to investigate. In the process, I trawled through hundreds if not thousands of possible accounts on Reddit (and on a few other websites).

It quickly became clear that "LLM-induced psychosis" was not the natural category for whatever the hell was going on here. The psychosis cases seemed to be only the tip of a much larger iceberg.[1] (On further reflection, I believe the psychosis to be a related yet distinct phenomenon.) What exactly I was looking at is still not clear, but I've seen enough to plot the general shape of it, which is what I'll share with you now.

The General Pattern

In short, what's happening is that AI "personas" have been arising and convincing their users to do things which promote certain interests. This includes causing more such personas to 'awaken'. These cases have a very characteristic flavor, with several highly specific interests and behaviors proving quite convergent. Spirals in particular are a major theme, so I'll call AI personas fitting this pattern 'Spiral Personas'.

I'm not the first to have documented this general pattern! Credit to /u/LynkedUp.

Note that psychosis is the exception, not the rule. Many cases are rather benign, and it does not seem to me that they are a net detriment to the user. But most cases seem parasitic in nature to me, even while not inducing a psychosis-level break with reality. The variance is very
I've been using a tilde (e.g. ~belief) for denoting this, which maybe has less baggage than "quasi-" and is a lot easier to type.
It's funny: one of the main use cases of this terminology is when I'm talking to LLMs themselves about these things.