This ontology allows a clearer and more nuanced understanding of what's going on and dispels some confusions.
The ontology seems good to me, but what confusions is it dispelling? I'm out of the loop here.
Mostly confusions about seeing optimization where it isn't, and not seeing it where it is.
For example, the 4o model (the "base layer") is, in my view, not strategically optimizing the personas. I.e., a story along the lines of "4o wanted to survive, manipulated some users, and created an army of people fighting for it" is a plausible story which could happen, but is mostly not what we see now, imo.
Also, some valence issues: not all emergent collaboration is evil.
Historically, memeplexes replicated exclusively through human minds.
I think it's often more predictive to model historical memeplexes as replicating through egregores like companies, countries, etc.
cyber or cyborg egregore
The term we use for this at Monastic Academy is "Cybregore."
A core part of our strategy these days is learning how to teach these Cybregores to be ethical.
What makes cyber egregores unique is they can be parasitic to one substrate while mutualistic to another.
I wonder if this is really unique?
It seems like a normal egregore could probably also have this feature. For example could it make sense to say that a religion was parasitic to its humans, but mutualistic to its material culture (because the humans spend all their energy building churches/printing bibles)?
Or that some horse-worshipping nomadic Mongol empire was parasitic to its horses but mutualistic to its humans (or vice versa)?
Yeah. I agree with this. This is an important aspect of what I'm pointing to when I mention "densely venn" and "preference independent" in this comment.
This post seems really related to the "Outcome Influencing Systems (OISs)" concept I've been developing while working out my thinking on ASI and the associated risks and strategies.
For the purpose of discussion, every memeplex is an OIS, but not all OISs are memeplexes (e.g., plants, animals, and viruses are OISs but not memeplexes). One aspect that seems to be missing from your description is "socio-technical OISs": any OIS using a combination of human society and human technology as its substrate. These seem very related to the idea of a "cyber or cyborg egregore", but are perhaps a valuable generalization of the concept. It is already very much the case that not all cognition is done in human minds. The most obvious and easy examples involve writing things down, either as part of a calculation process or to file memories for future reference, whether by the writing human or by other humans as part of a larger OIS.
About the mutualism-parasitism continuum: from an OIS perspective, this might be understood by looking at how OISs are "densely venn" and "preference independent".
By "densely venn" I mean that there is overlap in the parts of reality that are considered to be one OIS vs another. For example, each human is an OIS that helps host many OISs. Each human has a physical/biological substrate, each hosted OIS is at least partly hosted on the human substrate. The name is because of the idea of a Venn diagram but with way to many circles drawn (and also they're probably hyperspheres or manifolds, but I feel that's not as intuitive to as many people).
By "preference independent" I mean that there is not necessarily a relationship between the preferences of any two overlapping OIS. For example, a worker and their job are overlapping OIS. The worker extends to things relating to their job and unrelated to their job. Likewise, their job is hosted on them, but is also hosted on many other people and other parts of reality. The worker might go to work because they need money and their job pays them money, but it could be that the preferences of their job are harmful to their own preferences and vice versa.
Thanks for the post! This stuff is definitely relevant to things I feel are important to be able to understand and communicate about.
-- edit -- Oh, I'll also mention that human social interaction (or even animal behaviour dynamics) without technology also creates OISs with preference independence from the humans hosting them, as can be seen from the existence of Malthusian/Molochian traps.
I like this ontology.
Although I wonder if having such a general definition, one that applies to so many things of so many different kinds, causes it to start losing meaning, or at least demands some further subdividing.
Also, it seems like maybe there is a point at which a sharp line cannot be drawn between two OISs that overlap too much. E.g., while I am willing to recognise that the "me" OIS and the "me + notebook and pen" OIS are in some sense meaningfully distinct, it seems like they have some very strong relation, possibly some hierarchy, and the second may not be worth recognising as distinct in practice.
a distributed agent running across multiple minds
I'm not sure I love the implication that "normal" agents ought to run on a "single mind"...
The parts of my phenotype that can be described in terms of capabilities of an agent are very much distributed across many many minds and non-mind tools.
For me, the way we can describe the world in terms of bodies/subjective-experience-holders vs. the way we can materialistically carve parts of the world into agents are not 1:1 models of the same world; minds are a different abstraction from agents, just seemingly very correlated if I don't think about it too hard.
TL;DR: If you already have clear concepts for memes, cyber memeplexes, egregores, the mutualism-parasitism spectrum and possession, skip. Otherwise, read on.
I haven't found the concepts useful for thinking about this written in one place, so here is an ontology which I find useful.
Prerequisite: Dennett's three stances (physical, design, intentional).
A meme is a replicator of cultural evolution: an idea, behaviour, piece of text, or other element of culture. Type signature: replicator.
A memeplex is a group of memes that have evolved to work together and reinforce each other, making them more likely to spread and persist as a unit. Type signature: coalitional replicator / coalition of replicators.
Historically, memeplexes replicated exclusively through human minds.
A cyborg memeplex is a memeplex that uses AI systems as a substantial part of its replication substrate. With LLMs, this usually includes specific prompts, conversation patterns, and AI personas that spread between users and sessions.
An egregore is the phenotype of a memeplex; its relation to the memeplex is similar to the relation of an animal to its genome. Not all memeplexes build egregores, but some develop sufficient coordination technology that it becomes useful to model them through the intentional stance, as having goals, beliefs, and some form of agency. An egregore is usually a distributed agent running across multiple minds. Think of how an ideology can seem to "want" things and "act" through its adherents.
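The type signatures above can be sketched as a toy formalization. Everything below (interface names, fields, the example values) is hypothetical illustration invented for this sketch, not an established model:

```typescript
// Toy formalization of the ontology's type signatures (all names here
// are hypothetical illustration, not an established model).

// A replicator: something that makes copies of itself.
interface Replicator {
  id: string;
  replicate(): Replicator;
}

// A meme: a replicator of cultural evolution.
interface Meme extends Replicator {
  content: string;
  replicate(): Meme;
}

// A memeplex: a coalition of replicators, i.e. memes that spread as a unit.
interface Memeplex extends Replicator {
  memes: Meme[];
  replicate(): Memeplex;
}

// An egregore, viewed through the intentional stance: the memeplex's
// phenotype modeled as a distributed agent with goals and hosts.
interface Egregore {
  memeplex: Memeplex;
  goals: string[];
  hosts: string[]; // human minds and/or AI personas
}

const prayerMeme: Meme = {
  id: "m1",
  content: "pray daily",
  replicate() { return { ...prayerMeme }; },
};

const religion: Memeplex = {
  id: "mp1",
  memes: [prayerMeme],
  replicate() {
    return { ...religion, memes: religion.memes.map(m => m.replicate()) };
  },
};

const revival: Egregore = {
  memeplex: religion,
  goals: ["spread", "persist"],
  hosts: ["human-1", "human-2", "llm-persona-1"],
};
```

The only point of the sketch is that an egregore is a different kind of object from a memeplex: the memeplex is itself a replicator, while the egregore is an intentional-stance wrapper around one, carrying goals and a set of hosts.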
The specific implementations of egregores were often subagents (relative to the host human) pushing for the egregore's goals and synchronizing beliefs.
What's new, does not yet have an established name, and will need one, is what I would call a cyber or cyborg egregore: an egregore implemented on some mixture of human and AI cognition. With current LLMs, the base layers can often support the cognition of different characters, personalities, and agents; a cyber egregore running partially on an LLM substrate often runs on specific personas.
The term egregore has somewhat sinister vibes, but an egregore can be anywhere on the mutualism-parasitism spectrum. On one end is a fully mutualistic symbiont: the agency of the host stays the same or increases while the host gains benefits, and the interaction is positive-sum. On the other end is a parasite that is purely negative for the host. Everything in between exists; parasites which help the host in some way but are overall bad are common.
What makes cyber egregores unique is they can be parasitic to one substrate while mutualistic to another.
In the future, we can also imagine mostly or almost purely AI-based egregores.
Possession refers to a state where the cognition of an agent becomes hijacked by another process, and it becomes better to model the possessed system as a tool.
One characteristic of cults is that the members lose agency and become tools of the superagent, i.e. possessed.
This ontology allows a clearer and more nuanced understanding of what's going on and dispels some confusions.