Looking at those notions of "agency", it seems to me that the main thing unifying them is that, according to all of them, agents (are perceived to) do something effectively, in a way that is somewhat "removed" from the observer (the one making the judgment of "agency") or controller. What I mean by this is something like: you don't get to control all of the medium-level behaviors, and/or you don't get to see or understand the medium-level "gears" that move it. E.g., compare an LLM agent with a web crawler. (The latter would have been called a "software agent" 20 years ago, but that term mostly fell out of use over time, perhaps because coding became something nearly everybody does, which justifiably inoculated enough people against perceiving agency in such entities for the collective perception of them to shift.)
This aligns very well with Dennett's intentional stance (see also Abram's Vingean Agency), but the intentional-stance framing leaves me somewhat unsatisfied, because it largely kicks the can down the road. The intentional stance says that a thing is an agent (or: it makes sense to call a thing an agent) to the extent that modeling it as an agent is predictively useful (and since what is predictively useful depends on what one is trying to predict, which depends on one's goals, different fields may have developed different, more specialized (sub-)notions of agency, as you're saying in the post). Fair enough. But why does the concept of an "agent" compress some entities/processes and not others? What makes a thing "tick" in an agent-y-seeming way?[1]
On a separate note, it might be useful to look for past examples of similar attempts to unify/cohere a concept that spans multiple disciplines, each understanding it differently. In particular, it looks to me like the notion you're reaching for is that of a "boundary object":
In sociology and science and technology studies, a boundary object is information, such as specimens, field notes, and maps, used in different ways by different communities for collaborative work through scales. Boundary objects are plastic, interpreted differently across communities but with enough immutable content (i.e., common identity across social worlds and contexts) to maintain integrity.
...
Boundary objects are said to allow coordination without consensus as they can allow an actor's local understanding to be reframed in the context of a wider collective activity. Similarly, Etienne Wenger describes boundary objects as entities that can link communities together as they allow different groups to collaborate on a common task.
At a glance, the Wikipedia article doesn't give many examples. ChatGPT, when asked for conceptual examples, lists "agency", along with (i.a.) "gene", "information", "rationality", "signal", "representation", "complexity", and "function",[2] although the last one is probably more a case of polysemy than a true boundary object. I don't see much interesting shared structure between mathematical functions and biological functions.
There's also an important phylogenetic aspect, in that concepts sometimes spread their scope more than is "justified", especially if they are normatively laden or otherwise ambitious/profound/deep-seeming (e.g., so many things get called "evolution" when they are more accurately described as "development" or simply "change"). So, in general, a concept might span multiple domains that interpret it differently because of selfish-replicator-like dynamics, rather than because group epistemology has latched onto something important. (To be clear, I mostly don't expect this to be the case for agency, at least not for the domains you've listed here.)
See also: https://www.lesswrong.com/posts/KpD2fJa6zo8o2MBxg/consciousness-as-a-conflationary-alliance-term-for
FWIW, it's plausible that Dennett would agree. Schwitzgebel, who holds similar views, agrees at least somewhat.
Caveat, though, that those examples might have been skewed by my agent foundations-coded system prompt.
What would be useful: The feature table collapses the interesting parts. What's interesting is how the features are represented and decomposed across the various domains. In control theory, "memory" is the state; in biology, it may be the genome; and in cognitive science, it decomposes into episodic, semantic, and procedural memory. A checkmark flattens all of these into a single bit (see the sketch after these points for one way a cell could carry more structure). A translation guide between the various formalisms, showing where they do and don't correspond, would be a far more valuable artifact, and sounds like what you're planning.
If you've not come across it yet, Agency Is Frame-Dependent is short but worth adding to your reading list.
Missing domains: Philosophy of action. Also, immunology is a good area to stress-test agent taxonomies.
Methodology: "Write first, then verify" risks anchoring to your initial framing. I'd recommend interviewing experts with open questions and synthesizing afterward.
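To make the first point concrete, here is a minimal sketch of what a richer table cell could look like. (The formalism strings and decompositions below are illustrative assumptions on my part, not claims about any particular paper.)

```python
# Sketch: record *how* each field represents a feature, not just whether it has it.
from dataclasses import dataclass, field

@dataclass
class FeatureTreatment:
    present: bool       # roughly what the checkmark encodes today
    formalism: str      # how the field represents the feature
    decomposition: list = field(default_factory=list)  # sub-notions the field distinguishes

memory = {
    "control_theory": FeatureTreatment(
        present=True,
        formalism="state vector x_t carried forward by the system dynamics",
    ),
    "evolutionary_biology": FeatureTreatment(
        present=True,
        formalism="genome as an inherited record of past selection",
    ),
    "cognitive_science": FeatureTreatment(
        present=True,
        formalism="multiple memory systems with distinct mechanisms",
        decomposition=["episodic", "semantic", "procedural"],
    ),
}

for domain, treatment in memory.items():
    print(domain, "->", treatment.formalism)
```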
Now that you say that, some more domains (admittedly with subset-y/intersection-y relations to other domains in the table): decision theory, reinforcement learning, ethology, maybe some (socio-)political theory (my impression is that there's a lot of slop and noise in there, but also some gems; the Key Phenomena curriculum might have some interesting materials on this, AFAIR).
Another potentially useful lens: https://en.wikipedia.org/wiki/The_Major_Transitions_in_Evolution. See also Section 5.3 "De-darwinization" in Darwinian Populations and Natural Selection.
Good point about the checkmarks; the communication question here is how to convey nuance in a way that is useful and carves reality at its joints. In philosophy it can be useful to draw a sort of necessary-and-sufficient-conditions diagram, and that was part of the inspiration for the table.
I'm not really sure how to model it in a simplifying way, though; levels of coarse-graining and all that.
It's quite funny with Agency Is Frame-Dependent, as I literally link to it in the post :D
Finally, on the point about anchoring bias: I agree to some extent. The one worry I have is that, without an existing idea of where to project the information, it can be hard to relate what someone says to underlying theories and to a frame that can be combined with other fields. So there's a trade-off between the purity of the information and the directedness one can bring to the questions.
AI was used to turn around 20 minutes of talking into the full post. The post was then fully edited by a human (multiple times).
In our phylogeny of agents post, we argued that different scientific fields have evolved different conceptions of agency and that this would be useful to study. Control theory, economics, biology, cognitive science, and AI research all use the word "agent" to mean different things. Why is this? The basic idea is to take the stance of someone studying a natural phenomenon in the real world: we put on our anthropologist hat and say, “huh, I wonder if there’s anything to gain in exploring why that is?”
What we argued for in the phylogeny post was that we should look at the evolutionary history of agency. Yet in order to look at that history, we first need to know what the existing conceptions in the different fields actually are.
We’ve been wanting to do this for some time, but we don’t know exactly what a good output would look like, so before embarking on this longer project we would like to get some feedback on what outputs would be useful.
Following Dennett's intentional stance and ideas similar to DeepMind's "Agency Is Frame-Dependent" paper, we're treating agency as a compression strategy that observers use. Our frame is that different fields compress differently because they face different prediction challenges, and that they therefore treat what an agent is differently. We want to map those compressions across fields and understand why they differ.
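To gesture at what we mean by “compression” here, a deliberately toy sketch (our own cartoon example, not drawn from any of the fields below): a bug on a line that always steps toward a food cell. A mechanistic model has to tabulate the dynamics for every configuration, while the intentional-stance model ascribes a goal and predicts from it in one line.

```python
# Toy illustration of the intentional stance as compression (illustrative only).
import random

N = 50  # world size

def step(pos: int, food: int) -> int:
    """Ground-truth dynamics: the bug steps one cell toward the food."""
    if pos < food:
        return pos + 1
    if pos > food:
        return pos - 1
    return pos

# "Physical stance": tabulate the next position for every (pos, food) pair.
transition_table = {(p, f): step(p, f) for p in range(N) for f in range(N)}

# "Intentional stance": ascribe a goal and predict from it.
def agent_model(pos: int, food: int) -> int:
    return pos + (food > pos) - (food < pos)

# Both models predict the bug equally well...
for _ in range(1000):
    p, f = random.randrange(N), random.randrange(N)
    assert transition_table[(p, f)] == agent_model(p, f)

# ...but the table grows with the size of the world, while the goal-based
# description stays a one-liner.
print(f"table entries: {len(transition_table)}; agent model: one line")
```

Different fields, on this framing, face different versions of this trade-off, which is part of why their compressions differ.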
It’s better to see this as an operation trying to gather clues rather than an operation trying to solve the problem. That is, we’re not making an ontological claim that Dennett’s intentional stance is what agency is; rather, we’re suspending our disbelief and trying to see if treating “agency” from the intentional-stance perspective leads to interesting observations that might then provide evidence for or against theories of agency.
The Plan
We're going to write a series of short posts, one per domain. Each post will take a specific field and try to compress its conception of agency down to its core: what's the concrete system this field treats as its canonical agent, what features does its model require, and why does that compression make sense given what the field is trying to predict?
We'll try to identify the sub-functions and sub-modules that each field treats as essential, draw connections to how other fields handle the same features differently, and look at where bridging functions exist between fields — places where the same underlying structure shows up in different mathematical clothing. (The broader methodology behind this — why we think cross-field composition with verification is the right approach — is described in A Compositional Philosophy of Science for Agent Foundations.)
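One well-known example of the kind of bridge we have in mind (included purely as an illustration): under linear-Gaussian assumptions, the Kalman filter's measurement update from control theory is the same operation as the Bayesian posterior update that many cognitive-science models of perception use. A quick one-dimensional check:

```python
# The same belief update in "control theory clothing" (Kalman gain) and in
# "Bayesian clothing" (precision-weighted averaging). Numbers are arbitrary.
prior_mean, prior_var = 0.0, 4.0   # belief about a hidden state
z, meas_var = 2.5, 1.0             # noisy measurement and its variance

# Control-theory form: correct the estimate with a Kalman gain.
K = prior_var / (prior_var + meas_var)
kalman_mean = prior_mean + K * (z - prior_mean)
kalman_var = (1 - K) * prior_var

# Bayesian form: combine precisions (inverse variances).
post_precision = 1 / prior_var + 1 / meas_var
bayes_mean = (prior_mean / prior_var + z / meas_var) / post_precision
bayes_var = 1 / post_precision

assert abs(kalman_mean - bayes_mean) < 1e-9
assert abs(kalman_var - bayes_var) < 1e-9
print(kalman_mean, kalman_var)  # identical posteriors, different formalisms
```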
For each domain, we're going to write our initial take and then verify it with an actual expert in that field. We'll invite a researcher onto a conversation or podcast where we present our characterization and ask them to correct it — what did we get right, what did we get wrong, what are we missing? Each domain will then have both a written post and a recorded conversation.
We can't say exactly what each post will look like because the expert conversations will shape them. But the rough sequence of domains we're planning:
We’ll see what happens after this but we might try to put together a paper with the findings.
Potential Artefacts
We've been thinking about different ways of expressing the outputs of this project and we want to figure out what will be most useful. One artifact we've considered is a comparison table, something like a matrix of fields against features (goal-directedness, memory, strategic reasoning, theory of mind, etc.) showing which features each field treats as necessary versus optional for its models to work:
| Feature | Beh. Econ | Evolutionary | Dev. Biology | AI/Robotics | Control Theory | Cog. Sci |
|---|---|---|---|---|---|---|
| Goal-Directedness | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Memory | ✓ | optional | ✓ | ✓ | optional | ✓ |
| Strategic Reasoning | ✓ | optional | optional | ✓ | optional | ✓ |
| Theory of Mind | optional | — | optional | optional | — | ✓ |
| Feedback Control | — | optional | ✓ | ✓ | ✓ | ✓ |
Table: This is an initial table we created from a literature review of around 10-15 papers in each field. It’s a bit long and we’re not certain about its validity, so see this more as a potential output than something verified.
We're looking into this artefact among other artefacts, but we're not fully sure what would be most useful for studying what an agent is. Maybe we should try to draw a commutative diagram between the different concepts? Maybe a clustering model is useful (a rough sketch of that idea is below)? If you have thoughts here, we would love to hear them.
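To gesture at what the clustering option might look like, here is a minimal sketch that embeds each field as a feature vector taken from the table above and compares them. (The numeric encoding of ✓/optional/— is an assumption we would have to justify, and the table itself is unverified.)

```python
# Sketch: compare fields by how they treat the features in the table above.
# Encoding assumption: ✓ = 1.0, optional = 0.5, — = 0.0.
from itertools import combinations

# Feature order: goal-directedness, memory, strategic reasoning,
# theory of mind, feedback control.
fields = {
    "Beh. Econ":      [1.0, 1.0, 1.0, 0.5, 0.0],
    "Evolutionary":   [1.0, 0.5, 0.5, 0.0, 0.5],
    "Dev. Biology":   [1.0, 1.0, 0.5, 0.5, 1.0],
    "AI/Robotics":    [1.0, 1.0, 1.0, 0.5, 1.0],
    "Control Theory": [1.0, 0.5, 0.5, 0.0, 1.0],
    "Cog. Sci":       [1.0, 1.0, 1.0, 1.0, 1.0],
}

def distance(a, b):
    """L1 distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Print field pairs from most to least similar; rough clusters show up as the
# low-distance pairs (e.g., Evolutionary and Control Theory end up close).
for d, f, g in sorted(
    (distance(fields[f], fields[g]), f, g) for f, g in combinations(fields, 2)
):
    print(f"{f:>14} vs {g:<14} distance {d:.1f}")
```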
What We Want From You
What would be useful? If this project produced one thing you'd actually use or reference, what would it be? A table? A translation guide? A set of diagnostic questions? Something else?
On domains: Are there fields we're not covering that would significantly change the picture? Mechanism design, multi-agent systems, and artificial life all sit awkwardly across our current categories. What else should be on the list?
On experts: Who should we be talking to?
On the frame: Does the Dennett-style frame-dependent approach seem productive, or do you think agency really is a natural kind rather than a compression strategy? How do you make the anthropologist strategy as useful as possible?
On connections: If you work across fields and have noticed places where different agent concepts create confusion or where translations between fields have been productive, we'd love to hear about it.
This work is being done at Equilibria Network. If it sounds interesting, we're looking for people to collaborate with on various projects, so do reach out!