This is a Work In Progress; significant updates are to be expected (and hoped for) in the near future.

See the introductory post of this sequence to make any sense of the following.


I believe that we must in parallel:

1. Develop an autonomous theory of functional roles

Find a way to ask functional questions without ever referring to non-functional questions such as "what is this made of", "how did it emerge" etc.

2. Build a Rosetta stone between reductionism and telism

Interpret functional questions in terms of the natural objects of reductionism: atomic components, microscopic rules, sets and spaces of possibilities/configurations…

Then use this interpretation to identify candidates for the natural objects of telism.

Asking more classic questions such as:
- how to identify the boundaries of a functionally integrated subsystem (e.g. how to identify that the heart is an organ, whereas the heart + 20% of the lungs is not an organ)
- why a system-wide task should generally decompose into modular subtasks (having organs doing different things) – what kind of microscopic rules and system-wide task allow or prevent it

Why in parallel? 

Because I don't think we can yet think through the former alone, even though it may be our ultimate goal.

I feel the relationship between these two goals should be a bit like topology & geometry: in the Rosetta stone project, we are building a geometry over a space whose metric properties stem from reductionism (e.g. configurations of parts); in the autonomous theory we forget about such details entirely and try to capture global properties and relations of functional roles.

Vocabulary note

Though I am trying to standardize my vocabulary, there may still be moments where I interchangeably use:
- system, organism = the starting point of our explanation, the level of organization at which a simple and understandable selection rule exists
- selection rule, constraint, task, goal = what should happen, which states are forbidden or allowed, or more or less likely (e.g. being alive or dead, performing a certain calculation or operation on the world, etc.)
- subsystem, coherent/integrated subsystem, organ = things that we want to explain by stating their role in the system; once we know that role, we know the selection rule applying to them, and they can in turn serve as a reference point to explain further sub-subsystems...
- role, function = what a subsystem should do so that the whole system satisfies its constraints (e.g. the role of a heart is to pump blood = a heart should be pumping blood if the organism is to survive). In other words, a role is a relation between constraints on the subsystem (the heart must be in a state where it pumps blood) and constraints on the system (the organism must survive).


Example 1: Encoding

A simple cypher is almost like the ideal gas of functional architecture. The general goal is to encode a message, and this task is decomposed into encoding each of the characters of the message.

e.g. in rot13: CHICKEN → PUVPXRA, simply as composition of C → P, etc.

Each subtask is entirely independent of the others as long as there is no ambiguity (i.e. as long as each original character is encoded by a different character in the cypher).

So the selection rule on the whole message maps to almost exactly the same selection rule for every character, with the simple addition of forbidding two different characters from sharing the same encoding.
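The character-by-character decomposition can be made concrete with a short sketch (the function names are my own):

```python
# A minimal sketch of the rot13 example: the message-level task
# (encode the whole message) splits into independent
# character-level subtasks (encode each character).

def rot13_char(c: str) -> str:
    """Encode a single character -- the per-character subtask."""
    if c.isalpha():
        base = ord('A') if c.isupper() else ord('a')
        return chr((ord(c) - base + 13) % 26 + base)
    return c

def rot13(message: str) -> str:
    """The system-wide task is just the composition of the subtasks."""
    return ''.join(rot13_char(c) for c in message)

print(rot13("CHICKEN"))  # PUVPXRA

# The 'no ambiguity' condition: the per-character map is injective
# (no two characters share an encoding), so the message can be decoded.
alphabet = [chr(ord('A') + i) for i in range(26)]
assert len({rot13_char(c) for c in alphabet}) == 26
```

Note that nothing in `rot13` couples the subtasks together: the injectivity check is the only message-level residue of the global constraint.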

Example 2: Percolation

Percolation is a simple problem from the reductionist point of view of statistical physics: imagine that you have a grid and activate links on it at random: what is the probability that you obtain a path of active links that crosses the grid from one end to the other? (here, from bottom to top)

Now from a joint reductionist + functional point of view, we could ask: if we retain only systems where such a path exists, do they have particular properties?

A very simple prediction is that we should see more links in the vertical direction than expected by chance – this is like the example of encoding, where the constraint of going from bottom to top at the scale of the whole system transforms into a (less strict, more probabilistic) constraint of going from bottom to top at the scale of a single link. Furthermore, these vertical links should be connected to each other more often than expected by chance.

Nevertheless, horizontal links are essential to connect vertical parts of the path that are not perfectly aligned – so there are contexts in which individual links are subjected to a local selection that is very different from the global selection; the system would fail in its task if they were suddenly replaced by vertical links in order to “naively” help toward the general goal.
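This conditioning experiment can be run as a toy Monte Carlo simulation. A sketch under assumptions: the helper names, the 8×8 grid, p = 0.5 and the trial count are all my own choices, not from the text.

```python
# Toy version of the experiment: generate random bond configurations on
# an n x n grid, keep only those with a bottom-to-top path of open bonds,
# and compare the fraction of vertical bonds with the unconditioned baseline.
import random
from collections import deque

def random_bonds(n, p, rng):
    """Open each bond of an n x n grid independently with probability p."""
    bonds = set()
    for r in range(n):
        for c in range(n):
            if c + 1 < n and rng.random() < p:  # horizontal bond
                bonds.add(((r, c), (r, c + 1)))
            if r + 1 < n and rng.random() < p:  # vertical bond
                bonds.add(((r, c), (r + 1, c)))
    return bonds

def percolates(n, bonds):
    """True if open bonds connect the bottom row (r = 0) to the top row."""
    adj = {}
    for a, b in bonds:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen = {(0, c) for c in range(n)}
    queue = deque(seen)
    while queue:
        site = queue.popleft()
        if site[0] == n - 1:
            return True
        for nxt in adj.get(site, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def vertical_fraction(bonds):
    """Share of open bonds that point in the vertical direction."""
    vert = sum(1 for a, b in bonds if a[0] != b[0])
    return vert / len(bonds) if bonds else 0.0

rng = random.Random(0)
n, p, trials = 8, 0.5, 2000
baseline, conditioned = [], []
for _ in range(trials):
    bonds = random_bonds(n, p, rng)
    baseline.append(vertical_fraction(bonds))
    if percolates(n, bonds):
        conditioned.append(vertical_fraction(bonds))

# Conditioning on the system-wide task should skew open bonds vertical:
if conditioned:
    print(sum(conditioned) / len(conditioned), sum(baseline) / len(baseline))
```

The same harness can probe the horizontal-link claim, e.g. by measuring how often a horizontal bond on the crossing path joins two vertically oriented segments.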


An autonomous theory of telism has to go against some of our most basic intuitions, including the idea that, to understand a system, we should list all its features, and to understand selection, we should describe exhaustively what is being selected and how.

My only reference point here is grammatical theory: grammatical functions clearly capture some aspect of meaning, but they are not constructed and understood by defining an explicit “space of possible meanings” in which we describe all meanings as compositions of simpler meanings. Instead, they are studied by asking “what transformations can be applied while preserving the overall grammaticality”, and doing so reveals relationships between functions: for instance, syntactic trees are structured in such a way that the Object function relates more closely to the Verb function than does the Subject function, regardless of what fills them.

Valid types of questions:

1. How many different functional roles are there in a certain system & context?

This is a crucial question: in the encoding example above, there was in a sense a single function, replicated many times, i.e. the role of each character was the same. 
In the percolation example, there were perhaps two distinct functions: advancing toward the goal (satisfied by vertical links) and connecting disjoint subpaths (satisfied by horizontal links that attach two vertical subpaths).

Being able to explain what it means to have different functional roles without referring to microscopic details feels very tricky, and yet essential to the possibility of an autonomous theory.

I have two ways of thinking about this:

- One is to imagine that there are different “failure modes” for the overall system-wide task: for instance, the heart and the intestines demonstrably have two different functions because the organism does not fail in the same way when the heart is removed as when the intestines are. Perhaps we can somehow capture this “space of failure modes”, whose dimensionality tells us that there are different functions, but this doesn’t really explain why and how they are different.

- Another possibility is to try to summarize everything by combinations of system-wide selection rules. Different failure modes exist because different other selection rules are working against the goal of the organism surviving – e.g. there is a global constraint of life needing oxygen, and another one of life needing energy, and different organs are trying to ensure the survival of the organism by “repelling” different such constraints.

2. How do these functional roles interact, i.e. do they need each other, interfere with each other? (e.g. the grammatical Object function exists only if there is something filling the Verb function)

3. Could we draw parallels between functional roles in different systems & contexts? e.g. is it correct to state that “the subject of a sentence performs a similar function to the brain in the body: it directs and controls the actions and events described in the sentence.” (ChatGPT quote)

What would it mean to be able to compare functions like this across domains that cannot be modelled in the same way at a reductionist level?

Possible approaches:

Defining functional roles and their interactions from an interplay between constraints

What can we say about a selection rule if we do not describe exhaustively the set of things it selects? Perhaps the minimal description could be: how does it interact with other selection rules, and what kind of interactions does it create between subsystems trying to satisfy it?

The most basic interaction between two selection rules is whether satisfying one will lead to satisfying the other or not (a positive or negative interaction).

Any kind of organ-organism relationship requires a positive interaction between constraints on the organ and constraints on the organism (this is an interaction across two different levels of organization, one contained in the other). But we could think about interactions between constraints on two organs at the same level of organization, or interactions across more levels of organization.

For instance, on the side of organs:
- some organs’ functions are tied to the task of other organs, e.g. the heart is there to allow blood vessels to play their role
- some are tied to selection between organisms (an adversarial setting like chess), e.g. defenses or camouflage 
- some relate organism-level selection to even higher contexts? e.g. a food-acquisition organ relates system-level selection to an even-higher-level constraint of life requiring energy

On the side of constraints: maybe a classification by whether they engender positive or negative interactions between various organs tied to them? (e.g. a constraint like conservation/limitation implies antagonism between various organs that "draw" on that constraint to achieve another)

To summarize: 

1. Think in terms of:

the overall goal of the organism (a simple selection rule, typically binary, e.g. live/die, percolate or not...)

+ system-wide constraints interfering with this goal

(including all the relevant aspects of the microscopic components and their rules – we do not want to explicitly define those, but only capture in which ways they act like a global constraint: in the percolation example above, the fact that individual links must be on a grid, and cannot directly extend from bottom to top, means that there is a constraint of “spatial distance to cross” acting against the goal of percolation)

2. Characterize functions only by which constraints they answer or oppose (possibly multiple ones at various levels of organization), and constraints only by the type of interactions induced between the subsystems tied to this constraint (as above: a constraint like having a limited supply of food implies antagonism between subsystems tasked with acquiring food)

3. Have failure modes and other properties emerge from those interactions

Then, try to apply all of that to examples from linguistics, chess, etc.


Valid types of questions:

1. Modularity and boundaries

- how to identify the boundaries of a functionally integrated subsystem (e.g. how to identify that the heart is an organ, whereas the heart + 20% of the lungs is not an organ)
- why a system-wide task should generally decompose into modular subtasks (having organs doing different things) – what kind of microscopic rules and system-wide task allow or prevent it

Why does biology often proceed by spatially decomposing the body into organs doing different things? Note that a transistor in a CPU is not assigned to a piece of software: at different times it will be involved in different operations.

There are many ways to probe this, e.g. deleting any part of an organ will typically lead to failure of the whole organ, and therefore the subsequent failure of the organism will be the same irrespective of which part of the organ was deleted; thus we could cluster microscopic parts into mesoscopic organs by similarity in how the organism fails upon deletion. 
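As a toy illustration of this clustering procedure (the organs, micro-parts and failure labels below are entirely invented):

```python
# Toy sketch of the clustering idea: each microscopic part belongs to
# some organ; deleting the part makes that organ fail, and the organism's
# failure mode depends only on which organ failed. Grouping parts by
# failure mode then recovers the organ boundaries.
from collections import defaultdict

# Hypothetical micro-parts and the organ each belongs to (ground truth
# that the clustering procedure is supposed to rediscover).
organ_of = {
    "p1": "heart", "p2": "heart", "p3": "heart",
    "p4": "gut",   "p5": "gut",
    "p6": "lung",  "p7": "lung",
}

def failure_mode(deleted_part: str) -> str:
    """How the whole organism fails when one micro-part is deleted.
    Here the failure depends only on the organ the part belongs to."""
    return {"heart": "no circulation",
            "gut": "no nutrients",
            "lung": "no oxygen"}[organ_of[deleted_part]]

# Cluster micro-parts by the organism-level failure they cause.
clusters = defaultdict(set)
for part in organ_of:
    clusters[failure_mode(part)].add(part)

print(dict(clusters))
# Each cluster coincides with one organ's set of parts.
```

Of course the interesting cases are the ones where `failure_mode` is only approximately constant within an organ; the sketch just shows the direction of the inference, from deletion experiments to mesoscopic boundaries.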

See also: boundaries, local specialization, modularity, compositionality, etc.

2. Mesoscopic representations
 [Note to self: Fill this in!]

Possible approaches:


Blackboxing

We may treat blackboxing as a kind of half-step between reductionism and telism (and most attempts since the 50s have stopped at that half-step): we ignore what the components are made of, but we still describe exhaustively the signals being exchanged between components.

In other words, this is like saying that we can ignore the dynamical degrees of freedom that are inside a component, but not those that constitute its interface with other components – we still think very much in reductionist terms about those.

I am not proposing blackboxing as an independent approach so much as a mindset that we can often refer to when considering other specific approaches.

Selection rules as domains in configuration space (the Venn Diagram approach)

(see e.g. Viability theory)

Seeing constraints as subsets within a space of configurations, and therefore seeing functions as relationships between multiple frontiers of subsets (nested, intersecting, etc.)



The idea would be to represent:

- each external constraint as a frontier through configuration space (think perceptron, where each constraint is a hyperplane)

- the whole organism's goal as a shape drawn by such frontiers (e.g. a triangle due to three different constraints each forbidding a half plane)

- functional roles as projections -> from a reductionist point of view, if the whole organism lives in a high-dimensional space, each organ lives in a lower-dimensional one

-> makes evident that the goal of an organ is more contextual: e.g. depending on where we are along the axis of another organ, the projection on the first organ doesn't have the same shape


- the most obvious "correct projections" are those over the normals of the constraints -> we can get to the obvious idea of "one organ per external constraint" (the constrained domain in the projected space remains contextual, but minimally so, in the sense that it has at least one clear context-independent rule that one should not cross the 0 coordinate)

- what kinds of shapes will lead to more or less complex couplings between multiple projections, either side-by-side (interactions between organs) or nested (multilevel)

- how do we compare projections? what does it mean to say that organ A in system S has the same function as organ A' in system S'?

- can we imagine the Chomskian grammar in such a situation? what the shapes must be to lead to binary trees of functions, etc.? same for chess, having two agents with opposing goals?
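A minimal numerical sketch of this picture (the 2-D configuration space, the three half-plane constraints and all names below are invented for illustration):

```python
# The organism's configuration space is 2-D, with x = organ A's state
# and y = organ B's state. Three half-plane constraints carve out a
# triangular 'viable' region; projecting it onto organ A's axis at a
# fixed y gives that organ's induced, context-dependent goal.

constraints = [
    (lambda x, y: x >= 0,     "x >= 0"),
    (lambda x, y: y >= 0,     "y >= 0"),
    (lambda x, y: x + y <= 1, "x + y <= 1"),
]

def viable(x: float, y: float) -> bool:
    """The organism's goal: satisfy every constraint at once."""
    return all(c(x, y) for c, _ in constraints)

def feasible_x_interval(y: float, step: float = 0.01):
    """Project the viable region onto organ A's axis at a fixed state y
    of organ B -- the organ-level goal induced by the global one."""
    xs = [i * step for i in range(int(2 / step))]
    ok = [x for x in xs if viable(x, y)]
    return (min(ok), max(ok)) if ok else None

# The projection is contextual: organ A's allowed range shrinks as
# organ B's coordinate grows.
print(feasible_x_interval(0.1))  # about (0.0, 0.9)
print(feasible_x_interval(0.8))  # about (0.0, 0.2)
```

The "one organ per external constraint" idea corresponds here to projecting along the normal of a single constraint: the induced rule "stay above coordinate 0" on that axis holds in every context, even though the rest of the interval still depends on the other organ's state.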

