The Teleological Mechanism

by G Gordon Worley III · 3 min read · 19th Jan 2021 · 6 comments


World Modeling

I just wrote this up as a comment, but I think it deserves to be a top-level post because it's an important idea. Additionally, this formulation is crisp enough that folks should be able to usefully engage with it.

In the seminal cybernetics essay "Behavior, Purpose and Teleology" by Rosenblueth, Wiener, and Bigelow (1943), a way of thinking about the concept we might variously call care, concern, telos, or purpose is laid out. This is relevant both to thinking about goal-directed behavior in AI and other non-human systems and to thinking about why humans do things.

I reference this concept a lot, but I've not (yet) had a good reference post to link to about it. Usually I default to pointing at something about Heidegger's Sorge (tr. "care" or "concern"), but Heidegger is notoriously hard to read and lots of people don't like him. There's also no detailed argument there for why care is so important, so I find myself trying to make the case from scratch every time. Hopefully this post will put an end to that.

So in that essay, they first consider systems that have observable behavior, i.e. systems that take inputs and produce outputs. Such systems can be either active, in that the system itself is the source of the energy that produces the outputs, or passive, in that some outside source supplies the energy that powers the mechanism. Compare an active plant or animal to something passive like a rock that only changes when heated by an outside source. Obviously, whether something counts as active or passive depends a lot on where you draw the boundary between its inside and its outside (e.g. is a plant passive because it gets its energy from the sun, or active because it uses stored energy to perform its behaviors?).

Active behavior is subdivided into two classes: purposeful and purposeless. They say that purposeful behavior is that which can be interpreted as directed toward attaining a goal, while purposeless behavior cannot. They spend some time in the paper defending the idea of purposefulness and their vague definition of it. I'd instead propose we think of these terms differently: purposeful behavior is that which creates a reduction in entropy within the system and its outputs, and purposeless behavior is that which does not. This doesn't quite line up with how they think about it, though, so I'm open to arguments that entropy is not the useful place to draw the line here.
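As a toy illustration of the entropy framing (my own sketch, not from the essay), we can measure the Shannon entropy of a system's output states before and after it acts; a drop in entropy is the proposed signature of purposeful behavior:

```python
from collections import Counter
from math import log2

def shannon_entropy(states):
    """Shannon entropy (in bits) of an empirical distribution over states."""
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A purposeless process leaves its outputs as disordered as it found them;
# a purposeful one concentrates them toward a goal state.
before = ["a", "b", "c", "d"]  # maximally mixed over four states: 2.0 bits
after = ["a", "a", "a", "b"]   # concentrated toward "a": ~0.81 bits

print(shannon_entropy(before))  # 2.0
print(shannon_entropy(after))   # ~0.811
```

On this reading, "purposeful" just means the system's outputs end up in a lower-entropy distribution than they started in.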

They then go on to divide purposeful behavior into teleological and non-teleological behavior, by which they simply mean behavior that is or is not the result of feedback (and they specify negative feedback). In LessWrong terms, I'd say this is like the difference between optimizers ("fitness maximizers") and adaptation executors.
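A thermostat is the stock example of teleological, negative-feedback behavior: it observes the gap between the current state and a goal state and acts to shrink it. A minimal sketch (my own illustration, with made-up numbers):

```python
def thermostat_step(temp, target, gain=0.5):
    """One negative-feedback step: apply a correction proportional to the
    error, in the direction that opposes the deviation (hence 'negative')."""
    error = temp - target
    return temp - gain * error

temp = 30.0
for _ in range(10):
    temp = thermostat_step(temp, target=20.0)
# The error halves each step, so temp converges toward the target of 20.0.
```

Flip the sign of the correction and you get positive feedback, which amplifies deviations instead of damping them; that is why the authors single out negative feedback for teleology.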

They then make a few additional distinctions that are not relevant to the present topic, although they do have some relevance to AI alignment via the predictability of systems.

I'd say then that systems with active, purposeful, teleological behavior are the ones that "care", and the teleological mechanism is the aspect of the way the system functions that causes it to care. When we talk about a teleological system or being, we're talking about something that cares because it uses its own power to transform the world into some particular state it's aiming for.

Most "interesting" systems are teleological ones: humans, plants, many types of machines, bacteria, evolution, AI. All have something they care about, something by which they judge one state of the world better than another, and that creates an important difference between them and systems that lack this feature, like rocks and planets and water, which fail to be active, purposeful, or teleological.
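The full classification can be sketched as a toy data structure. The example labels below are my own reading of the post's categories (the rock follows the post's example; the clock and thermostat are standard cybernetics illustrations), not anything authoritative:

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    active: bool        # supplies its own energy for its outputs
    purposeful: bool    # behavior directed at (or reducing entropy toward) a goal
    teleological: bool  # goal-directedness implemented via negative feedback

    def cares(self) -> bool:
        # "Caring" = active + purposeful + teleological behavior.
        return self.active and self.purposeful and self.teleological

systems = [
    System("rock", active=False, purposeful=False, teleological=False),
    System("clock", active=True, purposeful=False, teleological=False),
    System("thermostat", active=True, purposeful=True, teleological=True),
    System("human", active=True, purposeful=True, teleological=True),
]
caring = [s.name for s in systems if s.cares()]  # ["thermostat", "human"]
```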

So when we concern ourselves with purposeful, caring systems, I think this is what we mean: that the system has a teleological mechanism.


Comments

Fundamentally, from the perspective of physics, what is the difference between animate matter and inanimate matter? Living things/non-living things? At which point does a non-living thing become a living thing?

Well, the categories of this post suggest one way we might do it that's more satisfying than the naive way we draw the boundaries of "life".

We could equate life with active systems.

We could equate life with active, purposeful systems.

We could equate life with active, purposeful, teleological systems.

The test would then be to see which of these is most useful as a meaning for "life". Are we happy with the kinds of things that end up in the category? Does it seem natural? Or are these categories cutting at something orthogonal to what we mean by "life", such that we would actually prefer to define it some other way?

I suspect the answer is that by "life" we mean something orthogonal to this classification system such that things we consider alive cut across the boundaries it draws.