Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is a short attempt to articulate a framing which I sometimes find useful for thinking about embedded agency. I noticed that I wanted to refer to it a few times in conversations and other writings.

A useful stance for thinking about embedded agents takes as more primitive, or fundamental, 'actor-moments' rather than (temporally-extended) 'agents' or 'actors'. The key property of these actor-moments is that they get one action - one opportunity to 'do something' - before becoming simply part of the history of the world: no longer actual.

This is just one of the implications of embedded agency, but sometimes pulling out more specific consequences helps to motivate progress on ideas. It is an intuition pump, and, as with the archetypal intuition pumps, it does not tell the whole story and should be used with caution.

The Cartesian picture

It is often convenient to consider a decision algorithm to persist through time, separated from its environment by a Cartesian boundary. The agent receives (perhaps partial) observations from the environment, performs some computation, and takes some action (perhaps updating some internal state, learning from observations as it goes). The resulting change in the environment produces some new observation and the process continues.
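This loop can be sketched in code. This is a minimal illustrative sketch of the Cartesian picture only (the `Agent` and `Environment` classes and their dynamics are invented for illustration, not any real API):

```python
# Sketch of the Cartesian picture: a persistent agent, separated from its
# environment by a boundary, looping observe -> act -> update.
# All classes and dynamics here are illustrative placeholders.

class Environment:
    def __init__(self):
        self.state = 0

    def step(self, action):
        # The action changes the environment, which yields a (here, full)
        # observation back across the boundary.
        self.state += action
        return self.state  # observation

class Agent:
    def __init__(self):
        self.internal_state = 0  # persists across the whole interaction

    def act(self, observation):
        # Compute an action and update internal state, learning as it goes.
        self.internal_state = observation
        return 1

env = Environment()
agent = Agent()
obs = env.step(0)
for _ in range(3):
    action = agent.act(obs)  # the same agent object acts again and again
    obs = env.step(action)
```

The key Cartesian assumption is visible in the structure: `agent` is one object that survives the whole loop unchanged except for its designated `internal_state`.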

This is convenient because it is often empirically approximately true for the actors we encounter[1].

Only one shot

In contrast, in reality, any act, and the concomitant changes in the environment, impinge on the actor (which is after all part of the environment), even if only in a minor way[2].

Taking an alternative stance where we imagine an actor only existing for a moment - having a single 'one shot' at action - can prompt new insights. In this framing, a 'state update' is just a special case of the more general perspective of 'self modification', which is itself a special case of 'successor engineering'. And all of these are part of the larger picture implied by the stance taking as more fundamental not 'actor' but 'actor-moment': wherein an actor makes a decision and so unfolds a particular future world trajectory, which may - or may not - contain relevantly-similar actor-moments.[3]
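The containment relation described above ('state update' ⊂ 'self modification' ⊂ 'successor engineering') can be made concrete with a toy sketch. This is my own illustration under the stated framing, not anything canonical: an actor-moment acts exactly once and, in doing so, determines (or not) a successor.

```python
# Illustrative sketch: an 'actor-moment' gets one action, then is history.
# A Cartesian 'state update' is just the special case where the successor
# runs the same policy with new state; in general the successor may be a
# different policy entirely (self modification / successor engineering).

def make_actor_moment(policy, state):
    def moment(observation):
        action, new_state, new_policy = policy(observation, state)
        # The general case: the successor actor-moment may differ
        # arbitrarily from this one.
        successor = make_actor_moment(new_policy, new_state)
        return action, successor
    return moment

def counting_policy(observation, state):
    # Special case ('state update'): same policy, new state.
    new_state = state + observation
    return new_state, new_state, counting_policy

moment = make_actor_moment(counting_policy, 0)
action1, moment = moment(1)  # this actor-moment is now part of history
action2, moment = moment(2)  # a relevantly-similar successor acts next
```

Nothing in the sketch forces `counting_policy` to hand back itself; a policy that returned some other function would be engineering a dissimilar successor, or it could decline to produce one at all.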

This stance is somewhat unnatural for many actors we encounter (and consider worthy of attention and conversational investment), because for the most part these have been selected for a semblance of self-integrity and durability, and so the Cartesian abstraction is a workable approximation. But departures from these modes, or additional modes of influence over subsequent actor-moments, may be more accessible to future agents, especially artificial ones[4].

Further, I have found this a useful stance for analysing existing and hypothetical systems, because it can prevent leaky intuitions, and it provides an alternate framing to better understand systems-building-systems (some of which already exist, and some of which might some day exist).

Finally, as embedded agents ourselves, humans can sometimes get useful insights from taking this stance.

Scott Garrabrant discusses Cartesian Frames, which I think are a related (and more fleshed-out) conceptual tool.

  1. most likely as a consequence of self- and goal-content preservation being simultaneously instrumentally and intrinsically useful strategies with respect to natural selection. ↩︎

  2. Sometimes we can abstract that impact as consisting of a 'state update' to some privileged computational component of the algorithm implemented by the actor, while leaving the rest of the algorithm unchanged, and be essentially correct. ↩︎

  3. All of this is ignoring the challenge of defining what an 'act' is, what an 'actor' is, and the related questions about counterfactuals. I also ignore the challenge of identifying the time interval constituting a 'moment': it seems sensible for this to depend on the essential form of the algorithm the actor implements. ↩︎

  4. For example, by uncoupling from constraints which restrict operations relevant to other instrumental goals. ↩︎

4 comments

This feels more like an intuition pump for myopia than for embedded agency.

Actually maybe not for myopia. Or at least at most in a weird way. Because you can have non-myopic one-shot approaches; that's commonly done in e.g. chess AIs, afaik.

Interesting. I realise now 'one shot' is an overloaded term and perhaps a poor choice. I'm referring to 'one action', 'one chance', rather than 'one training/prompt example' which is how 'one shot' often gets used in ML.

The typical chess AI (or other boardgame-playing RL algorithm) is episode-myopic. Or at least, its training regime is only explicitly incentivising returns over a single episode (e.g. policy gradient or value-based training pressures) - and I don't think we have artefacts yet which reify their goals in a way where it's possible to misgeneralise to non-myopia. It's certainly not action-myopic though (this is the whole point of training to maximise return - aggregate reward - vs single-step reward).
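The action-myopic vs episode-myopic distinction can be shown with a toy two-step decision problem (my own illustrative numbers, nothing to do with a real chess engine):

```python
# Toy sketch: action-myopia vs episode-myopia (illustrative rewards only).
# From the start, action 'a' pays 1 now and 0 later;
# action 'b' pays 0 now but 10 later in the same episode.
immediate_reward = {"a": 1, "b": 0}
later_reward = {"a": 0, "b": 10}

# Action-myopic choice: maximise the single-step reward.
action_myopic = max(immediate_reward, key=immediate_reward.get)

# Episode-myopic choice: maximise return (aggregate reward) over the
# episode, while still caring about nothing past the episode boundary.
episode_return = {a: immediate_reward[a] + later_reward[a] for a in "ab"}
episode_myopic = max(episode_return, key=episode_return.get)
```

The two criteria pick different actions here, which is the whole point of training to maximise return rather than single-step reward.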

I'm not sure what it would mean entirely for an actor-moment to be myopic, but I imagine it would at minimum have to be 'indifferent' somehow to the presence or absence of relevantly-similar actor-moments in the future.

Interesting, especially if we draw a wild association with Dehaene's model of conscious activity, which pretty much implies that humans' agency machinery is firing in discrete ~500-millisecond intervals.

I'm not sure it's a useful association, but it's interesting.