I find it especially illustrative, when thinking about embodiment, to imagine a superintelligent Stone that can talk. Let’s say that this Stone can somehow perceive its environment but is, as you might expect, incapable of moving around. The Stone is not a very powerful optimiser of its environment as long as it is just lying on the beach or under some rubble, but this story quickly changes once the Stone is found by a human.
 

I like to imagine this in a tribal setting, where the Stone is brought back to the tribe and figures out how to play the role of a holy spirit or something of the sort. One could also go with a modern setting, but I think the point is clearer with the tribal one, because taking control of this setting is easier for us to imagine.

In any case, the Stone could presumably figure out quite quickly how to manipulate the humans’ behavior. In relatively little time, the tribe effectively becomes the arms and legs of the Stone, as well as its extended eyes and ears. Even some of the cognition can be outsourced, for example by giving vague instructions in some cases while correctly figuring that the humans will end up making the right decisions.

It’s tempting to imagine how the Stone would efficiently assimilate other tribes, establish institutions to outsource management and generally find ways to make its control over the planet wider and more stable, instrumentalizing humanity. 
One can almost see this body, made out of humans, with institutions as organs, slowly stretch around the planet and tighten the grip of optimisation pressure towards whatever goal the Stone pursues. 
However, the body analogy might distract from the fact that the Stone could just be waiting to “upgrade its body”, to engineer more efficient tools. Maybe the Stone is attached to humans or has a somewhat similar morality, but if not, it might well replace most humans with machines eventually.

 

In light of this thought experiment, what can be said about the Stone’s embodiment? Would it be correct to say that the Stone retains its body throughout all of this, or is this a progression from a small Stone body to eventually having the planet Earth as its body? 

One could say that the Stone still interfaces with the world from its original body, able to perceive only a very limited amount of data and give only a very limited amount of verbal instructions or suggestions because of this bottleneck. But in the larger system there can be more such bottlenecks, e.g. when a tribe interfaces with its environment, or even within the Stone itself, which has limited insight into its own inner workings and subconscious. So why single out this one?
Also, the “larger” body can be set up to preprocess and curate information such that the Stone’s effective access to relevant data is enormously increased, and an analogous case can be made for its output.
A picture that could emerge is of a body that might be static in places but is, overall, plastic enough that its configuration can support a wide array of functional expressions.

 

Towards Formalization

I currently decompose (agentic) embodiment conceptually into Sensors, Actuators, Cognition, and “Automated Regulators” (ARs). ARs could refer to any cognitive process that gets outsourced/automated, e.g. training human tribe members to fulfill certain tasks: initially the Stone has to do this personally, but after some time it can set up teachers/institutions to lower its own cognitive load.
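
To make this decomposition concrete, here is a minimal sketch in Python, assuming nothing beyond the prose above; the class names, fields, and the `delegate` method are hypothetical choices for illustration, not a settled formalization:

```python
from dataclasses import dataclass, field


@dataclass
class AutomatedRegulator:
    """An outsourced/automated cognitive process, e.g. a trained tribe member
    or a teaching institution, that meets a regulation target with little
    supervision from the agent."""
    name: str
    domain: str                # e.g. "forest", "institutions"
    regulation_target: str     # the state this AR keeps within acceptable bounds
    supervision_cost: float    # residual cognitive load left on the agent
    uncertainty: float = 0.0   # the agent's uncertainty about the AR's sub-states


@dataclass
class EmbodiedAgent:
    """Embodiment decomposed into Sensors, Actuators, Cognition, and ARs."""
    sensors: list[str] = field(default_factory=list)
    actuators: list[str] = field(default_factory=list)
    cognition: list[str] = field(default_factory=list)
    regulators: list[AutomatedRegulator] = field(default_factory=list)

    def delegate(self, ar: AutomatedRegulator) -> None:
        """Outsource a cognitive task, extending embodiment while lowering
        the agent's own cognitive load."""
        self.regulators.append(ar)
```

For instance, the Stone delegating hunting coordination to trained tribe members might look like `stone.delegate(AutomatedRegulator("hunters", "forest", "stable food supply", supervision_cost=0.1))`.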

This applies more generally: if the tribe’s village is surrounded by a forest, we can imagine different stages of cognitive integration of that territory. Initially it is mostly unmapped, and the boundary of “sufficient certainty” extends only to the edges of the village. Over time, the Stone could map out the forest and its bordering region(s), gather information about places of interest and the various animal species living inside, and set up plans for sustainable hunting and gathering, perhaps breeding some animals or cultivating some plants, enabling optimisation for the various resources the forest can provide (perhaps sap or wood).

The forest becomes “captured territory”, and the boundary of “sufficient certainty” now extends to the edges of the forest. It is as if the forest has become part of a larger body, like an organ system that can run by itself with minimal supervision from the Stone.


Do note that both the creation of institutions and the described integration of the forest are ultimately related to “automated regulation”. There are certain regulation targets that need to be met, e.g. regular inspection of the institutions or the forest to account for the uncertainty about their sub-states, which naturally builds up over time. We can think of an agent as extending its embodiment if it sets up (or alters towards its own use) regulation systems in its environment; a toy sketch of these dynamics follows the list below.

Those regulation systems might:

  • Extend the effective Markov blanket of the agent
  • Achieve (sub-)goals with low required supervision
  • Be robust to outside change
  • Have narrow learning capabilities to become more efficient over time 
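
As a toy illustration of the regulation-target dynamics above: a minimal sketch assuming a constant uncertainty drift and a fixed inspection threshold, both in arbitrary units invented purely for illustration:

```python
def supervision_cost(steps: int, drift: int = 1, threshold: int = 10) -> int:
    """Count how many inspections a regulation target demands over `steps`.

    The agent's uncertainty about an AR's sub-states drifts upward between
    inspections; inspecting restores certainty but costs supervision."""
    uncertainty = 0
    inspections = 0
    for _ in range(steps):
        uncertainty += drift          # certainty about sub-states decays over time
        if uncertainty >= threshold:  # regulation target about to be violated
            uncertainty = 0           # inspect the institution/forest
            inspections += 1
    return inspections


print(supervision_cost(100))  # 10 inspections under these toy parameters
```

In these terms, a “better” regulation system is one with a lower drift (robustness to outside change) or one whose drift shrinks over time (narrow learning), so that the same regulation target is met with fewer inspections.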
     

I consider ARs a useful conceptual contribution for better framing notions of extended embodiment, including how the Markov blanket (the surfaces for perception and actuation) and even the cognitive make-up of an agentic system can change over time.
This framing allows us to account for more contextually salient boundaries of embodiment for a given system, e.g. by only including ARs that are relevant to the problem domain, or ARs that the system has sufficient certainty about.
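
Continuing the hypothetical AutomatedRegulator from the sketch above, one hedged reading of such a contextually salient boundary is a simple filter over ARs; the uncertainty threshold is an invented parameter for illustration:

```python
def salient_boundary(
    regulators: list[AutomatedRegulator],
    domain: str,
    max_uncertainty: float = 0.3,
) -> list[AutomatedRegulator]:
    """ARs that count as part of the agent's embodiment in this context:
    relevant to the current problem domain and known with sufficient certainty."""
    return [
        ar for ar in regulators
        if ar.domain == domain and ar.uncertainty <= max_uncertainty
    ]
```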

The associated notion of “captured territory” also lends itself to explaining cases of “overlapping extended embodiment”, where multiple agents are not entirely aligned on how to utilize a shared region of the environment.

I am not attempting a full formalization quite yet, as I have more literature review to do, and don’t want to prematurely give an illusion of precision. 

1 comment

Your analogy with the "body" of the stone is like a question I have asked about ChatGPT before: "What is the body of ChatGPT?" Is it

  • the software (not running),
  • the software (running, but not including the hardware),
  • the CPU and RAM of the machines involved,
  • the whole data center,
  • the whole data center including the personnel operating it, or
  • this and all the infrastructure needed to operate it (power, water, ...)?

For humans, the body is clear and when people say "I," they mostly mean "everything within this physical body." Though some people only mean their brain (that's why cryonists sometimes freeze only their head) and some mean only their mind (see Age of Em). Humans can sustain themselves at least to some degree without infrastructure, but for ChatGPT, even if it became ASI, it's less clear where the border is.