
Update February 21st: After the initial publication of this article (January 3rd) we received a lot of feedback and several people pointed out that propositions 1 and 2 were incorrect as stated. That was unfortunate as it distracted from the broader arguments in the article and I (Jan K) take full responsibility for that. In this updated version of the post I have improved the propositions and added a proof for proposition 2. Please continue to point out weaknesses in the argument; that is a major motivation for why we share these fragments.

For comments and clarifications on the conceptual and philosophical aspects of this article, please read metasemi's excellent follow-up note here.

Meta: Over the past few months, we've held a seminar series on the Simulator theory by janus. As the theory is actively under development, the purpose of the series is to uncover central themes and formulate open problems. A few high-level remarks upfront:

  • Our aim with this sequence is to share some of our discussions with a broader audience and to encourage new research on the questions we uncover. 
  • We outline the broader rationale and shared assumptions in Background and shared assumptions. That article also contains general caveats about how to read this sequence - in particular, read the sequence as a collection of incomplete notes full of invitations for new researchers to contribute.

Epistemic status: Exploratory. Parts of this text were generated by a language model from language model-generated summaries of a transcript of a seminar session. The content has been reviewed and edited for conceptual accuracy, but we have allowed many idiosyncrasies to remain. 

Three questions about language model completions

GPT-like models are driving most of the recent breakthroughs in natural language processing. However, we don't understand them at a deep level. For example, when GPT creates a completion like the Blake Lemoine greentext, we

  1. can't explain why it creates that exact completion.
  2. can't identify the properties of the text that predict how it continues.
  3. don't know how to affect these high-level properties to achieve desired outcomes.

We can make statements like "this token was generated because of the multinomial sampling after the softmax" or "this behavior is implied by the training distribution", but these statements only provide a form of descriptive adequacy (akin to saying "AlphaGo will win this game of Go"). They don't provide any explanatory adequacy, which is what we need to sufficiently understand and make use of GPT-like models.

Simulator theory (janus, 2022) has the potential for explanatory adequacy for some of these questions. In this post, we'll explore what we call “semiotic physics”, which follows from simulator theory and which has the potential to provide partial answers to questions 1., 2. and perhaps 3. The term “semiotic physics” here refers to the study of the fundamental forces and laws that govern the behavior of signs and symbols. Similar to how the study of physics helps us understand and make use of the laws that govern the physical universe, semiotic physics studies the fundamental forces that govern the symbolic universe of GPT, a universe that reflects and intersects with the universe of our own cognition. We transfer concepts from dynamical systems theory, such as attractors and basins of attraction, to the semiotic universe and spell out examples and implications of the proposed perspective.

Example. Semiotic coin flip.

To illustrate what we mean by semiotic physics, we will look at a toy model that we are familiar with from regular physics: coin flips. In this setup, we draw a sequence of coin flips from a large language model[1]. We encode the coin flips as a sequence of the strings " 1" and " 0" (since each is tokenized as a single token) and zero out the probabilities of all other tokens.

We can then look at the probability of the event that the sequence of coin flips ends in tails (" 0") or heads (" 1") as a function of the sequence length.
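For readers who want to reproduce a version of this experiment, here is a minimal sketch, assuming GPT-2 via the Hugging Face transformers library. The model, prompt, and variable names are our illustrative choices rather than the exact setup behind the figures (see footnote [1] for the model actually used); the sketch restricts the next-token distribution to the two coin tokens and tracks the conditional flip probabilities along one sampled trajectory.

```python
# Minimal sketch of a semiotic coin flip, assuming GPT-2 via Hugging Face
# transformers. Model choice, prompt, and names are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# " 0" and " 1" (with a leading space) are assumed to be single tokens.
tails_id = tokenizer.encode(" 0")[0]
heads_id = tokenizer.encode(" 1")[0]
coin_ids = [tails_id, heads_id]

ids = tokenizer.encode("A sequence of fair coin flips (1 = heads, 0 = tails):")

def coin_distribution(ids):
    """Next-flip pmf: softmax restricted to the two coin tokens, which is
    equivalent to zeroing out all other tokens and renormalising."""
    with torch.no_grad():
        logits = model(torch.tensor([ids])).logits[0, -1]
    probs = torch.softmax(logits[coin_ids], dim=0)
    return {" 0": probs[0].item(), " 1": probs[1].item()}

# Track the conditional flip probabilities along one sampled trajectory.
for step in range(10):
    dist = coin_distribution(ids)
    print(f"flip {step + 1}: P(' 0') = {dist[' 0']:.3f}, P(' 1') = {dist[' 1']:.3f}")
    next_index = torch.multinomial(torch.tensor([dist[" 0"], dist[" 1"]]), 1).item()
    ids.append(coin_ids[next_index])
```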

We note two key differences between the semiotic coin flip and a fair coin:

  • the semiotic coin is not fair, i.e. it tends to produce sequences that end in tails (" 0") much more frequently than sequences that end in heads (" 1").
  • the semiotic coin flips are not independent, i.e. the probability of observing heads or tails changes with the history of previous coin flips.

To better understand the types of sequences that end in either tails or heads, we next investigate the probability of the most likely sequence ending in " 0" or " 1". As we can see in the graph below, the probability of the most likely sequence ending in " 1" does not decrease as rapidly for the GPT coin as it does for a fair coin.

Again, we observe a notable difference between the semiotic coin and the fair coin:

  • while the probability of a given sequence of coin flips decreases exponentially for a fair coin (every sequence of $n$ fair coin flips has the same probability $2^{-n}$), the probability of the most likely sequence of semiotic coin flips decreases much more slowly.

This difference is due to the fact that the most likely sequence of semiotic coin flips ending in, e.g., " 0" is " 0 0 0 0 ... 0 0". Once the language model has produced the same token four or five times in a row, it will latch onto the pattern and continue to predict the same token with high probability. As a consequence, the probability of the sequence does not decrease as drastically with increasing length, as each successive factor has a probability of almost $1$.
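As a rough sanity check of this claim, the sketch below (same assumed GPT-2 setup as above; not the authors' exact configuration) accumulates the log-probability of the all-" 0" continuation and compares it with the fair coin's $n \log \tfrac{1}{2}$.

```python
# Sketch: log-probability of the all-tails continuation vs. a fair coin.
# Assumes the same illustrative GPT-2 setup as the previous sketch.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

tails_id = tokenizer.encode(" 0")[0]
heads_id = tokenizer.encode(" 1")[0]
coin_ids = [tails_id, heads_id]

ids = tokenizer.encode("A sequence of fair coin flips (1 = heads, 0 = tails):")
log_p_semiotic = 0.0
for n in range(1, 21):
    with torch.no_grad():
        logits = model(torch.tensor([ids])).logits[0, -1]
    p_tails = torch.softmax(logits[coin_ids], dim=0)[0].item()
    log_p_semiotic += math.log(p_tails)  # probability of extending " 0 0 ... 0"
    log_p_fair = n * math.log(0.5)       # every fair-coin sequence has probability 2^-n
    print(f"n={n:2d}  semiotic: {log_p_semiotic:8.3f}  fair: {log_p_fair:8.3f}")
    ids.append(tails_id)
```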

With the example of the semiotic coin flip in mind, we will set up some mathematical vocabulary for discussing semiotic physics and demonstrate how the vocabulary pays off with two propositions. We believe this terminology is primarily interesting for alignment researchers who would like to work on the theory of semiotic physics. The arithmophobic reader is invited to skip or gloss over the section (for an informal discussion, see here).

Simulations as dynamical systems

Simulator theory distinguishes between the simulator (the entity that performs the simulation) and the simulacrum (the entity that is generated by the simulation). The simulacrum arises from the chained application of the simulator's forward pass. The result can be viewed as a dynamical system where the simulator describes the system’s dynamics and the simulacrum is instantiated through a particular trajectory.

We commence by identifying the state and trajectory of a dynamical system with tokens and sequences of tokens.

Definition of the state and trajectories. Given an alphabet of tokens $\mathbb{T}$ with cardinality $|\mathbb{T}|$, we call a finite sequence of tokens $s \in \mathbb{T}^* := \bigcup_{n \ge 0} \mathbb{T}^n$ the trajectory.[2] While a trajectory can generally be of arbitrary length, we denote the context length of the model as $n_{\mathrm{ctx}}$; therefore, $s$ can effectively be written as $s \in \mathbb{T}^{\le n_{\mathrm{ctx}}}$. The empty sequence is denoted as $\epsilon$.[3][4][5]

While token sequences are the objects of semiotic physics, the actual laws of semiotic physics derive from the simulator. In particular, a simulator will provide a distribution over the possible next state given a trajectory via a transition rule.

Definition of the transition rule. The transition rule is a function that maps a trajectory to a probability distribution over the alphabet (i.e., the probabilities for the next token completion after the current state). Let $\Delta(\mathbb{T})$ denote the set of probability mass functions over $\mathbb{T}$, i.e., the set of functions $p : \mathbb{T} \to [0, 1]$ which satisfy the Kolmogorov axioms.[6][7][8] The transition rule is then a function $\phi : \mathbb{T}^* \to \Delta(\mathbb{T})$.
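A minimal sketch of what the transition rule looks like in code, assuming GPT-2 as the simulator; the name `phi` and the model choice are our assumptions for illustration.

```python
# Sketch of the transition rule phi: trajectory (token ids) -> pmf over the alphabet.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def phi(trajectory: list[int]) -> torch.Tensor:
    """Return the probability mass function over the next token given the trajectory."""
    with torch.no_grad():
        logits = model(torch.tensor([trajectory])).logits[0, -1]
    return torch.softmax(logits, dim=0)  # non-negative and sums to one

# Example: top candidates for the next token after a short trajectory.
s = tokenizer.encode("The cat sat on the")
top = torch.topk(phi(s), 3)
print([(tokenizer.decode([int(i)]), round(float(p), 3)) for p, i in zip(top.values, top.indices)])
```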

Analogous to wave function collapse in quantum physics, sampling a new state from a distribution over states turns possibility into reality. We call this phenomenon the sampling procedure.

Definition of the sampling procedure. The sampling procedure $\sigma : \mathbb{T}^* \to \mathbb{T}$ selects a next token, i.e., $\sigma(s) \sim \phi(s)$.[9] The resulting trajectory $s'$ is simply the concatenation of $s$ and $\sigma(s)$ (see the evolution operator below). We can, therefore, define the repeated application of the sampling procedure recursively as $\sigma^1 := \sigma$ and $\sigma^{n+1} := \sigma \circ \psi^n$.

Lastly, we need to concatenate the newly sampled token to the previous trajectory to obtain a new trajectory. Packaging the transition rule, the sampling procedure, and the concatenation results in the evolution operator, which is the main operation used for running a simulation.

Definition of the evolution operator. Putting the pieces together, we finally define the function $\psi : \mathbb{T}^* \to \mathbb{T}^*$ that evolves a given trajectory, i.e., transforms $s$ into $s'$ by appending the token generated by the sampling procedure $\sigma$. That is, $\psi$ is defined as $\psi(s) := s \oplus \sigma(s)$, where $\oplus$ denotes concatenation. As above, repeated application is denoted by $\psi^n$.
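A minimal sketch of the sampling procedure and the evolution operator, written against any transition rule `phi` of the kind sketched above (trajectory of token ids in, pmf out); the function names mirror $\sigma$ and $\psi$ and are our own.

```python
# Sketch of the sampling procedure sigma and the evolution operator psi.
# `phi` is any transition rule mapping a trajectory to a pmf (a torch tensor).
import torch

def sigma(trajectory: list[int], phi) -> int:
    """Sampling procedure: draw the next token from the induced pmf phi(s)."""
    return int(torch.multinomial(phi(trajectory), num_samples=1).item())

def psi(trajectory: list[int], phi) -> list[int]:
    """Evolution operator: psi(s) = s concatenated with sigma(s)."""
    return trajectory + [sigma(trajectory, phi)]

def psi_n(trajectory: list[int], phi, n: int) -> list[int]:
    """Repeated application psi^n, i.e. running the simulation for n steps."""
    for _ in range(n):
        trajectory = psi(trajectory, phi)
    return trajectory
```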

Note that both the sampling procedure and the evolution operator are not functions in the conventional sense since they include a random element (the step of sampling from the distribution given by the transition rule). Instead, one could consider them random variables or, equivalently, functions of unobservable noise. This justifies the use of a probability measure, e.g., in an expression like $P(\psi(s) = s')$.
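To make the probabilistic nature of $\psi$ concrete, one can estimate the distribution over successors empirically; the sketch below is an illustration built on the `phi`/`psi` sketches above, not part of the formalism itself.

```python
# Monte Carlo sketch of P(psi(s) = s'): the same trajectory can evolve into
# different successors because psi contains a sampling step.
from collections import Counter

def successor_distribution(s: list[int], phi, psi, num_samples: int = 1000) -> dict:
    """Empirical distribution over successor trajectories psi(s)."""
    counts = Counter(tuple(psi(list(s), phi)) for _ in range(num_samples))
    return {successor: count / num_samples for successor, count in counts.items()}
```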

Definition of an induced probability measure. Given a transition rule $\phi$ and a trajectory $s$, we call $P_s := \phi(s)$ the induced probability measure (of $\phi$ and $s$). We write $P_s(t)$ to denote $\phi(s)(t)$, i.e. the probability of the token $t$ assigned by the probability measure induced by $s$. For a given trajectory $s$, the induced probability measure satisfies by definition the Kolmogorov axioms. We construct a joint measure of a sequence of tokens $w = (t_1, \dots, t_n)$ as the product of the individual probability measures,

$$P_s(t_1, \dots, t_n) := \prod_{i=1}^{n} P_{s \oplus t_1 \oplus \dots \oplus t_{i-1}}(t_i).$$

For ease of notation, we also use the shorthand $P_s(w)$, where the length of the sequence, $n$, is implicit.
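In code, the induced joint measure is just a running product of conditional token probabilities; a log-space sketch (again against the assumed `phi` above) avoids numerical underflow.

```python
# Sketch of the joint measure P_s(t_1, ..., t_n) in log space.
import math

def log_prob(s: list[int], w: list[int], phi) -> float:
    """log P_s(w): log-probability that the simulator continues s with the tokens w."""
    trajectory = list(s)
    total = 0.0
    for token in w:
        total += math.log(phi(trajectory)[token].item())
        trajectory.append(token)
    return total
```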

Two propositions on semiotic physics

Having identified simulations with dynamical systems, we can now draw on the rich vocabulary and concepts of dynamical systems theory. In this section, we carry over a selection of concepts from dynamical systems theory and encourage the reader to think of further examples.

First, we will define a token bridge of length $n$ as a trajectory $w = (w_0, w_1, \dots, w_n)$ that starts on a token $w_0 = a$, ends on a token $w_n = b$, and has length $n$ (i.e., $n$ transitions), such that the resulting trajectory is valid according to the transition rule of the simulator. For example, a token bridge of length 3 from "cat" to "dog" would be the trajectory "cat and a dog".

Second, we call the family of probability measures $\{P_s\}_{s \in \mathbb{T}^*}$ induced by a simulator non-degenerate if there exists an $\epsilon > 0$ such that for (almost) all $s$ the probability assigned to any token $t \in \mathbb{T}$ by the induced measure is less than or equal to $1 - \epsilon$,

$$P_s(t) \le 1 - \epsilon.$$

We can now formulate the following proposition:

Proposition 1. Vanishing likelihood of bridges. Given a family of non-degenerate probability measures $\{P_s\}$ on $\mathbb{T}$, the probability $P(w) := P_{(w_0)}(w_1, \dots, w_n)$ of a token bridge $w$ of length $n$ decreases monotonically as $n$ increases[10], and converges to 0 in the limit,

$$\lim_{n \to \infty} P(w) = 0.$$

Proof: The probability of observing the particular bridge can be decomposed into the product of all individual transition probabilities,

$$P(w) = \prod_{i=1}^{n} P_{(w_0, \dots, w_{i-1})}(w_i).$$

Given that $P_{(w_0, \dots, w_{i-1})}(w_i) \le 1 - \epsilon$ for all transitions (minus at most a finite set), we see immediately that the probability of a longer sequence, $P(w \oplus t)$, is at most equal (on a finite set) or strictly smaller than the probability of the shorter sequence, $P(w)$. We also see that $P(w) \le (1 - \epsilon)^{n - k}$, where the finite constant $k$ accounts for the exceptional transitions, from which the proposition follows.

Notes: As correctly pointed out by multiple commenters, in general, it is not true that the probability of a bridge between a fixed pair of tokens decreases monotonically in its length. In particular, a longer bridge that completes a natural phrase can plausibly be assigned a higher probability than a shorter, more awkward bridge between the same two tokens. So the proposition only talks about the probability of a given sequence when another token is appended to it. In general, when a sequence is sufficiently long and the transition function is not exceedingly weird, the probability of getting that particular sequence will be small. We also note that real simulators might well induce degenerate probability measures, for example in the case of a language model that falls into a very strong repeating loop[11]. In that case, the probability of the sequence can converge to a value larger than zero.


There are usually multiple token bridges starting from and ending in any given pair of tokens. For example, besides "and a", we could also have "with a" or "versus a" between "cat" and "dog". We define the set of all token bridges of length $n$ between $a$ and $b$ as

$$B_n(a \to b) := \{\, w = (w_0, w_1, \dots, w_n) \in \mathbb{T}^{n+1} : w_0 = a,\ w_n = b \,\}$$

and the total probability of transitioning from $a$ to $b$ in $n$ steps, denoted as $P_n(a \to b)$, which we calculate as

$$P_n(a \to b) := \sum_{w \in B_n(a \to b)} P(w).$$

Computing this sum is, in general, computationally infeasible, as the number of possible token bridges grows exponentially with the length of the bridge. However, Proposition 1 suggests that we will typically be dealing with small probabilities. This insight leads us to leverage a technique from statistical mechanics that is concerned with the way in which unlikely events come about:
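To make the combinatorics concrete, here is a brute-force sketch of $P_n(a \to b)$ over a deliberately restricted candidate alphabet. Restricting the alphabet is purely our simplification to keep the enumeration tractable; over the full vocabulary the number of bridges grows like $|\mathbb{T}|^{n-1}$ and this computation is infeasible, which is exactly the point. `phi` is the transition rule sketched earlier.

```python
# Brute-force sketch of P_n(a -> b): enumerate all bridges with interior tokens
# drawn from a small candidate alphabet and sum their probabilities.
import itertools
import math

def bridge_log_prob(w: list[int], phi) -> float:
    """log P(w) for a bridge w = (w_0, ..., w_n): sum of conditional log-probabilities."""
    trajectory = [w[0]]
    total = 0.0
    for token in w[1:]:
        total += math.log(phi(trajectory)[token].item())
        trajectory.append(token)
    return total

def total_bridge_prob(a: int, b: int, n: int, alphabet: list[int], phi) -> float:
    """P_n(a -> b), summed over all choices of the n - 1 interior tokens."""
    total = 0.0
    for interior in itertools.product(alphabet, repeat=n - 1):
        w = [a, *interior, b]
        total += math.exp(bridge_log_prob(w, phi))
    return total
```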

Proposition 2. Large deviation principle for token bridges. The total probability of transitioning from a token $a$ to a token $b$ in $n$ steps satisfies a large deviation principle with rate function $\min_{w \in B_n(a \to b)} \bar{A}(w)$,

$$P_n(a \to b) \approx \exp\!\Big(-n \min_{w \in B_n(a \to b)} \bar{A}(w)\Big),$$

where we call

$$\bar{A}(w) := -\frac{1}{n} \sum_{i=1}^{n} \log P_{(w_0, \dots, w_{i-1})}(w_i)$$

the average action of a token bridge.
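For completeness, the average action is straightforward to compute numerically; the sketch below (again against the assumed `phi`) follows the definition directly, and the identity $P(w) = e^{-n \bar{A}(w)}$ is what the proof below relies on.

```python
# Sketch of the average action of a token bridge: the negative mean conditional
# log-probability of its n transitions. `phi` is the transition rule sketched earlier.
import math

def average_action(w: list[int], phi) -> float:
    """A_bar(w) = -(1/n) * sum_i log P(w_i | w_0, ..., w_{i-1})."""
    trajectory = [w[0]]
    log_p = 0.0
    n = len(w) - 1  # number of transitions in the bridge
    for token in w[1:]:
        log_p += math.log(phi(trajectory)[token].item())
        trajectory.append(token)
    return -log_p / n
```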

Proof: We again leverage the product rule and the properties of the exponential function to write the probability of a token bridge $w$ as

$$P(w) = \prod_{i=1}^{n} P_{(w_0, \dots, w_{i-1})}(w_i) = \exp\!\Big(\sum_{i=1}^{n} \log P_{(w_0, \dots, w_{i-1})}(w_i)\Big),$$

so that the total probability $P_n(a \to b)$ can be written as a sum of exponentials,

$$P_n(a \to b) = \sum_{w \in B_n(a \to b)} \exp\!\Big(\sum_{i=1}^{n} \log P_{(w_0, \dots, w_{i-1})}(w_i)\Big).$$

We now insert the definition of the average action, which makes the dependence of the exponential on $n$ explicit,

$$P_n(a \to b) = \sum_{w \in B_n(a \to b)} e^{-n \bar{A}(w)}.$$

Let $w^* := \operatorname{argmin}_{w \in B_n(a \to b)} \bar{A}(w)$. Then $e^{-n \bar{A}(w^*)}$ is the largest term of the sum and we can rewrite the sum as

$$P_n(a \to b) = e^{-n \bar{A}(w^*)} \sum_{w \in B_n(a \to b)} e^{-n \left(\bar{A}(w) - \bar{A}(w^*)\right)}.$$

Applying the logarithm to both sides and multiplying with $-\frac{1}{n}$ results in

$$-\frac{1}{n} \log P_n(a \to b) = \bar{A}(w^*) - \frac{1}{n} \log \sum_{w \in B_n(a \to b)} e^{-n \left(\bar{A}(w) - \bar{A}(w^*)\right)}.$$

Since $\bar{A}(w) \ge \bar{A}(w^*)$ by construction, each term of the remaining sum is larger than zero and