Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Abstractions as Redundant Information


(Warning: I might not be describing this well. And might be stupid.)

I feel like there's an alternate perspective compared to what you're doing, and I'm trying to understand why you don't take that path. Something like: *you're* taking the perspective that there is *one* Bayes net that we want to understand. And the alternative perspective is: there is a succession of "scenarios", and each is its own Bayes net, and there are deep regularities connecting them - for example they all obey the laws of physics, there's some relation between the "chairs" in one scenario and future scenarios, etc.

In the "multiple scenarios" perspective, *we can frame everything in terms of building good generative models*, that predict missing data from limited data, or predict the future from the past. It seems like "resampling" is unnecessary in this perspective; we can evaluate generative models by whether they work well in the next scenario. Or as another example, we would learn a generative model of a gear that involved rigid rotation plus small-amplitude vibration, and when we see something that seems to match that model, we would guess that the model is applicable here. Etc.

Then a claim about far-away information would look something like "the generative model is structured as a bunch of sub-items with coordinates, and these items have local interactions". And then you can make a substantive claim: "This class of generative models is a very powerful model class, really good at making predictions given a fixed model complexity". Is that claim true? In some senses yes, but we can also immediately see limitations, e.g. "it is nighttime" is a useful generative model ingredient that doesn't really have coordinates, and likewise "illegal", etc.

The "redundant information" framing still applies; if a comparatively simple generative model can make lots of correct predictions, then clearly there was redundant information in what was predicted.

Anyway, my question is: am I correct that we can define abstractions as "ingredients in generative models", and if so, why don't you like that approach? (Or is it equivalent to your approach??)

You can think of everything I'm doing as occurring in a "God's eye" model. I expect that an agent embedded in this God's-eye model will only be able to usefully measure natural abstractions within the model. So, shifting to the agent's perspective, we could say "holding these abstractions fixed, what possible models are compatible with them?". And that is indeed a direction I plan to go. But first, I want to get the nicest math I possibly can for computing the abstractions within a model, because the cleaner that is the cleaner I expect that computing possible models from the abstractions will be.

... that was kinda rambly, but I guess the summary is "building good generative models is the inverse problem for the approach I'm currently focused on, and I expect that the cleaner this problem is solved the easier it will be to handle its inverse problem".

This post reminds me of a self-supervised machine learning model I wanted to make a while ago, but never got around to.

Essentially the idea was to split the data up into a "small connected region" vs "everything else". For instance, if the data was images, I'd pick out a small patch of the image vs the rest of the image. And then I'd predict the distribution of the small region from everything else. The logic was that one could then invert the process, by taking a patch and predicting the distribution that this patch came from; this would tell you the features of the patch that correlate with far-away information.

More formally, my idea was to have two neural networks, which I called F and G, where F takes an image and outputs a feature vector, and G takes a feature vector and somehow functions as a generative model (in my original proposal I chose G to be Image-GPT, but you could also imagine training G as a GAN or something). Let x be an image, C(x) be a small region from that image, let M(x) be most of x with C(x) masked away, and let P(y|G(f)) be the probability of y according to G conditioned on f.

If we then consider the expression P(C(x)|G(F(M(x)))), then this essentially amounts to, how well is a region of an image predicted by features "far away". So you can optimize this to get a model that does the sort of thing that your post talks about.

My idea was then that F might be kinda messy because if there's a lot of redundant information, F doesn't need to capture all copies of it, and so we probably shouldn't use F. But since G is a generative model, it is incentivized to put the redundant information into each of its output variables where the information would occur. So we would expect G to be quite robust.

If we wanted to abstract the important faraway features of an image while leaving behind the unimportant noise, we could thus just invert G, essentially asking "what faraway-relevant features might account for this patch?".
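Not tailcalled's actual proposal (no neural F or Image-GPT G here) - as a minimal stand-in on synthetic data, a ridge regression can play the role of G(F(M(x))), just to make the objective P(C(x)|G(F(M(x)))) concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_patch(x, top, left, size):
    """C(x): a small patch of the image; M(x): the image with that patch zeroed out."""
    patch = x[top:top + size, left:left + size].copy()
    masked = x.copy()
    masked[top:top + size, left:left + size] = 0.0
    return patch, masked

# Toy dataset: 8x8 "images" sharing a global brightness plus independent pixel
# noise, so the masked patch is partly predictable from far-away pixels.
n, h = 500, 8
brightness = rng.normal(size=(n, 1, 1))
images = brightness + 0.1 * rng.normal(size=(n, h, h))

patches, maskeds = zip(*(split_patch(x, 2, 2, 2) for x in images))
P = np.stack(patches).reshape(n, -1)   # targets: flattened C(x)
M = np.stack(maskeds).reshape(n, -1)   # inputs: flattened M(x)

# Stand-in for G(F(M(x))): ridge regression predicting the patch from the rest.
lam = 1e-3
W = np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ P)
residual = np.mean((M @ W - P) ** 2)
baseline = np.mean((P - P.mean(axis=0)) ** 2)
print(residual < baseline)  # True: far-away pixels carry information about the patch
```

The prediction error is far below the patch's raw variance, precisely because a redundant feature (the shared brightness) is recoverable from everything else.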

I wrote more discussion about it in the #general channel of the EleutherAI discord back on the 12th-13th of March, 2021.

Looks like all the followups to BEiT masked autoencoder image modeling are doing similar things: https://arxiv.org/abs/2202.03382 https://arxiv.org/abs/2202.03026 https://arxiv.org/abs/2202.04200

**Summary**

In this post, John starts with a very basic intuition: that abstractions are things you can get from many places in the world, which are therefore very *redundant*. Thus, for finding abstractions, you should first define redundant information: Concretely, for a system of n random variables X1, …, Xn, he defines the redundant information as that information that remains about the original after repeatedly resampling one variable at a time while keeping all the others fixed. Since there will not be any remaining information if n is finite, there is also the somewhat vague assumption that the number of variables goes to infinity in that resampling process.

The first main theorem says that this resampling process will not break the graphical structure of the original variables, i.e., if X1, …, Xn form a Markov random field or Bayesian network with respect to a graph, then the resampled variables will as well, even when conditioning on the abstraction of them. John’s interpretation is that you will *still* be able to make inferences about the world in a local way *even if* you condition on your high-level understanding (i.e., the information preserved by the resampling process).

The second main theorem applies this to show that *any* abstraction F(X1, …, Xn) that contains all the information remaining from the resampling process will also contain **all** the abstract summaries from the telephone theorem for **all** the ways that X1, …, Xn (with n going to infinity) could be decomposed into infinitely many nested Markov blankets. This makes F a supposedly quite powerful abstraction.

**Further Thoughts**

It’s fairly unclear how exactly the resampling process should be defined. If n is finite and fixed, then John writes that no information will remain. If, however, n is infinite from the start, then we should (probably?) expect the mutual information between the original random variable and the end result to also often be infinite, which also means that we should not expect a small abstract summary F.

Leaving that aside, it is *in general* not clear to me how F is obtained. The second theorem just *assumes* F and deduces that it contains the information from the abstract summaries of all telephone theorems. The hope is that F is low-dimensional and thus manageable. But no attempt is made to show the existence of a low-dimensional F in any realistic setting.

Another remark: I don’t quite understand what it *means* to resample one of the variables “in the physical world”. My understanding is as follows, and if anyone can correct it, that would be helpful: We have some “prior understanding” (= prior probability) about how the world works, and by measuring aspects in the world — e.g., patches full of atoms in a gear — we gain “data” from that prior probability distribution. When forgetting the data of one of the patches, we can look at the others and then use our physical understanding to *predict* the values for the lost patch. We then sample from that prediction.

Is that it? If so, then this resampling process seems very observer-dependent since there is probably no actual randomness in the universe. But if it is observer-dependent, then the resulting abstractions would also be observer-dependent, which seems to undermine the hope to obtain *natural* abstractions.

I also have a similar concern about the pencils example: if you have a prior on variables X1, …, Xn and you know that all of them will end up being “objects of the same type”, and a joint sample of them gives you n pencils, then it makes sense to me that resampling them one by one until infinity will *still* give you a bunch of pencil-like objects, leading you to conclude that the underlying preserved information is a graphite core inside wood. However, where do the variables X1, …, Xn come from in the first place? Each Xi is already a high-level object and it is unclear to me what the analysis would look like if one reparameterized that space. (Thanks to Erik Jenner for mentioning that thought to me; there is a chance that I misrepresent his thinking, though.)

This might not work depending on the details of how "information" is specified in these examples, but would this model of abstractions consider "blob of random noise" a good abstraction?

On the one hand, different blobs of random noise contain no information about each other on a particle level - in fact, they contain no information about *anything* on a particle level, if the noise is "truly" random. And yet they seem like a natural category, since they have "higher-level properties" in common, such as unpredictability and idk maybe mean/sd of particle velocities or something.

This is basically my attempt to produce an illustrative example for my worry that mutual information might not be sufficient to capture the relationships between abstractions that make them good abstractions, such as "usefulness" or other higher-level properties.

If they have mean/sd in common (as in e.g. a Gaussian clustering problem), then the mean/sd are exactly the abstract information. If they're all completely independent, without any latents (like mean/sd) at all, then the blob itself is not a natural abstraction, at least if we're staying within an information-theoretic playground.

I do expect this will eventually need to be extended beyond mutual information, especially to handle the kinds of abstractions we use in math (like "groups", for instance). My guess is that most of the structure will carry over; Bayes nets and mutual information have pretty natural category-theoretic extensions as I understand it, and I expect that roughly the same approach and techniques I use here will extend to that setting. I don't personally have enough expertise there to do it myself, though.

What is it that makes the concept of “pencil” a good abstraction?

One way to frame it: there are lots of “pencils” in the world - lots of little chunks of the world which I could look at which all contain information about pencils-in-general. Many different places from which I could gain information about “pencils”, and many different places where I can apply that information to make predictions. If I don’t know that pencils are usually made of wood with a graphite core, there are many different chunks of the world which I could look at to gain that information. If I do know that pencils are usually made of wood with a graphite core, there are many different chunks of the world where I could apply that information to predict the materials from which something is made.

Alternatively, consider a gear, spinning in a gearbox. What makes the gear's rotational speed a good abstraction? Well, there are lots of little patches of metal comprising the gear. If I don't know the gear's rotation speed, I could precisely estimate it from any of the little patches. If I do know the gear's rotation speed, I can use it to predict the rotation of many different little patches of metal within the gear.

This seems like an intuitively-reasonable notion of what makes abstractions “good” in general: there are many different places from which we can learn the information, and many different places where we can apply the information to make predictions. In other words, a good abstraction is highly redundant: it appears in many different places, and in any of those places we can use the abstract information to make predictions and/or gain more information about the abstraction in general.

In this post I’ll sketch out one way to formalize this idea mathematically, and show that it’s equivalent to the formalization of abstraction as information-at-a-distance. In particular, in the gear example: conditional on the highly redundant information (i.e. the overall rotation of the gear), the low-level rattling of far-apart chunks of metal is statistically independent. More generally, redundancy yields the same high-level abstract information as the Telephone Theorem, but in a mathematically-cleaner form, and without the need for different summaries between different variables.

## Meta

Advice for readers: I try to keep the more-dense math in a few specific sections and the appendices, which can all be skimmed/skipped if you just want a conceptual picture.

Epistemic status: I’m aiming for physics-level rigor here. The proofs involve some shenanigans with infinite limits, and I expect them to contain subtle errors. However, I also expect that the results will accurately predict reality in practice, and that whatever subtle errors the proofs contain can be patched by the sort of mathematician who enjoys dealing with tricky limit shenanigans.

Thank you to Rob Miles and Adam Shimi for suggesting I try Excalidraw.

## Basic Idea: Redundant Information Is Conserved Under Resampling

Suppose I sample the genomes of two random humans, G1 and G2. What information is redundant across these two random variables?

One intuitively-reasonable answer: if I threw away one of the two sequences, then the redundant information is whatever I could still figure out from the other. More generally, if I have a whole bunch of genomes from random humans, G1…GN, the redundant information is whatever I could still figure out after throwing one of them away.

To formalize “what I could still figure out after throwing one away”, we’ll use the idea of resampling: I throw away the value of Gi, and then sample a new value for Gi, using my original probability distribution conditioned on all the other genomes. So, for instance, I throw away Gi, then I look at all the other genomes and see that in most places they’re the same - so when I sample my new Gi, I know that it should match all the other genomes in all those places. However, there are a few locations where the other genomes differ, e.g. maybe 10% of them contain a particular mutation. So, when I sample my new Gi, it will contain that mutation with roughly 10% probability (assuming there’s enough data to swamp the impact of my priors).

Now I repeat this process many times, each time throwing away one randomly-chosen genome and resampling it conditional on the others. Intuitively, I expect that the information conserved by this process will be whatever information is highly redundant, so that approximately-zero loss occurs at each step.

General method:

## Conceptual Example: Gear

Suppose I have a gear, spinning around in a gearbox. I look at a few nanometer-size patches on the gear’s surface, just a hundred-ish atoms each, and measure the (approximate) position and velocity of each atom in each little patch. Notation: random variable Xi gives the positions and velocities of each atom in patch i.

What information is redundant across these different patches? Intuitively, I can look at any patch and average together the rotational velocities of the atoms about the gear’s center to get a reasonably-precise estimate of the gear’s overall rotation speed. If I throw away Xi, I can still precisely estimate the gear’s overall rotation from the other X’s. Then, when I resample Xi, I will resample it so that the average rotational velocity of the atoms in Xi matches the gear’s overall rotation.
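A minimal sketch of that estimate (with made-up numbers for the rotation speed, patch location, and thermal noise): even though each atom's velocity is dominated by noise, averaging the angular velocities within a patch recovers the gear's overall rotation precisely.

```python
import numpy as np

rng = np.random.default_rng(1)
omega = 2.0  # assumed true rotation speed (rad/s), for illustration

def sample_patch(n_atoms=100, thermal=5.0):
    """Atoms in one small patch: rigid rotation about the origin plus thermal noise."""
    pos = rng.normal([1.0, 0.0], 0.01, size=(n_atoms, 2))       # patch near (1, 0)
    v_rigid = omega * np.stack([-pos[:, 1], pos[:, 0]], axis=1)  # v = omega x r
    vel = v_rigid + thermal * rng.normal(size=(n_atoms, 2))
    return pos, vel

def estimate_omega(pos, vel):
    """Average angular velocity of the atoms about the gear's center."""
    r2 = (pos ** 2).sum(axis=1)
    ang = (pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]) / r2   # (r x v) / |r|^2
    return ang.mean()

# Every patch independently yields a decent estimate of the same quantity:
estimates = [estimate_omega(*sample_patch()) for _ in range(50)]
print(abs(np.mean(estimates) - omega) < 0.5)  # True
```

The rotation speed is exactly the kind of information that survives throwing away any one patch: every other patch still carries it.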

## Equations

Notation: we’ll use $X^t$ for the variable-values after $t$ steps of this process, and $X^\infty$ for variable-values after running the process for an arbitrarily long time. So, we’re mainly interested in the mutual information between $X^0$ and $X^\infty$. The resampling process itself specifies the distribution $P[X^\infty \mid X^0, M_{\text{resample}}]$.

We’ll use “model” variables $M_{\text{base}}$ and $M_{\text{resample}}$ to distinguish probabilities from the “base” distribution (which is only over $X_1 \ldots X_N$) vs probabilities from the resampling process (which is over $X^0_1 \ldots X^0_N,\, X^1_1 \ldots X^1_N,\, \ldots,\, X^\infty_1 \ldots X^\infty_N$). One potential point of confusion: both models contain a variable called “X”, but these two variables are “in different scopes” (in the programmer’s sense); “X” means something different depending on which model it’s in. In the base model, X is just a single instance of our base random variables. In the resampling model, X consists of many instances $X^t$, one for each timestep $t$, and each individual instance is distributed the same as the base model’s X. We use the same variable name for both because there’s a conceptual correspondence between the two.

The resampling process defines the full relationship between the two models:

$$P[X^0_1 = x_1, \ldots, X^0_N = x_N \mid M_{\text{resample}}] = P[X_1 = x_1, \ldots, X_N = x_N \mid M_{\text{base}}]$$

$$P[X^t_i = x'_i \mid M_{\text{resample}}, X^{t-1} = x] = P[X_i = x'_i \mid M_{\text{base}}, X_{\neq i} = x_{\neq i}]$$ (assuming variable $i$ is resampled at resampling-time $t$; for all the other variables, $P[X^t_i = x'_i \mid M_{\text{resample}}, X^{t-1} = x] = \mathbb{I}[x'_i = x_i]$, since their values just stay the same)

$$P[X \mid M_{\text{resample}}] = P[X^0 \mid M_{\text{resample}}] \prod_t P[X^t \mid M_{\text{resample}}, X^{t-1}]$$

These equations follow directly from the process outlined above, and define the distribution P[X|Mresample] in terms of the distribution P[X|Mbase].
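The equations above can be sketched directly for a toy discrete base distribution. Here two binary variables are perfectly correlated (a deterministic constraint), so the resampling process conserves their shared value exactly - the setup is illustrative, not from the post:

```python
import numpy as np

rng = np.random.default_rng(2)

# Base distribution over two binary variables: perfectly correlated,
# i.e. a deterministic constraint X1 == X2, as a joint probability table.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])  # joint[a, b] = P[X1=a, X2=b | M_base]

def resample_step(x):
    """One step of P[X^t | X^{t-1}]: pick a variable, redraw it given the other."""
    i = rng.integers(2)
    cond = joint[:, x[1]] if i == 0 else joint[x[0], :]  # P[X_i | X_{!=i}]
    x = list(x)
    x[i] = rng.choice(2, p=cond / cond.sum())
    return tuple(x)

x = (1, 1)  # X^0
for _ in range(1000):
    x = resample_step(x)
print(x)  # (1, 1): the constraint's value is perfectly conserved
```

Each resample of one variable is fully determined by the other, so the initial value survives arbitrarily many steps - the information "stored in" the untouched variable is copied back every time.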

## … So We’re Running MCMC?

If you’ve worked with Markov Chain Monte Carlo (MCMC) algorithms before, this should look familiar: we’re basically asking which information about the initial conditions will be conserved as we run a standard MCMC process.

If you’ve worked with MCMC algorithms before, you might also guess that this question has two answers:

In this post, we’re going to think about infinitely large models, so the process no longer converges to P[X|Mbase] at all; that convergence time goes to infinity. The distribution does still converge, but the limiting distribution depends on the initial conditions, and that dependence is exactly what we're interested in. Unfortunately, this introduces a bunch of tricky subtleties about how we take the limits: do we take the limit of infinitely many variables first, or the limit of infinite time first, or do we take them at the same time with some fixed relationship between the two? I’ll handle those subtleties mainly by ignoring them and hoping a mathematician comes along to clean it up later. Remember, we’re aiming for physics-level rigor here.

The important takeaway of this section is that we have a ton of data on how these sorts of processes actually behave in practice, thanks to the popularity of MCMC algorithms. So we don’t just have to rely on physics-level-of-rigor arguments; anyone with firsthand experience with MCMC on large models can use their intuition as a guide. (I mostly won’t explicitly talk more here about lessons from MCMC; I expect that those of you already familiar with the topic can reason it through for yourselves, and explaining the relevant experience/intuitions to people not already familiar is beyond the scope of this post.)

## Worked Examples

## Two Variables

Let’s start with a trivial example to show how any information at all can be conserved by the resampling process. We have two random variables: X1, a fair coin flip, and X2, a record of the flip’s outcome (“H” or “T”).

What happens when we run our resampling process on this system? Well, first we throw away the coin flip, and resample it given our record. If the record says “H”, then we know the coin came up heads, so our sampler selects heads again for X1; vice-versa for tails. The first coin is therefore reset to its original value. Then, we throw away the record, and resample it given our coin. If the coin is heads, we know the record says “H”; vice-versa for tails. The record is therefore reset to its original value.

The process continues, back and forth, with each variable “storing the information” when the other is thrown away, and the information then perfectly copied back over into the resampled variable value.

On the other hand, imagine that our record-keeping is imperfect - maybe there’s a 10% chance that X2 records the wrong value. Then, at each “timestep” of the resampling process, there’s roughly a 20% chance (10% for each variable) that we’ll lose the original value. Given, say, 100 timesteps, we’ll lose approximately-all of the information about the original values.

General point: only information which is perfectly conserved at each timestep will be conserved indefinitely; everything else is completely lost.

In general, the information perfectly conserved indefinitely will be the value of deterministic constraints, i.e. functions f1 and f2 such that f1(X1)=f2(X2) with probability 1 in the base distribution P[X|Mbase]. (We can prove this via the Telephone Theorem plus an equilibrium condition, but it’s not the main theorem of interest in this post.)
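The "conserved or completely lost" dichotomy shows up concretely in simulation. Here's a small sketch (with the 10% recording-error rate assumed in the example above): once the record is imperfect, the resampling process forgets the initial coin value entirely.

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 0.1  # 10% chance the record X2 disagrees with the coin X1

# Joint table P[X1, X2 | M_base] for a fair coin and a noisy record of it.
joint = np.array([[0.5 * (1 - eps), 0.5 * eps],
                  [0.5 * eps, 0.5 * (1 - eps)]])

def run_chain(x0, steps):
    """Resample one randomly-chosen variable at a time, conditional on the other."""
    x = list(x0)
    for _ in range(steps):
        i = rng.integers(2)
        cond = joint[:, x[1]] if i == 0 else joint[x[0], :]
        x[i] = rng.choice(2, p=cond / cond.sum())
    return tuple(x)

# Start many chains from (heads, "H") and check whether the start is remembered.
finals = [run_chain((0, 0), 100) for _ in range(2000)]
frac_heads = np.mean([f[0] == 0 for f in finals])
print(abs(frac_heads - 0.5) < 0.05)  # True: ~50/50, the initial value is forgotten
```

With a perfect record (eps = 0) the same chain would stay at (0, 0) forever; any imperfection at all means the information decays to nothing over enough timesteps.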

## Many Measurements Of One Thing

Let’s say we have a stick of length L, and take N conditionally-independent measurements X1…XN of L. We’ll model each measurement as normally distributed with mean L and standard deviation σ, and we’ll assume that we have enough data that the prior on L doesn’t matter much (i.e. we’ll just pretend the prior is flat).

To simplify the analysis a little, we’ll resample the variables in order rather than at random - i.e. we resample L conditional on all the measurements, then resample each of the measurements X1…XN conditional on L, and repeat.

When we resample L, we draw from a normal distribution with mean equal to the average of the measurements (i.e. $\frac{1}{N}\sum_i X_i$) and standard deviation $\frac{1}{\sqrt{N}}\sigma$, so our new L will be about $\frac{1}{\sqrt{N}}\sigma$ away from the previous average. When we resample the measurements, their new average will be normally distributed with mean L and standard deviation $\frac{1}{\sqrt{N}}\sigma$, so the new average will be about $\frac{1}{\sqrt{N}}\sigma$ away from the previous L. In other words: L and the measurement average follow a random walk, drifting about $\sqrt{\frac{2}{N}}\sigma$ per timestep. Over T timesteps, they will drift a distance of about $\sqrt{\frac{2T}{N}}\sigma$. (In general, the distance a random walk drifts scales with $\sqrt{T}$ rather than $T$, since it often wanders back on itself.)

So: if we run the process for a number of steps $T \gg N$, then all information about the initial conditions is lost. On the other hand, if the number of variables $N \gg T$, then the drift is close to zero, so L and the measurement average are approximately conserved. The order in which we take our limits matters.
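A quick simulation of this example (with assumed values N = 10,000, T = 100, sigma = 1) shows the drift staying on the predicted random-walk scale when N is much larger than T:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, N, T = 1.0, 10_000, 100
L_true = 5.0

x = L_true + sigma * rng.normal(size=N)  # initial measurements X1..XN
for _ in range(T):
    L = rng.normal(x.mean(), sigma / np.sqrt(N))  # resample L | X1..XN (flat prior)
    x = rng.normal(L, sigma, size=N)              # resample X1..XN | L

drift = abs(L - L_true)
predicted = np.sqrt(2 * T / N) * sigma  # the random-walk drift scale (~0.14 here)
print(drift < 5 * predicted)  # True: with N >> T, L has barely moved
```

Rerunning with T much larger than N would let the walk wander arbitrarily far from the original L, matching the "order of limits matters" point.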

Practically speaking, we’re looking for information which is approximately conserved, i.e. the “timescale” T over which it’s lost is large. So it makes sense to consider L and the measurement average as approximately-conserved when N is large. That’s our abstract information in this example.

## Factorization

Now for our first theorem about this kind of resampling process.

Imagine that our base distribution is local - i.e. each variable only “directly interacts with” a few “neighbor variables”. When modeling a physical gear, for instance, each little chunk of metal only interacts directly with the chunks of metal spatially adjacent. Any longer-range interactions have to “go through” those direct interactions.

Our theorem says that interactions are still local after controlling for the information conserved by the resampling process. In the gear example, after controlling for the high-level rotation of the gear, the remaining low-level vibrations and rattling are still local; the low-level details of each chunk of metal interact directly only with the low-level details in chunks spatially adjacent.

Why does this matter? In general, locality is the main tool which lets us reason about our high-dimensional world at all. It means we can look at one part of the world, and understand what’s going on there without having to understand everything that’s happening in the whole universe. The factorization theorem says that this still applies when we condition on our high-level knowledge - in other words, we can “zoom in” on lower-level details, and add them to our high-level picture without having to understand all the other low-level details in the whole universe. Conditional on our high-level knowledge, any low-level information still has to flow through neighboring variables in order to influence things “far away” in the graph.

That’s going to be key to our next theorem, which is the main item of interest in this post.

## Formal Statement

**Resampler Conserves Locality:** If the base distribution $P[X \mid M_{\text{base}}]$ factors over some graph G, then so does the limiting resampling distribution $P[X^\infty \mid M_{\text{resample}}, X^0]$. This factorization theorem applies to both undirected graphical models (i.e. Markov Random Fields) and directed graphical models (i.e. Bayes Nets/Causal Models). See the appendices for a proof sketch.

In the gear example, the graph G would be the adjacency graph for chunks of metal: each chunk is a node, and the edges show spatial adjacency. The factorization follows the standard formulas for factorization of Markov Random Fields or Bayes Nets, depending on which type of graphical model we’re using.

## The Interesting Part: Resampler-Conserved Quantities Mediate Information At A Distance

Time for the big claim from earlier: in the gear example, conditional on the highly redundant information (i.e. the overall rotation of the gear), the low-level rattling of far-apart chunks of metal is statistically independent.

More generally: assume that our base distribution factors on a graph G.

Conditional on all the quantities perfectly conserved by the resampling process, variables far apart in G are independent. If you’ve read the Telephone Theorem, this is basically the same, but with one big upgrade: our “high-level summary” no longer depends on which notion of “far away” we use; the same summary applies to any sequence of nonoverlapping nested Markov blankets. We can take the information conserved by the resampling process to be the “high-level abstractions” for the whole model.

Let’s unpack that. First, we’ll do a quick recap of the Telephone Theorem.

We start with a large Causal Model/Bayes Net:

Each node is a random variable, and the arrows show direct causal influence; paths show indirect causal influence. If you’ve used MCMC before, you were probably picturing something like this already; we usually use MCMC with Bayes nets, because the locality structure makes it easy to resample variables (we only need to look at neighbor values rather than the values of all variables).

Now, just like in the Telephone Theorem, we picture a sequence of nested Markov blankets M1…Mn in our model:

Each possible sequence of blankets cuts the graph into pieces, with each piece only connected directly to the piece before and the piece after. A choice of sequence of blankets defines a notion of “far away” - i.e. if two variables are separated by a large number of “layers” of blankets in the sequence, then they are “far apart”. In order for M1 to have any mutual information with Mn, that information must propagate through each of the layers in between.

The basic idea of the Telephone Theorem is that information is either perfectly conserved or completely lost as we move through enough layers; information can only propagate “far away” if the information can be perfectly computed from each layer individually.

… but if some information can be perfectly computed from each layer individually, then that information will be conserved by our resampler. When resampling Mn, I can still perfectly calculate the information from Mn+1 or Mn−1, and therefore the information will be perfectly conserved. So, the only information which can propagate far away is information which is perfectly conserved by resampling. That means that

conditional on the information conserved by resampling, the mutual information must drop to zero.

A slightly more formal version of that argument is in the appendices, but that’s the core idea.

## Formal Statement

Let F satisfy $P[X^\infty \mid M_{\text{resample}}, X^0] = P[X^\infty \mid M_{\text{resample}}, F(X^0)]$, i.e. F encodes the values of all conserved quantities in the resampling process. For any infinite sequence of nested nonoverlapping Markov blankets $B_1, B_2, \ldots$ on the base model, the conditional mutual information $MI(B_1; B_n \mid F(X)) \to 0$ as $n \to \infty$.

## Gear Example

In the gear example, our Markov blankets might be nested layers of metal:

“Far away” then indicates moving through a large number of such layers.

In order for information to propagate far away, we must be able to compute it from each layer - i.e. we can very precisely compute the overall rotation of the gear from the overall rotation of chunks of metal in each layer. So, when we resample a chunk of metal, the overall rotation will be conserved - it will be “stored in” the other chunks.

This reasoning must apply to any information which propagates through many layers: if the information propagates far away, it will be conserved by resampling. So, assuming the overall rotation is the only quantity conserved by the resampling process, far-apart chunks of the gear must be independent given the overall rotation.

Intuitively, this lets us factor the gear into a “global” component and a “local” component. The global component, the gear’s overall rotation, is redundantly represented; it can be estimated by looking at many different little chunks, and we expect these estimates to (approximately) agree. The local component captures everything else, and is guaranteed to be “local” in the sense that far apart pieces are independent.

## Conclusion

We started with the intuition that abstraction is about redundant information: there are many different places from which we can learn the information, and many different places where we can apply the information to make predictions. That’s what makes abstractions generalizable and useful.

Then, we showed that a formalization of this intuition based on resampling variables reproduces the main ideas of abstraction as information-relevant-at-a-distance. In particular, the resampling approach yields a better version of the Telephone Theorem.

Personally, I came to all this in a different order: I noticed that the Telephone Theorem required redundancy of information, figured out the resampling thing, and only backed out the intuition of abstraction-as-redundant-information later. Nonetheless, it was an exciting thing to find: when different intuitive formulations of an idea turn out to be basically-equivalent, that’s strong evidence that we’re on the right track.

In terms of applications, the locality results in particular are exactly the sort of thing which I’ve been looking for since my

last update on testing the Natural Abstraction Hypothesis. In combination with the generalized Koopman-Pitman-Darmois theorem, they get us to the point where calculating abstractions from a base model is roughly-tractable, i.e. it can be done in something like $O(N^3)$ time with respect to the size of the base model. That still isn’t quite efficient enough to handle big models, and unfortunately big models are exactly where we expect to see nontrivial results, so I’m still not quite at the point of running empirical tests. But it feels like the hard part is now past; there are some clear steps forward on the math, and I expect that those will basically close the gap to efficient calculation.

On the theoretical side, the first leg of the

Natural Abstraction Hypothesissays:Assuming the proofs in this post basically hold up, and the loopholes aren’t critical, I think this claim is now basically proven. There’s still some operationalization to be done (e.g. the “dimension” of the summary hasn’t actually been addressed yet), loopholes to close (e.g. deterministic computation makes things tricky), some legwork to flesh it all out (e.g. including numerical approximation), various extensions (e.g. logical uncertainty), and a lot of distillation needed, but I think this math is enough to conclude that the basic claim is probably true in worlds following local laws (like ours).

The basic idea is also useful even in nonlocal models, which I’ll hopefully write about in the not-too-distant future. That form is more readily applicable to clustering-style applications, like e.g. recognizing “pencils” as a kind of object.

## Appendices

## Proof Sketch: Factorization

We’ll use three facts. First, $P[X^\infty \mid M_{\text{resample}}, X^0]$ is invariant under resampling any variable - i.e. the distribution reaches an equilibrium as we run the resampling process. I won’t actually prove that, because I don’t expect that there’s anything new involved; standard MCMC convergence proofs should largely carry over (allowing for conserved quantities, of course). Formally:

$$P[X^t \mid M_{\text{resample}}, X^0] = P[X^{t-1} \mid M_{\text{resample}}, X^0] \quad \text{as } t \to \infty$$
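To make the equilibrium claim concrete, here's a minimal sketch (my own toy example with an arbitrary unnormalized base distribution `w`, not from the post): a Gibbs-style resampler on two binary variables, whose long-run empirical marginal matches the exact marginal computed by enumeration:

```python
import random
from itertools import product

random.seed(0)

# Arbitrary unnormalized base distribution over two binary variables.
def w(x1, x2):
    return [[1.0, 2.0], [3.0, 5.0]][x1][x2]

# Exact marginal P[X1 = 1], by enumeration.
z = sum(w(a, b) for a, b in product([0, 1], repeat=2))
exact = sum(w(1, b) for b in [0, 1]) / z   # = 8/11

# Gibbs resampling: repeatedly redraw one variable from its conditional
# given the other, and track the time-average of X1.
x = [0, 0]
hits, n = 0, 200_000
for _ in range(n):
    i = random.randrange(2)
    if i == 0:
        p1 = w(1, x[1]) / (w(0, x[1]) + w(1, x[1]))  # P[X1=1 | X2]
    else:
        p1 = w(x[0], 1) / (w(x[0], 0) + w(x[0], 1))  # P[X2=1 | X1]
    x[i] = 1 if random.random() < p1 else 0
    hits += x[0]

# The chain equilibrates: the empirical marginal matches the exact one.
print(abs(hits / n - exact) < 0.02)
```

This is the standard MCMC-convergence picture the post appeals to; the post's point is that the same machinery carries over, except that hard constraints introduce conserved quantities.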

Second, the resampler is local:

$$P[X^t_i = x'_i \mid M_{\text{resample}}, X^{t-1} = x] = P[X_i = x'_i \mid M_{\text{base}}, X_{\neq i} = x_{\neq i}]$$

… and

$$P[X_i = x'_i \mid M_{\text{base}}, X_{\neq i} = x_{\neq i}] = P[X_i = x'_i \mid M_{\text{base}}, X_{nb(i)} = x_{nb(i)}]$$

… so

$$P[X^t_i = x'_i \mid M_{\text{resample}}, X^{t-1}_{\neq i} = x_{\neq i}] = P[X^t_i = x'_i \mid M_{\text{resample}}, X^{t-1}_{nb(i)} = x_{nb(i)}]$$

… where $X_{nb(i)}$ indicates the neighbors of $i$ in the graph on which $M_{\text{base}}$ factors.
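The locality property can be checked by brute force on a toy model (a sketch with arbitrary potentials, not from the post): for a distribution factoring over a four-variable chain, resampling the second variable conditional on everything else gives the same conditional as conditioning on its neighbors alone:

```python
from itertools import product

# A minimal chain-factored distribution over four binary variables:
# p(x) ∝ φ(x1,x2) φ(x2,x3) φ(x3,x4), with an arbitrary potential φ.
def phi(a, b):
    return 2.0 if a == b else 1.0

def p(x):
    return phi(x[0], x[1]) * phi(x[1], x[2]) * phi(x[2], x[3])

def cond(i, val, fixed):
    # P[X_i = val | the variables listed in `fixed`], by enumeration.
    num = sum(p(x) for x in product([0, 1], repeat=4)
              if x[i] == val and all(x[j] == v for j, v in fixed.items()))
    den = sum(p(x) for x in product([0, 1], repeat=4)
              if all(x[j] == v for j, v in fixed.items()))
    return num / den

# Resampling X_2 (index 1) given everything else equals resampling it
# given only its chain neighbors X_1 and X_3 (indices 0 and 2).
full      = cond(1, 1, {0: 0, 2: 1, 3: 0})
neighbors = cond(1, 1, {0: 0, 2: 1})
print(abs(full - neighbors) < 1e-12)
```

This is just the Markov property of the base distribution; the second locality fact above says the resampler's transition kernel inherits it.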

Third, $X^{t-1}_{\neq i}$ screens off $X^0$ from $X^t$ (thus the “Markov Chain” part of “Markov Chain Monte Carlo”). So, $P[X^t \mid M_{\text{resample}}, X^{t-1}_{\neq i}] = P[X^t \mid M_{\text{resample}}, X^0, X^{t-1}_{\neq i}]$.

Combining these three, we find

$$P[X^t_i \mid M_{\text{resample}}, X^0, X^t_{\neq i}] = P[X^t_i \mid M_{\text{resample}}, X^0, X^t_{nb(i)}] \quad \text{as } t \to \infty$$

… i.e. the neighbors which screen off $X_i$ from everything else in the base model also screen off $X^\infty_i$ from everything else in $X^\infty$, given $X^0$. The Hammersley-Clifford Theorem tells us that this is a sufficient condition for $P[X^\infty \mid M_{\text{resample}}, X^0]$ to factor over the graph $G$ (modulo taking some limits).

Note that the Hammersley-Clifford theorem applies to undirected graphical models. I won’t prove the extension of our theorem to Bayes nets here because, again, there’s nothing particularly novel involved.

## Proof Sketch: Resampler Telephone Theorem

Once we have locality of $X^\infty$ given $X^0$, this theorem is easy: it’s exactly like the Telephone Theorem, but on the distribution $P[X^\infty \mid M_{\text{resample}}, X^0]$ rather than $P[X \mid M_{\text{base}}]$. Because $P[X^\infty \mid M_{\text{resample}}, X^0]$ factors over the same graph as $P[X \mid M_{\text{base}}]$, we can pick any sequence of nonoverlapping nested Markov blankets $B_1, \dots, B_n$ in the base model, carry them over directly to $X^\infty$, and apply the Telephone Theorem argument:

The key thing to notice here is the deterministic constraint $f_n(B_n) = f_{n+1}(B_{n+1})$. In the two-variable example, we saw that exactly this kind of information is conserved by our resampling process: when we resample variables in $B_n$, the constraint value is perfectly “remembered” by $B_{n+1}$, and when we resample variables in $B_{n+1}$, the constraint value is perfectly “remembered” by $B_n$. Newly-generated sample values are forced to be perfectly consistent with the constraint value, so that information sticks around.

So, if $f_n(B_n) = f_{n+1}(B_{n+1})$ with probability 1, and the blankets $B_n$ and $B_{n+1}$ are nonoverlapping, then the value $f = f_n(B_n) = f_{n+1}(B_{n+1})$ is perfectly conserved when resampling. It’s perfectly conserved by the variables in $B_n$ when resampling a variable outside of $B_n$, and perfectly conserved by the variables in $B_{n+1}$ when resampling a variable outside of $B_{n+1}$. So, conditional on information conserved by the resampling process (or, equivalently for purposes of the $X^\infty$ distribution, conditional on $X^0$), the value of $f$ is already known, so it provides no additional information. Thus, conditional on quantities conserved by resampling, the mutual information $MI(B_1, B_n \mid X^0)$ must drop to zero in the limit of large $n$.
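The two-variable conservation story can be sketched in a few lines (hypothetical minimal code, assuming the base model is the hard constraint that both variables carry the same bit): resampling either variable conditional on the other must reproduce the shared value, so the constraint value survives arbitrarily many resampling steps:

```python
import random

random.seed(1)

# Two-variable example: the base model forces X1 == X2 (a shared bit).
# Gibbs resampling must regenerate each variable consistently with the
# other, so f = X1 (== X2) is perfectly conserved by the process.
x = [1, 1]                 # initial sample X^0, constraint value f = 1
for _ in range(1000):
    i = random.randrange(2)
    # Resample X_i from P[X_i | X_{≠i}] in the base model: the hard
    # constraint forces the new value to equal the other variable.
    x[i] = x[1 - i]

print(x)  # the conserved constraint value survives: still [1, 1]
```

This is the degenerate, perfectly-deterministic case; the theorem's content is that *any* deterministic constraint between nonoverlapping blankets behaves the same way under resampling.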