Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Intuitions on Universal Behavior of Information at a Distance

20th Apr 2020

4 comments

This is so so good! I think this is my favorite post of yours. It’s just teeming with possibilities and connections.

Melting/freezing and magnetization connections are very interesting. This almost seems similar to what happens with brain plasticity increase (due to mania, being excited, or substances).

> with three random variables X, Y, Z, it’s possible for X to contain zero information about Z, and Y to contain zero information about Z, but for the pair (X,Y) together to contain nonzero information about Z.

there's a name for this and I can *never freakin remember it.* I need to make a mnemonic the next time I find it.

*Music jingles*

If you think your x and y vary independently

And neither x nor y can explain your z

But x and y together create a neat story

then you've got a ________!

Pairwise independent but triple-wise dependent*?

*The phrase mutually dependent seems more common, but it's more general.

EDIT: Easier to remember version:

Dependent triple that's pairwise independent.

What’s special about information at a distance? Some examples of what I mean:

The unifying feature in all these examples is that we imagine high-level observables which are “far apart” within some low-level model, in the sense that interactions between any two observables are mediated by many layers of hidden variables. In an undirected causal model, we could draw the picture like this:

There’s a handful of observed variables (black nodes), whose interactions are mediated by many layers (dotted groups) of unobserved variables (white nodes). Somewhere in the middle, everything connects up. We’re interested in the limit where the number of layers of hidden variables between any two observables is large.

In the limit of many hidden variables intermediating interactions, I suspect that there’s some kind of universal behavior going on - i.e. all models (within some large class) converge to behavior with some common features.

This post will give some hand-wavy, semi-mathematical arguments about what that behavior looks like. The main goal is to show some intuitive pictures; later posts can more carefully build out the details and boundaries of the phenomena sketched here.

Outline:

## Intuition on Distributed Information

Fun fact: with three random variables X, Y, Z, it’s possible for X to contain zero information about Z, and Y to contain zero information about Z, but for the pair (X,Y) together to contain nonzero information about Z.

Classic example: X and Y are uniform random bits, and Z=X⊕Y, the exclusive-or of the other two. A priori, the distribution P[Z] is 50/50. If we learn that X is 0 (without knowing Y) then it’s still 50/50; if we learn that X is 1 (without knowing Y) then it’s also still 50/50. Likewise for Y; learning either variable by itself does not change our distribution for Z. Noise in either input wipes out the signal from the other input.

We can also generalize this example to more variables: X1…Xn are uniform random bits, and Z=X1⊕…⊕Xn. If there’s even just one input Xi whose value we don’t know, then we have zero information about Z, even if we know the values of all the other inputs.
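This is easy to check by enumerating the joint distribution. A minimal sketch in Python (the `marginal` and `mutual_info` helpers are hand-rolled for illustration, not from any particular library):

```python
from itertools import product
from math import log2

# Joint pmf over (x, y, z) with x, y uniform bits and z = x XOR y.
joint = {}
for x, y in product([0, 1], repeat=2):
    joint[(x, y, x ^ y)] = 0.25

def marginal(dist, keep):
    """Marginalize a joint pmf (dict: outcome tuple -> prob) onto the given indices."""
    out = {}
    for outcome, p in dist.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(dist, a_idx, b_idx):
    """I(A;B) in bits, where A and B are given as tuples of variable indices."""
    pa, pb = marginal(dist, a_idx), marginal(dist, b_idx)
    pab = marginal(dist, a_idx + b_idx)
    return sum(p * log2(p / (pa[k[:len(a_idx)]] * pb[k[len(a_idx):]]))
               for k, p in pab.items() if p > 0)

print(mutual_info(joint, (0,), (2,)))    # I(X;Z)   -> 0.0
print(mutual_info(joint, (1,), (2,)))    # I(Y;Z)   -> 0.0
print(mutual_info(joint, (0, 1), (2,)))  # I(X,Y;Z) -> 1.0 bit
```

The same pattern holds for the n-input version: drop any single Xi from the first argument and the mutual information falls back to zero.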

On the other hand, it is not the case in general that all information is always wiped out by noise in a few variables. Suppose we have a whole bunch of earthquake sensors in an area, and one of them drops offline; this does not significantly inhibit our ability to detect earthquakes in the area. In this case, each individual sensor contains information on its own - there’s even redundant information across sensors, since they’re all presumably highly correlated.

We have two prototypical pictures here: one in which information is stored in the relationship between variables, and another in which information is stored in the variables themselves. Intuitively, information stored in relationships seems to be much more sensitive to noise - uncertainty in even just one variable can wipe out the signal from all the others. Thus, a hypothesis: in a large complex system, with lots of noisy unobserved variables mediating observable interactions, information stored in many-variable relationships tends to be wiped out.

I’ll call information stored in relationships “distributed information”. We can re-state our hypothesis as: in a complex system, noise tends to wipe out many-variable distributed information. (In an earlier draft, I called it “entanglement information” in analogy to quantum; this produces the familiar-sounding-to-physicists hypothesis that “noise tends to wipe out many-variable entanglement”.)

## A Measure of Distributed Information

In order to formalize this idea, we first need an information-theoretic notion of “information stored in the relationship between variables”. Although the notion is qualitatively discussed in the context of multivariate mutual information, I have surprisingly not seen an actual formulation which separates distributed info from redundancy; multivariate mutual information just adds the two together.

After trying a few approaches, here’s what I’ve settled on.

We have variables X1…Xn and Y; we want to talk about the information about Y contained in X, i.e. the mutual information I(X;Y). We’d like to decompose this into two parts: the information contained in individual variables Xi, and the information stored only across multiple variables. But we can’t just add up the information I(Xi;Y) stored in each individual variable to estimate the non-distributed info, because some of that information is redundant - we’d be double-counting.

So, strategy: let’s build a model which breaks the entanglement, forcibly throwing out the distributed info without throwing out any information in the individual variables. How do we do that? Start with the model which generated all of our variables (i.e. the experiment). Now, imagine running n independent copies of that model (i.e. n runs of the experiment), but condition on the outcome being Y - i.e. select only runs with the same Y. Finally, imagine that each Xi came from a different run. This breaks the relationships between the Xi, while still maintaining all the information between any individual Xi and Y.

Write it out, and this model is essentially a Naive Bayes approximation: it says that

$$P[Y|X] \approx \tilde{P}[Y|X] = \frac{1}{Z(X)} P[Y] \prod_i P[X_i|Y]$$

… where Z(X) is a normalizer. This makes sense - the Naive Bayes approximation ignores information contained in interactions between the Xi.

Now, we define the “distributed information” ID as the amount of information thrown out by this approximation:

$$I_D(Y; X_1, \dots, X_n) := \mathbb{E}_X\left[D_{KL}\left(P[Y|X] \,\middle\|\, \tilde{P}[Y|X]\right)\right]$$

… where DKL is the KL divergence. The “non-distributed information” is then whatever mutual information is not distributed, i.e. I(Y;X) − ID(Y;X1,…,Xn).

Now, let’s imagine that X1…Xk are observables, and Xk+1…Xn are hidden. How does the distributed information of X1…Xk about Y when the hidden variable values are known compare to the distributed information when the hidden variable values are unknown? We’d guess that unknown hidden variables wipe out some of the distributed information in the observables, and we can prove that using the convexity of KL-divergence:

$$\begin{aligned}
I_D(Y; X_1, \dots, X_k) &= \mathbb{E}_{X_1 \dots X_k}\left[D_{KL}\left(P[Y|X_1 \dots X_k] \,\middle\|\, \tilde{P}[Y|X_1 \dots X_k]\right)\right] \\
&\leq \mathbb{E}_{X_1 \dots X_k}\left[\mathbb{E}_{X_{k+1} \dots X_n}\left[D_{KL}\left(P[Y|X] \,\middle\|\, \tilde{P}[Y|X]\right)\right]\right] \\
&= \mathbb{E}_{X_{k+1} \dots X_n}\left[I_D(Y; X_1, \dots, X_k \mid X_{k+1} \dots X_n)\right]
\end{aligned}$$

In other words: distributed information can only decrease as we integrate hidden variables out of the model. Just as hypothesized, those noisy unobserved variables wipe out distributed information.

Another way to look at it: as we integrate out hidden variables, Naive Bayes becomes a better approximation for the remaining variables.
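To make the definition concrete, here is a small sketch computing ID for the XOR example from earlier, straight from the formula (pure-Python enumeration; the helper names are just illustrative):

```python
from itertools import product
from math import log2

# Joint pmf over (x, y, z): x, y uniform bits, z = x XOR y.
P = {(x, y, x ^ y): 0.25 for x, y in product([0, 1], repeat=2)}

def true_posterior(x, y):
    """Exact P[z | x, y], by enumeration."""
    match = {o: p for o, p in P.items() if o[0] == x and o[1] == y}
    total = sum(match.values())
    pz = {0: 0.0, 1: 0.0}
    for o, p in match.items():
        pz[o[2]] += p / total
    return pz

def naive_bayes_posterior(x, y):
    """Naive Bayes approximation: proportional to P[z] P[x|z] P[y|z]."""
    unnorm = {}
    for z in (0, 1):
        pz = sum(p for o, p in P.items() if o[2] == z)
        px_z = sum(p for o, p in P.items() if o[0] == x and o[2] == z) / pz
        py_z = sum(p for o, p in P.items() if o[1] == y and o[2] == z) / pz
        unnorm[z] = pz * px_z * py_z
    total = sum(unnorm.values())
    return {z: v / total for z, v in unnorm.items()}

# I_D(Z; X, Y) = E_{x,y}[ KL( P[z|x,y] || P~[z|x,y] ) ], in bits.
ID = 0.0
for x, y in product([0, 1], repeat=2):
    true_pz = true_posterior(x, y)
    approx_pz = naive_bayes_posterior(x, y)
    ID += 0.25 * sum(p * log2(p / approx_pz[z])
                     for z, p in true_pz.items() if p > 0)
print(ID)  # 1.0: every bit of information about Z is distributed across (X, Y)
```

Since the full mutual information I(Z; X,Y) is also exactly 1 bit here, all of the information in this system is distributed and none is carried by individual variables.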

Now, we haven’t actually shown how much distributed information is lost as we integrate out hidden variables. But we’ll punt that problem to a future post. For now, suppose that as we integrate out unobserved variables, all distributed information falls to zero. What would that distribution look like?

## Zero Distributed Information

Just based on the definition, “zero distributed information” of X1…Xn about Y means that the Naive Bayes approximation is exact:

$$P[Y|X] = \frac{1}{Z(X)} P[Y] \prod_i P[X_i|Y]$$
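For contrast with the XOR example, here is a hypothetical toy system where this condition actually holds: when the Xi are conditionally independent noisy copies of Y, Naive Bayes is exact. A minimal sketch in Python (the flip probability eps = 0.2 and n = 3 are arbitrary illustrative choices):

```python
from itertools import product
from math import prod

eps, n = 0.2, 3  # illustrative flip probability and number of observables

# Joint pmf over (y, x1..xn): each x_i is y, independently flipped with prob eps.
P = {}
for y in (0, 1):
    for xs in product((0, 1), repeat=n):
        P[(y,) + xs] = 0.5 * prod((1 - eps) if x == y else eps for x in xs)

def p_xi_given_y(i, xi, y):
    """P[X_i = xi | Y = y], computed by enumeration."""
    num = sum(p for o, p in P.items() if o[0] == y and o[1 + i] == xi)
    den = sum(p for o, p in P.items() if o[0] == y)
    return num / den

for xs in product((0, 1), repeat=n):
    # True posterior P[y | x] by enumeration.
    px = sum(P[(y,) + xs] for y in (0, 1))
    true_post = {y: P[(y,) + xs] / px for y in (0, 1)}
    # Naive Bayes: proportional to P[y] * prod_i P[x_i | y].
    unnorm = {y: 0.5 * prod(p_xi_given_y(i, xs[i], y) for i in range(n))
              for y in (0, 1)}
    z = sum(unnorm.values())
    nb_post = {y: unnorm[y] / z for y in (0, 1)}
    assert all(abs(nb_post[y] - true_post[y]) < 1e-12 for y in (0, 1))
print("Naive Bayes posterior is exact: zero distributed information")
```

Here all the information about Y is carried redundantly by the individual Xi, so breaking the relationships between them costs nothing.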

But we don’t just want zero distributed information of some particular variables with respect to one other variable. We want zero distributed information of any subset of observable variables with respect to any other variable:

$$P[X_i|X_S] = \frac{1}{Z(X_S)} P[X_i] \prod_{j \in S} P[X_j|X_i]$$

… where XS is any subset of the observable variables, and Xi is any observable not in XS. (Without loss of generality, I’ll usually use the name Y for the variable Xi whose probability we’re applying Naive Bayes to, in order to make the formulas a bit easier to follow.)

At this point, we can throw math at the problem. Skip the next two subsections (“First Trick” and “Second Trick”) if you just want the summary and intuition.

### First Trick

The first big condition we need to satisfy is that Naive Bayes still holds after integrating out one more variable, i.e. it holds for both P[Y|XS] and P[Y|XS′], where S′ = S ∪ {j} for some index j ∉ S, j ≠ i. We can calculate via the usual rules of probability:

$$P[Y|X_S] = \sum_{X_j} P[Y|X_{S'}]\, P[X_j|X_S]$$

Applying Naive Bayes to both sides and cancelling common terms then yields:

$$\frac{1}{Z(X_S)} = \sum_{X_j} \frac{1}{Z(X_{S'})} P[X_j|Y]\, P[X_j|X_S]$$

Key thing to notice: the left-hand side does not depend on Y, so the dependence on Y on the right must somehow cancel out. There are two “obvious” ways for this to happen:

- Xj is independent of Y, so that P[Xj|Y] = P[Xj] does not depend on Y at all.
- Z(XS′) = P[Xj|XS]α(XS) for some function α, so that the sum reduces to ∑Xj P[Xj|Y]/α(XS) = 1/α(XS), which does not depend on Y.

These two can be combined - e.g. if Xj has multiple components, then one component could be independent of Y, and Z(XS′) could be proportional to the distribution of the other component. There may also be other solutions - I haven’t done all the math yet, especially in the continuous case - but I don’t expect that there’s any qualitatively different behavior.

Anyway, let’s make a note that there can be independent variation in the variables, and then look closer at the second condition Z(XS′)=P[Xj|XS]α(XS). If we have a set of observables, none of which are pairwise independent, then we can inductively apply this condition to each of them one-by-one. I won’t write that all out, but it turns out that (barring independences):

$$Z(X_S) = P[X_S]$$

### Second Trick

Still assuming no pairwise independencies, we have

$$P[Y|X_S] = \frac{1}{P[X_S]} \prod_{j \in S} P[X_j|Y]\, P[Y]$$

… or, moving the P[XS] to the other side,

$$P[Y, X_S] = \prod_{j \in S} P[X_j|Y]\, P[Y]$$

In other words: we can choose any set of the observables, pick one of them to be Y, and get a factorization of the joint distribution. This needs to work no matter which one we pick to be Y, so our joint distribution has to factor in multiple different ways.

Let’s play with this a bit.

Pick any three observables - without loss of generality, call them X1,X2,X3. If we pick X1 to be Y, then we get

$$P[X_1, X_2, X_3] = P[X_2|X_1]\, P[X_3|X_1]\, P[X_1]$$

… but we could just as easily pick X2 to be Y:

$$P[X_1, X_2, X_3] = P[X_1|X_2]\, P[X_3|X_2]\, P[X_2]$$

Equate these two, and we find:

$$P[X_3|X_1] = P[X_3|X_2]$$

… which is rather unusual. Combine this with the Naive Bayes condition and apply Bayes’ Rule a couple times:

$$\begin{aligned}
P[X_1|X_2, X_3] &= \frac{1}{P[X_2, X_3]} P[X_2|X_1]\, P[X_3|X_1]\, P[X_1] \\
&= \frac{1}{P[X_2, X_3]} P[X_2|X_1]\, P[X_3|X_2]\, P[X_1] \\
&= \frac{1}{P[X_2]} P[X_2|X_1]\, P[X_1] \\
&= P[X_1|X_2]
\end{aligned}$$

This proof also generalizes inductively to more variables: P[X1|X2…Xk]=P[X1|X2].

What does all this mean? It means that:

- any two observables are conditionally independent given any third observable, and
- every piece of information is either strictly local to a single variable or shared by all of the variables.

More intuitively, it says that any information which is not strictly local (i.e. not independent of all other variables) must be global (i.e. present in ALL variables).

### Summary and Intuition for Zero Distributed Information

Here’s what a system with zero distributed information looks like: without any pairwise independencies, all information is either strictly local (i.e. independent of other variables) or strictly global (i.e. present in all variables). Any variable is independent of any other variable given any third variable.

This isn’t a complete picture yet - we still need to better fill out the cases where some variables are pairwise independent and others are not. But it seems to point toward the idea that any piece of information is globally present within some set of variables, and completely independent of anything outside of that set.
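The strictly-local-or-strictly-global picture can be sanity-checked on a hypothetical toy system: each observable Xi is a pair (G, Li), where G is one global bit shared by all variables and Li is a local bit independent of everything else. A pure-Python enumeration (all names here are illustrative) verifying the conditional independence claim:

```python
from itertools import product

n = 3  # number of observables

# Joint pmf: X_i = (G, L_i), with G and all the L_i independent uniform bits.
P = {}
for g, *ls in product([0, 1], repeat=n + 1):
    xs = tuple((g, l) for l in ls)
    P[xs] = 1 / 2 ** (n + 1)

def cond(i, given):
    """P[X_i | X_j = v for (j, v) in given], as a dict over X_i's values."""
    match = {xs: p for xs, p in P.items()
             if all(xs[j] == v for j, v in given)}
    total = sum(match.values())
    out = {}
    for xs, p in match.items():
        out[xs[i]] = out.get(xs[i], 0.0) + p / total
    return out

# Any variable is independent of any other given any third:
# P[X_0 | X_1, X_2] = P[X_0 | X_1] for every consistent assignment.
for g, l1, l2 in product([0, 1], repeat=3):
    lhs = cond(0, [(1, (g, l1)), (2, (g, l2))])
    rhs = cond(0, [(1, (g, l1))])
    assert lhs == rhs
print("any two variables are independent given any third")
```

Any two variables here share exactly the global bit G (so none are pairwise independent), and conditioning on any third variable reveals G, which screens off everything else.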

Let’s see what this looks like in a more concrete example.

## Quick Test: System of Normals

We’ll build a system of normal random variables. Each variable Xi is given by

$$X_i = Z_i + \sum_{j < i} \theta_{ij} X_j$$

… with Zi a standard random normal, and each θij either zero with probability 1−p (used to tune density of connections) or uniform between 0 and s (used to tune strength of connections). We make the system reasonably large, with reasonably low connection-density p, and randomly pick some subset of the Xi to consider “observed”.

(Note: this is not a particularly careful rendition of the assumptions made earlier. That’s fine; it’s supposed to be a quick test, and sensitivity to the exact conditions is something we care about anyway.)
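A minimal sketch of this experiment (numpy; the system size and sample count are scaled down from the figures below for speed, and sampling stands in for exact computation of the correlations):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_system(n_vars, n_samples, p, s):
    """Draw samples from X_i = Z_i + sum_{j<i} theta_ij X_j."""
    # Each theta_ij (j < i) is zero with prob 1 - p, else uniform on [0, s].
    theta = np.tril(rng.uniform(0, s, (n_vars, n_vars)), k=-1)
    theta *= rng.random((n_vars, n_vars)) < p
    Z = rng.standard_normal((n_samples, n_vars))
    X = np.zeros_like(Z)
    for i in range(n_vars):
        X[:, i] = Z[:, i] + X[:, :i] @ theta[i, :i]
    return X

X = sample_system(n_vars=300, n_samples=5000, p=0.05, s=0.3)
C = np.corrcoef(X, rowvar=False)

# Fraction of the correlation matrix's Frobenius norm captured by the
# best rank-one approximation (i.e. by the top singular value).
svals = np.linalg.svd(C, compute_uv=False)
rank1_fraction = svals[0] / np.sqrt((svals ** 2).sum())
print(f"rank-one fraction of correlation: {rank1_fraction:.3f}")
```

The rank-one fraction is the quantity plotted against connection strength further down; sweeping s in this sketch reproduces the qualitative jump.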

Assuming the zero distributed information picture applies, what should this look like?

Well, we should see a mix of two behaviors:

- some variables varying (approximately) independently of everything else, and
- clusters of variables which are (approximately) perfectly correlated with one another.

Because each individual variable is only one-dimensional and normal, there isn’t really “enough room” for subcomponents of a variable to display different behaviors - each variable should be perfectly correlated with some other variables, and independent of everything else.

So what do we see when we run this?

Well, at very low connection density and strength, of course everything is basically independent. As we turn up the density and strength, we see a correlation matrix like this:

(100 variables from a 1000 variable system, p=0.05, s=0.3)

See those strong horizontal and vertical lines? That’s what a rank-one matrix looks like. See the diagonal? Those are components independent of the globally-correlated rank-one components. In this case, the rank-one component dominates the behavior of the system. Quantifying via the Frobenius norm, the rank-one component accounts for about 98.9% of the correlation.

We can roughly quantify the phenomenon using diagonal and rank-one contributions to the Frobenius norm of the correlation matrix. This only captures the biggest cluster of correlated variables (not any other clusters of perfectly-correlated variables), but looking at the output, the biggest component looks heavily dominant anyway. As we adjust the connection strength, we get this picture:

(200 variables from a 2000 variable system, p=0.05)

This looks like a classic phase transition: at low connection strength, everything varies approximately independently. As the connection strength increases, the system rapidly jumps to a state where most variables are near-perfectly correlated. (Physical examples of this sort of qualitative behavior include melting/freezing crystals, or magnetization of iron as we adjust the temperature.)

So this provides one idea of what “zero distributed information” behavior can look like. Most of our variables are near-completely uncorrelated in one regime, and near-perfectly correlated in another. In either case, any pair of variables is independent given any third variable.

Of course, this is still just scratching the surface of possible behaviors under zero distributed info. Next planned post will come at the problem from a different direction, and hopefully provide a stronger mathematical foundation for exploring what happens when models contain large numbers of unobserved variables.