A Robust Natural Latent Over A Mixed Distribution Is Natural Over The Distributions Which Were Mixed


Alright, I'm terrible at abstract thinking, so I went through the post and came up with a concrete example. Does this seem about right?

Suppose we have multiple distributions P1,…,Pk over the same random variables X1,…,Xn. (Speaking somewhat more precisely: the distributions are over the same set, and an element of that set is represented by values (x1,…,xn).)

We are a quantitative trading firm. Our investment strategy is such that we care about the prices of the stocks in the S&P 500 at market close today (X1,…,Xn).

We have a bunch of models of the stock market (P1,…,Pk), where we can feed in a set of possible prices of stocks in the S&P 500 at market close, and the model spits out a probability of seeing that exact combination of prices (where a single combination of prices is (x1,…,xn)).

We take a mixture of the distributions: P[X]:=∑jαjPj[X], where ∑jαj=1 and α is nonnegative.

We believe that some of our models are better than others, so our trading strategy is to take a weighted average of the predictions of each model, where the weight assigned to the jth model is αj, and obviously the weights have to sum to 1 for this to be an "average".
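The mixture step can be sketched numerically (a toy sketch: the model pmfs, the discretization into 4 price combinations, and the weights are all made-up numbers):

```python
import numpy as np

# Toy sketch: 3 market models (the P_j), each assigning probabilities to the
# same 4 discretized price combinations. All numbers are made up.
model_pmfs = np.array([
    [0.10, 0.40, 0.40, 0.10],   # model 1
    [0.25, 0.25, 0.25, 0.25],   # model 2
    [0.05, 0.45, 0.45, 0.05],   # model 3
])
alpha = np.array([0.5, 0.3, 0.2])   # weights: nonnegative, sum to 1

# The mixed distribution P[X] = sum_j alpha_j * P_j[X]
mixed = alpha @ model_pmfs
print(mixed)   # [0.135 0.365 0.365 0.135]
```

The mixture of normalized pmfs with weights summing to 1 is itself a normalized pmf, which is what lets the post treat P[X] as just another distribution.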

Mathematically: the natural latent over P[X] is defined by (x,λ↦P[Λ=λ|X=x]), and naturality means that the distribution (x,λ↦P[Λ=λ|X=x]P[X=x]) satisfies the naturality conditions (mediation and redundancy).

We believe that there is some underlying factor which we will call "market factors" (Λ) such that if you control for "market factors", you no longer learn (approximately) anything about the price of, say, MSFT when you learn the price of AAPL. Further, if you order the stocks in the S&P 500 alphabetically, take the odd-indexed stocks in that list (i.e. A, AAPL, ABNB, ...) and call them the S&P250odd, and call the even-indexed ones (i.e. AAL, ABBV, ABT, ...) the S&P250even, you will come to (approximately) the same estimate of "market factors" by looking at either the S&P250odd or the S&P250even. This also means that if you estimate "market factors" by looking at the S&P250odd, then your estimate of the price of AAL will be approximately unchanged if you learn the price of ABT.

Then our theorem says: if an approximate natural latent exists over P[X], and that latent is robustly natural under changing the mixture weights α, then the same latent is approximately natural over Pj[X] for all j.

Anyway, if we find that the above holds for the weighted sum we use in practice, and we also find that it robustly[1] holds when we change the weights, that actually means that *all* of our market price models take "market factors" into account.

Alternatively stated, it means that if one of the models was written by an intern who procrastinated until the end of his internship and then on the last morning wrote `def predict_price(ticker): return numpy.random.lognormal()`, then our weighted sum is *not* robust to changes in the weights.

Is this a reasonable interpretation? If so, I'm pretty interested to see where you go with this.

[1] Terms and conditions apply. This information is not intended as, and shall not be understood or construed as, financial advice.

One point of confusion I still have is what a natural latent screens off information *relative to the prediction capabilities of*.

Let's say one of the models in the ensemble, "YTDA", knows the beginning-of-year price of each stock and uses "average year-to-date market appreciation" as its latent. Learning the average year-to-date appreciation of the S&P250odd will then tell it *approximately* everything about that latent, and learning the year-to-date appreciation of ABT will give it almost no information it knows how to use about the year-to-date appreciation of AMGN.

So *relative to the predictive capabilities of the YTDA model*, I think it is true that "average year-to-date market appreciation" is a natural latent.

However, another model "YTDAPS" in the ensemble might use "*per-sector* average year-to-date market appreciation" as its latent. Since both the S&P250even and S&P250odd contain plenty of stocks in each sector, it is again the case that once you know the YTDAPS latent conditioned on the S&P250odd, learning the price of ABT will not tell the YTDAPS model anything about the price of AMGN.

But then if both of these are latents, does that mean that your theorem proves that any weighted sum of natural latents is also itself a natural latent?

Let's see if I get this right...

- Let's interpret the set as the set of all possible visual sensory experiences (x1,…,xn), where xi defines the color of the ith pixel.
- Different distributions over elements of this set correspond to observing different objects; for example, we can have Pcar and Papple, corresponding to us predicting different sensory experiences when looking at cars vs. apples.
- Let's take some specific set of observations from which we'd be trying to derive a latent.
- We assume uncertainty regarding what objects generated the training-set observations, getting a mixture of distributions P[X]:=∑jαjPj[X].
- We derive a natural latent Λ for P[X] such that it stays natural for all allowed α.
- This necessarily implies that Λ also induces independence between different sensory experiences under each individual distribution in the mixture: under Pcar and under Papple.
- *If* the set contains some observations generated by cars *and* some observations generated by apples, *yet* a nontrivial latent over the entire set nonetheless exists, *then* this latent must summarize information about some *feature* shared by both objects.
  - For example, perhaps it transpired that all cars depicted in this dataset are red, and all apples in this dataset are red, so Λ ends up as "the concept of redness".

- This latent then could, prospectively, be applied to *new* objects. If we later learn of the existence of a third object, seeing which predicts yet another distribution over visual experiences, then Λ would "know" how to handle this "out of the box". For example, if we have a set of observations such that it contains some red cars and some red ink, then Λ would be natural over this set under both distributions, without us needing to recompute it.
- This trick could be applied for learning new "features" of objects. Suppose we have some established observation-sets Xcar and Xapple, which have nontrivial natural latents Λcar and Λapple. To find new "object-agnostic" latents, we can try to form new sets of observations from subsets of those observations, define corresponding distributions, and see if mixtures of distributions over those subsets have nontrivial latents.
- Formally: where Xredcar⊂Xcar and Xredapple⊂Xapple, we form X′=Xredcar∪Xredapple, and we want to see if we have a new Λ′ that induces (approximate) independence between all the observations in X′ both under the "apple" and the "car" distributions.
- Though note that it could be done the other way around as well: we could *first* learn the latents of "redness" and e.g. "greenness" by grouping all red-having and green-having observations, then try to find some subsets of those sets which also have nontrivial natural latents, and end up deriving the latent of "car" by grouping all red and green objects that happen to be cars.
  - (Which is to say, I'm not necessarily sure there's a sharp divide between "adjectives" and "nouns" in this formulation. "The property of car-ness" is interpretable as an adjective here, and "greenery" is interpretable as a noun.)

- I'd also expect that the latent over red cars, i.e. Λredcar, could be constructed out of Λcar and Λred (derived, respectively, from a pure-cars dataset and an all-red dataset)? In other words, if we simultaneously condition a dataset of red cars on a latent derived from a dataset of any-colored cars and a latent derived from a dataset of red-colored objects, then this combined latent would induce independence across the red-car observations (which Λcar wouldn't be able to do on its own, due to the instances sharing color-related information in addition to car-ness)?

- All of this is interesting mostly in the approximate-latent regime (this allows us to avoid the nonrobust-to-tiny-mixtures trap), and in situations in which we *already have* some established latents which we want to break down into interoperable features.
  - In principle, if we have e.g. two sets of observations that we already know correspond to nontrivial latents, e.g. Xcar and Xapple, we could *directly* try to find subsets of their union that correspond to new nontrivial latents, in the hopes of recovering some features that'd correspond to grouping observations along some other dimension.
  - But if we already have established "object-typed" probability distributions Pcar and Papple, then hypothesizing that the observations are generated by an arbitrary mixture of these distributions allows us to "wash out" any information that doesn't actually correspond to some *robustly shared* features of cars-or-apples.
  - That is: consider if the dataset is 99% cars, 1% apples. Then an approximately correct natural latent over it is basically just Λcar, maybe with some additional noise from apples thrown in. This is what we'd get if we used the "naive" procedure in (1) above. But if we're allowed to mix up the distributions, then ramping up the "apple" distribution (by raising its weight in α, say) would end up with low probabilities assigned to all observations corresponding to cars, and now the approximately correct natural latent over this dataset would have more apple-like qualities. The demand for the latent to be valid on *arbitrary* α then "washes out" all traces of car-ness and apple-ness, leaving only redness.


Is this about right? I'm getting a vague sense of some disconnect between this formulation and the OP...

This post walks through the math for a theorem. It’s intended to be a reference post, which we’ll link back to as-needed from future posts. The question which first motivated this theorem for us was: “Redness of a marker seems like maybe a natural latent over a bunch of parts of the marker, and redness of a car seems like maybe a natural latent over a bunch of parts of the car, but what makes redness of the marker ‘the same as’ redness of the car? How are they both instances of one natural thing, i.e. redness? (or ‘color’?)”. But we’re not going to explain in this post how the math might connect to that use-case; this post is just the math.

Suppose we have multiple distributions P1,…,Pk over the same random variables X1,…,Xn. (Speaking somewhat more precisely: the distributions are over the same set, and an element of that set is represented by values (x1,…,xn).) We take a mixture of the distributions: P[X]:=∑jαjPj[X], where ∑jαj=1 and α is nonnegative. Then our theorem says: if an approximate natural latent exists over P[X], and that latent is robustly natural under changing the mixture weights α, then the same latent is approximately natural over Pj[X] for all j.

Mathematically: the natural latent over P[X] is defined by (x,λ↦P[Λ=λ|X=x]), and naturality means that the distribution (x,λ↦P[Λ=λ|X=x]P[X=x]) satisfies the naturality conditions (mediation and redundancy).

The theorem says that, if the joint distribution (x,λ↦P[Λ=λ|X=x]∑jαjPj[X=x]) satisfies the naturality conditions robustly with respect to changes in α, then (x,λ↦P[Λ=λ|X=x]Pj[X=x]) satisfies the naturality conditions for all j. "Robustness" here can be interpreted in multiple ways - we'll cover two here, one for which the theorem is trivial and another more substantive, but we expect there are probably more notions of "robustness" which also make the theorem work.

## Trivial Version

First notion of robustness: the joint distribution (x,λ↦P[Λ=λ|X=x]∑jαjPj[X=x]) satisfies the naturality conditions to within ϵ for all values of α (subject to ∑jαj=1 and α nonnegative).

Then: the joint distribution (x,λ↦P[Λ=λ|X=x]∑jαjPj[X=x]) satisfies the naturality conditions to within ϵ specifically for αj=δjk, i.e. α which is 0 in all entries except a 1 in entry k. In that case, the joint distribution is (x,λ↦P[Λ=λ|X=x]Pk[X=x]), therefore Λ is natural over Pk. Invoke for each k, and the theorem is proven.
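The one-hot collapse in that argument is just linear algebra; a minimal sketch (toy component distributions, made up for illustration):

```python
import numpy as np

# With alpha_j = delta_jk (one-hot), the mixture sum_j alpha_j P_j[X]
# collapses to the single component P_k[X]. Toy numbers.
components = np.array([
    [0.2, 0.5, 0.3],
    [0.6, 0.1, 0.3],
])
k = 1
alpha = np.zeros(len(components))
alpha[k] = 1.0                      # alpha = (0, 1): all weight on component k
mixed = alpha @ components          # equals components[k] exactly
```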

... but that's just abusing an overly-strong notion of robustness. Let's do a more interesting one.

## Nontrivial Version

Second notion of robustness: the joint distribution (x,λ↦P[Λ=λ|X=x]∑jαjPj[X=x]) satisfies the naturality conditions to within ϵ, and the gradient of the approximation error with respect to (allowed) changes in α is (locally) zero.

We need to prove that the joint distributions (x,λ↦P[Λ=λ|X=x]Pj[X=x]) satisfy both the mediation and redundancy conditions for each j. We’ll start with redundancy, because it’s simpler.

## Redundancy

We can express the approximation error of the redundancy condition with respect to Xi under the mixed distribution as

DKL(P[Λ,X]||P[X]P[Λ|Xi])=EX[DKL(P[Λ|X]||P[Λ|Xi])]

where, recall, P[Λ,X]:=P[Λ|X]∑jαjPj[X].

We can rewrite that approximation error as:

EX[DKL(P[Λ|X]||P[Λ|Xi])]

=∑X∑jαjPj[X]DKL(P[Λ|X]||P[Λ|Xi])

=∑jαjEjX[DKL(P[Λ|X]||P[Λ|Xi])]

Note that Pj[Λ|X]=P[Λ|X] is the same under all the distributions (by definition), so:

=∑jαjDKL(Pj[Λ,X]||P[Λ|Xi]Pj[X])

and by factorization transfer:

≥∑jαjDKL(Pj[Λ,X]||Pj[Λ|Xi]Pj[X])

In other words: if ϵji is the redundancy error with respect to Xi under distribution j, and ϵi is the redundancy error with respect to Xi under the mixed distribution P, then

ϵi≥∑jαjϵji

The redundancy error of the mixed distribution is at least the weighted average of the redundancy errors of the individual distributions.

Since the αjϵji terms are nonnegative, that also means

ϵji≤(1/αj)ϵi

which bounds the approximation error for the ith redundancy condition under distribution j. Also note that, insofar as the latent is natural across multiple α values, we can use the α value with largest αj to get the best bound for ϵji.
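Both redundancy bounds can be spot-checked numerically. A sketch for the smallest nontrivial case, X=(X1,X2) with everything binary; the shared P[Λ|X] table, the two component distributions, and the weights are all made up for illustration:

```python
import numpy as np

def redundancy_error(p_x, p_lam_given_x, i):
    """eps_i = D_KL( Q[Lam,X] || Q[Lam|X_i] Q[X] ) for X = (X1, X2), binary.

    p_x: shape (2,2) pmf over (x1, x2); p_lam_given_x: shape (2,2,2), the
    shared P[Lam|X]; i: which coordinate X_i to condition on (0 or 1)."""
    joint = p_x[..., None] * p_lam_given_x          # Q[x1, x2, lam]
    p_xi_lam = joint.sum(axis=1 - i)                # Q[x_i, lam]
    p_lam_given_xi = p_xi_lam / p_x.sum(axis=1 - i)[:, None]
    denom = p_lam_given_xi[:, None, :] if i == 0 else p_lam_given_xi[None, :, :]
    return float((joint * np.log(joint / (p_x[..., None] * denom))).sum())

# Shared latent P[Lam=1 | x1, x2] (made-up table), two components, weights.
p_lam1 = np.array([[0.9, 0.6], [0.4, 0.1]])
plx = np.stack([1 - p_lam1, p_lam1], axis=-1)
p1 = np.array([[0.30, 0.20], [0.10, 0.40]])
p2 = np.array([[0.25, 0.25], [0.25, 0.25]])
alpha = np.array([0.6, 0.4])
p_mix = alpha[0] * p1 + alpha[1] * p2

for i in (0, 1):
    eps_mix = redundancy_error(p_mix, plx, i)
    eps = [redundancy_error(p, plx, i) for p in (p1, p2)]
    assert eps_mix >= alpha @ eps - 1e-12            # eps_i >= sum_j alpha_j eps_j_i
    assert all(eps[j] <= eps_mix / alpha[j] + 1e-12 for j in range(2))
```

Note that the redundancy bounds hold for *any* choice of tables here, since the derivation above didn't use robustness; it's the mediation bound in the next section that leans on it.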

## Mediation

Mediation relies more heavily on the robustness of naturality to changes in α. The gradient of the mediation approximation error with respect to α is:

∂/∂αj DKL(P[Λ,X]||P[Λ]∏iP[Xi|Λ])

=∑X,ΛP[Λ|X]Pj[X]ln(P[Λ,X]/(P[Λ]∏iP[Xi|Λ]))

(Note: it’s a nontrivial but handy fact that, in general, the change in approximation error of a distribution P[Y] over some DAG, dDKL(P[Y]||∏iP[Yi|Ypa(i)]), under a change dP is ∑Y dP[Y] ln(P[Y]/∏iP[Yi|Ypa(i)]).)
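That fact can be spot-checked by finite differences; a sketch for the simplest case, a two-variable DAG with no edges, so the factorization is the product of marginals (all numbers made up, and note the factorization is recomputed from the perturbed P, which is exactly the part of the claim that's nontrivial):

```python
import numpy as np

def kl_to_product(p):
    # D_KL(P[Y1,Y2] || P[Y1] P[Y2]); the factorization is recomputed from P,
    # so its own first-order variation must drop out for the claim to hold.
    fact = np.outer(p.sum(axis=1), p.sum(axis=0))
    return float((p * np.log(p / fact)).sum())

p = np.array([[0.30, 0.20], [0.15, 0.35]])
dp = np.array([[0.01, -0.01], [-0.02, 0.02]])    # perturbation, sums to 0

fact = np.outer(p.sum(axis=1), p.sum(axis=0))
predicted = float((dp * np.log(p / fact)).sum())  # sum_Y dP[Y] ln(P[Y]/prod ...)

t = 1e-6
numeric = (kl_to_product(p + t * dp) - kl_to_product(p)) / t
assert abs(numeric - predicted) < 1e-4
```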

Note that this gradient must be zero along allowed changes in α, which means the changes must respect ∑jαj=1. That means the gradient must be constant across indices j:

constant=∑X,ΛP[Λ|X]Pj[X]ln(P[Λ,X]/(P[Λ]∏iP[Xi|Λ]))

To find that constant, we can take a sum weighted by αj on both sides:

constant=∑jαj∑X,ΛP[Λ|X]Pj[X]ln(P[Λ,X]/(P[Λ]∏iP[Xi|Λ]))

=DKL(P[Λ,X]||P[Λ]∏iP[Xi|Λ])
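That collapse of the α-weighted sum into the mixed-distribution KL is just linearity of ∑jαjPj[X]=P[X] inside the expectation, and is easy to verify numerically; a sketch with made-up binary tables (no robustness assumed, since this identity doesn't need it):

```python
import numpy as np

# Toy setup: X = (X1, X2) binary, shared P[Lam|X], two components, weights.
p_lam1 = np.array([[0.8, 0.55], [0.45, 0.2]])
plx = np.stack([1 - p_lam1, p_lam1], axis=-1)        # P[lam | x1, x2]
p1 = np.array([[0.30, 0.20], [0.10, 0.40]])
p2 = np.array([[0.25, 0.25], [0.25, 0.25]])
alpha = np.array([0.7, 0.3])
p_mix = alpha[0] * p1 + alpha[1] * p2

joint = p_mix[..., None] * plx                       # P[Lam, X]
p_lam = joint.sum(axis=(0, 1))
fact = joint.sum(axis=1)[:, None, :] * joint.sum(axis=0)[None, :, :] / p_lam
log_ratio = np.log(joint / fact)          # ln( P[Lam,X] / (P[Lam] prod_i P[X_i|Lam]) )

# Left: sum_j alpha_j sum_{X,Lam} P[Lam|X] P_j[X] ln(...); right: the mixed KL.
terms = [float(((p[..., None] * plx) * log_ratio).sum()) for p in (p1, p2)]
lhs = float(alpha @ terms)
rhs = float((joint * log_ratio).sum())
assert abs(lhs - rhs) < 1e-12
```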

So, robustness tells us that the approximation error under the mixed distribution can be written as

DKL(P[Λ,X]||P[Λ]∏iP[Xi|Λ])=constant=∑X,ΛP[Λ|X]Pj[X]ln(P[Λ,X]/(P[Λ]∏iP[Xi|Λ]))

for any j.

Next, we’ll write out P[Λ,X] as a mixture weighted by α, and use Jensen’s inequality on that mixture and the logarithm:

=Ej[ln((∑jαjP[Λ|X]Pj[X])/(P[Λ]∏iP[Xi|Λ]))]

≥Ej[∑jαjln(P[Λ|X]Pj[X]/(P[Λ]∏iP[Xi|Λ]))]

=∑jαjDKL(Pj[Λ,X]||P[Λ]∏iP[Xi|Λ])

Then factorization transfer gives:

≥∑jαjDKL(Pj[Λ,X]||Pj[Λ]∏iPj[Xi|Λ])

Much like redundancy, if ϵji is the mediation error with respect to Xi under distribution j (note that we’re overloading notation, ϵ is no longer the redundancy error), and ϵi is the mediation error with respect to Xi under the mixed distribution P, then the above says

ϵi≥∑jαjϵji

Since the αjϵji terms are nonnegative, that also means

ϵji≤(1/αj)ϵi

which bounds the approximation error for the ith mediation condition under distribution j.
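As an end-to-end sanity check, here's a sketch on a toy family where the latent is exactly natural over every component (Λ=X1=X2 deterministically, with the components differing only in their Bernoulli parameter), so the latent is robustly natural over any mixture and every ε above should come out to zero, making both bounds hold trivially. The helper `mediation_error` and all tables are made up for illustration:

```python
import numpy as np

def mediation_error(p_x, plx):
    """D_KL( Q[Lam,X] || Q[Lam] Q[X1|Lam] Q[X2|Lam] ) for binary X1, X2, Lam."""
    joint = p_x[..., None] * plx                  # Q[x1, x2, lam]
    p_lam = joint.sum(axis=(0, 1))
    fact = joint.sum(axis=1)[:, None, :] * joint.sum(axis=0)[None, :, :] / p_lam
    mask = joint > 0                              # skip zero-probability cells
    return float((joint[mask] * np.log(joint[mask] / fact[mask])).sum())

# Lam = X1 = X2 deterministically; P[Lam|X] off the support is arbitrary.
plx = np.zeros((2, 2, 2))
plx[0, 0, 0] = 1.0                                # P[Lam=0 | X=(0,0)] = 1
plx[1, 1, 1] = 1.0                                # P[Lam=1 | X=(1,1)] = 1
plx[0, 1], plx[1, 0] = [0.5, 0.5], [0.5, 0.5]

def diag(q):   # X1 = X2 ~ Bernoulli(q)
    return np.array([[1 - q, 0.0], [0.0, q]])

p1, p2 = diag(0.3), diag(0.7)
alpha = np.array([0.6, 0.4])
p_mix = alpha[0] * p1 + alpha[1] * p2

errors = [mediation_error(p, plx) for p in (p1, p2, p_mix)]
assert max(errors) < 1e-12   # exactly natural: eps_j = eps_mix = 0
```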