Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Informal Problem Statement

We have an information channel between Alice and Bob. Alice picks a function. Bob gets to see the value of that function at some randomly chosen input values... but doesn't know exactly which randomly chosen input values. He does get to see the randomly chosen values of some of the input variables, but not all of them.

The problem is to find which functions Alice should pick with what frequencies, in order to maximize the channel capacity.

Why Am I Interested In This?

I'm interested in characterizing functions which are "insensitive" to subsets of their input variables, especially in high-dimensional spaces. For instance, xor of a bunch of random bits is maximally sensitive: if we have a 50/50 distribution over any one of the bits but know all the others, then all information about the output is wiped out. On the other end of the spectrum, a majority function of a bunch of random bits is highly insensitive: if we have a 50/50 distribution over, say, 10% of the bits, but know all the others, then in most cases we can correctly guess the function's output.
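To make the XOR-vs-majority contrast concrete, here is a small simulation sketch (my own illustration, not part of the problem statement): an observer knows a fraction of the input bits, and guesses the output by taking the majority output of the function over all completions of the unknown bits.

```python
import itertools
import random

def xor_f(bits):
    return sum(bits) % 2

def majority_f(bits):
    return int(sum(bits) * 2 > len(bits))

def guess_accuracy(f, n, known_frac, trials=2000, seed=0):
    """Estimate how often an observer guesses f's output correctly when
    the first k = floor(n * known_frac) bits are known and the rest are
    uniform 50/50. The guess is the majority output of f over all
    completions of the unknown bits (so keep n - k small)."""
    rng = random.Random(seed)
    k = int(n * known_frac)
    correct = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        ones = sum(f(x[:k] + list(rest))
                   for rest in itertools.product([0, 1], repeat=n - k))
        guess = int(ones * 2 > 2 ** (n - k))   # ties guessed as 0
        correct += (guess == f(x))
    return correct / trials
```

With 90% of 15 bits known, the guesser is near chance for XOR but right roughly 90% of the time for majority.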

I have an argument here that the vast majority of functions are pretty highly sensitive: as the number of unknown inputs increases, information falls off exponentially quickly. On the other hand, the example of majority functions shows that this is not the case for all functions.

Intuitively, in the problem, Alice needs to mostly pick from "insensitive" functions, since Bob mostly can't distinguish between "sensitive" functions.

... And Why Am I Interested In That?

I expect that natural abstractions have to be insensitive features of the world. After all, different agents don't all have exactly the same input data. So, a feature has to be fairly insensitive in order for different agents to agree on its value.

In fact, we could view the problem statement itself as a very rough way of formulating the coordination problem of language: Alice has to pick some function f which takes in an image and returns 0/1 representing whether the image contains an apple. (The choice of function defines what "apple" means, for our purposes.) Then Alice wants to teach baby Bob what "apple" means. So, there's some random stuff around them, and Alice points at the random stuff and says "apple" for some of it, and says something besides "apple" the rest of the time. Baby Bob is effectively observing the value of the function at some randomly-chosen points, and needs to back out which function Alice intended. And Bob doesn't have perfect access to all the bits Alice is seeing, so the function has to be robust.

Formal Problem Statement

Consider the following information channel between Alice and Bob:

  • Alice picks a function $f: \{0,1\}^n \to \{0,1\}$.
  • Nature generates $m$ possible inputs $X_1, \dots, X_m$, each sampled uniformly and independently from $\{0,1\}^n$.
  • Nature also generates $m$ subsets $S_1, \dots, S_m$ of $\{1, \dots, n\}$, each sampled uniformly and independently from the subsets of size $s$.
  • Bob observes $(S_i, (X_i)_{S_i}, Y_i)$ for each $i$, where $Y_i = f(X_i)$.

The problem is to compute the distribution over $f$ which achieves the channel capacity, i.e.

$$P^*[f] = \underset{P[f]}{\operatorname{argmax}}\; I\left(f \,;\, (S_i, (X_i)_{S_i}, Y_i)_{i=1}^m\right)$$
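For reference, one use of the channel can be sketched in a few lines (my own illustration; $f$ is represented as a truth table over $\{0,1\}^n$, which only works for small $n$):

```python
import random

def simulate_channel(f_table, n, m, s, rng):
    """One use of the Alice->Bob channel: f_table is Alice's function,
    given as a truth table of length 2**n (a tuple of 0/1s).
    Returns Bob's m observations (S_i, visible bits of X_i, Y_i)."""
    observations = []
    for _ in range(m):
        x = [rng.randint(0, 1) for _ in range(n)]    # X_i ~ Uniform({0,1}^n)
        S = tuple(sorted(rng.sample(range(n), s)))   # S_i: uniform size-s subset
        x_index = int("".join(map(str, x)), 2)       # index into the truth table
        y = f_table[x_index]                         # Y_i = f(X_i)
        visible = tuple(x[j] for j in S)             # the input bits Bob sees
        observations.append((S, visible, y))
    return observations
```

The open question is then which distribution over `f_table` maximizes the mutual information between Alice's choice and Bob's observations.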

Bounty/Prize Info

The problem is to characterize the channel-throughput-maximizing distribution $P^*[f]$. The characterization should make clear the answers to questions like:

  • What functions have the highest probability?
  • How quickly does the probability fall off as we move "away" from the most probable functions, and what do marginally-less-probable functions look like?
  • How much probability is assigned to a typical function chosen uniformly at random?
  • Which functions, if any, are assigned zero probability?

All of these should have human-interpretable answers. No credit will be given for e.g. existence and uniqueness alone (the optimization is convex in the happy direction, so that's pretty easy anyway), or a program which would compute $P^*[f]$ given superexponential compute.

I may give partial or even full awards for variations on the above problem, depending on my own judgement of how useful they are. For instance, any of the following seem reasonable and potentially valuable:

  • Different domain/range for $f$.
  • (Nontrivial) big-O restrictions on the size $s$ of the subsets $S_i$.
  • Asymptotic results in general.

Deadline is end of June. If there are multiple qualifying answers, then I will award prize money based on how useful I judge each answer to be. 

11 comments

Here's an argument that I think works in the limit where $m \to \infty$. For very large $m$, each subset $S$ of size $s$ will occur many times as an $S_i$. Moreover, for each set of coordinate values $x_S \in \{0,1\}^s$, there will be many $i$ such that $S_i = S$ and $(X_i)_S = x_S$. Therefore, using the law of large numbers, for any subset $S$ of size $s$ and any set of coordinate values $x_S$, Bob can infer $|\{x : x \text{ agrees with } x_S \text{ on } S,\ f(x) = 1\}|$, i.e. the number of inputs $x$ that agree with $x_S$ at the coordinates indexed by $S$ and for which $f(x) = 1$. Additionally, the random variable $(S_i, (X_i)_{S_i}, Y_i)_{i=1}^m$ contains no other information about $f$ that is not contained in this set of statistics.

Given a function $f$, let $[f]$ denote the set of all functions $g$ such that for each subset $S$ of size $s$ and each set of coordinate values $x_S \in \{0,1\}^s$, $|\{x : x \text{ agrees with } x_S \text{ on } S,\ g(x) = 1\}| = |\{x : x \text{ agrees with } x_S \text{ on } S,\ f(x) = 1\}|$. The argument of the previous paragraph then implies that upon observing $(S_i, (X_i)_{S_i}, Y_i)_{i=1}^m$, Bob can deduce the equivalence class $[f]$, but no other information about $f$.

If we have a random variable $A$ and we define another random variable $B$ as a deterministic function $B = h(A)$, then it is easy to show that the set of distributions of $A$ which maximize the mutual information between $A$ and $B$ are precisely those for which $B$ has a uniform distribution. Applying this to the above setup, we see that the distributions over $f$ which maximize the channel capacity are those for which $[f]$ is uniformly distributed over all the values that it can take.
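The fact used in this paragraph is easy to check numerically: when $B = h(A)$ is deterministic, $I(A;B) = H(B)$, which is maximized by making $B$ uniform. A quick sketch with toy variables of my own choosing:

```python
from collections import defaultdict
from math import log2

def mi_deterministic(p_a, h):
    """I(A; B) when B = h(A) is a deterministic function of A.
    In that case I(A; B) = H(B), so we just compute the entropy of B."""
    p_b = defaultdict(float)
    for a, p in p_a.items():
        p_b[h(a)] += p
    return -sum(p * log2(p) for p in p_b.values() if p > 0)

# h collapses four values of A into two "equivalence classes".
h = lambda a: a // 2
balanced = {0: 0.25, 1: 0.25, 2: 0.4, 3: 0.1}   # class probs: 0.5 / 0.5
skewed   = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}     # class probs: 0.8 / 0.2
```

Any distribution making the classes uniform achieves the maximum (1 bit here), regardless of how the mass is split within a class.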

If we suppose in addition that $f$ has the maximum entropy distribution subject to this constraint, we find that:

$$P^*[f] = \frac{1}{N \cdot |[f]|},$$

where $N$ is the number of distinct equivalence classes.

Intuitively, this says that the probability of a given function $f$ is inversely proportional to the number of other functions $g$ that have the same number of ones as $f$ on every hyperplane defined by fixing some set of $s$ coordinates. This seems to correspond roughly to sensitivity: we intuitively expect there to be a lot of such functions $g$ when the number of ones that $f$ outputs on most hyperplanes is approximately half of the total number of points on that hyperplane, and saying that a function's output is approximately half ones and half zeros on a given hyperplane is roughly saying that $f$ is sensitive to the remaining unspecified coordinates.

It's not obvious to me that the above expression for $P^*[f]$ is feasible to compute in practice, but I think it is fairly naturally interpretable.
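For very small $n$ and $s$, the expression can be brute-forced directly, which also gives a feel for the equivalence classes. A sketch (my own code, not the commenter's; $n = 3$, $s = 1$, so functions are length-8 truth tables):

```python
from collections import Counter
from fractions import Fraction
from itertools import combinations, product

def signature(f_table, n, s):
    """The statistics Bob learns in the m -> infinity limit: for each
    size-s subset S and each assignment to the coordinates in S, the
    number of ones f takes on that subcube ("hyperplane")."""
    inputs = list(product([0, 1], repeat=n))
    sig = []
    for S in combinations(range(n), s):
        for vals in product([0, 1], repeat=s):
            sig.append(sum(f_table[i] for i, x in enumerate(inputs)
                           if all(x[j] == v for j, v in zip(S, vals))))
    return tuple(sig)

n, s = 3, 1
all_fs = list(product([0, 1], repeat=2 ** n))          # all 256 functions
class_size = Counter(signature(f, n, s) for f in all_fs)
N = len(class_size)                                    # number of classes
# Max-entropy capacity-achieving distribution: P[f] = 1 / (N * |[f]|)
P = {f: Fraction(1, N * class_size[signature(f, n, s)]) for f in all_fs}
```

One can check, for instance, that the 3-bit XOR and its negation land in the same equivalence class (every size-1 subcube contains exactly two ones for both), matching the intuition that maximally sensitive functions cannot be told apart.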

The key reason the problem is tractable in the case $m \to \infty$ is that the law of large numbers means the probabilities wash out and we get an essentially deterministic problem. In the case where $m$ is finite, this won't happen, and in particular I expect you'll run into complications that arise from interaction terms, where the method for combining the information from two observations $(S_i, (X_i)_{S_i}, Y_i)$ and $(S_j, (X_j)_{S_j}, Y_j)$ with $S_i \neq S_j$ is not clean.

I expect you're more likely to end up with a tractable solution if you rephrase the problem statement slightly so that the subsets of inputs over which you're aggregating outputs when observed (in the case above, these subsets are the hyperplanes defined by fixing the coordinates in $S$) are disjoint or identical. (For example, taking all the $S_i$ to be random but equal would ensure this.) It strikes me that there might be a formulation like this that still captures the key features of the problem you're interested in, but I've not thought about this in detail.

This answer clears the bar for at least some prize money to be paid out, though the amount will depend on how far other answers go by the deadline.

One thing which would make it stronger would be to provide a human-interpretable function for each equivalence class (so Alice can achieve the channel capacity by choosing among those functions).

The suggestions for variants of the problem are good suggestions, and good solutions to those variants would probably also qualify for prize money.

I'm interested in characterizing functions which are "insensitive" to subsets of their input variables, especially in high-dimensional spaces.

There's a field called "Analysis of boolean functions" (essentially Fourier analysis of functions $f: \{-1,1\}^n \to \{-1,1\}$) that seems relevant to this question and perhaps to your specific problem statement. In particular, the notion of "total influence" of a boolean function is meant to capture its sensitivity (e.g. the XOR function on all inputs has maximal total influence). The standard reference is Ryan O'Donnell's Analysis of Boolean Functions; see section 2.3 for total influence. Boolean functions with low influence (i.e. "insensitive" functions) are an important topic in this field, so I expect there are some relevant results (see e.g. tribes functions and the KKL theorem, though those specifically address a somewhat different question than your problem statement).

I'm not sure whether this is helpful, but this reminds me of Error Correction Codes, a way of transmitting information through a noisy channel that trades bandwidth for reliability by encoding the intended message redundantly.

An explanation that I found helpful when learning about them was that you can think of a packet of N bits as specifying one corner of an N-dimensional hypercube, and an ECC as saying that you'll only intentionally transmit certain corners and not others.  If you select a subset of corners such that no allowed corner is adjacent to any other, then a 1-bit error will always land you on a disallowed corner, so the receiver will know an error occurred.  If all allowed corners are some distance from all other allowed corners, then you can guess the most likely intended corner based on distance from the corner received.

An XOR of all bits is maximally noisy because every corner with a value of "1" is surrounded by corners with a value of "0" and vice-versa.  The corners corresponding to a given answer are maximally dispersed, so every error changes the result.

The inverse of that strategy is to designate exactly-opposite corners as "0" and "1", and then map all the remaining corners by which of those they're closer to.  In other words, slice the hypercube in half, and then assign the same value to all corners in a given half.  (The "majority of bits" function does exactly this.)
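A minimal sketch of the opposite-corners strategy (my own toy example: two allowed corners on the 3-cube, decoded by nearest Hamming distance, which here is just a majority vote):

```python
def hamming(a, b):
    """Number of coordinates where two corners of the hypercube differ."""
    return sum(x != y for x, y in zip(a, b))

# Only two allowed corners, at opposite ends of the 3-cube.
codewords = [(0, 0, 0), (1, 1, 1)]

def decode(received):
    """Nearest-corner decoding: pick the allowed corner closest in
    Hamming distance. For this code it's just a majority vote."""
    return min(codewords, key=lambda c: hamming(c, received))
```

Any single-bit error leaves the received corner strictly closer to the intended codeword than to the other one, so it is always corrected.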

I don't think I can personally convert that into an answer to the stated problem, but maybe it will give someone else a thread to pull on?

I tried and failed to formalize this. Let me sketch the argument, to show where I ran into problems.

Consider a code $C \subseteq \{0,1\}^n$ with a corresponding decoding function $D: \{0,1\}^n \to C$, and assume that $m^2 \ll |C|$.

For any function $h: C \to \{0,1\}$ we can define $f = h \circ D$. We then choose $f$ randomly from the $2^{|C|}$ such functions. We want the code to be such that for random $X$ and random $S$, the information $X_S$ is enough to deduce $D(X)$ with high probability. Then each $Y_i$ would give Bob one bit of information about $h$ (its value at the point $D(X_i)$) and hence one bit about $f$. Here we use the assumption $m^2 \ll |C|$ to avoid collisions $D(X_i) = D(X_j)$.

Unfortunately, this argument does not work. The issue is that $X$ is chosen at random, instead of as an encoding of a message. Because of this, we should not expect $X$ to be close to a valid codeword, so we should not expect there to be a decoding method that will give consistent decodings of $X$ for different values of $S$.

It is not clear to me if this is a bug in the solution or a bug in the problem! The world is not random, so why do we want $X$ to be uniform in $\{0,1\}^n$?

Here's a possible construction, building on Dweomite's suggestion to use error correction codes. We treat the selection of the random variables that Bob sees as a binary erasure channel with probability of erasure $1 - s/n$. Choose some error-correcting code for the BEC. Now, Alice constructs a function in the following way: For each input $x \in \{0,1\}^n$, randomly choose a subset of size $s$. Apply the decoder for our error-correcting code to the bits in that subset (treating the rest as erased); we obtain some string $z$ [1]. Alice chooses a function $f$ which is constant on pre-images of a given string $z$ under this mapping. She could either choose such an $f$ uniformly at random, or possibly treat the selection of inputs as another binary erasure channel (on sequences of 1s and 0s indexed by the string $z$) and use another error-correcting code.


  1. You could also repeat this procedure several times and take a majority vote. ↩︎
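One way to make this concrete is to instantiate the ECC as a simple repetition code (my choice, purely illustrative; the per-input subset selection is made deterministic via a hash of the input so that $f$ is a well-defined function, and the final map from decoded string to output is taken to be parity, again just for illustration):

```python
import random

def build_f(n, s, r):
    """A hypothetical instance of the construction above, using a
    repetition code as the ECC: inputs have n = k*r bits, with message
    bit i repeated r times. For each input x we pick a pseudorandom
    size-s subset of positions to survive 'erasure', decode each
    message bit by majority vote over its surviving copies (ties and
    empty votes default to 0), and make f depend on x only through the
    decoded string z."""
    assert n % r == 0
    k = n // r

    def decode(x):
        rng = random.Random(hash(x))       # per-input "random" subset, deterministic
        kept = set(rng.sample(range(n), s))
        z = []
        for i in range(k):
            copies = [x[i * r + j] for j in range(r) if i * r + j in kept]
            z.append(int(sum(copies) * 2 > len(copies)))
        return tuple(z)

    # f is constant on preimages of z; here, say, the parity of z.
    return lambda x: sum(decode(x)) % 2
```

With few enough erasures per repetition block, inputs near a codeword decode stably, which is the property the construction is after.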

I don't follow the construction. Alice doesn't know x and S when choosing f. If she is taking the preimage for all 2^n values of x, each with a random S, she will have many overlapping preimages.

Yes. But I don't see why that's a problem? Which preimage a given x would be assigned to is random. The hope is that repeated trials would give the same preimage frequently enough for it to be a meaningful partition of the input space. How well it would work depends on the details of the ECC but I suspect it would work reasonably well in many cases. You could also just apply the decoder directly to the string x but I thought that might be a bit more unnatural since in reality Bob will never see the full string.

sampled uniformly and independently

🤔 I don't believe this definition fits the "apple" example - uniform samples from a concept space of "apple or not apple" would NEVER™ contain any positive example (almost everything is "not apple")... or what assumption am I missing that would make the relative target volume more than ~zero (for high $n$)?

Bob will observe a highly optimized set of Y, carefully selected by Alice, so the corresponding inputs will be vastly correlated and interdependent at least for the positive examples (centroid first, dynamically selected for error-correction later 🤷‍♀️), not at all selected by Nature, right?

I'm fairly sure you can get a result something like "it's not necessary to put positive probability mass on two different functions that can't be distinguished by observing only s bits", so some functions can get zero probability, e.g. the XOR of all combinations of at least s+1 bits.

edit: The proof is easy. Let $f_1, f_2$ be two such indistinguishable functions that you place positive probability on, $F$ be a random variable for the function, and $F'$ be $F$ but with all probability mass for $f_2$ replaced by $f_1$. Then $P[O \mid F = f_1] = P[O \mid F = f_2]$, where $O$ denotes Bob's observations. But this means merging $f_2$ into $f_1$ changes neither the marginal distribution of $O$ nor $H(O \mid F)$, and so $I(F'; O) = I(F; O)$. You don't lose any channel capacity switching to $F'$.

This question is non-trivial even for $s = 0$. Here it becomes: let Alice choose a probability $p$ (which has to be of the form $k/2^n$, but this is irrelevant for large $n$) and Bob observes the binomially distributed number $\#\{i : Y_i = 1\} \sim \mathrm{Bin}(m, p)$. With which distribution should Alice choose $p$ to maximize the capacity of this channel?
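For small $m$, the capacity-achieving distribution over $p$ can be approximated numerically with the standard Blahut-Arimoto algorithm on a grid of $p$ values (a sketch; the coarse grid is my discretization, not part of the comment):

```python
from math import comb, exp, log

def blahut_arimoto(P, iters=200):
    """Standard Blahut-Arimoto iteration for a discrete memoryless
    channel given as a row-stochastic matrix P[x][y] = P(y|x).
    Returns an approximation of the capacity-achieving input distribution."""
    nx, ny = len(P), len(P[0])
    r = [1.0 / nx] * nx                    # current input distribution
    for _ in range(iters):
        # q[y]: output distribution induced by r
        q = [sum(r[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        # reweight each input by exp of its KL divergence from q
        w = [r[x] * exp(sum(P[x][y] * log(P[x][y] / q[y])
                            for y in range(ny) if P[x][y] > 0))
             for x in range(nx)]
        Z = sum(w)
        r = [v / Z for v in w]
    return r

# Binomial channel from the comment: Alice picks p, Bob observes Bin(m, p).
m = 5
ps = [i / 10 for i in range(11)]           # coarse grid over p
P = [[comb(m, y) * p ** y * (1 - p) ** (m - y) for y in range(m + 1)]
     for p in ps]
r = blahut_arimoto(P)
```

On this grid the noiseless endpoints $p = 0$ and $p = 1$ pick up more mass than the maximally noisy $p = 1/2$, as one would expect.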