Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is a linkpost for http://nostalgebraist.tumblr.com/post/160975105374/on-miris-logical-induction-paper

A few thoughts:

I agree that the LI criterion is "pointwise" in the way that you describe, but I think that this is both pretty good and as much as could actually be asked. A single efficiently computable trader can do a lot. It can enforce coherence on a polynomially growing set of sentences, search for proofs using many different proof strategies, enforce a polynomially growing set of statistical patterns, enforce reflection properties on a polynomially large set of sentences, etc. So, eventually the market will not be exploitable on all these things simultaneously, which seems like a pretty good level of accurate beliefs to have.
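To make the "a single trader can do a lot" point concrete, here is a hedged toy sketch (my own illustration, not the paper's formalism) of a trader that enforces coherence on a polynomially growing set of sentence pairs: whenever the market prices a sentence and its negation at a total below 1, buying both guarantees a payout of 1 for a cost below 1. The sentence names `A_i`/`~A_i` and the trade representation are assumptions for the example.

```python
def coherence_trader(prices, day):
    """Return a list of (sentence, shares) trades for this day.

    `prices` maps sentence ids to market prices in [0, 1].
    On day n we only inspect the first n sentence pairs, so the
    set of sentences covered grows polynomially with time while
    each day's work stays efficiently computable.
    """
    trades = []
    for i in range(day):  # polynomially growing set of pairs
        a, not_a = f"A_{i}", f"~A_{i}"
        if a in prices and not_a in prices:
            total = prices[a] + prices[not_a]
            if total < 1.0:    # underpriced: buy both, guaranteed payout 1
                trades.append((a, 1.0))
                trades.append((not_a, 1.0))
            elif total > 1.0:  # overpriced: sell both
                trades.append((a, -1.0))
                trades.append((not_a, -1.0))
    return trades

# Example: an incoherent market on day 2 (only the first pair is mispriced)
market = {"A_0": 0.3, "~A_0": 0.4, "A_1": 0.6, "~A_1": 0.4}
print(coherence_trader(market, 2))  # → [('A_0', 1.0), ('~A_0', 1.0)]
```

Since the market eventually cannot be exploited by this single trader, its prices must eventually be approximately coherent on every pair the trader watches.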

On the other side of things, it would be far too strong to ask for a uniform bound of the form "for every ε>0, there is some day t such that after step t, no trader can multiply its wealth by a factor more than 1+ε". This is because a trader can be hardcoded with arbitrarily many hard-to-compute facts. For every δ, there must eventually be a day t′>t on which the beliefs of your logical inductor assign probability less than δ to some true statement, at which point a trader who has that statement hardcoded can multiply its wealth by 1/δ. (I can give a construction of such a sentence using self-reference if you want, but it's also intuitively natural - just pick many mutually exclusive statements with nothing to break the symmetry.)
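The 1/δ factor is just the arithmetic of buying underpriced shares; a minimal sketch (my illustration, with a hypothetical `wealth_after_payout` helper):

```python
def wealth_after_payout(wealth, price):
    """Spend all wealth on shares of a true statement priced at `price`.

    The trader buys wealth / price shares; each share of a true
    statement eventually pays out 1, so wealth multiplies by 1/price.
    """
    shares = wealth / price
    return shares * 1.0  # payout of 1 per share

delta = 0.01
print(wealth_after_payout(100.0, delta) / 100.0)  # → 100.0, i.e. a factor of 1/delta
```

So for any fixed δ, the hardcoded trader violates the uniform 1+ε bound once the market underprices its statement that badly.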

Thus, I wouldn't think of exploitation by a trader as a "mistake", as you do in the post. A trader can gain money on the market if the market doesn't already know all facts that will be listed by the deductive process, but that is a very high bar. Doing well against finitely many traders is already "pretty good".

What you can ask for regarding uniformity is for some simple function f such that any trader T can multiply its wealth by at most a factor f(T). This is basically the idea of the mistake bound model in learning theory; you bound how many mistakes happen rather than when they happen. This would let you say more than the one-trader properties I mentioned in my first paragraph. In fact, LIA has this property; f(T) is just the initial wealth of the trader. You may therefore want to do something like setting traders' initial wealths according to some measure of complexity. Admittedly this isn't made explicit in the paper, but there's not much additional that needs to be done to think in this way; it's just the combination of the individual proofs in the paper with the explicit bounds you get from the initial wealths of the traders involved.
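One way to set initial wealths by complexity is the usual 2^(-complexity) weighting, sketched below. This is my own hedged illustration of the idea, not the paper's construction: `complexity_bits` stands in for any measure like program length, and `budget` caps the total wealth handed out across all traders, so the total exploitation the market can suffer is bounded (a mistake-bound-style guarantee on how much traders gain, not on when they gain it).

```python
def initial_wealth(complexity_bits, budget=1.0):
    """Assign trader T initial wealth f(T) = budget * 2^(-complexity(T))."""
    return budget * 2.0 ** (-complexity_bits)

# Summed over a countable family of traders at complexities 1, 2, 3, ...,
# the geometric series keeps total initial wealth below the budget.
total = sum(initial_wealth(k) for k in range(1, 50))
print(total)  # just under 1.0
```

Under this weighting, a simple trader that multiplies its wealth says a lot about the market's prices, while a very complex trader starts too poor to matter much.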

I basically agree completely on your last few points. The traders are a model class, not an ensemble method in any substantive way, and it is just confusing to connect them to the papers on ensemble methods that the LI paper references. Also, while I use the idea of logical induction to do research that I hope will be relevant to practical algorithms, it seems unlikely that any practical algorithm will look much like a LI. For one thing, finding fixed points is really hard without some property stronger than continuity, and you'd need a pretty good reason to put it in the inner loop of anything.