To what degree can the paper "Approval-directed agency and the decision theory of Newcomb-like problems" be expressed in the CTMU's mathematical metaphysics?
One man's random bit string is another man's ciphertext.
Ha, ha! As if the half-silvered mirror did different things on different occasions!
Ha, ha! As if the photon source were known to emit photons that were in all respects identical on different occasions!
The machine learning world is doing a lot of damage to society by confusing "is" with "ought", which, within AIXI, is equivalent to confusing its two unified components: Algorithmic Information Theory (compression) with Sequential Decision Theory (conditional decompression). This is a primary reason the machine learning world has failed to provide anything remotely approaching the level of funding for the Hutter Prize that would be required to attract talent away from grabbing all of the low-hanging fruit in the matrix-multiply hardware lottery ...
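To make the "is"/"ought" split concrete, here is a toy sketch of the two components kept separate: a Bayesian mixture predictor weighted by 2^-description_length stands in for Solomonoff induction (the "is"), and a one-step expectimax choice stands in for Sequential Decision Theory (the "ought"). The three-hypothesis class and the reward function are invented for illustration; nothing here is AIXI itself.

```python
# "Is" side: mixture over hypotheses weighted by 2^-length,
# a crude stand-in for Solomonoff induction over all programs.
# "Ought" side: one-step expectimax over actions.
HYPOTHESES = [(1, 0.9), (2, 0.5), (3, 0.1)]  # (length in bits, P(next bit = 1))

def mixture_prob_of_one(history):
    """Posterior-weighted probability that the next bit is 1."""
    posts = []
    for length, p1 in HYPOTHESES:
        like = 1.0
        for b in history:
            like *= p1 if b == 1 else (1.0 - p1)
        posts.append((2.0 ** -length) * like)  # prior * likelihood
    total = sum(posts)
    return sum(w * p1 for w, (_, p1) in zip(posts, HYPOTHESES)) / total

def best_action(history, reward):
    """Expectimax over a single step: maximize expected reward."""
    p1 = mixture_prob_of_one(history)
    return max(("bet_one", "bet_zero"),
               key=lambda a: p1 * reward(a, 1) + (1.0 - p1) * reward(a, 0))

def reward(action, bit):
    """Unit reward for a correct guess (invented for this example)."""
    return 1.0 if (action == "bet_one") == (bit == 1) else 0.0

print(best_action([1, 1, 1, 0, 1], reward))  # -> bet_one
```

Confusing the two components amounts to letting the reward function leak into the mixture weights, i.e., letting what you want distort what you predict.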
This seems to be a red-herring issue. There are clear differences in description complexity of Turing machines so the issue seems merely to require a closure argument of some sort in order to decide which is simplest:
Decide on the Turing machine for which the shortest program that simulates that Turing machine, while running on that same Turing machine, is minimal.
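One standard way to ground that closure argument is the invariance theorem; the sketch below uses conventional notation (K_U for Kolmogorov complexity relative to machine U, ℓ for program length), and U* is merely a label for the proposed fixed point, not established terminology:

```latex
% Invariance theorem: complexities relative to universal machines
% U and V differ by at most the length of a cross-interpreter.
K_U(x) \le K_V(x) + c_{UV},
\qquad c_{UV} = \ell(\text{shortest } U\text{-program that simulates } V)

% Proposed closure: prefer the universal machine whose
% self-interpreter is shortest.
U^{*} = \operatorname*{arg\,min}_{U} c_{UU}
      = \operatorname*{arg\,min}_{U} \ell(\text{shortest } U\text{-program that simulates } U)
```

The invariance theorem guarantees the choice of reference machine only shifts complexities by a constant; the self-simulation criterion is one proposal for pinning down that constant.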
Marcus Hutter provides a full formal approximation of Solomonoff induction which he calls AIXI-tl.
This is incorrect. AIXI is a Sequential Decision Theoretic AGI whose predictions are provided by Solomonoff Induction. AIXI-tl is an approximation of AIXI in which both Solomonoff Induction's predictions and Sequential Decision Theory's decision procedure are approximated (time-bounded by t and length-bounded by l).
Lossless compression is the correct unsupervised machine learning benchmark, and not just for language models. To understand this, it helps to read the Hutter Prize FAQ on why it doesn't use perplexity:
http://prize.hutter1.net/hfaq.htm
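The point can be demonstrated with an off-the-shelf compressor: lossless code length is an ungameable score, since every regularity the "model" fails to predict costs bits. Here zlib is only a crude stand-in for a learned model, and the two sample texts are invented for illustration:

```python
import random
import string
import zlib

def bits_per_character(text: str) -> float:
    """Bits per character of zlib's lossless encoding of `text`.
    The compressor stands in for a predictive model: better
    prediction means a shorter code, and decompression verifies
    the claim exactly (no perplexity-style wiggle room)."""
    data = text.encode("utf-8")
    return 8 * len(zlib.compress(data, 9)) / len(data)

structured = "the cat sat on the mat. " * 200
rng = random.Random(0)  # seeded so the comparison is reproducible
noise = "".join(rng.choice(string.ascii_letters) for _ in range(len(structured)))

print(bits_per_character(structured))  # far below 8: regularity is exploited
print(bits_per_character(noise))       # close to the letter entropy: little to predict
```

A model that merely memorizes the benchmark gains nothing here, because its own description length would have to be counted in the archive, which is exactly the accounting the Hutter Prize enforces.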
Although Solomonoff proved this in the 60s, people keep arguing about it because they keep thinking they can, somehow, escape from the primary assumption of Solomonoff's proof: computation. The natural sciences are about prediction. If you can't make a prediction you can't test your model. To mak...
The Hutter Prize for Lossless Compression of Human Knowledge reduced the value of The Turing Test to concerns about human psychology and society raised by Computer Power and Human Reason: From Judgment to Calculation (1976) by Joseph Weizenbaum.
Sadly, people are confused about the difference between the techniques for model generation and the techniques for model selection. This is no more forgivable than confusion between mutation and natural selection, and it gets to the heart of the philosophy of science prior to any notion of hypothesis...
Phenomenology entails "bracketing", which suspends judgment but may be considered, in terms of Quine, as quotation marks in an attempt to ascend to a level of discourse in which judgment is less contingent, hence more sound: source attribution. My original motivation for suggesting Wikipedia to Marcus Hutter as the corpus for his Hutter Prize for Lossless Compression of Human Knowledge was to create an objective criterion for selecting language models that best achieve source attribution on the road to text prediction. Indeed, I originally wa...
There's Occam's Guillotine and then there's Ockham's Guillotine. The latter is directly relevant to algorithmic information theory's truth claims and, if funded to the tune of the billions it should be, will terminate the quasi-religious squabbling over social policy that threatens to turn into another Thirty Years War in the not-too-distant future.