rotatingpaguro



Ah guess it's a typo then, and your use is a nonstandard one.

infinatary

I can't find this term anywhere else, and "infinitary" means something else so I think it's not a typo, is it your own neologism? If not, where is it from?

Partial answer: I input Solomonoff (2009), "Algorithmic Probability: Theory and Applications", into ChatGPT and Claude, asking the following question:

The attached article describes Solomonoff induction. Does it answer the question of whether there is always a UTM that induces a Solomonoff prior that approximates arbitrarily well a given positive distribution?
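To make the question precise, this is the standard definition I have in mind (my formalization, not taken from the article or the models' answers): the Solomonoff prior induced by a universal monotone machine $U$ is

```latex
% Solomonoff prior induced by a universal monotone machine U
% (sum over minimal programs p on which U's output extends x):
M_U(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
% The question asks: given a positive distribution \mu over strings,
% is there for every \varepsilon > 0 a choice of U such that
% |M_U(x) - \mu(x)| < \varepsilon\,\mu(x) \text{ for all } x \,?
```

The multiplicative form of the error bound here is one possible reading of "approximates arbitrarily well"; an additive bound would be another.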

ChatGPT annoyingly waffles back what's in the article, as it often did when I tried it on such things in the past. Claude instead is concise and precise. Complete answers follow.

Claude:

No, the article does not directly address the question of whether there always exists a universal Turing machine (UTM) that induces a Solomonoff prior that approximates arbitrarily well a given positive distribution.

The article focuses on describing Solomonoff induction and algorithmic probability more generally. It discusses properties like completeness, incomputability, diversity, and subjectivity of the Solomonoff prior, as well as applications to sequence prediction and grammar induction.

But it does not analyze the specific question of whether for any positive distribution there exists a choice of UTM that makes the corresponding Solomonoff prior arbitrarily close to that distribution. The article does state that the choice of reference machine affects the Solomonoff prior, and that by modifying the computer language it is possible to "express any conceivable a priori probability distributions." However, it does not formally prove that any positive distribution can be approximated arbitrarily well.

So in summary, while the article provides good background on Solomonoff induction, it does not directly address or prove the specific claim in the question. The universality of approximating arbitrary positive distributions with a Solomonoff prior by choosing the right UTM is not explicitly analyzed.

ChatGPT:

The article "Algorithmic Probability—Theory and Applications" by Ray J. Solomonoff discusses the concept of Algorithmic Probability (ALP) and its properties, including its completeness and incomputability. ALP is described as a method that is theoretically capable of discovering any regularity in a dataset using a relatively small sample. This feature, referred to as the completeness of ALP, implies that it can approximate the true probabilities of symbols in data generated by any unknown probabilistic source P, with a very small error rate. The error in probability approaches zero more rapidly than 1/n, indicating rapid convergence to correct probabilities. This convergence holds for any P describable by a computer program, including many functions that are formally incomputable.

However, ALP is also characterized by its incomputability, meaning it is impossible to find the truly best models with any certainty due to the infinity of models to be tested, some of which take an unacceptably long time to evaluate. Despite this, ALP's incomputability does not inhibit its practical use for prediction. In practice, approximations to ALP can be used, though it is impossible to determine how close these approximations are to the true ALP. The practical approach involves using approximations of ALP and methods to estimate their effectiveness in future problems (Out of Sample Error), methods to search for good models, and quick and simple methods to compare models. Despite its theoretical incomputability, ALP's approximations can approach completeness, offering little error in prediction if sufficient time is spent searching for good models. This aspect makes ALP a desirable feature in practical prediction systems despite its incomputable nature.

In summary, the article does address the question of whether there's always a Universal Turing Machine (UTM) that induces a Solomonoff prior approximating arbitrarily well a given positive distribution. It explains that while ALP in its true form is incomputable, in practice, approximations of ALP can be employed effectively. These approximations can approach the completeness of ALP, thereby allowing for accurate predictions and the discovery of regularities in data, albeit without the ability to ascertain how close these approximations are to the true ALP. This implies that while a UTM can be used to induce a Solomonoff prior, the exactness of this approximation in practice is limited by the incomputability of ALP and relies on the effectiveness of the approximation methods employed.
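For reference, the completeness property both answers paraphrase is usually stated as Solomonoff's convergence bound (my reconstruction of the standard result from memory, not a quote from the article):

```latex
% Expected cumulative squared prediction error of the Solomonoff
% prior M against a true computable source \mu:
\sum_{n=1}^{\infty} \mathbb{E}_{\mu}\!\left[
  \bigl( M(x_{n+1}{=}0 \mid x_{1:n}) - \mu(x_{n+1}{=}0 \mid x_{1:n}) \bigr)^{2}
\right] \le \frac{\ln 2}{2}\, K(\mu)
```

Since the total is finite, the per-step error must shrink faster than 1/n in the summable sense, which is the "1/n" claim above. Note that this bound concerns convergence for a fixed reference machine, not the choice of machine, so it does not by itself settle my original question.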

19. Recursive Taleb's surgeon

If someone is significantly relying on Taleb's surgeon to make a choice, then they are not competent enough to discern actual competence.

The actions are inferred from the argmax, but they are also inputs to the prediction models.

The actions by themselves being "inputs to the prediction models" does not distinguish CDT from EDT.

(To be continued, leaving now.)

I always thought AIXI uses CDT because the actions are inputs to the Turing machines rather than outputs: it's not looking at short Turing machines that output the action under consideration; the action is a given.
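Concretely, the expression I have in mind is AIXI's expectimax (reproduced from memory of Hutter's formulation, so treat it as a sketch rather than the exact definition):

```latex
% AIXI: the action sequence a_{1:m} appears as an *input* to the
% environment programs q; the machines output only observations
% and rewards o_{1:m} r_{1:m}.
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      (r_k + \cdots + r_m)
      \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

The environments $q$ are weighted by $2^{-\ell(q)}$ regardless of which actions are fed in, which is what made me read this as causal rather than evidential conditioning.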

Care to explain why that's EDT? A link to an existing explanation would be fine.

LessWrong supports math.

If the core problem is that power-seekers inevitably bubble up any social ladder, then take inspiration from politics and put a time limit on duties above a certain rank.

I interpreted it the standard way too initially, but then I had a hunch there was... I dunno, something fishy, and then indeed it turned out @StrivingForLegibility understood it in a completely different way, so somehow it wasn't clear! Magic.

Oh I see now,  just needs to work to pinpoint Nash equilibria, I did not make that connection.

But anyway, the reason I'm suspicious of your interpretation is not that your math is incorrect, but that it makes the OP's notation so unnatural. The unnatural things are:

1.  being context-dependent.
2.  not having its standard meaning.
3.  used implicitly instead of explicitly, when later it takes on a more important role to change decision theory.
4. Using  as condition without mentioning that already  if .

So I guess I will stay in doubt until the OP confirms "yep I meant that".