Yeah, I'm not convinced that the problem of induction is solvable at Teg 4. However, Universes with similar primitive laws and operations to ours will tend to produce intelligences with similar built-in priors. Thus, the right UTM to use is in a sense just the one that you happen to have in your possession.
I will propose an answer to the No Free Lunch theorems in an upcoming paper about Solomonoff induction. It is indeed subtle and important. In the interim, Schurz's book "Hume's Problem Solved" is a pretty good take. Schurz and Wolpert seem to argue against each other in their writing about NFL; I'll explain later why I think they're both right.
In practice, we only ever measure things to finite precision. To predict these observations, all we need is to be able to do these operations to any arbitrary specified precision. Runtime is not a consideration here; while time-constrained notions of entropy can also be useful, their theory becomes messier (e.g., the 2nd law won't hold in its current form).
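To make "any arbitrary specified precision" concrete, here's a throwaway sketch (the function and numbers are just my own example, nothing from the thread): Python's `decimal` module lets you request the same operation at whatever precision you specify, and the coarse answer is just a truncation-consistent approximation of the fine one.

```python
from decimal import Decimal, getcontext

def sqrt_to_precision(x: int, digits: int) -> Decimal:
    """Compute sqrt(x) to the requested number of significant digits."""
    getcontext().prec = digits
    return Decimal(x).sqrt()

# The same operation at two different requested precisions.
print(sqrt_to_precision(2, 5))   # 1.4142
print(sqrt_to_precision(2, 30))  # 1.41421356237309504880168872421
```

The point being: to predict a finite-precision observation, nothing more than this "dial up the precision on demand" ability is ever needed.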
I think roughly speaking the answer is: whichever UTM you've been given. I aim to give a more precise answer in an upcoming paper specifically about Solomonoff induction. The gist of it is that the idea of a "better UTM" U_2 is about as absurd as that of a UTM with hardcoded knowledge of the future: yes, such UTMs exist, but there is no way to pick one out without first looking at the data, and the best way to update on data is already given by Solomonoff induction.
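To illustrate what "updating on data" looks like under a length-weighted prior, here's a toy sketch entirely of my own construction: a trivially small hypothesis class (repeating bit patterns) stands in for "all programs", each weighted 2^(-description length), and prediction is just the posterior mixture over hypotheses consistent with what's been seen so far.

```python
from fractions import Fraction

# Toy hypothesis class standing in for "all programs": each hypothesis
# is a repeating bit pattern, with description length = len(pattern).
def hypotheses(max_len=8):
    from itertools import product
    for n in range(1, max_len + 1):
        for bits in product('01', repeat=n):
            yield ''.join(bits)

def predict_next(observed: str, max_len=8) -> Fraction:
    """Posterior probability that the next bit is '1', under a
    2^-length prior over repeating patterns consistent with observed."""
    total = Fraction(0)
    mass_on_one = Fraction(0)
    for pat in hypotheses(max_len):
        # Repeat the pattern far enough to cover the observations + 1 bit.
        stream = pat * (len(observed) // len(pat) + 2)
        if not stream.startswith(observed):
            continue  # hypothesis falsified by the data
        w = Fraction(1, 2 ** len(pat))
        total += w
        if stream[len(observed)] == '1':
            mass_on_one += w
    return mass_on_one / total

print(predict_next('010101'))  # small: the short pattern "01" dominates, predicting '0'
```

Of course, the real construction quantifies over all programs of a UTM rather than this toy class; the sketch only shows the shape of the update, not the theorem.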
"Reproducing in another Universe" is a tricky concept! I feel like simple beings that succeed in this manner should be thought of as memes from the perspective of Universes like A that instantiate them. Their presence in B is kind of irrelevant: maybe A instantiates the agents because of some trade in B, but A is free to place pretty much arbitrary weights on other Universes and the preferences therein. Given this ambiguity, we might as well remove one step and just say that A likes the B agent for some unstated arbitrary reason, without specific mention of trades. We could view Conway glider guns as a popular meme from the multiverse, but what use is that?
I'm reminded of Samuel Alexander's thought experiment, in which Earth has a one-way portal to Paradise. Perhaps most people would take this portal initially; however, from the perspective of Earth's ecosystem, entering this portal is equivalent to death. Therefore, after enough natural selection, we should expect that beings on Earth will treat the portal with the same degree of fear and avoidance as death, even if they can clearly see Paradise on the other side. Arguably, we already find ourselves in this situation with respect to our logical continuation in the blissful afterlife of many religions.
Ultimately, I feel that a multiverse trade only provides benefits in a Universe of our own imagination, which may be said to exist in some logical sense, but lacks an objective measure relative to all the other worlds that we could (or could not) have imagined. And in some of these worlds, the trade would instead be detrimental!
Hm, I think LDT must be fleshed out in more detail, to clarify which consequences follow from it and which generalizations are most natural. Arguing from selection seems like a powerful tool here; nonetheless, this seems like a difficult project. Suppose you live in a Universe where you often get cloned with mutations and made to play prisoner's dilemmas against your imperfect copies: how much correlation does the most successful version of LDT assign between the two competing policies? The full theory must deal with even more general scenarios.
I wrote another post specifically arguing for the selection-based view of rationality, and opening the floor to alternatives!
Yes I think both objections are considerably weaker when the probabilities come from the physics of our actual Universe. While it's still tricky to pin down the "correct" decision theory in this setting, quetzal_rainbow's comment here includes a paper that might contain the answer.
Thanks, I had been hoping to see an evolutionary analysis of decision theories, so I'll check out the paper sometime! Whichever decision theory turns out to be evolutionarily optimal, I imagine it still won't engage in multiverse trade; does the paper disagree?
Yes! I expect the temperatures won't quite be proportional to complexity, but we should be able to reuse the thermodynamic definition of temperature as a derivative of entropy, which we've now replaced by K-complexity.
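Spelling out the analogy I have in mind (the notation here is mine: E is whatever conserved resource plays the role of energy, and T_alg the resulting "algorithmic temperature"):

```latex
% Thermodynamic definition:        algorithmic analogue (S replaced by K):
\frac{1}{T} = \frac{\partial S}{\partial E}
\qquad\longrightarrow\qquad
\frac{1}{T_{\text{alg}}} = \frac{\partial K}{\partial E}
```

Whether the derivative on the right is even well-defined depends on how K varies with the resource, which is presumably where the "won't quite be proportional to complexity" caveat shows up.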