(Update 0)
I'm starting by checking that there's actually a counterexample here. We also found some numerical counterexamples which were qualitatively similar (i.e. approximately all of the weight was on one outcome), but we thought it was just numerical error. Kudos for busting out the sympy and actually checking it.
Looking at the math on that third-order issue... note that the whole expansion is multiplied by a prefactor which itself goes to zero in the relevant limit. So even if the third-order term is badly behaved, that prefactor will still go to zero, so the product will go to zero. So it's not obviously a fatal flaw, though at the very least some more careful accounting would be needed at that step to make sure everything converges.
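A toy version of the accounting concern, with a hypothetical small parameter $\epsilon$ standing in for the actual prefactor:

$$\epsilon \cdot \left(a_0 + a_1 \epsilon + a_2 \epsilon^2 + a_3 \epsilon^3 + \dots\right) \to 0 \quad \text{as } \epsilon \to 0,$$

but only if the bracketed sum stays bounded in that limit; that boundedness is exactly the "more careful accounting" needed.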
I plan to spend today digging into this, and will leave updates under this comment as I check things.
Yup! A Simple Toy Coherence Theorem walks through a toy version of that idea, and I do think it's a ripe area for someone to figure out more realistic theorems.
It's not that conscious/reflective. Respect is an emotion; my standards for it are more on the instinctive level. Which is not to say that there aren't consistent standards there, but they're not something I have easy direct control over or ready introspective access to.
Haven't thought about a name for this problem for a while, but I still don't have one.
Notably that post has a section arguing against roughly the sort of thing I'm arguing for:
Making the definition of what constitutes a low level language dependent on laws of physics is removing it from the realm of mathematics and philosophy. It is not a property of the language any more, but a property shared by the language and physical reality.
My response would be: yes, what-constitutes-a-low-level-language is obviously contingent on our physics and even on our engineering, not just on the language. I wouldn't even expect aliens in our own universe to have low-level programming languages very similar to our own. Our low-level languages today are extremely dependent on specific engineering choices made in the mid-20th century which are now very locked in by practice, but do not seem particularly fundamental or overdetermined, and would not be at all natural in universes with different physics or cultures with different hardware architecture. Aliens would look at our low-level languages and recognize them as low-level for our hardware, but not at all low-level for their hardware.
Analogously: choice of a good computing machine depends on the physics of one's universe.
I do like the guy's style of argumentation a lot, though.
I think that's roughly correct, but it is useful...
'The best UTM is the one that figures out the right answer the fastest' is true, but not very useful.
Another way to frame it would be: after one has figured out the laws of physics, a good-for-these-laws-of-physics Turing machine is useful for various other things, including thermodynamics. 'The best UTM is the one that figures out the right answer the fastest' isn't very useful for figuring out physics in the first place, but most of the value of understanding physics comes after it's figured out (as we can see from regular practice today).
Also, we can make partial updates along the way. If e.g. we learn that physics is probably local but haven't understood all of it yet, then we know that we probably want a local machine for our theory. If we e.g. learn that physics is causally acyclic, then we probably don't want a machine with access to atomic unbounded fixed-point solvers. Etc.
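A toy illustration of what "a local machine" could mean here (my example, not something from the paper): a computation whose update rule only ever reads a bounded neighborhood, e.g. a radius-1 cellular automaton.

```python
# Toy "local machine": each cell's next state depends only on its
# immediate neighbors, mirroring spatially-local physics.
from typing import Callable, List

def local_step(state: List[int], rule: Callable[[int, int, int], int]) -> List[int]:
    """Apply a radius-1 local update (with wraparound at the edges)."""
    n = len(state)
    return [rule(state[(i - 1) % n], state[i], state[(i + 1) % n]) for i in range(n)]

def rule110(left: int, center: int, right: int) -> int:
    # Lookup table for elementary cellular automaton rule 110,
    # indexed by the neighborhood read as a 3-bit number.
    return [0, 1, 1, 1, 0, 1, 1, 0][4 * left + 2 * center + right]

state = [0] * 16
state[8] = 1
for _ in range(4):
    state = local_step(state, rule110)
```

The point is just that "local" is a property of the machine class which directly mirrors a (conjectured) property of physics, so learning the latter narrows down the former.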
I think you might have misread something? The graphical statement of theorem 2 does not say that if the latent is determined by the observables, then the latent is a mediator; that would indeed be false in general. The implication runs in the other direction:
In particular, the theorem says that under some conditions the latent is determined by the observables. Determination is in the conclusion, not the premises. On the flip side, being a mediator is in the premises, not the conclusion.
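Schematically, the shape of the implication (just the shape, not the full statement of theorem 2):

$$\underbrace{\text{mediation} \;\wedge\; \text{(other premises)}}_{\text{premises}} \;\Longrightarrow\; \underbrace{\text{determination}}_{\text{conclusion}}$$

where "mediation" is the usual conditional-independence condition (writing $\Lambda$ for the latent and $X_1, X_2$ for the observables): $X_1 \perp X_2 \mid \Lambda$.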
What I have in mind re:boundedness...
If we need to use a Turing machine which is roughly equivalent to physics, then a natural next step is to drop the assumption that the machine in question is Turing complete. Just pick some class of machines which can efficiently simulate our physics, and which can be efficiently implemented in our physics. And then, one might hope, the sort of algorithmic thermodynamic theory the paper presents can carry over to that class of machines.
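One candidate way to cash out "roughly equivalent to physics" for such a class (a sketch; the right notion of overhead might differ): mutual simulation with bounded, say polynomial, overhead,

$$\mathcal{M} \preceq_{\mathrm{poly}} \mathcal{P} \quad\text{and}\quad \mathcal{P} \preceq_{\mathrm{poly}} \mathcal{M},$$

i.e. every machine in the class $\mathcal{M}$ can be implemented by our physics $\mathcal{P}$ with at most polynomial overhead, and every physical system of interest can be simulated by some machine in $\mathcal{M}$ with at most polynomial overhead.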
Probably there are some additional requirements for the machines, like some kind of composability, but I don't know exactly what they are.
This would also likely result in a direct mapping between limits on the machines (like e.g. limited time or memory) and corresponding limits on the physical systems to which the theory applies for those machines.
The resulting theory would probably read more like classical thermo, where we're doing thought experiments involving fairly arbitrary machines subject to just a few constraints, and surprisingly general theorems pop out.
(Update 3)
We're now pursuing two main threads here.
One thread is to simplify the counterexamples into something more intuitively understandable, mainly in hopes of getting an intuitive sense for whatever phenomenon is going on with the counterexamples. Then we'd build new theory specifically around that phenomenon.
The other thread is to go back to first principles and think about entirely different operationalizations of the things we're trying to do here, e.g. not using diagram $D_{KL}$'s as our core tool for approximation. The main hope there is that maybe $D_{KL}$ isn't really the right error metric for latents, but then we need to figure out a principled story which fully determines some other error metric.
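For reference, the kind of error metric in question: a diagram's $D_{KL}$ measures how far the actual joint distribution is from factoring the way the diagram says it should. E.g. for a mediation diagram $X_1 \leftarrow \Lambda \rightarrow X_2$, the error is (schematically)

$$\epsilon \;=\; D_{KL}\!\big(\,P[\Lambda, X_1, X_2]\;\big\|\;P[\Lambda]\,P[X_1\mid\Lambda]\,P[X_2\mid\Lambda]\,\big).$$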
Either way, we're now >80% that this is a fundamental and fatal flaw for a pretty big chunk of our theory.