Master's student in applied mathematics, funded by the Center on Long-Term Risk to investigate the cheating problem in safe Pareto improvements. Former Dovetail fellow with @Alex_Altair.
For this we need a mechanism such that the maintenance of the mechanism is itself a Schelling point. Specifically, the mechanism at time T+1 should reward agents for actions at time T that reinforce the mechanism itself (in particular, these actions are distributed across agents). The incentive raises the probability of the mechanism being actualized at T+1, which in turn raises the "weight" of the reward offered by the mechanism at T+1, creating a self-fulfilling prophecy.
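A minimal sketch of that self-fulfilling dynamic, under stylized assumptions of my own (agents contribute effort proportional to the expected reward weight, and the mechanism's survival probability increases with total contribution; all functional forms are hypothetical illustrations, not part of the original argument):

```python
# Toy fixed-point model of a self-sustaining mechanism (illustrative assumptions only).
# p: probability the mechanism is actualized at T+1.

def survival_prob(total_effort: float) -> float:
    """Hypothetical map from aggregate reinforcing actions to P(mechanism survives)."""
    return total_effort / (1.0 + total_effort)

def aggregate_effort(p: float, n_agents: int = 10, reward_weight: float = 2.0) -> float:
    """Each agent's effort scales with the expected (probability-weighted) reward."""
    return n_agents * reward_weight * p

p = 0.5  # initial belief that the mechanism will be actualized
for _ in range(50):
    p = survival_prob(aggregate_effort(p))

# Iterating the belief-to-effort-to-survival loop lands on a high-p equilibrium;
# p = 0 is also a fixed point, which is why the mechanism is a Schelling point rather than inevitable.
print(f"fixed point: P(mechanism at T+1) ≈ {p:.3f}")
```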
"Merging" forces parallelism back into sequential structures, which is why most blockchains are slow. You could make it faster by bundling a lot of actions together, but you need to make sure all actions are actually observable & checked by most of the agents (aka the data availability problem)
For translatability guarantees, we also want an answer to why agents have distinct concepts for different things, and to what criteria carve up the world model into different concepts. My sketch of an answer is that different hypotheses/agents will make use of different pieces of information under different scenarios, and having distinct reference handles to different types of information lets the hypotheses/agents access only the minimal amount of information they need.
For environment structure, we'd like an answer to what it means for there to be an object that persists through time, or for there to be two instances of the same object. One way this could work is to look at an object's probabilistic predictions over its Markov blanket, and require some sort of similarity in those predictions when we "transport" the object across spacetime.
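A rough operationalization, under my own illustrative assumption that an object at a given spacetime location is summarized by a predictive distribution over its Markov-blanket states, and that "same object" means the transported distributions stay close in some divergence:

```python
import numpy as np

def js_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """Jensen-Shannon divergence between two predictive distributions over blanket states."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(np.where(a > 0, a * np.log(a / b), 0.0)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Predictive distributions over (discretized) Markov-blanket states of a candidate object
# at two spacetime locations; the numbers are made up for illustration.
pred_at_t0 = np.array([0.70, 0.20, 0.10])
pred_at_t1 = np.array([0.65, 0.25, 0.10])

# Hypothetical "persistence" criterion: the transported predictions stay within a tolerance.
print("same object?", js_divergence(pred_at_t0, pred_at_t1) < 0.05)
```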
I'm less optimistic about the mind structure foundation because the interfaces that are most natural to look at might not correspond to what we call "human concepts", especially when the latter require a level of flexibility not supported by the former. For instance, human concepts have different modularity structures relative to each other depending on context (also known as shifting structures), which basically rules out any simple correspondence with interfaces that have a fixed computational structure over time. How we want to decompose a world model is a degree of freedom additional to the world model itself, and it has to come from other ontological foundations.
Seems like the main additional source of complexity is that each interface has its own local constraint, and the local constraints are coupled with each other (though lower-dimensional than the parameters themselves); whereas regular statmech usually has subsystems sharing the same global constraints (different parts of a room of ideal gas are independent given the same pressure/temperature, etc.).
To recover the regular statmech picture, suppose that the local constraints share some redundant information with each other: ideally we'd like to isolate that redundant/shared information into a global constraint that all interfaces have access to, and we'd want the interfaces to be independent given the global constraint. For that we need something like relational completeness, where indexical information is encoded within the interfaces themselves, while the global constraint is shared across interfaces.
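A sketch of the contrast in maxent form (the notation here is mine, just to pin down the two cases):

```latex
% Regular statmech: one global constraint shared by all subsystems x_1,\dots,x_n.
% Maxent with E[\sum_i f(x_i)] fixed gives a product form, i.e. independence given \lambda:
P(x_1,\dots,x_n) \;\propto\; \exp\!\Big(-\lambda \sum_i f(x_i)\Big) \;=\; \prod_i e^{-\lambda f(x_i)} .

% Interfaces with coupled local constraints: each interface i has its own constraint f_i
% and multiplier \lambda_i, and the f_i may share variables, so no factorization in general:
P(x) \;\propto\; \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big) .

% Hoped-for reduction: split each local constraint into a shared (redundant) part g and an
% indexical remainder h_i depending only on interface i's own variables,
%   f_i(x) \approx g(x_{\mathrm{shared}}) + h_i(x_i),
% so that interfaces become independent given the global constraint:
P(x) \;\propto\; e^{-\bar\lambda\, g(x_{\mathrm{shared}})} \prod_i e^{-\lambda_i\, h_i(x_i)},
\qquad \bar\lambda = \textstyle\sum_i \lambda_i .
```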
IIUC there are two scenarios to be distinguished:
One is that the die has bias p unknown to you (you have some prior over p), and you use i.i.d. flips to estimate the bias as usual & get the maxent distribution for a new draw. The draws are independent given p but not independent given your prior, so everything works out.
The other is that the die is literally i.i.d. over your prior. In this case everything in your argument goes through: whatever bias/constraint you happen to estimate from your outcome sequence doesn't say anything about a new i.i.d. draw, because the two are uncorrelated; the new draw is just another sample from your prior.
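A quick simulation of the two scenarios (the Beta prior and Bernoulli flips are my choice, just for concreteness): in the first, the observed frequency is informative about a new draw; in the second, it isn't.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_flips = 20_000, 20

# Scenario 1: bias p is drawn once per sequence from the prior; flips are i.i.d. given p,
# hence exchangeable (correlated) under the prior.
p_fixed = rng.beta(2, 2, size=n_trials)
seq1 = rng.random((n_trials, n_flips)) < p_fixed[:, None]
new1 = rng.random(n_trials) < p_fixed

# Scenario 2: the process is literally i.i.d. over the prior -- every flip,
# including the new one, is an independent draw from the marginal.
seq2 = rng.random((n_trials, n_flips)) < rng.beta(2, 2, size=(n_trials, n_flips))
new2 = rng.random(n_trials) < rng.beta(2, 2, size=n_trials)

freq1, freq2 = seq1.mean(axis=1), seq2.mean(axis=1)
print("corr(observed freq, new draw) | fixed p:     ",
      np.corrcoef(freq1, new1.astype(float))[0, 1])   # clearly positive
print("corr(observed freq, new draw) | i.i.d. prior: ",
      np.corrcoef(freq2, new2.astype(float))[0, 1])   # approximately zero
```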
I think steering is basically learning, backwards, and maybe flipped sideways. In learning, you build up mutual information between yourself and the world; in steering, you spend that mutual information. You can have learning without steering---but not the other way around---because of the way time works.
Alternatively: in learning, your brain can start out in any given configuration, and it will end up in the same (small) set of final configurations (ones that reflect the world); in steering, the world can start out in any given configuration, and it will end up in the same (small) set of target configurations.
It seems like some amount of steering without learning is possible (open-loop control): you can reduce entropy in a subsystem while increasing entropy elsewhere to maintain information conservation.
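One standard way to make this bookkeeping precise is a Touchette-Lloyd-style control bound relating closed-loop and open-loop entropy reduction to the information the controller acquires; the formulation below is my paraphrase, so treat the exact form as an assumption.

```latex
% X: the (sub)system being steered; C: the controller's measurement of it.
% Open-loop control (no conditioning on X, i.e. no learning) can reduce the system's
% entropy by at most \Delta H_{\mathrm{open}} (e.g. by exporting entropy elsewhere).
% Closed-loop control can do better, but only by the information acquired:
\Delta H_{\mathrm{closed}} \;\le\; \Delta H_{\mathrm{open}} \;+\; I(X;C).
% So the extra steering you can buy is bounded by the mutual information built up between
% controller and world ("steering spends mutual information"), while some steering with
% I(X;C) = 0 remains possible, matching the open-loop observation above.
```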
Nice, some connections with the question of why maximum entropy distributions are so ubiquitous:
So the system converges to the maxent invariant distribution subject to the constraints, which is why Langevin dynamics converges to the Boltzmann distribution, and you can estimate the equilibrium average energy by following the particle around.
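A small numerical check of that last claim (overdamped Langevin in a quadratic potential, chosen just for illustration): the time average of the energy along a single trajectory should approach the Boltzmann ensemble average, here kT/2 = 0.5 at β = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

beta, dt, n_steps = 1.0, 1e-2, 200_000
U = lambda x: 0.5 * x**2          # quadratic potential
grad_U = lambda x: x

# Overdamped Langevin dynamics: dx = -∇U(x) dt + sqrt(2/β) dW.
# Its invariant distribution is the Boltzmann distribution ∝ exp(-β U(x)).
x = 0.0
energies = np.empty(n_steps)
noise = rng.normal(0.0, np.sqrt(2.0 * dt / beta), size=n_steps)
for t in range(n_steps):
    x += -grad_U(x) * dt + noise[t]
    energies[t] = U(x)

# "Follow the particle around": the time average estimates the equilibrium energy.
print("time-averaged energy:", energies.mean())   # ≈ 0.5 = kT/2 for this potential at β = 1
```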
In particular, we often use maxent to derive the prior itself (= the invariant measure), and when our system is out of equilibrium, we can then maximize relative entropy w.r.t. our maxent prior to update our distribution.
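Spelling out the two-step procedure (my notation):

```latex
% Step 1: derive the prior / invariant measure by maxent under the equilibrium constraints:
q \;=\; \arg\max_{p}\; -\!\int p \log p
  \quad \text{s.t.} \quad \mathbb{E}_p[f_{\mathrm{eq}}] = F_{\mathrm{eq}}, \;\; \textstyle\int p = 1 .

% Step 2: out of equilibrium, maximize entropy relative to that prior
% (equivalently, minimize the KL divergence to q) under the new constraints:
p^{*} \;=\; \arg\max_{p}\; -\!\int p \log \frac{p}{q}
  \quad \text{s.t.} \quad \mathbb{E}_p[f_{\mathrm{neq}}] = F_{\mathrm{neq}}, \;\; \textstyle\int p = 1 .
```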
Congratulations!
I would guess the issue with KL relates to the fact that a bound on $D_{\mathrm{KL}}(P \,\|\, Q) = \mathbb{E}_P[\log P/Q]$ permits situations where $P(x)$ is small but $Q(x)$ is large (as we take the expectation under $P$), whereas JS penalizes both ways.
In particular, in the original theorem on resampling using KL divergence, the assumption bounds the KL w.r.t. the joint distribution $P$, so there may be situations where the resampled probability $P'(x)$ is large but $P(x)$ is small. But the intended conclusion bounds the KL under the resampled distribution $P'$, so the error on those values would be weighted much more heavily under $P'$ than under $P$. Since we're taking the expectation under $P'$ for the conclusion, the bound on the resampling error taken under $P$ becomes insufficient.
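A toy numerical illustration of the asymmetry (not the resampling setup itself, just the KL-vs-JS behaviour): a bound on KL(P||Q) barely penalizes regions where P is small but Q is large, because those regions get almost no weight under P, while the reverse KL and JS both notice.

```python
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))

def js(p: np.ndarray, q: np.ndarray) -> float:
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# P puts almost no mass on the second outcome; Q puts a lot of mass there.
P = np.array([0.9999, 0.0001])
Q = np.array([0.5, 0.5])

print("KL(P||Q):", kl(P, Q))  # ≈ 0.69: the P-small/Q-large region is nearly invisible under E_P
print("KL(Q||P):", kl(Q, P))  # ≈ 3.91: the same mismatch is heavily penalized under E_Q
print("JS(P,Q): ", js(P, Q))  # penalizes the mismatch from both directions
```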
Would this still give us guarantees on the conditional distribution $P[X \mid \Lambda]$?
E.g. Mediation:
$D_{\mathrm{KL}}\big(P[X_1,X_2,\Lambda] \,\big\|\, P[\Lambda]\,P[X_1\mid\Lambda]\,P[X_2\mid\Lambda]\big)$ is really about the expected error conditional on individual values of $\Lambda$, & it seems like there are distributions with high mediation error but low error when the latent is marginalized out inside the KL, which could be load-bearing when the agents cast out predictions on observables after updating on their observations.
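The decomposition behind "expected error conditional on individual values of $\Lambda$" (a standard identity; the natural-latents notation is my guess at the intended setup):

```latex
% Mediation error as an expectation over latent values:
D_{\mathrm{KL}}\!\big(P[X_1,X_2,\Lambda]\,\big\|\,P[\Lambda]\,P[X_1\mid\Lambda]\,P[X_2\mid\Lambda]\big)
  \;=\; \mathbb{E}_{\lambda \sim P[\Lambda]}\Big[
        D_{\mathrm{KL}}\!\big(P[X_1,X_2\mid\lambda]\,\big\|\,P[X_1\mid\lambda]\,P[X_2\mid\lambda]\big)\Big].

% The "marginalized" error instead compares the observable marginals directly:
D_{\mathrm{KL}}\!\Big(P[X_1,X_2]\,\Big\|\,\textstyle\sum_{\lambda} P[\lambda]\,P[X_1\mid\lambda]\,P[X_2\mid\lambda]\Big).
```

For instance, with $X_1, X_2$ independent uniform bits and $\Lambda = X_1 \oplus X_2$, the marginalized error is zero while the mediation error is a full bit, so the two notions can come apart sharply.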
The current theory is based on classical Hamiltonian mechanics, but I think the theorems apply whenever you have a Markovian coarse-graining. Fermion doubling is a problem for spacetime discretization in the quantum case, so the coarse-graining might need to be different. (E.g. coarse-grain the entire Hilbert space, which might have locality issues, but that's probably not load-bearing for algorithmic thermodynamics.)
On the outside view, quantum reduces to classical (which admits a Markovian coarse-graining) in the correspondence limit, so there must be some coarse-graining that works.
Great to see the concreteness of this example; some thoughts on the candidate properties: