LESSWRONG

Daniel C

Master's student in applied mathematics, funded by the Center on Long-Term Risk to investigate the cheating problem in safe Pareto improvements. Former Dovetail fellow with @Alex_Altair.

Comments

Three Kinds Of Ontological Foundations
Daniel C · 4d

For translatability guarantees, we also want an answer to why agents have distinct concepts for different things, and to the criteria for carving up the world model into different concepts. My sketch of an answer is that different hypotheses/agents will make use of different pieces of information under different scenarios, and having distinct reference handles to different types of information allows the hypotheses/agents to access the minimal amount of information they need.


For environment structure, we'd like an answer to what it means for there to be an object that persists through time, or for there to be two instances of the same object. One way this could work is to look at probabilistic predictions of an object over its Markov blanket, and require some sort of similarity in those predictions when we "transport" the object across spacetime.


I'm less optimistic about the mind structure foundation because the interfaces that are most natural to look at might not correspond to what we call "human concepts", especially when the latter require a level of flexibility not supported by the former. For instance, human concepts have different modularity structures relative to each other depending on context (also known as shifting structures), which basically rules out any simple correspondence with interfaces that have a fixed computational structure over time. How we want to decompose a world model is a degree of freedom additional to the world model itself, and it has to come from the other ontological foundations.

Toward Statistical Mechanics Of Interfaces Under Selection Pressure
Daniel C · 7d

Seems like the main additional source of complexity is that each interface has its own local constraint, and the local constraints are coupled with each other (though lower-dimensional than the parameters themselves); whereas regular statmech usually has subsystems sharing the same global constraints (different parts of a room of ideal gas are independent given the same pressure/temperature, etc.).

To recover the regular statmech picture, suppose that the local constraints have some shared/redundant information with each other: ideally we'd like to isolate that redundant/shared information into a global constraint that all interfaces have access to, and we'd want the interfaces to be independent given the global constraint. For that we need something like relational completeness, where indexical information is encoded within the interfaces themselves, while the global constraint is shared across interfaces (see the sketch below).
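
As a minimal sketch of that target factorization (my notation, not from the post): let $\theta_i$ be the parameters visible to interface $i$, $f_i$ its local constraint, and $g$ the global constraint distilled from the redundant part of the $f_i$.

```latex
% Coupled picture: maxent subject to the local constraints ties the
% interfaces together, because each f_i touches overlapping parameters.
p(\theta_1,\dots,\theta_n) \propto \exp\Big(-\sum_i \lambda_i\, f_i(\theta_1,\dots,\theta_n)\Big)

% Target "regular statmech" picture: once the redundant information in the
% f_i is isolated into a global constraint g, the interfaces decouple.
p(\theta_1,\dots,\theta_n \mid g) = \prod_i p(\theta_i \mid g)
```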

The Zen Of Maxent As A Generalization Of Bayes Updates
Daniel C · 7d

IIUC there are two scenarios to be distinguished: 

One is that the die has bias p unknown to you (you have some prior over p), and you use i.i.d. rolls to estimate the bias as usual and get the maxent distribution for a new draw. The draws are independent given p but not independent under your prior, so everything works out.

The other is that the die is literally i.i.d. under your prior. In this case everything in your argument goes through: whatever bias/constraint you happen to estimate from your outcome sequence says nothing about a new i.i.d. draw, because they're uncorrelated; the new draw is just another sample from your prior.
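
A minimal numerical sketch of the contrast (my illustration; binary draws and a Beta(2, 2) prior stand in for "some prior over p"):

```python
# Sketch: with a latent bias p, draws are correlated marginally but
# independent given p; under a literally i.i.d. prior there is no latent
# to learn, so one draw carries no information about the next.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Scenario 1: p ~ Beta(2, 2), then two draws given the same p.
p = rng.beta(2.0, 2.0, size=n)
x1 = (rng.random(n) < p).astype(float)
x2 = (rng.random(n) < p).astype(float)
print("cov with prior over p:", np.cov(x1, x2)[0, 1])   # ~0.05, clearly > 0

# Scenario 2: draws i.i.d. under the prior itself (no shared latent).
y1 = (rng.random(n) < 0.5).astype(float)
y2 = (rng.random(n) < 0.5).astype(float)
print("cov under i.i.d. prior:", np.cov(y1, y2)[0, 1])  # ~0
```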

Jemist's Shortform
Daniel C · 7d

I think steering is basically learning, backwards, and maybe flipped sideways. In learning, you build up mutual information between yourself and the world; in steering, you spend that mutual information. You can have learning without steering (but not the other way around) because of the way time works.


Alternatively: for learning, your brain can start out in any given configuration and will end up in the same (small) set of final configurations (ones that reflect the world); for steering, the world can start out in any given configuration and will end up in the same set of target configurations.

It seems like some amount of steering without learning is possible (open-loop control): you can reduce entropy in a subsystem while increasing entropy elsewhere to maintain information conservation. (Toy sketch below.)
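
As a toy sketch of that (my construction): a fixed swap operation, chosen without ever observing the world, drives the subsystem to a known state while dumping its entropy into a waste register.

```python
# Open-loop steering: a fixed swap (no measurement, hence no learning)
# sends the subsystem to a definite state; total entropy is conserved
# because the randomness is moved into the waste register, not destroyed.
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(samples):
    """Empirical Shannon entropy of a sample array, in bits."""
    _, counts = np.unique(samples, return_counts=True)
    probs = counts / counts.sum()
    return float(-(probs * np.log2(probs)).sum())

world = rng.integers(0, 8, size=100_000)  # subsystem: ~3 bits of entropy
waste = np.zeros_like(world)              # prepared blank register

world, waste = waste, world               # the fixed, feedback-free "controller"

print("subsystem entropy:", entropy_bits(world))  # ~0 bits: steered
print("waste entropy:    ", entropy_bits(waste))  # ~3 bits: conserved
```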

The Zen Of Maxent As A Generalization Of Bayes Updates
Daniel C · 10d

Nice, some connections with "Why are maximum entropy distributions so ubiquitous?":

  • If your system is ergodic, time average = ensemble average. Hence expected constraints can be estimated by following your dynamical system over time.
  • If your system follows the second law, then entropy increases subject to the constraints.

So the system converges to the maxent invariant distribution subject to the constraints, which is why Langevin dynamics converges to the Boltzmann distribution, and why you can estimate equilibrium energy by following the particle around (sketch below).
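
As a minimal check (my sketch, not from the thread): overdamped Langevin dynamics in a quadratic potential $U(x)=x^2/2$ should equilibrate to the Boltzmann distribution $p(x)\propto e^{-U(x)/T}$, i.e. a Gaussian with variance $T$, and the time average should match that ensemble average.

```python
# Euler-Maruyama simulation of overdamped Langevin dynamics
#   dx = -U'(x) dt + sqrt(2 T dt) * xi,   with U(x) = x^2 / 2.
# Ergodicity: the time average of x^2 estimates the Boltzmann variance T.
import numpy as np

rng = np.random.default_rng(0)
T, dt, steps = 1.0, 1e-2, 200_000

x = 0.0
xs = np.empty(steps)
for i in range(steps):
    x += -x * dt + np.sqrt(2 * T * dt) * rng.standard_normal()
    xs[i] = x

xs = xs[steps // 10:]  # drop burn-in before time-averaging
print("time average of x^2:", (xs ** 2).mean())  # ~1.0
print("Boltzmann variance :", T)
```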

In particular, we often use maxent to derive the prior itself (= the invariant measure), and when our system is out of equilibrium, we can then maximize relative entropy w.r.t. our maxent prior to update our distribution.

Resampling Conserves Redundancy & Mediation (Approximately) Under the Jensen-Shannon Divergence
Daniel C · 13d

Congratulations!

 

I would guess the issue with KL relates to the fact that a bound on $D_{KL}(P\|Q)$ permits situations where $P(X{=}x)$ is small but $Q(X{=}x)$ is large (since we take the expectation under $P$), whereas JS penalizes mismatch in both directions.

 

In particular, in the original theorem on resampling using KL divergence, the assumption bounds KL w.r.t. the joint distribution $P(X,\Lambda)$, so there may be situations where the resampled probability $Q(X{=}x,\Lambda{=}\lambda)=P(X{=}x)\,P(\Lambda{=}\lambda\mid X_2{=}x_2)$ is large but $P(X{=}x,\Lambda{=}\lambda)$ is small. But the intended conclusion bounds the KL under the resampled distribution $Q$, so the error on the values $(X{=}x,\Lambda{=}\lambda)$ would be weighted much more under $Q$ than under $P$. Since we're taking the expectation under $Q$ for the conclusion, a bound on the resampling error under $P$ becomes insufficient.
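
A tiny numeric check of that asymmetry (my example, not from the post):

```python
# P puts little mass where Q puts a lot, so D_KL(P||Q) stays moderate
# while D_KL(Q||P) blows up; Jensen-Shannon penalizes both directions.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    m = (p + q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

P = np.array([0.999, 0.001])  # rarely visits the second outcome
Q = np.array([0.5, 0.5])      # puts half its mass there

print("KL(P||Q):", kl(P, Q))  # ~0.69
print("KL(Q||P):", kl(Q, P))  # ~2.76, much larger
print("JS(P,Q): ", js(P, Q))  # symmetric, bounded by ln 2
```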

Resampling Conserves Redundancy (Approximately)
Daniel C · 21d

Would this still give us guarantees on the conditional distribution $P(X\mid\Lambda)$?

E.g. mediation:

$$\begin{aligned}
&D_{KL}\big(P(X_1,X_2,\Lambda)\,\big\|\,P(X_1\mid\Lambda)\,P(X_2\mid\Lambda)\,P(\Lambda)\big)\\
&\quad= D_{KL}\big(P(X_1,X_2\mid\Lambda)\,P(\Lambda)\,\big\|\,P(X_1\mid\Lambda)\,P(X_2\mid\Lambda)\,P(\Lambda)\big)\\
&\quad= \mathbb{E}_{P(\Lambda)}\Big[D_{KL}\big(P(X_1,X_2\mid\Lambda)\,\big\|\,P(X_1\mid\Lambda)\,P(X_2\mid\Lambda)\big)\Big]
\end{aligned}$$

is really about the expected error conditional on individual values of $\Lambda$, and it seems like there are distributions with high mediation error but low error when the latent is marginalized inside the $D_{KL}$, which could be load-bearing when the agents make predictions on observables after updating on $\Lambda$.

johnswentworth's Shortform
Daniel C · 1mo

The current theory is based on classical Hamiltonian mechanics, but I think the theorems apply whenever you have a Markovian coarse-graining. Fermion doubling is a problem for spacetime discretization in the quantum case, so the coarse-graining might need to be different (e.g. coarse-grain the entire Hilbert space, which might have locality issues, though those are probably not load-bearing for algorithmic thermodynamics).

On the outside view, quantum reduces to classical (which admits a Markovian coarse-graining) in the correspondence limit, so there must be some coarse-graining that works.

johnswentworth's Shortform
Daniel C · 1mo

I also talked to Aram recently, and he's optimistic that there's an algorithmic version of the generalized heat engine where the hot vs. cold pools correspond to high vs. low K-complexity strings. I'm quite interested in doing follow-up work on that.

johnswentworth's Shortform
Daniel C · 1mo

The continuous state space is coarse-grained into discrete cells where the dynamics are approximately Markovian (the theory is currently classical), and the "laws of physics" probably refers to the stochastic matrix that specifies the transition probabilities of the discrete cells (otherwise we could probably deal with infinite precision through limit computability).

Posts

Sleeping Experts in the (reflective) Solomonoff Prior · 16 karma · Ω · 2mo · 0 comments
Towards building blocks of ontologies · 29 karma · 9mo · 0 comments
Can subjunctive dependence emerge from a simplicity prior? [Question] · 11 karma · 1y · 0 comments
Jonothan Gorard: The territory is isomorphic to an equivalence class of its maps · 20 karma · 1y · 18 comments
What program structures enable efficient induction? · 23 karma · 1y · 5 comments
My decomposition of the alignment problem · 22 karma · 1y · 22 comments