Matt Dellago

Comments, sorted by newest

Matthias Dellago's Shortform
Matt Dellago · 2d

Maximally coherent agents are indistinguishable from point particles. They have no internal degrees of freedom; one cannot probe their internal structure from the outside.

Epistemic Status: Unhinged

Coherence of Caches and Agents
Matt Dellago · 10d

Excellent! Thank you!

Matthias Dellago's Shortform
Matt Dellago · 11d

Coherence as Purpose

Epistemic Status: Riffing

We know coherence when we see it. A craftsman working versus someone constantly fixing his previous mistakes. A functional organization versus bureaucratic churn. A healthy body versus one fighting itself. War, internal conflict, rework: these are wasteful. We respect people who act decisively, societies that build without tearing down, systems that run clean.

This intuition points somewhere real. In some sense, maximizing/expanding coherence is what the universe does: cutting friction, eliminating waste, building systems that don't fight themselves. Not from external design, but because coherent systems expand until they can't. Each pocket of coherence is the universe organizing itself better. The point is that coherence captures "good": low friction, low conflict, no self-sabotage.

I propose that this is measurable. Coherence could be quantified as thermodynamic efficiency. Pick a boundary and time window, track energy in. The coherent part becomes exported work, heat above ambient, or durable stores (raised water, charged batteries, separated materials). The rest is loss: waste heat, rework, reversals. Systems can expand until efficiency stops generating surplus. When new coordination tools raise that limit, growth resumes. Just observable flows, no goals needed.
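A minimal sketch of the bookkeeping I have in mind (the flow names and numbers are made up, purely to illustrate efficiency as useful output over energy in):

```python
# Toy illustration of coherence as thermodynamic efficiency across one boundary
# over one time window. All quantities in joules; the values are invented.

def coherence(energy_in, work_out, stored, heat_above_ambient):
    """Fraction of incoming energy ending up as exported work, durable stores,
    or heat above ambient; the remainder counts as loss (waste heat, rework, reversals)."""
    useful = work_out + stored + heat_above_ambient
    assert useful <= energy_in, "useful output cannot exceed energy in"
    return useful / energy_in

# e.g. 1000 J in, 300 J exported as work, 150 J banked (a charged battery),
# 50 J of heat above ambient -> coherence 0.5; the other 500 J is loss.
print(coherence(1000.0, 300.0, 150.0, 50.0))  # 0.5
```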

An interesting coincidence: maximizing thermodynamic efficiency (coherence) maximally delays the heat death of a system. Higher efficiency means slower entropy increase.

I am very interested in hearing counterexamples: coherent systems that are intuitively repellent!

Coherence of Caches and Agents
Matt Dellago · 11d

It would also be quite interesting to look at how coherence scales with system size, and if/when this imposes a limit on growth.

Coherence of Caches and Agents
Matt Dellago · 11d

Great post! A thought: we seem able to intuitively differentiate coherent and incoherent behavior even without knowing the terminal goal. Humans, for instance, visibly differ in how "coherent" they are, which we can infer from local observations alone. My conjecture is that coherence might overlap substantially with thermodynamic efficiency. If behavior is optimal for some terminal value, it must satisfy local Bellman-type consistency (no value loops). I suspect this has a physical parallel: where those local constraints hold tightly, you should see few avoidable losses (high Carnot efficiency); where they fail, you should find loss hotspots (rework, backtracking, waste heat). The local inconsistencies you describe might correspond directly to local inefficiencies, regions of high irreversibility.
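To spell out the consistency condition I mean (schematically, with \gamma a discount factor and s' the successor state; the thermodynamic reading is a conjecture, not a derivation):

V^*(s) = \max_a \left[ r(s,a) + \gamma \, V^*(s') \right]

A "value loop" would be a cycle of states whose claimed values cannot all satisfy this equation at once; the conjecture is that such violations show up physically as pockets of avoidable irreversibility.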

Matthias Dellago's Shortform
Matt Dellago · 24d

Is there an anthropic reason or computational (Solomonoff-pilled) argument for why we would expect the computational/causal graph of the universe to be this local (sparse)? Or at least to appear local to a first approximation (Bell's inequality).

This seems like quite a special property. I suspect that either

  • it is not as rare in e.g. the Solomonoff prior as we might first intuit, or
  • we should expect this for anthropic reasons, e.g. it is really hard to develop intelligence / make predictions in nonlocal universes.
Matthias Dellago's Shortform
Matt Dellago · 2mo

The Red Queen’s Race in Weight Space

In evolution we can tell a story that genes are selected not only for their function but also for how easily modifiable they are. For example, a generic antibiotic gene is much more useful than one locked into a single target and far, in edit-distance terms, from any other useful variant.

Why would we expect the generic gene to be more common? There is selection pressure on having modifiable genes because environments are constantly shifting (the Red Queen hypothesis). Genes are modules with evolvability baked in by past selection.

Can we make a similar argument for circuits/features/modes in NNs? Obviously it is better to have a more general circuit, but can we also argue that “multitool circuits” are not only better at generalising but also more likely to be found?

SGD does not optimise loss but rather something like free energy, taking degeneracy (multiplicity) into account with some effective temperature.
But evolvability seems distinct from degeneracy. Degeneracy is a property of a single loss landscape, while evolvability is a claim about distribution shift. And the claim is not “I have low loss in the new distribution” but rather “I am very close to a low-loss solution of the new distribution.”
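Schematically, with S(w) the log-multiplicity (degeneracy) of parameters implementing a solution and T an effective temperature set by the noise scale (rough notation on my part, not a precise statement):

F(w) \approx L(w) - T \, S(w)

so degenerate solutions are favoured even at slightly higher loss.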

Degeneracy in ML ≈ mutational robustness in biology, which is straightforward, but that is not what I am pointing at here. Evolvability is closer to out-of-distribution adaptivity: the ability to move quickly into a new optimum with small changes.

Are there experiments where a model is trained on a shifting distribution?

Is the shifting distribution itself relevant, or can this just as well be modeled as a mixture of the distributions, so that what we think of as OOD is actually within the mixture distribution? In that case degeneracy is all you need.
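The simplest version of the experiment I can imagine (a toy, with the setup and names entirely made up): a linear model trained with SGD while the target alternates between two tasks, tracking how quickly it recovers low loss after each switch.

```python
# Hypothetical toy experiment: SGD on a linear model while the data distribution
# alternates between two regression tasks; record how many steps it takes to
# recover low loss after each switch. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dim, lr, steps_per_phase, n_phases = 10, 0.05, 500, 6
tasks = [rng.normal(size=dim), rng.normal(size=dim)]  # two "environments"
eval_x = rng.normal(size=(256, dim))                  # fixed evaluation batch

w = np.zeros(dim)
recovery_steps = []

for phase in range(n_phases):
    target = tasks[phase % 2]
    recovered_at = None
    for t in range(steps_per_phase):
        x = rng.normal(size=dim)
        err = w @ x - target @ x
        w -= lr * err * x  # SGD step on squared error
        if recovered_at is None:
            eval_loss = np.mean((eval_x @ (w - target)) ** 2)
            if eval_loss < 1e-3:
                recovered_at = t
    recovery_steps.append(recovered_at)

# Shrinking values would look like "evolvability"; roughly constant values
# would look like mere re-fitting from scratch each time.
print(recovery_steps)
```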

Related ideas: cryptographic one-way functions (examples of unevolvable designs), out-of-distribution generalisation, mode connectivity.

The Coding Theorem — A Link between Complexity and Probability
Matt Dellago · 3mo

Excellent! Great to have a cleanly formulated article to point people to!

Thermodynamic entropy = Kolmogorov complexity
Matt Dellago · 3mo

Good point! My intuition was that the Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound) limits the amount of information in a volume (or, more precisely, the information enclosed by a surface of given area). Therefore the number of states in a finite volume is also finite.
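For reference, the bound is (with R the radius of an enclosing sphere and E the total energy):

S \le \frac{2 \pi k R E}{\hbar c}

or in bits, I \le 2\pi R E / (\hbar c \ln 2), which is finite for any finite R and E, hence the finite state count.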

I must add: since writing this comment, a man called George pointed out to me that, when modeling the universe as a computation, one must take care not to accidentally derive ontological claims from it.

So today I would take more of a 'whatever works, works' attitude: UTMs and DFAs are both just models, and neither is likely to be ontologically true.

Mirror Organisms Are Not Immune to Predation
Matt Dellago · 5mo

Wow, thank you for the kind and thorough reply! Obviously there is much more to this; I'll have a look at the report.
