Consider any property of any box of matter. Or consider any signal generated by a finite computer. Assume physics has true RNG. I claim that eventually either

  • this signal will stop changing, or
  • the system will reach a prior state and the signal will oscillate, or
  • the system will reach irrecoverably high entropy and the signal will be noise.

You won't see, e.g., a never-ending Mandelbrot zoom, because the computer will run out of bits.

Steady state is just oscillation with a period of zero, so really there are only two possible long-term outcomes.
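The deterministic half of this is just the pigeonhole principle: a system with finitely many states must eventually revisit one, and from there it repeats forever (true randomness supplies the noise branch). A toy sketch, with a made-up update rule:

```python
# Any deterministic map on a finite state space is eventually periodic:
# iterate until a state repeats, then read off the cycle length.
def step(state: int) -> int:
    return (state * state + 1) % 1000    # arbitrary map on 1000 states

seen = {}
state, t = 42, 0
while state not in seen:
    seen[state] = t
    state, t = step(state), t + 1
print(f"enters a cycle of length {t - seen[state]} after {seen[state]} steps")
```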


Is this of any use? I already know that my shoe will stay a shoe and that the radio plays static if nobody is broadcasting. However, at a very detailed level, McDonald's is neither steady state nor noise. And there aren't really any isolated finite boxes of matter, unless you take the whole lightcone to be one.

Perhaps a weaker & less formal version is of some use:

Consider any property of any person, organization, music genre, dog, rock, friendship, computer network, star, or body of water. Eventually that property will either oscillate, stop changing, or become noise.

So this gives you a way to categorize your goals and the things you care about. You can ask yourself, e.g., "This job is alright now. In one year, will this job max out (be great), min out (suck), cycle, or keep meaningfully changing (like going further into the Mandelbrot zoom)?" Maybe this is somehow useful.


Or the short version you've heard before:

There are no constant nonzero derivatives in nature.


What's wrong here? What's right? Is this a nothingburger or is it useful? Who said all this already?

10 comments

There’s a saying about investing which somewhat applies here. “The market can stay irrational longer than you can stay solvent”. Another is “in the long run, we’re all dead.”

Nothing is forever, but many things can outlast your observations. Eventually everything is steady state, fine. But there can be a LOT of signal before then.

Note that your computer doesn’t run out of bits when exploring the Mandelbrot set. Bits can encode an exponential number of states, and a few megabytes is enough to not terminate for millennia if it’s only zooming in and recalculating thousands of times per second. Likewise with your job - if it maxes or mins a hundred years out, rather than one, it’s a very different frame.

Interesting, I thought that zooming at a constant speed increased RAM usage at a constant rate but I hadn't checked.

There's some subtlety here about exactly what "zooming" means.  In standard implementations, zooming recalculates a small area of the current view, such that the small area has higher precision ("zoomed"), but the rest of the space ("unzoomed") goes out of frame and the memory gets reused.  The end result is the same number of sampled points ("pixels" in the display) each zoom level.
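As a concrete sketch of that (resolution and iteration cap are illustrative, not from any particular renderer): each frame re-samples the same fixed pixel grid around the current center, so per-frame memory never grows; only `center` and `width` persist between zoom levels.

```python
# Minimal fixed-memory zoom sketch; px, py, max_iter are illustrative.
def mandelbrot_frame(center, width, px=80, py=40, max_iter=100):
    """Escape-time counts for a fixed px-by-py grid spanning `width` around `center`."""
    frame = []
    for j in range(py):
        row = []
        for i in range(px):
            c = complex(center.real + (i / px - 0.5) * width,
                        center.imag + (j / py - 0.5) * width * py / px)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n)
        frame.append(row)
    return frame  # same size at every zoom level

center, width = complex(-0.74364, 0.13182), 3.0
for level in range(20):        # each level is a 2x zoom
    frame = mandelbrot_frame(center, width)
    width /= 2                 # old, unzoomed samples simply go out of frame
```

The catch is the precision of `center` and `width` themselves: with hardware doubles, a sketch like this stops resolving new detail after roughly 50 doublings, which is where the memory question below comes in.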

Memory-efficient Mandelbrot zooms are an interesting rabbit hole, apparently. But I think that with any of them you must store at least one number in full precision. If you zoom 2x per second, then you need at least about one more bit of RAM per second, pretty sure. Not certain.
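Back-of-envelope arithmetic for that claim (my numbers, purely illustrative): after n 2x zooms the view is 2^n times narrower, so storing the center to within the current view width takes about n bits per axis.

```python
# One extra bit of center precision per 2x zoom (illustrative arithmetic).
for seconds in (60, 3600, 86400 * 365):
    doublings = seconds                  # zooming 2x per second
    bits = doublings                     # log2(initial_width / current_width)
    print(f"{seconds:>8} s -> ~{bits} bits (~{bits // 8} bytes) per axis")
```

On this accounting, even a year of nonstop 2x-per-second zooming needs only about 4 MB per coordinate, which fits the "few megabytes" intuition upthread.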

I think what you're pointing at is adjacent to many interesting or useful things. For instance, Poincaré recurrence, and how predictions of Boltzmann brains can be a death-knell for any model of the world. Or the technique of solving for equilibrium, which, if everyone internalized it, would probably prevent us from shoving ourselves even further away from the Pareto frontier. Or the surprising utility of modelling a bunch of processes as noise and analysing their effects on dynamics.

But the idea that everything either reaches a steady state, a periodic sequence of states, or becomes noise seems useful only insofar as it lets us see if something is noisy/periodic/steady-state by checking that it isn't the other two. (I'm not sure that this is true. The universe may well have negative curvature and we could get aperiodic, non-noisy dynamics forever.) 

What exactly do you mean by "the technique of solving for equilibrium"? I would've said I'd 'internalized' it, but you're making me wonder if I actually have.

Solving for equilibrium is when you analyse the impact of a proposed action by asking what the resulting equilibrium looks like. Internalizing it means that you automatically notice when/where to apply this skill and do it without conscious effort. It also includes noticing when a system is already in equilibrium. A side-effect of this is noticing when an equilibrium doesn't make sense to you, indicating that you're missing some factor.
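For concreteness, a toy numerical version (the supply/demand curves are made up, not anything from the thread): rather than simulating the adjustment step by step, solve directly for the state where the pressures balance, then check that the dynamics land there.

```python
# Toy "solve for equilibrium": find where opposing pressures balance,
# instead of simulating every intermediate step.
def demand(p): return 100 - 2 * p
def supply(p): return 10 + 4 * p

p_eq = 90 / 6          # solve demand(p) = supply(p): 100 - 2p = 10 + 4p

# Sanity check: the step-by-step dynamic converges to the same point.
p = 0.0
for _ in range(1000):
    p += 0.01 * (demand(p) - supply(p))   # price rises while demand exceeds supply
print(p_eq, round(p, 6))                  # both ~15.0
```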

Oh yes I think I do that all the time. "How will that settle out?" and "How is it staying like this idgi"

Okay, my computer right here has 10^13 bits of storage, and without too much trouble I could get it to use all that memory as a counter and just count to the highest value possible, about 2^(10^13). Counting that high would take much, much longer than the age of the universe, even at a fast clock speed.

Now technically yes, after it got to that 2^(10^13) value it would have to either halt or start over from 0 or something... but that seems not so practically relevant to me because it's such a huge integer value.
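A sketch of the counter itself (a tiny bytearray standing in for the 10^13 bits, plus a step cap so the demo actually terminates):

```python
# Use a block of memory as one big little-endian counter: N bytes step
# through 256**N states in place, touching only the bytes that carry.
def increment(counter: bytearray) -> bool:
    """Add 1 in place; return False when the counter wraps back to zero."""
    for i in range(len(counter)):
        counter[i] = (counter[i] + 1) % 256
        if counter[i]:       # no carry past this byte
            return True
    return False             # every byte wrapped: 256**N states exhausted

counter = bytearray(8)       # stand-in for the ~1.25 * 10**12 bytes above
steps = 0
while increment(counter) and steps < 10**6:   # cap so the sketch terminates
    steps += 1
```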

Yes, excellent point. Although bit flips would get you there a bit faster. And rust, I suppose. Oh, also: SSDs can only take roughly 1k-100k rewrites on each block; RAM and CPU are more durable. I bet it would run for at least a century, if your computer isn't a laptop with batteries.