This came up as a tangent from this question, which is itself a tangent from a discussion on The Hidden Complexity of Wishes.

Suppose we have a perfect cubical box of length 1 meter containing exactly 1 mol of argon gas at room temperature.

  • At t=0, the gas is initialized with random positions and velocities drawn from the Maxwell-Boltzmann distribution.
  • Right after t=0, we perturb one of the particles by 1 angstrom in a random direction to get the perturbed state x′.
  • All collisions are perfectly elastic, so there is no viscosity [edit, this is wrong; even ideal gases have viscosity] and energy is conserved.
  • For each possible perturbation, we run physics forward for 20 seconds and measure whether there are more gas molecules in the left side or right side of the box at t=20 seconds (the number on each side will be extremely close to equal, but differ slightly). Do more than 51% of the possible perturbations result in the same answer? That is, if L is the predicate "more gas molecules on the left at t=20", is Pr[L(x′)] > 0.51 or < 0.49?

This is equivalent to asking if an omniscient forecaster who knows the position and velocity of all atoms at t=0 except for 1 angstrom of uncertainty in 1 atom can know with >51% confidence which side has more gas molecules at t=20.
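For a sense of scale, the margin being predicted is tiny. A quick sketch (assuming, purely for illustration, that each atom independently lands on either side, which is roughly right for counting statistics):

```python
import math

N = 6.022e23  # atoms in 1 mol

# With each atom equally likely to be on either side, the left-minus-right
# count difference has a standard deviation of about sqrt(N).
typical_imbalance = math.sqrt(N)

print(f"typical |left - right| imbalance: ~{typical_imbalance:.1e} atoms")
print(f"as a fraction of all atoms:       ~{typical_imbalance / N:.1e}")
```

So the forecaster is being asked to call the sign of an imbalance of roughly 10^12 atoms out of ~10^24, a part-in-10^12 effect.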

I think the answer is no, because a collection of colliding billiard balls is a textbook example of a chaotic system that maximizes entropy quickly, and there's no reason the information should be preserved for 20 seconds. That is enough time for each atom to collide with others millions of times, and even sound waves will travel thousands of meters and have plenty of time to dissipate.

@habryka thinks the answer is yes and the forecaster could get more than 99.999% accuracy, because with such a large number of molecules, there should be some structure that remains predictable.

Who is right?


In the 2D case, there's no escaping exponential decay of the autocorrelation function for any observable satisfying certain regularity properties. (I'm not sure if this is known to be true in higher dimensions. If it's not, then there could conceivably be traps with sub-exponential escape times or even attractors, but I'd be surprised if that's relevant here—I think it's just hard to prove.) Sticking to 2D, the question is just how the time constant in that exponent for the observable in question compares to 20 seconds.

The presence of persistent collective behavior is a decent intuition, but I'm not sure it saves you. I'd start by noting that for any analysis of large-scale structure—like a spectral analysis where you're looking at superpositions of harmonic sound waves—the perturbation to a single particle's initial position is a perturbation to the initial condition of every component in the spectral basis, and each of those perturbations will then grow exponentially.

In this case you can effectively decompose the system into "Lyapunov modes" each with their own exponent for the growth rate of perturbations, and, in fact, because the system is close to linear in the harmonic basis, the modes with the smallest exponents will look like the low-wave-vector harmonic modes. One of these, conveniently, looks like a "left-right density" mode. So the lifetime (or Q factor) of that first harmonic is somewhat relevant, but the actual left-right density difference still involves the sum of many harmonics (for example, with more nodes in the up-down dimension) that have larger exponents. These individually contribute less (given equipartition of initial energy, these modes spend relatively more of their energy in up-down motion and so affect left-right density less), but collectively it should be enough to scramble the left-right density observable in 20 seconds even with a long-lived first harmonic.

On the other hand, 1 mol in 1 m^3 is not very dense, which should tend to make modes longer-lived in general. So I'm not totally confident on this one without doing any calculations. Edit: Wait, no, I think it's the other way around. Speed of sound and viscosity are roughly constant with gas density and attenuation otherwise scales inversely with density. But I think it's still plausible that you have a 300 Hz mode with Q in the thousands.

I think you're probably right. It does seem plausible that there is some subtle structure which is preserved after 20 seconds, such that the resulting distribution over states is feasibly distinguishable from a random configuration, but I don't think we have any reason to think that this structure would be strongly correlated with which side of the box contains the majority of particles.

The variance in density will by default be very low, so the effect size of such structure really doesn't have to be very high. Also, if you can identify multiple such structures which are uncorrelated, you can quickly bootstrap to relatively high confidence.

I don't think "strong correlation" is required. I think you just need a few independent pieces of evidence. Such independence is usually really hard to establish, but we are dealing with logical omniscience here.

For example, any set of remotely coherent waves that form in the box with …

Predicting the ratio at t=20s is hopeless. The only sort of thing you can predict is the variance in the ratio over time: the ratio as a function of time looks like 1/2 + ε(t), where ε(t) is a fluctuating term whose typical size scales like 1/√N. Here the large number of atoms lets you predict that typical size, but the exact value after 20 seconds is chaotic. To get an exact answer for how much initial perturbation still leads to a predictable state, you'd need to compute the Lyapunov exponents of an interacting classical gas system, and I haven't been able to find a paper that does this within 2 min of searching. (Note that if the atoms are non-interacting the problem stops being chaotic, of course, since they're just bouncing around on the walls of the box.)

https://www.sciencedirect.com/science/article/abs/pii/S1674200121001279

They find a Lyapunov exponent of about 1 or 2 (where time is basically in units of the time it takes a particle at average velocity to cover the length of the box).

For room-temperature gas, this timescale is about 1/400 seconds. So the divergence should grow by a factor of over e^8000 over the 20 seconds (until it hits the ceiling of maximum possible divergence).

Since an Angstrom is only 10^-10 m, if you start with an Angstrom offset, the divergence reaches maximum by about a tenth of a second.
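As a sanity check on that figure, here is a sketch assuming (per the paper above) a Lyapunov exponent of ~1 e-folding per box-crossing time, with the crossing time taken to be 1/400 s:

```python
import math

crossing_time = 1 / 400   # s: time for an average-speed atom to cross the 1 m box
lyapunov = 1.0            # e-foldings of divergence per crossing time (paper: ~1-2)

initial_offset = 1e-10    # m: the 1 angstrom perturbation
ceiling = 1.0             # m: taking "maximum divergence" to be the box size

# Number of e-foldings needed, and the wall-clock time to accumulate them
e_foldings = math.log(ceiling / initial_offset)
t_saturate = e_foldings * crossing_time / lyapunov

print(f"~{e_foldings:.0f} e-foldings needed; divergence saturates in ~{t_saturate:.2f} s")
```

With these numbers the perturbation saturates in well under a second, consistent with the "about a tenth of a second" figure.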

habryka:
Do you know how to interpret "maximum divergence" in this context? Also, IIRC aren't there higher-order exponents that might decay slower? (I just read about this this morning, so I am quite unfamiliar with the literature here)
Charlie Steiner:
Hm, this is a good question. In writing my original reply, I figured "maximum divergence" was a meter. You start with two trajectories an angstrom apart, and they slowly diverge, but they can't diverge more than 1 meter. I think this is true if you're just looking at the atom that's shifted, but not true if you look at all the other atoms as well. Then maybe we actually have a 10^24-dimensional state space, and we've perturbed the state space by 1 angstrom in 1 dimension, and "maximum divergence" is actually more like the size of state space (√(1² + 1² + ⋯), with 10^24 terms, = 10^12 meters). In which case it actually takes two tenths of a second for exponential chaos to go from 10^-10 to 10^12. Nah, I don't think that's super relevant here. All the degrees of freedom of the gas are coupled to each other, so the biggest source of chaos can scramble everything just fine.
habryka:
Hmm, I don't super buy this. For example, this model predicts no standing wave would survive for multiple seconds, but that is trivial to disprove by experiment. So clearly there are degrees of freedom that remain coupled. No waves of substantial magnitude are present in the initialization here, but your argument clearly implies a fast decay rate for any kind of wave, which is too strong.
Charlie Steiner:
Yeah, good point (the examples, not necessarily any jargon-ful explanation of them). Sound waves—or even better, slow-moving vortices, or, better still and different, the diffusion of a cloud of one gas through a room filled with a different gas—show that you don't get total mixing of a room on a one-second timescale. I think most likely I've mangled something in extrapolating a paper on a tiny toy model of a few hundred gas atoms to the meter scale.

The goal is not to predict the ratio, but just to predict which side will have more atoms (no matter how small the margin). It seems very likely to me that any such calculation would be prohibitively expensive and would approximately require logical omniscience.

To clarify this, we are assuming that without random perturbation, you would get 100% accuracy in predicting which side of the system has more atoms at t=20s. The question is how much of that 100% accuracy you can recover with a very very small unknown perturbation.

JBlack:
Is this supposed to involve quantum physics, or just some purely classical toy model? In a quantum physics model, the probability of observing more atoms on one side than the other will be indistinguishable from 50% (assuming that your box is divided exactly in half and all other things are symmetric etc). The initial perturbation will make no difference to this.
habryka:
Quantum physics. I don't see why it would be indistinguishable from 50%. Agreed that there will be some decoherence. My guess is decoherence would mostly leave particle positions at this scale intact, and if it becomes a huge factor, I would want the question to be settled on the basis of being able to predict which side has higher irreducible uncertainty (i.e. which side had higher amplitude, if I am using that concept correctly).
red75prime:
Citing https://arxiv.org/abs/cond-mat/9403051: "Furthermore if a quantum system does possess this property (whatever it may be), then we might hope that the inherent uncertainties in quantum mechanics lead to a thermal distribution for the momentum of a single atom, even if we always start with exactly the same initial state, and make the measurement at exactly the same time." The author then proceeds to demonstrate that this is indeed the case. I guess it partially answers the question: the quantum state thermalises, and you'll get a classical thermal distribution of measurement results for at least some measurements, even when measuring the system in the same quantum state. The less initial uncertainty in energy, the faster the system thermalises. That is, to slow quantum thermalisation down you need to initialize the system with atoms in highly localized positions, but then you can't know their exact velocities and can't predict the classical evolution.
JBlack:
Decoherence (or any other interpretation of QM) will definitely lead to a pretty uniform distribution over this sort of time scale. Just as in the classical case, the underlying dynamics is extremely unstable within the bounds of conservation laws, with the additional problem that the final state for any given perturbation is a distribution instead of a single measurement. If there is any actual asymmetry in the setup (e.g. one side of the box was 0.001 K warmer than the other, or the volumes of each side were 10^-9 m^3 different), you will probably get a very lopsided distribution for an observation of which side has more molecules regardless of initial perturbation. If the setup is actually perfectly symmetric though (which seems fitting with the other idealizations in the scenario), the resulting distribution of outcomes will be 50:50, essentially independent of the initial state within the parameters given.
red75prime:
So the question is not about real argon gas, but about a billiard-ball model? That should be stated in the question.

Tangential. 

Is part of the motivation behind this question to think about the level of control that a superintelligence could have over a complex system if it were only able to influence a small part of that system?

There are a lot of assumptions in "omniscient forecaster knows the position and velocity of all molecules at t=0" that make the answer "probably possible to calculate, probably not in real-time on current hardware".  

Edit (motivated by a downvote, though I'd have preferred a textual disagreement): I actually fight the premise. An "omniscient forecaster" is so far from current tech that it's impossible to guess what it could calculate. Say it only has 32 bits of precision in 3 dimensions of position and velocity, so 24 bytes for each of 6x10^23 particles: 1.4x10^25 bytes.

Call it 14 yottabytes. There's no way we can predict what such a being might or might not be able to calculate, or to what precision.


You could instead ask whether or not the observer could predict the location of a single particle p0, perhaps stipulating that p0 isn't the particle that's randomly perturbed.

My guess is that a random 1 angstrom perturbation is enough so that p0's location after 20s is ~uniform. This question seems easier to answer, and I wouldn't really be surprised if the answer is no?

Here's a really rough estimate: this source says ~10^{10} collisions per second, so 3 s after the start ~everything will have hit the randomly perturbed particle, and then there are 17 * 10^{10} more collisions, each of which adds ~1 angstrom of uncertainty to p0. 1 angstrom is 10^{-10} m, so the total uncertainty is on the order of 10 m, which means it's probably uniform? This actually came out closer than I thought it would, so now I'm less certain that it's uniform.

This is a slightly different question than the total # of particles on each side, but it intuitively becomes much harder to answer the #-of-particles question if you have to make your prediction via higher-order effects, which will probably be smaller.

The system is chaotic, so the uncertainty increases exponentially with each collision. Also atoms are only about 1 angstrom wide, so the first unpredictable collision p0 makes will send it flying in some random direction, and totally miss the 2nd atom it should have collided with, instead probably hitting some other atom.

However, the position of a single particle p0 after 20 seconds is not uniform. It simply doesn't have time to diffuse far enough: with 2*10^11 collisions and a mean free path of 70 nm, it travels a total displacement of 70 nm * sqrt(2*10^11) = 3 cm. Since the box contains only 1 mol of gas (well below atmospheric pressure), the mean free path is longer, but I think it should still only travel ~20 cm on average.
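A sketch of that random-walk estimate (the atmospheric-density numbers are from the comment above; the 1 mol/m^3 mean free path and collision count are my own rough assumptions):

```python
import math

# Atmospheric-density numbers quoted above
mfp_atm = 70e-9        # m, mean free path
n_coll_atm = 2e11      # collisions over 20 s
disp_atm = mfp_atm * math.sqrt(n_coll_atm)
print(f"net displacement at ~1 atm: {disp_atm * 100:.0f} cm")

# At 1 mol per m^3 the gas is ~40x less dense, so the mean free path is ~40x
# longer and the collision rate ~40x lower (assumed rough values):
mfp_low = 3e-6         # m
n_coll_low = 2.4e9     # collisions over 20 s
disp_low = mfp_low * math.sqrt(n_coll_low)
print(f"net displacement at 1 mol/m^3: {disp_low * 100:.0f} cm")
```

Both come out in the centimeters-to-tens-of-centimeters range, far from uniform over a 1 m box.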

It could still be that the total uncertainty in many particles adds up to the majority side being unpredictable.

Doing it for one particle seems like it would be harder than doing it for all particles: even if you are highly uncertain about each individual particle, in aggregate that could still produce quite high confidence about which side has more particles. So my guess is it matters a lot whether the distribution is almost uniform or not.

A little (perhaps pedantic) point of observation:

 "there's no reason information should be preserved for 20 seconds" - An aside on this comment. For an idealised classical mechanical (reversible) system, the information is preserved forever. Chaos, roughly speaking, moves the same information into different significant figures (or relations between them). So an initial uncertainty in the 20th decimal place soon becomes an uncertainty in the first significant figure as the information moves about. All the information you had about the leading digits of precision describing the initial state is still there in some sense; it has been mapped into bizarre constraints connecting different figures deep behind the decimals, and is practically useless.

So, in this sense the question (I think) is whether that remaining information you have tells you anything about which side of the box will have more molecules at t=20. So, instead of a 50/50 guess, does the information let you get to 60/40 or whatever.

My feeling is that it must be almost worthless. Let's say that information takes you from a 50/50 guess to a 50+s/50-s guess. My intuition is that if we plotted this "s" value as a function of time, it is likely an exponential decay, and 20 seconds feels like a very long time compared to the timescales involved in the molecular motion. At t=0, s will be very close to 50 (only if the perturbed molecule is within one angstrom of the dividing line between left and right will s be less than 50 at t=0). But at t=20 it has undergone many half-lives, so it's probably 10^{-big number} after 20 seconds.

The prediction that the information would be significant implies an assumption that s does not exponentially decay with time, but is described by some other function (maybe a constant). So I think the core of the dispute might be different assumptions about the shape of the s(t) function.
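To make the two assumptions concrete, here is a sketch of a hypothetical exponentially decaying s(t); the half-life value is an arbitrary illustration, not a calculated quantity:

```python
def s(t, s0=50.0, half_life=0.1):
    """Edge over a 50/50 guess, in percentage points, if it decays exponentially.

    s0 = 50 encodes near-certainty at t=0; half_life is an assumed illustration.
    """
    return s0 * 2 ** (-t / half_life)

for t in (0.0, 0.1, 1.0, 20.0):
    print(f"t = {t:5.1f} s: best guess is (50+s)/(50-s) with s = {s(t):.3g}")

# Even with a generous 0.1 s half-life, s(20) = 50 * 2**-200 ~ 3e-59:
# utterly indistinguishable from a coin flip. The opposing prediction
# amounts to s(t) being roughly constant instead.
```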

Standing waves of pressure (a.k.a. sound resonances) are macroscopic patterns with some persistence. Their amplitudes would be really low (≈kT of sound energy per resonance) in a random initial configuration, but if you knew every particle, you would still know the starting amplitude and phase of each standing wave, and could project it out, and one particle won’t appreciably affect the starting amplitude and phase I think.

So, do the standing waves have enough persistence to matter? How much do those standing waves decay after 20 seconds? I dunno. I know how to calculate all the frequencies, but I have no idea how to calculate the Q-factors—at least, not off the top of my head.

I'm also not sure which standing waves (if any?) bear on left-right difference in particle count.

All this is on the edge of my knowledge, so I could well be wrong. Insert "I thinks" and "from what I remembers" as appropriate throughout what follows.

If we start with non-interacting air molecules then the standing waves of pressure are the normal modes of the container. With non-interacting molecules the movement of a single molecule is not necessarily chaotic, whether it is or not depends on the shape of the container.

Assuming no loss (Q factor of infinity), knowing that the motion contains some contribution from a particular normal mode allows us to plot that normal mode (a sine wave, say) out to infinite future (and past) times. However, in a chaotic system it is required that the frequencies of the normal modes are approximately equally spaced: there are no big gaps in the frequencies. I think the relevance of this to the question is that if all we know is that normal mode number 27 has some amplitude, then the sine wave we can infer from it is added to all the other modes, which contribute white noise. (The mode-spacing argument ensures the noise is in fact white, and not colored noise that we could exploit to actually know something.) So, assuming mode 27 only has a typical amplitude, we learn very little.

When we add collisions between the air molecules back in, then I believe it is chaotic for any shape of container.  Here the true normal modes of the total system include molecule bumping, but the standing waves we know about from the non-interacting case are probably reasonably long-lived states.

Yeah, standing waves were what Thomas and I mostly talked about when we had a long conversation about this. It seems like there would be a bunch, and they wouldn't obviously decay that fast.

My guess is that it's extremely unlikely that enough energy is concentrated in a standing wave at initialization for it to not dissipate in 20 seconds. By equipartition it should be extremely unlikely for energy to be concentrated in any degree of freedom in any physical system, but I don't know enough physics to be confident that this argument applies.

Certainly there is no conservation of standing-wave amplitude, because even with two billiard balls waves can form and dissipate. The question is how long it takes for waves of the tiny amplitudes caused by initialization to dissipate.

Why are you guys talking about waves necessarily dissipating? Wouldn't there be an equal probability of waves forming and dissipating, given that we are sampling a random initial configuration, and hence are in equilibrium w.r.t. formation/dissipation of waves?

If you look at a noise-driven damped harmonic oscillator, the autocorrelation of “oscillator state at time t1” and “oscillator state at time t2” cycles positive and negative at the oscillator frequency [if it’s not overdamped], but with an envelope that gradually decays to zero when t1 and t2 get far enough apart from each other.

This whole thing is time-symmetric—knowing the state at time 0 is unhelpful for guessing the state at very positive timestamps AND unhelpful for guessing the state at very negative timestamps.

But the OP question was about fixing an initial state and talking about later times, so I was talking in those terms, which is more intuitive anyway. I.e., as time moves forward, the influence of the initial state gradually decays to zero (because the wave is damped), while meanwhile the accumulated influence of the noise driver gradually increases.

Yes, it would be more correct to say the question is how long it takes for the probability distribution of the amplitude and phase of a given oscillation mode to be indistinguishable from that of any other random box of gas.

Yes by the equipartition theorem there’s an average of kT of energy in each standing wave mode at any given moment. Might be fun to calculate how many left-right atoms that corresponds to—I think that calculation should be doable. I imagine that for the fundamental mode, it would be comparable to the √(number of atoms in the box) difference that we expect for other reasons.
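An attempt at that calculation, as a rough sketch only: I'm assuming the standard result that a standing acoustic mode with pressure amplitude p0 carries energy E = V·p0²/(4ρc²), setting E = kT for the fundamental left-right mode, and using my own assumed values for argon at 300 K; any of these could be off by O(1) factors.

```python
import math

# Assumed parameters: 1 mol of argon in a 1 m^3 box at 300 K
kT = 1.381e-23 * 300               # J
N = 6.022e23                       # atoms
V = 1.0                            # m^3
m_ar = 6.64e-26                    # kg, mass of one argon atom
rho = N * m_ar / V                 # ~0.040 kg/m^3
c = math.sqrt(1.667 * kT / m_ar)   # adiabatic sound speed, ~320 m/s

# Set the mode energy E = V*p0^2/(4*rho*c^2) equal to kT, solve for the
# pressure amplitude p0, then convert to a number-density amplitude n0.
p0 = 2 * c * math.sqrt(rho * kT / V)
n0 = (p0 / c**2) / m_ar            # atoms/m^3

# For a density profile n0*cos(pi*x), integrating over each half of the box
# gives a left-minus-right atom-count difference of 2*n0/pi.
dN_mode = 2 * n0 / math.pi

print(f"left-right difference from a kT fundamental mode: ~{dN_mode:.1e} atoms")
print(f"counting-statistics sqrt(N) difference:           ~{math.sqrt(N):.1e} atoms")
```

With these assumed numbers the two come out within a factor of ~2 of each other (both around 10^12 atoms), which matches the guess that they're comparable.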

It's continuous and exponential. If the amplitude of standing-wave mode N decays by a factor of 2 in X seconds, then it's the same X whether the initial amplitude in that mode is macroscopic or comparable to the noise floor. (Well, unless there are nonlinearities / anharmonicities, but that's probably irrelevant in this context.) But meanwhile, noise is driving the oscillation too. So anyway, I think it really matters how X compares to 20 seconds, which again is something I don't know.

I think the term "forecaster" is perhaps confusing here and it would be more clear to say "what fraction of the time is the final configuration in terms of left/right the same under a tiny random perturbation".

That is, let x₀ be the fixed initial random configuration, and let L(x₀) be whether after 20 seconds the final configuration (when starting from x₀) has more molecules on the left. (Note that this is a fully deterministic quantity.) Then let x′ be the state obtained by perturbing a particle of x₀ as described, given the random perturbation direction.

Now, we care about the value of Pr[L(x′)] over the random perturbation (in particular, whether it is very close to 1/2, or is > 0.51, or is < 0.49).

agree, I changed the question wording

Minor nitpicks:

- I read "1 angstrom of uncertainty in 1 atom" as the location being normally distributed with mean <center> and SD 1 angstrom, or as uniformly distributed in a solid sphere of radius 1 angstrom. Taken literally, though, "perturb one of the particles by 1 angstrom in a random direction" is distributed on the surface of the sphere (the particle is known to be exactly 1 angstrom from <center>).
- The answer will absolutely depend on the temperature (in a neighborhood of absolute zero, the final positions of the gas particles are very close to the initial positions).
- The answer also might depend on the exact starting configuration. While I think most configurations would end up at ~50/50 chance after 20 seconds, there are definitely configurations that would be stably strongly on one side.

Nothing conclusive below, but things that might help:

- A back-of-envelope calculation says the single uncertain particle has ~(10 million * sqrt(temp in K)) collisions/sec.
- If I'm using MSD right (big if!) then at STP, particles move from their initial position only by about 5 cm in 20 seconds (they cover massive distance, but the Brownian motion cancels in expectation).
- I think that at standard temperature, this would be at roughly 1/40 standard pressure?
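A sketch checking those numbers against standard kinetic-theory formulas (the argon diameter and mass below are my own assumed values):

```python
import math

kB = 1.381e-23       # J/K, Boltzmann constant
T = 300.0            # K
n = 6.022e23         # number density: 1 mol per m^3
d = 3.4e-10          # m, rough argon atom diameter (assumed)
m = 6.64e-26         # kg, argon atom mass

mfp = 1 / (math.sqrt(2) * math.pi * d**2 * n)    # hard-sphere mean free path
v_mean = math.sqrt(8 * kB * T / (math.pi * m))   # mean thermal speed
coll_rate = v_mean / mfp                         # collisions per second

p = n * kB * T                                   # ideal-gas pressure
print(f"mean free path: {mfp * 1e6:.1f} um")
print(f"collision rate: {coll_rate:.1e} /s")
print(f"pressure: {p:.0f} Pa (~1/{101325 / p:.0f} atm)")
```

This gives ~10^8 collisions/s and roughly 1/40 of standard pressure, in the same ballpark as the estimates above.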

Does this question, as posed, admit the possibility that the answer could be "it depends on what the initial random configuration is"? I suspect that's what it is: there are states that robustly result in the left side having more particles than the right after 20s.

The bet would then be over the integral over all the random initializations (and random perturbations). I.e., does a random initialization in expectation leave enough information intact for 20 seconds if you change it a tiny bit?