[ Question ]

What's the optimal procedure for picking macrostates?

by Adam Scholl · 26th Aug 2019 · 1 min read · 3 comments



Say I want to estimate the change in entropy between two observed microstates: "cheerios in a box" and "cheerios on the floor." Assuming I have infinite time and compute, what's the best known way to choose which macrostates are most appropriate for use in measuring the change? Should I even be thinking about macrostates, or should I measure the change in some other way? 

A friend once told me the answer was something along the lines of: "pick the shortest string that outputs a class containing the observed microstate, where the microstate is a non-unique member of that class." That is, the most precise way to communicate the microstate's location within the class should be simply to say that it is member #x of the class; if the microstate were instead, say, the class's "only prime member," it would count as unique and the class wouldn't qualify.

Does (my perhaps-faulty memory of) this answer seem legit? Are there better proposals?
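To make that criterion concrete, here is a toy sketch of one common formalization, the two-part (minimum-description-length) code: score each candidate class by (bits to describe the class) + log2(class size), restricted to classes in which the observed microstate is a non-unique member. The microstates, the menu of classes, and the description lengths below are all invented for illustration; the real criterion involves Kolmogorov complexity, which is uncomputable.

```python
import itertools
import math

# Toy stand-in for the uncomputable Kolmogorov-style criterion:
# microstates are 8-bit strings, and candidate classes come from a
# hand-picked menu with made-up description lengths.
MICROSTATES = ["".join(bits) for bits in itertools.product("01", repeat=8)]

# (description length in bits, name, membership predicate)
CANDIDATE_CLASSES = [
    (2,  "all 8-bit strings",          lambda s: True),
    (5,  "strings with exactly one 1", lambda s: s.count("1") == 1),
    (12, "the literal string itself",  lambda s: s == "00000001"),
]

def best_macrostate(observed):
    """Minimize (bits to describe class) + log2(class size), keeping only
    classes where `observed` is a non-unique member."""
    scored = []
    for desc_bits, name, member in CANDIDATE_CLASSES:
        members = [s for s in MICROSTATES if member(s)]
        if observed in members and len(members) > 1:  # non-unique membership
            scored.append((desc_bits + math.log2(len(members)), name, members))
    return min(scored, key=lambda t: t[0])

total_bits, name, members = best_macrostate("00000001")
print(name, "->", f"{total_bits:.1f} total bits,", len(members), "members")
# -> strings with exactly one 1 -> 8.0 total bits, 8 members
```

Under this scoring, a structured string like "00000001" lands in a small, cheaply describable class, while a random-looking string is best described as a generic member of the whole space.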

1 Answer

Within the Bayesian version of statistical mechanics, "macrostates" are determined by the information available to you, in particular your observations. We don't observe a microstate directly; we observe some averaged data about the microstate, e.g. a readout from a thermometer or pressure sensor. The macrostate is then the class of microstates which would yield the same observed measurements (within measurement uncertainty).
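A minimal sketch of that definition, with the system, instrument, and tolerance all invented for illustration (10 two-state spins, and a sensor that reads the total number of up-spins to within ±1):

```python
import itertools

N = 10
MICROSTATES = list(itertools.product((0, 1), repeat=N))

def macrostate(reading, tolerance=1):
    """All microstates whose measured value matches the observed reading
    within the instrument's uncertainty."""
    return [s for s in MICROSTATES if abs(sum(s) - reading) <= tolerance]

compatible = macrostate(reading=5)
print(len(compatible), "of", len(MICROSTATES), "microstates are compatible")
# -> 672 of 1024
```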

(That is not the whole picture - there are other ways of gaining information besides just the measurements. See the paper linked above for a more accurate explanation.)

Under this view, physical entropy becomes essentially the same as information entropy. Entropy is strictly a property of macrostates; it quantifies the number of microstates compatible with the macrostate, and therefore the number of bits needed to communicate a microstate once the macrostate is known. In other words, entropy quantifies our own uncertainty about the system's microstate, given the known (macroscopic) information.
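As a worked version of that count (continuing the toy spin example above; the numbers are mine, not the answer's): with a uniform distribution over the compatible microstates, the entropy in bits is just the log of the class size, and the thermodynamic entropy is the same count in different units, S = k_B ln Ω.

```python
import math

# Entropy of the toy macrostate above, assuming a uniform distribution
# over its 672 compatible microstates: the number of bits needed to
# single out one microstate once the macrostate is known.
H_macro = math.log2(672)    # ~9.39 bits
H_total = math.log2(2**10)  # 10 bits with no measurement at all
print(f"{H_macro:.2f} bits given the reading, vs {H_total:.0f} bits with none")
```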