# 13

As before, we will consider particles moving in boxes in an abstract and semi-formal way.

Imagine we have two types of particle, red and blue, in a box. They can change colour freely, and as before let's forget as much as possible. Our knowledge of each particle's state now looks like:

For each particle: a uniform distribution over its position within the box, and an equal probability of being red or blue.

Let's connect these particles to a second box, but also introduce a colour-changing gate to the passage between the boxes. Particles can only go through the gate if they're willing to change from red to blue, and only go back if they change colour in the opposite direction.

Without further rules, lots of particles will approach the gate. The blue ones will be turned away, but the red ones happily switch to blue as they move into the second box, and ... promptly change back to red. We've not done anything interesting. We need a new rule: particles can't change colour without a good reason; they must swap colours by bumping into each other.

Now the particles that end up in the second box must remain blue. Let's think about what happens when we start the box off. Unfortunately this question as posed is unanswerable, because we've introduced a conserved quantity into our system. Any time we do this, we must specify how much of the quantity is in the system. As an exercise, think about what the conserved quantity is.

...

The conserved quantity is: the number of red particles plus the number of particles in box 2.

Or, conversely, we could consider: the number of blue particles in box 1

to be constant, since the total number of particles is constant. Writing things out like this also makes the states of the system more explicit, so we can reason about the system more accurately. We expect the constant number of blue-and-in-box-1 particles to be distributed evenly throughout box 1, and we expect the constant number of red-or-in-box-2 particles to be distributed evenly throughout boxes 1 and 2. If (as above) both boxes are equally sized, this cashes out to the following rule:

For $N$ particles starting in box 1, of which $R$ start off red, we expect to end up with $N - R$ blue particles in box 1, $R/2$ red particles in box 1, and $R/2$ blue particles in box 2.

Remember, this only works because of the conserved quantity we have induced in our system.
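We can sanity-check this rule with a quick Monte Carlo sketch (all names here are mine). The key observation is that in-box colour swaps leave the (colour, box) counts unchanged, so the only count-changing moves are gate crossings:

```python
import random

def simulate(N=1000, R=400, steps=200_000, seed=0):
    """Track the (colour, box) counts of the gated two-box system.

    Collisions swap colours between two particles in the same box,
    which leaves these counts unchanged, so the only count-changing
    moves are gate crossings: (red, box 1) <-> (blue, box 2).
    """
    rng = random.Random(seed)
    red1, blue1, blue2 = R, N - R, 0
    for _ in range(steps):
        i = rng.randrange(N)       # pick a uniformly random particle
        if i < red1:               # a red particle in box 1 ...
            red1, blue2 = red1 - 1, blue2 + 1   # ... crosses, turning blue
        elif i < red1 + blue2:     # a blue particle in box 2 ...
            red1, blue2 = red1 + 1, blue2 - 1   # ... crosses back, turning red
        # blue particles in box 1 are turned away at the gate
    return red1, blue1, blue2

red1, blue1, blue2 = simulate()
print(red1, blue1, blue2)  # hovers around R/2 = 200, N-R = 600, R/2 = 200
```

With $N = 1000$ and $R = 400$, the blue count in box 1 stays pinned at $600$, while the remaining $400$ particles split roughly evenly between red-in-box-1 and blue-in-box-2.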

# Global Conservation

Now let's imagine that the walls of box 1 are somewhat permeable. Particles on either side cannot cross, but they can swap redness with external particles. We've now swapped our locally conserved quantity for a globally conserved quantity.

In this case, we can't eyeball things any more. We have to go back to the maths of entropy. We can write the entropy of our system $S_{sys}$ as the sum of the entropies of the individual particles. The entropy of each particle can then be written as the entropy arising from our uncertainty over the three states $(\text{red}, 1)$, $(\text{blue}, 1)$, $(\text{blue}, 2)$ (denoting colour and box), plus the entropy coming from our uncertainty over the position of a particle within a given box, which is constant.

Thanks to our previous calculation, we can write all of these in terms of a single probability $p$, which we will choose to be $P(\text{red})$.

What we actually care about, as it turns out, is the derivative of entropy with respect to this parameter:

$$\frac{dS}{dp} = \ln\frac{1-p}{p} - \ln 2$$

Which is almost the same as the derivative of entropy in a simple two-state system where $p$ is the probability of being in one of the states. In a way, we do have a two-state system, where the states are $\{\text{red}, \text{blue}\}$. The only difference is that for particles in the state $\text{blue}$, there is an extra bit of uncertainty over position, hence the $\ln 2$ term. We can think of this system in two equivalent ways:

1. A three-state system with states $(\text{red}, 1)$, $(\text{blue}, 1)$, $(\text{blue}, 2)$, where $P(\text{blue}, 1) = P(\text{blue}, 2) = \frac{1-p}{2}$. The entropy is just the entropy calculated over all three states.
2. A two-state system with states $\{\text{red}, \text{blue}\}$, with no restrictions on the distribution other than $P(\text{red}) + P(\text{blue}) = 1$. The entropy is calculated over the two states, but an extra $P(\text{blue}) \ln 2$ term is added to correct for the intrinsically higher entropy of the $\text{blue}$ state.

The second view is the one most commonly taken in stat mech, where we deal with very complex systems. In fact, we have already seen this concept in the last post, when we considered the entropy of a particle distributed over two boxes of different sizes.
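As a numeric sanity check of the two views (a sketch in natural-log units; the function names are mine), a finite-difference derivative of the three-state entropy should match the plain two-state derivative $\ln\frac{1-p}{p}$ minus the $\ln 2$ correction:

```python
import math

def entropy3(p):
    """View 1: three states (red,1), (blue,1), (blue,2), with the
    blue probability mass 1-p split evenly between the two boxes."""
    q = (1 - p) / 2
    return -(p * math.log(p) + 2 * q * math.log(q))

def dS_dp(p):
    """View 2: the two-state derivative ln((1-p)/p), corrected by
    -ln(2) for the blue state's extra positional uncertainty."""
    return math.log((1 - p) / p) - math.log(2)

h = 1e-6
for p in (0.2, 0.5, 0.8):
    numeric = (entropy3(p + h) - entropy3(p - h)) / (2 * h)
    print(f"p={p}: numeric {numeric:+.6f}, closed form {dS_dp(p):+.6f}")
```

The two columns agree to the accuracy of the finite difference, which is the "two equivalent ways" claim in miniature.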

## Derivatives In Terms of Red-ness

Our previous calculation found the derivative of $S$ in terms of $p$. This is a bad choice, since we don't want to think of changing $p$ directly. Instead we have to think in terms of changing the number of red particles in the system, which for shorthand I'll call $R$ as before. Because $R = Np$, and $N$ is constant, we can just divide by $N$ to get the derivative of the total system entropy $S_{sys} = NS$ in terms of $R$:

$$\frac{dS_{sys}}{dR} = \ln\frac{1-p}{p} - \ln 2$$

This is even better, since it doesn't depend on $N$ at all! Now we must consider the external entropy $S_{ext}$. Let's say we have $M$ particles outside the box, each of which has a probability $\rho$ of being red. If $S_{ext}$ is the entropy coming from the entire exterior of the box, and the non-colour part of each external particle's entropy is the same whether it is red or blue, we can find the derivative of $S_{ext}$ with respect to $\rho$ quite simply:

$$\frac{dS_{ext}}{d\rho} = M\ln\frac{1-\rho}{\rho}$$

From which follows the derivative in terms of the number of red particles outside the box, $R_{ext} = M\rho$:

$$\frac{dS_{ext}}{dR_{ext}} = \ln\frac{1-\rho}{\rho}$$

The important part here is that, as $M \to \infty$, the derivative of $S_{ext}$ with respect to $R_{ext}$ remains constant: for sufficiently large $M$, we can totally ignore the change in $\rho$ when $R_{ext}$ changes. Finally, if we assume that $\frac{dS_{ext}}{dR_{ext}}$ is a constant, and note that global conservation of redness means $dR_{ext} = -dR$, we can write down the following derivative:

$$\frac{d(S_{sys} + S_{ext})}{dR} = \ln\frac{1-p}{p} - \ln 2 - \frac{dS_{ext}}{dR_{ext}}$$

If we want to find the maximum entropy, i.e. forget as much as possible, then we want to set this to zero, which gives the following relation:

$$\ln\frac{1-p}{p} = \ln 2 + \frac{dS_{ext}}{dR_{ext}}$$
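Assuming the zero-derivative condition works out to $\frac{1-p}{p} = 2\exp\left(\frac{dS_{ext}}{dR_{ext}}\right)$, we can solve for the equilibrium red fraction directly (a sketch; the function name is mine):

```python
import math

def equilibrium_p(ext_slope):
    """Red fraction p solving ln((1-p)/p) = ln(2) + ext_slope,
    where ext_slope stands in for dS_ext/dR_ext."""
    return 1.0 / (1.0 + 2.0 * math.exp(ext_slope))

# An exterior that is indifferent to redness (slope 0) still leaves
# p below 1/2: the ln(2) position term alone favours blue.
print(equilibrium_p(0.0))   # 1/3
# An exterior that gains entropy by absorbing red particles
# (positive slope) pushes p down further.
print(equilibrium_p(1.0) < equilibrium_p(0.0))  # True
```

Note that the exterior's influence enters only through the single number `ext_slope`, which is the point of the next paragraph.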

Normally it's not so easy to solve the relevant equation in terms of obvious parameters of the external world like $\rho$. In most cases, we work out the solution in terms of the derivative:

$$\frac{dS_{ext}}{dR_{ext}}$$

Which is often easier to measure than you might think! But you'll have to wait for the next post for that.
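To get a feel for why this derivative is a manageable quantity, here's a finite-difference sketch under the assumption that the exterior's colour entropy is $-M\left[\rho\ln\rho + (1-\rho)\ln(1-\rho)\right]$: the slope per red particle comes out as $\ln\frac{1-\rho}{\rho}$, independent of the reservoir size $M$.

```python
import math

def S_ext(R_ext, M):
    """Colour entropy of M external particles, R_ext of them red."""
    rho = R_ext / M
    return -M * (rho * math.log(rho) + (1 - rho) * math.log(1 - rho))

rho = 0.3
for M in (10_000, 10_000_000):
    R_ext = rho * M
    # entropy change per red particle added, by central difference
    slope = (S_ext(R_ext + 1, M) - S_ext(R_ext - 1, M)) / 2
    print(f"M={M}: dS_ext/dR_ext ~ {slope:.6f}")
print(f"closed form: {math.log((1 - rho) / rho):.6f}")
```

The slope depends only on the red fraction $\rho$, not on $M$, which is why a huge exterior can be summarised by one number.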

# Conclusions

1. When we induce a conserved quantity in our system, we must specify how much of that quantity is present.
2. When we look at a globally conserved quantity, we must instead specify the derivative of the total external entropy $S_{ext}$ with respect to the total external amount of that quantity.
3. We can switch between more micro-level views of individual states, and more macro-level views of states with "intrinsic" entropy.