# 39

Eliezer Yudkowsky previously wrote (6 years ago!) about the second law of thermodynamics. Many commenters were skeptical about the statement, "if you know the positions and momenta of every particle in a glass of water, it is at absolute zero temperature," because they don't know what temperature is. This is a common confusion.

## Entropy

To specify the precise state of a classical system, you need to know its location in phase space. For a bunch of helium atoms whizzing around in a box, phase space is the position and momentum of each helium atom. For N atoms in the box, that means 6N numbers to completely specify the system.

Let's say you know the total energy of the gas, but nothing else. It will be the case that a fantastically huge number of points in phase space will be consistent with that energy.* In the absence of any more information it is correct to assign a uniform distribution to this region of phase space. The entropy of a uniform distribution is the logarithm of the number of points, so that's that. If you also know the volume, then the number of points in phase space consistent with both the energy and volume is necessarily smaller, so the entropy is smaller.
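A toy version of this counting can be run directly. This is an illustrative sketch I'm adding, not part of the original argument; the four-particle two-level system is an invented example, chosen because the same kind of lattice shows up in the homework:

```python
from itertools import product
from math import log

# Toy system: 4 two-level particles, each with energy 0 or 1 (in units of eps).
microstates = list(product([0, 1], repeat=4))

# Knowing only the total energy (say E = 2) leaves a set of consistent points.
consistent = [s for s in microstates if sum(s) == 2]
S_energy = log(len(consistent))  # entropy of the uniform distribution: ln 6

# Learning one more fact (say, that particle 0 is excited) shrinks the set.
consistent_more = [s for s in consistent if s[0] == 1]
S_more = log(len(consistent_more))  # ln 3

print(len(consistent), len(consistent_more))  # 6 3
```

More knowledge means fewer consistent microstates, which means lower entropy, exactly as with the volume constraint above.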

This might be confusing to chemists, since they memorized a formula for the entropy of an ideal gas, and it's ostensibly objective. Someone with perfect knowledge of the system will calculate the same number on the right side of that equation, but to them, that number isn't the entropy. It's the entropy of the gas if you know nothing more than energy, volume, and number of particles.

## Temperature

The existence of temperature follows from the zeroth and second laws of thermodynamics: thermal equilibrium is transitive, and entropy is maximum in equilibrium. Temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium.

If two systems are in equilibrium then they cannot increase entropy by flowing energy from one to the other. That means that if we flow a tiny bit of energy from one to the other (δU1 = -δU2), the entropy change in the first must be the opposite of the entropy change of the second (δS1 = -δS2), so that the total entropy (S1 + S2) doesn't change. For systems in equilibrium, this leads to (∂S1/∂U1) = (∂S2/∂U2). Define 1/T = (∂S/∂U), and we are done.
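The equilibrium condition can be checked numerically. In this sketch (added for illustration) the entropy functions S_i(U) = a_i ln U are an arbitrary choice; the point is only that the entropy-maximizing split of energy is the one that equalizes ∂S/∂U, i.e., the temperatures:

```python
import numpy as np

# Two toy systems with entropies S_i(U_i) = a_i * ln(U_i),
# so 1/T_i = dS_i/dU_i = a_i / U_i.
a1, a2 = 2.0, 3.0
U_total = 10.0

# Scan every split of the total energy; find the one maximizing total entropy.
U1 = np.linspace(0.01, U_total - 0.01, 100001)
S_total = a1 * np.log(U1) + a2 * np.log(U_total - U1)
U1_eq = U1[np.argmax(S_total)]

T1 = U1_eq / a1                  # T = U/a for these entropy functions
T2 = (U_total - U1_eq) / a2
print(U1_eq, T1, T2)             # U1 ~ 4.0, and the two temperatures agree
```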

Temperature is sometimes taught as, "a measure of the average kinetic energy of the particles," because for an ideal gas U/N = (3/2) kBT. This is wrong as a definition, for the same reason that the ideal gas entropy isn't the definition of entropy.

Probability is in the mind. Entropy is a function of probabilities, so entropy is in the mind. Temperature is a derivative of entropy, so temperature is in the mind.

## Second Law Trickery

With perfect knowledge of a system, it is possible to extract all of its energy as work. EY states it clearly:

> So (again ignoring quantum effects for the moment), if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.

Someone who doesn't know the state of the water will observe a violation of the second law. This is allowed. Let that sink in for a minute. Jaynes calls it second law trickery, and I can't explain it better than he does, so I won't try:

> A physical system always has more macroscopic degrees of freedom beyond what we control or observe, and by manipulating them a trickster can always make us see an apparent violation of the second law.
>
> Therefore the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {X1, ..., Xn} that we have chosen to define our macrostate. Any attempt to write a stronger law than this will put one at the mercy of a trickster, who can produce a violation of it.
>
> But recognizing this should increase rather than decrease our confidence in the future of the second law, because it means that if an experimenter ever sees an apparent violation, then instead of issuing a sensational announcement, it will be more prudent to search for that unobserved degree of freedom. That is, the connection of entropy with information works both ways; seeing an apparent decrease of entropy signifies ignorance of what were the relevant macrovariables.

## Homework

I've actually given you enough information on statistical mechanics to calculate an interesting system. Say you have N particles, each fixed in place to a lattice. Each particle can be in one of two states, with energies 0 and ε. Calculate and plot the entropy if you know the total energy: S(E), and then the energy as a function of temperature: E(T). This is essentially a combinatorics problem, and you may assume that N is large, so use Stirling's approximation. What you will discover should make sense using the correct definitions of entropy and temperature.

*: How many combinations of 10^23 numbers between 0 and 10 add up to 5×10^23?



This is a good article making a valuable point. But this —

Temperature is sometimes taught as, "a measure of the average kinetic energy of the particles," because for an ideal gas U/N = (3/2) kBT. This is wrong, for the same reason that the ideal gas entropy isn't the definition of entropy.

— is a confusing way to speak. There is such a thing as "the average kinetic energy of the particles", and one measure of this thing is called "temperature" in some contexts. There is nothing wrong with this as long as you are clear ab...

8B_For_Bandana9y
An alternate phrasing (which I think makes it clearer) would be: "the distinction between mechanical and thermal energy is in the mind, and because we associate temperature with thermal but not mechanical energy, it follows that two observers of the same system can interpret it as having two different temperatures without inconsistency." In other words, if you fall into the sun, your atoms will be strewn far and wide, yes, but your atoms will be equally strewn far and wide if you fall into an ice-cold mechanical woodchipper. The distinction between the types of energy used for the scattering process is what is subjective.
2Lumifer9y
The high-school definition of temperature as "a measure of the average kinetic energy of the particles" (see the grandparent comment) actually erases that distinction as it defines temperature through kinetic (mechanical) energy.
0B_For_Bandana9y
Right, but we don't think of a tennis ball falling in a vacuum as gaining thermal energy or rising in temperature. It is "only" gaining mechanical kinetic energy; a high school student would say that "this is not a thermal energy problem," even though the ball does have an average kinetic energy (kinetic energy, divided by 1 ball). But if temperature of something that we do think of as hot is just average kinetic energy, then there is a sense in which the entire universe is "not a thermal energy problem."
0Lumifer9y
That's because temperature is a characteristic of a multi-particle system. One single particle has energy, a large set of many particles has temperature. And still speaking of high-school physics, conversion between thermal and kinetic energy is trivially easy and happens all the time around us.
0jbay9y
A tennis ball is a multi-particle system; however, all of the particles are accelerating more or less in unison while the ball free-falls. Nonetheless, it isn't usually considered to be increasing in temperature, because the entropy isn't increasing much as it falls.
5calef9y
I think more precisely, there is such a thing as "the average kinetic energy of the particles", and this agrees with the more general definition of temperature "1 / (derivative of entropy with respect to energy)" in very specific contexts. That there is a more general definition of temperature which is always true is worth emphasizing.
2Luke_A_Somers9y
Rather than 'in very specific contexts' I would say 'in any normal context'. Just because it's not universal doesn't mean it's not the overwhelmingly common case.

if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.

I am not sure this is true as stated. An omniscient Maxwell demon that would only allow hot molecules out runs into a number of problems, and an experimentally constructed Maxwell's demon works by converting coherent light (low entropy) into incoherent (high entropy).

Maxwell's demon, as criticized in your first link, isn't omniscient. It has to observe incoming particles, and the claim is that this process generates the entropy.

[Spoiler alert: I can't find any 'spoiler' mode for comments, so I'm just going to give the answers here, after a break, so collapse the comment if you don't want to see that]

.

.

.

.

.

.

.

.

.

.

For the entropy (in natural units), I get

S(E) = N ln N − (E/ε) ln(E/ε) − (N − E/ε) ln(N − E/ε)

and for the energy, I get

E(T) = εN / (e^(ε/T) − 1)

Is this right? (upon reflection and upon consulting graphs, it seems right ...

4spxtr9y
Not quite, but close. It should be a + instead of a - in the denominator. Nice work, though. You have the right formula for the entropy. Notice that it is nearly identical to the Bernoulli distribution entropy. That should make sense: there is only one state with energy 0 or Nε, so the entropy should go to 0 at those limits. Its maximum is at Nε/2. Past that point, adding energy to the system actually decreases entropy. This leads to a negative temperature! But we can't actually reach that by raising its temperature. As we raise temperature to infinity, energy caps at Nε/2 (specific heat goes to 0). To put more energy in, we have to actually find some particles that are switched off and switch them on. We can't just put it in equilibrium with a hotter thing.
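The corrected result E(T) = Nε / (e^(ε/T) + 1) is easy to sanity-check numerically; here is an illustrative sketch (not from the thread), working per particle in units where N = ε = 1:

```python
from math import exp

# Two-level lattice, per particle and in units of eps (so N = eps = 1).
def E(T):
    # Corrected result for the two-level system: E(T) = N*eps / (exp(eps/T) + 1)
    return 1.0 / (exp(1.0 / T) + 1.0)

print(E(0.01))   # ~0: at low temperature everything sits in the ground state
print(E(1e6))    # -> 1/2: energy caps at half the particles excited
print(E(-1.0))   # > 1/2: a negative temperature is "hotter" than any positive one
```

The last line shows the negative-temperature regime spxtr describes: it holds more energy than any positive temperature can reach.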
3spxtr9y
I made a plot of the entropy and the (correct) energy. Every feature of these plots should make sense. Note that the exponential turn-on in E(T) is a common feature to any gapped material. Semiconductors do this too :)
2Luke_A_Somers9y
Why did you only show the E(T) function for positive temperatures?
1calef9y
This is a good point. The negative side gives good intuition for the "negative temperatures are hotter than any positive temperature" argument.
2Luke_A_Somers9y
What gives a better intuition is thinking in inverse temperature. Regular temperature is, 'how weakly is this thing trying to grab more energy so as to increase its entropy'. Inverse temperature is 'how strongly...' and when that gets down to 0, it's natural to see it continue on into negatives, where it's trying to shed energy to increase its entropy.
0spxtr9y
No reason. Fixed.
1DanielFilan9y
The energy/entropy plot makes total sense, the energy/temperature doesn't really because I don't have a good feel for what temperature actually is, even after reading the "Temperature" section of your argument (it previously made sense because Mathematica was only showing me the linear-like part of the graph). Can you recommend a good text to improve my intuition? Bonus points if this recommendation arrives in the next 9.5 hours, because then I can get the book from my university library.
1spxtr9y
Depends on your background in physics. Landau & Lifshitz Statistical Mechanics is probably the best, but you won't get much out of it if you haven't taken some physics courses.
1Falacer9y
I gave this a shot as well, since your value for E(T) → ∞ as T → ∞, while I would think the system should cap out at εN. I get a different value for S(E). Reasoning: if E/ε is 1, there are N microstates, since 1 of N positions is at energy ε. If E/ε is 2, there are N(N-1) microstates, etc., giving for E/ε = x that there are N!/(N-x)!, so

S = ln[N!/(N-x)!] = ln(N!) - ln((N-x)!) = N ln N - (N-x) ln(N-x)

S(E) = N ln N - (N - E/ε) ln (N - E/ε)

Can you explain how you got your equation for the entropy? Going on, I get

E(T) = ε(N - e^(ε/T - 1))

This also looks wrong, as although E → ∞ as T → ∞, it also doesn't cap at exactly εN, and E → -∞ for T → 0... I'm expecting the answer to look something like:

E(T) = εN(1 - e^(-ε/T))/2

which ranges from 0 to εN/2, which seems sensible.

EDIT: Nevermind, the answer was posted while I was writing this. I'd still like to know how you got your S(E) though.
1spxtr9y
S(E) is the log of the number of states in phase space that are consistent with energy E. Having energy E means that E/ε particles are excited, so we get (N choose E/ε) states. Now take the log :)
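That binomial count can be checked against the Stirling closed form numerically; a quick illustrative sketch (N and M are arbitrary choices):

```python
from math import comb, log

# Exact entropy S = ln(N choose M), with M = E/eps excited particles,
# vs. the Stirling form S ~ N ln N - M ln M - (N - M) ln(N - M).
N, M = 1000, 300

S_exact = log(comb(N, M))
S_stirling = N * log(N) - M * log(M) - (N - M) * log(N - M)

print(S_exact, S_stirling)  # agree to well under 1% for N this large
```

The small residual is the dropped sqrt term in Stirling's approximation, which is negligible for large N, as the homework assumed.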

This is related to the physics of computation: the ultimate physical computers operate at temperatures approaching 0 K (reversible computing, the Landauer principle). Heat/entropy is computational stupidity.

Incidentally, this also explains the Fermi paradox: post-singularity civilizations migrate away from their hot stars into the cold interstellar spaces, becoming dark matter (which, however, does not imply that all cold dark matter is intelligent).

Temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium.

I think I've figured out what's bothering me about this. If we think of temperature in terms of our uncertainty about where the system is in phase space, rather than how large a region of phase space fits the macroscopic state, then we gain a little in using the second law, but give up a lot everywhere else. Unless I am mistaken, we lose the following:

• Heat flows from hot to cold
• Momentum distribution can be predicted from temperature
• Phase changes ca
...
1spxtr9y
We don't lose those things. Remember, this isn't my definition. This is the actual definition of temperature used by statistical physicists. Anything statistical physics predicts (all of the things you listed) is predicted by this definition. You're right though. If you know the state of the molecules in the water then you don't need to think about temperature. That's a feature, not a bug.
2Richard Korzekwa 9y
Suppose that you boil some water in a pot. You take the pot off the stove, and then take a can of beer out of the cooler (which is filled with ice) and put it in the water. The place where you're confusing your friends by putting cans of beer in pots of hot water is by the ocean, so when you read the thermometer that's in the water, it reads 373 K. The can of beer, which was in equilibrium with the ice at a measured 273 K, had some bits of ice stuck to it when you put it in. They melt. Next, you pull out your fancy laser-doppler-shift-based water molecule momentum spread measurer. The result jives with 373 K liquid water. After a short time, you read the thermometer as 360 K (the control pot with no beer reads 371 K). There is no ice left in the pot. You take out the beer, open it, and measure its temperature to be 293 K and its momentum width to be smaller than that of the boiling water.

What we observed was:

* Heat flowed from 373 K water to 273 K beer
* The momentum distribution is wider for water at 373 K than at 293 K
* Ice placed in 373 K water melts
* Our thermometer reads 373 K for boiling water and 273 K for water-ice equilibrium

Now, suppose we do exactly the same thing, but just after putting the beer in the water, Omega tells us the state of every water molecule in the pot, but not the beer. Now we know the temperature of the water is exactly 0 K. We still anticipate the same outcome (perhaps more precisely), and observe the same outcome for all of our measurements, but we describe it differently:

* Heat flowed from 0 K water to 273 K beer
* The momentum distribution is wider for water at 0 K (or recently at 0 K) than at 293 K
* Ice placed in 0 K water melts
* Our thermometer reads 373 K for water boiling at 0 K, and 273 K for water-ice equilibrium

So the only difference is in the map, not the territory, and it seems to be only in how we're labeling the map, since we anticipate the same outcome using the same model (assuming you didn't use
4spxtr9y
Omega tells us the state of the water at time T=0, when we put the beer into it. There are two ways of looking at what happens immediately after. The first way is that the water doesn't flow heat into the beer, rather it does some work on it. If we know the state of the beer/water interface as well then we can calculate exactly what will happen. It will look like quick water molecules thumping into slow boundary molecules and doing work on them. This is why the concept of temperature is no longer necessary: if we know everything then we can just do mechanics. Unfortunately, we don't know everything about the full system, so this won't quite work. Think about your uncertainty about the state of the water as you run time forward. It's initially zero, but the water is in contact with something that could be in any number of states (the beer), and so the entropy of the water is going to rise extremely quickly. The water will initially be doing work on the beer, but after an extremely short time it will be flowing heat into it. One observer's work is another's heat, essentially.
1Richard Korzekwa 9y
This actually clears things up quite a lot. I think my discomfort with this description is mainly aesthetic. Thank you for being patient.
0spxtr9y
The rule that all microstates that are consistent with a given macrostate are equally probable is a consequence of the maximum entropy principle. See this Jaynes paper.

Probability is in the mind.

I am not sure what this means. In what sense is probability in the mind, but energy isn't? Or if energy is in the mind, as well, what physical characteristic is not and why?

3spxtr9y
In the standard LW map/territory distinction, probability, entropy, and temperature are all features of the map. Positions, momenta, and thus energy are features of the territory. I understand that this doesn't fit your metaphysics, but I think it should still be a useful concept. Probably.
2shminux9y
Sorry, I wasn't clear. I didn't use "my metaphysics" here, just the standard physical realism, with maps and territories. Suppose energy is the feature of the territory... because it's the "capacity to do work", using the freshman definition. Why would you not define temperature as the capacity to transfer heat, or something? And probability is already defined as the rate of decay in many cases... or is that one in the mind, too?
3spxtr9y
Energy is a feature of the territory because it's a function of position and momenta and other territory-things. "Capacity to transfer heat" can mean a few things, and I'm not sure which you want. There's already heat capacity, which is how much actual energy is stored per degree of temperature. To find the total internal heat energy you just have to integrate this up to the current temperature. The usual example here is an iceberg which stores much more heat energy than a cup of coffee, and yet heat flows from the coffee to the iceberg. If you mean something more like "quantity that determines which direction heat will flow between two systems," then that's just the definition I presented :p I actually have trouble defending "probability is in the mind" in some physics contexts without invoking many-worlds. If it turns out that many-worlds is wrong and copenhagen, say, is right, then it will be useful to believe that for physical processes, probability is in the territory. I think. Not too sure about this.
2shminux9y
Yeah, that's a better definition :) Feel free to elaborate. I'd think that probability is either in the map or in the territory, regardless of the context or your QM ontology, not sometimes here and sometimes there. And if it is in the territory, then so is entropy and temperature, right?
3spxtr9y
Say we have some electron in equal superposition of spin up and down, and we measure it. In Copenhagen, the universe decides that it's up or down right then and there, with 50% probability. This isn't in the mind, since it's not a product of our ignorance. It can't be, because of Bell stuff. In many-worlds, the universe does a deterministic thing and one branch measures spin up, one measures spin down. The probability is in my mind, because it's a product of my ignorance - I don't know what branch I'm in.
2shminux9y
Hmm, so, assuming there is no experimental distinction between the two interpretations, there is no way to tell the difference between map and territory, not even in principle? That's disconcerting. I guess I see what you mean by " trouble defending "probability is in the mind"".
3spxtr9y
If we inject some air into a box then close our eyes for a few seconds and shove in a partition, there is a finite chance of finding the nitrogen on one side and the oxygen on the other. Entropy can decrease, and that's allowed by the laws of physics. The internal energy had better not change, though. That's disallowed. If energy changes, our underlying physical laws need to be reexamined. In the official dogma, these are firmly in the territory. If entropy goes down, our map was likely wrong.
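That "finite chance" can be put in perspective with a crude back-of-the-envelope sketch (added for illustration; the molecule count is an assumed round number, and treating each molecule's side as independent is an idealization):

```python
from math import log10

# Each of n molecules independently ends up on either side of the partition.
# P(all nitrogen left AND all oxygen right) ~ (1/2)^n for n total molecules.
n = 6.0e23   # roughly a mole of gas (assumed round number)
log10_P = n * log10(0.5)
print(log10_P)   # about -1.8e23 -- allowed by the laws of physics, never observed
```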
0spxtr9y
That doesn't follow.
2spxtr9y
Even if Copenhagen is right, I as a rational agent still ought to use mind-probabilities. It may be the case that the quantum world is truly probabilistic-in-the-territory, but that doesn't affect the fact that I don't know the state of any physical system precisely.
0TheAncientGeek9y
Can't there be forms of probability in the territoryand the map?
1TheAncientGeek9y
You're most of the way towards why you shouldn't believe the Jaynes-Yudkowsky argument. If you really can infer the absence of probability in the territory by reflecting on human reasoning alone, then the truth of CI versus MWI shouldn't matter. If it matters, as you seem to think, then armchair reasoning can't do what Jaynes and Yudkowsky think it can (in this case).
-3TheAncientGeek9y
It's a reference to a bad, but locally popular, argument from Jaynes. It holds that, since some forms of probability are subjective, they all are, and... ta-daaa... the territory is therefore deterministic.
-3dxu9y
Name a probability that is not subjective. (And before you bring up quantum-mechanical collapse, I'd just like to say one thing: MWI. And before you complain about unfalsifiability, let me link you here.)
1TheAncientGeek9y
I don't need definite proof of in-the-territory probability to support my actual point, which is that you can't determine the existence or non existence of features of the territory by armchair reflection.
3dxu9y
Of course you can't determine whether something exists or not. There might yet be other probabilities out there that actually are objective. The fact that we have not discovered any such thing, however, is telling. Absence of evidence is evidence of absence. Therefore, it is likely--not certain, but likely--that no such probabilities exist. If your claim is that we cannot be certain of this, then of course you are correct. Such a claim, however, is trivial.

Thanks for this. I am definitely going to use the Gibbs paradox (page 3 of the Jaynes paper) to nerd-snipe my physics-literate friends.

I'll follow suit with the previous spoiler warning.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

I took a bit different approach from the others that have solved this, or maybe you'd just say I quit early once I thought I'd shown the thing I thought you were trying to show:

If we write entropy in terms of the number of particles, N and the fraction of them that are excited: α ≡ E/(Nε) , and take the derivative with respect to α, we get:

dS/dα = N log [(1-α)/α]

Or if that N is bothering you (since temperature is usually an intensive property), we can just write:

T = 1/(dS/dE) =...

2spxtr9y
I posted some plots in the comment tree rooted by DanielFilan. I don't know what you used as the equation for entropy, but your final answer isn't right. You're right that temperature should be intensive, but the second equation you wrote for it is still extensive, because E is extensive :p
2Richard Korzekwa 9y
You're right. That should be ε, not E. I did the extra few steps to substitute α = E/(Nε) back in, and solve for E, to recover DanielFilan's (corrected) result:

E = Nε / (exp(ε/T) + 1)

I used S = log[N choose M], where M is the number of excited particles (so M = αN). Then I used Stirling's approximation as you suggested, and differentiated with respect to α.
0spxtr9y
Good show!

so temperature is in the mind

I am not quite sure in which way this statement is useful.

"..and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing." -- Douglas Adams

3DanielFilan9y
I had that thought as well, but the 'Second Law Trickery' section convinced me that it was a useful statement.
2Lumifer9y
I'll grant that it is an interesting statement, but at the moment my impression is that it's just redefining the word "temperature" in a particular way.
1DanielLC9y
I don't know of any way that statement in particular is useful, but understanding the model that produces it can be helpful. For example, it's possible to calculate the minimum amount of energy necessary to run a certain computation on a computer at a certain temperature. It's further useful in that it shows that if the computation is reversible, there is no minimum energy.
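The minimum energy DanielLC mentions is Landauer's bound, E = k_B·T·ln 2 per erased bit; here is a short sketch of the arithmetic (added for illustration):

```python
from math import log

k_B = 1.380649e-23   # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_limit(T, bits=1):
    # Minimum energy to erase `bits` bits at temperature T: bits * k_B * T * ln 2.
    return bits * k_B * T * log(2)

print(landauer_limit(300.0))   # ~2.87e-21 J per bit near room temperature
print(landauer_limit(0.01))    # the bound shrinks toward zero as T -> 0
```

Reversible computation erases no bits, which is why it has no such floor.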
0Lumifer9y
The model is fine, what I'm having problems with is the whole "in the mind" business which goes straight to philosophy and seems completely unnecessary for the discussion of properties of classic systems in physics.
2DanielLC9y
Entropy is statistical laws. Thus, like statistics, it's in the mind. It's also no more philosophical than statistics is, and not psychological at all.
3Lumifer9y
I have a feeling you're confusing the map and the territory. Just because statistics (defined as a toolbox of methods for dealing with uncertainty) exists in the mind, there is no implication that uncertainty exists only in the mind as well. Half-life of a radioactive element is a statistical "thing" that exists in real life, not in the mind. In the same way, phase changes of a material exist in the territory. You can usefully define temperature as a particular metric such that water turns into gas at 100 and turns into ice at zero. Granted, this approach has its limits but it does not seem to depend on being "in the mind".
0DanielLC9y
The half-life of a radioactive element is something that can be found without using probability. It is the time it takes for the measure of the universes in which the atom is still whole to be exactly half of the initial measure. Similarly, phase change can be defined without using probability. The universe may be indeterministic (though I don't think it is), but all this means is that the past is not sufficient to conclude the future. A mind that already knows the future (perhaps because it exists further in the future) would still know the future.
1Lumifer9y
So, does your probability-less half-life require MWI? That's not a good start. What happens if you are unwilling to just assume MWI? Why do you think such a thing is possible?
4Kindly9y
Even without references to MWI, I'm pretty sure you can just say the following: if at time t=0 you have an atom of carbon-14, at a later time t>0 you will have a superposition of carbon-14 and nitrogen-14 (with some extra stuff). The half-life is the value of t for which the two coefficients will be equal in absolute value.
0DanielLC9y
Uncertainty in the mind and uncertainty in the territory are related, but they're not the same thing, and calling them both "uncertainty" is misleading. If indeterminism is true, there is an upper limit to how certain someone can reliably be about the future, but someone further in the future can know it with perfect certainty and reliability. If I ask if the billionth digit of pi is even or odd, most people would give even odds to those two things. But it's something that you'd give even odds to on a bet, even in a deterministic universe. If I flip a coin and it lands on heads, you'd be a fool to bet otherwise. It doesn't matter if the universe is nondeterministic and you can prove that, given all the knowledge of the universe before the coin was flipped, it would be exactly equally likely to land on heads or tails. You know it landed on heads. It's 100% certain.
0Lumifer9y
Yes, future is uncertain but past is already fixed and certain. So? We are not talking about probabilities of something happening in the past. The topic of the discussion is how temperature (and/or probabilities) are "in the mind" and what does that mean.
-2DanielLC9y
The past is certain but the future is not. But the only difference between the two is when you are in relation to them. It's not as if certain time periods are inherently past or future. An example of temperature being in the mind that's theoretically possible to set up but you'd never manage in practice is Maxwell's demon. If you already know where all of the particles of gas are and how they're bouncing, you could make it so all the fast ones end up in one chamber and all the slow ones end up in the other. Or you can just get all of the molecules into the same chamber. You can do this with an arbitrarily small amount of energy.
I think his "in the mind" is correct in his context, because in the model of entropy he is discussing, temperature_entropy depends on entropy, which depends on your knowledge of the states of the system. I'll repeat what I said earlier in the context of the discussion of different theories of time: new physics didn't make old ideas useless. Temperature_kineticenergy is probably more relevant in most situations. The OP makes his mistake by identifying temperature_entropy with temperature_kineticenergy.
1calef9y
I don't see the issue in saying [you don't know what temperature really is] to someone working with the definition [T = average kinetic energy]. One definition of temperature is always true. The other is only true for idealized objects.
Nobody knows what anything really is. We have more or less accurate models.
0DanielLC9y
What do you mean by "true"? They both can be expressed for any object. They are both equal for idealized objects.
1calef9y
Only one of them actually corresponds with temperature for all objects. They are both equal for one subclass of idealized objects, in which case the "average kinetic energy" definition follows from the entropic definition, not the other way around. All I'm saying is that it's worth emphasizing that one definition is strictly more general than the other.
3DanielLC9y
Average kinetic energy always corresponds to average kinetic energy, and the amount of energy it takes to create a marginal amount of entropy always corresponds to the amount of energy it takes to create a marginal amount of entropy. Each definition corresponds perfectly to itself all of the time, and applies to the other in the case of idealized objects. How is one more general?
1nshepperd9y
Two systems with the same "average kinetic energy" are not necessarily in equilibrium. Sometimes energy flows from a system with lower average kinetic energy to a system with higher average kinetic energy (e.g. real gases with different degrees of freedom). Additionally, "average kinetic energy" is not applicable at all to some systems, e.g. the Ising magnet.
0calef9y
I just mean as definitions of temperature. There's temperature(from kinetic energy) and temperature(from entropy). Temperature(from entropy) is a fundamental definition of temperature. Temperature(from kinetic energy) only tells you the actual temperature in certain circumstances.
1DanielLC9y
Why is one definition more fundamental than another? Why is only one definition "actual"?
calef (9y, score 0)
Because one is true in all circumstances and the other isn't? What are you actually objecting to? That physical theories can be more fundamental than each other?
DanielLC (9y, score 1)
I admit that some definitions can be better than others. A whale lives underwater, but that's about the only thing it has in common with a fish; it has everything else in common with a mammal. You could still make a word to mean "animal that lives underwater". There are cases where where an animal lives is so important that that alone justifies a word for it. If you met someone who used the word "fish" to mean "animal that lives underwater", and used it in contexts where it was clear what it meant (like among other people who also used it that way), you might be able to convince them to change their definition, but you'd need a better argument than "my definition is always true, whereas yours is only true in the special case that the fish is not a mammal".
calef (9y, score 1)
The distinction here goes deeper than calling a whale a fish (I do agree with the content of the linked essay). If a layperson asks me what temperature is, I'll say something like, "It has to do with how energetic something is" or even "something's tendency to burn you". But I would never say "It's the average kinetic energy of the translational degrees of freedom of the system" because they don't know what most of those words mean. That latter definition is almost always used in the context of, essentially, undergraduate problem sets as a convenient fiction for approximating the real temperature of monatomic ideal gases--which, again, is usually a stepping stone to the thermodynamic definition of temperature as a partial derivative of entropy.

Alternatively, we could just have temperature(lay person) and temperature(precise). I will always insist on temperature(precise) being the entropic definition. And I have no problem with people choosing whatever definition they want for temperature(lay person) if it helps someone's intuition along.
Lumifer (9y, score 0)
So, effectively there are two different things which go by the same name? Temperature_entropy is one measure (coming from the information-theoretic side) and temperature_kineticenergy is another (coming from, um, pre-Hamiltonian mechanics?). That makes some sense, but then I have a question. If you take an ice cube out of the freezer and put it on a kitchen counter, will it melt if there is no one to watch it? In other words, how does the "temperature is in the mind" approach deal with phase transitions?
They look like two different concepts to me. I don't know. I suppose that would depend on how much that mind knows about phase transitions.
DanielLC (9y, score 0)
That's difficult to say. If you build a heat pump, you deal with entropy. If you radiate waste heat, you deal with kinetic energy. If you want to know how much waste heat you're going to have, you deal with entropy. If you significantly change the temperature of something with a heat pump, then you have to deal with both, at a large variety of temperatures. Calling them Temperature_kineticenergy and Temperature_entropy is somewhat misleading, since both involve kinetic energy. Temperature_kineticenergy is average kinetic energy, and Temperature_entropy is the change in kinetic energy necessary to cause a marginal increase in entropy. Also, if you escape your underscores with backslashes, you won't get the italics.
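For what it's worth, the two notions can be checked against each other in the one regime where both apply. Below is a numerical sketch (the particle count, volume, and helium mass are my own assumed values): compute the Sackur-Tetrode entropy of a monatomic ideal gas, differentiate it with respect to U, and recover the same 300 K that the kinetic-energy relation U = (3/2) N k_B T was seeded with.

```python
# Numerical check that 1/T = dS/dU reproduces U = (3/2) N k_B T for a
# monatomic ideal gas, using the Sackur-Tetrode entropy formula.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
m = 6.646e-27        # mass of a helium atom, kg (approximate)

def sackur_tetrode(U, V, N):
    """Entropy of a monatomic ideal gas with energy U, volume V, N atoms."""
    lam = (4 * math.pi * m * U / (3 * N * h**2)) ** 1.5
    return N * k_B * (math.log(V / N * lam) + 2.5)

N, V = 1e22, 1e-3              # 10^22 helium atoms in one litre
U = 1.5 * N * k_B * 300.0      # seed with the energy of a 300 K gas

# Temperature from the entropic definition, via a finite difference.
dU = U * 1e-6
T_entropic = dU / (sackur_tetrode(U + dU, V, N) - sackur_tetrode(U, V, N))

print(T_entropic)  # approximately 300 K
```

For this gas the two definitions agree; the point of the surrounding discussion is that only the entropic one survives outside this special case.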
nshepperd (9y, score 0)
Is that because you didn't read the rest of the post? "Temperature is in the mind" doesn't mean that you can make a cup of water boil just by wishing hard enough. It means that whether or not you should expect a cup of water to boil depends on what you know about it. (It also doesn't mean that whether an ice cube melts depends on whether anyone's watching. The ice cube does whatever the ice cube does in accordance with its initial conditions and the laws of mechanics.)
Lumifer (9y, score 0)
So now that you've told me what it does NOT mean, perhaps you can clarify what it DOES mean? I still don't understand. In particular, the phrase "in the mind" implies that temperature requires a mind and would not exist if there were no minds around. Given that we are talking about classical systems, this seems an unusual position to take. Another implication of "in the mind" is that different minds would see temperature differently. In fact, if you look into the original EY post, it explicitly says that "if you know the positions and momenta of every particle in a glass of water, it is at absolute zero temperature". And that makes me curious about phase changes. Can I freeze water into ice by knowing more about it? Note: not by doing things like separating molecules by energy and ending up with ice and electricity, but purely by knowing?

Probability is in the mind. Entropy is a function of probabilities, so entropy is in the mind. Temperature is a derivative of entropy, so temperature is in the mind.

If I plunge my hand into boiling water, I will get scalded. Will I still get scalded if I know the position and momentum of every particle involved? If so, what causes it? If not, where does this stop -- is everything in the mind?

ETA: I should have reread the discussion first, because there has been a substantial amount said about this very question. However, I'm not sure it has come to a conclusion.
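The chain "probability, therefore entropy, therefore temperature" can be made concrete with a toy calculation (a sketch; the 1024-microstate count is an arbitrary assumption). The Gibbs entropy is a function of the probabilities an observer assigns, so observers with different knowledge assign different entropies to the same cup of water:

```python
# Gibbs entropy S = -k_B * sum_i p_i ln(p_i), a function of the probability
# distribution an observer assigns over microstates (0 ln 0 taken as 0).
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    return k_B * sum(-p * math.log(p) for p in probs if p > 0)

W = 1024                              # accessible microstates (toy number)
ignorant = [1.0 / W] * W              # knows only the macrostate
omniscient = [1.0] + [0.0] * (W - 1)  # knows the exact microstate

print(gibbs_entropy(ignorant))    # k_B * ln(1024), the usual positive entropy
print(gibbs_entropy(omniscient))  # 0.0
```

Nothing physical differs between the two cases; only the observer's knowledge does. That is the sense in which the entropy, and the temperature derived from it, is "in the mind": the water's molecules scald your hand either way.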

dxu (9y, score 0)
Assuming that you plunge your hand into the water at a random point in time, yes, you will get scalded with probability ~1. This means that the water is "hot" in the same sense that the lottery is "fair" even if you know what the winning numbers will be--if you don't use that winning knowledge and instead just pick a series of random numbers, as you would if you didn't know the winning numbers, then of course you will still lose. I suppose if you are willing to call such a lottery "fair", then by that same criterion, the water is hot. However, if you use this criterion, I suspect a large number of people would disagree with you on what exactly it means for a lottery to be "fair". If, on the other hand, you would call a lottery in which you know the winning numbers "unfair", you should be equally willing to call water about which you know everything "cold".
Lumifer (9y, score 0)
Well, if I know the winning numbers but Alice doesn't, the lottery is "fair" for Alice. If I know everything about that cup of water, but Alice doesn't, is the water at zero Kelvin for me but still hot for Alice?
Richard_Kennaway (9y, score 1)
And will we both predict the same result when someone puts their hand in it?
Lumifer (9y, score -1)
Probably yes, but then I will have to say things like "Be careful about dipping your finger into that zero-Kelvin block of ice, it will scald you" X-)
spxtr (9y, score 0)
It won't be ice. Ice has a regular crystal structure, and if you know the microstate you know that the water molecules aren't in that structure.
Lumifer (9y, score 1)
So then temperature has nothing to do with phase changes?
gjm (9y, score 1)
Temperature in the thermodynamic sense (which is the same as the information-theoretic sense if you have only ordinary macroscopic information) is the same as average energy per molecule, which has a lot to do with phase changes for the obvious reason. In exotic cases where the information-theoretic and thermodynamic temperatures diverge, thermodynamic temperature still tells you about phase changes but information-theoretic temperature doesn't. (The thermodynamic temperature is still useful in these cases; I hope no one is claiming otherwise.)
spxtr (9y, score 1)
You probably know this, but average energy per molecule is not temperature at low temperatures. Quantum kicks in and that definition fails. dS/dE never lets you down.
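A toy model where "dS/dE never lets you down" but average kinetic energy is silent: N two-level systems, each with energy 0 or eps (a sketch; the unit choice k_B = eps = 1 and the population numbers are my own). There is no kinetic energy here at all, yet the entropic definition still yields a temperature, including the negative temperatures of a population inversion.

```python
# Temperature of N two-level systems from 1/T = dS/dE, where S = k_B ln W
# and W = C(N, n) is the number of ways to have n excited systems.
import math

k_B = 1.0  # work in units where k_B = 1
eps = 1.0  # energy of the excited level, in the same units

def entropy(n, N):
    """S = k_B * ln C(N, n), via log-gamma to avoid huge integers."""
    return k_B * (math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1))

def temperature(n, N):
    """T from the entropic definition, via a finite difference in E = n * eps."""
    dS = entropy(n + 1, N) - entropy(n, N)
    return eps / dS

N = 10**6
print(temperature(100_000, N))  # positive: mostly unexcited
print(temperature(900_000, N))  # negative: population inversion
```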
gjm (9y, score 0)
Whoops! Thanks for the correction.
Lumifer (9y, score 0)
Aha, thanks. Is information-theoretic temperature observer-specific?
gjm (9y, score 0)
In the sense I have in mind, yes.
dxu (9y, score 0)
I am somewhat amused that you linked to the same post on which we are currently commenting. Was that intentional?
gjm (9y, score 0)
Actually, no! There have been kinda-parallel discussions of entropy, information, probability, etc., here and in the Open Thread, and I hadn't been paying much attention to which one this was. Anyway, same post or no, it's as good a place as any to point someone to for a clarification of what notion of temperature I had in mind.
Richard_Kennaway (9y, score 0)
In the lottery, there is something I can do with foreknowledge of the numbers: bet on them. And with perfect knowledge of the microstate I can play Maxwell's demon to separate hot from cold. But still, I can predict from the microstate all of the phenomena of thermodynamics, and assign temperatures to all microstates that are close to equipartition (which I am guessing to be almost all of them). These temperatures will be the same as the temperatures assigned by someone ignorant of the microstate. This assignation of temperature is independent of the observer's knowledge of the microstate.

There is a peculiar consequence of this, pointed out by Cosma Shalizi. Suppose we have a deterministic physical system S, and we observe this system carefully over time. We are steadily gaining information about its microstates, and therefore by this definition, its entropy should be decreasing.

You might say, "the system isn't closed, because it is being observed." But consider the system "S plus the observer." Saying that entropy is nondecreasing over time seems to require that the observer is in doubt about its own microstates. What does that mean?
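Shalizi's bookkeeping can be sketched directly (the microstate count and number of bits are arbitrary assumptions): if W microstates remain consistent with an observer's knowledge, they assign S = k_B ln W, and each bit of observation halves W, lowering the assigned entropy by k_B ln 2.

```python
# Each bit of information gained about the microstate halves the number of
# states consistent with the observer's knowledge, so the entropy the
# observer assigns drops by k_B * ln(2) per bit.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def assigned_entropy(W):
    """Boltzmann entropy S = k_B ln W for W states consistent with knowledge."""
    return k_B * math.log(W)

W = 2**40            # microstates consistent with knowledge, before observing
bits_learned = 10
S_before = assigned_entropy(W)
S_after = assigned_entropy(W >> bits_learned)

bits_recovered = (S_before - S_after) / (k_B * math.log(2))
print(bits_recovered)  # ~10: the entropy drop, measured in bits
```

At roughly 10^-23 J/K per bit this is a negligible amount of entropy by macroscopic standards, which is why the effect never shows up in a lab; the conceptual puzzle about the observer's own microstate remains.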