Thanks much for the brainpower! Agreed that this is easier to think about in terms of classical thermodynamics with its continuous fluids; I'm just on a bit of a stubbornly fundamentalist kick (ex).
If it entertains you to continue chatting, I have a couple clarifying questions:
They are in an environment where the pressure is low enough that [the droplets] can vaporize [...] the system is a vapor+liquid saturated equilibrium.
"Saturated equilibrium" sounds at odds with "pressure low enough that the droplets can vaporize." Reconcile? (My best guess: you're saying that the droplets evaporate enough to establish an equilibrium partial pressure very quickly after the expansion valve.)
the refrigerant droplets vaporize completely because of heat transfer from the outside in the evaporator, not vaporizing THEN absorbing heat from the outside.
IIUC: you're saying that my diagram is incorrect in depicting the droplets vaporizing completely in the bulk of the gas; actually, the vaporization mostly (entirely?) occurs on the surface of the evaporator. Seems totally plausible. But, for heat to transfer from the exterior to the droplets, the droplets must be colder than the exterior; am I correct in identifying post-expansion-valve evaporative cooling as the reason the droplets are cold?
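(The picture in my head, as a back-of-the-envelope energy balance: I'm assuming the expansion valve is roughly isenthalpic, and the symbols $h_f$, $h_g$, $x$ for saturated-liquid enthalpy, saturated-vapor enthalpy, and vapor fraction are mine, not anything you wrote, so correct me if this is the wrong picture:)

$$h_{\text{liquid, in}}(T_1) \;\approx\; (1 - x)\,h_f(T_2) \;+\; x\,h_g(T_2), \qquad P_2 < P_1,\; T_2 = T_{\text{sat}}(P_2)$$

i.e. some fraction $x$ flashes to vapor across the valve, the latent heat for that comes out of the remaining liquid, and everything settles at the (lower) saturation temperature of the low-pressure side, so the droplets arrive at the evaporator already cold.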
Trying to synthesize your response into my stubbornly-statistical-mechanical model, here's my update:
(Tongue partly-but-not-entirely in cheek: if you painstakingly prepare a second box of gas with the exact same initial conditions as the first, it will have exactly the same temperature; corollary, if you painstakingly prepare a second person with the exact same initial conditions as the first, they will have exactly the same free will.)
Free will is like temperature: a useful tool for analyzing the behavior of certain systems which are too big and complicated to model in exact detail.
If you know the positions and velocities of every atom in a box of gas, with enough compute you can predict its future to arbitrary precision; does it "have a temperature"? Irrelevant! Technically yes, I guess, but it's sorta an epiphenomenon, screened off from reality by your exact knowledge of the initial conditions and your willingness to throw compute at your model. But if you're less-than-perfectly omniscient, it might be more convenient to consider the box as having a "temperature" and model it more abstractly.
Substitute "person+environment"/"free will" for "box of gas"/"temperature" and that's all still true.
I see! Thanks for the thoughtful response. I think my problem is caused by not having brought enough neuroscience and psychology textbooks to my armchair, leaving me in too-many-plausible-hypotheses-land, rather than your too-few-. I'll take another stab at this sequence if/when I collect more background knowledge!
I've read about half of this sequence, and it's certainly the most palatable, well-founded-seeming discussion of consciousness I've ever encountered.
But... I've kind of run aground on the question: how would I tell if this is true? (Or, you know, all models are false etc., but how would I tell if this is useful?)
Three examples of how a theory can be useful: "Hey, I came up with this new theory of blurtzian phenomena! ...
This sequence doesn't feel like (1) or (2) to me. Is it (3), or something else?
Heuristic: distrust any claim that's much memetically fitter than its retraction would be. (Examples: "don't take your vitamins with {food}, because it messes with {nutrient} uptake"; "Minnesota is much more humid than prior years because of global-warming-induced corn sweat"; "sharks are older than trees"; "the Great Wall of China is visible from LEO with the naked eye")
It sounds like you're assuming you have access to some "true" probability for each event; do I misunderstand? How would I determine the "true" probability of e.g. Harris winning the 2028 US presidency? Is it 0/1 depending on the ultimate outcome?
(Hmm. Come to think of it, if the y-axis were in logits, the error bars might be ill-defined, since "all the predictions come true" would correspond to +inf logits.)
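(Concretely, the problem is just the logit itself:

$$\operatorname{logit}(p) \;=\; \ln\frac{p}{1-p} \;\longrightarrow\; +\infty \quad \text{as } p \to 1,$$

so a bin where every prediction came true has no finite y-coordinate to hang an error bar on.)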
Ah-- I took every prediction with p<0.50 and flipped 'em, so that every prediction had p>=0.50, since I liked the suggestion "to represent the symmetry of predicting likely things will happen vs unlikely things won't."
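(In code terms, the "flip" is just this; the `(probability, came_true)` pairs are a made-up data shape for illustration, not how my actual spreadsheet is laid out:)

```python
def fold_predictions(predictions):
    """Map each (p, came_true) pair so that all probabilities are >= 0.5.

    A prediction "X with probability p" is the same claim as
    "not-X with probability 1 - p", so flipping both the probability
    and the recorded outcome doesn't change the information content.
    """
    folded = []
    for p, came_true in predictions:
        if p < 0.5:
            folded.append((1.0 - p, not came_true))
        else:
            folded.append((p, came_true))
    return folded

# e.g. "20% chance it rains" and it didn't rain
# becomes "80% chance it doesn't rain" and that came true.
print(fold_predictions([(0.2, False), (0.7, True)]))
# [(0.8, True), (0.7, True)]
```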
Thanks for the close attention!
Oh, that's very interesting. I'll have to look into that more. (And learn more about Joule-Thomson, which I know... of... but was hoping I could ignore.)
Thanks again for the brain!