All of spxtr's Comments + Replies

This is very much babble, but try to keep it to legal and mostly moral suggestions (though with less strong constraints on the latter).

I downvoted, because

  • Becoming religious
  • Having kids early in life
  • Having many kids

... are not low status fun, but long-term life decisions that should not be taken lightly.

  • Extract as much information as possible from people in conversations, never offer any yourself (being a detective in real life)
  • Asking out many people in your social circle
    • Even if they're in relationships (?)

... are just "be rude to friends," which I consider... (read more)

Thank you for making this gradient way more legible! (Just FYI, I upvoted your comment.) I disagree that {becoming religious, child number/earliness} are against the spirit of the question. I believe that extracting as much information as possible can be done non-rudely, while asking out people is rude. Not sure whether rudeness is immoral. Agree on the illegality, although that really depends on the country/type of sex work.

I had the same experience. I was essentially going to say "meta is only useful insofar as it helps you and others do object-level things, so focus on building object-level things..." oh.

This whole post seems to mostly be answering "who has the best ethnic restaurants in Europe/America?" along with "which country has the best variety of good restaurants?" and not "who has the best food?" I think that's an important distinction. Clearly, Indian, Chinese, and Middle Eastern foods are the best.

I haven't heard of ECL before, so I'm sorry if this comes off as naive, but I'm getting stuck on the intro.

For one, I assume that you care about what happens outside our light cone. But more strongly, I’m looking at values with the following property: If you could have a sufficiently large impact outside our lightcone, then the value of taking different actions would be dominated by the impact that those actions had outside our lightcone.

The laws of physics as we know them state that we cannot have any impact outside our light cone. Does ECL (or this post)... (read more)

6 · Lukas Finnveden · 6mo
You might want to check out the paper and summary that explains ECL, that I linked. In particular, this section of the summary has a very brief introduction to non-causal decision theory, and motivating evidential decision theory is a significant focus in the first couple of sections of the paper.

No, you're right: use 2 or 3 instead of 4 as an average dielectric constant. The document you linked cites a source which gives measured resistances and capacitances for the various layers. For Intel's 14 nm process making use of low-k, ultra-low-k dielectrics, and air gaps, they show numbers down to 0.15 fF/micron, about 15 times higher than .

I remember learning that aspect ratio and dielectric constant alone don't suffice to explain the high capacitances of interconnects. Instead, you have to include fri... (read more)

2 · Steven Byrnes · 8mo
I just edited the text, thanks.

This is an excellent writeup.


Minor nit, your assertion of  is too simple imo, even for a Fermi estimate. At the very least, include a factor of 4 for the dielectric constant of SiO2, and iirc in real interconnects there is a relatively high "minimum" from fringing fields. I can try to find a source for that later tonight, but I would expect it ends up significantly more than . This will actually make your estimate agree even better with Jacob's.

5 · Steven Byrnes · 8mo
This page suggests that people have stopped using SiO2 as the “interlayer dielectric” in favor of (slightly) lower-dielectric constant materials, and also that Intel has a process for using air gaps for at least some of the interconnect layers, I think? Looking at images like this, yeah there do seem to be lots of pretty narrow gaps. I am very open-minded to editing the central estimate of what is feasible. It sounds like you know more about this topic than me.

Active copper cable at 0.5 W for 40G over 15 meters is ~8⋅10⁻²² J/nm (0.5 W / 40 Gbps / 15 m), assuming it actually hits 40G at the max length of 15 m.

I can't access the linked article, but an active cable is not simple to model because its listed power includes the active components. We are interested in the loss within the wire between the active components.

This source has specs for a passive copper wire capable of up to 40G @5m using <1W, which works out to ~5⋅10⁻²¹ J/nm, or a bit less.

They write <1 W for every length of wire, so all you can say is <5 fJ/mm. You don't know how... (read more)
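The back-of-envelope numbers in this exchange can be reproduced directly. This is a sketch; the power, bitrate, and length figures are the ones quoted above, and the passive-cable number is only an upper bound since the spec just says <1 W.

```python
# Energy per bit per unit length for the two cables discussed above.

def energy_per_bit_per_nm(power_w, bitrate_bps, length_m):
    """J per bit per nanometre of cable."""
    joules_per_bit = power_w / bitrate_bps
    return joules_per_bit / (length_m * 1e9)  # 1 m = 1e9 nm

active = energy_per_bit_per_nm(0.5, 40e9, 15)   # ~8.3e-22 J/nm
passive = energy_per_bit_per_nm(1.0, 40e9, 5)   # ~5.0e-21 J/nm, upper bound
print(f"active:  {active:.2e} J/bit/nm")
print(f"passive: {passive:.2e} J/bit/nm (upper bound, since <1 W)")
```

Note that 5⋅10⁻²¹ J/nm is the same number as the 5 fJ/mm upper bound mentioned in the reply.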

Indeed, the theoretical lower bound is very, very low.

Do you think this is actually achievable with a good enough sensor if we used this exact cable for information transmission, but simply used very low input energies?

The minimum is set by the sensor resolution and noise. A nice oscilloscope, for instance, will have, say, 12 bits of voltage resolution and something like 10 V full scale, so ~2 mV minimum voltage. If you measure across a 50 Ohm load then the minimum received power you can see is (2 mV)²/50 Ω ≈ 8⋅10⁻⁸ W. This is an underestimate, but t... (read more)
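A minimal sketch of that noise-floor estimate, using the ~2 mV resolution and 50 Ohm load figures above:

```python
# Minimum received power visible on the oscilloscope described above.
v_min = 2e-3   # V, ~one least-significant bit on a 12-bit, 10 V scope
r_load = 50.0  # Ohm
p_min = v_min ** 2 / r_load
print(f"{p_min:.0e} W")  # ~8e-08 W
```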

Maybe I'm interpreting these energies in a wrong way and we could violate Jacob's postulated bounds by taking an Ethernet cable and transmitting 40 Gbps of information at a long distance, but I doubt that would actually work.

Ethernet cables are twisted pair and will probably never be able to go that fast. You can get above 10 GHz with rigid coax cables, although you still have significant attenuation.

Let's compute heat loss in a 100 m LDF5-50A, which evidently has 10.9 dB/100 m attenuation at 5 GHz. This is very low in my experience, but it's what they cla... (read more)
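Converting the quoted attenuation into a dissipated-power fraction is a one-liner; this sketch uses only the 10.9 dB/100 m figure claimed for the LDF5-50A:

```python
# Fraction of input power dissipated in the cable for a given attenuation.

def fraction_lost(atten_db):
    transmitted = 10 ** (-atten_db / 10)  # dB -> power ratio
    return 1 - transmitted

print(f"{fraction_lost(10.9):.1%} of the power is lost over 100 m")
```

So roughly 92% of the input power ends up as heat in the cable over 100 m at 5 GHz.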

In the original article I discuss interconnect wire energy, not a "theoretical lower bound" for any wire energy communication method - and immediately point out reversible communication methods (optical, superconducting) that do not dissipate the wire energy. Coax cable devices seem to use around 1 to 5 fJ/bit/mm at a few W of power, or a few OOM more than your model predicts here - so I'm curious what you think that discrepancy is, without necessarily disagreeing with the model. I describe a simple model of wire bit energy for EM wave transmission in coax cable here which seems physically correct but also predicts a bit energy distance range somewhat below observed.
2 · Ege Erdil · 9mo
I think this calculation is fairly convincing pending an answer from Jacob. You should have probably just put this calculation at the top of the thread, and then the back-and-forth would probably not have been necessary. The key parameter that is needed here is the estimate of a realistic attenuation rate for a coaxial cable, which was missing from DaemonicSigil's original calculation that was purely information-theoretic.

As an additional note here, if we take the same setup you're using, then if you take the energy input x to be a free parameter, then the energy per bit per distance is given by

f(x) = 0.906x / (5⋅10¹⁴ ⋅ log₂(1 + 0.094x / (2.5⋅10⁻¹¹)))

in units of J/bit/mm. This does not have a global optimum for x > 0 because it's strictly increasing, but we can take a limit to get the theoretical lower bound

lim_{x→0} f(x) = 3.34⋅10⁻²⁵,

which is much lower than what you calculated, though to achieve this you would be sending information very slowly - indeed, infinitely slowly in the limit of x→0.
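The expression can be checked numerically; this sketch uses the constants exactly as given in the comment above:

```python
import math

def f(x):
    """Energy per bit per distance (J/bit/mm) as a function of input energy x."""
    return 0.906 * x / (5e14 * math.log2(1 + 0.094 * x / 2.5e-11))

# f is strictly increasing for x > 0, so the infimum is the x -> 0 limit:
for x in (1e-8, 1e-12, 1e-16, 1e-20):
    print(f"{x:.0e}: {f(x):.3e} J/bit/mm")
```

At x = 10⁻²⁰ the value has already converged to ~3.34⋅10⁻²⁵ J/bit/mm, matching the stated limit.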

Ah, I was definitely unclear in the previous comment. I'll try to rephrase.

When you complete a circuit, say containing a battery, a wire, and a light bulb, a complicated dance has to happen for the light bulb to turn on. At near the speed of light, electric and magnetic fields around the wire carry energy to the light bulb. At the same time, the voltage throughout the wire establishes itself at the values you would expect from Ohm's law and Kirchhoff's rules and such. At the same time, electrons throughout the wire begin to feel a small force from an e... (read more)

3 · Ege Erdil · 9mo
I'm aware of all of this already, but as I said, there seems to be a fairly large gap between this kind of informal explanation of what happens and the actual wire energies that we seem to be able to achieve. Maybe I'm interpreting these energies in a wrong way and we could violate Jacob's postulated bounds by taking an Ethernet cable and transmitting 40 Gbps of information at a long distance, but I doubt that would actually work. I'm in a strange situation because while I agree with you that the tile model of a wire is unphysical and very strange, at the same time it seems to me intuitively that if you tried to violate Jacob's bounds by many orders of magnitude, something would go wrong and you wouldn't be able to do it. If someone presented a toy model which explained why in practice we can get wire energies down to a certain amount that is predicted by the model while in theory we could lower them by much more, I think that would be quite persuasive.

The part your calculation fails to address is what happens if we attempt to drive this transmission by moving electrons around inside a wire made of an ordinary resistive material such as copper.

I have a number floating around in my head. I'm not sure if it's right, but I think that at GHz frequencies, electrons in typical wires are moving sub picometer distances (possibly even femtometers?) per clock cycle.

The underlying intuition is that electron charge is "high" in some sense, so that 1. adding or removing a small number of electrons corresponds to a hu... (read more)

2 · Ege Erdil · 9mo
The absolute speed of conduction band electrons inside a typical wire should be around 1e6 m/s at room temperature. At GHz frequencies, the electrons are therefore moving distances comparable to 1 mm per clock cycle. If you look at the average velocity, i.e. the drift velocity, then that's of course much slower and the electrons will be moving much more slowly in the wire - the distances you quote should be of the right order of magnitude in this case. But it's not clear why the drift velocity of electrons is what matters here. By Maxwell, you only care about electron velocity on the average insofar as you're concerned with the effects on the EM field, but actually, the electrons are moving much faster so could be colliding with a lot of random things and losing energy in the process. It's this effect that has to be bounded, and I don't think we can actually bound it by a naive calculation that assumes the classical Drude model or something like that. If someone worked all of this out in a rigorous analysis I could be convinced, but your reasoning is too informal for me to really believe it.
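A rough sketch of the two distance scales being contrasted here. The ~10⁶ m/s electron speed is from the comment above; the current and wire cross-section used for the drift velocity are illustrative assumptions, not numbers from the thread:

```python
e_charge = 1.602e-19  # C
n_cu = 8.5e28         # conduction electrons per m^3 in copper
v_fermi = 1.0e6       # m/s, order-of-magnitude electron speed in a metal

# Illustrative assumptions: 1 mA through a 1 um^2 wire.
current = 1e-3        # A
area = 1e-12          # m^2
v_drift = current / (n_cu * e_charge * area)  # ~0.07 m/s

t_cycle = 1e-9        # one clock cycle at 1 GHz
print(f"individual-electron distance per cycle: {v_fermi * t_cycle * 1e3:.1f} mm")
print(f"drift distance per cycle: {v_drift * t_cycle * 1e12:.0f} pm")
```

The individual electrons cover ~1 mm per cycle while the drift motion covers tens of picometers, which is the gap between the two pictures being argued about.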

I come more from the physics side and less from the EE side, so for me it would be Datta's "Electronic Transport in Mesoscopic Systems", assuming the standard solid state books survive (Kittel, Ashcroft & Mermin, L&L stat mech, etc). For something closer to EE, I would say "Principles of Semiconductor Devices" by Zeghbroeck because it is what I have used and it was good, but I know less about that landscape.

Hi Alexander,

I would be happy to discuss the physics related to the topic with others. I don't want to keep repeating the same argument endlessly, however.

Note that it appears that EY had a similar experience of repeatedly not having their point addressed:

I'm confused at how somebody ends up calculating that a brain - where each synaptic spike is transmitted by ~10,000 neurotransmitter molecules (according to a quick online check), which then get pumped back out of the membrane and taken back up by the synapse; and the impulse is then shepherded along cell

... (read more)

It depends on your background in physics.

For the theory of sending information across wires, I don't think there is any better source than Shannon's "A Mathematical Theory of Communication."

I'm not aware of any self-contained sources that are enough to understand the physics of electronics. You need to have a very solid grasp of E&M, the basics of solid state, and at least a small amount of QM. These subjects can be pretty unintuitive. As an example of the nuance even in classical E&M, and an explanation of why I keep insisting that "signals do not... (read more)

4 · Adele Lopez · 9mo
What is the most insightful textbook about nanoelectronics you know of, regardless of how difficult it may be? Or for another question trying to get at the same thing: if only one book about nanoelectronics were to be preserved (but standard physics books would all be fine still), which one would you want it to be? (I would be happy with a pair of books too, if that's an easier question to answer.)

This is the right idea, but in these circuits there are quite a few more noise sources than Johnson noise. So, it won't be as straightforward to analyze, but you'll still end up with essentially a relatively small (compared to L/nm) constant times kT.
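As a sketch of the one noise source that is easy to write down, here is the Johnson-noise floor for an assumed 50 Ohm load at room temperature over a 40 GHz bandwidth (illustrative values, not from the thread):

```python
import math

k_b = 1.38e-23  # J/K, Boltzmann constant
temp = 300.0    # K, room temperature
r = 50.0        # Ohm, assumed load
bw = 40e9       # Hz, assumed bandwidth

v_noise = math.sqrt(4 * k_b * temp * r * bw)  # open-circuit RMS noise voltage
p_noise = k_b * temp * bw                     # available noise power, kT*B
print(f"{v_noise * 1e3:.2f} mV rms, {p_noise:.2e} W")
```

The available noise power is just kT per unit bandwidth, which is where the "small constant times kT" per bit comes from.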

Ok, I will disengage. I don't think there is a plausible way for me to convince you that your model is unphysical.

I know that you disagree with what I am saying, but from my perspective, yours is a crackpot theory. I typically avoid arguing with crackpots, because the arguments always proceed basically how this one did. However, because of apparent interest from others, as well as the fact that nanoelectronics is literally my field of study, I engaged. In this case, it was a mistake.

Sorry for wasting our time.

7 · the gears to ascension · 9mo
If this is your field but also you don't have the mood for pedagogy when someone from another field has strong opinions, which is emotionally understandable, I'm curious what learning material you'd recommend working through to find your claims obvious; is a whole degree needed? Are there individual textbooks or classes or even individual lectures?
6 · Ege Erdil · 9mo
I strongly disapprove of your attitude in this thread. You haven't provided any convincing explanation of what's wrong with Jacob's model beyond saying "it's unphysical". I agree that the model is very suspicious and in some sense doesn't look like it should work, but at the same time, I think there's obviously more to the agreement between his numbers and the numbers in the literature than you're giving credit for. Your claim that there's no fundamental bound on information transmission that relies on resistive materials of the form energy/bit/length (where the length scale could depend on the material in ways Jacob has already discussed) is unsupported and doesn't seem like it rests on any serious analysis. You can't blame Jacob for not engaging with your arguments because you haven't made any arguments. You've just said that his model is unphysical, which I agree with and presumably he would also agree with to some extent. However, by itself, that's not enough to show that there is no bound on information transmission which roughly has the form Jacob is talking about, and perhaps for reasons that are not too dissimilar from the ones he's conjectured.

Dear spxtr,

Things got heated here. I and many others are grateful for your effort to share your expertise. Is there a way in which you would feel comfortable continuing to engage?

Remember that for the purposes of the prize pool there is no need to convince Cannell that you are right. In fact, I will not judge veracity at all, just contribution to the debate (on which metric you're doing great!).

Dear Jake,

This is the second person in this thread that has explicitly signalled the need to disengage. I also realize this is a charged topic and it's easy for it to get heated when you're just honestly trying to engage.

Best, Alexander

Please respond to the meat of the argument.

  1. Resistive heat loss is not the same as heat loss from Landauer's principle. (you agree!)
  2. The Landauer limit is an energy loss per bit flip, with units energy/bit. This is the thermodynamic minimum (with irreversible computing). It is extremely small and difficult to measure. It is unphysical to divide it by 1 nm to model an interconnect, because signals do not propagate through wires by hopping from electron to electron.
  3. The Cavin/Zhirnov paper you cite does not concern the Landauer principle. It models ordinary dis
... (read more)
I'm really not sure what your argument is if this is the meat, and moreover don't really feel morally obligated to respond given that you have not yet acknowledged that my model already made roughly correct predictions and that Byrnes's model of wire heating under passive current load is way off theoretically and practically. Interconnect wire energy comes from charging and discharging (1/2)CV² capacitance energy, not resistive loss for passive constant (unmodulated) current flow.

  1. The Landauer limit connects energy to probability of state transitions, and is more general than erasure. Reversible computations still require energies that are multiples of this bound for reliability. It is completely irrelevant how signals propagate through the medium - whether by charging wire capacitance as in RC interconnect, or through changes in drift velocity, or phonons, or whatever. As long as the medium has thermal noise, the Landauer/Boltzmann relationship applies.
  2. Cavin/Zhirnov absolutely cite and use the Landauer principle for bit energy.
  3. I make no such claim, as I'm not using a "modified Landauer energy".
  4. I'm not making any claims of novel physics or anything that disagrees with known wire equations. Comments like this suggest you don't have a good model of my model.

The actual power usage of actual devices is a known hard fact, and coax cable communication devices have actual power usage within the range my model predicted - that is a fact. You can obviously use the wire equations (correctly) to precisely model that power use (or heat loss)! But I am more concerned with the higher-level general question of why both human engineering and biology - two very separate long-running optimization processes - converged on essentially the same wire bit energy.
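The capacitive switching energy Jacob refers to can be sketched with the 0.15 fF/micron wire capacitance quoted earlier in the thread and an assumed 1 V logic swing (the swing is my assumption, not a number from the thread):

```python
# (1/2) C V^2 switching energy per bit for an interconnect, per mm of wire.
c_per_um = 0.15e-15  # F per micron, the Intel 14 nm figure quoted earlier
v_swing = 1.0        # V, assumed logic swing

e_per_um = 0.5 * c_per_um * v_swing ** 2  # J per bit per micron
e_per_mm = e_per_um * 1e3                 # J per bit per mm
print(f"{e_per_mm:.1e} J/bit/mm")
```

This gives ~75 fJ/bit/mm, order of magnitude only, for on-chip interconnect; coax cables have far lower capacitance per length, hence the lower numbers discussed above.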

1 nm is somewhat arbitrary but around that scale is a sensible estimate for minimal single electron device spacing ala Cavin/Zhirnov. If you haven’t actually read those refs you should - as they justify that scale and the tile model.

They use this model to figure out how to pack devices within a given area and estimate their heat loss. It is true that heating of a wire is best described with a resistivity (or parasitic capacitance) that scales as 1/L. If you want to build a model out of tiles, each of which is a few nm on a side (because the FETs are roughl... (read more)

Of course - as I pointed out in my reply here.

False. I never at any point modeled the resistive heat/power loss for flowing current through a wire sans communication. It was Byrnes who calculated the resistive loss for a coax cable, and got a somewhat wrong result (for wire communication bit energy cost), whereas the tile model (using mean free path for larger wires) somehow outputs the correct values for actual coax cable communication energy use as shown here.

I don’t think the thermal de Broglie wavelength is at all relevant in this context, nor the mean free path, and instead I’m trying to shift discussion to “how wires work”.

This is the crux of it. I made the same comment here before seeing this comment chain.

People have been sending binary information over wires since 1840, right? I don’t buy that there are important formulas related to electrical noise that are not captured by the textbook formulas. It’s an extremely mature field.

Also a valid point. @jacob_cannell is making a strong claim: that the energy l... (read more)

For what it's worth, I think both sides of this debate appear strangely overconfident in claims that seem quite nontrivial to me. When even properly interpreting the Landauer bound is challenging due to a lack of good understanding of the foundations of thermodynamics, it seems like you should be keeping a more open mind before seeing experimental results.

At this point, I think the remarkable agreement between the wire energies calculated by Jacob and the actual wire energies reported in the literature is too good to be a coincidence. However, I suspect th... (read more)

1 nm is somewhat arbitrary but around that scale is a sensible estimate for minimal single electron device spacing ala Cavin/Zhirnov. If you haven't actually read those refs you should - as they justify that scale and the tile model.

This is just false, unless you are claiming you have found some error in the Cavin/Zhirnov papers. It's also false in the sense that the model makes reasonable predictions. I'll just finish my follow-up post, but using the mean free path as the approx scale does make sense for larger wires and leads to fairly good predictions for a wide variety of wires, from on-chip interconnect to coax cable Ethernet to axon signal conduction.

The post is making somewhat outlandish claims about thermodynamics. My initial response was along the lines of "of course this is wrong. Moving on." I gave it another look today. In one of the first sections I found (what I think is) a crucial mistake. As such, I didn't read the rest. I assume it is also wrong.

The original post said:

A non-superconducting electronic wire (or axon) dissipates energy according to the same Landauer limit per minimal wire element. Thus we can estimate a bound on wire energy based on the minimal assumption of 1 minimal energy un

... (read more)

I suspect my experience is somewhat similar to shminux's.

I simply can't follow these posts, and the experience of reading them feels odd, and even off-putting at times (in an uncanny valley sort of way). At the same time, I can see that a number of people in the comments are saying that they find great value in them.

My first guess as to why I had trouble with them was that there are basically no concrete examples given, but now I don't think that's the reason. Personally, I get a strong sense of "I must be making some sort of typical mind fallacy" here. So... (read more)

Visual Information Theory. I was already comfortable with information theory and this was still informative. This blogger's other posts are similarly high-quality.

In the end, it is just another Abrams movie: slick, SFX-heavy, and as substantial & satisfying as movie theater popcorn.


You might want to add a spoiler note at the top, though.

It might be wishful thinking, but I feel like my smash experience improved my meatspace-agency as well.

Story time! Shortly after Brawl came out, I got pretty good at it. I could beat all my friends without much effort, so I decided to enter a local tournament. In my first round I went up against the best player in my state, and I managed to hit him once, lightly, over the course of two games. We later became pretty good friends and practiced together regularly.

At some point I completely eclipsed my non-competitive friends, to the extent that playing with them felt like a chore. All I had to do was put them in certain situations where I knew how they would... (read more)

To a lesser degree, I feel the same is happening to me in go. I regularly play against GnuGo, and at lower levels (that is, when the CPU has more handicap) I can strongly feel where it's going and beat it pretty solidly. At the same time, when confronted without handicap, all I can manage is a tie; it feels a lot more unpredictable.
Did you learn from it? Improved your Brawl-agency?

An exact copy of me may be "me" from an identity perspective, but it is a separate entity from a utilitarian perspective. The death of one is still a tragedy, even if the other survives.

You should know this intuitively. If a rogue trolley is careening toward an unsuspecting birthday cake, you'll snatch it out of the way. You won't just say, "eh, in another time that cake will survive," and then watch it squish. Unless you're some sort of monster.

Suppose some wizard casts a spell to make your neurons twice as thick, with everything still functioning normally. Suppose furthermore that for some reason the counterspell involves being hit by a trolley. Did the trolley kill someone? Now ask the same question where the neurons are duplicated and each neuron is replaced by a pair of two running in parallel. Now ask the same question where the parallel sets are running in physically separated brains....

I am impressed. The production quality on this is excellent, and the new introduction by Rob Bensinger is approachable for new readers. I will definitely be recommending this over the version on this site.

I didn't want to tell it to you before because I thought it might prejudice your decision unfairly.

If Draco has had the last half-hour of his memory sealed off, then why does Harry say these words to him? Shouldn't Draco respond, "What decision?"

Unless it's a more nuanced memory charm, such that he only subconsciously remembers the conversation.

Right, presumably the spell seals memories, but the associated emotions manage to color the actions, hence "Draco ignored him". This would not be surprising, given that conscious memories are mostly System 2, while emotions are mostly System 1. The memory sealing spell only interrupts the conscious retrieval pathways.

If you have a different version of QM (perhaps what Ted Bunn has called a “disappearing-world” interpretation), it must somehow differ from MWI, presumably by either changing the above postulates or adding to them. And in that case, if your theory is well-posed, we can very readily test those proposed changes. In a dynamical-collapse theory, for example, the wave function does not simply evolve according to the Schrödinger equation; it occasionally collapses (duh) in a nonlinear and possibly stochastic fashion. And we can absolutely look for experimental

... (read more)
I don't have the expertise to evaluate it, but Brian Greene suggests this experiment.
Yes, dynamical collapse appears to make new falsifiable predictions. MWI doesn't, unless you take Deutsch's reversible quantum consciousness seriously.

I recommend reading the sequences, if you haven't already. In particular, the fun theory sequence discusses exactly these issues.

Upvoted for suggesting a specific sequence.
Thanks. Interesting. I think one issue is maintaining the will to live indefinitely: not getting tired of life, of cumulative stress - basically anti-depression. I think it would be more useful to focus on fixing that, and when everybody totally wants to live on and on and on, then that generates motivation to throw more resources at anti-aging. Without that, there is less motivation, as people who are not very happy, like myself, will not support it vigorously.

Senescence is an acceptable, honorable, non-shameful way of slow suicide, suitable if you are only lightly depressed. You can get old and die without ever having to admit you are depressed or you want to die. If life is extended, you basically either have to endure it longer, or have to own up, admit defeat, admit you failed at life, and choose suicide. Neither is a very attractive option. This is why I recommend fixing the will to live, i.e. light depression, first.

Fixing the will to live is not going to be easy because it goes against the logic of evolution (which does not care if you are depressed or dead after you have reproduced) and much of human biology. You essentially want to keep people in a constant "expecting rewards" mindset, biologically speaking - a looking-forward-to-tomorrow-because-something-cool-will-happen mindset. However, it is likely that would lead to some kind of burnout, like dopamine receptors becoming desensitized from over-use or something like that. Fixing this would be a major brain rewiring.

This is a little misleading. Feynman diagrams are simple, sure, but they represent difficult calculations that weren't understood at the time he invented them. There was certainly genius involved, not just perseverance.

Much more likely his IQ result was unreliable, as gwern thinks.

Feynman was younger than 15 when he took it, and very near this factoid in Gleick's bio, he recounts Feynman asking about very basic algebra (2^x=4) and wondering why anyone found it hard - the IQ is mentioned immediately before the section on 'grammar school', or middle school, implying that the 'school IQ test' was done well before he entered high school, putting him at much younger than 15. (15 is important because Feynman had mastered calculus by age 15, Gleick says, so he wouldn't be asking his father why algebra is useful at age >15.) - Given t

... (read more)

The idea is that it's not specifically for quotes related to rationality or other LessWrong topics.

Then you put the tea and the water in thermal contact. Now, for every possible microstate of the glass of water, the combined system evolves to a single final microstate (only one, because you know the exact state of the tea).

After you put the glass of water in contact with the cup of tea, you will quickly become uncertain about the state of the tea. In order to still know the microstate, you need to be fed more information.

If you have a Boltzmann distribution, you still know all the microstates - you just have a probability distribution over them. Time evolution in contact with a zero-entropy object moves probability from one microstate to another in a predictable way, with neither compression nor spreading of the probability distribution. Sure, this requires obscene amounts of processing power to keep track of, but not particularly more than it took to play Maxwell's demon with a known cup of tea.

That's probably it. When fitting a line using MCMC you'll get an anticorrelated blob of probabilities for slope and intercept, and if you plot one deviation in the fit parameters you get something that looks like this. I'd guess this is a non-parametric analogue of that. Notice how both grow significantly at the edges of the plots.

Quantum mysticism written on a well-known and terrible MRA blog? -8 seems high. See the quantum sequence if you haven't already. It looks like advancedatheist and ZankerH got some buddies to upvote all of their comments, though. They all jumped by ~12 in the last couple hours.

For real, though, this is actually useless and deserves a very low score.

Super-resolution microscopy is an interesting recent development that won the Nobel Prize in Chemistry last year. Here's another article on the subject. It has been used to image mouse brains, but only near the surface. It won't be able to view the interior of any brain, but still interesting.

What's the shaded area in the very first plot? Usually this area is one deviation around the fit line, but here it's clearly way too small to be that.

I don't have a good answer to that. The curve was generated using LOESS, which I haven't studied, and I assume that the shaded area has an interpretation in that framework.

You probably know this, but average energy per molecule is not temperature at low temperatures. Quantum kicks in and that definition fails. dS/dE never lets you down.

Whoops! Thanks for the correction.
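The dS/dE prescription can be illustrated on a toy system of independent two-level spins; the level spacing and spin counts below are arbitrary illustrative choices:

```python
import math

k_b = 1.38e-23   # J/K, Boltzmann constant
eps = 1e-21      # J, assumed level spacing (illustrative)
N = 10**6        # number of two-level spins (illustrative)

def entropy(n):
    """S = k ln C(N, n), via the Stirling approximation."""
    return k_b * (N * math.log(N) - n * math.log(n)
                  - (N - n) * math.log(N - n))

n = 10**5                     # excited spins, so E = n * eps
dn = 1
# 1/T = dS/dE, evaluated as a finite difference:
beta = (entropy(n + dn) - entropy(n)) / (dn * eps)
t_numeric = 1.0 / beta

# Analytic result for comparison: T = eps / (k ln((N-n)/n))
t_analytic = eps / (k_b * math.log((N - n) / n))
print(f"{t_numeric:.2f} K vs {t_analytic:.2f} K")
```

The finite-difference dS/dE reproduces the analytic temperature to a few parts in a million, with no reference to "average energy per molecule".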

It won't be ice. Ice has a regular crystal structure, and if you know the microstate you know that the water molecules aren't in that structure.

So then temperature has nothing to do with phase changes?

Expanding on the billiard ball example: lets say one part of the wall of the pool table adds some noise to the trajectory of the balls that bounce off of that spot, but doesn't sap energy from them on average. After a while we won't know the exact positions of the balls at an arbitrary time given only their initial positions and momenta. That is, entropy has entered our system through that part of the wall. I know this language makes it sound like entropy is in the system, flowing about, but if we knew the exact shape of the wall at that spot then it would... (read more)

THank you. The thing that leaps out at me is that the rhetorical equation in that article between the sexiness of a woman being in the mind and the probability of two male children being in the mind is bogus. I look at a woman and think she is sexy. If I assume the sexiness is in the woman, and that an alien creature would think she is sexy, or my wife would think she is sexy, because they would see the sexiness in her, then the article claims I have been guilty of the mind projection fallacy because the woman's sexiness is in my mind, not in the woman. The article then proceeds to enumerate a few situations in which I am given incomplete information about reality and each different scenario corresponds to a different estimate that a person has two boy children. BUT... it seems to me, and I would love to know if Eliezer himself would agree, even an alien given the same partial information would, if it were rational and intelligent, reach the same conclusions about the probabilities involved! So... probability, even Bayesian probability based on uncertainty is no more or less in my head than is 1+1=2. 1+1=2 whether I am an Alien mind or a Human mind, unlike that woman is sexy which may only be true in heterosexual male, homosexual female, and bisexual human minds, but not Alien minds. But be that as it may, your comment still ignores the entire discussion, which is is Entropy and more or less "real" than Energy? The fact is that Aliens who had steam engines, internal combustion engines, gas turbines, and air conditioners would almost certainly have thermodynamics, and understand entropy, and agree with Humans on the laws of thermodynamics and the trajectories of entropy in the various machines. If Bayesian probability is in the mind, and Entropy is in the mind, then they are like 1+1=2 being in the mind, things which would be in the mind of anything which we considered rational or intelligent. They would NOT be like "sexiness."

An easy toy system is a collection of perfect billiard balls on a perfect pool table, that is, one without rolling friction and where all collisions conserve energy. For a few billiard balls it would be quite easy to extract all of their energy as work if you know their initial positions and velocities. There are plenty of ways to do it, and it's fun to think of them. This means they are at 0 temperature.

If you don't know the microstate, but you do know the sum of the square of their velocities, which is a constant in all collisions, you can still tell som... (read more)
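The "sum of the square of their velocities" point can be sketched numerically: by equipartition on a 2D table, each of the two velocity components carries k_B T/2 on average, so T = m·⟨v²⟩/(2 k_B). The particle mass and the Maxwell-distributed sample below are made-up illustrative assumptions, not values from the comment:

```python
import random

# Toy temperature estimate for particles on a 2D "pool table".
# Equipartition: <m v^2 / 2> = k_B T (two degrees of freedom, k_B T/2 each),
# so T = m * mean(v^2) / (2 k_B). Mass and sample are illustrative assumptions.

kB = 1.380649e-23  # J/K
m = 1.0e-25        # kg, a made-up "billiard molecule" mass

def temperature_2d(velocities, m):
    """Temperature inferred from knowing only the mean squared speed."""
    mean_v2 = sum(vx * vx + vy * vy for vx, vy in velocities) / len(velocities)
    return m * mean_v2 / (2 * kB)

# Draw a thermal (Maxwell) sample at a known temperature and recover it.
random.seed(0)
T_true = 300.0
sigma = (kB * T_true / m) ** 0.5  # per-component thermal speed
vels = [(random.gauss(0, sigma), random.gauss(0, sigma)) for _ in range(20000)]
T_est = temperature_2d(vels, m)
```

This is the sense in which knowing only ⟨v²⟩ still "tells you something": it pins down the temperature even though the microstate is unknown.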

Entropy is in the mind in exactly the same sense that probability is in the mind. See the relevant Sequence post if you don't know what that means.

The usual ideal gas model is that collisions are perfectly elastic, so even if you do factor in collisions they don't actually change anything. Interactions such as van der Waals have been factored in. The ideal gas approximation should be quite close to the actual value for gases like Helium.
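The claim that helium is "quite close" to ideal can be checked with a quick back-of-the-envelope comparison against the van der Waals equation. The constants below are approximate textbook values for helium (a ≈ 0.0346 L²·bar/mol², b ≈ 0.0238 L/mol); treat them, and the round STP numbers, as assumptions:

```python
# Rough comparison of ideal-gas and van der Waals pressures for helium
# near STP. Van der Waals constants are approximate textbook values.

R = 0.083145  # gas constant in L*bar/(mol*K)

def ideal_pressure(T, Vm):
    """Ideal gas: P = R T / Vm."""
    return R * T / Vm

def vdw_pressure(T, Vm, a=0.0346, b=0.0238):
    """van der Waals: P = R T / (Vm - b) - a / Vm^2."""
    return R * T / (Vm - b) - a / Vm**2

T, Vm = 273.15, 22.4  # roughly STP; molar volume in L/mol
p_ideal = ideal_pressure(T, Vm)
p_vdw = vdw_pressure(T, Vm)
rel_diff = abs(p_vdw - p_ideal) / p_ideal
```

For helium under these conditions the two pressures differ by roughly a tenth of a percent, which is the sense in which the ideal gas approximation is "quite close."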

They don't change ANYTHING? Suppose I start with a gas of molecules all moving at the same speed but in different directions, and they have elastic collisions off the walls of the volume. If they do not collide with each other, they never "thermalize": their speeds stay the same forever as they bounce off the walls but not off each other. But if they do bounce off each other, the velocity distribution does become thermalized by their collisions, even when those collisions are elastic. So collisions don't change ANYTHING? They change the distribution of velocities to a thermal one, which seems to me to be something.

So even if an ideal gas maintained perfect decorrelation between molecule positions in an ideal gas with collisions, which I do not think you can demonstrate (and appealing to an unlinked sequence does not count as a demonstration), you would still have to face the fact that an actual gas like Helium would be "quite close" to uncorrelated, which is another way of saying... correlated.
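The thermalization claim here is easy to demonstrate with a toy simulation: start every particle at the same speed with a random direction, then apply random elastic pair collisions (equal masses: keep the center-of-mass velocity and the relative speed, randomize the relative direction). Picking collision partners at random is a stand-in assumption for real collision geometry:

```python
import math
import random

# Toy "collision thermalizer" in 2D, unit mass. Each collision conserves
# energy exactly, yet the initially single-valued speed distribution spreads.

random.seed(1)
N = 1000
speed0 = 1.0
vels = []
for _ in range(N):
    theta = random.uniform(0, 2 * math.pi)
    vels.append([speed0 * math.cos(theta), speed0 * math.sin(theta)])

def kinetic_energy(vels):
    return sum(vx * vx + vy * vy for vx, vy in vels) / 2  # unit mass

def collide(v1, v2):
    """Elastic collision of equal masses: preserve v_cm and |v_rel|,
    randomize the direction of v_rel."""
    vcx, vcy = (v1[0] + v2[0]) / 2, (v1[1] + v2[1]) / 2
    r = math.hypot(v1[0] - v2[0], v1[1] - v2[1])
    phi = random.uniform(0, 2 * math.pi)
    rx, ry = r * math.cos(phi), r * math.sin(phi)
    return [vcx + rx / 2, vcy + ry / 2], [vcx - rx / 2, vcy - ry / 2]

E0 = kinetic_energy(vels)
for _ in range(20000):
    i, j = random.sample(range(N), 2)
    vels[i], vels[j] = collide(vels[i], vels[j])

speeds = [math.hypot(vx, vy) for vx, vy in vels]
mean_s = sum(speeds) / N
spread = (sum((s - mean_s) ** 2 for s in speeds) / N) ** 0.5
```

Before the collisions the spread of speeds is exactly zero; afterwards it is substantial, while the total energy is unchanged, which is exactly the "elastic collisions change the distribution but not the energy" point.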
Without a link! So I went to the Sequences page on the wiki, and the word "entropy" doesn't even appear on the page. Good job referring me there without a link. Okay: is that the same sense in which Energy is in the mind? Considering that this seems to be my claim that you are responding to, AND there is no reasonable way to find the sequence post that corresponds to your not-quite-on-topic-but-not-quite-orthogonal response, it would be awfully nice to know. Are you agreeing with me and amplifying, or disagreeing with me and explaining?

I agree with passive_fist, and my argument hasn't changed since last time.

If we learn that energy changes in some process, then we are wrong about the laws that the system is obeying. If we learn that entropy goes down, then we can still be right about the physical laws, as Jaynes shows.

Another way: if we know the laws, then energy is a function of the individual microstate and nothing else, while entropy is a function of our probability distribution over the microstates and nothing else.
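That distinction, energy as a function of the microstate versus entropy as a function of the distribution over microstates, can be made concrete with a minimal sketch. The three-coin system and its per-state energies below are arbitrary illustrative assumptions:

```python
import math

# Eight microstates of three "coins", each with an assumed energy
# (one unit per heads-up coin). Energy depends only on which microstate
# obtains; Gibbs/Shannon entropy S = -sum(p * ln p) depends only on our
# probability distribution over the microstates.

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
energy = {s: sum(s) for s in states}

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p.values() if pi > 0)

uniform = {s: 1 / len(states) for s in states}  # total ignorance
certain = {states[5]: 1.0}                      # microstate known exactly

e = energy[states[5]]          # same number for any observer
s_ignorant = entropy(uniform)  # ln 8 for the ignorant observer
s_informed = entropy(certain)  # 0 for the fully informed observer
```

Two observers looking at the same physical system assign it the same energy but different entropies, which is the sense in which learning something can send "entropy" down without any physical law being violated.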

I agree that it feels different. It certainly does to me. Energy feels real, while entropy feels like an abstraction. A rock falling on one's head is a clear manifestation of its potential (turned kinetic) energy, while getting burned by a hot beverage does not feel like a manifestation of the entropy increase. It feels like the beverage's temperature is to blame. On the other hand, if we knew precisely the state of every water molecule in the cup, would we still get burned? The answer is not at all obvious to me. Passive_fist claims that the cup would appear to be at absolute zero then. I do not know enough stat mech to assess this claim, but it seems wrong to me, unless the claim is that we cannot know the state of the system unless it's already at absolute zero to begin with. I suppose a toy model with only a few particles present might shed some light on the issue. Or a link to where the issue is discussed.

I made a post about this a month or so ago. Yay!

That's pretty much exactly what I had in mind. Thanks.

Great! It will certainly be accepted for publication in a peer-reviewed journal. The author will most likely win a Nobel Prize for his work and be hired to work at the top institution of his choice.

Yeah. One probably can read that PDF only if one is devoid of status-regulating emotions.

At this point I have to stop and ask for your credentials in astronomy. The link you posted reeks strongly of crackpot, and it's most likely not worth my time to study. Maybe you've studied cosmology in detail and think differently? If you think the author is wrong about their pet theory of general relativity, why do you think they're right in their disproof of LCDM?

I don't know whether his theory is wrong. In the end I'm not qualified to make that claim. Despite all the crackpottery of the author ("'dark' age", "Einstein's blunder"...), there are some things that he does differently from other crackpots. He doesn't resort to interpreting the words instead of the math of physics, he doesn't avoid making testable predictions, and he doesn't cherry-pick or creatively reinterpret data; for the SDSS data, less so than serious astrophysicists, apparently. His curves, which fit the raw SDSS data quite well, seem to be derived from an unusual but 'simple' spacetime geometry and are not notably fitted to parameters, quite the opposite of the LCDM curves, which he took from standard sources (those only fit comparatively cherry-picked galaxies). Thus, judging from the raw SDSS data, LCDM could be considered severely challenged. He also gives all the sources he uses, the SDSS queries used in the graph generation, how the magnitudes are calculated, and how the results apply in different spectral lines: all relevant considerations you'd expect in serious work. It is only tainted by his extraordinary claims, his ego, and other crackpot traits (like making grand generalizations about everything).
Totally. That's why I added the disclaimer. I edited it a bit to make that more clear. The author matches all the criteria for a crackpot, no doubt. But even a crackpot can stumble upon something. I do not have credentials in astronomy. I'm somewhat well-read in the subject and can handle sufficient math. And when checking the presented data (I did actual SDSS queries; the SDSS explorer and query facilities are genuinely cool), it appears that there is something to his claims, if not to his theory itself.

Astronomy is extremely difficult. We don't know the relevant fundamental physics, and we can't perform direct experiments on our subjects. We should expect numerous problems with any cosmological model that we propose at this point. The only people who are certain of their cosmologies are the religious.

You need to do a lot more work for this sort of post to be useful. Cherry-picking weak arguments spread across the entire field of astronomy isn't enough.

There are some areas where rigorous analysis can be made, with no need to cherry-pick some sample points. (Actually, lots of dark matter physics and cosmic inflation modelling apparently use cherry-picking too to arrive at their results.) The Sloan Digital Sky Survey provides such an immense trove of data on galaxies very far back into the past (up to z > 10) that statistical analysis with millions of galaxies is possible: basically 'all there are'. Enough for a crackpot to use it to 'disprove' inflation: SDSS Renaisance - end of the 'dark age' in cosmology. I'm not sure whether I should recommend reading that. It is based on lots of real data and shows clear failures of LCDM on that data, but in the end it all drives toward the author's (apparently a crackpot's) pet theory of a static universe based on a 'small' correction to general relativity (complex time).
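For readers wondering how "the magnitudes are calculated" ties to a cosmological model at all: in a flat FRW universe the luminosity distance is d_L = (1+z)·(c/H0)·∫₀ᶻ dz′/E(z′) with E(z) = √(Ωm(1+z)³ + ΩΛ), and the distance modulus follows from d_L. The parameter values below (H0 = 70, Ωm = 0.3) are conventional assumptions, not fits to the SDSS data discussed above:

```python
import math

# Sketch: luminosity distance and distance modulus in flat LCDM.
# Apparent magnitude m = M + mu(z), so different cosmologies predict
# different magnitude-redshift relations.

C_KM_S = 299792.458  # speed of light, km/s

def luminosity_distance(z, H0=70.0, Om=0.3, steps=1000):
    """Luminosity distance in Mpc for a flat LCDM universe (midpoint rule)."""
    OL = 1.0 - Om
    dz = z / steps
    comoving = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz
        E = math.sqrt(Om * (1 + zi) ** 3 + OL)
        comoving += dz / E
    return (1 + z) * (C_KM_S / H0) * comoving

def distance_modulus(z, **kw):
    """mu = m - M = 5 * log10(d_L / 10 pc), with d_L in Mpc."""
    return 5 * math.log10(luminosity_distance(z, **kw) * 1e6 / 10)

d_near = luminosity_distance(0.01)  # low z: approaches Hubble's law, cz/H0
d_far = luminosity_distance(1.0)
mu_far = distance_modulus(1.0)
```

Comparing a curve like this against millions of SDSS magnitudes is, in outline, the kind of test the linked document claims LCDM fails; whether the claimed failure survives careful treatment of selection effects is a separate question.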