
I previously wrote [an "introduction to thermal conductivity and noise management" here].

This is amazingly good! The writing is laconic, modular, model-based, and relies strongly on the reader's visualization skills!

Each paragraph was an idea, and I had to read it more like a math text than like "human writing" to track the latent conceptual structure, despite it being purely in language with no equations occurring in the text.

(It is similar to Munenori's "The Life Giving Sword" and Zizioulas's "Being As Communion" but not quite as hard as those because those require emotional and/or moral and/or "remembering times you learned or applied a skill" and/or "cogito ergo sum" fit checks instead of pauses to "visualize complex physical systems in motion".)

The "big picture fit check on concepts" at the end of your conceptual explanation (just before application to examples began) was epiphanic (in context):

...Because of phonon scattering, thermal conductivity can decrease with temperature, but it can also increase with temperature, because at higher temperature, more vibrational modes are possible. So, crystals have some temperature at which their thermal conductivity peaks.

With this understanding, we'd expect amorphous materials to have low thermal conductivity, even if they have a 3d network of strong covalent bonds. And indeed, typical window glass has a relatively low thermal conductivity, ~1/30th that of aluminum oxide, and only ~2x that of HDPE plastic.

I had vaguely known that thermal and electric conductivity were related, but I had never seen them connected together such that "light transparency and heat insulation often go together" could be a natural and low cost sentence.

I had not internalized before that matter might have fundamental limits on "how much frequency" (different frequencies + wavelengths + directions of many waves, all passing through the same material) might be operating on every scale and wave type simultaneously!

Now I have a hunch: if Drexlerian nanotech ever gets built, some of those objects might have REALLY WEIRD macroscopic properties... like being transparent from certain angles or accidentally a "superconductor" of certain audio frequencies? Unless maybe every type and scale of wave propagation is analyzed and the design purposefully suppresses all such weird stray macroscopic properties???

The main point of this post wasn't to explain superconductors, but to consider some sociology.

I think a huge part of why these kinds of things often occur is that they are MUCH more likely in fields where the object level considerations have become pragmatically impossible for normal people to track, and they've been "taking it on faith" for a long time.

Normal humans can then often become REALLY interested when "a community that has gotten high trust" suddenly might be revealed to be running on "Naked Emperor Syndrome" instead of simply doing "that which they are trusted to do" in an honest and clean way.

((Like, at this point, if a physics PhD has "string theory" on their resume after about 2005, I just kinda assume they are a high-iq scammer with no integrity. I know this isn't fully justified, but that field has for so long: (1) failed to generate any cool tech AND (2) failed to be intelligible to outsiders AND (3) been getting "grant funding that was 'peer reviewed' only by more string theorists" that I assume that intellectual parasites invaded it and I wouldn't be able to tell.))

Covid caused a lot of normies to learn that a lot of elites (public health officials, hospital administrators, most of the US government, most of the Chinese government, drug regulators, drug makers, microbiologists capable of gain-of-function but not epidemiology, epidemiologists with no bioengineering skills, etc) were not competently discharging their public duties to Know Their Shit And Keep Their Shit Honest And Good.

LK-99 happening in the aftermath of covid, proximate to accusations of bad faith by the research team who had helped explore new materials in a new way, was consistent with the new "trust nothing from elites, because trust will be abused by elites, by default" zeitgeist... and "the material science of conductivity" is a vast, demanding, and complex topic that can mostly only be discussed coherently by elite material scientists.

In many cases, whether the social status of a scientific theory is amplified or diminished over time seems to depend more on the social environment than on whether it's true.

I think that different "scientific fields" will experience this to different amounts depending on how many of their concepts can be reduced to things that smart autodidacts can double click on, repeatedly, until they ground in things that connect broadly to bedrock concepts in the rest of math and science.

This is related to very early material on lesswrong, in my opinion, like That Magical Click and Outside The Laboratory and Taking Ideas Seriously that hit a very specific layer of "how to be a real intellectual in the real world" where broad abstractions and subjectively accessible updates are addressed simultaneously, and kept in communication with each other, without either of them falling out of the "theory about how to be a real intellectual in the real world".

I think your condensation of that post you linked to is missing the word "superstimulus" (^F on the linked essay also finds no hits for the term), which is the thing the modern world adds to our environment on purpose to make our emotions less adaptive for us and more adaptive for the people selling us superstimuli (or using that to sell literally any other random thing). I added the superstimuli tag for you :-)

My reaction to the physics here was roughly: "phonon whatsa whatsa?"

It could be that there is solid reasoning happening in this essay, but maybe there is not enough physics pedagogy in the essay for me to be able to tell that solid reasoning is here, because superconductors aren't an area of expertise for me (yet! (growth mindset)).

To double check that this essay ITSELF wasn't bullshit, I dropped [the electron-phonon interaction must be stronger than random thermal movement] into Google and... it seems to be a real thing! <3

The top hit was this very blog post... and the second hit was to "Effect of Electron-Phonon Coupling on Thermal Transport across Metal-Nonmetal Interface - A Second Look" with this abstract:

The effect of electron-phonon (e-ph) coupling on thermal transport across metal-nonmetal interfaces is yet to be completely understood. In this paper, we use a series of molecular dynamics (MD) simulations with e-ph coupling effect included by Langevin dynamics to calculate the thermal conductance at a model metal-nonmetal interface. It is found that while e-ph coupling can present additional thermal resistance on top of the phonon-phonon thermal resistance, it can also make the phonon-phonon thermal conductance larger than the pure phonon transport case. This is because the e-ph interaction can disturb the phonon subsystem and enhance the energy communication between different phonon modes inside the metal. This facilitates redistributing phonon energy into modes that can more easily transfer energy across the interfaces. Compared to the pure phonon thermal conduction, the total thermal conductance with e-ph coupling effect can become either smaller or larger depending on the coupling factor. This result helps clarify the role of e-ph coupling in thermal transport across metal-nonmetal interface.

An interesting thing here is that, based just on skimming and on background knowledge, I can't tell whether this is about superconductivity or not.

The substring "superconduct" does not appear in that paper.

Searching more broadly, it looks like a lot of these papers actually are about electronic and conductive properties in general, often semiconductors (though some hits for this search query ARE about superconductivity), and so searching like this helped me learn a little bit more about "why anything conducts or resists electric current at all", which is kinda cool!

I liked "Electron-Phonon Coupling as the Source of 1/f Noise in Carbon Soot" for seeming to go "even more in the direction of extremely general reasoning about extremely general condensed matter physics"...

...which leads naturally to the question "What the hell is 1/f noise?" <3

I tried getting an answer from youtube (this video was helpful and worked for me at 1.75X speed), which helped me start to imagine that "diagrams about electrons going through stuff" was nearby, and also to learn that a synonym for this is Pink Noise, which is a foundational concept I remember from undergrad math.
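For what it's worth, here is a minimal sketch (my own illustration, not from the video or the paper) of the standard spectral-shaping way to fake 1/f noise: take white noise and scale its Fourier amplitudes by 1/sqrt(f), so that power density falls off as 1/f.

```python
import numpy as np

# A minimal sketch, assuming the usual spectral-shaping construction:
# approximate 1/f ("pink") noise by reshaping the spectrum of white noise
# so that power density falls off as 1/f (i.e. amplitude as 1/sqrt(f)).

rng = np.random.default_rng(0)
n = 2 ** 16                       # number of samples
white = rng.standard_normal(n)    # flat-spectrum ("white") noise

spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])   # leave the DC bin alone
pink = np.fft.irfft(spectrum * scale, n)

# The power spectral density of `pink` now falls roughly as 1/f, which is
# what "1/f noise" / "pink noise" means: equal power per octave rather than
# equal power per hertz.
```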

I'm not saying I understand this yet, but I am getting to be pretty confident that "a stack of knowledge exists here that is not fake, and which I could learn, one bite at a time, and that you might be applying correctly" :-)

Log odds, measured in something like "bits of evidence" or "decibels of evidence", is the natural thing to think of yourself as "counting". A probability of 100% would be like having infinite positive evidence for a claim and a probability of 0% is like having infinite negative evidence for a claim. Arbital has some math and Eliezer has a good old essay on this.
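A minimal sketch of the conversion in Python (my own illustration; the function names are made up for this example):

```python
import math

# Convert a probability to log-odds, measured either in bits (log base 2)
# or in decibels of evidence (10 * log base 10).

def log_odds_bits(p: float) -> float:
    return math.log2(p / (1 - p))

def log_odds_decibels(p: float) -> float:
    return 10 * math.log10(p / (1 - p))

print(log_odds_bits(0.5))        # 0.0 bits: no net evidence either way
print(log_odds_bits(0.8))        # 2.0 bits of evidence in favor
print(log_odds_decibels(0.99))   # ~ +20 dB of evidence in favor
# As p -> 1.0 or p -> 0.0 these blow up to +/- infinity,
# i.e. "infinite positive/negative evidence".
```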

A good general heuristic (or widely applicable hack) to "fix your numbers to even be valid numbers" when trying to get probabilities for things based on counts (like a fast and dirty spreadsheet analysis), and never having this spit out 0% or 100% due to naive division on small numbers (like seeing 3 out of 3 of something and claiming it means the probability of that thing is probably 100%), is to use "pseudo-counting" where every category that is analytically possible is treated as having been "observed once in our imaginations". This way, if you can fail or succeed, and you've seen 3 of either, and seen nothing else, you can use pseudocounts to guesstimate that whatever happened every time so far is (3+1)/(3+2) == 80% likely in the future, and whatever you've never seen is (0+1)/(3+2) == 20% likely.
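Here is a minimal sketch of that pseudo-counting trick in Python (my own illustration; `pseudocount_probs` is a made-up helper name):

```python
# Add one imaginary observation to every analytically possible category
# before dividing, so naive division can never spit out exactly 0% or 100%.

def pseudocount_probs(counts: dict[str, int]) -> dict[str, float]:
    total = sum(counts.values()) + len(counts)  # +1 pseudo-observation per category
    return {k: (v + 1) / total for k, v in counts.items()}

# Seen 3 successes and 0 failures:
print(pseudocount_probs({"success": 3, "failure": 0}))
# {'success': 0.8, 'failure': 0.2}   i.e. (3+1)/(3+2) and (0+1)/(3+2)
```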

That's fascinating and I'm super curious: when precisely, in your experience as a participant in a language community did it feel like "The American definition where a billion is 10^9 and a trillion is 10^12 has long since taken over"?

((I'd heard about the British system, and I had appreciated how it makes the "bil", "tril", "quadril", "pentil" prefixes of all the "-illion" words make much more sense as "counting how many 10^6 chunks were being multiplied together".

The American system makes it so that you're "counting how many thousands are being multiplied together", but you're starting at 1 AFTER the first thousand, so there's "3 thousands in a billion" and "4 thousands in a trillion", and so on... with a persistent off-by-one error all the way up...
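A hedged way to write out the two conventions as formulas (my own restatement of the above, with n = 2 for "billion", n = 3 for "trillion", and so on):

```latex
% Long (old British) scale: the prefix counts factors of a million.
\text{long scale: } n\text{-illion} = 10^{6n}
% Short (American) scale: the prefix counts factors of a thousand,
% starting one factor in -- hence the persistent off-by-one.
\qquad \text{short scale: } n\text{-illion} = 10^{3(n+1)}
```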

Mathematically, the British way makes more sense and is simpler to teach, but linguistically, the American way lets you speak and write about 50k, 50M, 50B, 50T, 50Q, and finally 50P (for fifty pentillion)...

...and that linguistic frame is probably going to get more and more useful as inflation keeps inflating?

Eventually the US national debt will probably be "in the quadrillions of paper dollars" (and we'll NEED the word in regular conversation by high status people talking about the well being of the country)...

...and yet (presumably?) the debt-to-gdp ratio will never go above maybe 300% (not even in a crisis?) because such real world crises or financial gyrations will either lead to massive defaults, or renominalization (maybe go back to metal for a few decades?), or else the government will go bankrupt and not exist to carry those debts, or something "real" will happen.

Fundamentally, the ratio of debt-to-gdp is "real" in a way that the "monetary unit we use to talk about our inflationary script" is not. There are many possible futures where all countries on Earth slowly eventually end up talking about "pentillions of money units" without ever collapsing, whereas debt ratios are quite real and firm and eventually cause the pain that they imply will arrive...

One can see much more clearly in the graph below how these numbers mostly "cluster together, because annual interest rates and debt-to-GDP ratios are directly and meaningfully comparable and constrained by the realities of sane financial reasoning", when you show debt ratios, over time, internationally...

[Graph: national debt-to-GDP ratios over time, by country, from Wikipedia.]

...you can see in that data that Japan, Greece, and Israel are in precarious places, just with your eyeballs in that graph with nicely real units.

Then the US, the UK, Portugal, Spain, France, Canada, and Belgium are also out into the danger zone with debt well above 100% of GDP, where we better have non-trivial population growth and low government spending for a while, or else we could default in a decade or two.

A small part of me wonders if "the financial innumeracy of the median US and UK voter" is part of the explanation for why we are in the danger zone, and not seeming to react to it in any sort of sane way, as part of the zeitgeist of the English speaking world?

Both of our governments "went off the happy path" (above 100%) right around 2008-2011, due to the Great Recession. So it would presumably be some RECENT change that switched us from "financial prudence before" to "financial imprudence afterwards"?

Maybe it is something boring and obvious like birthrates and energy production?

For reference, China isn't on wikipedia's graph (maybe because most of their numbers are make-believe and it's hard to figure out what's going on there for real?) but it is plausible they're "off the top of the chart" at this point. Maybe Xi and/or the CCP are innumerate too? Or have similar "birthrate and energy" problems? Harder to say for them, but the indications are that, whatever the cause, their long term accounting situation is even more dire.

Looping all the way back, was it before or after the Great Recession, in your memory, that British speakers de facto changed to using "billion" to talk about 10^9 instead of 10^12?))

Fascinating. I am surprised and saddened, and thinking about the behavioral implications. Do you have a "go-to brand" that is "the cheapest that doesn't give you reflux"? Now I'm wondering if maybe I should try some of that.

I feel like you're saying "safety research" when the examples of what corporations centrally want is "reliable control over their slaves"... that is to say, they want "alignment" and "corrigibility" research.

This has been my central beef for a long time.

Eliezer's old Friendliness proposals were at least AIMED at the right thing (a morally praiseworthy vision of humanistic flourishing) and CEV is more explicitly trying for something like this, again, in a way that mostly just tweaks the specification (because Eliezer stopped believing that his earliest plans would "do what they said on the tin they were aimed at" and started over). 

If an academic is working on AI, and they aren't working on Friendliness, and aren't working on CEV, and it isn't "alignment to benevolence" or making "corrigibly seeking humanistic flourishing for all"... I don't understand why it deserves applause lights.

(EDITED TO ADD: exploring the links more, I see "benevolent game theory, algorithmic foundations of human rights" as topics you raise. This stuff seems good! Maybe this is the stuff you're trying to sneak into getting more eyeballs via some rhetorical strategy that makes sense in your target audience?)

"The alignment problem" (without extra qualifications) is an academic framing that could easily fit in a grant proposal by an academic researcher to get funding from a slave company to make better slaves. "Alignment IS capabilities research".

Similarly, there's a very easy way to be "safe" from skynet: don't build skynet!

I wouldn't call a gymnastics curriculum that focused on doing flips while you pick up pennies in front of a bulldozer "learning to be safe". Similarly, here, it seems like there's some insane culture somewhere that you're speaking to whose words are just systematically confused (or intentionally confusing).

Can you explain why you're even bothering to use the euphemism of "Safety" Research? How does it ever get off the ground of "the words being used denote what naive people would think those words mean" in any way that ever gets past "research on how to put an end to all AI capabilities research in general, by all state actors, and all corporations, and everyone (until such time as non-safety research, aimed at actually good outcomes (instead of just marginally less bad outcomes from current AI), has clearly succeeded as a more important and better and more funding-worthy target)"? What does "Safety Research" even mean if it isn't inclusive of safety from the largest potential risks?

Also, there's now a second detected human case, this one in Michigan instead of Texas.

Both had a surprising-to-me "pinkeye" symptom profile. Weird!

The dairy worker in Michigan had various "compartments" tested and their nasal compartment (and people they lived with) were all negative. Hopeful?

Apparently, and also hopefully, this virus is NOT freakishly good at infecting humans and also weirdly many other animals (the way covid was with human ACE2, in precisely the ways people had talked about when discussing gain-of-function in the years prior to covid).

If we're being foolishly mechanical in our inferences, "n=2 with 2 survivors" could get rule of succession treatment. In that case we pseudocount 1 for each category of interest (hence if n=0 we say 50% survival chance based on nothing but pseudocounts), and now we have 3 survivors (2 real) versus 1 dead (0 real), and guess that the worst the mortality rate here would be maybe 1/4 == 25% (?? (as an ass number)), which is pleasantly lower than overall observed base rates for avian flu mortality in humans! :-)
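Spelled out with the same pseudocount move as above (a rough sketch, taking the two detected human cases at face value):

```latex
P(\text{death}) \approx \frac{0+1}{2+2} = \frac{1}{4} = 25\%
\qquad
P(\text{survival}) \approx \frac{2+1}{2+2} = \frac{3}{4} = 75\%
```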

Naive impressions: a natural virus, with pretty clear reservoirs (first birds and now dairy cows), on the maybe slightly less bad side of "potentially killing millions of people"?

I haven't heard anything about sequencing yet (hopefully in a BSL4 (or homebrew BSL5, even though official BSL5s don't exist yet), but presumably they might not bother to treat this as super dangerous by default until they verify that it is positively safe) but I also haven't personally looked for sequencing work on this new thing.

When people did very dangerous Gain-of-Function research with a cousin of this, in ferrets, over 10 years ago (causing a great uproar among some), the supporters argued that it was worth creating especially horrible diseases on purpose in labs in order to see the details, like a bunch of geeks who would Be As Gods And Know Good From Evil... and they confirmed back then that a handful of mutations separated "what we should properly fear" from "stuff that was ambient".

Four amino acid substitutions in the host receptor-binding protein hemagglutinin, and one in the polymerase complex protein basic polymerase 2, were consistently present in airborne-transmitted viruses. (same source)

It seems silly to ignore this, and let that hilariously imprudent research of old go to waste? :-)

The transmissible viruses were sensitive to the antiviral drug oseltamivir and reacted well with antisera raised against H5 influenza vaccine strains. (still the same source)

[Image: Oseltamivir as a ball-and-stick model, from https://en.wikipedia.org/wiki/Oseltamivir]

Since some random scientists playing with equipment bought using taxpayer money already took the crazy risks back then, it would be silly to now ignore the information they bought so dearly (with such large and negative EV) <3

To be clear, that drug worked against something that might not even be the same thing.

All biological STEM stuff is a crapshoot. Lots and lots of stamp-collecting. Lots of guess and check. Lots of "the closest example we think we know might work like X" reasoning. Biological systems or techniques can do almost anything physically possible eventually, but each incremental improvement in repeatability (going from having to try 10 million times to get something to happen to having to try 1 million times, or going from having to try 8 times on average to 4 times on average, due to "progress") is kinda "as difficult as the previous increment in progress that made things an order of magnitude more repeatable".

The new flu just went from 1 to 2. I hope it never gets to 4.

As of May 16, 2024 an easily findable USDA/CDC report says that widely dispersed cow herds are being detectably infected.

Map of US showing 9 states with infected herds, including Texas, Idaho, Michigan, and North Carolina (but not other states in between (suggesting either long distance infections mediated by travel without testing the travelers, or else failure of detection in many intermediate states)).

So far, that I can find reports of, only one human dairy worker has been detected as having an eye infection.

I saw a link to a report on twitter from an enterprising journalist who claimed to have gotten some milk directly from small local farms in Texas, and the first lab she tried refused to test it. They asked the farms. The farms said no. The labs were happy to go with this!

So, the data I've been able to get so far is consistent with many possibly real worlds.

The worst plausible world would involve a jump to humans, undetected for quite a while, allowing time for adaptive evolution, and an "influenza normal" attack rate of 5%-10% for adults and ~30% for kids, and an "avian flu plausible" mortality rate of 56%(??) (but maybe not until this winter when cold weather causes lots of enclosed air sharing?), which implies that by June of 2025 maybe half a billion people (~= 7B*0.12*0.56) will be dead???
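Spelling out that back-of-envelope product (the ~12% being my guess at how the ~5-10% adult and ~30% child attack rates blend across the population):

```latex
7 \times 10^{9} \times 0.12 \times 0.56 \approx 4.7 \times 10^{8} \approx \text{half a billion deaths}
```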

But probably not, for a variety of reasons.

However, I sure hope that the (half imaginary?) Administrators who would hypothetically exist in some bureaucracy somewhere (if there was a benevolent and competent government) have noticed that paying two or three people $100k each to make lots of phone calls and do real math (and check each other's math) and invoke various kinds of legal authority to track down the real facts and ensure that nothing that bad happens is a no-brainer in terms of EV.

I see it. If you try to always start with a digit, then always follow with a decimal place, then the rest implies measurement precision, and the exponent lets you ensure a dot after the first digit <3

The most amusing exceptional case I could think of: "0.1e1" :-D

This would be like "I was trying to count penguins by eyeball in the distance against the glare of snow and maybe it was a big one, or two huddled together, or maybe it was just a weirdly shaped rock... it could have been a count of 0 or 1 or 2."
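As a small, hedged illustration (using Python's float parsing and formatting, which may or may not match the notation conventions being discussed):

```python
# "0.1e1" is a perfectly valid float literal...
x = float("0.1e1")
print(x)            # 1.0

# ...but normalized scientific-notation formatting re-anchors the dot after
# the first significant digit, so the same value round-trips as "1e+00":
print(f"{x:.0e}")   # 1e+00
print(f"{x:.1e}")   # 1.0e+00
```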
