Unlike 1, this isn't completely implausible, but it seems like a very ambitious claim.
Not really. To adjust the frequency you just need to 1) adjust the air resonance by changing the length of the air column, or 2) adjust the mechanical resonance by changing tension. (1) might be too slow here, but (2) is not.
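To put rough numbers on (1) vs (2), here's a back-of-the-envelope sketch. It assumes idealized resonators (a quarter-wave air column and an ideal tensioned membrane with f ∝ √T), not actual whale anatomy; the numbers are illustrative and only the scaling matters.

```python
# Illustrative only: idealized resonators, not whale anatomy.

def quarter_wave_freq(length_m: float, sound_speed_m_s: float = 343.0) -> float:
    """Fundamental of an air column closed at one end: f = c / (4 L)."""
    return sound_speed_m_s / (4.0 * length_m)

def tension_ratio_for_freq_shift(freq_ratio: float) -> float:
    """For f proportional to sqrt(T), the tension ratio needed for a given frequency ratio."""
    return freq_ratio ** 2

# (1) changing length: a 10 cm column resonates near 860 Hz, a 9 cm column near 950 Hz
print(quarter_wave_freq(0.10), quarter_wave_freq(0.09))

# (2) changing tension: a 10% upward frequency shift needs only ~21% more tension,
# which muscle can deliver between clicks
print(tension_ratio_for_freq_shift(1.10))  # ~1.21
```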
I agree with Zac. I also think you're misunderstanding the physics involved, and you're underestimating the researchers doing this research.
The mixed codas seem like strong evidence against this view
No, I don't think they are.
The fact is, the whales can produce multiple clicks with different spectral patterns. The clicks come in quick succession, so it's not a change of orientation. The patterns vary from click to click, so it's not a fixed mechanical property. Therefore the variation comes from some mechanism that's adjusted between clicks, and that adjustment could be done by muscles. So your whole thesis is off.
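To make "different spectral patterns in quick succession" concrete, here's a hypothetical analysis sketch (the data and sample rate are made up, and this is not the researchers' actual pipeline): take the waveform of each click in a coda and compare the dominant spectral peaks. If clicks milliseconds apart have clearly different peaks, slow orientation changes and fixed anatomy alone can't explain it.

```python
# Hypothetical sketch: compare the dominant spectral peak of consecutive clicks.
import numpy as np

def dominant_freq(click: np.ndarray, sample_rate_hz: float) -> float:
    """Frequency (Hz) of the largest spectral peak of one click waveform."""
    spectrum = np.abs(np.fft.rfft(click))
    freqs = np.fft.rfftfreq(len(click), d=1.0 / sample_rate_hz)
    return float(freqs[np.argmax(spectrum)])

def coda_peak_freqs(clicks: list, sample_rate_hz: float) -> list:
    """Dominant peak for each click in a coda, in order."""
    return [dominant_freq(c, sample_rate_hz) for c in clicks]

# usage with fake data: two synthetic 2 ms "clicks" at different frequencies
sr = 96_000.0
t = np.arange(0, 0.002, 1.0 / sr)
fake_clicks = [np.sin(2 * np.pi * 8_000 * t), np.sin(2 * np.pi * 11_000 * t)]
print(coda_peak_freqs(fake_clicks, sr))  # approx [8000.0, 11000.0]
```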
I don't know what you're trying to say here.
I'm saying that you're making a questionable leap from:
Then the alignment team RLHFs the models to follow the spec.
to "the model follows whatever is written in the spec". You were saying that "current LLMs are basically aligned so they must be following the spec" but that's not how things work. Different companies have different specs and the LLMs end up being useful in pretty similar ways. In other words, you had a false dichotomy between:
That's not:
If AI is misaligned, obviously nobody gets anything.
That depends on how it's misaligned. You can't just use "misaligned" to mean "maximally self-replication-seeking" or whatever it is you're actually trying to say here.
I think there's also a strong possibility that AI will be aligned in the same sense it's currently aligned - it follows its spec
Spec? What spec does GPT-5 or Claude follow? Its "helpful" behavior is established by RLHF. (And now, yes, a lot of synthetic RL and distillation of previous models, but I'm simplifying and including those in "RLHF".) That's not a "spec". Do you think LLMs are some kind of Talmudic golems that follow whatever Exact Wording they're given??
two ex-Recursion Pharmaceuticals folks
How's Recursion been doing, then?
EG = ethylene glycol, PG = propylene glycol
About toxicity: tri-glycol is safer than EG because EG is partly metabolized to glyoxal, which can form cyclic compounds inside cells more or less permanently. PG is preferentially metabolized to lactic acid before the secondary OH is oxidized, which is why it's safer; yes, you could get a small amount of methylglyoxal, so there is that issue, though methylglyoxal is at least less reactive than glyoxal. The concern I have is that, e.g., ethoxyethanol is metabolized to ethoxyacetate, which is somewhat toxic, and oxidized tri-glycol might be analogous. Note also that ethers eventually get oxidatively cleaved. I'm simplifying a bit here, obviously.
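To keep the chains straight, here's the same point as a rough data sketch; it only encodes the pathways described above (plus the standard intermediate metabolites), heavily simplified.

```python
# Simplified oxidation pathways discussed above; each list is metabolite order.
OXIDATION_PATHWAYS = {
    "ethylene glycol (EG)": [
        "glycolaldehyde", "glycolic acid", "glyoxylic acid", "oxalic acid",
        # plus the side route above: partial conversion to glyoxal, a reactive
        # dialdehyde that can form long-lived cyclic adducts inside cells
    ],
    "propylene glycol (PG)": [
        "lactaldehyde", "lactic acid",
        # minor route: methylglyoxal (less reactive than glyoxal)
    ],
    "2-ethoxyethanol": [
        "ethoxyacetaldehyde", "ethoxyacetic acid (ethoxyacetate)",  # the somewhat-toxic metabolite
    ],
}
```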
Yes, there have been studies, but toxicity studies use high doses in mice to get obvious effects, and we then assume that much lower doses in humans have no subtle long-term effects. In this case, though, any such effect would be limited by the rate at which tri-glycol is metabolized, and unmetabolized tri-glycol itself should be safe.
You have a good point about Transcriptic and "cloud labs", but one issue is that that model seems incompatible with the current structure of both university labs and drug companies. A university lab today is a barony ruled by a professor, and it does its own research. Labs generally don't even share reagents: each buys its own little bottles instead of splitting a bigger bottle that would cost half as much in total.
My impression is that Ginkgo and Automata don't understand manufacturing, and I don't think a "cloud lab" would buy their hardware when it's cheaper to buy generic general-purpose robotic arms. Also, Y Combinator companies in general... these days, seeing "funded by Y Combinator" gives me a similar impression to reading "MIT scientists have discovered".
About Briefly Bio and Tetsuwan: I personally think they should stick to text-based high-level descriptions. Yes, Unreal's Blueprints and ComfyUI are considered successful, but version control for specifications would be especially helpful for lab experiments, and that's much easier with text. And of course LLMs are better at outputting text than at driving custom GUI programming systems; if you have a good enough high-level description format, you could probably just translate natural language into it automatically.
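As a concrete illustration of what I mean by a text-based high-level description, here's a hypothetical format (not Briefly Bio's or Tetsuwan's actual one): it diffs cleanly under version control, and an LLM can emit or edit it directly from a natural-language request.

```python
# Hypothetical text-based protocol description; names and fields are made up.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "transfer", "incubate", "centrifuge"
    params: dict  # volumes, temperatures, durations, labware ids

protocol = [
    Step("transfer", {"source": "reagent_A", "dest": "plate_1:A1", "volume_ul": 50}),
    Step("incubate", {"target": "plate_1", "temp_c": 37, "minutes": 30}),
    Step("transfer", {"source": "reagent_B", "dest": "plate_1:A1", "volume_ul": 25}),
]
# A one-parameter change (say, 30 -> 45 minutes) shows up as a one-line diff,
# which is much harder to get out of a node-graph GUI like Blueprints or ComfyUI.
```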