Figure 6 doesn't have any timing information (about the gaps between the clicks)? It just has the spectral analysis of five different clicks (which presumably form a coda). When I looked up another paper to try to figure out what codas are, it suggested that 1s is a reasonable estimate for total time, with switching happening in about 200ms, which seems plausible to me.
However the whale filtered the pulse from the phonic lips, doing so would require movements at time granularities far smaller than those of even the fastest known acoustic control systems in vertebrates. This is not biologically plausible, probably even if the articulatory control in question is at the coda level and not the click level. The whales are not in control of these clicks in the way the authors suppose.
Sorry, I think you skipped some steps here, or I'm not following along correctly. Could you expand this to have more detail?
I think you're trying to argue "the whale isn't choosing whether it makes an 'a' sound or an 'i' sound; that's determined by some other factor." But your argument seems instead to imply that it's biologically implausible that the whales are making these sounds using just their anatomy--which I think is importantly distinct. Like:
The broadband impulse which originates from the phonic lips exits the head along a direct path and a delayed reflected path off the distal air sac. There’s a pretty detailed wikipedia page documenting this phenomenon.
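To make the two-path story concrete, here's a toy numerical sketch: a click modeled as a direct-path impulse plus one delayed, attenuated copy (the reflection off the distal air sac), with the inter-pulse delay recovered from the autocorrelation. All the parameter values (sample rate, 4 ms delay, 0.6 reflection gain) are illustrative guesses of mine, not numbers from the paper.

```python
import numpy as np

def two_path_click(fs=48_000, delay_s=0.004, reflection_gain=0.6, length_s=0.02):
    """Toy model of a click: a direct broadband impulse plus one
    delayed, attenuated reflection. Parameters are illustrative only."""
    n = int(fs * length_s)
    x = np.zeros(n)
    x[0] = 1.0                       # direct-path impulse
    d = int(fs * delay_s)
    x[d] += reflection_gain          # reflected-path copy
    return x

def estimate_delay(x, fs):
    """Recover the inter-pulse delay from the largest nonzero-lag
    autocorrelation peak."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac[0] = 0.0                      # ignore the zero-lag peak
    return np.argmax(ac) / fs

x = two_path_click()
print(estimate_delay(x, 48_000))     # recovers the 0.004 s delay
```

The point of the sketch is just that a fixed anatomical geometry (one delay, one gain) fully determines the multi-pulse structure of a single click, so the interesting control question is about how that geometry gets reconfigured between clicks, not within one.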
What's the argument that this can't be under the whale's volitional control? Like, there are sounds that I can't make using just my vocal cords and mouth that I can make using my vocal cords, mouth, and hands. I think for your speed argument to be relevant, you need to show that the whales are switching between sounds at a biologically implausible speed, rather than configuring their anatomy to make sound A, then reconfiguring at a biologically plausible speed, then making sound B--which I think lines up with the data presented here.
(Of course, I'm not an expert in whale sounds, and so quite plausibly you just dropped some background detail that any expert would know and I don't. But if so, it should be easy to point to that detail.)
I think it's sad that the "let's actually do the research and figure out which interventions actually work" movement turned into a deferral network. I think it's bad, and I've publicly critiqued EA on these grounds in the past.
I think this is sad, but also... not surprising? (For different reasons than the ones you describe.) Specialization of labor and gains from trade also operate in the epistemic realm, and I don't think we were ever not going to end up with a deferral network.
I think the main place where I wish EA had outperformed more was the endpoint of that deferral network. Like, EAs / rationalists have been major early adopters of prediction markets and forecasting tools, and I think there was the potential to use them more centrally. (Like, encouraging people to trade in the cause prioritization markets instead of 'come to their own conclusions' would have, I think, been better at helping people realize when and how they should or shouldn't be deferring.)
Also I think it's worth considering the position that AIs will do better than humans at figuring out philosophical dilemmas; to the extent philosophical maturity involves careful integration of many different factors, models might be superhuman at that as well.
[I think there's significant reason to think human judgment is worthwhile, here, but it is not particularly straightforward and requires building out some other models.]
IMO this argument also wants, like, support vector machines and distributional shift? The 'tails come apart' version feels too much like "political disagreement" and not enough like "uncertainty about the future". You probably know whether you're a Catholic or utilitarian, and don't know whether you're a hyper-Catholic or a cyber-Catholic, because you haven't come across the arguments or test cases that differentiate them.
What do you think of authentic relating / circling w/ relationalists, as opposed to rationalists? I don't think they have particularly good epistemic hygiene (or, that is, the ones that do I think become rationalists also) but I think they might have a way to embody philanthropy which is easier to tune into (than one based on, like, respecting the competence or thoughtfulness of humans as they are).
My spouse and I donated $100k, iirc twice what we did last year. This is mostly downstream of having less of our wealth tied up in private equity, rather than being twice as impressed with Lightcone's output; I also didn't take a salary for my work on Inkhaven, which is roughly a $10k contribution.
LessWrong has been my online home for a long time now; Lighthaven continues to be an impressive space that runs great events.
Sure; I'm not sure what fraction of the relevant innovations were additions instead of replacements (which might not impact total memory burden much).
Several people who worked at MIRI thought the book had new and interesting content for them; I don't remember having the "learned something new" experience myself, but I nevertheless enjoyed reading it.
I think a single implausible hop should be enough to persuade me that something else is going on?
I do have some lingering confusion about whether you're trying to establish that there's a recording problem and the whales aren't making the sounds that are being analyzed, or whether the whales are making the sounds but it's somehow unintentional.