I was just trying to patch a loophole that Umeshisms often seem to have, but I think this made my statement more confusing: if you are 15 years old (the particular age is irrelevant; I am just saying such an age exists), then having sent only 1-2 cold emails is not too few, nor did you invest too much time; you are just young, and there haven't been that many worthy occasions yet. If you have taken only a single flight in your life and missed 0, that is not strong evidence that you spend too much time at airports.
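To put a rough number on the flights version, here is a minimal back-of-the-envelope sketch (the two per-flight miss rates below are made-up, purely illustrative numbers) of how little a zero-miss record tells you at n = 1 versus n = 50:

```python
# Sketch: how strongly does "missed 0 of n flights" favor "leaves very early"
# over "cuts it close"? The miss probabilities are illustrative assumptions.
p_cautious = 0.01  # hypothetical per-flight miss probability if you leave very early
p_risky = 0.20     # hypothetical per-flight miss probability if you cut it close

for n_flights in (1, 50):
    # Likelihood ratio of observing zero misses under the two hypotheses
    lr = (1 - p_cautious) ** n_flights / (1 - p_risky) ** n_flights
    print(f"{n_flights} flight(s), 0 missed -> likelihood ratio ~ {lr:,.2f}")

# 1 flight:   ~1.24    (almost no evidence either way)
# 50 flights: ~42,000  (now a clean record really does suggest over-caution)
```

With one flight, the likelihood ratio is barely above 1, so "I missed 0" is consistent with almost any airport habit; the Umeshism only starts to bite once n is large.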
One of the first things that (shy?) people use to gauge each other's interest, before or instead of talking about anything explicit, is eye contact. So I think that wearing your glasses puts you at a disadvantage unless you take them off when you are flirting. I'm not sure why you're wearing them, but taking them off could in itself be a flirty move. I am not particularly good at flirting. But I remember, in 9th grade, flirting with a girl for about half an hour at an event via eye contact alone. We didn't exchange more than ~3 sentences in person (and there were no innuendos). Then she called me later that same day, explicitly asking if I wanted to be her boyfriend.
I learned the hard way that epigenetics is too new a field to understand well from a textbook, because being 13 years out of date does in fact make a big difference.
Skill issue. There is a free introductory e-textbook on epigenetics on Amazon from 2021 that I somehow completely missed.
I for one know that I interact very differently with children with different personalities! (Or, for that matter, with adults with different personalities.) One classic example of this is that children who are naturally compliant and "easy" are disciplined/punished less, because there's much less of a need to do so.
Yeah, equating parenting with shared environment can lead to confusion, but I don't think your example necessarily ends up in the non-shared part. If the personality of the child is mostly downstream of the genes, then I think that would still end up in shared environment and would not be a problem (you treat both twins the same because they have about the same temperament). If some parents treat twins differently because of "random" things, like which twin left the womb first and is considered the firstborn, which baby hit their head, inherent contingency in personality, etc., then yeah, I think that would end up in the non-shared environment in twin studies.
One reason I find LessWrong valuable is that it serves as a sort of "wisdom feed" for me, through which I got exposed to a lot of great writing, especially writing that ambitiously attempts to build long-lasting gears. Sadly, for most of the good writers on LessWrong, I have already read or at least skimmed all of their posts. I wonder, though, to what extent I am missing out on great content like that on the wider internet. There are textbooks, of course, but then there's also all this knowledge that is usually left out of textbooks. For myself, it probably makes sense to just curate my own RSS feed and ask language models for "the Matt Levine of domain X", etc. But it also feels like it should be possible to create a feed for this category of content with language models? Sort of like the opposite of News Minimalist. Gears Maximalist?
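For what it's worth, a crude version of this seems weekend-project-sized. Here is a rough sketch of one way it could work, assuming a feedparser-plus-OpenAI-client setup; the feed URL, prompt wording, model name, and score threshold are all placeholder assumptions, not a tested recipe:

```python
# Sketch of a "gears maximalist" feed: ask a language model to score how much
# each RSS entry builds durable, gears-level models rather than reporting news.
import feedparser
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

FEEDS = ["https://www.lesswrong.com/feed.xml"]  # example feed; swap in your own

PROMPT = (
    "On a scale from 0 to 10, how much does this post try to build "
    "long-lasting, gears-level models of its domain (10), as opposed to "
    "reporting transient news (0)? Answer with a single number.\n\n"
    "Title: {title}\n\nSummary: {summary}"
)

def gears_score(title: str, summary: str) -> float:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever is current
        messages=[{"role": "user",
                   "content": PROMPT.format(title=title, summary=summary)}],
    )
    try:
        return float(response.choices[0].message.content.strip())
    except ValueError:
        return 0.0  # model didn't answer with a bare number; drop the entry

for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        summary = getattr(entry, "summary", "")[:2000]  # keep the prompt short
        if gears_score(entry.title, summary) >= 7:       # arbitrary threshold
            print(f"{entry.title}\n{entry.link}\n")
```

The interesting design question is the prompt: "builds long-lasting gears" is doing all the work, and you'd probably want to calibrate it against a handful of posts you already consider gears-y before trusting the scores.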
As for successionists (and honestly utilitarians in general, though only when they apply their view to situations that result in their own deaths): I cannot understand this viewpoint.
Quite a few people would die to save their children. I have actually never met someone who told me they can't relate to any thought experiment in which they would die for someone else. I would be curious whether you can relate to any of the thought experiments in the Fake Selfishness post. Presumably there are quite a few people who would not sacrifice their entire future for anything, but you can also just sacrifice it partially by taking risks, like driving a car.
The neuroscience/psychology side of the alignment problem (as opposed to the ML side) seems quite neglected. On the one hand it is harder, but on the other, it's easier to avoid working on something capabilities-related if you just don't focus on the cortex. There's reverse-engineering human social instincts, for example, which in principle would benefit from more high-quality experiments in mice, but those are expensive.
Also, it seems like there is not much of that in the field of alignment. I want there to be more work on unifying (previously frontier) alignment research, and more effort to construct paradigms in this preparadigmatic field (but maybe I just haven't looked hard enough).
I am surprised by the claim about the lack of distillation. I would naively have expected that to be more neglected in physics than in alignment. Is there something in particular that you think could be better distilled?
Regarding research that tries to come up with new paradigms, here are a few reasons why you might not be observing much of it. It is less funded by the big labs and is spread across all kinds of orgs and individuals; maybe check MIRI, PIBBSS, ARC (theoretical research), and Conjecture, or check who went to ILIAD. Compared to AI safety researchers at AGI labs, more of these researchers didn't publish all of their research, so you might not have been aware it was going on. Some also actively avoid researching things that could be easily applied and tested, because of capability externalities (I think Vanessa Kosoy mentions this somewhere in the YouTube videos on Infra-Bayesianism).
Over the last year, my median conversation was about as entertaining as yours. The top 10% of conversations are already fun in their own right in the moment, because my brain anticipates some form of long-term value (cracking jokes being the exception). I don't know whether all of those conversations would count as "casual". They are about as intellectually stimulating as the Taskmaster TV show is funny, though conversation is more heavy-tailed than movies. Long-term value includes:
- learning or teaching: picking up some new technical thing that's usually not written down anywhere (podcasts tend to be better for that), getting a pointer to something worth learning about, teaching something technical in the anticipation that the other person will actually do something with that knowledge, or incorporating the generating function behind someone's virtues/wisdom;
- thinking out loud with someone else in the expectation that this might lead to an interesting idea;
- gossip;
- life stories (sometimes protecting you from people or situations that can't be trusted, sometimes just illuminating parts of life you'd otherwise know less about).
My most fun conversation had me grinning for 30 minutes afterwards, and my heart rate after that time was still 10–20 beats per minute higher than usual.
My median conversations at parties over my entire life are probably less entertaining than your median ones. My bar for an interesting conversation also rose when I stumbled upon the wider rationalist sphere. I remember two conversations from before that era where the main important takeaway was essentially just "there are other smart people out there, and you can have interesting conversations with them where you can ask the questions you have, etc.". One was at a networking event for startup founders, and the other was with a Computer Science PhD student showing me his work and the university campus (the same conversation that got my heart rate up).