For me personally (and I've heard other math people share this sentiment) the only way to understand a new area is to largely build it up in my own way, using the literature as a guide. Then depth is improved each time it connects to something else I've built up an understanding of. Otherwise depth decays over time (but is easier to rebuild if I wrote my own notes).
I also agree with the idea that deeply understanding something is not merely a consequence of being able to derive it. Sometimes derivations (especially those with too much algebra, or via induction/contradiction) feel incomplete. Sometimes seeing two derivations of the same thing makes it all fit together.
This general phenomenon is something I'd like to understand better as well.
Okay, it's been 6 months.
From early 2019 to April 2025 I had chronic pain in my right glute medius that would (starting in 2021) extend into my whole back every ~2 months and become so bad that I couldn't move at all at night and could move only with great pain during the day.
I tried a lot of reasonable interventions. I did a lot to strengthen the glutes and glute medius, but the flare-ups would still come (with less fury). I started seeing a chiropractor who suggested putting lifts in my left shoe, which also helped and seemed like the correct intervention, since if the lift was too high I'd get pain in my left glute medius. The flare-ups would still come though, and I figured they would just be part of my life.
I can't believe this worked.
Focusmate has been an absolute game-changer for effectively using my time after work over the last two weeks. Thank you for posting this.
Gonna be in Berkeley on the 14th and Princeton on the 16th :')
Discussions about possible economic futures should account for the (imo high) possibility that everyone might have inexpensive access to sufficient intelligence to accomplish basically any task they would need intelligence for. There are some exceptions like quant trading, where you have a use case for arbitrarily high intelligence, but for most businesses the marginal gains from SOTA intelligence won't be so high. I'd imagine that raw human intelligence just becomes less valuable (as it has been for most of human history; I guess this time is worse because many businesses would also not need employees for physical tasks, but the point is that many such non-tech businesses might be fine).
Separately: Is AI safety at all feasible to tackle in the likely scenario that many people will be able to build extremely powerful but non-SOTA AI without safety mechanisms in place? Will the hope be that a strong enough gap exists between aligned AI and everyone else's non-aligned AI?
I would be very surprised if this FVU_B is actually another definition and not a bug. It's not a fraction of the variance, and those denominators can easily be zero or very near zero.
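A minimal sketch of the two quantities as I understand them (the names, tensor shapes, and exact normalizations here are my assumptions, not the repo's actual code):

```python
import torch

# x, x_hat: [n_tokens, d_model] activations and their reconstructions.

def fvu_a(x: torch.Tensor, x_hat: torch.Tensor) -> torch.Tensor:
    """One global ratio: total squared error over total variance.
    This is the standard fraction of variance unexplained."""
    return ((x - x_hat) ** 2).sum() / ((x - x.mean(0)) ** 2).sum()

def fvu_b(x: torch.Tensor, x_hat: torch.Tensor) -> torch.Tensor:
    """Per-token ratio, then averaged. Each token's denominator is the
    variance across features of that single row, which can be ~0 for a
    near-constant activation vector and blow up the whole mean."""
    per_token = ((x - x_hat) ** 2).mean(-1) / x.var(-1)
    return per_token.mean()
```

Note that fvu_b isn't a fraction of anything: the per-token terms are unbounded and don't sum to any variance decomposition.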
Not worth worrying about given the context of imminent ASI.
This is something that confuses me as well: why do a lot of people in these circles seem to care about the fertility crisis while also believing that ASI is coming very soon?
In both optimistic and pessimistic scenarios about what a post-ASI world looks like, I'm struggling to see a future where the fact that people in the 2020s had relatively few babies matters.
If this actually hasn't been explored, this is a really cool idea! So you want to learn a function (Player 1, Player 2, position) -> (probability Player 1 wins, probability of a draw)? Sounds like there are a lot of naive architectures to try, and you have a ton of data since professional chess players play a lot of games.
Some random ideas:
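To make the function concrete, here's one naive baseline (a sketch only; the player-embedding idea, the sizes, and the board encoding are all my assumptions):

```python
import torch
import torch.nn as nn

class PlayersPlusPosition(nn.Module):
    """Map (player 1 ID, player 2 ID, board) to logits over
    (P1 wins, draw, P2 wins); a softmax gives the probabilities."""
    def __init__(self, n_players: int, d: int = 64):
        super().__init__()
        self.player = nn.Embedding(n_players, d)  # learned strength/style vector
        self.board = nn.Linear(8 * 8 * 12, d)     # flattened piece-plane encoding
        self.head = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, 3))

    def forward(self, p1, p2, board):
        h = torch.cat([self.player(p1), self.player(p2), self.board(board)], dim=-1)
        return self.head(h)  # train with cross-entropy on recorded game outcomes
```

With enough games per player, the embeddings should at least recover something Elo-like, and the board branch lets the prediction update as the game progresses.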
This whole thing about "I would give my life for two brothers or eight cousins" is just nonsense formed by taking a single concept way too far. Blood relation matters but it isn't everything. People care about their adopted children and close unrelated friends.
Why naive determinism is suspect
I've long been fascinated by how Bell tests "rule out" hidden variables, but I'm never able to explain it in casual conversation because it takes me personally a long time to digest the full logic. I've seen Scott Aaronson's setup (done in more detail here), but it takes some time to fully believe the upper bound on a deterministic strategy's success, especially when it's arguing for something potentially hard to believe.
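For the hard-to-believe bound specifically, it helps me that the classical bound in the CHSH game can be checked exhaustively. A minimal sketch (this assumes the standard CHSH win condition, a XOR b = x AND y, which I believe matches Aaronson's setup):

```python
from itertools import product

# CHSH game: a referee sends random bits x (to Alice) and y (to Bob);
# they answer bits a and b without communicating, winning iff a XOR b == x AND y.
# A deterministic strategy is just a pair of functions {0,1} -> {0,1},
# so there are only 4 * 4 = 16 strategies to check.
best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum(
        ((a1 if x else a0) ^ (b1 if y else b0)) == (x & y)
        for x, y in product([0, 1], repeat=2)
    )
    best = max(best, wins / 4)

print(best)  # 0.75 -- no deterministic strategy wins more than 3/4 of the time
```

Shared randomness can't help either, since any mixed strategy is just an average over these 16, while the quantum strategy wins with probability cos²(π/8) ≈ 0.854.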
I really like the explanation given in the "Local Hidden Variables" section of this article. I think the full setup can fit in one's head and one can just point at the picture instead of needing to write down any math.