I know this is one of the universal human experiences, but I keep getting unpleasantly reminded of the passage of time.
pleasant "recent" memories are already one or two years ago. they feel recent enough that I still stubbornly believe my recollection is accurate, but in reality they're far enough away from the present day for the sepia tint of nostalgia to creep in and for all the frustrations and sorrows to be forgotten. no wonder it's so hard for the present to compete with the "recent" past.
I sometimes ask myself when I first met one of my "recent" friends, and am startled to realize that I met them 2 or 3 or 4 years ago. "oh yeah, I met him 'recently' at that one party FOUR FUCKING YEARS AGO."
I still can't wrap my mind around the fact that later this year I will have been at openai for 5 years. I first started following ML about 10 years ago, so I will soon have spent more time at openai than I have spent reading openai papers from the outside, and thinking of openai as a faraway citadel in a different universe.
where did all the time go?
Zilis, in an email to Musk about OpenAI (Id., Dkt. 379-45):
Tech:
-Says Dota 5v5 looking better than anticipated.
-The sharp rise in Dota bot performance is apparently causing people internally to worry that the timeline to AGI is sooner than they'd thought before.
-Thinks they are on track to beat Montezuma's Revenge shortly.
ilya's AGI predictions circa 2017 (Musk v. Altman, Dkt. 379-40):
Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots (though no one should pass the Turing test). In as little as four years, each overnight experiment will feasibly use so much compute that there's an actual chance of waking up to AGI, given the right algorithm - and figuring out the algorithm will actually happen within 2-4 further years of experimenting with this compute in a competitive multiagent simulation.
[...]
Each year, we'll need to exponentially increase our hardware spend, but we have reason to believe AGI can ultimately be built with less than $10B in hardware.
fwiw I also enjoy the rain, and I guess I just never cared enough about people thinking it was weird. I do have to admit that when it's raining especially heavily, it does suck a lot (the experience of fully wet clothing is very unpleasant in many ways). but most of the time it's not raining that hard / I'm not going to be in the rain that long.
having the right mental narrative and expectation setting when you do something seems extremely important. the exact same object-level experience can be anywhere from amusing to irritating to deeply traumatic depending on your mental narrative. some examples:
tbc, the optimal narrative is not always the one that is maximally happy with everything. sometimes there are true tradeoffs, and being complacent is bad. but it is often worth shaping the narrative in a way that reduces unnecessary suffering.
a skill which I respect in other people and which I aspire towards is noticing when someone is suffering because a positive narrative was violated or a negative one fulfilled, and comforting them and helping nudge them back into a good narrative.
this is another post about something that is obvious intellectually and yet that I haven't always done right in practice.
there are definitely status ladders within openai, and people definitely care about them. status has this funny tendency where once you've "made it" you realize that you have merely attained table stakes for another status game waiting to be played.
this matters because if being at openai counts as having "made it", then you'd predict that people will stop value drifting once they are already inside and feel secure that they won't be fired, or could easily find a new job if they did. but, in fact, I observe lots of value drift in people after they join the labs just because they are seeking status within the lab.
in ML, or at least in the labs, people don't really care that much about Nature. my strongest positive association with Nature is AlphaGo, and I honestly didn't know of anyone in ML other than DeepMind who published much in Nature (and even there, I had the sense that DeepMind had some kind of backchannel at Nature).
people at openai care firstly about what cool things you've done at openai, and then secondly about what cool things you've published in general (and of course zerothly, how other people they respect perceive you). but people don't really care if it's in neurips or just on arxiv, they only care about whether the paper is cool. people mostly think of publishing things as a hassle, and reviewer quality as low.
it is definitely not a problem with current devices. my phone has gotten quite wet hundreds of times and still works perfectly fine. note that this is different from survivability when fully submerged; my guess is your phone could probably survive being submerged in a pool for a few minutes, but if you left it there for a day it would be dead.