LW logistics thing: I'm annoyed that various replies are downvoted rather than disagree-voted. Even if readers find their tone not up to LW standards, they're an important enough exchange within the context of the original post and these threads that they shouldn't get close to automatically hidden by going negative. Disagree-vote, or comment saying the tone is bad, but keep these positive so future readers can find them easily.
My quick impression is that strong aligned talent is still hard to find, and it's worth increasing the funding now so that there will be talented people to fill the demand coming in a few years. Field-building still matters a lot, especially the high-quality top end of the pipeline, and especially toward governance positions.
Thanks for this work.
In your technical dataset, Apart Research appears twice, with 10 and 40 FTEs listed respectively. Is that intentional? Is it meant to track the core team vs. volunteers participating in hackathons etc.?
Can you say a bit more about how these numbers are estimated? E.g. does one person look at the websites and write down how many they see, estimate from other public info, or directly ask the orgs when possible?
Yes, it was.
I was teaching an AI safety bootcamp during Petrov Day, so I organized the participants into two teams with the ability to nuke each other, plus false alarms. One team did nuke the other. The other didn't respond and was eliminated.
It was nice to take a break from that.
Much appreciated. The page itself doesn't have an upvote button to show recognition, so I strong-upvoted this one here. This is, for me, the best Petrov Day project (though of course made meaningful by all the previous ones), and I had a great moment reading this alone in my hotel room between work hours. May we all be well (and contribute to us being well).
With no shade to John in particular, as this applies to many insular LessWrong topics, I just wanna state that this gives me a feeling of the blind leading the blind. I could believe someone behaves worse in the world after reading this, mostly because it'd push them further into the same overwrought see-everything-through-status frame. I think that's particularly the case here because clothing and status are particularly complex topics that benefit from a wider diversity of frames to think about them in, and require diverse experiences and feedback from many types of communities to generalize well (or to realize just how narrow every "rule" is!).
I'm not saying that John has bad social skills, that this doesn't contain true observations, that someone starting from zero wouldn't become better thanks to this, or that John shouldn't write it. But I do think this is centrally the kind of article one should consider "reverse all advice you read" for, and I'd like to see more community pushback and articles providing more diverse frames on this.
I'm confident I could sensibly elaborate on what's missing/wrong, but in the absence of motivation to, I'll just let this comment stand as an agree/disagree rod for the statement: "We have no clear reason to believe the author is actually good at social skills in diverse environments; they are writing in a seemingly overconfident and insufficiently caveated tone about too complicated a topic without acknowledging that, and are potentially misleading/short-term net negative to at least a fifth of LessWrong readers who are already on the worse side of social skills."
(I originally read "they" to mean the AI agents themselves; it would be cool if Open Phil did answer them directly.)
The link seems broken
Thanks for the followup!
I'm not asking you to engage with Mikhail more; I believe I understand that it's frustrating given your extensive prior conversations, which still led to this post being made.
Nevertheless, I have found all these comments informative, as well as the OP.
The post says Mikhail sent
and that you replied “lol, no" after a week.
I generally don't want to clash with you, as I respect a lot of your public takes, but for the same reasons you're publicly disagreeable, I do think it's worth noting my disagreement here. Unless you were already on colloquial terms with Mikhail, I find it rude to answer "lol no" to that specific request, notably given that it used "please". Even if it was an unreasonable ask, a "sorry, but no" would have sufficed.
At the object level, as a board member of ENAIS and the French Centre for AI Safety, I don't even take Mikhail's message as a surprising or unreasonable ask, unless it's interpreted stringently. Of course, if the formulation were "please make sure to act indistinguishably, even when assessed by a future superintelligence, on this info", then a "lol no" is fine. But if Mikhail sent me this message, I'd interpret it as asking for the 80/20 reasonable effort, and I'd say I broadly agree, though I'll take other stuff into account too. In fact, I overall guess (or want to believe) that you do broadly agree (you prefer for people to tell you stuff, and soft-commit to not indirectly harming people who tell you stuff if you can avoid it), and that you and Mikhail disagree because you're interpreting Mikhail as being dogmatic and strict about his ask (which might actually be the case).
As an interested third party who generally would like to work with LightConeInfra and you, unrelated to Mikhail's specific asks, I'm curious whether you broadly agree to put some non-trivial decision weight on not using info people give you in ways they strongly disagree with, even if they didn't ask you to precommit to that, and even if they were mistaken in some assumptions. (If you later get that info from other places, you're ~released from the first obligations, though this shouldn't be gamed.)