Thanks for this work.
In your technical dataset, Apart Research appears twice, with 10 and 40 FTEs listed respectively. Is that intentional? Is it meant to track the core team vs. volunteers participating in hackathons, etc.?
Can you say a bit more about how these numbers are estimated? E.g., does one person look at the websites and write down how many people they see, do you estimate from other public info, or do you directly ask the orgs when possible?
Yes, it was.
I was teaching an AI safety bootcamp during Petrov Day, so I organized the participants into two teams with the possibility of nuking each other, plus false alarms. One team did nuke the other. The other didn't respond, and was eliminated.
It was nice to take a break from that.
Much appreciated. The page itself doesn't have an upvote button to show recognition, so I strong-upvoted this one here. This is, for me, the best Petrov Day project (though of course made meaningful by all the previous ones), and I had a great moment reading it alone in my hotel room between work hours. May we all be well (and contribute to us being well).
With no shade to John in particular, as this applies to many insular LessWrong topics, I just want to say this gives me a feeling of the blind leading the blind. I could believe someone reading this would behave worse in the world after reading it, mostly because it would push them further into the same overwrought see-everything-through-status frame. I think that's especially the case here because clothing and status are particularly complex: they benefit from a wider diversity of frames to think about them in, and require diverse experiences and feedback from many types of communities to generalize well (or to realize just how narrow every "rule" is!).
I'm not saying John has bad social skills, or that this doesn't contain true observations, or that someone starting from zero wouldn't improve thanks to it, nor that John shouldn't have written it. But I do think this is centrally the kind of article one should apply "reverse all advice you read" to, and I would like to see more community pushback and more articles providing diverse frames on this.
I'm confident I could sensibly elaborate on what's missing or wrong, but in the absence of motivation to do so, I'll just let this comment stand as an agree/disagree rod for the statement: "We have no clear reason to believe the author is actually good at social skills in diverse environments; they are writing in a seemingly overconfident and insufficiently caveated tone about too complicated a topic without acknowledging that, and are potentially misleading or short-term net negative for at least a fifth of LessWrong readers, who are already on the worse side of social skills."
(I originally read "they" as meaning the AI agents themselves. It would be cool if Open Phil did answer them directly.)
The link seems broken.
Thanks for the followup!
o3 was released today, and I was impressed by the demo they gave of working with images, so I wanted to try this out (using the prompt OP linked to in the appendices). I don't have the machining experience to evaluate the answer quickly, so I thought I'd prompt it and share the result in case anyone else wants to evaluate it: https://chatgpt.com/share/67fff70d-7d90-800e-a913-663b82ae7f33
PSA - at least as of March 2024, the way to create a Dialogue is to navigate to someone else's profile and click the "Dialogue" option appearing near the right, next to the option to message them.
My quick impression is that strong aligned talent is still hard to find, and it's worth increasing funding now so that there will be talented people to fill the demand coming in a few years. Field-building still matters a lot, especially a high-quality top end of the pipeline, and especially toward governance positions.