Jonathan Claybrough

Software engineer who transitioned into AI safety, teaching, and strategy. Particularly interested in psychology, game theory, system design, and economics.

Comments

Reasons to sell frontier lab equity to donate now rather than later
Jonathan Claybrough · 16d

My quick impression is that strong aligned talent is still hard to find, and it's worth increasing funding now so that there will be talented people to meet the demand coming in a few years. Field-building still matters a lot, especially high-quality work at the top of the pipeline, and especially towards governance positions.

AI Safety Field Growth Analysis 2025
Jonathan Claybrough · 16d

Thanks for this work.
In your technical dataset, Apart Research appears twice, with 10 and 40 FTEs listed respectively. Is that intentional? Is it meant to track the core team vs. the volunteers participating in hackathons, etc.?
Can you say a bit more about how these numbers are estimated? E.g. does one person look at the websites and write down how many people they see, do you estimate from other public info, or do you ask the orgs directly when possible?

The Illustrated Petrov Day Ceremony
Jonathan Claybrough · 16d

Yes, it was.
I was teaching an AI safety bootcamp during Petrov Day, so I organized the participants into two teams with the possibility of nuking each other, plus false alarms. One team did nuke the other. The other didn't respond and was eliminated.
It was nice to take a break from that.

The Illustrated Petrov Day Ceremony
Jonathan Claybrough · 19d

Much appreciated. The page itself doesn't have an upvote button to show recognition, so I strong-upvoted this one here. This is for me the best Petrov Day project (though of course made meaningful by all the previous ones), and I had a great moment reading this alone in my hotel room between work hours. May we all be well (and contribute to us being well).

How To Dress To Improve Your Epistemics
Jonathan Claybrough · 1mo

With no shade to John in particular, as this applies to many insular LessWrong topics, I just want to state that this gives me a feeling of the blind leading the blind. I could believe someone behaves worse in the world after reading this, mostly because it would push them further into the same overwrought see-everything-through-status frame. I think that's especially the case here because clothing and status are particularly complex, benefit from a wider diversity of frames to think about them in, and require diverse experiences and feedback from many types of communities to generalize well (or to realize just how narrow every "rule" is!).

I'm not saying John has bad social skills, or that this doesn't contain true observations, or that someone starting from zero wouldn't become better thanks to this, nor that John shouldn't write it. But I do think this is centrally the kind of article one should consider "reverse all advice you read" for, and I would like to see more community pushback and more articles providing diverse frames on this.

I'm confident I could sensibly elaborate on what's missing or wrong, but in the absence of motivation to, I'll just let this comment stand as an agree/disagree rod for the statement: "We have no clear reason to believe the author is actually good at social skills in diverse environments; they are writing in a seemingly overconfident and insufficiently caveated tone about too complicated a topic without acknowledging that, and are potentially misleading/short-term net negative to at least a fifth of LessWrong readers, who are already on the worse side of social skills."

My pitch for the AI Village
Jonathan Claybrough · 4mo

(I originally read the "they" to mean the AI agents themselves; it would be cool if Open Phil did answer them directly.)

The Best Reference Works for Every Subject
Jonathan Claybrough · 5mo

The link seems broken.

Frontier AI Models Still Fail at Basic Physical Tasks: A Manufacturing Case Study
Jonathan Claybrough · 6mo

Thanks for the follow-up!

Frontier AI Models Still Fail at Basic Physical Tasks: A Manufacturing Case Study
Jonathan Claybrough · 6mo

o3 was released today, and I was impressed by the image-handling demo they gave, so I wanted to try this out (using the prompt the OP linked to in the appendices). I don't have the machining experience to evaluate the answer quickly, so I thought I'd prompt it and share its result in case anyone else wants to evaluate it: https://chatgpt.com/share/67fff70d-7d90-800e-a913-663b82ae7f33

Announcing Dialogues
Jonathan Claybrough · 7mo

PSA: at least as of March 2024, the way to create a Dialogue is to navigate to someone else's profile and click the "Dialogue" option that appears near the right, next to the option to message them.

Posts

130 · The case for training frontier AIs on Sumerian-only corpus (Ω) · 2y · 16
3 · Jonathan Claybrough's Shortform · 2y · 35
65 · News: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI · 2y · 10
20 · An Overview of AI risks - the Flyer · 2y · 0