agucova

I’m a generalist and open sourcerer who does a bit of everything, but perhaps nothing particularly well. I’m also the Co-Director of Kairos, an AI safety fieldbuilding org.

I was previously the AI Safety Group Support Lead at CEA and a Software Engineer in the Worldview Investigations Team at Rethink Priorities.

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)
meemi's Shortform
agucova · 10mo

I was the original commenter on HN, and while my opinion on this particular claim is weaker now, I still think that for OpenAI, a mix of PR considerations, employee discomfort (incl. whistleblower risk), and internal privacy restrictions makes it somewhat unlikely to happen (at least 2:1?).

My opinion has weakened because OpenAI seems to be a mess internally right now, and I can imagine scenarios where management pushes very aggressively and convinces employees to adopt these more "aggressive" tactics.

Understanding Shapley Values with Venn Diagrams
agucova · 1y

Just asked him, will let you know!

Posts (sorted by new)

SPAR Spring ‘26 mentor apps open—now accepting biosecurity, AI welfare, and more! (11 karma, 10d, 0 comments)
Kairos is hiring: Founding Generalist & SPAR Contractor (8 karma, 1mo, 0 comments)
Apply to SPAR Fall 2025—80+ projects! (19 karma, 4mo, 0 comments)
Introducing the Pathfinder Fellowship: Funding and Mentorship for AI Safety Group Organizers (6 karma, 4mo, 0 comments)
Apply to be a mentor in SPAR! (10 karma, 5mo, 0 comments)
Attend SPAR's virtual demo day! (career fair + talks) (9 karma, 7mo, 0 comments)
Interested in working from a new Boston AI Safety Hub? (17 karma, 8mo, 0 comments)
Kairos is hiring a Head of Operations/Founding Generalist (6 karma, 8mo, 0 comments)
Request for Information for a new US AI Action Plan (OSTP RFI) (5 karma, 9mo, 0 comments)
Apply now to SPAR! (11 karma, 11mo, 0 comments)