I'm a generalist and open sourcerer who does a bit of everything, though perhaps nothing particularly well. I'm also the Co-Director of Kairos, an AI safety fieldbuilding org.
Previously, I was the AI Safety Group Support Lead at CEA and a Software Engineer on the Worldview Investigations Team at Rethink Priorities.
Just asked him, will let you know!
I was the original commenter on HN, and while my opinion on this particular claim is weaker now, I do think that for OpenAI, a mix of PR considerations, employee discomfort (including whistleblower risk), and internal privacy restrictions makes it somewhat unlikely to happen (odds of at least 2:1 against?).
My opinion has weakened because OpenAI currently seems to be a mess internally, and I can imagine scenarios where management pushes very aggressively and convinces employees to adopt these more "aggressive" tactics.