Émile Torres would be the most well-known person in that camp.
I think @rife is talking either about mutual cooperation between safety advocates and capabilities researchers, or mutual cooperation between humans and AIs.
Pause AI is clearly a central member of Camp B? And Holly signed the superintelligence petition.
If it is a concern that your tool might be symmetric between truth and bullshit, then you should probably not have made the tool in the first place.
I think one can make the stronger claim that the Curry-Howard isomorphism means a superhuman (constructive?) mathematician would near-definitionally be a superhuman (functional?) programmer as well.
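To make the correspondence concrete, here is a minimal sketch in Haskell (my own illustration, not something from the thread): under Curry-Howard, a constructive proof of a proposition is literally a program of the corresponding type, so writing the proof and writing the function are the same activity.

```haskell
-- Proving the intuitionistic tautology ((A -> B) /\ (B -> C)) -> (A -> C)
-- amounts to producing a term of the matching type; the "proof" is just
-- function composition.
compose :: (a -> b, b -> c) -> (a -> c)
compose (f, g) = g . f

-- Likewise, a proof of A /\ B -> B /\ A is the swap on pairs.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
```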
Trying to outline the cruxes:
He was a commenter on Overcoming Bias as @Shane_Legg, received a monetary prize from SIAI for his work, commented on SIAI's strategy on his blog, and took part in the 2010 Singularity Summit, where he and Hassabis were introduced to Thiel, who became DeepMind's first major VC funder (as recounted both by Altman in the tweet mentioned in the OP and in IABIED). I'm not sure this is "being influenced by early Lesswrong" so much as originating in the same memetic milieu – Shane Legg was the one who popularized the term "AGI" and wrote papers like this with Hutter, for example.
IIRC Aella and Grimes got copies in advance and AFAIK haven't written book reviews (at least not in the sense Scott or the press did).
On the flip side, the OpenAI foundation now has the occasion to do the funniest thing.