Vael Gates

Re: "Targeted Outreach to Experienced Researchers"

Please apply to work with the aforementioned AISFB Hub! I am actively trying to hire people who I think would be good fits for this type of role, and I offer mentorship / funding / access to, and models of, the space. Note that you'll need AI safety knowledge (for example, I want you to have read, or have a plan for reading, all of the main readings in the AGISF Technical Curriculum) and high generalist competence, as two of the most important qualifications.

I think most people will not be a good fit for this role (there are more complicated status hierarchies and cultural dynamics among experienced researchers than are visible at first glance), and like Akash I caution against unilateral action here. I'm psyched about meeting people who are good fits, however, and urge you to apply to work with me if you think that could be you!

Some Rudi communication style anecdotes:

  • Rudi: "Aren't you a beautiful young woman!" almost immediately when we saw each other on video call for the first time (I identify as nonbinary) (<-- this anecdote is from a few years ago and from memory, though, so he might have just said something quite similar)
  • Rudi, in a Google Calendar invite note, as a closing: "Let's talk, dear I recall, we liked each other a lot.:)"
    • Me in an email back:

      "(Ah, and just one other quick note: in the Google Calendar invite, you've included the line "Let's talk, dear I recall, we liked each other a lot.:)". This feels like flirting to me, and I'm not sure but imagine you wouldn't include this in emails to men, so I just wanted to state a preference that I'd enjoy if sentences like this weren't included in the future! Many thanks, and looking forward to talking to you in June!)"
    • Rudi back: 

      "Hi Vael, 

      Of course, and thank you for nicely stating your preference, and just for the record I would include a phrase like this with men, women, or non-gendered individuals. (Also for the record, maybe I should re-think this.)  And I still appreciate your observation, and will endeavor to be more circumspect in the future. :)  

      Warm and decidedly professional regards, 

      Rudi :)"

I've similarly heard he doesn't do this with men. He also answered my questions when emailing back and forth. But yeah, be ready!

Seems like it's great to do one-on-ones with people from all sorts of fields who could be interested and skilled, and top researchers in similar fields could be a good group to prioritize! Alas, I feel like the current bottleneck is people who are good fits to do these one-on-ones (I'm looking to hire people, but not currently doing them myself); there are many people I'd ideally want to reach.

Sure! This isn't novel content; the vast majority of it is drawn from existing lists, so it's not even particularly mine. I think just make sure the things within are referenced correctly, and you should be good to go!

With respect to the fact that I don't immediately point people at LessWrong or the Alignment Forum (I actually only very rarely include the "Rationalist" section in the email -- not unless I've decided to bring it up in person and they've reacted positively), there are different philosophies on AI alignment field-building. One of the active disagreements right now is how much we want new people coming into AI alignment to be the type of person who enjoys LessWrong, versus whether it's good to target a broader audience.

I'm personally currently of the opinion that we should be targeting a broader audience, where there's a place for people who want to work in academia or industry separate from the main Rationalist sphere. The people who are drawn towards the Rationalists will find their way there either on their own (I find people tend to do this pretty easily when they start Googling) or with my nudging, if they seem to be that kind of person.

I don't think this is much "shying away from reality" -- it feels more like engaging with it, trying to figure out if and how we want AI alignment research to grow, and how to best make that happen given the different types of people with different motivations involved.

A great point, thanks! I've just edited the "There's also a growing community working on AI alignment" section to include MIRI, and also edited some of the academics' names and links.

I don't think it makes sense for me to list Eliezer's name in the part of that section where I'm listing names, since I'm only listing some subset of academics who (vaguely gesturing at a cluster) are sort of actively publishing in academia, mostly tenure-track and actively recruiting students, and interested in academic field-building. I'm not currently listing names of researchers in industry or non-profits (e.g. I don't list Paul Christiano, or Chris Olah), though that might be a thing to do.

Note that I didn't choose this list of names very carefully, so I'm happy to take suggestions! This doc came about because I had an email draft that I was haphazardly adding things to as I talked to researchers and needed to promptly send them resources, getting gradually refined when I spotted issues. I thus consider it a work-in-progress and appreciate suggestions. 

I've been finding that "A Bird's Eye View of the ML Field [Pragmatic AI Safety #2]" has a lot of content that would likely be interesting to the audience reading these transcripts. For example, the incentives section rhymes with the types of things interviewees would sometimes say. I think the post generally captures and analyzes a lot of the flavor of these conversations, and contextualizes what it was like to talk to researchers.

It was formatted based on the typical academic template of "I am conducting a survey on X, $Y for Z time", and notably didn't mention AI safety. The intro was basically this:

My name is Vael Gates, and I’m a postdoctoral fellow at Stanford studying how productive and active AI researchers (based on submissions to major conferences) perceive AI and the future of the field. For example:

- What do you think are the largest benefits and risks of AI?

- If you could change your colleagues’ perception of AI, what attitudes/beliefs would you want them to have?

My response rate was generally very low, which biased the sample towards... friendly, sociable people who wanted to talk about their work and/or help out and/or wanted money, and had time. Off the top of my head, the response rate was usually <5% for the NeurIPS / ICML sample. I didn't A/B test the email. I also offered more money for this study than for the main academic study, and expect I wouldn't have been able to talk to the individually-selected researchers without the money component.
