Apply at the link.

We're expanding our red team, whose job is to try to break our LLMs and make them exhibit unexpected and unsafe behaviors. Note that one of the requirements for this specific position is a PhD in linguistics. If you have prior red teaming experience (professional or personal), even better :) If you do apply at the link, please send me a message on here as well.

If you do not meet the PhD requirement but think you'd be a good fit for the red team (e.g. you've discovered new jailbreaks or adversarial techniques, you're curious and have a hacker mindset, you've done red teaming before, etc.), you can still message me on here so we can keep you in mind if other red teaming positions without the PhD qualification open up in the future.
