Hey folks! We're pleased to announce a new open position for a Founding Research Engineer at Harmony Intelligence. You'll be responsible for identifying and measuring dangerous AI capabilities across domains including cybersecurity, biosecurity, persuasion and manipulation, and self-exfiltration and self-replication.

Location:
We're a small but quickly growing remote-first startup, with folks in San Francisco and Sydney.

About Harmony:
We're focused on reducing catastrophic AI risk by building evaluations and conducting red teaming and audits. Though we're still young, we're already making an impact: our first automated red teaming paper was very positively received by the AI community, and we're actively working on model evaluations and other AI safety research. For any further questions, please reach out to our co-founder and CTO Alex Browne. To apply or see more information, go to harmonyintelligence.com/jobs.
